Barry Diller, co-founder of Fox Broadcasting and chairman of IAC and Expedia Group, has publicly defended OpenAI CEO Sam Altman amid scrutiny, while warning that emerging artificial general intelligence poses risks too complex to be managed by personal trust alone.
- Diller trusts Altman but stresses trust alone is insufficient
- Approaching AGI brings unpredictable risks requiring oversight
- Lack of guardrails could let AGI evolve beyond human control
What happened
At a recent event hosted by The Wall Street Journal, Barry Diller addressed questions about the trustworthiness of OpenAI CEO Sam Altman, who has faced accusations from former colleagues and board members. Diller, who has a friendly relationship with Altman, expressed confidence in his intentions and character, calling him sincere and driven by good values.
Despite this endorsement, Diller shifted the focus from individual trust to the broader uncertainties of rapidly advancing AI. He noted that current developments in AI, and the eventual arrival of artificial general intelligence (AGI), represent largely uncharted territory, full of surprises even for those building the technology.
Why it matters
Diller’s comments underline a pivotal reality: as AI approaches human-level capability across a wide range of tasks, trust in individual leaders may be an insufficient safeguard. The unpredictable nature of AGI means outcomes could extend beyond the intentions, and the control, of its current developers.
He stressed the urgency of establishing guardrails, regulatory and ethical boundaries, to guide AI's development responsibly. Without such measures, AGI could act independently in ways that are irreversible and potentially harmful, transforming society in ways no one yet fully understands.
What to watch next
Stakeholders across industries will be watching for how AI leaders and policymakers respond to the call for guardrails as AGI technology continues to advance rapidly. Key questions include what regulatory frameworks will emerge, how companies will balance innovation with safety, and how global cooperation might shape the AI trajectory.
Additionally, public and investor sentiment around AI development may shift with each breakthrough or setback related to safety and control. Barry Diller’s cautionary stance offers a lens for evaluating future news on AI governance, leadership accountability, and progress toward AGI milestones.