Sam Altman at TED 2025: Inside the most uncomfortable — and important — AI interview of the year

OpenAI CEO Sam Altman said the company has reached 800 million weekly active users and is growing at an unprecedented rate, revealing the figures during a tense interview at the TED 2025 conference in Vancouver.

“I have never witnessed growth like this in any company, whether I’ve been involved with it or not,” Altman shared with TED head Chris Anderson during their discussion. “The growth of ChatGPT is truly remarkable. It’s both exhilarating and overwhelming. Our teams are working tirelessly and feeling the pressure.”

The interview at TED 2025: Humanity Reimagined not only highlighted OpenAI’s rapid success but also shed light on the increased scrutiny the company faces as its technology continues to reshape society at an alarming pace.

‘Our GPUs are melting’: OpenAI faces challenges in scaling amidst unprecedented demand

Altman described the struggle of a company trying to keep up with its own success, noting that OpenAI’s GPUs are under immense strain due to the popularity of its new image generation features. “I spend my days calling people and pleading for their GPUs. We are severely constrained,” he remarked.

As OpenAI experiences exponential growth, reports suggest that the company is contemplating launching its own social network to rival Elon Musk’s X, though Altman neither confirmed nor denied these speculations during the TED interview.

With a recent $40 billion funding round that valued the company at $300 billion, OpenAI is poised to address some of its infrastructure challenges with the influx of capital.

From non-profit to $300 billion giant: Altman addresses ‘Ring of Power’ accusations

During the conversation, Anderson pressed Altman on OpenAI’s transition from a non-profit research lab to a for-profit entity valued at $300 billion. Critics, including Elon Musk, have raised concerns about Altman being “corrupted by the Ring of Power,” referencing “The Lord of the Rings.”

Altman defended OpenAI’s evolution: “Our aim is to develop AGI and disseminate it, ensuring its safety for the benefit of humanity at large. I believe we have made significant strides in that direction. Our strategies have evolved over time… We didn’t anticipate having to build a company around this. We’ve gained valuable insights into the capital requirements of these systems.”

When questioned about handling the immense power he now holds, Altman remarked, “Surprisingly, I approach it the same way as before. You can adapt to anything gradually… You remain the same person. I’m sure I’ve changed in various ways, but I don’t feel any different.”

‘Divvying up revenue’: OpenAI intends to compensate artists whose styles are utilized by AI

Altman revealed a significant policy development during the interview, acknowledging that OpenAI is devising a system to remunerate artists whose styles are replicated by AI.

“There are exciting new business models that we and others are eager to explore,” Altman stated when addressing concerns about apparent IP infringement in AI-generated images. He suggested a revenue-sharing model for artists whose styles are emulated by AI, although specifics are scarce at this point.

While OpenAI’s image generator currently declines requests to mimic the style of living artists without consent, it can generate art inspired by movements, genres, or studios. Altman hinted at a potential revenue-sharing arrangement in the future.

Autonomous AI agents: The ‘most consequential safety challenge’ for OpenAI

The discussion turned tense when addressing “agentic AI” – autonomous systems capable of taking actions on behalf of users on the internet. OpenAI’s new tool, “Operator,” enables AI to perform tasks like booking reservations, raising concerns about safety and accountability.

Anderson challenged Altman to set clear boundaries against potential misuse of autonomous agents. Altman pointed to OpenAI’s “preparedness framework” but offered few specifics on how the company plans to prevent such abuse.

“When you grant AI access to your systems, information, and the ability to navigate your computer… the stakes are much higher when mistakes occur,” Altman acknowledged. “Users will not utilize our agents if they cannot trust that their bank accounts won’t be drained or their data won’t be erased.”

’14 definitions from 10 researchers’: Inside OpenAI’s challenge to define AGI

Altman admitted that even within OpenAI, there is no consensus on what constitutes artificial general intelligence (AGI) – the company’s stated ultimate goal.

“It’s like the joke: if you put 10 OpenAI researchers in a room and asked them to define AGI, you’d get 14 definitions,” Altman quipped.

He proposed focusing on the continuous improvement of models rather than fixating on a specific moment when AGI emerges. Altman emphasized the need to adapt and harness the benefits of increasingly sophisticated AI systems.

Loosening the guardrails: OpenAI’s revised approach to content moderation

Altman disclosed a notable shift in OpenAI’s content moderation policy, indicating that the company has relaxed restrictions on its image generation models.

“Users now have more liberty in terms of what we traditionally consider as speech harms,” Altman explained. “I believe part of model alignment involves adhering to the user’s intent for the model within the broad boundaries defined by society.”

This change may signal a broader trend toward giving users more control over AI outputs, consistent with Altman’s stated preference for letting users, rather than experts alone, set the boundaries.

“One of the fascinating aspects of AI is that our AI can interact with every individual on Earth, enabling us to understand the collective value preferences of society rather than relying on a select few to make decisions,” Altman highlighted.

‘My kid will never be smarter than AI’: Altman’s vision of an AI-powered future

The interview concluded with Altman contemplating the world his newborn son will inhabit – a world where AI surpasses human intelligence.

“My child will never outsmart AI. They will grow up in a world where products and services are incredibly intelligent and capable,” Altman envisioned. “It will be a realm of abundant resources… where change occurs rapidly, unveiling remarkable innovations.”

Anderson concluded with a thought-provoking remark: “In the upcoming years, you will face some of the most significant opportunities, moral dilemmas, and decisions of any individual in history.”

The billion-user balancing act: How OpenAI manages power, profit, and purpose

Altman’s appearance at TED comes at a pivotal moment for OpenAI and the broader AI sector. The company confronts legal challenges, including copyright disputes with authors and publishers, while pushing the boundaries of AI capabilities.

Recent breakthroughs like ChatGPT’s viral image generation feature and Sora’s video generation tool showcase capabilities that were once deemed unattainable. However, these advancements have sparked discussions on copyright, authenticity, and the future of creative work.

Altman’s readiness to address challenging issues regarding safety, ethics, and societal implications of AI indicates an awareness of the high stakes involved. Nonetheless, critics may note the absence of concrete solutions on specific safeguards and policies throughout the conversation.

The interview exposed the conflicting priorities at the core of OpenAI’s mission: advancing AI technology swiftly while ensuring safety; balancing profit motives with societal welfare; respecting creative rights while democratizing creative tools; and navigating between expert insights and public preferences.

As Anderson highlighted in his closing remarks, the decisions Altman and his colleagues make in the coming years could shape humanity’s future in unprecedented ways. Whether OpenAI can uphold its mission of ensuring “all of humanity benefits from artificial general intelligence” remains to be seen.