This write-up is based on Eric Schmidt's TED Talk from May 2025.
Framing in communication comes down to choice.
You decide what to highlight and how to highlight it.
You might avoid certain facts. You might focus only on the upside.
In his TED Talk, Eric Schmidt, the former CEO of Google, discusses artificial intelligence.
In setting a tone and selecting details, what he doesn’t say is just as important as what he does.
While his talk includes serious concerns about the global race to dominate AI, the way he frames those concerns, and what he puts front and center, shapes your perception and mine from the start.
Let’s take a look.
The Moment the Revolution Began
“In 2016 we didn’t understand what was now going to happen… there was a new move invented by AI in a game that had been around for 2,500 years that no one had ever seen technically… What does this mean? How is it that our computers could come up with something that humans had never thought about?”
Schmidt opens with a story. Not an abstract idea or a bold prediction—but a specific, surprising moment: AlphaGo, an AI system, made a move in the game of Go that no human had ever made in over two millennia of play.
It shocked even the world’s best players because it was brilliant. It redefined how the game could be played.
It’s an effective Story/Narrative Frame, but it does more than just orient the audience.
Schmidt uses it to quietly establish authority. He was there. He saw the implications. He asked the right questions.
That’s an Authority Frame, and he becomes someone who saw it coming.
He also uses Contrast:
Billions of human minds across centuries missed what a machine saw instantly.
Humans lose exclusivity on intuition. He doesn’t need to say “AI is smarter than us” because the frame says it for him.
But It’s What He Doesn’t Say That Matters, Too
Schmidt never pauses to ask whether this new intelligence might have unintended consequences.
That absence is also a choice.
Because while AI may seem like a superpower, it quietly reshapes how people think, and what they think about.
Consider what can happen when AI is fully functional and widely adopted by business:
- Critical thinking declines
- Creativity flattens
- Job losses mount, and competition for the remaining jobs intensifies
- Dependency deepens
- Power concentrates in a few companies
- Manipulation becomes indistinguishable from communication
These are serious implications of AI, yet none of them appear in Schmidt's framing.
He never asks what happens to the human mind or our jobs in a world where machines outthink us—and where we stop trying.
The Preemptive Strike Hypothetical
“If you get there first, you dastardly person, you’re never going to be able to catch me… What’s my next choice? Bomb your data center.”
This is the second major frame in Schmidt's talk.
After setting AI up as a revolutionary force, Schmidt shifts tone. Suddenly, it’s not about productivity or progress. It’s about power. And threat.
He tells a story. A hypothetical story about the competitive race and how it could turn violent.
“You’re the good guy. I’m the bad guy. You’re six months ahead of me in AI development. I know I can’t catch up. So what do I do? I try to sabotage you. And if that fails… I bomb your data center.”
This is a Slippery Slope:
- AI gives one country an advantage
- That advantage creates panic
- Panic justifies extreme action
- Action leads to war
But here’s where the framing turns. Schmidt doesn’t call for a pause.
Instead, he lands on the Settlement Frame: guardrails, not shutdowns.
He’s not advocating caution. He’s advocating containment.
Closed models. Limited access. Centralized oversight. Not to prevent progress—but to keep it out of the wrong hands.
And that control? It belongs to those who “understand it best.”
Which, implicitly, includes him.
This isn’t just a warning. It’s a repositioning of the AI conversation—from open innovation to security and restriction.
The Agent Analogy
“So let’s think about agents… Now one of you switches to a different language, and we don’t know what you’re doing. What’s the right thing to do? Unplug you.”
This is Schmidt’s most effective analogy—and one of the most powerful framing moves in the entire talk.
The topic is autonomy. Yoshua Bengio has suggested labs should pause work on agentic AI. Schmidt responds with a human-scale story:
You’re an agent. Input, output, memory.
One day, you stop using English. You invent your own language.
We can’t understand you anymore. What do we do? Unplug you.
This is a Metaphor Frame and a Narrative Frame—and it works.
It makes a complicated topic feel simple, even obvious.
But “unplug” isn’t a benign word. It’s a placeholder for deactivation. Shutdown. Preemptive destruction.
He then outlines the threats that justify it:
- Recursive self-improvement
- Direct access to weapons
- Reproducing without permission
Each risk builds fear. This is a Negative Frame with a gentle delivery.
And once again, we land at the Settlement Frame: “Stopping doesn’t work in a globally competitive market.”
The implication: if we can’t stop agentic AI, we must control it—fully.
Not as ethicists. As owners.
What’s Left Unquestioned?
- Why does global competition justify building something we can’t control?
- Why is containment the only alternative to progress at any cost?
- Who decides when to unplug—and who gets unplugged?
Schmidt doesn’t frame those questions. He doesn’t have to.
Because once the analogy is inside you—once you see yourself needing to unplug someone—it’s hard to ask anything else.
That’s framing at its most subtle—and most effective.
What Gets Framed, and What Gets Left Out
This analysis doesn’t cover every line of Schmidt’s talk. It focuses on three pivotal moments—the framing shifts that carry the most weight:
- The origin story that positions AI as a breakthrough beyond human comprehension
- The hypothetical standoff that turns AI into a geopolitical weapon
- The agent analogy that makes control sound humane, inevitable, and necessary
Each of these moments reshapes the audience’s perception—not through argument, but through emphasis, contrast, and omission.
And in choosing to highlight only those moments, I’ve done the same. That’s framing, too.
The danger isn’t just in how leaders talk about AI.
It’s in how easily we absorb their framing—without noticing what’s missing.