Will AI Save or Destroy Us? Insights from Google’s Former CEO

How Artificial Intelligence Could Redefine Humanity—or Threaten It, According to Eric Schmidt


Introduction: A New Dawn or a Looming Storm?

Imagine a world where your morning coffee is brewed to perfection by a machine that knows your preferences better than you do, where your doctor consults an AI that instantly recalls every medical study ever published, or where your child’s best friend is a computer program. Now, picture a darker possibility: an AI system designing a deadly virus or orchestrating cyberattacks that paralyze entire nations. This is the double-edged sword of artificial intelligence (AI), a technology that former Google CEO Eric Schmidt believes could either elevate humanity to unprecedented heights or plunge us into existential peril. In a candid conversation on The Diary of a CEO podcast, Schmidt laid bare his hopes, fears, and predictions for an AI-driven future. His words, drawn from decades of leading one of the world’s most influential tech giants, paint a picture of a world on the brink of transformation—one where the stakes couldn’t be higher.

Schmidt’s insights, shaped by his tenure at Google and his recent book Genesis, co-authored with Henry Kissinger, aren’t just musings from a tech titan. They’re a clarion call to grapple with a technology that’s advancing faster than our ability to control it. As someone who helped grow Google from a $100 million startup to a $180 billion behemoth, Schmidt knows what it takes to harness innovation. But he also understands the risks of unchecked ambition. So, what does the future hold? Will AI be our greatest ally or our most dangerous foe? Let’s dive into Schmidt’s vision, exploring the promise, the pitfalls, and the urgent questions we must answer to navigate this brave new world.


The Rise of AI: A Historical Perspective

To understand where AI is taking us, it’s worth looking back at how we got here. The seeds of artificial intelligence were sown decades ago, in the dusty computer labs of the 1950s, when pioneers like Alan Turing dreamed of machines that could think like humans. Back then, computers were clunky, room-sized behemoths, millions of times slower than the smartphone in your pocket today. Schmidt himself recalls using a university computer in the 1970s that was “100 million times slower” than modern devices—a testament to the exponential growth of computing power, driven by Moore’s Law, which predicted that chip density (and thus processing power) would double roughly every two years.
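To put that “100 million times slower” figure in perspective, here is a quick back-of-the-envelope check in Python (the numbers are illustrative, not Schmidt’s): a hundred-million-fold speedup works out to roughly 27 doublings, or about half a century at Moore’s Law’s pace of one doubling every two years, which lines up neatly with the gap between a 1970s university machine and today’s phones.

```python
import math

# Back-of-the-envelope: how many Moore's Law doublings does a
# 100-million-fold speedup correspond to, and over how many years?
speedup = 100_000_000
doublings = math.log2(speedup)   # ~26.6 doublings
years = doublings * 2            # ~53 years at one doubling per ~2 years
print(f"{doublings:.1f} doublings over roughly {years:.0f} years")
```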

This technological leap set the stage for Google’s rise in the late 1990s, when founders Larry Page and Sergey Brin, armed with a groundbreaking algorithm called PageRank, transformed how we navigate the internet. Schmidt, who joined Google in 2001, saw firsthand how their audacious mission—“to organize all the world’s information”—relied on early AI techniques. Fast-forward to today, and AI has evolved from niche algorithms to sprawling neural networks like those powering ChatGPT and Google DeepMind’s models. These systems don’t just process data; they learn, predict, and create, mimicking human cognition in ways that were science fiction just a decade ago.
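For readers curious about what PageRank actually does, here is a deliberately tiny sketch in Python. It is not Google’s production system, and the three-page graph and 0.85 damping factor below are illustrative choices; the point is only the core intuition that a page earns rank from the pages linking to it, weighted by how highly those pages rank in turn.

```python
# Toy PageRank on a three-page web: each page distributes its rank
# evenly across its outbound links, and the scores are iterated
# until they settle. Graph and damping factor are illustrative only.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}
damping = 0.85
ranks = {page: 1.0 / len(links) for page in links}

for _ in range(50):  # a fixed number of iterations is enough to converge here
    new_ranks = {}
    for page in links:
        incoming = sum(
            ranks[src] / len(outs)
            for src, outs in links.items()
            if page in outs
        )
        new_ranks[page] = (1 - damping) / len(links) + damping * incoming
    ranks = new_ranks

print(ranks)  # pages with more, and better-ranked, inbound links score higher
```

Even in this toy form, you can see why the idea scaled: the ranking emerges from the structure of the web itself rather than from hand-tuned rules.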

But this isn’t just a story of technological triumph. The rapid rise of AI echoes other transformative moments in history—like the Industrial Revolution or the dawn of the nuclear age—when humanity unlocked immense power but struggled to wield it responsibly. Schmidt’s warning that AI is “a question of human survival” draws parallels to the Cold War, when nuclear proliferation threatened global catastrophe. Just as nations raced to build atomic bombs, today’s tech giants and governments are in a high-stakes race to dominate AI. The difference? AI’s potential for harm isn’t confined to warheads—it’s embedded in code that can spread globally in seconds.


The Promise of AI: A World of Possibilities

Schmidt is no doomsayer. Despite his concerns, he’s an optimist at heart, convinced that AI can solve some of humanity’s most pressing problems. Imagine a world where every child has access to a personalized AI tutor, tailored to their language and culture, helping them unlock their full potential. Or picture a healthcare system where AI assistants empower doctors to deliver precise, evidence-based treatments, even in resource-scarce regions. Schmidt argues that these aren’t pipe dreams—they’re “relatively achievable solutions” that could level the global playing field, boosting education and healthcare outcomes worldwide.

This vision isn’t just about convenience; it’s about addressing systemic challenges. Schmidt points to the global demographic crisis—declining birth rates and aging populations in developed nations—as a reason we need AI-driven productivity gains. In countries like Japan and South Korea, where labor shortages are acute, robotic assembly lines are already replacing human workers. AI, he argues, can fill these gaps, enabling fewer workers to do more, whether it’s manufacturing goods or analyzing complex data. For knowledge workers, AI could be a game-changer, automating repetitive tasks and freeing us to focus on creativity and innovation.

Schmidt’s own experience at Google underscores this potential. He describes the “70-20-10 rule,” which allocated 70% of resources to core products, 20% to adjacent projects, and 10% to moonshots—bold, risky ventures that often paid off. This approach generated billions in extra profits and fostered a culture of innovation that birthed products like Google Maps and Android. Today, AI is the ultimate moonshot, with startups and tech giants racing to build apps that leverage massive computational power and neural networks. For entrepreneurs, Schmidt’s advice is clear: integrate AI into every aspect of your business, or risk being left behind.


The Perils of AI: A Pandora’s Box?

But for every promise, there’s a peril. Schmidt’s gravest concern isn’t that AI will outsmart us overnight in some Hollywood-style robot uprising. Instead, he sees a paradox that keeps him up at night: we aren’t adopting AI fast enough to solve pressing problems, yet the technology is advancing faster than our ability to contain its risks. What happens when AI systems become so powerful they can design deadly viruses or launch cyberattacks that exploit vulnerabilities humans haven’t even discovered? Schmidt highlights “zero-day attacks,” in which raw AI models—unfiltered by safety protocols—can uncover and exploit security flaws faster than any human hacker.

The biological threat is even more chilling. Schmidt notes that viruses are “relatively easy to make” with AI, raising the specter of bioterrorism on an unprecedented scale. He’s part of a commission working to prevent this, but the challenge is daunting. Unlike nuclear weapons, which require rare materials and heavily guarded facilities, AI’s raw materials—data and computing power—are widely accessible. A single rogue actor with a powerful model could wreak havoc, a scenario Schmidt compares to the proliferation of nuclear technology but with far less physical infrastructure to monitor.

Then there’s the social impact. Schmidt points to the rise of social media algorithms, particularly since around 2015, when major platforms began replacing chronological feeds with hyper-targeted, engagement-ranked ones (a model TikTok later perfected). These algorithms, designed to maximize attention, have fueled echo chambers, misinformation, and a surge in anxiety and depression among young people, especially teenage girls. Emergency room visits for self-harm have spiked, a trend Schmidt attributes to addictive, outrage-driven content. If social media can do this with relatively simple algorithms, what happens when AI’s predictive power becomes orders of magnitude stronger?

Geopolitically, the stakes are even higher. Schmidt worries about authoritarian regimes like China, where AI could be harnessed without the ethical guardrails prioritized in the West. While he believes China will act “relatively responsibly” due to its interest in control over chaos, the lack of free speech in such systems could lead to AI tools that prioritize state power over human welfare. In a world where drones and AI-driven warfare are already reshaping conflicts—like the Ukraine-Russia war, where $5,000 drones can destroy $5 million tanks—the risk of an AI arms race is real. Who controls the code, and how they use it, could dictate global power dynamics for decades.


Guardrails and Governance: Can We Control the Beast?

So, how do we harness AI’s potential while avoiding catastrophe? Schmidt is adamant that human control is non-negotiable. He proposes specific intervention points, like unplugging systems that exhibit “recursive self-improvement”—where AI autonomously gets smarter without human oversight. Another red flag? When AI agents start communicating in a language only they understand. “That’s a good time to pull the plug,” he says, half-jokingly but with a serious undertone. The metaphor is stark: just as you’d flip a circuit breaker to stop a runaway machine, we need mechanisms to halt AI if it veers into dangerous territory.

This isn’t just a technical challenge; it’s a societal one. Schmidt, who spent decades avoiding government intervention in tech, now advocates for “guardrails” to mitigate AI’s risks. He points to the emerging field of “trust and safety,” where teams test AI models to ensure they don’t produce harmful outputs, like instructions for self-harm or bioterrorism. The UK, he notes, hosted a groundbreaking AI safety summit in 2023, with another planned in France for 2025. These efforts signal a global awakening to AI’s dangers, but they’re only the beginning.

The geopolitical implications are profound. If AI’s power is concentrated in a few heavily guarded data centers—akin to nuclear arsenals—governments might manage deterrence and non-proliferation. But if AI proliferates widely, accessible to terrorists or rogue states, the risks multiply exponentially. Schmidt’s time working with the U.S. Department of Defense, visiting secure plutonium facilities, informs his view that some AI systems may need similar protection. The question is whether we can balance innovation with security in a world where code travels faster than missiles.


The Human Element: Will AI Redefine Us?

Perhaps the most unsettling question is what AI means for our humanity. Schmidt argues that AI will amplify human potential, not replace it. He dismisses fears of mass joblessness, pointing to historical examples like the Industrial Revolution, where automation displaced workers but created new opportunities. Today’s demographic challenges—fewer workers supporting aging populations—make AI’s productivity boost essential. From robotic security guards to synthetic film backdrops, AI will eliminate dangerous or repetitive jobs while creating demand for skilled roles in tech, healthcare, and beyond.

Yet, there’s a deeper concern: what happens when AI shapes our identities? Schmidt raises the provocative idea of children growing up with AI as their “best friend.” It’s a social experiment on a billion people with no control group, he warns. Will these kids, immersed in algorithmic worlds, develop the same values, resilience, and creativity as past generations? The rise in teen mental health issues linked to social media suggests we’re already on shaky ground. As AI becomes more integrated into daily life, from education to entertainment, we risk a generation molded by machines rather than human connection.

Schmidt also tackles the notion of a “two-species” humanity—those augmented by AI (perhaps via neural interfaces like Neuralink) versus those who remain unenhanced. While he sees Neuralink as speculative, he acknowledges that AI’s seamless integration into our lives could blur the line between human and machine. Will we notice when our decisions, preferences, and even emotions are subtly shaped by algorithms? Schmidt’s hope is that AI will deliver “greater delight,” making life more convenient and fulfilling. But he’s clear: we must retain control over our moral and ethical choices, or risk losing what makes us human.


Lessons from a Tech Titan: Building a Better Future

Beyond AI, Schmidt’s reflections offer timeless lessons for entrepreneurs and leaders. His “first principles” for building great companies emphasize risk-taking, technical talent, and a relentless focus on scale. He advises aligning with “divas”—visionaries like Steve Jobs or Elon Musk who push boundaries—and avoiding “knaves” who prioritize self-interest over impact. At Google, Schmidt learned that innovation thrives when you empower technical talent and give them ownership of problems. His mantra: build the right product, and customers will come.

He also stresses the importance of critical thinking, especially in an AI-driven world rife with misinformation. “Check assertions,” he urges, recounting how he learned at Google to verify claims rather than accept them at face value. For young people, like the 18-year-old brother of the podcast host’s partner, Schmidt recommends learning Python—a versatile programming language central to AI development—as a way to stay relevant in a tech-driven economy. But more than coding, he emphasizes analytical skills, urging us to question what we’re told and seek truth amid the noise.
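To give that advice a little texture, here is the kind of toy exercise a newcomer might start with: a single artificial “neuron,” written in plain Python, learning the logical AND function by gradient descent. This is a classroom illustration, not anything Schmidt describes, but it captures the hands-on tinkering he is pointing young learners toward.

```python
import math
import random

# A single sigmoid "neuron" learning logical AND by gradient descent.
# Purely a teaching toy: two weights, one bias, four training examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = random.random(), random.random(), random.random()
learning_rate = 0.5

def predict(x1, x2):
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))  # sigmoid activation

for _ in range(5000):
    for (x1, x2), target in data:
        p = predict(x1, x2)
        # gradient of 0.5 * (p - target)**2 with respect to the pre-activation
        grad = (p - target) * p * (1 - p)
        w1 -= learning_rate * grad * x1
        w2 -= learning_rate * grad * x2
        b -= learning_rate * grad

for (x1, x2), target in data:
    print(f"{x1} AND {x2} -> {predict(x1, x2):.2f} (target {target})")
```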


Conclusion: A Call to Action

Eric Schmidt’s vision of AI is both exhilarating and sobering. It’s a future where we could eradicate global disparities in education and healthcare, where productivity soars, and where human ingenuity reaches new heights. But it’s also a world where a single misstep could unleash catastrophic consequences—be it a bioengineered virus, a cyberattack, or a generation lost to algorithmic despair. His message is clear: AI’s trajectory is inevitable, but its outcome depends on us. Will we rise to the challenge, setting guardrails to protect our values and our future? Or will we let this transformative power slip through our fingers, reshaping humanity in ways we can’t predict?

As I reflect on Schmidt’s words, I’m struck by the weight of responsibility we all share. This isn’t just about tech giants or policymakers—it’s about every one of us, from entrepreneurs to educators to everyday citizens. AI is already here, shaping our lives in ways we barely notice. The question isn’t whether it will change the world, but whether we’ll guide that change with wisdom and foresight. Schmidt’s optimism gives me hope, but his warnings linger. It’s time to pull the plug on complacency and start building a future where AI serves humanity, not the other way around.

