In a world where artificial intelligence is no longer just a buzzword but a force reshaping industries, one startup is daring to chase what sounds like science fiction: superintelligence tailored for businesses. Reflection AI, founded by former DeepMind researcher Misha Laskin, recently emerged from stealth mode with a hefty $130 million in funding and a bold mission to create autonomous systems that think and act like the sharpest minds in a company. But what does “superintelligence” even mean in the context of everyday work? And why is coding the unlikely gateway to this future? In a recent podcast interview, Laskin peeled back the layers on his vision, revealing a pragmatic path forward amid the hype.
As someone who’s followed AI’s evolution from clunky chatbots to tools that can write essays or generate art, I find Laskin’s approach refreshingly grounded. He’s not promising robots taking over the world; instead, he’s talking about AI that acts like that indispensable senior engineer who knows every nook and cranny of a company’s operations. It’s a reminder that true innovation often lies in solving real problems, not just chasing benchmarks.
From Physics Dreams to AI Frontiers: Laskin’s Unconventional Path
Laskin’s story reads like a modern-day immigrant tale infused with scientific ambition. Born in Russia, he moved to Israel as a toddler before settling in rural Washington state—a place far removed from Silicon Valley’s glitz. Growing up in a town tied to the Manhattan Project (think atomic-themed everything, from bowling alleys to street names), he developed an early fascination with physics. “As a kid, I got pretty obsessed with physics and wanted to be a theoretical physicist,” he recalled. His parents, chemists at a national lab, fueled that curiosity with books like Richard Feynman’s lectures.
But physics, as Laskin discovered during his PhD, felt like a field frozen in time. The breakthroughs he studied were from the early 20th century: Einstein, Bohr, Heisenberg. Then came AlphaGo in 2016, DeepMind’s AI that defeated a world champion in the ancient game of Go. It was a watershed moment in AI history, proving that reinforcement learning could achieve superhuman performance in complex domains. For Laskin, it was a wake-up call: “I realized that I had picked an interesting science, but not the science of our time.”
This pivot mirrors broader shifts in science. Just last year, the Nobel Prizes in physics and chemistry went to AI pioneers like Geoffrey Hinton and Demis Hassabis, highlighting how AI is “eating” traditional fields. Hinton, often called the “godfather of deep learning,” warned of AI’s risks even as he celebrated its potential. Inspired, Laskin dove into reinforcement learning in Pieter Abbeel’s lab at UC Berkeley, a hub that birthed startups like Perplexity AI and the robotics firm Skydio. By 2020, amid the pandemic, he was rubbing shoulders with future innovators, co-authoring papers that laid groundwork for today’s generative models.
His next stop? DeepMind in Toronto, where he joined a team tackling “general agents”—AI that learns without constant hand-holding. There, he contributed to Gemini, Google’s flagship AI model, focusing on reinforcement learning from human feedback (RLHF), a technique that fine-tunes models to be more helpful and aligned with user needs. It’s the same method behind ChatGPT’s conversational magic. But after shipping Gemini 1.5 in early 2024, Laskin felt a shift: LLMs (large language models) had crossed a utility threshold. They weren’t just research toys anymore; they were ready for real-world impact.
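For readers curious what RLHF actually optimizes, its first stage is typically training a reward model on human preference pairs. The sketch below shows only that core loss (a Bradley-Terry pairwise objective) in PyTorch, with toy scalar scores standing in for a real model’s outputs. It is a minimal illustration of the general technique, not Gemini’s or any lab’s actual pipeline.

```python
# Minimal sketch of RLHF's reward-modeling stage: humans pick which of two
# responses is better, and the model learns scores where the chosen response
# outranks the rejected one. Real pipelines are vastly larger; the scores
# below are toy numbers, not outputs of an actual reward model.
import torch
import torch.nn.functional as F

def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss: -log sigmoid(r_chosen - r_rejected).

    The loss shrinks as the reward model scores the human-preferred
    response above the rejected one.
    """
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy scores for a batch of three preference pairs.
chosen = torch.tensor([1.2, 0.4, 2.0])
rejected = torch.tensor([0.3, 0.9, 1.1])
print(reward_model_loss(chosen, rejected))  # lower when chosen outscores rejected
```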
Geopolitically, this era echoes the Cold War’s space race. The U.S. dominates AI talent, but China’s investments in surveillance AI and Europe’s regulatory push (like the EU AI Act) create tensions. Laskin’s move from academia to industry underscores a talent war, with Meta’s Mark Zuckerberg offering multimillion-dollar packages to poach researchers. Yet, as Laskin notes, many are driven by discovery, not just dollars.
Redefining Superintelligence: From AGI Hype to Organizational Oracles
Superintelligence—a term popularized by philosopher Nick Bostrom in his 2014 book—conjures images of god-like machines outsmarting humanity. But in AI circles, it’s become a slippery concept. Once, AGI (artificial general intelligence) meant human-level smarts across tasks; now, with models like GPT-4 acing exams, the goalposts have shifted. “Superintelligence is being used synonymously with what AGI used to mean,” Laskin quipped, likening it to Indiana Jones swapping a relic for a sandbag—hoping nothing collapses.
Reflection AI’s twist? “Organizational superintelligence.” Imagine an AI oracle that knows a company inside out, like a principal engineer or top sales leader rolled into one. It comprehends codebases, chats, docs, and even tribal knowledge—the stuff in employees’ heads that vanishes when they leave. Why does this matter? In enterprises, knowledge silos cost billions in lost productivity. A McKinsey report estimates AI could add $13 trillion to global GDP by 2030, mostly through efficiency gains.
Laskin’s focus on coding as the entry point is clever. Coding isn’t just for developers; it’s how AI will interface with software like Salesforce or creative tools via APIs. “If you solve coding, you’ve built the hands and legs of digital AI,” he explained. Historically, GUIs (graphical user interfaces) made computers intuitive for humans, but LLMs, trained on internet text, find code more natural. It’s the reverse of our evolution—humans mastered tools through spatial intuition, while AI thrives on structured language.
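To make “code as the interface” concrete, here is a minimal sketch of a tool an agent could invoke to update a CRM record through a REST call instead of clicking through a GUI. Everything here is hypothetical: the endpoint, the record fields, and the update_opportunity helper are illustrative stand-ins, not a real Salesforce or Reflection AI API.

```python
# Hypothetical sketch: an AI agent driving business software through an
# API rather than a GUI. The endpoint and payload are illustrative only.
import json
import urllib.request

API_BASE = "https://crm.example.com/api/v1"  # placeholder, not a real service

def update_opportunity(opp_id: str, stage: str, amount: float) -> dict:
    """Update a CRM opportunity record via a single REST call.

    A human would click through several screens to do this; an agent
    that can emit code does it in one structured request.
    """
    payload = json.dumps({"stage": stage, "amount": amount}).encode("utf-8")
    req = urllib.request.Request(
        f"{API_BASE}/opportunities/{opp_id}",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="PATCH",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```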
But here’s a concern: if AI handles coding autonomously, what happens to jobs? Laskin sees it augmenting rather than replacing engineers, freeing them from grunt work like deciphering and debugging legacy code, which by some estimates consumes up to 70% of their time. Still, as automation accelerates, societies must grapple with reskilling. Geopolitically, nations leading in AI coding tools could dominate software development, a $500 billion market.
Asimov: The First Step Toward AI-Powered Code Research
This week, Reflection AI unveiled Asimov, named after sci-fi legend Isaac Asimov, whose Three Laws of Robotics imagined safeguards for machines and explored how they fail. It’s not a code-writing bot but a “code research agent” for organizations, tackling the comprehension gap in current tools.
Think of it as a deep-dive detective. Engineers spend most of their time unraveling problems: why a bug persists, how legacy systems interact. Asimov aggregates data from codebases, project tools like Jira, chats, and docs. A standout feature is “teamwide memories,” which capture institutional knowledge so it doesn’t walk out the door with departing staff; a rough sketch of the idea follows below. Senior engineers, often bombarded with questions, love it: finally, a way to document without drudgery.
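As a sketch of what a “teamwide memory” might look like in the simplest possible form: durable question-and-answer records that outlive any individual employee. Asimov’s actual implementation is not public; the class, fields, and naive keyword matching below are purely illustrative.

```python
# Hypothetical sketch of a "teamwide memory" store: answered questions are
# recorded once and stay queryable after their author moves on. A production
# system would use neural retrieval, not keyword matching.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Memory:
    question: str
    answer: str
    author: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class TeamMemoryStore:
    def __init__(self) -> None:
        self._memories: list[Memory] = []

    def record(self, question: str, answer: str, author: str) -> None:
        """A senior engineer answers once; the answer is kept for everyone."""
        self._memories.append(Memory(question, answer, author))

    def recall(self, query: str) -> list[Memory]:
        """Naive keyword match, standing in for real retrieval."""
        terms = query.lower().split()
        return [m for m in self._memories
                if any(t in m.question.lower() for t in terms)]

# Usage: institutional knowledge survives staff turnover.
store = TeamMemoryStore()
store.record("Why does the billing job retry twice?",
             "Legacy workaround for a flaky upstream API.",  # hypothetical answer
             author="senior_eng")
print(store.recall("billing retry")[0].answer)
```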
Under the hood, it’s a multi-agent system: a central reasoning brain dispatches “scout” agents to fetch context, using neural retrieval that favors accuracy over speed. This beats RAG (retrieval-augmented generation), which Laskin calls “primitive”: prone to false positives and limited to one-shot lookups. Agentic search, like that in Claude’s tools, is a step up, but still like navigating a jungle with a flashlight.
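As an illustration of that dispatch pattern, and nothing more (the source names, the scout function, and the stopping rule are all hypothetical), here is a minimal sketch of a central reasoner fanning out parallel scout queries and merging the results, rather than betting everything on a single retrieval pass the way one-shot RAG does.

```python
# Hypothetical sketch of the dispatch pattern: a central reasoner fans out
# "scout" queries across sources in parallel, merges what comes back, and
# can keep searching across rounds until the context is sufficient.
from concurrent.futures import ThreadPoolExecutor

def scout(source: str, query: str) -> list[str]:
    """Stand-in for a scout agent searching one source (code, Jira, chat, docs)."""
    # A real scout would run neural retrieval or an agentic search here.
    return [f"[{source}] result for '{query}'"]

def research(question: str, sources: list[str], max_rounds: int = 3) -> list[str]:
    context: list[str] = []
    queries = [question]  # a real reasoner would refine these between rounds
    for _ in range(max_rounds):
        with ThreadPoolExecutor(max_workers=len(sources)) as pool:
            futures = [pool.submit(scout, s, q) for s in sources for q in queries]
            for f in futures:
                context.extend(f.result())
        if context:  # a real reasoner would judge whether context suffices
            break
    return context

print(research("why does the bug persist?", ["codebase", "jira", "chat", "docs"]))
```

The key contrast with one-shot RAG is the loop: retrieval becomes an iterative process the reasoner controls, not a single lookup bolted onto generation.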
Pragmatism shines here. Reflection deploys in virtual private clouds for security, a must for enterprises wary of data leaks. Reinforcement learning sharpens it over time, fixing blind spots in third-party models. Laskin credits four breakthroughs: deep neural networks (2012 ImageNet), reinforcement learning (AlphaGo 2016), transformers with internet data (GPT-3 2020), and RLHF/reasoning models. “We have all the ingredients,” he asserts.
Yet, challenges loom. Benchmarks like SWE-Bench overestimate real-world prowess: a 70% benchmark score doesn’t mean an agent solves 70% of the tasks a real engineering team faces. And while junior-level autonomy is here (roughly the equivalent of an L4 engineer), reaching principal level requires the kind of “context engine” Asimov is meant to become.
Navigating the AI Startup Landscape: Talent, Capital, and the Road Ahead
Building in this era isn’t easy. With Zuckerberg’s poaching sprees and startups like Safe Superintelligence raising billions on little more than a mission statement, how does a newcomer compete? Laskin’s team, drawn from DeepMind, OpenAI, and Anthropic, chose impact over big-lab stability. “We attract people with that internal drive,” he said. Equity stakes help; joining early could yield generational wealth, as with Anthropic’s rise.
Capital? It’s crucial, but Laskin argues a focused effort can run on roughly 10x less of it than a frontier lab. Reflection’s $130 million funds GPU scaling, not endless experiments. The team splits half-research, half-product, blending PhDs with builders.
Looking ahead, Laskin envisions a “garden” of specialized superintelligences—medical, scientific, organizational—forming a collective general one. No single lab dominates; it’s collaborative, product-driven. But risks abound: misalignment could amplify biases, and geopolitical rivalries might weaponize AI.
As I reflect, Laskin’s journey from Hanford’s atomic shadows to AI’s forefront feels poetic. In an age of hype, his emphasis on useful, enterprise-focused AI offers hope. Will organizational superintelligence transform work? If Reflection succeeds, the answer might be sooner than we think.