The Year Was 2010
And the conditions were set
Welcome, this is A Short Distance Ahead, a weekly series exploring a single year in the history of artificial intelligence. I’m writing 75 essays to mark the 75th anniversary of when Alan Turing posed the question: “Can machines think?” This is essay 60. You don’t need to read these in order; each piece stands alone. The goal isn’t prediction, but perspective: understanding how we got here, and how the futures we live with often emerge sideways from the ones we set out to build.
It was August 2010.
And if you were anywhere else in the country that week, you were probably sweating. It was a record heatwave on the East Coast. Subways stalled. Asphalt softened. The air felt thick and sticky.
But in San Francisco, at the foot of Market Street, the world was gray.
The fog—the “marine layer,” as the meteorologists called it—had moved in and decided to stay. It was the coldest summer in forty years.
The Hyatt Regency stood out on the waterfront like a contained future. Its immense atrium, conceived by architect John Portman, was inspired by a 1930s H.G. Wells film called Things to Come. The story was about civilization rebuilding itself after the end of the world, offering a vision of renewal through scale—humanity starting over, this time inside glass and steel. Inside the Regency, in 2010, people were gathering to talk about the Singularity.
They drank coffee from paper cups and imagined the year 2045. They debated when the machines would begin to move faster than the bodies and institutions that built them, and when intelligence would escape its biological constraints and accelerate beyond human control.
But outside, the cable cars were still clanging. The air still tasted like salt and damp wool. People hurried to work. Tourists clustered at corners, waiting for the sun to poke through the low-hanging gray sky.
For Demis Hassabis, who had traveled from London to speak that morning, the future being debated inside the hotel conference rooms was not a prophecy so much as a problem. Something to be approached carefully. Something to be built, piece by piece, from rules, incentives, and systems.
For most of his life, Hassabis had been building such worlds.
As a child, he learned to think backward from endings. He grew up “the lone mathematical genius in a family of bohemian creatives. His mother, Angela, was a devout Baptist who immigrated to the UK from Singapore, then met her future husband in the host family she was staying with in North London, a free-spirited Greek Cypriot named Costas Hassabis.”1
By the age of thirteen, he was the second-best chess player in the world under the age of fourteen. Chess taught him that the present only made sense in light of what might come next. Every move carried futures inside it.
In his twenties, he carried that habit with him into video games—not games of reflex or spectacle, but games of systems: Theme Park, Creatures, Black & White. Games he helped build, where you didn’t play a hero so much as set the conditions and watch behavior emerge. You adjusted hunger, reward, punishment. You tuned feedback. Then you waited to see what happened.
What fascinated Hassabis was not simply that these worlds worked, but that they worked without being told how. Intelligence appeared where none had been explicitly programmed. Coherent behavior arose from simple rules. The games behaved as if they understood their environments, even when they did not.
That success didn’t satisfy him so much as sharpen his curiosity. The systems produced convincing behavior without explaining its source. They demonstrated intelligence without accounting for it. So Hassabis did what few successful designers do. He stepped away—not from the phenomenon itself, but toward it. He went looking for the mechanisms that made such behavior possible in the first place.
He turned to neuroscience, to the slow study of how real minds learn, remember, plan, and imagine. He believed the brain followed physical rules that could be studied and modeled, but produced forms of intelligence complex enough to be worth understanding in their own right. If intelligence could be reverse-engineered—if its principles could be uncovered rather than merely imitated—it might illuminate something deeper than performance. It might clarify how minds arise at all.
By the time Hassabis returned to building, he was no longer interested in narrow victories. He was interested in generality. In systems that could move across domains. Learn in unfamiliar environments. Transfer what they knew from one problem to another. A close colleague, Shane Legg, had helped formalize this ambition with a name—artificial general intelligence—not as a slogan, but as a technical challenge: intelligence defined not by a single task, but by the capacity to achieve goals across many.
The idea was unfashionable. AI, as a field, still carried the scars of its own history. Expert systems had promised too much and delivered too little. Rule-based approaches had collapsed under their own brittleness. Neural networks were circulating again, interesting but marginal, more curiosity than inevitability. Most serious researchers avoided talk of general intelligence altogether. It was safer to specialize. Safer to publish. Safer to create a sensible academic career.
But Hassabis and Legg were never really interested in career safety.
The first outlines of what would become DeepMind emerged in conversations over lunches and long walks near University College London—less a business plan than a shared recognition that this problem could not be solved inside existing institutions. Along with Legg was Mustafa Suleyman, then in his mid-twenties, who had come to technology less through code than conviction. He was drawn not to narrow problems but to planetary ones—poverty, climate, instability—the kinds of challenges that seemed to exceed human coordination itself. He had already cofounded a conflict-resolution firm. Now he was circling neuroscience, looking for leverage. When he encountered Hassabis’s ideas, and Shane Legg’s belief in a form of intelligence that could move across domains, he saw a way to close what felt like an expanding gap between the scale of human problems and our ability to solve them.2
They met quietly, often at a nearby Carluccio’s, choosing anonymity over ambition. “We didn’t want people to hear our crazy talk about starting AGI,” Legg would later recall. It became clear, quickly, that they would never build it in academia. Academia moved too slowly. They would age out before the resources arrived. Scale required a company. But corporate labs optimized too narrowly. If they were going to attempt something this large, they would need to build a new kind of organization: small teams, tightly focused, attacking pieces of a problem too large to grasp all at once.
What united them was the belief that intelligence itself was the bottleneck. What divided them was why it mattered. Suleyman wanted to put such a system to work—send it out into the world, gather feedback, solve immediate harms. Hassabis wanted to work backward from the endgame, treating intelligence as a scientific problem whose solution might explain something more fundamental than utility. The tension never quite resolved. It simply became the shape of the company. Hassabis structured DeepMind like a modern Manhattan Project, inspired by his reading of The Making of the Atomic Bomb, with small teams of scientists focused on subsections of a larger, unknowable whole.
That ambition required money. And not the cautious kind.
In Britain, investors were willing to fund sensible ideas with clear paths to revenue. They were not interested in underwriting a theory of intelligence itself. But Legg’s work had already been circulating in a fringe community that took such ambitions seriously. That was how the invitation to the Singularity Summit arrived. And that was how Hassabis found himself at the Hyatt Regency, on a San Francisco morning, addressing a room of people who believed the future would not simply arrive—it would accelerate.
Inside the conference rooms, the speakers and attendees spoke of timelines and inevitabilities. Of curves that bent upward and never returned. They spoke as if the Singularity were an event waiting on the calendar.
Hassabis spoke differently.
For him, intelligence was not a destiny but a construction problem. Not a rupture, but an accumulation. Something that would only emerge if the conditions were right.
One of the kindred thinkers Hassabis had hoped would be listening to him speak was Peter Thiel (see The Year Was 2000–2009), one of the few investors willing to back projects whose payoff was not a product, but a reordered future. But when Hassabis looked out from the stage that day, Thiel wasn’t there. He wasn’t in the front row. He wasn’t in the audience at all. The pitch, it seemed, had failed before it began.
But later that evening, Legg and Hassabis were invited to a party at Thiel’s Bay Area mansion. Hassabis had learned that Thiel liked chess (see The Year Was 2006), that he had been a strong junior player in his youth with a lasting fascination for the game. During a conversation over canapés, Hassabis mentioned it casually. “I think one reason chess has survived so successfully over generations,” he said, “is because the knight and bishop are perfectly balanced. That creates all the creative asymmetric tension.”
Thiel was intrigued. “Why don’t you come back tomorrow and do a proper pitch?” he said.3
Within weeks, Thiel invested $1.4 million. It wasn’t a bet on revenue. It was a bet on outcome. Other early backers who followed shared that orientation, though not always the same beliefs. One worried less about how fast AGI would arrive than whether it might destroy its creators. Another pushed for safety and alignment, steeped in the writings of Eliezer Yudkowsky (see The Year Was 2000) and the growing LessWrong community, where existential risk was discussed with near-religious intensity.4 From the beginning, DeepMind sat at the intersection of faith, fear, and formal systems.
By 2010, the experiment that began with the age of the web had largely reached completion.
The boundary between the digital world and the physical one had thinned (see The Year Was 1978). You no longer “went online.” Reality flowed continuously through screens and back again. Middlemen and mediators—editors, gatekeepers, institutions—were increasingly bypassed, replaced by direct, frictionless access. Smartphones collapsed distance. Feedback arrived instantly. Problems were framed as systems to be optimized, leveled up, tuned. Daily life dissolved into loops of signal and response.
That October, Paul Graham (see The Year Was 2006), the computer scientist who had become a millionaire after selling his e-commerce company to Yahoo, wrote a blog post titled What We Look for in Founders. Graham’s ideas about treating a start-up founder’s vision as sacrosanct would go on to shape the prevailing wisdom, and more significantly, the dual-class share structures that let that era’s startup founders hold unusual levels of control over their companies. Graham’s post that year mentioned a young founder named Sam Altman (see The Year Was 2002, 2006, 2007, 2008), who impressed Graham not with brilliance so much as temperament: an unusual seriousness, a capacity to inhabit systems defined by incentives, feedback, and iteration, and an ability to treat those conditions not as metaphor, but as the environment itself.
That year, the iPad moved from announcement to retail shelves—not to solve a new problem, but to make screen time flatter, easier, more continuous (see The Year Was 2007).
The sensation of being part of an augmented humanity became ordinary. Smartphones privileged mobility over posture, lightening the human–keyboard–screen relationship until devices felt less like tools than extensions of body and mind.
Instagram launched that fall with the hope of “igniting communication through images,” teaching a generation to experience reality twice: once while living it, and again while posting it. Moments were filtered, framed, ranked. Practice runs for a life increasingly lived inside metrics.
By December, a street vendor in Tunisia would set himself on fire, birthing something that hinted at a new form of mass coordination—one that required no central authority, only platforms and momentum. Around the same time, Eli Pariser began writing about filter bubbles, giving a name to a sensation many already felt: mediation hadn’t disappeared. It had gone dark.
This world bore a quiet resemblance to the environments Hassabis had once built for fun—environments shaped by incentives. Saturated with feedback. Dense with signals.
That same year, Andrew Ng (see The Year Was 2005, 2008, 2009) met with Larry Page (see The Year Was 1998).
The conversation was more clinical than philosophical, but it pointed in the same direction. Intelligence, they believed, could be treated as an infrastructure problem: data, compute, and ambition.
Sebastian Thrun (see The Year Was 2005) was thinking along similar lines, pushing autonomy, exploration, and scale. Across the industry, the intuition was converging. If the world was now structured as a network of measurable behavior, continuous feedback, and scalable incentives, then intelligence could be trained inside it.
Apple, meanwhile, chose a different path. In 2010, it acquired Siri, a system born of an earlier vision of AI, one focused on fitting intelligence around the human rather than scaling it beyond us. Siri was cautious. Intimate. Constrained. It treated language as an interface, not a substrate. In hindsight, it reads less like a failure than a relic of restraint. A reminder that another future had once been imaginable.
That was the story of 2010: the moment when human culture finished building the environment artificial intelligence would later learn from.
Games had taught us to live inside systems. Platforms had taught us to accept opaque mediation. Screens had taught us to narrate ourselves in real time. Belief, money, and machinery aligned just enough for the question Hassabis had been asking since his youth to become actionable.
Nothing dramatic announced this shift. There was no single breakthrough. No public reckoning. But the environment had taken shape. From that point on, intelligence—human and otherwise—would be trained inside it.
We did not yet know what we had built.
But we had built it well enough for something else to begin learning.
A Few Good Listens
I’ll be sharing more soon on my look back at this past year, specifically on curation and why the clearest way to read this AI moment is from two directions at once: from inside the systems being built, and from the lives already being shaped by them. I’ll include my thoughts and some recommended reads. But for the end of the year, I highly recommend:
The work of Matthew Boll and Andy Mills and the entire team at Longview. Their podcast series, The Last Invention, treads much of the same ground as this newsletter, but it is so well done and so enjoyable to listen to. I’m grateful for their work being out there.
What Evan Ratliff is doing with Shell Game is precisely the kind of storytelling we need now more than ever to help us think a bit more sideways about the acceleratingly weird times we live in.
For a small glimpse into the peculiar depths of Singularity-era thinking, see Wikipedia’s entry about Roko’s Basilisk, a post made to the LessWrong community forum that year. For the deeper story, check out Tom Chivers’s The Rationalist’s Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity’s Future.





