The Year Was 2011
And scale became the safest form of ambition
Welcome, this is A Short Distance Ahead, a weekly series exploring a single year in the history of artificial intelligence. I’m writing 75 essays to mark the 75th anniversary of when Alan Turing posed the question: “Can machines think?” This is essay 61. You don’t need to read these in order; each piece stands alone. The goal isn’t prediction, but perspective: understanding how we got here, and how the futures we live with often emerge sideways from the ones we set out to build.
The year was 2011 and the sixth annual Singularity Summit1 was back in New York City. Back inside the honey-colored wood-paneled auditorium of the 92nd Street Y, with the names of prophets and founders atop the walls: Moses, David, Isaiah, Washington, Jefferson.
The Summit no longer felt fringe. The room was full: investors, engineers, founders, researchers. The tone was earnest, occasionally anxious. At times, people spoke as if time itself were running out.
That year, the Singularity landed on the cover of TIME Magazine, in a long essay by Lev Grossman making the case that it was no longer a fringe obsession, but a serious hypothesis about the future of life on Earth. Exponential curves, intelligence explosions, immortality — ideas once confined mostly to mailing lists and conference rooms — had crossed into mainstream culture.
Downtown, a twenty-minute subway ride away, tents filled Zuccotti Park. Occupy Wall Street protestors were camped out demanding accountability from banks, from institutions, from systems that no longer seemed answerable to anyone.

Inside the Y, the conversation was different. Reforming institutions seemed beside the point — less interesting, and less workable, than simply bypassing or blowing them up, and building new ones from scratch. The future, if it was going to arrive at all, would have to be built another way.
Earlier that year, millions of people had watched a computer win Jeopardy!. Ken Jennings, one of the humans it defeated, was now onstage at the Summit, joking that he might be the first person to lose his job to a machine. The line landed because it captured something unsettling and still unresolved. The machine didn’t understand the questions. It didn’t explain itself. It simply worked—and that was enough.
The Summit’s theme was The Changing Roles of Humans and Machines, but the deeper question was harder to name: if intelligence could be scaled without being understood, what still required collective responsibility?
The talks moved quickly: Stephen Wolfram, Christof Koch, Max Tegmark, Eliezer Yudkowsky2, and of course, the Transcendent Man himself, Ray Kurzweil.3 Slides clicked by. Exponential curves filled screens. At one point, swarms of small robots skittered across the stage to demonstrate that intelligence did not need to be centralized to be effective; a thousand simple machines, coordinated loosely, could outperform a single sophisticated one.
Scale, not elegance, was doing the work.
Even the institutions built around Singularity thinking were beginning to reflect this logic. Singularity University, still young and already well connected, was less a university than a replication engine: short programs, rotating faculty, high-status participants, a shared set of frameworks for thinking in exponential curves. It did not need to persuade people the future was coming. It assumed the premise and trained them to act accordingly. Inspiration mattered, but only insofar as it could be packaged, networked, and repeated.
Peter Thiel4 followed with a diagnosis that would recur throughout the weekend. Society, he argued, had entered a period of stagnation—not because ideas had run out, but because ambition had narrowed.
Globalization, Thiel suggested, had made us very good at copying and optimizing what already worked, and deeply uncomfortable with committing to what did not yet exist.
He illustrated the shift through science fiction itself. In Star Trek’s original series, Spock was half-human, half-Vulcan, striving toward pure logic—toward something more rational than himself. By the time of The Next Generation, Data, the android, was striving in the opposite direction, trying to become more human. The reversal mattered.
Earlier stories imagined technology pulling humanity forward; newer ones imagined it needing to be softened, restrained, aligned. For Thiel, this wasn’t a storytelling quirk. It was evidence that society had lost confidence in its own technological future.
A similar unease was surfacing elsewhere. That same year, Tyler Cowen, who also spoke at the Summit, was describing a “Great Stagnation,” a period in which the easy gains of earlier eras had been exhausted. Growth continued, but it felt thinner—more optimization than transformation. The gains were real, but smaller, harder to see, easier to doubt.
Science fiction writer Neal Stephenson named the underlying problem more directly. He called it Innovation Starvation. (Thiel referenced Stephenson’s thinking in his own talk.) Society, Stephenson argued, still produced visions of the future in abundance—but those visions no longer functioned as plans, or even as commitments. Science fiction could imagine worlds endlessly5, but imagining was no longer the hard part. Execution was. The failure, Stephenson suggested, was not technical but civic. Big projects require shared agreement, long coordination, patience, and the willingness to accept blame when things go wrong. Increasingly, those conditions felt unavailable.
Figures like Ray Kurzweil had spent decades turning exponential curves into narratives about destiny, about immortality, superintelligence, and species-level transformation. But by 2011, those narratives were no longer required for the work to continue.
What the Summit still framed as belief, the industry was already turning into method.
Inside Google, researchers like Andrew Ng were betting that intelligence did not need to be carefully designed or explained. It needed data, compute, and enough scale for patterns to emerge on their own. Long-dismissed neural networks were suddenly outperforming more interpretable approaches simply by being larger. ImageNet had crossed a threshold where millions of imperfect labels mattered more than clean theories. The results spoke for themselves.6
Ng was also rethinking how intelligence was taught. That year, Stanford University’s introductory AI and machine learning courses escaped the classroom, together enrolling hundreds of thousands of students online.
Expertise could be abstracted from institutions, distributed globally, and refined through feedback rather than tradition. Teaching no longer required shared context, mentorship, or agreement about what learning should feel like.
It only required that it scale.
The pattern was the same everywhere. Systems no longer needed to be understood in order to be trusted. They didn’t need to justify themselves in advance. They could proceed first and explain later—if at all.
“The data suggests.”
“The model learned.”
“Users preferred.”
Scale, in this sense, was epistemically modest—it claimed no deep understanding of causes or consequences. And it was morally deflective—it allowed enormous impact without explicit ownership.
That same year, Marc Andreessen7 put the logic into words. In an essay titled Software Is Eating the World, he argued that software-driven companies were no longer just disrupting individual industries, but methodically replacing them—media, retail, transportation, finance, even defense. The case was not philosophical or ethical. It was operational. Software worked at global scale, could be deployed without permission, and improved through use rather than consensus. It didn’t need to persuade institutions or reform them. It could simply move around them.
In Andreessen’s telling, this wasn’t a political or cultural transformation. It was a technical one—and that was precisely the point.
In October 2011, Apple unveiled the iPhone 4S. Siri spoke.
Steve Jobs did not appear onstage.
Jobs had personally courted the Siri team, drawn to the idea of an interface that could hide extraordinary technical complexity behind something conversational, even magical. Interfaces, he understood, were how technology could spread without demanding understanding from its users.
But Jobs himself represented something that could not scale: taste, intuition, visible judgment—a future you could point to and say this person chose this.
The day after the announcement, Jobs died.
The news traveled the way everything did by then: through screens, alerts, messages. People paused at their desks. Phones buzzed again. The day continued.
2011 was not the year machines began to think. It was the year many of the hardest questions about the future stopped being argued about—and started being handled quietly, at scale.
Two From Today
If you’ve felt the last year of AI coverage swing between hype and dismissal, this Radiolab episode (reported/produced by Simon Adler) is a smart reset. It tries to answer a more basic question first: what kind of intelligence are we actually dealing with?
And also from Adler, a non-AI-related recommendation:
W I N D S T A R Enterprises, Adler’s super-creative music project. It feels like a small world you can step into that’s playful, thoughtful, and engineered with the same incredible care he brings to his audio stories. It’s strange in the best way.





