The year was 2001, and Nick Bostrom, a Swedish philosopher then at Yale, circulated a paper in February with a title that should have given us pause: “Are You Living in a Computer Simulation?” His argument was deceptively simple, almost playful. If advanced civilizations eventually run many realistic simulations of human history, then most conscious beings must live in simulations rather than in any original reality. Therefore, Bostrom concluded, we probably live inside someone else’s computation.
The paper circulated through academic conferences and late-night dorm rooms like a potent intellectual intoxicant, profound at two in the morning, absurd by breakfast, yet persistent in quiet moments thereafter. Without realizing it, Bostrom had made reality itself seem artificial—sacred existence reduced to mere computation. It fed the nascent Rationalist community, where engineers and futurists debated existential risks, AI alignment, and the ethics of simulated realities.
"Reality may thus contain many levels," Bostrom wrote casually, as if describing a crowded apartment building. Our simulators might themselves be simulated, stacked like virtual machines all the way up—or down, depending on your perspective.
The Birth of Big Data
Meanwhile, in Redmond, Washington, Michele Banko and Eric Brill, two Microsoft researchers, were discovering something that would prove even more consequential. They were working on the unglamorous problem of confusion set disambiguation: teaching computers to choose correctly among easily confused words like "to," "too," and "two." What Banko and Brill found overturned conventional wisdom in machine learning. It had long been assumed that smarter algorithms mattered most and that, past a certain point, more data produced diminishing returns. They found the opposite: simple algorithms, when fed hundreds of millions of training examples, far beyond the corpora the field was accustomed to, kept improving dramatically. Rather than hitting a plateau, performance climbed steadily, almost indefinitely, in a roughly log-linear ascent.
"We may want to reconsider," they wrote with understated precision, "the trade-off between spending time and money on algorithm development versus spending it on corpus development."
Translation: Don’t just try to build smarter algorithms. Feed simple algorithms more data.
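Their experiment is easy to caricature in a few lines of modern code. The sketch below is mine, not theirs: it holds a deliberately simple learner fixed and varies only how much text it sees for the to/too/two task. The corpus.txt path, the three-word context window, and the scikit-learn Naive Bayes model are all illustrative assumptions rather than details from the Banko and Brill paper.

```python
# A toy learning-curve experiment in the spirit of Banko and Brill: keep a
# simple learner fixed and vary only how much text it is trained on.
# Assumes a plain-text corpus at "corpus.txt" (hypothetical), one sentence
# per line; the context window and model choice are illustrative only.
import re

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

CONFUSION_SET = {"to", "too", "two"}


def make_examples(lines):
    """Turn each occurrence of a confusion-set word into (context, label)."""
    examples = []
    for line in lines:
        tokens = re.findall(r"[a-z']+", line.lower())
        for i, tok in enumerate(tokens):
            if tok in CONFUSION_SET:
                # Features: surrounding words only; the target itself is hidden.
                context = " ".join(tokens[max(0, i - 3):i] + tokens[i + 1:i + 4])
                examples.append((context, tok))
    return examples


with open("corpus.txt", encoding="utf-8") as f:
    contexts, labels = map(list, zip(*make_examples(f)))

X_train, X_test, y_train, y_test = train_test_split(
    contexts, labels, test_size=0.2, random_state=0)

# Same simple algorithm at every scale; only the training-set size grows.
for fraction in (0.01, 0.1, 0.5, 1.0):
    n = max(1, int(len(X_train) * fraction))
    vectorizer = CountVectorizer()
    model = MultinomialNB().fit(vectorizer.fit_transform(X_train[:n]), y_train[:n])
    accuracy = model.score(vectorizer.transform(X_test), y_test)
    print(f"{n:>10,} training examples -> accuracy {accuracy:.3f}")
```

Nothing changes between runs except the amount of data, so any climb in the printed accuracies comes from scale alone, which is the log-linear ascent described above.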
This was the birth of Big Data—though no one called it that yet. It signaled the dawning realization that quantity itself could become a form of quality, that scale could compensate for intelligence. It was a revelation that would soon reshape our relationship to knowledge, privacy, prediction—and reality itself.
A brief interruption: A SHORT DISTANCE AHEAD is reader-supported. Each week, I excavate a single year from AI's history while simultaneously drafting these essays with the very tools I'm examining—a strange recursive loop that feels both necessary and unsettling. I know these subscription appeals are everywhere now, each newsletter another voice in an overwhelming chorus asking for support. But if you find value here—whether through subscribing, sharing, or supporting financially—I'd be grateful. For the cost of one monthly cup of fancy coffee, you can help keep this exploration going. Paid subscribers will also get access to additional notes on my process and findings (coming soon!). As someone deeply concerned about algorithmic text diluting human storytelling, I'm grateful for every reader who values this admittedly contradictory experiment. Thank you for your time and attention—still the scarcest resources in our accelerating world.
The Behavioral Engineers
In Palo Alto, PayPal had survived the dot-com crash by pioneering digital payments and confronting fraud head-on. Peter Thiel, who replaced Elon Musk in a boardroom coup the previous year, understood something deeper: PayPal wasn't in the banking business. It was in the business of engineering human behavior.
John Kothanek, PayPal's fraud investigator and former military intelligence officer, had spent months drawing spider web diagrams on whiteboards, mapping connections between fraudulent accounts. All roads led back to someone in Russia operating under the handle "Igor." But in stopping Igor, Kothanek had inadvertently created something more powerful than a fraud detection system. He'd built a behavioral prediction engine.
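The mechanics behind those diagrams are simple enough to sketch. The snippet below is a toy reconstruction under my own assumptions, not PayPal's system: treat each account as a node, link any two accounts that share an identifier such as a card number, IP, or shipping address, and see which clusters emerge. All account names, fields, and values are invented.

```python
# Toy sketch of the link analysis behind a fraud "spider web": connect any
# two accounts that share an identifier and surface the resulting clusters.
# All account data and field names here are invented for illustration.
from collections import defaultdict

accounts = {
    "acct_001": {"card": "4111000011110001", "ip": "203.0.113.7"},
    "acct_002": {"card": "4111000011110001", "ip": "198.51.100.4"},
    "acct_003": {"ip": "203.0.113.7", "address": "12 Elm St"},
    "acct_004": {"address": "99 Oak Ave"},
}

# Union-find: accounts that share any identifier end up in the same group.
parent = {acct: acct for acct in accounts}

def find(a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]  # path compression
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

first_seen = {}  # (field, value) -> first account carrying that identifier
for acct, attrs in accounts.items():
    for identifier in attrs.items():
        if identifier in first_seen:
            union(acct, first_seen[identifier])
        else:
            first_seen[identifier] = acct

clusters = defaultdict(list)
for acct in accounts:
    clusters[find(acct)].append(acct)

# Rings of nominally unrelated accounts show up as multi-member clusters.
for members in clusters.values():
    if len(members) > 1:
        print("linked accounts:", sorted(members))
```

Union-find is used here only because it keeps the sketch dependency-free; a graph library's connected-components routine would do the same job.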
The same algorithms that could identify criminal patterns could profile customers, target advertisements, and shape behavior at scale. PayPal had discovered what would become Silicon Valley's core insight: human behavior could be programmed through careful application of psychological triggers. A $10 referral bonus here, a progress bar there, friction removed at just the right moment.1
Meanwhile, Thiel, guided by his libertarian ethos, was quietly working political channels through campaign contributions, his belief clear: democracy itself was becoming outdated, inefficient, a system ripe for disruption by technological superiority. The company formed a political action committee, funneling money to legislators who would protect PayPal from being classified as a bank, preserving the very regulatory blind spot that allowed it to build detailed behavioral maps of millions of users without oversight.2
On September 10, 2001, Thiel flew to New York with CFO Roelof Botha for a meeting with Morgan Stanley bankers. PayPal confused them—was it a technology company or an unlicensed bank? The bankers weren't interested. Dejected, Thiel and Botha took a car to JFK. Their plane, the last United flight of the day, sat on the tarmac for hours. They chose to wait rather than disembark.
They made it home to San Francisco very early on September 11. Hours later, Thiel learned that a San Francisco-bound aircraft from Newark—flight UA 93—had crashed in a field in Pennsylvania. Some of the people they'd been waiting with the night before were now dead.
Reality Breaks Through
September 11th crystallized a sense of fractured realities. No algorithm could have predicted it. No simulation could contain it. No technology could undo it.
The attacks revealed the fragility of the systems we'd been building. PayPal's fraud detection, so good at catching "Igor," had missed something infinitely more dangerous. The NSA's vast surveillance apparatus had failed at its most basic purpose. From airport security to foreign policy, every human interaction became mediated by a new algorithm—fear.
In some ways, the more lasting change wasn't the attacks themselves but how they reshaped our reality, our perception, and our memory. The events became data points, constantly replayed, reshaped, and resimulated by news media and by human memory itself.
Governments quickly discovered that the same Big Data tools developed for commercial purposes—pattern recognition, behavioral prediction, network analysis—could be repurposed for surveillance and control. The security state expanded dramatically, with the potential of turning Kothanek's spider web diagrams into a model for society itself.
Digital Optimism Persists
Yet somehow, the techno-optimism continued. On October 23, Steve Jobs stood on a stage in Cupertino and pulled an iPod from his pocket. "This amazing little device holds a thousand songs," he said, presenting it like a magician revealing something both ordinary and impossible.
The initial reactions were skeptical. "No wireless. Less space than a Nomad. Lame," read one infamous Slashdot comment. The press was underwhelmed—"Apple's iPod Spins Toward Uncertain Market" warned CNET. At $399, it seemed like an expensive toy for Mac users only. But Jobs understood something the critics didn't: the device wasn't just about music. It was about transforming your relationship with your entire music collection.
The iPod was the physical manifestation of cultural compression—music as data, infinitely copyable, instantly transferable. Where Napster had threatened to destroy the music industry by making everything free, Apple offered a more seductive bargain. You could have all your music with you always—just let us manage the transaction. Songs became files, files became data, data became a service. The gleaming white device with its satisfying scroll wheel was so perfectly designed that few noticed they were trading ownership for access.
Reality’s New Editors
Wikipedia had launched quietly in January, the overlooked stepchild of Nupedia’s failed attempt at expert-curated knowledge. Wikipedia flipped the model entirely: anyone could write, everyone could edit. By year's end, it had 20,000 articles in 18 languages. Knowledge became a collaborative, living entity, endlessly editable. The impossible was happening daily: thousands of people were volunteering to write an encyclopedia for free. Reality had become a rough draft, continually refined. The promise was that truth could emerge from chaos, that democratic participation could yield not just knowledge but reliability. Old gatekeepers vanished, replaced by an edit button and the hopeful assumption of good faith.
Mars as Exit Strategy
And on Labor Day weekend, Elon Musk, recently ousted as PayPal's CEO, was driving back from the Hamptons when he decided to colonize Mars. "I've always wanted to do something in space," he told his friend, "but I don't think there's anything that an individual can do."
Or was there?
By the time they reached the Midtown Tunnel, they'd decided it was possible. That night, Musk logged onto NASA’s website, expecting detailed plans for reaching Mars. He found nothing. At a Mars Society dinner soon after, Musk scribbled a $5,000 check—overpaying dramatically for admission—attracting attention from society president Robert Zubrin, who convinced him that humanity needed to become multiplanetary.
"Life cannot merely be about solving problems," Musk later insisted. "It also must be about pursuing great dreams."3
The Interface Dissolves
In research labs, the boundaries between human and machine were being deliberately blurred. The field of Intelligent User Interfaces was creating systems that didn't merely respond to commands but tried to infer intention, context, and emotional cues. Conversational agents and embodied pedagogical agents began anticipating human needs. At Mitsubishi, MIT, and USC, researchers described machines that could engage in genuine dialogue, maintain eye contact, and express synthetic empathy.
The Agile Manifesto, signed at a Utah ski resort earlier that year, formalized this dissolution: "Individuals and interactions over processes and tools." It sounded like software methodology but represented something deeper—making human systems as malleable as code. Move fast. Test constantly. Respond to user behavior. Iterate based on data.
Duality and Transformation
Thus, 2001 became a moment of profound duality: unprecedented digital optimism and existential uncertainty. Each innovation was a shiny dime, gleaming with promise. The iPod's scroll wheel, Wikipedia's edit button, PayPal's one-click payments—all so elegant, so obviously better than what came before. We grabbed them eagerly, not noticing that each transaction changed us.
We weren't just buying products; we were selling our previous relationship with reality itself. Reality had become simultaneously more malleable and more surveilled, more democratic and more controlled, more connected and more fractured.
Bostrom's simulation hypothesis suggested we might be programs running on some cosmic computer. The Big Data revolution proved we could predict human behavior without understanding it. PayPal showed that freedom and control were two sides of the same algorithmic coin. Wikipedia demonstrated that truth could emerge from chaos—or perhaps that chaos was all there ever was. And somewhere in Los Angeles, Elon Musk was reading Russian rocket manuals, planning humanity's escape route.
By year's end, we lived in a different world. Not just because of 9/11, though that was the most visible rupture. But because we'd learned that with enough data, anything could be predicted except what actually mattered. That reality could be edited by anyone but controlled by few. That escaping Earth might be easier than fixing it.
We had begun to simulate ourselves—not in Bostrom's philosophical sense, but in a practical one. Every click, purchase, and edit fed into systems that could model, predict, and modify human behavior at scale. The line between user and program, between reality and simulation, had begun to dissolve.
And perhaps that was the shiniest dime of all: the illusion that we were still in control of the dissolution.
The Contrarian: Peter Thiel and Silicon Valley's Pursuit of Power, by Max Chafkin, 2021