The Year Was 2002
And humanity crossed the analog threshold
Welcome to A Short Distance Ahead. Each week I excavate a single year from AI's history while simultaneously drafting these essays with the very tools I'm examining, a strange recursive loop that feels both necessary and unsettling.
I know these subscription appeals are everywhere now, each newsletter another voice in an overwhelming chorus asking for support. But if you find value here—whether through subscribing, sharing, or supporting financially—I'd be grateful. For the cost of one monthly cup of fancy coffee, you can help keep this exploration going. Paid subscribers will also get access to additional notes on my process and findings (coming soon!).
As someone deeply concerned about algorithmic text diluting human storytelling (here’s a recent summary of my thoughts on the topic), and someone who mostly writes without AI, I have strong feelings about the potential negative impact, so I’m grateful for every reader who values this admittedly contradictory experiment. Thank you for your time and attention—still the scarcest resources in our accelerating world.
The year was 2002, and on one unremarkable day somewhere in the first half of the year, without ceremony or notice, humanity crossed an invisible threshold.
For the first time in history, more than half of all information stored on the planet existed in digital form.1 Analog had reigned since humans first painted on cave walls. Digital dominance took just two decades. Martin Hilbert and Priscila López would later document this tipping point in Science, but at the time, no one noticed. We were too busy filling our hard drives with MP3s and digital photos, replacing our VHS tapes with DVDs, typing emails instead of writing letters.
The shift seemed mundane, inevitable. But we had fundamentally altered our relationship with memory itself. Analog storage—those tapes and papers and photographs—degrades gracefully, fading like human recollection. Digital storage either disappears without a trace or stays flawless forever. There's no subtle fading, no graceful forgetting, no in-between.
The Digital Natives
In Los Angeles that May, Elon Musk was incorporating Space Exploration Technologies. Just months earlier, he'd flown to Moscow with $20 million in cash, hoping to buy surplus ICBMs for a Mars mission.2 The Russians had literally spat on him. They'd quoted him $18 million for two rockets, then, when he pushed back, doubled the price to $18 million apiece.

On the flight home, Musk pulled out his laptop and began building spreadsheets. He calculated the raw materials cost of rocket components—aluminum, titanium, fuel. The markup was astronomical. "I think we can build this rocket ourselves," he told his companions, showing them the numbers. Where the Russians saw immutable hardware, Musk saw variables in an equation.
This was first-principles thinking in action, but more importantly, it was digital thinking. Reality wasn't fixed; it was code to be optimized. If you could model it in spreadsheets, you could rebuild it in aluminum.
Meanwhile in Palo Alto, Peter Thiel was taking PayPal public. The February IPO valued the company at nearly $1 billion—remarkable for a firm that had almost died from fraud just two years earlier.3 To celebrate, Thiel hosted simultaneous chess exhibitions in the company parking lot, playing multiple games at once.

By summer, he was throwing parties in the Santa Cruz mountains where senior staff donned fat suits and sumo-wrestled while employees cheered.
The spectacle marked Thiel's own transformation. The libertarian philosopher who'd once handed out copies of Ayn Rand was becoming something new: a power broker who understood that Silicon Valley's algorithms could reshape society more effectively than Washington's laws. While Wall Street reeled from Enron's collapse, PayPal's IPO signaled that financial power was migrating west. The dot-com crash hadn't killed the digital revolution—it had clarified who would control it. These weren't just payment systems; they were behavior modification engines. Every transaction taught their algorithms something new about desire, trust, and impulse.4
In a St. Louis high school, a seventeen-year-old named Sam Altman stood at a podium, gripping a speech he'd rewritten three times that morning. The school had invited a gay rights speaker for an assembly. Some students organized a protest. Altman, the only openly gay student he knew of, had signed up to address the school.
He'd spent the night awake, terrified. In the hallway before assembly, he kept crossing out the paragraph where he revealed his own story, then writing it back in. At the podium, under bright lights that mercifully obscured individual faces, he made his choice. "Either you have a tolerant community or you don't," he concluded. "You don't get to pick and choose which topics you're tolerant on."
The standing ovation surprised him. Students approached him throughout the day—some tearfully grateful, others empowered to finally voice their own truths. He'd learned something that Silicon Valley would later codify: if you have the will to bend the world toward your vision, if you're right just once when it matters, everything changes.5
The Agents Among Us
While Musk calculated rocket costs and Thiel celebrated IPOs, AI researchers were quietly building a different kind of digital native: autonomous agents that could think, adapt, and act without human supervision. At the AAAI-2002 conference, researchers proclaimed agents as the key to AI's original promise. After decades of trying to replicate human consciousness and failing, they'd shifted focus: build software agents that could perceive, decide, and act on their own, but only within limited domains like online shopping or calendar management. Less ambitious, more achievable, ultimately more useful.
At USC, researchers developed “Electric Elves,” AI agents that infiltrated corporate life by scheduling meetings, tracking locations, and managing the mundane logistics humans despised. IBM's Natural Language Assistant project promised to make e-commerce sites conversational, blending statistical learning with old-school rules to create something eerily competent. These weren't the grand AIs of science fiction; they were digital servants, each optimized for specific tasks.
The practical turn extended to elder care, where AI systems combined passive monitoring with proactive support. Privacy concerns emerged immediately. Who watched the watchers when the watchers were algorithms? But the efficiency was undeniable.
The Machines Come Home
That same year, iRobot released the Roomba. For $199, Americans could buy a robot that would vacuum their floors while they slept. It wasn't sophisticated—just sensors, algorithms, and rotating brushes. But it was the first autonomous robot to enter millions of homes.
The Roomba succeeded because it solved a real problem no one wanted to solve themselves. While we watched reality TV—The Osbournes premiered that March, turning family dysfunction into entertainment—our robots cleaned up after us. We were quietly outsourcing physical chores while loudly broadcasting our own dramas—transforming private reality into public spectacle.
American Idol launched too, promising to democratize fame through audience votes. Call this number for your favorite. (Text codes would come later.) Every vote was instant feedback, teaching producers exactly what the masses wanted. Human preferences were becoming real-time data streams—coded, measured, and optimized.
In New York, scientists led by Eckard Wimmer at SUNY Stony Brook synthesized a functioning poliovirus from scratch. They downloaded the genetic sequence, ordered custom DNA fragments by mail, and assembled them like Lego blocks. The virus they created was indistinguishable from the natural version. Life had become downloadable, an executable file printed out in strands of DNA.
The implications were staggering. If you could synthesize polio, what else could you build? But the deeper revolution was conceptual. Biology wasn't sacred or mysterious—it was information to be copied, edited, and executed.
Stephen Wolfram published A New Kind of Science, arguing that reality itself might be computational. Simple rules, iterated endlessly, could generate all of nature's complexity. Douglas Hofstadter publicly wrestled with existential unease, haunted by AI that composed music indistinguishable from human creativity. Would we even know the difference?
The Willful Ones
By late 2002, the pattern was clear. A new generation had learned that everything analog could be digitized, and everything digitized could be optimized. They weren't asking permission.
Musk's SpaceX promised to reach orbit by September 2003 and Mars by 2010—impossible timelines that transformed "completely insane" into "merely very late." His spreadsheets showed that rockets had an "idiot index" of 50:1—they cost fifty times their raw materials. That gap was pure inefficiency, and inefficiency was just another bug to fix.6
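The arithmetic behind that ratio is simple enough to sketch. Here is a minimal illustration in Python; the valve prices below are placeholder numbers of my own invention, not actual SpaceX figures:

```python
# The "idiot index": a component's finished cost divided by the cost of its
# raw materials. All numbers here are hypothetical placeholders for
# illustration, not real SpaceX data.

def idiot_index(finished_cost: float, materials_cost: float) -> float:
    """Ratio of what a part costs to buy versus what its materials cost."""
    return finished_cost / materials_cost

# Hypothetical example: a rocket valve priced at $250,000 whose machined
# metal costs roughly $5,000.
print(idiot_index(250_000, 5_000))  # 50.0, the 50:1 gap described above
```

An index near 1 means a part costs little more than its materials; 50 means almost all of the price is process, overhead, and markup, which is exactly the inefficiency the spreadsheets exposed.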
Thiel used PayPal's IPO wealth to launch a hedge fund, betting he could program financial markets the way PayPal had programmed human behavior. He'd furnished his new San Francisco mansion with help from an assistant who chose everything from his furniture to his Ferrari's color. The personal had become another system to optimize.7
In internet forums, Eliezer Yudkowsky was warning about AI safety while Nick Bostrom refined his simulation hypothesis. If we lived in a simulation, did it matter? Perhaps the real question was whether we were writing the code or just executing it.
The answer came from those like Altman, who'd learned young that with enough will, you could rewrite reality's source code. Stand at the podium. Say the thing that needs saying. The world updates its parameters.
The Revolution Nobody Noticed
We'd crossed the digital threshold so smoothly that no one thought to mourn what we'd left behind. Analog memories fade like old paintings; digital memories vanish the moment a server bill goes unpaid. Analog errors soften into charming imperfections; digital errors trigger complete collapse.
But the willful ones weren't looking back. They'd discovered that in a digital world, everything was just information to be transformed. Rockets were overpriced aluminum. Biology was sequences to be synthesized. Human behavior was patterns to be programmed. Even truth itself, as Wikipedia's rapid growth in 2002 showed, could emerge from carefully structured chaos.
The year 2002 wasn't when the digital age began. It was when being digital became mandatory. When the willful ones, armed with laptops and conviction, started rebuilding the world in code. When AI agents began colonizing the spaces between human intention and action. When we stopped adapting to technology and started adapting technology to our will.
Somewhere, a Roomba bumped into a wall, recalculated, and kept cleaning. Somewhere else, an Electric Elf rescheduled a meeting no human remembered making. We barely noticed. We were too busy becoming digital ourselves8, one uploaded photo, one posted thought, one optimized behavior at a time.
The transformation was complete. Now came the hard part: living with what we'd become. We were data now: editable, vulnerable, endlessly replicable.
Exabytes: Documenting the 'digital age' and huge growth in computing capacity, originally published in The Washington Post, 2011, via martinhilbert.net
X-Man: The Elon Musk Origin Story, Episode 3: Planet B, podcast series from Pushkin and Jill Lepore
The Contrarian: Peter Thiel and Silicon Valley's Pursuit of Power, by Max Chafkin, 2021
First Contact with Laurie Segall, Episode 11: Sam Altman: Growing Up Silicon Valley, Dot Dot Dot Media and iHeart Radio
The Contrarian: Peter Thiel and Silicon Valley's Pursuit of Power, by Max Chafkin, 2021


