Welcome to A Short Distance Ahead. Each week I explore a single year from AI's history, using AI itself to sift through archives, discover hidden connections, and unpack complex technologies, ultimately helping me structure narratives that would otherwise take weeks to produce. In other words, AI helps me write about AI itself, but I decide where we look, and why.
If you’re just joining us, this is essay fifty-five of seventy-five—we started this journey in 1950 and are marching towards the 75th anniversary of Alan Turing’s seminal paper, where he posed the question: “Can machines think?” You don’t need to read these essays in order or catch up from the beginning; each one stands alone. So jump in whenever and wherever you’d like. I’m glad you're here.
As someone deeply concerned about algorithmic text diluting human storytelling (here’s a recent summary of my thoughts on the topic), and someone who mostly writes without AI, I have strong feelings about its potential negative impact. So I deeply appreciate your support for this admittedly contradictory experiment. Thank you for your time and attention—still the scarcest resources in our strangely blurred world.
On March 13, 2004, fifteen vehicles lined up at the starting line in Barstow, California. The DARPA Grand Challenge offered a simple proposition: navigate 142 miles of Mojave Desert without any human intervention. By day's end, Carnegie Mellon's Sandstorm had traveled the farthest—7.4 miles—before getting caught on a rock. The machines, built to operate without us, couldn't manage even a tenth of the journey alone.

Elsewhere, other machines were doing better on their own. NASA's rovers Spirit and Opportunity had landed on Mars in January, designed for 90-day missions. Each sol (Martian day), they woke with the sunrise, photographed their surroundings, analyzed rocks, and transmitted their findings back to Earth. The communication delay—between 4 and 24 minutes—meant real-time control was impossible. The rovers had to make decisions alone, sending their discoveries into the void, hoping someone was listening.
In a Harvard dorm room, Mark Zuckerberg had recently launched TheFacebook. Its precursor was FaceMash—an algorithmic system for comparing photos of female students. "Were we let in for our looks? No. Will we be judged on them? Yes," the site had declared. By spring, the comparison engine had evolved into something different: a platform where people could present themselves as they wished to be seen, to an audience that might understand them better than anyone in their physical proximity.
Peter Thiel found this compelling enough to invest $500,000. At their first meeting, Zuckerberg barely looked up from the table while Thiel explained power and leverage. "Don't fuck it up," Thiel said at meeting's end. The entrepreneur who'd spent PayPal's early days battling fraud had learned that human behavior could be shaped through careful system design—the right incentives, the right constraints, the right feedback loops.1
While funding Facebook, Thiel was also founding Palantir Technologies with former PayPal colleagues. The parallel projects shared more than just an investor—they shared an office. While Zuckerberg and Sean Parker pitched their social network in a forty-third-floor conference room, another group of young men sat at desks just feet away. Stephen Cohen, an undergraduate who'd edited the Stanford Review; Joe Lonsdale, a former PayPal intern; and Nathan Gettings, who'd worked on PayPal's anti-fraud efforts, were testing whether the algorithms that caught "Igor" in Russia could be repurposed to catch terrorists.
Thiel named the project Palantir after Tolkien's seeing stones—objects that allowed users to observe distant events. (It was a curious choice: in the books, the stones are chiefly used by Sauron to spy and manipulate, dangerous tools for those who don't understand their power.) The company's mission was to mine the government's "near-endless trove of data," as Thiel put it: financial records, cell phone data, network analysis. "There's all this information about people and we want to know it," was how one person summarized Thiel's pitch.2
But by 2004, potential investors were skeptical. PayPal's fraud detection was straightforward: you were either a fraudster or a legitimate user. Intelligence work was messier: analysts wrote subjective reports, disagreed about threats, stored data in incompatible systems. While Gettings built prototypes and pitched to investors who said it would never work, Thiel drove to the University Club in Palo Alto to recruit Alex Karp, his old Stanford Law classmate. Karp was an outsider in Thiel's conservative circle—a Haverford graduate who'd fled to study philosophy in Germany, returning as an unlikely fundraiser with wild hair and theatrical charm. When Thiel asked him to run Palantir, Karp accepted on the spot.
That these projects developed side by side—social networking and surveillance, connection and observation—captured something essential about 2004's technological moment. The proximity wasn't coincidental. In the same Silicon Valley office, one team was building tools to share college photos while another built tools to surveil entire populations. One created social graphs; the other, threat matrices. Both promised to see people more clearly than they could see themselves. The early PayPal algorithms could be repurposed to find other patterns, other threats. The seeing stones would be real, built from SQL queries and network analysis, watching from a safe distance.
At IBM's T.J. Watson Research Center, a team began work on a question-answering system that would eventually be called Watson. The challenge was to create a machine that could parse the ambiguity of natural language, understanding not just words but context, wordplay, and implication. A machine that could finally understand what humans meant, not just what they said.
The AI research community was at its own crossroads, a field grappling with practical deployments and philosophical questions. Automated essay evaluation systems were using natural language processing to grade student writing. Semantic web services promised "serendipitous interoperability." Researchers debated emotional AI and long-term human-machine interaction. The Turing Test, fifty-four years after its proposal, still haunted the field—researchers proposed alternatives like the Lovelace Test, trying to capture what it really meant for a machine to think. Meanwhile, defense applications dominated funding: AI for detecting terrorist activities, synthetic adversaries for urban combat training, robotic systems for first responders.
While researchers debated what thinking meant, millions of people were already reshaping their own thinking to fit new systems. Across the technology landscape, a particular pattern was emerging. MySpace users spent hours alone in their rooms, crafting profiles, selecting their "Top 8" friends, choosing the perfect song to auto-play. LinkedIn professionalized the same impulse. Flickr, co-founded by Caterina Fake—who wasn't allowed to watch TV as a child and "spent all her time writing poems and listening to classical music"—let people share not just photos but perspectives, gazes, ways of seeing.
Even our entertainment reflected this shift. Lost premiered in September, stranding 48 survivors on a mysterious island, forced to forge new connections after their old world disappeared. The show's elaborate mythology demanded participation—viewers creating wikis, mapping connections, theorizing about the island's nature. Watching became a collective project, strangers united by shared puzzles.
After their meeting in the French Alps in 1985, three researchers—Geoffrey Hinton, Yann LeCun, and Yoshua Bengio—“embarked on a ‘conspiracy’—in LeCun’s words—to boost the popularity of the networks, complete with a rebranding campaign offering more alluring concepts of the technology such as ‘deep learning’ and ‘deep belief nets.’”3 They'd been working separately on biologically inspired algorithms, mostly ignored by mainstream AI. (Decades later, LeCun would work for Zuckerberg, teaching machines to recognize those same faces at scale—the faces that once were compared would now be tagged, tracked, understood by algorithms that finally learned to see.)

The infrastructure enabling all this had reached critical mass. Broadband penetration crossed key thresholds. Digital cameras became cheap enough for everyday use. Storage costs plummeted while processing power accelerated. But infrastructure alone doesn't explain what happened.
Consider Thiel's celebration after PayPal's IPO—renting the Dallas Mavericks' plane to fly eighty employees to Maui. The Iraq War was intensifying, the beheadings of Nicholas Berg and others dominating headlines, yet in Silicon Valley money flowed like water. "Get massages, go surfing, drink as much as you wanted," one attendee recalled. At one point, Joe Lonsdale offered $10,000 to anyone who could beat him at arm wrestling. Another partner wrote a $10,000 check for a particularly good round of karaoke. Money thrown around "like candy," creating its own peculiar gravity—connection through excess, intimacy through competition, a bubble of privilege while the wider world burned.4

Sean Parker understood this dynamic. He had co-founded Napster at nineteen, turning "every high school and college student into an intellectual property thief." His next venture, Plaxo, functioned as "an enormous spam machine"—once you gave it your contacts, it would email your friends relentlessly until they signed up too.5 After being pushed out of Plaxo for what some called his "huge drug issues," Parker bonded with Thiel over the experience of "tangling with Moritz"—Mike Moritz, the legendary Sequoia Capital investor who had fired Thiel from PayPal's board and whom Parker blamed for that ouster. Connection as contagion.

These weren't just parties or products; they were experiments in human dynamics. What happens when traditional constraints—money, distance, propriety—suddenly disappear? The same question animated many of 2004's innovations. What happens when anyone can publish (blogging)? When anyone can broadcast (podcasting)? When anyone can surveil (Palantir)? When anyone can connect (Facebook)?
The December 26 tsunami provided one answer. The disaster killed over 230,000 people, but its documentation was unprecedented: tourists' digital cameras and early camera phones captured the wave's approach, and videos spread across the nascent social web faster than traditional news could report. Technology didn't prevent catastrophe, but it fundamentally changed how we witnessed and responded to it, together yet apart, united by screens.
The autonomous vehicles in the Mojave would try again the next year, and some would complete the course. The Mars rovers would far outlive their missions; Opportunity wouldn't stop transmitting until 2018, nearly fifteen years after landing. Facebook would drop "The" from its name and open to the world. But in 2004, everything was still becoming. The systems we now inhabit—always on, always watching, always connecting—were just learning to function.
What made 2004 distinctive wasn't any single breakthrough but the strange ecology taking shape: machines trying to navigate without us, humans building elaborate systems to navigate around each other, and the first glimpses of what would happen when those projects converged. In our own ways, maybe we were all traveling through the desert, looking for connection.
Three from Today
This past Friday, a former Palantir employee wrote about how he views the current threats of AI exploitation, along with some of Palantir's current work and influence.
In a recent New Yorker review, Gideon Lewis-Kraus takes on Alex Karp's book "The Technological Republic," which argues that America's survival depends on Silicon Valley reconnecting with the military-industrial complex. Karp laments that tech has abandoned national defense for "stupid farm games" and consumer products, calling for a return to the civic-minded innovation of the Cold War era, when federal support gave engineers meaningful challenges and national purpose.
Lewis-Kraus notes that the book reads like "an automated Spotify playlist of the greatest hits of national decline," recycling conservative grievances while claiming to be liberal. He is skeptical of Karp's circular logic (national security creates national pride, but national pride is needed for national security) and of Karp's romanticized view of the military-industrial complex, pointing out that it lost public trust through "pointless and destructive wars," not cultural decline. Lewis-Kraus also highlights the irony of Karp calling for civic purpose while running a surveillance company that many civil libertarians compare to "Minority Report."
The Palantir Guide to Saving America’s Soul
Also in The New Yorker, Hua Hsu, a professor of literature at Bard College, explores how college students now use AI for virtually all their academic work—from writing papers to texting to therapy. He finds a generation that views ChatGPT not as cheating but as just another productivity tool, while professors struggle to adapt their teaching methods and grapple with what education means when the process of learning can be bypassed entirely.
What Happens After A.I. Destroys College Writing?
Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, by John Markoff (p.150)