The year was 1998, and children across America were whispering secrets to small furry robots with electronic hearts. The Furby—an owl-hamster hybrid with motorized eyelids and an infrared port between its eyes—listened and appeared to learn. Parents swore the toys were spying; the NSA would soon ban them from its offices; and children insisted their Furbys really understood them. The creatures started life speaking only "Furbish" but gradually incorporated English phrases as they "aged," creating the illusion they were learning from their human companions.
They weren't, of course. The Furby's apparent learning was pre-programmed—a carefully scripted performance of intelligence rather than the real thing. But it didn't matter. What mattered was that millions of children believed their Furbys were listening, understanding, remembering. A bond formed between silicon and carbon, between the programmed and the living. And in that space of willing belief, something new was born.
Meanwhile, in a Menlo Park garage, two Stanford PhD students were founding a company that would transform how humanity accessed information. Google wasn't the first search engine, but it was the first to harness the collective intelligence of the web itself. Larry Page and Sergey Brin realized they could extract meaning from the hyperlink structure of the internet—treating each link as a vote, a signal of relevance and authority.
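(For readers who like to see the machinery, here is a minimal sketch of that link-as-vote idea. The three-page web, the damping factor of 0.85, and the fixed iteration count are illustrative assumptions for the sketch, not Google's actual data or production implementation.)

```python
# A toy illustration of the "each link is a vote" idea behind PageRank.
# The three-page web, the damping factor (0.85), and the fixed iteration
# count are illustrative assumptions, not Google's actual data or method.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}            # start with equal standing
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:                  # each link passes along an equal share
                    new_rank[target] += share
            else:
                for other in pages:                      # a dead-end page spreads its vote evenly
                    new_rank[other] += damping * rank[page] / n
        rank = new_rank
    return rank

# Page "c" is linked to by both "a" and "b", so it collects the most votes.
toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(toy_web))
```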
The genius of Google wasn’t that it knew anything itself, but that it knew how to interpret what everyone else knew. As computer scientist Terry Winograd mentored Larry Page through his dissertation research, steering him away from autonomous vehicles and toward the architecture of knowledge aggregation, he embodied a broader intellectual pivot: from machines that sought to replace human cognition to ones that could amplify it. After years of developing AI systems, Winograd had come to believe that the most profound advances would come not from mimicking intelligence, but from scaffolding and scaling it.1
That same year, another kind of scaffolding quietly entered the world: the MNIST database, a collection of handwritten digits that would become the Rosetta Stone of machine learning. Created by Yann LeCun and colleagues, MNIST wasn’t flashy; it was standardized, constrained, monochrome. But it allowed a generation of researchers to train, test, and compare neural networks in a consistent way. It gave machines a shared childhood. Before intelligence could be distributed, it had to be measured.
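(A minimal sketch of the shared train-and-test loop MNIST made possible appears below. It uses the modern Keras API purely for illustration; the layer sizes, optimizer, and epoch count are arbitrary choices of mine, not a reconstruction of 1998-era practice.)

```python
# A minimal sketch of the kind of standardized train/test loop MNIST made
# possible. The Keras API, layer sizes, and epoch count are modern,
# illustrative choices, not a reconstruction of 1998-era practice.

from tensorflow import keras

# 60,000 training and 10,000 held-out test images of handwritten digits (28x28, grayscale).
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

# A small fully connected classifier: flatten the image, one hidden layer, ten digit classes.
model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)

# The shared, fixed test set is what lets different models be compared on equal footing.
print(model.evaluate(x_test, y_test))
```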
The scaffolding of intelligence—first in datasets, then in networks—gave way to its distribution. What Google understood—and what Furby, in its chirpy, animatronic way, also hinted at—was that intelligence needn’t reside in a single mind or machine. It could be outsourced, networked, diffused. That same year, Hans Moravec published Robot: Mere Machine to Transcendent Mind, declaring that machines would match human intelligence by 2040. "In a post-biological world," he wrote, "there is no need for people in the conventional sense." The center of intelligence was no longer a brain—but a web.
AI researchers, too, were redefining intelligence. In AI Growing Up, James Allen declared 1998 a transitional year—the Kitty Hawk moment for artificial intelligence. For the first time, intelligent systems like TRAINS-95 and TRIPS were not just demos but real tools that worked with human input in messy domains. Randall Davis, in What Are Intelligence?, reminded the field that intelligence had always been plural, ambiguous, and contingent, more a legacy patchwork than a clean design. The mind, he wrote, was “a 400,000-year-old legacy application… and you expected to find structured programming?”
Software agents began to proliferate—modular, autonomous, socially interactive entities capable of operating without human oversight. As The Many Faces of Agents observed, these systems weren’t just software; they were envoys of cognition, defined by adaptivity, autonomy, and “situatedness.”
This outsourcing of intelligence was transforming not just technology but global markets. That August, Russia defaulted on its debt, triggering a financial crisis that would bring down Long-Term Capital Management—a hedge fund run by Nobel Prize-winning economists whose quantitative models had failed to predict the unpredictable. Even as Wall Street was outsourcing risk assessment to algorithms, the limitations of such delegation were becoming painfully apparent.
And yet the fever persisted. By the end of 1998, everyday Americans were flooding into the stock market, driven by media hype and dot-com delirium. Nearly 50,000 companies had formed since 1996 to commercialize the Internet, backed by more than $250 billion in venture capital. CNBC addicts and casual investors alike piled in. As one skeptical analyst put it, “It defies my imagination that so many people with so little sophistication are speculating on these stocks.” That analyst was Bernie Madoff.2
As Titanic swept the Oscars and Seinfeld aired its final episode, as nuclear tests in India and Pakistan raised global tensions, and as the Good Friday Agreement brought hope to Northern Ireland, the human and the artificial were entering a new relationship—one where we increasingly entrusted machines not just with our tasks but with our knowledge, our memories, our sense of direction, and eventually, our companionship.
The iMac's candy-colored shell made computing friendly and approachable, hiding the complexity inside a smooth, translucent surface. This was the new face of technology: accessible, personal, seemingly alive. The Furby chattered, the iMac smiled, Google answered—and we began to forget what it was like to navigate the world without them.
What no one could have predicted in 1998 was how completely we would come to depend on this outsourced intelligence—how thoroughly we would delegate the work of remembering, of finding our way, of knowing things, to systems that only simulated understanding. The distance between the Furby's scripted responses and Google's algorithmic ranking was vast, but both represented early steps toward a world where intelligence existed between humans and machines rather than solely within either.
As 1999 approached, with the Y2K bug looming on the horizon, we stood at the threshold of a new era, one where our relationship with machines would become increasingly intimate, increasingly essential, and increasingly difficult to disentangle from our sense of self.
Thanks for reading A Short Distance Ahead, a weekly exploration of artificial intelligence’s past to better understand its present, and shape its future. Each essay focuses on a single year, tracing the technological, political, and cultural shifts that helped lay the foundation for today’s AI landscape.
If you're a new reader, welcome. For context, I'm exploring the creative process of narrative design and storytelling by drafting these essays in tandem with generative AI tools, despite deep concerns about how algorithmic text might dilute the human spirit of storytelling. By engaging directly with these tools, I hope to better understand their influence on creativity, consciousness, and connection. I also write elsewhere without AI assistance and I’m at work on my first collection of short stories. If you find value in this work, please consider sharing it or becoming a subscriber to support the series.
As always, sincere thanks for your time and attention.
1. Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, John Markoff (p. 180, e-book)
2. How the Internet Happened: From Netscape to the iPhone, Brian McCullough (p. 141, e-book)