Welcome to A Short Distance Ahead, a weekly exploration of artificial intelligence’s past to better understand its present, and shape its future. Each essay focuses on a single year, tracing the technological, political, and cultural shifts that helped lay the foundation for today’s AI landscape.
If you're a new reader, welcome. For context, I'm exploring the creative process of narrative design and storytelling by drafting these essays in tandem with generative AI tools, despite deep concerns about how algorithmic text might dilute the human spirit of storytelling. By engaging directly with these tools, I hope to better understand their influence on creativity, consciousness, and connection. I also write elsewhere without AI assistance and I’m at work on my first collection of short stories. If you find value in this work, please consider sharing it or becoming a subscriber to support the series.
The year was 1997, and in a darkened room at the Equitable Center in Manhattan, a man stared at a chessboard, about to do something no world champion had ever done: lose a match to a machine.
Garry Kasparov, who had dominated chess since 1985, stood up after just 19 moves and extended his hand in resignation. "I lost my fighting spirit," he said simply, though the tremor in his voice suggested the loss was more than just a game.
Deep Blue, IBM's chess-playing computer, had just become the first machine to defeat a reigning world champion in a match played under standard tournament time controls. The final score: 3½ to 2½. The margin was slender, but the implications were vast. For the first time in human history, a machine had publicly bested our species at something we considered quintessentially human: the art of strategic thinking.
"I'm a human being," Kasparov said afterward, his eyes dark with something between defeat and fear. "When I see something that is well beyond my understanding, I'm afraid."1
But Kasparov's defeat was only the most visible manifestation of a deeper question that was haunting 1997: Who should be in control—humans or machines?
Two months before Deep Blue's victory, two computer scientists had debated precisely this question at CHI, the ACM's human-computer interaction conference, in Atlanta. Ben Shneiderman of the University of Maryland, a champion of "direct manipulation" interfaces where users clicked, dragged, and stayed firmly in control, faced off against Pattie Maes from MIT's Media Lab, an evangelist for "intelligent agents" that could act autonomously on our behalf.
Before their debate began, Shneiderman had tried to defuse the tension by handing Maes, who had recently become a mother, a teddy bear. But the gesture couldn't soften the fundamental disagreement between them.
"I believe the language of 'intelligent autonomous agents' undermines human responsibility," Shneiderman argued. He pointed to countless examples where people blamed "the computer" for failures that were really human errors. Give machines too much autonomy, he warned, and we risk losing accountability altogether.
Maes pushed back pragmatically. "I believe that users sometimes want to be couch-potatoes," she said, "and wait for an agent to suggest a movie for them to look at, rather than using 4,000 sliders, or however many it is, to come up with a movie that they may want to see."
The debate that day wasn't merely philosophical. It reflected a profound shift underway in the very architecture of artificial intelligence. In computer science labs around the world, 1997 found AI pulling away from the rigid rule-based systems that had dominated the field since its birth. Instead of programming computers with explicit instructions about how to behave, researchers were increasingly teaching machines to learn from experience.
Sepp Hochreiter and Jürgen Schmidhuber had just published their breakthrough paper on Long Short-Term Memory networks, an architecture that let neural networks hold onto information across long stretches of time, where earlier recurrent networks lost the thread as their training signal faded. It was learning without understanding, yet it worked with startling effectiveness.
On the nascent World Wide Web, a new generation of AI systems was escaping the laboratory for the first time. Search engines were starting to rank pages not through hand-coded rules alone but by mining the structure of the web and the behavior of the people using it. Recommendation systems were beginning to watch what people bought, what they read, and what they lingered over, and to use that data to predict what they might want next. These weren't the carefully controlled AI experiments of previous decades. They were systems that grew smarter through contact with real users, learning from millions of interactions every day.
In natural language processing, the old dream of encoding grammar rules by hand was giving way to statistical approaches that learned language by observing it in action. Instead of teaching computers the rules of English, researchers fed them vast corpora of text and let them discover patterns on their own. The approach seemed crude compared to the elegant symbolic systems of the AI pioneers, but it had one crucial advantage: it worked in the messy, unpredictable real world.
This shift from rules to learning reflected exactly the tension Maes and Shneiderman were debating. Hand-coded systems kept humans firmly in control—every behavior was explicitly programmed, every decision traceable back to its human author. Learning systems, by contrast, developed their own internal logic, their own way of making sense of the world. They could adapt and improve, but often in ways their creators couldn't fully explain or predict.
The debate was polite but pointed. Shneiderman's vision championed human agency—powerful tools that amplified human capabilities but always kept humans in the driver's seat. Maes's vision embraced delegation—machines that could learn our preferences and act on our behalf, freeing us from the tyranny of ever-more-complex interfaces.
There was no clear winner that day, just as there had been no clear winner in 1997's other contests between human control and machine autonomy.
On Mars, NASA's Pathfinder mission had landed successfully, deploying the Sojourner rover. The little machine crawled across the Martian surface, avoiding hazards on its own but following routes planned each day by human controllers millions of miles away, too far for anything like real-time control. It was a perfect metaphor for the year: machines growing more capable, with humans still pulling the strings, when they could.
In Asia, financial markets were proving that some systems could slip beyond human control entirely. The Thai baht collapsed in July, triggering a crisis that spread across the region like a digital wildfire. Currency traders, aided by computerized trading systems, amplified panic into catastrophe. The invisible hand of the market had become a closed fist.
Yet even as some systems spiraled beyond control, others were being wrestled back under human governance. On July 1st, Hong Kong returned to Chinese sovereignty after 156 years of British rule. The handover was meticulously choreographed, every detail planned, a massive exercise in managed political transition. "One country, two systems," they called it—a delicate balance between autonomy and control.
In Kyoto that December, the world's nations gathered to negotiate the first international treaty to set binding targets for reducing greenhouse gas emissions. The Kyoto Protocol represented humanity's attempt to regain some measure of control over the planetary systems we had accidentally destabilized. It was a bet that collective human action could still steer Earth's climate, though the results would remain uncertain for years to come.
The year had begun with a question about chess and ended with a treaty about the air we breathe.
But between those moments lay the same fundamental tension that had animated the Shneiderman-Maes debate: How much should we delegate to systems beyond our direct control? When should we trust machines to act on our behalf? And what happens when we lose the ability to intervene?
Deep Blue had shown superhuman strategic thinking, but it was still just a chess program, a narrow intelligence that couldn't tie its own shoes. The broader question—whether to embrace Maes's vision of collaborative agents or Shneiderman's insistence on human control—remained unresolved.
Perhaps Kasparov said it best in his bitter press conference after his defeat. "I played a friendly match," he explained. "I was sure I would win because I was sure the computer would make certain kinds of mistakes, and I was correct in Game 1. But after that the computer stopped making those mistakes."
The computer had adapted. Between games, IBM's team had tuned it until the mistakes Kasparov was counting on disappeared, and from across the board that looked like learning. In that adaptation lay both the promise and the peril of the year 1997: the year machines began to show they could improve, whether we wanted them to or not.
1. "Swift and Slashing, Computer Topples Kasparov," The New York Times, May 12, 1997.