
The year was 1989, and walls were coming down.
In Berlin, young Germans with sledgehammers attacked concrete barriers that had divided their city for twenty-eight years, while border guards who had shot escapees just months earlier now stood by, watching history crumble at their feet. In AI laboratories across America, researchers were dismantling very different kinds of walls—the intellectual barriers that had separated artificial intelligence from statistics, probability theory, and other disciplines.
Neither revolution made headlines for its methodology. The world celebrated the visible results—freedom, reunification, the end of divisions—but paid little attention to the processes that made these breakthroughs possible. Yet, while CNN broadcast jubilant Berliners dancing atop the wall, no cameras captured the quiet shift happening in AI, where the rigid rules and symbolic reasoning that had defined the field since its inception were giving way to something more flexible, more probabilistic, more connected with the messiness of the real world.
For decades, AI researchers had sought elegance—building theories that mimicked the clean precision of mathematics and physics. But by 1989, a quiet shift was underway. As one report from the Fifth International Conference on Machine Learning observed, "One cannot help but notice the sharing of data sets between researchers. These data sets provide a common yardstick for comparing different approaches." AI was beginning to embrace uncertainty, probability, and data-driven experimentation. Theories were no longer judged on intuition alone but on their ability to learn, adapt, and perform in the messy, unpredictable real world. Few outside the field noticed, but the age of handcrafted knowledge was ending, and the machine learning revolution had begun.
At the dawn of 1989, artificial intelligence remained in winter — funding had dried up, commercial applications had disappointed, and the grand promises of thinking machines seemed further away than ever. The Pentagon's Defense Advanced Research Projects Agency (DARPA) – which The New York Times described that year as having "an impact on the nation's technology development well out of proportion to its size" — was being propelled into the role of de facto venture capitalist for America's high-technology industry, including AI. Yet even with DARPA's support, expert systems had revealed their brittleness. They could diagnose rare diseases or configure computers within carefully constrained domains, but they collapsed when confronted with ambiguity or novelty. Like the political systems of Eastern Europe, they reflected a form of rigid thinking that was increasingly showing its limitations.
But beneath the surface, something was stirring. Gatherings like the Fifth International Conference on Machine Learning and the 1988 Workshop on Cognitive Models of Speech Processing suggested new directions. Hidden Markov models were transforming speech recognition. Bayesian networks were providing frameworks for reasoning under uncertainty. Decision trees were learning from data rather than relying on hand-coded rules. These were not just new techniques but new philosophies — approaches that embraced uncertainty, learned from experience, and crossed disciplinary boundaries.

While the world witnessed profound political upheavals—Chinese students in Tiananmen Square erecting a papier-mâché "Goddess of Democracy" before tanks rolled in to crush their protest, Romanians rising against Ceausescu's brutal regime and executing him on Christmas Day, the global anti-apartheid movement gaining momentum through boycotts and protests—AI researchers were quietly rebelling against their own orthodoxies. The symbolic approach that had dominated AI since its inception — the belief that intelligence could be reduced to the manipulation of symbols according to logical rules — was giving way to methods that leveraged data, probability, and statistical inference. These shifts in AI methodology couldn't compare to the courage and sacrifice of those fighting for political freedom, but both were challenges to established systems of thought. In one arena, students faced tanks; in the other, researchers faced entrenched academic traditions. Both confrontations would reshape their worlds, though in vastly different ways.

After years of grand promises and disappointments, a new humility was emerging in AI. Instead of claiming to mimic human thought, researchers focused on building systems that worked. Speech recognition systems didn't need to understand language; they just needed to transcribe it accurately. Chess-playing programs didn't need to think like grandmasters; they just needed to win. This shift from cognitive modeling to practical application helped AI weather its winter.
Meanwhile, across the cultural landscape, the year pulsed with new voices challenging old assumptions. Spike Lee's "Do the Right Thing" forced audiences to confront racial tensions in America. Public Enemy's "Fight the Power" became an anthem of resistance. The Simpsons debuted, bringing subversive social commentary into prime-time animation. Each in its own way challenged simple categorizations, embraced complexity, and created new connective tissue between previously separate domains.
At Carnegie Mellon University, a retrofitted Army ambulance named ALVINN (Autonomous Land Vehicle In a Neural Network) was quietly making its own kind of history, navigating campus roads without human intervention. Unlike rule-based systems that would have required explicit programming for every possible driving scenario, ALVINN used neural networks to learn from experience—demonstrating how this new paradigm could tackle complex real-world tasks that had stymied traditional AI. Though operating on computing power one-tenth that of today's Apple Watch, this proto-self-driving car embodied the field's shift toward systems that could adapt rather than merely follow instructions.
The world of 1989 was in transition from binary oppositions – East and West, capitalism and communism – to more complex, networked arrangements. Similarly, AI was moving from brittle rule-based systems toward robust, data-driven models that would eventually enable the deep learning revolution decades later. Francis Fukuyama might have been declaring "the end of history" with the triumph of liberal democracy, but both global politics and artificial intelligence were entering messier, more ambiguous territories where clean ideological lines blurred and adaptation trumped rigid doctrine. Old certainties were giving way to new possibilities, neither fully formed nor easily categorized.
The most profound changes often happen not with dramatic declarations but through quiet shifts in thinking. In 1989, as walls fell and barriers dissolved, the foundations were being laid for the world we inhabit today — a world where intelligence, whether human or artificial, thrives by making connections rather than enforcing divisions.
And on March 12 of that pivotal year, Tim Berners-Lee submitted a proposal to CERN for what he modestly described as "a 'web' of notes with links" — a simple idea that would become the World Wide Web and transform how humanity shares knowledge, connects, and collaborates across every boundary imaginable.
Three From Today
First, from Azeem Azhar's Exponential View: "Introducing the Vibe Worker: AI isn't just automating tasks–it's freeing our best thinking"
Azhar suggests that AI isn't just automating structured tasks—it’s revolutionizing how we translate vague, half-formed ideas into tangible work. Enter vibe working, a new mode of thinking where AI helps refine intuition into structured output, allowing creatives, strategists, and developers to iterate faster and with more clarity. This piece explores how AI tools can transform brainstorming, research, and planning into seamless, dynamic workflows, unlocking what Azhar frames as a 10x boost in productivity.
It's worth reading Azhar's post alongside a post from neuroscientist Erik Hoel of The Intrinsic Perspective that I shared a few weeks back: "brAIn drAIn: The enhancement and atrophy of human cognition go hand in hand"
Essentially, Hoel asks: is AI making us sharper thinkers—or lazier ones? While some hail AI as a tool that helps turn intuition into structured thought, there's growing evidence that over-reliance on AI erodes critical thinking. A recent study from Microsoft suggests that knowledge workers who use AI extensively engage less with complex problem-solving, raising concerns about cognitive atrophy. If AI is the future of work, how do we ensure it enhances, rather than replaces, human judgment?
And finally, from Derek Thompson’s podcast Plain English:
Leisure reading has plummeted, literacy scores are in decline, and even elite college students are struggling to finish entire books. In a recent discussion with The Atlantic's Rose Horowitch, Thompson explores this unsettling trend, uncovering how students—accustomed to excerpts, videos, and digital distractions—are losing the ability to engage deeply with complex texts. Professors at top universities report a dramatic shift in students' reading habits, with many unable to sustain focus even on short works. Thompson and Horowitch discuss whether the rise of screens and social media is rewiring how we process information.
From the introduction to the podcast, Thompson shares the following (bold mine):
Why, with everything happening in the world, would I want to talk about reading? The business podcaster Joe Weisenthal has recently turned me on to the ideas of Walter Ong and his book Orality and Literacy. According to Ong, literacy is not just a skill. It is a specific means of structuring society’s way of thinking. In oral cultures, Ong says, knowledge is preserved through repetition, mnemonics, and stories. Writing and reading, by contrast, fix words in place. One person can write, and another person, decades later, can read precisely what was written. This word fixing also allows literate culture to develop more abstract and analytical thinking. Writers and readers are, after all, outsourcing a piece of their memory to a page. Today, we seem to be completely reengineering the logic engine of society. The decline of reading in America is not the whole of this phenomenon. But I think that it’s an important part of it.