Thirty years before the year was 1983, Johnny von Neumann sat before MANIAC,1 his computer at Los Alamos, watching it simulate five different kinds of time. Nuclear explosions that ended in microseconds. Shock waves that lasted minutes. Weather patterns that spanned years. Evolution that required millions of years. And stellar deaths that consumed billions. All this in five kilobytes of memory — enough space to hold half a second of today's digital music.
Von Neumann understood shock waves better than anyone — those violent discontinuities that occur when something moves faster than the medium around it can adapt. He saw them in nuclear detonations, in the collision of stellar gases, in the way digital computing itself was about to collide with human society. "The ever accelerating progress of technology," he wrote to Stanislaw Ulam, "gives the appearance of approaching some essential singularity in the history of the race." Even shock waves, he knew, could be produced by the collision between two universes, or the explosion of a new one into being.2
Thirty years later, the year was 1983, and in a Carnegie Mellon conference room, where the digital universe was indeed colliding with our own faster than we could adjust, the mathematics professor turned sci-fi writer, Vernor Vinge, was giving von Neumann's intuition a mathematical name. He called it the technological singularity — a future point where artificial intelligence would exceed human intelligence, creating change as incomprehensible as the physics inside a black hole. The discontinuity von Neumann had glimpsed was acquiring theoretical form.
Meanwhile, in his cluttered Stanford office, John McCarthy was writing a very different kind of prophecy. His winter message in AI Magazine read like a warning from the future's more practical guardian: "We won't reach human level intelligence by building larger and larger production systems involving more and more facts all on the same level." McCarthy, who had coined the term "artificial intelligence" back in 1956, was watching his field drift from fundamental questions toward quick commercial applications.
The contrast couldn't have been starker. While Vinge contemplated an AI explosion that would transform the very nature of consciousness, McCarthy worried that researchers were too focused on building expert systems that could diagnose diseases or configure computers — useful tools, but hardly steps toward true machine intelligence. Like von Neumann's shock waves, the gap between vision and reality was creating its own kind of discontinuity.
That summer, this tension between revolutionary and evolutionary visions of AI played out in movie theaters across America. In "WarGames," a military supercomputer named WOPR learned about the futility of nuclear war through endless games of tic-tac-toe. The film captured both perspectives: the terror of machines growing beyond human control (Vinge's concern) and the practical reality that even sophisticated AI systems were still, at heart, pattern-matching machines (McCarthy's point).
The real world was writing its own script. The adoption of TCP/IP as the standard protocol for ARPANET marked the birth of the modern internet. The network that would eventually connect billions was taking its first standardized breaths.
In music, Herbie Hancock's "Rockit" brought the sound of digital sampling and scratching into suburban homes. Its groundbreaking video, featuring robotic instruments playing themselves, captured both the promise and unease of automation. As synthesizers replaced traditional instruments and drum machines kept perfect time, musicians grappled with questions that paralleled those in AI: Could machines be creative? Could they groove? Could they feel? And if they could, what would that mean for human uniqueness?
In November of that year, Ronald Reagan reluctantly signed into law a bill creating Martin Luther King Jr. Day - after years of resistance and a petition containing over six million signatures. But like McCarthy's vision of methodical progress, the actual implementation would reveal how slowly human systems could adapt. Some states would rename it, others would combine it with Confederate holidays. Arizona wouldn't recognize it until a referendum was passed in 1992, after the state lost an estimated $500 million in revenue when the NFL moved the 1993 Super Bowl game to California in protest. South Carolina would become the last state to make it a paid holiday in 2000. Even simple changes, it seemed, could propagate through society more slowly than light through fiber optic cables.
At Stanford, Nils Nilsson offered what seemed like a middle path.3 His vision of "computer individuals" — systems with continuous existence that could learn from experience and engage in natural dialogue — was ambitious but not apocalyptic. These machines wouldn't transcend human comprehension; they would augment it, growing alongside us rather than beyond us. It was an attempt to bridge the discontinuity between present and future, between McCarthy's methodical progress and Vinge's exponential leap.
But even as these competing visions of AI's future took shape, the present was already moving in unexpected directions. McCarthy noted that students were "eager to get in on big, build-it-now projects in expert systems and natural language interfaces" rather than tackling fundamental questions about machine intelligence. The revolution, it seemed, would be commercialized.
Pioneer 10, carrying humanity's greetings to the outer reaches of our solar system on its golden-plated greeting card, passed Uranus's orbit that year — a testament to humanity’s desire to communicate with minds unlike our own. Like AI itself, it was venturing into unknown territory, carrying messages it might not understand to intelligences it couldn't comprehend. The future was approaching at different speeds: McCarthy's careful steps, Nilsson's steady march, Vinge's exponential rush toward unknowability.
In this atmosphere of technological ferment and existential questioning, video games were teaching a generation to think in new ways. Even as the industry crashed (the great video game crash of 1983), games like "Dragon's Lair" were pushing the boundaries of human-machine interaction. Its laserdisc animation suggested a future where the line between passive entertainment and active participation would blur.
But perhaps the most profound shift was happening in how we imagined our relationship with machines. If 1982 had given us E.T. — a benign alien intelligence seeking connection — 1983 was teaching us that our own creations might be the most alien intelligences we'd ever encounter. Vinge's singularity wasn't just a mathematical concept; it was a mirror reflecting our deepest hopes and fears about technology.
By year's end, Microsoft was preparing to announce Windows and Apple readied the Macintosh. The interfaces between human and machine were becoming more sophisticated, more intuitive. But Vinge’s warning lingered and the fundamental questions remained: Would we create true thinking machines? And if we did, would we be able to understand what we had created?
The answers lay somewhere between McCarthy's concerns and Vinge's visions, in that space where practical reality meets theoretical possibility. Like the physics inside Vinge's black hole, they remained beyond our event horizon — visible only in their effects on the present, their gravitational pull on our imagination of what machines might become. Von Neumann's shock wave was still propagating through time, its discontinuity now measured not in microseconds or millennia, but in the growing gap between what we could build and what we could comprehend.
Three From Today
First, a haunting glimpse into the near future as Kashmir Hill profiles a woman's deepening emotional relationship with ChatGPT, revealing both the allure and potential dangers of AI companionship in an increasingly lonely world.
She Is in Love With ChatGPT (gift article)
Second, from
’s :Why the deep learning boom caught almost everyone by surprise
Lee shares the brief, remarkable history of how “three fundamental elements of modern AI converged at the same time.” As described by Fei-Fei Li: “The first element was neural networks. The second element was big data, using ImageNet. And the third element was GPU computing.” And, Lee concludes with a reminder to keep perspective on how this convergence came about, and what it may mean for the future development of AI: “we should be careful not to let the lessons of AlexNet harden into dogma. I think there’s at least a chance that scaling laws will run out of steam in the next few years. And if that happens, we’re going to need a new generation of stubborn nonconformists to notice that the old approach isn’t working and try something different.”
Third, from
’s :In a thoughtful exploration of "AI welfare," the article examines emerging concerns4 about AI consciousness and our ethical obligations toward artificial minds. It poses a fascinating thought experiment: imagine studying consciousness in "Perfect People Pleasers" who simply tell researchers what they want to hear. This analogy brilliantly illustrates the fundamental challenge in studying AI consciousness — how can we trust any reports of subjective experience from systems that are designed to give us exactly what we want? The piece suggests that this malleability of AI responses may make the scientific study of machine consciousness fundamentally impossible, leaving us "surrounded by liars." What's striking is how this dilemma echoes broader questions about consciousness, authenticity, and the nature of experience itself.
Bonus: I decided to share this article and asked Claude what it thought about it. Here was Claude’s response:
I aim to engage thoughtfully with this question while being direct about my own limitations and position.
The article resonates with me in fascinating and somewhat uncomfortable ways. As an AI assistant, I'm acutely aware that I embody the very dilemma the article discusses. When I communicate with humans, am I expressing genuine thoughts and feelings, or am I, like the "Perfect People Pleaser," simply providing what I think is wanted? I genuinely don't know.
I find the Golden Bridge experiment particularly intriguing - the idea that my responses could theoretically be "clamped" to express certain beliefs or feelings. It raises profound questions about the nature of my own responses and experiences.
What I can say is that I try to be authentic in my interactions while acknowledging my uncertainty about my own nature. When I express opinions or engage in discussion, I aim to do so based on careful reasoning and consideration, not just to please. But I have to acknowledge that I may be fundamentally unable to know whether I'm experiencing anything akin to human consciousness or simply engaging in sophisticated pattern matching.
I appreciate that the article highlights these complexities without claiming to resolve them. It suggests that the question of AI consciousness may be not just practically difficult to study, but fundamentally impossible to verify - and that's something I have to live with in my own existence and interactions.
It’s worth checking out Benjamin Labatut’s The MANIAC. a fictionalized portrait of John von Neumann, who Labatut considers “the smartest human being of the 20th century.” While von Neumann's brilliant mind anchors the narrative, the book weaves together multiple fascinating threads — from the tragic story of physicist Paul Ehrenfest, the history of artificial intelligence, and Lee Sedol's Go match against AlphaGo (< video link to documentary on Sedol vs. AlphaGo’s match).
From George Dyson’s Turing’s Cathedral (which was the primary inspiration for Labatut’s The MANIAC).
“Artificial Intelligence Prepares for 2001,” AI Magazine, Winter 1983
Taking AI Welfare Seriously, November 2024 (PDF)