A quick note of gratitude
To The Philadelphia Citizen, which picked up and published my introduction to last week’s essay as a slightly edited, stand-alone article. In case you missed it, you can read that here: Guest Commentary: Not Another Election Post-Mortem.
The year was 1974, and everyone who claimed to know better was having a terrible time of it.
In Britain, researchers were quietly clearing out AI labs, boxing up dreams alongside papers and punch cards. The Lighthill Report had done its work, convincing Parliament that artificial intelligence was more artificial than intelligent.1 Funding dried up like spilled tea on academic papers, leaving behind only stains of what might have been.
Meanwhile, across the Atlantic, an American president was packing up his own office, though with considerably more fanfare. Richard Nixon, who had once declared "I am not a crook" with all the confidence eighteen and a half missing minutes could buy, was discovering that even the highest authority in the free world couldn't delete an inconvenient tape recording.
The BBC production teams whose cameras had, just months before, recorded Sir James Lighthill debating the future of AI with leading practitioners Professor John McCarthy, Professor Donald Michie, and Professor Richard Gregory at the Royal Institution in London were now filming variety shows and quiz programs.
The grand questions about machine intelligence2 had been replaced by more practical concerns, like how to heat homes during the oil crisis or whether the next prime minister would last longer than the prior one.
The experts were having a particularly bad year. Economists who had promised eternal growth watched helplessly as inflation soared past 12%. The OPEC nations, dismissed as peripheral players in the grand game of global economics, were showing the West that expertise counted for little when you controlled the oil. In gas stations across America, metal numbers clicked upward with a mechanical regularity that would have impressed any computer, each click marking another dent in the armor of institutional infallibility.
Even in the quieter corners of computer science, where Terry Winograd's SHRDLU program had taught a computer to understand simple commands about colored blocks, there was a growing sense that the grand ambitions of artificial intelligence had outpaced the field's actual achievements. A young researcher named Paul Werbos submitted a PhD thesis containing something called "backpropagation" - an idea that would later revolutionize machine learning - but in 1974, it disappeared into library stacks with barely a whisper, like Nixon into his California exile.
In the early morning hours of August 7, Philippe Petit stepped onto a wire illegally strung between the Twin Towers of the World Trade Center. For forty-five minutes, he danced between New York's newest monuments to institutional might, turning their architectural swagger into his personal stage. The Port Authority, that grey-suited arbiter of what could and couldn't be done with its buildings, was left helplessly watching from below as a Frenchman with a balance pole made a mockery of their authority.
In Hadar, Ethiopia, anthropologist Donald Johanson was about to make another kind of mockery of authority - this time of our own species' pretensions. The discovery of Lucy, a 3.2-million-year-old ancestor, suggested that human institutions were rather young affairs compared to the long dance of evolution. Lucy's small skull held no trace of committees, bureaucracies, or artificial intelligence labs, yet she had managed to pass her genes down through millions of years without any of them.
The first retail barcode scanner was installed in a Marsh Supermarket in Troy, Ohio. On June 26, a pack of Wrigley's chewing gum passed across its electronic eye, marking the moment when machines began keeping track of what we bought. The same institutions that couldn't track their own corruptions or predict economic disasters were now cataloging every purchase of gum and bread with digital precision.
And then, The Rumble in the Jungle. In Kinshasa, Zaire (now known as the Democratic Republic of Congo), Muhammad Ali stood in a boxing ring facing George Foreman, the heavyweight champion whose dominance had been certified by every boxing authority and expert in the world. Using a strategy he called "rope-a-dope," Ali let the champion punch himself into exhaustion against the ropes. It was a metaphor for the year itself - the mighty exhausting themselves against the resilience of those they thought they could easily defeat.
Science marched on, though perhaps with less confidence in its step. Astronomers studying quasars - those impossibly bright, impossibly distant objects - were discovering the universe was stranger than anyone had imagined. The same species that couldn't keep its president honest was finding out it didn't understand basic facts about cosmic architecture. It was as if the very fabric of space-time was joining the general revolt against established wisdom.
Thomas Nagel published his philosophical paper "What is it like to be a bat?" asking whether consciousness could ever be fully understood from the outside. The AI researchers, had they not been busy clearing out their labs, might have recognized their own predicament in his question. How could machines think like humans when humans weren't even sure how they themselves thought?
It was a year when the grandest ambitions seemed to collapse under their own weight, while simpler technologies quietly found their place in the world. The same institutions that had promised thinking machines couldn't explain their own thoughts. Those who claimed to predict the future couldn't see what was right in front of them. The experts had lost their expertise, the authorities their authority.
The year ended quietly, without fanfare or prophecy. No machines learned to think, no presidents learned to tell the truth, and no economists learned to predict the price of oil. But perhaps something more important was happening: humans were learning the limits of their own certainty.
And in the end, that might have been the most intelligent thing anyone did in 1974.
Pamela McCorduck offers one of the most interesting perspectives on The Lighthill Report in her 1979 classic, Machines Who Think. For example, on the conspiratorial murmurs: “Charitably speaking, the report seems to have been done in a hurry— in such a hurry, in fact, that rumors immediately flew that its main purpose was personal vendetta, with Sir James as hatchet man for others who were nettled by AI and some of its practitioners. Since the report coincided with, and surely exacerbated, the dissolution and reorganization of the Edinburgh University AI laboratory, the rumors seemed true. But Bernard Meltzer, still at Edinburgh, has doubts as to any conspiracy, though he’s one of several who complained to me about how the report was done." (p. 276, Kindle edition)
McCorduck also notes The Lighthill Report’s reference to potential “womb-envy” inspiring early AI research. From the report: “Incidentally, it has sometimes been argued that part of the stimulus to laborious male activity in creative fields of work, including pure science, is the urge to compensate for lack of the female capability of giving birth to children. If this were true, then Building Robots might indeed be seen as the ideal compensation! There is one piece of evidence supporting that highly uncertain hypothesis: most robots are designed from the outset to operate in a world as like as possible to the conventional child's world as seen by a man; they play games, they do puzzles, they build towers of bricks, they recognise pictures in drawing-books (bear on rug with ball); although the rich emotional character of the child's world is totally absent.” (Artificial Intelligence: A General Survey)