Note:
As we approach the end of our first decade in A Short Distance Ahead, it seems fitting to pause and reflect on the deeper roots of artificial intelligence. This week's essay includes a historical 'flashback' that reaches beyond our usual yearly focus. Drawing heavily from Michael Kanaan's book "T-Minus AI," specifically Chapter Four: "Secret Origins of Modern Computing," we'll explore how events from the early 20th century set the stage for Turing’s 1950 paper and the subsequent AI developments we've been tracking. This detour into the past will help illuminate one of the long arcs of technological progress that led to the birth of AI as we know it today.
The year was 1959, and as the first decade since Alan Turing's seminal paper, "Computing Machinery and Intelligence," drew to a close, the world continued its relentless march towards an uncertain future. In Cuba, a bearded revolutionary named Fidel Castro seized power,1 while the United States expanded its celestial ambitions by adding two new stars to its flag - Alaska and Hawaii.2 The Soviets, not to be outdone, hurled a metal ball called Luna 1 towards the moon, as if the cosmos itself were a chessboard in their grand game of geopolitical one-upmanship.3
Yet as these events unfolded on the world stage, a quieter revolution was brewing in the realm of computing and artificial intelligence. To understand its origins, we must look back to a time before Turing's groundbreaking work, to a pivotal moment that would set the stage for the birth of modern computing.
In the early nineteenth century, a war between Spain and Mexico set in motion a strange string of events that ultimately led, more than 100 years later, to the creation of the world’s first computer capable of processing information far beyond our own capacity.4
After three centuries of Spanish rule, Mexico fought for its independence in a conflict that lasted from 1810 to 1821. When the fighting ended, Mexico found itself in control of a vast territory stretching from the Yucatán Peninsula in the south to present-day California in the north, including what would later become Texas.5 However, not everyone in this expansive land was content with Mexican governance, especially the settlers of what was then called “Mexican Texas.” So they fought their own war, and after a series of intense battles, including the famous siege of the Alamo and the decisive victory at San Jacinto, Texas won its independence in 1836 and became the Republic of Texas, much to Mexico's displeasure.
Not too long after, the United States, in its period of westward expansion, annexed Texas as its 28th state in 1845. This move, combined with ongoing border disputes and America's interest in acquiring more land, led to the Mexican-American War in 1846. The conflict lasted two years and concluded with the Treaty of Guadalupe Hidalgo in 1848. As a result, Mexico was compelled to cede more than half of its territory to the United States in exchange for $15 million and the settlement of $3.25 million in debt owed to American citizens.6
This territorial acquisition dramatically reshaped the map of North America. The United States gained lands that now comprise all of California, Nevada, and Utah, most of Arizona, about half of New Mexico, a quarter of Colorado, and a small portion of Wyoming. This massive land transfer not only expanded the United States significantly but also redefined its relationship with Mexico, setting the stage for tensions that would shape North American geopolitics for generations to come.
Now fast forward to June 1914, when the assassination of Archduke Franz Ferdinand of Austria-Hungary in Sarajevo set off a chain reaction of alliances and declarations of war across Europe. Within weeks, Germany, Russia, France, Britain, and several other nations were embroiled in what would become known as World War I. The conflict quickly spread beyond Europe, reaching into the Middle East and East Asia.
The United States initially maintained a position of neutrality, but tensions with Germany grew as the war progressed. By early 1917, Germany was preparing to launch an unrestricted submarine campaign in the North Atlantic, targeting both Allied vessels and neutral cargo ships, including those from America. This strategy threatened to draw the United States into the conflict.
It was in this context of escalating global tensions that German Foreign Minister Arthur Zimmermann made a bold and secretive move. In January 1917, he sent an encoded telegram to the German ambassador in Mexico, proposing an alliance between Germany and Mexico against the United States. The telegram read:
We intend to begin on the first of February unrestricted submarine warfare. We shall endeavor in spite of this to keep the United States of America neutral. In the event of this not succeeding, we make Mexico a proposal of alliance on the following basis: make war together, make peace together, generous financial support and an understanding on our part that Mexico is to reconquer the lost territory in Texas, New Mexico, and Arizona . . . Signed, Zimmermann7
The British intelligence service intercepted and decrypted this message, revealing Germany's clandestine proposal. Despite strong opposition to entering the war among many Americans and members of Congress, President Wilson saw the telegram as an opportunity to shift public opinion. He presented it to Congress and authorized its release to the American media. On March 1, 1917, news of Germany's secret overture to Mexico made headlines across the nation. Some skeptics initially dismissed it as propaganda, but Zimmermann's subsequent acknowledgment of the telegram's authenticity silenced most doubts.
With public sentiment now largely unified behind him, Wilson confidently requested a declaration of war from Congress. On April 6, 1917, the United States officially entered World War I.
The interception and decryption of the Zimmermann telegram would not only draw America into the war but also spark a cryptographic arms race that would shape the future of computing. In the aftermath of World War I, a pervasive sense of paranoia and urgency to safeguard information spread across nations, particularly in Germany. In response to this need for enhanced security, Arthur Scherbius, an innovative German electrical engineer, patented a groundbreaking machine capable of encoding information with unprecedented complexity. In 1923, Scherbius established the Cipher Machines Corporation in Berlin to manufacture his invention, which he named Enigma.8
The German Enigma machine led directly to the codebreaking work of Alan Turing and others at Bletchley Park during World War II.9 Their efforts to crack German ciphers produced Turing’s electromechanical Bombe, built to attack Enigma, and Tommy Flowers’ Colossus, an electronic computer aimed at the still more complex Lorenz cipher.10 Both remained secret outside the highest ranks of British Intelligence for years. Yet both, along with the ENIAC in Philadelphia, which had originally been used primarily to calculate artillery firing tables and solutions for the US Army’s Ballistic Research Laboratory, were critical in laying the groundwork for the digital age.
And as the 1950s progressed, this legacy manifested in two distinct approaches to artificial intelligence: the symbolic and the sub-symbolic. The symbolic approach, championed by researchers like John McCarthy, sought to represent knowledge and reasoning using formal logic and symbol manipulation. This led to the development of LISP, a programming language that McCarthy had been refining since its inception in 1958.
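To make that contrast concrete, here is a minimal, purely illustrative sketch of the symbolic style, written in Python rather than LISP and not drawn from any program of the period: knowledge lives in explicit facts and hand-written rules over symbols, and "reasoning" means applying those rules until nothing new can be derived.

```python
# Illustrative sketch of the symbolic style: explicit facts and a hand-written
# rule, with new conclusions derived by applying the rule until nothing changes.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def forward_chain(facts):
    """Derive ("grandparent", x, z) whenever x is a parent of y and y is a parent of z."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (r1, x, y) in list(derived):
            for (r2, y2, z) in list(derived):
                if r1 == r2 == "parent" and y == y2:
                    new_fact = ("grandparent", x, z)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(forward_chain(facts))  # includes ("grandparent", "alice", "carol")
```

Everything the program "knows" is written out explicitly, which is precisely what made this approach feel like a clear, inspectable path to machine reasoning.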
On the other hand, the sub-symbolic approach, inspired by the structure of the human brain, focused on creating artificial neural networks that could learn from experience. Frank Rosenblatt's Perceptron, unveiled just a year earlier in 1958, exemplified this approach.
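The sub-symbolic style can be sketched just as briefly. The toy example below, again illustrative Python rather than anything resembling Rosenblatt's original hardware, applies the classic perceptron learning rule to the logical AND function: instead of hand-written rules, the weights are nudged whenever a prediction comes out wrong.

```python
# Illustrative sketch of the perceptron learning rule on a toy problem:
# learn the logical AND function from examples rather than explicit rules.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - prediction
            # Adjust weights and bias only when the prediction is wrong.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
weights, bias = train_perceptron(data)
print(weights, bias)
```

The contrast with the symbolic sketch is the point: no rule for AND appears anywhere in the code; the behaviour emerges from the examples and the update rule.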
In 1959, these two paradigms were like two saplings, newly planted but already straining towards the sun. The symbolic approach, with its roots in logic and mathematics, promised a clear path to machine reasoning. The sub-symbolic approach, drawing inspiration from biology, offered the tantalizing possibility of machines that could learn and adapt.
As the decade ended, no one could have predicted which approach would bear the most fruit. Like Robert Noyce's newly invented integrated circuit, which squeezed multiple electronic components onto a single chip, the future of AI seemed to be one of increasing complexity and integration.11
And so, as 1959 drew to a close, the seeds planted in the early 20th century - from wartime cryptography to peacetime computing - were beginning to sprout. The next decade would see these ideas grow and intertwine in ways that even the brightest minds of 1959 could scarcely imagine. The question was no longer if machines could think, but how they would think, and what that would mean for a world already grappling with revolutions both political and technological.