"Does History Even Matter? Early Political and Philosophical Debates in the AI Scientific Community"
A short essay on the ideas and debates of Joseph Weizenbaum, Alan Turing, and other key AI figures
Greetings friends and colleagues. We are back with the second installment in our mini-essay series based on our Northwestern class on AI, equity, and public education. It’s hard to believe that we are almost at the midway point of the quarter. In this essay, we describe some key ideas and themes from the first unit of our course, which examines the hidden social and political histories of AI. Our foray into AI history sets the stage for our next unit exploring the rapidly expanding literature on AI ethics in education. If you’re arriving here for the first time, please subscribe below and read the introduction to the series.
The week before last, we engaged heavily with Alan Turing's key provocation: "Can machines think?" In addition to Turing’s classic 1950 article, “Computing Machinery and Intelligence,” we read a collection of short essays on the “Untold History of AI” produced by the Institute of Electrical and Electronics Engineers (IEEE). The essays cover a wide range of topics, including the history of ARPAnet, the Mechanical Turk, and the fascinating story of algorithmic bias originating at St George’s Hospital in the UK in the 1970s. We situated the provocative question “Can machines think?” within these histories and discussed with students how remarkable it is that, in many ways, this is precisely the question our society now faces with the advent of generative AI.
Building from those readings, last week we continued our historical inquiries. We read “As We May Think,” a piece published in The Atlantic in 1945 by Vannevar Bush, the American engineer who oversaw all wartime R&D as Director of the U.S. Office of Scientific Research and Development during World War II. In the essay, Bush lays out a vision for post-war science that very much foreshadows the coming decades’ scientific preoccupation with intelligent machines. Next, because we’re interested in the history of AI and the history of teaching machines, we turned to Audrey Watters. We read excerpts from Watters’ book Teaching Machines: The History of Personalized Learning, attending closely to the history of B. F. Skinner’s teaching machines and the connections between what scholars have called “ed-tech saviorism” and contemporary ed-tech evangelists such as Sal Khan. We also discussed how the civil rights leader and math educator Bob Moses came to understand the problems of programmed instruction. As a member of the Student Nonviolent Coordinating Committee, Moses was interested in whether teaching machines could be embedded in Freedom Schools in the South to help with adult literacy programs. However, according to Watters, Moses eventually came to view the machinery as insufficient for and incompatible with the goals of liberatory education.
There is much more to say about all of the above, but we dedicate the rest of this post to another significant yet often overlooked figure in AI history whom we discussed with our students. Joseph Weizenbaum was a German-American MIT professor and computer scientist, known both for developing ELIZA, one of the earliest natural language processing programs, and for being one of AI's earliest critics. We explored his ideas and life largely through Ben Tarnoff's insightful profile of Weizenbaum in The Guardian, which describes Weizenbaum's growing skepticism toward and critique of AI as deeply interwoven with his personal and social history. Weizenbaum’s family escaped Nazi Germany in 1936 and immigrated to the US. He started his academic career in 1941 as an undergraduate mathematics student at Wayne State University in Detroit, and over the next couple of decades became increasingly involved in the civil rights and anti-war movements sweeping across the nation. Interestingly, as Tarnoff describes, Weizenbaum’s “leftwing political commitments complicated his love of mathematics.” We asked our students to reflect on the source of this tension and how it shaped both Weizenbaum’s instrumental role in the early development of AI and his unique place in history as one of its first critics.
Weizenbaum had substantial intellectual and ideological conflicts with other important AI figures, such as Marvin Minsky and John McCarthy, and notably also with Roger Schank. Our students in SESP were surprised to learn that Schank is widely considered one of the founders of the Learning Sciences. In 1989, Andersen Consulting committed $30 million over ten years to Schank's research and development, which allowed him to leave Yale and set up the Institute for the Learning Sciences (ILS) at Northwestern University. We emphasized to our students that Weizenbaum's growing criticism of AI was deeply related to the consolidation of the military-industrial complex, especially at his own university, MIT. In 1963, with a $2.2 million grant from the Pentagon, MIT launched Project MAC, an acronym with many meanings, including "machine-aided cognition." As we discussed this history with the class, we also briefly touched on the relatively unknown history of student and faculty activism against military-funded science and technology development. We related this to the ongoing protests against the war in Gaza, and specifically to calls for divestment from tech companies directly profiting from that war. While Weizenbaum’s critiques were clearly political, they also reflected fundamental philosophical disagreements with many of the early architects of AI. Where Alan Turing asked us to consider "Can machines think?", Weizenbaum posed a more radical question: "What does it mean to be human?" His early challenges to the AI scientific community helped advance the important idea that AI is not merely a set of technologies but also a set of ideologies attached to particular kinds of political interests. And this is precisely our jumping-off point as we transition to the AI ethics segment of our course this week.
As we conclude the history unit of our class, we leave you with the question we asked our students and ourselves: What lessons, insights, or cautions do these histories hold for our current conversations about AI? Does history even matter? In the introduction to Teaching Machines, Audrey Watters quotes a technology entrepreneur’s take on history. The entrepreneur says, “‘The only thing that matters is the future. I don’t even know why we study history. It’s entertaining, I guess–the dinosaurs and the Neanderthals and the Industrial Revolution, and stuff like that. But what already happened doesn’t really matter. You don’t need to know that history to build on what they made in technology, all that matters is tomorrow’” (Watters, 2021, p. 8). Sadly, that sentiment sums up a prevailing attitude in society toward technological advancement writ large. Our decision to dedicate the first unit of our class to historical context is an effort to counteract this ahistorical approach. Exploring the scientific and political history of AI is not only tremendously fascinating but also vital to our ongoing efforts to accurately understand the stakes, risks, and possible futures we should be building toward or actively resisting.