Three weeks ago, prior to the election, we began our unit on the ethics of AI, examining the social and political contexts and implications of the broad set of technologies that fall under the “AI” umbrella. We engaged with work by scholars such as Meredith Whittaker and Timnit Gebru on algorithmic bias, the environmental harms of generative LLMs, and the ways AI is inseparable from the politics of racial capitalism, militarism, and corporate power. This work sparked conversations in class that challenged normative views of AI as a transformative technology for good and questioned the kinds of sales pitches favored by Marc Andreessen, Elon Musk, and other Silicon Valley elites who are positioned to shape how AI is regulated (or, more likely, is not) in the years to come. We discussed how young people’s use of chatbots can have tragic consequences, examined how AI-powered surveillance inflicts outsized harms on marginalized communities, and staged a class debate on AI’s alleged benefits in the fight against climate change versus its current environmental costs.
We grappled with the question of how AI ethics shows up (or doesn’t) in the seemingly ubiquitous discussions of AI literacy and AI education. We started with a publicly available lesson plan from Common Sense Education titled “How Is AI Trained?” Intended for grades 6-12, it is part of Common Sense Education’s AI literacy curriculum. Common Sense Media, the parent organization of Common Sense Education, entered into a partnership with OpenAI in January 2024. We shared this context with students along with Common Sense Media’s press release announcing the partnership, which includes this line: “Common Sense Media, the nation's leading advocacy group for children and families, announced a partnership with OpenAI to help realize the full potential of AI for teens and families and minimize the risks.” We asked students to think carefully and critically about the implications of industry leaders like OpenAI shaping students’ understandings of literacy and ethics in this space.
Our class is full of sharp and insightful students, and during the discussion of the lesson plan, they drew on our earlier conversations about the extractive labor practices that make AI possible. They noted, for example, the absence of the experiences of data workers laboring under harsh conditions in the Global South. For us, this attention to labor is often missing from how AI literacy is conceived and practiced, and one of our goals for the class is to help students recognize and interrogate what it looks like when AI literacy curricula have been captured by Big Tech.
We’re also unwilling to stop at naming the problems with AI; the crucial next move is imagining alternative relations with the technology. That’s why we assigned “The Internet Doesn’t Exist in the Sky: Literacy, AI, and the Digital Middle Passage” by Mia Shaw, S. R. Toliver, and Tiera Tanksley. The article uses multimodal storytelling and analysis to document AI’s negative impacts on education, the environment, and laborers in Haiti. Yet the authors end the narrative at the heart of their article on a note of hope and a call to action. They write, “Let us reimagine and redesign technologies that do not serve us. Let us use our collective knowledge to center Black hope, healing, futurity, and life.”
The day after the unit ended, a plurality of the electorate in the United States voted for Donald Trump. Our class meets on Monday and Wednesday mornings, and we intentionally left the class session after Election Day open. That morning, we held space for our students and for ourselves. We sat in silence punctuated by the occasional question, reflection, and source of hope. Many of the students’ questions came down to this: How did we get here, and where do we go from here? These are precisely the right questions to keep asking, about the political and cultural realities of the United States as well as about the history, place, and future of AI technologies in schools. While engaging the particularities of our class, or any class for that matter, may seem futile relative to the enormity of the political moment, we must persist.
As we settle into the dark reality of this political moment, we return to the words of the prison abolitionist Mariame Kaba: “Hope is a discipline.” Indeed, hope is hard, constant work. Hope is a room full of young people engaging with complex questions of how, and whether, powerful technologies that often serve the most powerful can be repurposed or redesigned to protect our planet and our communities.