I had the great honor of delivering the closing keynote at the NSF ECR PI meeting. I’m sharing here the text of my talk, titled “Will AI Expand Opportunity and Equity in STEM Education?: Considerations of Ethics and Power.” For those outside of academia, the National Science Foundation is a federal agency charged with supporting fundamental research and education across all non-medical science and engineering disciplines. The ECR PI meeting is a research-focused two-day convening, where most attendees are researchers working in the field of STEM (Science, Technology, Engineering, and Mathematics) education research. I was invited to give a talk addressing the future of AI in STEM education. My talk ran about 20 minutes, followed by a Q&A with Computer Science Professor Yolanda Rankin of Emory University, one of the leading researchers working on issues of computing education and equity.
Will AI Expand Opportunity and Equity in STEM Education?
As we think about the role of AI in education, there is clearly a broader context of AI to consider. This is not just the AI moment in education. AI is, depending on your perspective, either threatening or transforming society, with implications across science, culture, business, and politics. Let us name something right away. AI is an academic subfield originating at the intersections of the nerdiest of disciplines: computer science, cognitive science, electrical engineering, mathematics, and philosophy. I say this with pride, by the way - my own academic roots are in electrical engineering, and my master’s thesis advisor, Dr. Ali Sayed, is one of the pioneers of statistical methods for artificial learning and adaptation. Yet somehow this esoteric academic subfield has rapidly evolved into a highly politicized topic. Who knew us STEM people could be so passionate?
To state what may be obvious, there are strong feelings and strong passions about AI, and those sentiments also exist right here in this room, within our universities and departments, and if we’re lucky even within our research labs and classrooms. And I say lucky because I fundamentally believe that diversity of thought and perspective is crucial to the role and future of AI in our field - it’s not an obstacle to overcome, but actually represents a great opportunity for the field of STEM education. In a recent convening led by the Spencer Foundation just last week, which was co-sponsored by NSF, I had the great privilege of working with leaders Na’ilah Nasir (President of the Spencer Foundation) and Alondra Nelson (Professor at the Institute for Advanced Study and former acting director of the White House Office of Science and Technology Policy) to design a space with productive tension around the possibilities of AI at its very core. And that kind of tension and complexity is exactly what I’m excited to embrace and engage throughout my comments and in my conversation with Dr. Yolanda Rankin.
So where shall we begin? Let’s start from what we know and where we are. It is self-evident at this point that AI will alter the future of work and business; the future of medicine and healthcare; scientific research and learning (the Royal Society just put out a report, “Science in the Age of AI”); and of course AI is also shaping the future of warfare. Whatever one’s political orientations may be, there is no denying that AI is playing a significant role in the ongoing war in Gaza. To put a finer point on it, Ukraine and Palestine have in effect become laboratories for AI technologies. I know we get a bit tense around these issues, especially with the complexity of the Middle East and the implications for higher education, academic freedom, and student protests that we are seeing across the nation. But if we’re serious about talking about AI ethics, then ignoring the context of AI usage in asymmetric warfare would be dishonest and irresponsible. Fortunately, we are not alone in thinking through these issues. There are other academic communities that we can and should be in dialogue with, such as the Association for Computing Machinery’s FAccT Conference, which released multiple statements on AI’s use in warfare ahead of its upcoming conference next week in Rio de Janeiro, Brazil - which I’ll be attending. I told my wife, who was understandably a bit skeptical: Babe, listen, I need to be there; it just so happens they decided to host the conference in Rio. I would have been just as excited to attend if it was held here in Arlington, Virginia!
But it’s not just politics, business, and science. Significantly, AI carries profound implications for culture and for the arts - for how we experience, make sense of, and reflect on the world around us and its deeper meanings. Like many of you, I watched with great interest as screenwriters went on strike for five months against the biggest studios in Hollywood. Amazingly, AI - and more specifically protecting their art and their livelihoods from the threat of AI - was the central issue in their contract negotiation. And incredibly, they were successful. This is a stunning recent example of the power, politics, and passions fueling current discussions and possible futures for AI technologies.
But it would also be a mistake to reduce AI and the arts to a battle between struggling artists on the one hand and exploitative Hollywood studios on the other. Thousands of other artists are embracing AI as a new tool to unlock their creativity and ultimately expand the range of human expression. Just as the computer and the internet deepened our exploration of human connection and the human condition through technologically mediated tools, so might AI. And even though in recent weeks perhaps the most visible artist to embrace AI is currently off somewhere licking his wounds after a humiliating defeat in a rap beef to a Pulitzer Prize-winning artist whose natural wit and brilliance had no need for AI, the point is that the question of how AI should and will impact the future of the arts is a really fascinating and complex one. And it’s an instructive case for those of us whose work is in STEM education - with similar tensions, possibilities, constraints, and passions.
This is what is clear and what we know. Where there is big data, AI and machine learning will play an increasingly important role - from space exploration to data-driven medicine. A systematic review of the role of AI tools and methods during the Covid-19 pandemic, recently published in the Journal of Thoracic Disease, reveals an astounding range of ways that AI played a role in cutting-edge applications in medicine, treatment, and target recognition.
In just the several cases I’ve referenced (and there are many more) - the future of AI in the arts, AI-enabled data analytics in medicine, AI in contexts of surveillance and war - what is clear is that AI is not a theoretical technology that we are merely on the cusp of. It’s already here. Its potential and impacts will be real: complex, multi-faceted, and fraught. In fact, AI, as a set of statistical techniques and algorithms, has been here for 60-plus years, but we are here talking about it now because it has exploded into the public discourse via the advent of generative AI technologies like OpenAI’s ChatGPT, Google’s Gemini, and so forth.
And, critically, education is often the primary use case when envisioning the transformative potentials of generative AI. Despite how recent the arrival of generative AI across K12 and higher ed is, it has already made waves. We can see this in districts scrambling over the last year to develop AI policies for their schools, in educators either experimenting with or recoiling from the use of AI in their classrooms, in universities forming new steering committees to grapple with the future of data and AI for scientific research, and in the seemingly infinite number of seminars and keynotes - like this one - pontificating about what in the world is going on and what should be done about AI. It’s making waves.
So if AI is making waves in education, the question becomes: should we all learn how to surf and blissfully ride the digital waves into the future? Or should we all be investing in life vests and swim lessons, preparing not for awe-inspiring waves that lift all boats, but for the chaotic and unruly waves of a digital tsunami that our education systems are unprepared to deal with? This is the question, right? The big question that is demanding an answer in STEM education. And I’ll be clear: asking the question, or naming it, doesn’t suggest I’m advocating for a big uptake of AI in STEM classrooms in schools. There’s a big difference between asking the question, naming that it is indeed a question, and blind advocacy. And this is why I humbly believe the work we all do in STEM education research is so vital and necessary for the moment.
So AI is here. It’s making waves. The train has left the station. The AI ship has sailed. Pick your favorite metaphor. The task now is to understand its trajectory and the implications of that trajectory in education, and to figure out what we need to know such that we might be able to shift it toward directions that are good for us. Good for schools, and good for children - especially those who have been most acutely marginalized by current and past systems of education. To leverage AI as a tool for learning and for equity in STEM.
What might it mean to think about the potential of AI as a tool for learning and equity? My friend and colleague, science education professor Dr. Danny Morales-Doyle, gave a talk at Northwestern University recently ahead of the release of his new book Transformative Science Teaching, where he provided a historical framing for STEM education rooted in the words of one of our great American teachers. In thinking about equity in STEM education and the AI moment in particular, the words of Dr. Martin Luther King Jr., from his Nobel Lecture on December 11, 1964, are particularly insightful:
“Modern man has brought this whole world to an awe-inspiring threshold of the future. He has reached new and astonishing peaks of scientific success. He has produced machines that think and instruments that peer into the unfathomable ranges of interstellar space. He has built gigantic bridges to span the seas and gargantuan buildings to kiss the skies. His airplanes and spaceships have dwarfed distance, placed time in chains, and carved highways through the stratosphere. This is a dazzling picture of modern man’s scientific and technological progress. Yet, in spite of these spectacular strides in science and technology, and still unlimited ones to come, something basic is missing. There is a sort of poverty of the spirit which stands in glaring contrast to our scientific and technological abundance. The richer we have become materially, the poorer we have become morally and spiritually. We have learned to fly the air like birds and swim the sea like fish, but we have not learned the simple art of living together as brothers [and sisters].” - Dr. MLK
These words were written just shy of 60 years ago. Clearly the scientific and technological abundance has only become more abundant. AI, literally the stuff of science fiction in years past, is now real life. And not just that, but we are at an inflection point. The projections and predictions from AI experts indicate we are at the very beginning of a technological revolution born out of the co-occurrence of vastly expanded computational power and algorithmic complexity.
Some of you may know the name Joseph Weizenbaum, the famed MIT computer scientist who developed one of the first chatbots ever, ELIZA, and who was also one of the earliest critics of AI. From an early age, his experiences as a Jew in Nazi Germany sensitized him to experiences of oppression, and his career was defined by a joint commitment to leverage the power of computers as an instrument for scientific innovation as well as for addressing deep societal and moral challenges of the time. I believe it would serve us well to use the example of AI pioneers like Weizenbaum, as well as Dr. King’s profound provocations, as a starting point to think hard about the potential of AI as a tool for equity and justice in STEM education.
Some of you are on board… but others may be thinking: okay, what do we mean by equity in education? I’m struck by a question from the audience during last night’s opening keynote with Dr. James Moore about the state of equity given massive amounts of government and corporate investment. So what do we mean by equity? Well, if you’re asking me, there are a few dimensions. First, this year marks 70 years since Brown v. Board of Education, and tragically the seemingly basic project of providing access to quality education for all children is still shamefully unrealized. For a detailed and sobering perspective on the state of education 70 years since Brown, I suggest the Spencer Foundation’s white paper written by Professor kihana miraya ross, titled “On the Road to Brown and Beyond: Troubling Integration, Desegregation, and Segregation in the Fight for Black Educational Equity, Opportunity, and Justice.” The potential for AI to contribute to equity certainly entails expanding access to engaging, personalized, high-quality learning. But it’s also about systems. We know from decades of equity scholarship in education that equity is unattainable without systemic and institutional transformation. How will AI play into the complex systems and structures that currently comprise the American public education system? How does the promise of personalized learning address systemic issues?
Another element of equity is tied to culture, identity, and values. Over the past two decades or so, learning scientists have demonstrated that learning is a fundamentally cultural process. Yet current AI tools have largely failed to take up modern theories of learning. This perspective is articulated especially powerfully in a recent commentary in JRST by science education professor Lucy Avraamidou, titled “AI colonization of science education.” As a professor in a Learning Sciences department, I’ll say this is one of the most urgent imperatives of learning sciences as a field. In fact, the early formation of the Learning Sciences as a field is inseparable from AI research funded in universities in the 1960s. Any serious engagement with equity must take into account the cultural and social dimensions of learning, schools, and education more broadly. And the absence of this in current AI technologies is both a problem and an opportunity.
So, to conclude, I’ll return to the title of the talk, “Will AI Expand Opportunity and Equity in STEM Education?” The honest answer: I’m not sure. It depends. There was another great question yesterday from the audience after Dr. James Moore’s opening keynote about the fundamental questions facing the field of STEM education. I think this is undoubtedly one of them. So that we as a field might be in a position to answer it, say, five or ten years from now, I offer my perspective on what should be done and how to approach the question.
First, to borrow a phrase from the great educational historian Larry Cuban, we need to engage in a “ruthless scrutiny” of AI ethics. While there have been many technologies that have promised (and failed) to revolutionize education (hey, MOOCs! - read MIT Professor Justin Reich’s book for a thorough history of technology and education), AI is unique in the kinds of moral and ethical questions it raises: environmental costs, warfare, surveillance, to name a few. AI ethics in STEM education cannot be viewed as separate from these broader ethical and moral questions. Our field must engage with the political economy of AI, always asking: who controls the tech and the data, and to whose benefit? How might NSF, and those of us who have NSF-funded projects, intervene and shift this ecosystem?
Some say the answer is to abolish AI. I don’t think that’s right, if nothing else because it’s not realistic. Instead, we should work towards democratizing AI. By democratize I mean we need to find ways to empower other groups - outside of big tech companies - to be on the cutting edge of innovation. There are examples of this work already happening. I want to highlight the work of one community-based, Indigenous-led media organization working in New Zealand, Te Hiku Media, that is using machine-learning techniques for language revitalization - specifically of the Māori language, the native language of the Indigenous peoples of New Zealand. I think as university researchers we should be thinking creatively and boldly about how to partner with organizations like Te Hiku Media, to simultaneously cultivate cutting-edge science and also shift the balance of power in the AI landscape.
Three more “we shoulds,” if you can indulge me just a minute more before I sit down for a conversation with Professor Rankin. Despite, or perhaps particularly given, the broader backlash we are seeing around DEI, now is the time to show resolve and double down on efforts to advance racial equity in CS and STEM education. We should provide all learners not just with high-quality technical education, but also with opportunities to explore the historical, ethical, and social implications of technology in local and global contexts. And finally, as we think about workforce projections across industry, academia, policy, education, and community contexts, there is an urgent need to develop new pipelines of talent - specifically people who have solid foundations in technology, matched with equally rigorous groundings in the social, ethical, and policy challenges of new technologies, including AI. Thank you.