[This is an essay about the politics of surveillance and surveillance technologies in Chicago, and implications for youth and schools. Co-written with PhD students Anonymous and Shai Moore. We have created pedagogical endnotes for this essay for educators. Special thanks to University of Illinois Chicago Professors Daniel Morales-Doyle and David Stovall for their feedback on this piece; and to the members of the AI ethics curriculum working group, a collaboration between the TREE lab and educators from Evanston Township High School.]
My eldest son is 6 now, but this story is from when he was around 4 years old. We live in a socioeconomically and racially diverse South Evanston neighborhood, with significant populations of African American, Caribbean, African, Latinx, Muslim, and other immigrant families. This is not to suggest it's any kind of racial utopia. If you zoom in on the level of a single street, the racial and economic disparities come into clear view. Our street, for example, is intersected by two major avenues, Dodge and Ridge. On the corners of both of those intersections are large apartment buildings, with single-family houses in between. The families of color are almost exclusively confined to the apartment buildings, and the houses are owned by white families. So while the neighborhood might be characterized as diverse, wealth and whiteness dance just as one might expect.

One particular fall evening, I was on a walk with my son. Now I have to say, my kid is full of life–equal parts joy and mischief, constantly testing the boundaries. Sometimes this means he's running into neighbors' lawns or driveways–many of them our friends who welcome his antics and run out to greet him with smiles and hugs. But on this particular day, he ran into the driveway of an unfamiliar house equipped with a Ring doorbell. His running around was detected by the camera and triggered a digital recording that said something to the effect of “you are being recorded. Please leave the premises.” This stopped him in his tracks, leading to several questions. I told him to quickly get off their property, and as we continued towards the park where we were headed, I found myself having a conversation with my 4-year-old son about the logic and ethics of surveillance technologies. Of what it means to be watched. And the role of tech.
How should we think and talk about surveillance with our kids, or, for the educators out there, with our students? [1] As community members, how should we talk about surveillance with our friends and neighbors? There are many different forms of surveillance. [2] For example, communities of people might surveil their neighborhoods by organizing block watch groups, using apps like Citizen and Nextdoor, or installing video doorbell cameras in their homes. As users of technology like computers, smartphones, and the internet, we are subject to surveillance via targeted ads based on past purchases, recommendation engines offering us suggestions for the next song or video to play based on our browsing history, or through the massive amounts of data about us that get collected, bought, and sold by data brokers. There are, of course, many innocuous or helpful types of surveillance, like a lifeguard at a beach, a baby sleep monitor, or surveillance tech used during a rescue mission.
But we (our research group at Northwestern University) start from the point of view that surveillance is far from a neutral field. Particularly given the advent of AI technologies, surveillance also entails tech-enabled monitoring by universities, schools, militaries, and local law enforcement agencies. Technologies like machine learning and artificial intelligence have become increasingly important in the development, deployment, and expansion of surveillance in our daily lives. Surveillance technologies supercharged by data-intensive techniques like machine learning and “predictive” algorithms carry widespread and serious risks of harm, like false arrest and imprisonment based on faulty predictions. These risks and harms are not shared evenly across all people. As Just Tech Fellow Adrienne Williams wrote recently for Visible Magazine:
“AI requires data collection and data collection requires surveillance. Since data is money, surveillance is too. Hence, why nearly every company is now building surveillance into their products. To be clear, the unregulated data collection free-for-all we are currently experiencing affects everyone, but those living within vulnerable communities experience the brunt of its harms.”
Surveillance technologies are often lauded by politicians and law enforcement agencies as tools to expand and protect safety. And many people do feel that having a Ring camera on their door or ShotSpotter in their neighborhood makes them safer. As we write from Chicago–which has a massive tech-powered surveillance infrastructure–local activists have criticized Mayor Brandon Johnson’s renewal of the controversial ShotSpotter contract, despite his campaign promise to cancel it outright. ShotSpotter is an acoustic gunshot detection technology that relies on a network of microphones installed across the city to record loud sounds, apply computational analysis to the waveforms, and flag them as potential gunshots for law enforcement response. Many legal experts, scholars, and activists have argued that ShotSpotter functions to intensify regimes of surveillance and control that disproportionately impact low-income communities of color across the nation [3].
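To give a feel for the underlying mechanics, consider the minimal sketch below. To be clear, SoundThinking’s actual algorithms are proprietary; this is only our own toy illustration of generic acoustic event flagging, with made-up thresholds and synthetic audio:

```python
# A toy sketch of acoustic event flagging (our illustration with
# made-up thresholds; SoundThinking's actual algorithms are
# proprietary): flag windows of a waveform whose energy spikes,
# the way a gunshot, firework, or backfiring car might.
import numpy as np

def flag_impulsive_events(waveform, sample_rate, window_ms=50, threshold=0.5):
    """Return start times (in seconds) of windows whose mean squared
    amplitude exceeds `threshold` (an arbitrary cutoff)."""
    window = int(sample_rate * window_ms / 1000)
    flagged = []
    for start in range(0, len(waveform) - window + 1, window):
        energy = float(np.mean(waveform[start:start + window] ** 2))
        if energy > threshold:
            flagged.append(start / sample_rate)
    return flagged

# One second of quiet background noise with a loud burst at 0.5 s.
rate = 16_000
clip = 0.01 * np.random.randn(rate)
clip[rate // 2 : rate // 2 + 400] += 2.0  # the impulsive "bang"
print(flag_impulsive_events(clip, rate))  # ~[0.5]
```

Even this toy version points to the core criticism: a loud impulse is not necessarily a gunshot–fireworks and backfiring cars can produce similar signatures–yet a flag can still dispatch armed police to the scene.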
We believe everyone deserves to feel safe. But we also want to advocate for a deeper analysis, a more complex understanding of whether these technologies actually positively impact community safety. The crime-reducing benefits touted by SoundThinking (the company that sells ShotSpotter) are dubious and unproven at best, and at worst the technology is actively harmful, with people in communities of color and low-income neighborhoods disproportionately at risk of false arrest and imprisonment or violent–sometimes fatal–encounters with police responding to ShotSpotter alerts.
Do these technologies actually make neighborhoods safer? For whom, and at what costs? Grappling with these questions, we take inspiration from Chicago’s young organizers who continually mount powerful challenges against those in power and rightfully scrutinize a city government that dedicates half of its budget to policing and surveilling its citizens. In 2020, when young activists pushed the Chicago Board of Education to cancel its $33 million contract to deploy police in schools, they offered concrete and clear alternatives, demanding the district instead invest in resources such as school nurses and libraries, arts programming, social workers, and mental health supports–things that they argued actually make them feel safer than armed officers patrolling their hallways. To put these demands in context, in 2022 it cost over $85,000 per school day to continue the school resource officer (SRO) program in a few dozen CPS schools; how many social workers’ entire yearly salaries might be funded with this money instead? In response to those who argue for funneling public funds toward surveillance technologies in the name of safety, these young people continue to ask: what really makes us feel safe, keeps us safe?
These are also the kinds of questions our high school students have taken up in the Young People’s Race, Power, and Technology Project (YPRPT), an afterschool program that grew out of a collaboration between our research lab, Evanston Township High School, Family Matters (a community-based, youth-serving organization in Rogers Park), and the digital advocacy group Lucy Parsons Labs. Working with Chicago-based filmmaker Raphael Nash, YPRPT youth have created short documentaries about the impacts of surveillance tech in their neighborhoods. Their films have been shared in public screenings hosted by the Block Museum and made available for educators on our website.
The youth perspective on surveillance is critical, yet rarely solicited and, as a result, poorly understood. In our work with high school youth from around the city, we have learned just how pervasive skepticism towards surveillance tech is, especially within Black and Latinx communities [4]. We have learned that the promise of surveillance technologies such as facial recognition tech, license plate readers, or ShotSpotter often exploits and commodifies legitimate community concerns related to safety and security while neglecting other community values such as the need for privacy, autonomy, and dignity. Perhaps most importantly, we’ve learned that the young people of Chicago have a range of important perspectives about tech and surveillance, and that supporting their learning requires respecting the depth and range of their thinking [5.1, 5.2]. The youth are brilliant and wise, and they have much to offer in the unfolding conversations about surveillance and tech in their city and beyond.
This is not surprising, or at least it shouldn’t be. Chicago is one of the oldest, largest, and most diverse and dynamic cities in the United States, and also one of the most racially and economically segregated. The history of surveillance in Chicago is deep, far-reaching, and complex. For example, there is a fascinating history of the FBI's surveillance of Al Capone and other Chicago gangsters of the early 20th century, but we will leave that for the city’s historians. To grapple with Chicago's current surveillance politics, a glance at a more recent history is instructive. Begin with the civil rights era and the surveillance and targeting of the Black Panther Party through COINTELPRO, including the murder of Black Panther Illinois chapter chair Fred Hampton in 1969. In Judas and the Black Messiah, director Shaka King offers a powerful biopic of Fred Hampton and his legacy. The surveillance of Chicago’s formidable labor and anti-war movements of the 1960s and 1970s has also been thoroughly documented. Scholar Katherine Perrotta provides a detailed account of this riveting history focused on FBI actions against the War Resisters League. Perrotta details how the group's activities were monitored by federal agents, as well as the legal challenges the group faced. The presence of the FBI in Chicago was well understood:
“In June 1969, the ACLU challenged the Justice Department’s use of warrantless wiretapping of certain groups that were justified on the grounds of national security. The FBI admitted to wiretapping in Chicago, claiming that government eavesdropping without warrants was necessary in ‘foreign intelligence matters.’”
There were similar acts of surveillance in Chicago associated with labor politics, particularly during the McCarthy era. A cursory review of this history makes evident that Chicago is and has always been an intensely political city. There is a deep history of repression and a just as deep–if not deeper–legacy of resistance. Surveillance of dissidents and those deemed a threat is a key part of all that history, stretching into the present. Just this past summer in Chicago, there was reporting on the FBI monitoring of a bookstore in the predominantly Latinx community of Pilsen. Particularly given the atrocities we are witnessing right now in Palestine, we have to note the surveillance of Muslim and Middle Eastern groups in Chicagoland. In particular, the FBI’s surveillance of the majority-Palestinian Muslim community in the Chicago suburb of Bridgeview has received significant attention and was the subject of the award-winning film The Feeling of Being Watched, written and directed by Assia Boundaoui [6].
While it's critical to examine the surveillance conducted at the local level, it's equally important to understand that Chicago’s high-tech network of surveillance extends far beyond the city limits. The highly networked nature of this technological infrastructure makes it much more powerful and expansive than earlier surveillance regimes, because the “watchers” (local, state, and federal agencies like ICE and the FBI, as well as private technology companies) can collaborate in real time, across state lines and time zones, to monitor and control people. In terms of individual civil liberties and privacy, these densely interconnected, tech-powered surveillance dragnets present novel legal questions and risks.
What is unique and somewhat perplexing about Chicago is that, by and large, the policy context of Illinois leans towards the protection of individual privacy. In particular, the Biometric Information Privacy Act (BIPA), the state’s 2008 biometric privacy law [7], has proven to be a thorn in the side of the biometric tech industry:
Since BIPA went into effect in 2008, it has proved to be one of the country’s strongest privacy laws. Facebook recently agreed to pay $550 million to settle a class-action lawsuit alleging that the facial recognition features on its social media platform violated the law. BIPA stands out among U.S. privacy laws as one of the few that specifically address the use of facial recognition.
This underscores how important the legal and policy realm is in ongoing efforts to rein in the negative impacts of technology on communities. The need for legal and civil rights frameworks to be updated to meet the realities of our increasingly digital and computational society has been commented on extensively. And specifically, there is a need for smart and ethical regulations and laws designed to protect individuals from the data-hungry tech industry. In Illinois, BIPA has been the legal foundation for many class-action lawsuits against big tech companies, employers, and universities. Northwestern University, where we work, has faced a lawsuit filed in 2021 alleging it illegally collected and used students’ “biometric data” through its use of remote proctoring software. While BIPA has been an important legislative defense against the biometric industry specifically, it doesn’t provide protections against the other kinds of surveillance we’ve discussed so far, such as ShotSpotter. For now, much of that work is being done by artists, activists, scholars, and journalists who have elevated the public's understanding of the harmful effects of the technology industry.
While we recognize and applaud the recent increase in public discussion on the societal implications of technology, young people in Chicago, as elsewhere, are largely kept away from such topics. Where technology is concerned, the focus is often restricted narrowly to teaching students how to code. In fact, in 2016 computer science became a graduation requirement in Chicago Public Schools, and since 2023 Illinois state law requires that all high schools in the state offer computer science courses. In Chicago, advocates for computer science education are a motley crew including industry partners like Apple, university partners including our own Northwestern University, as well as local artists and activists like Chance the Rapper. But while expanding access to computer science education–and by extension careers in science and technology–is a noble and important endeavor, we should also be asking: what is really driving Big Tech companies (and the US military) to invest in CS education? And further, where in the K-12 curriculum do students get to explore the ethics and politics of tech, or how technology is impacting their schools and communities?
Take, for instance, Chicago’s notorious gang database, maintained by the Chicago Police Department and connected to predictive policing efforts like the department’s Strategic Subject List, a risk-scoring system developed in 2012 with support from academics at the Illinois Institute of Technology. Central to these efforts was the use of advanced data analytics to assist law enforcement in identifying and apprehending potential gang members. Many of those marked as “gang-affiliated” were children and youth. During the first iteration of YPRPT in 2020–21, one filmmaking team created a short documentary about the gang database, called Targeted. In the closing scene of the film, one student reflects on how being marked as a gang member threatens their future:
“We still have stuff that we can learn about this gang database… we should be curious about it ... I’m curious about honestly why they made it. ‘Cause, former gang members, that wanted to change they life, some have, and they still labeled as gang members. So it also messes up they names even if they change. They still have people [in the database] that’s like, 200 some years old. That shows you right there that there ain’t no way off, even if you die.”
This also points to the fact that the database is notoriously error-riddled, with entries listing ages anywhere from 0 to 200. Despite evidence that the database was both inaccurate and racist–and effectively functioned to support the school-to-prison pipeline–several reports indicate the database was used by schools. In fact, according to a city inspector general report,
“the only other public agencies that queried the database more often than Chicago Public Schools were the Cook County Sheriff’s Department and the Illinois Department of Correction. The inspector general found CPS made over 87,000 queries from 2009 to October 2018.”
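To make concrete just how basic these data-quality failures were, here is a minimal sketch–our own illustration with invented records, not the database’s actual schema–of the kind of sanity check that would flag impossible entries like a 200-year-old “gang member”:

```python
# A minimal sketch (our illustration with invented records, not the
# database's actual schema) of a basic sanity check that would flag
# impossible entries like the 0- and 200-year-old "gang members"
# reported in the real database.
from datetime import date

records = [  # hypothetical entries mimicking reported errors
    {"id": "A", "birth_year": 1823},  # implied age: ~200
    {"id": "B", "birth_year": 2023},  # implied age: 0
    {"id": "C", "birth_year": 1990},  # plausible
]

def implausible(record, min_age=10, max_age=110):
    """Flag records whose implied age falls outside a plausible range."""
    age = date.today().year - record["birth_year"]
    return age < min_age or age > max_age

for r in records:
    if implausible(r):
        print(f"record {r['id']}: implied age "
              f"{date.today().year - r['birth_year']} is implausible")
```

That errors this easy to catch sat for years in a system queried tens of thousands of times by schools and law enforcement underscores the students’ point: once you’re in, there is no way off.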
Fortunately, in early September 2023, an oversight commission voted to “scrap” the database once and for all. This has been heralded as a victory for community groups who have been focused on this issue over the past several years, such as Organized Communities Against Deportations (OCAD), Black Youth Project 100, and community-based nonprofit Brighton Park Neighborhood Council.
While the defeat of the database represents an important victory, there are other facets to technology-powered surveillance in Chicago schools. For example, the district is in a three-year rollout of new and updated security cameras–a deal costing $76.3 million–which the district’s Chief of Safety and Security Jadine Chou says is a move toward expanding school safety and protecting students from harm. While some might hail public schools in the US as embattled yet fundamentally democratic institutions, meaningful community involvement in technology procurement decisions is uneven and often limited. Even school board members are sometimes left in the dark.

When the new security camera contract was brought before the board for approval in February 2023, Chicago Board of Education member Elizabeth Todd-Breland, professor of history and author of the book A Political Education: Black Politics and Education Reform in Chicago Since the 1960s, was caught off guard. In her remarks during the public board meeting, Dr. Todd-Breland asked CPS CEO Pedro Martinez for evidence that security cameras contribute to safety: “I go back to what Chief of Safety and Security Jadine Chou says all the time–what keeps us safe is relationships… so can you speak more to what feels like a big bet on technologies of surveillance?” In this meeting (and many others), Board Member Todd-Breland made it clear that the district’s stated goal of fostering safety by centering relationships with students and families is in tension with such a massive investment in surveillance technologies like cameras. Rather than allocating that money toward evidence-based interventions that nurture and support strong relationships–which CPS officials publicly agree build a foundation of safety–the district has continually moved to adopt costly surveillance technologies instead.
Debates about student safety have only intensified, particularly in the wake of devastating school shootings across the country. For Chicago Public Schools, the need to protect students from harm and violence has prompted investment in newer technologies alongside more old-school tools like surveillance cameras. For instance, social media monitoring has been embraced by the district in recent years, with multiple contracts with companies that claim to give school administrators and staff the tools necessary to intervene before violence occurs. Social media monitoring software is often installed on district-provided student devices like laptops or tablets. Every action or keystroke on a school device can be tracked, regardless of whether a student is at school or using the device during school hours. This means that students’ activity on their devices is potentially surveilled around the clock. Many of these software products employ machine learning to flag “concerning” behavior or language drawn from school assignments, editing history in Google Docs, search queries and browser history, and even students’ personal social media accounts. The imperative to protect students and stop violence before it happens must be weighed against the harmful impacts of constant monitoring, which, again, are disproportionately faced by racialized, disabled, low-income, and otherwise minoritized students.
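To see why such flagging so easily misfires, consider this minimal sketch–our own toy illustration, not any vendor’s actual product–of naive keyword-based monitoring:

```python
# A toy sketch of keyword-based monitoring (our illustration, not any
# vendor's actual product). Real systems add machine learning, but the
# failure mode is similar: context-blind matching on "concerning" terms.
FLAG_TERMS = {"gun", "shoot", "attack", "kill"}  # hypothetical watchlist

def flag_text(text: str) -> list[str]:
    """Return watchlist terms that appear in a student's text."""
    words = {w.strip('.,!?;:"\'').lower() for w in text.split()}
    return sorted(words & FLAG_TERMS)

# An ordinary history assignment trips the filter anyway.
essay = "My essay argues the attack on Pearl Harbor reshaped US policy."
print(flag_text(essay))  # ['attack']
```

A human reviewer can usually tell an essay from a threat, but at the scale of an entire district, flags like these can translate into real consequences for the students who trip them.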
These aren’t easy or comfortable conversations. Yet they’re necessary. Whether it's a 4-year-old stumbling into Ring doorbells on a neighborhood walk, or high school students being monitored by cameras and algorithms, advanced technologies are increasingly penetrating the lives and experiences of our young people. They deserve opportunities to examine, understand, resist, and reimagine the role of surveillance and tech in their schools and communities.