
Here’s Why I’m Campaigning Against Facial Recognition in Schools

Image: Brooke Trogdon (they/them), University of North Georgia

Erica Darragh is a community organizer and harm reductionist from Georgia. She currently serves on the board of directors of Students for Sensible Drug Policy and has coordinated campus organizing for the campaign to ban facial recognition. You can find her on Twitter at @erica_darragh.

The surveillance industry has a practice of monetizing crises, and the epidemic of gun violence in schools is no exception.


In the wake of Parkland and other tragedies, surveillance vendors quietly began marketing campaigns that advertise facial recognition as a high-tech security solution. Some public school districts, like Lockport City in New York and Broward County in Florida, bought into this narrative and installed facial recognition systems. However, the US government’s own research challenges both notions underlying these claims: that mass surveillance prevents violent attacks, and that facial recognition is a reliable way to identify potential security threats.

Facial recognition systems sold to schools are often advertised as comparing people captured on security cameras against a whitelist (individuals who are allowed on campus) or a blacklist (individuals who are not). But neither list can identify the typical school shooter: a current student, who by definition belongs on the whitelist.
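To make the mechanics concrete, here is a minimal sketch of how such list-based matching generally works. Everything in it is an assumption for illustration: the embedding step, the 0.8 similarity cutoff, and the function names stand in for whatever a given vendor actually ships.

```python
import numpy as np

# Illustrative only: assume some face-embedding model has already mapped
# each photo to a fixed-length vector. 0.8 is an arbitrary cutoff.
SIMILARITY_THRESHOLD = 0.8

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_face(face_vec: np.ndarray,
                  whitelist_vecs: list[np.ndarray],
                  blacklist_vecs: list[np.ndarray]) -> str:
    """Compare one captured face against the two enrollment lists."""
    if any(cosine_similarity(face_vec, w) >= SIMILARITY_THRESHOLD
           for w in whitelist_vecs):
        # A current student matches the whitelist, so the system waves
        # through exactly the person this pitch claims it will catch.
        return "allowed"
    if any(cosine_similarity(face_vec, b) >= SIMILARITY_THRESHOLD
           for b in blacklist_vecs):
        return "flagged"
    return "unknown"
```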

Facial recognition systems that claim to detect aggression or other complex emotions are also highly suspect. An extensive review of emotion recognition research found that facial expressions are not reliably linked to specific emotional states, making the surveillance vendors’ claims no better than pseudoscience. And as horrifying as mass shooting incidents are, they are too statistically rare for any type of algorithm to predict them. This is clear even to surveillance vendors and advocates, who concede that facial recognition cannot actually stop school shootings.
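The base-rate problem behind that concession can be shown with a few lines of arithmetic. The numbers below are hypothetical, chosen only to show the shape of the problem: when an event is vanishingly rare, even a detector with impressive headline accuracy produces almost nothing but false alarms.

```python
# Hypothetical rates to illustrate the base-rate problem; none are measured.
true_positive_rate = 0.99    # assume the detector catches 99% of real threats
false_positive_rate = 0.01   # assume it flags 1% of innocent people
base_rate = 1e-6             # assume 1 in a million scanned people is a threat

# Bayes' rule: probability that a flagged person is actually a threat
p_flagged = (true_positive_rate * base_rate
             + false_positive_rate * (1 - base_rate))
ppv = true_positive_rate * base_rate / p_flagged

print(f"Share of flags that are real threats: {ppv:.4%}")  # ~0.0099%
```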

Facial recognition technology is also wildly unreliable: research has demonstrated that facial recognition used by police in the U.K. incorrectly identified targets up to 98 percent of the time, and error rates climb further when the target is not a white male. In 2019, federal research found that even top-tier facial recognition algorithms misidentify the faces of people of color at 10 to 100 times the rate at which they misidentify white faces. This mirrors a well-documented cognitive bias that already contributes to the over-criminalization of communities of color.

Implementing facial recognition risks intensifying the harassment and false identification of the most vulnerable members of society. Considering that police shootings are a leading cause of death for young black men in the United States and that ICE has deported US citizens who were misidentified as undocumented immigrants, this is literally a matter of life and death for people of color.

Because facial recognition is almost entirely unregulated, private surveillance vendors operate with minimal oversight. The scale of the threat this poses made national news in recent weeks when the New York Times published an article about Clearview AI and its relationship with law enforcement agencies. Clearview uses artificial intelligence to analyze a database of over three billion images scraped from Facebook, Instagram, public mugshot databases, and other websites. Where possible, the images are linked to personal information found on social media. The software compares incoming images against the database, allowing a user to identify people in real time as they pass in front of a camera.
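The general pattern described in that reporting (embed scraped photos, store them alongside any profile information found with them, and answer queries by nearest-neighbor search) can be sketched in a few lines. This is an illustration of that pattern under stated assumptions, not Clearview's actual code; every name here is hypothetical.

```python
import numpy as np

# Sketch of a scraped-gallery face search, based only on the pattern the
# reporting describes; the structures and names are assumptions.
gallery_vecs: list[np.ndarray] = []  # embeddings of scraped photos
gallery_meta: list[dict] = []        # e.g. {"source_url": ..., "name": ...}

def add_scraped_photo(embedding: np.ndarray, metadata: dict) -> None:
    """Store a normalized embedding next to whatever metadata was scraped."""
    gallery_vecs.append(embedding / np.linalg.norm(embedding))
    gallery_meta.append(metadata)

def identify(probe: np.ndarray, top_k: int = 5) -> list[dict]:
    """Return metadata for the k gallery photos most similar to the probe."""
    probe = probe / np.linalg.norm(probe)
    scores = np.array([float(v @ probe) for v in gallery_vecs])
    best = np.argsort(scores)[::-1][:top_k]  # highest similarity first
    return [gallery_meta[i] for i in best]
```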

To make matters worse, access to the software is for sale to just about anyone. On February 27, Clearview reported that it had been hacked and its client list compromised. Soon after, BuzzFeed published the client list, which included higher education institutions like the University of Alabama, Florida International University, and the University of Minnesota, as well as big corporations like Verizon and Walmart, and multiple law enforcement agencies, including ICE. This report was published just days after UCLA reversed its decision to implement facial recognition on campus, following community backlash amplified by Fight for the Future’s ongoing campaign to ban facial recognition on campus. UCLA joined over fifty other institutions that have pledged not to use the technology, and on March 2 young people across the country participated in a day of action against facial recognition on campus.

Students for Sensible Drug Policy organized several of those events, including one at my alma mater, the University of North Georgia. The chapter on my campus was founded during the height of the opioid crisis, and we lobbied successfully for medical amnesty and naloxone access legislation in the State of Georgia. Recognizing that drug policy has been used as a form of institutionalized oppression, I felt compelled to play a role in preventing the digital evolution of systemic discrimination. Because of my time organizing in drug policy reform and harm reduction, I came to the campaign to ban facial recognition with a focus on racial justice and consent education.

The fact that personal data can be easily harvested and added to a privately owned database underscores how our understanding of consent has not evolved with technology. Consent is informed, revocable, and negotiated by parties with equal power (or with consideration of power dynamics). Given that few people fully understand the risks of handing over biometric data to private companies, these decisions are far from informed. And because critical data-sharing details are buried in miles of legal jargon, and because using services and platforms requires complete acceptance of their terms and conditions, our implementation of digital consent is still in its infancy.

In 2017, data surpassed oil as the world’s most valuable commodity, and without meaningful legislation governing the flow of data, it is being brokered and sold to the highest bidder. We already know that commercial entities share (and sell) data without individual consent, and there are currently few regulations that address this. Even when “consent” is granted to these companies and regulations are implemented, individuals will likely never know how their data is being used until there is a high-profile data breach or a horrifying scandal, such as Facebook’s Cambridge Analytica incident.

Critics of facial recognition often conclude that the solution is increased regulation, but the digital rights advocacy nonprofit Fight for the Future recognizes that regulations cannot fix a fundamentally flawed and unjust technology. Like nuclear and biological weapons, facial recognition poses a unique threat to humanity that far outweighs any potential benefits.

Over forty civil liberties organizations, including the ACLU and Color of Change, signed an open letter condemning facial recognition on campus. Over 150 faculty members signed a separate open letter in solidarity with student organizers like myself. In contrast to this support, the campaign sparked controversy at Oakland Community College, where administrators attempted to ban an educational forum and censor a student government resolution. Administrators later reversed course entirely after the ACLU and the Detroit Justice Center identified the censorship as a violation of the First Amendment.

Several other campuses are working to pass student government resolutions that ban facial recognition on campus, and more educational events are planned throughout the spring.

Young people across the country are demanding meaningful policy change, and I encourage others to join our campaign, sign the petition, and work to catalyze policy change in their own communities.

We have already given up so much privacy and liberty for the sake of “security,” but facial recognition must be where this stops. Facial recognition does not improve security and may actually make it worse. It’s also a technology that, once mainstreamed, can never be taken back. It is fundamentally coercive for educational institutions to require students to participate in biometric surveillance in order to attend class.

While we wait for the government to ban facial recognition at the federal level, young people can take control of the narrative and demand policies that ban the technology in school districts and on college campuses. Surveillance dystopia is not inevitable, but we must act now.