Schools Spy on Kids to Prevent Shootings, But There’s No Evidence It Works

It was another sleepy board of education meeting in Woodbridge, N.J. The board gave out student commendations and presented budget requests. Parents complained about mold in classrooms. Then, a pair of high schoolers stepped up to the podium with a concern that took the district officials completely off guard.

“We have students so concerned about their privacy that they’re resorting to covering their [laptop] cameras and microphones with tape,” a junior said at the October 18, 2018, meeting.

Woodbridge had recently joined hundreds of other school districts across the country in subscribing to GoGuardian, one of a growing number of school-focused surveillance companies. Promising to promote school safety and stop mass shootings, these companies sell tools that give administrators, teachers, and in some cases parents the ability to snoop on every action students take on school-issued devices.

The Woodbridge students were not pleased.

“We just want to ask again: How are you going to assure our right to privacy when we have been having these problems and we have so many fears because of GoGuardian, and the fact that they can monitor everything that we see and we do?” the student asked the school board.

After a pause, board president Jonathan Triebwasser responded: “A very fair question. I don’t know enough about GoGuardian to give you a fair answer.” He asked the district’s superintendent to look into it.

The capabilities of software programs like GoGuardian vary, but most can monitor the user’s browsing history, social media activity, and location, and some even log keystrokes. That surveillance doesn’t stop at the school doors, but continues everywhere children carry their school-issued computers and whenever they log into school accounts.

The companies that make this software—popular brands include Securly, Gaggle, and Bark—say that their machine learning detection systems keep students safe from themselves and away from harmful online content. Some vendors claim to have prevented school shootings and intervened to save thousands of suicidal children.

There is, however, no independent research that backs up these claims.

The few published studies looking into the impacts of these tools indicate that they may have the opposite effect, breaking down trust relationships within schools and discouraging adolescents from reaching out for help—particularly those in minority and LGBTQ communities, who are far more likely to seek help online.

“I’m sure there are some instances in which these tools might have worked, but I haven’t seen the data and I can’t verify in any way that what they’re saying is correct, or that there weren’t other ways available to get that information without subjecting the entire school to that surveillance,” said Faiza Patel, director of the Brennan Center for Justice’s liberty and national security program, who researches surveillance software.

School spying software has spread quickly as districts have increasingly put personal laptops and tablets in the hands of students. Meanwhile, school officials are under intense pressure to protect their wards from explicit online content and, even more urgently, detect early signs of potential school shootings.

Bark says that its free monitoring software for schools protects more than 4 million children. Its tools have “prevented” 16 school shootings and detected more than 20,000 “severe self-harm” threats, according to the company’s homepage. From January through August 2018 alone, Bark claims, it identified five bomb and shooting threats, nine instances of online predators contacting children, 135,984 instances of cyberbullying, 309,299 instances of students using school accounts to talk about or buy drugs, 11,548 instances of children expressing desires to harm themselves or commit suicide, and 199,236 instances of children sharing explicit content.

Numbers like that are understandably convincing to district administrators and parents, especially when companies offer their products to schools for free. Bark spokeswoman Janelle Dickerson said Bark makes its money from the $9-per-month version of its tool that it sells to families. The paid version currently covers 200,000 children, a small fraction of the 4 million children watched by the free version in schools. Securly offers a paid premium product with more features than its free tool. Both companies categorically denied profiting from the data they collect on millions of students through their free offerings.

Upon closer inspection, the numbers Bark touts for its school software appear much more like marketing copy than legitimate data.

For one thing, the company’s numbers don’t always appear to be consistent. Earlier this year, Bark told TV stations in North Carolina and South Carolina that from May 2018 to May 2019, it had identified 14,671 instances of students expressing desires to harm themselves or commit suicide in those states alone.

When compared to the national statistics on its website, that would mean that the two states—which include just 50 of the more than 1,200 K-12 districts Bark claims as customers—produced a huge proportion of the incidents Bark flags across all 50 states.

The numbers suggest that during a 12-month period the company identified significantly more instances of kids contemplating self-harm in the Carolinas (14,671) than it did nationwide during an overlapping nine-month period (11,548). Similarly, the 50 districts in the Carolinas apparently produced 88,827 instances of cyberbullying during that year, equivalent to 65 percent of the 135,984 cyberbullying cases detected in all 1,200 Bark districts across the country during that same period. The rest of the data shared with the Carolina TV stations is similarly disproportionate.
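The disproportion is simple to check. Below is a back-of-the-envelope calculation using only the figures Bark itself published; the script and variable names are ours, offered as an illustration of the mismatch rather than an analysis of Bark’s underlying data.

```python
# Back-of-the-envelope check of Bark's published figures (numbers as
# reported above; this is an illustrative calculation, not Bark's data).

carolinas_self_harm = 14_671       # NC + SC, May 2018 - May 2019
national_self_harm = 11_548        # nationwide, Jan - Aug 2018

carolinas_cyberbullying = 88_827   # NC + SC, same 12-month window
national_cyberbullying = 135_984   # nationwide, 9-month window

# Two states alone report more self-harm flags than the whole country
# did over an overlapping period.
print(carolinas_self_harm > national_self_harm)  # True

# And about 65 percent of the national cyberbullying total.
print(round(100 * carolinas_cyberbullying / national_cyberbullying))  # 65
```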

Statistics like these have prompted academics and school policy officials to question the integrity and consistency of digital surveillance companies’ data.

“What is particularly challenging about this issue is the tremendous urgency school districts are being faced with to do something and do something now [about suicide and school shootings] … combined with a tremendous lack of evidence that these tools do what they say they do,” said Elizabeth Laird, the senior fellow for student privacy at the Center for Democracy & Technology.

“If there is evidence or research that is available, it’s provided by the vendor. It’s not provided by an independent researcher.”

Bark’s claims also dwarf those of some of its larger competitors, suggesting a severe lack of consistency across the industry when it comes to defining what constitutes a threat.

For example, Securly, which also offers many of its products to schools for free, says it serves more than 10 million kids across 10,000 districts. During the last school year, its artificial intelligence systems and human monitors detected a comparatively minuscule 465 “imminent threats” to students—86 percent of those cases involved instances of potential self-harm, 12 percent violence toward others, 1 percent cyberbullying, and 1 percent drug-related comments, according to Mike Jolley, a former North Carolina school principal who now serves as Securly’s director of K-12 safety operations.

Asked what evidence Bark relies on to determine whether its products make schools or students safer, a company spokeswoman responded: “The primary evidence is the testimonials we receive from parents and schools daily.”

She added that Bark has never participated in an independent study of its services because “We do not retain data nor would we share user data with a third party.” However, the company does retain data for the purpose of publishing aggregate marketing statistics.

Other companies, like GoGuardian, don’t publicize their threat detection statistics as part of their marketing material. GoGuardian did not respond to multiple requests for an interview or written questions.

Motherboard signed up for Bark’s free service, giving the company access to an email account, Twitter, Spotify, Google Drive, and web browsing history. Inexplicably, the monitoring extension for the Chrome browser didn’t appear to work, even after Motherboard verified with a Bark representative that it was installed correctly. During the month-long experiment the extension didn’t flag a single issue, despite a reporter visiting numerous sites that included the same keywords and content that Bark flagged in emails.

During the month of the experiment, Bark flagged 78 potential issues, which were summarized in daily emails sent to a Motherboard account registered as a parent. The vast majority of the flagged content came from daily email roundups from news outlets—including the Washington Post, MIT Technology Review, and others. This echoes a complaint made by students in Woodbridge and other school districts—that surveillance software often blocks access to legitimate news and educational websites.

After filtering out the newsletters, a few remaining activities might have given some parents of minors genuine concern: Drake lyrics and an email conversation with a catering company that included a wine and beer order.

But most of what was left merely demonstrated the limits of language analysis algorithms when it comes to understanding context. Bark flagged a retweet about the U.S. withdrawing troops from Syria as hate speech and cyberbullying. It deemed a Seamless ad for the restaurant Jerk off the Grill to be sexual content.
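Misfires like that are exactly what a filter produces when it matches flagged phrases without weighing context. Here is a minimal sketch of that failure mode—an illustration, not a reconstruction of Bark’s proprietary system, and the blocklist is hypothetical:

```python
# Minimal sketch of context-blind keyword flagging -- an illustration
# of the failure mode, not a reconstruction of any vendor's system.

FLAGGED_PHRASES = {"jerk off": "sexual content"}  # hypothetical blocklist

def flag(text: str) -> list[tuple[str, str]]:
    """Return (phrase, category) pairs for every flagged phrase in text."""
    lowered = text.lower()
    return [(phrase, category)
            for phrase, category in FLAGGED_PHRASES.items()
            if phrase in lowered]

# A restaurant ad trips the same rule as genuinely explicit content.
print(flag("Lunch special today at Jerk off the Grill!"))
# [('jerk off', 'sexual content')]
```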

Slightly humorous miscategorizations like these may be warnings of more significant issues with algorithms designed to detect violent or worrying behavior.

Natural language processing algorithms have been shown to be worse at recognizing and categorizing African American dialects of English. And popular tools used to screen online comments for hate speech and cyberbullying tend to disproportionately flag posts from African Americans.

“One of the things to kind of understand about surveillance software is that it’s going to have a huge number of false positives,” Patel said. “The question becomes: Well, what do you do when kids are flagged and how does the school react to that? We know that school discipline disproportionately targets African American and Latino youth, regardless of the offense.”

Several school surveillance software companies claim that their algorithms go beyond simple keyword identification—such as flagging when a student writes “bomb” or “gun”—and analyze the context of the message along with recent web activity. How they do that, though, is considered a proprietary secret.

“With sentiment analysis, a student can say ‘I can’t take this anymore, I want to end it all’ … something that’s just looking for keywords may not catch that,” said Jolley, Securly’s director of K-12 safety operations.
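Jolley’s example shows why keyword lists alone fall short: the message contains none of the obvious terms a simple filter would catch. The sketch below contrasts exact keyword lookup with broader phrase patterns. Both rule sets are hypothetical, and real sentiment analysis would use a trained model rather than hand-written patterns; vendors treat their actual methods as proprietary.

```python
# Sketch contrasting naive keyword lookup with broader phrase patterns.
# Both rule sets are hypothetical; vendors' real models are proprietary.
import re

SELF_HARM_KEYWORDS = {"suicide", "kill myself"}  # naive blocklist
SELF_HARM_PATTERNS = [
    re.compile(r"\bend it all\b", re.IGNORECASE),
    re.compile(r"\bcan'?t take (this|it) anymore\b", re.IGNORECASE),
]

message = "I can't take this anymore, I want to end it all"

keyword_hit = any(word in message.lower() for word in SELF_HARM_KEYWORDS)
pattern_hit = any(pattern.search(message) for pattern in SELF_HARM_PATTERNS)

print(keyword_hit)  # False -- no blocklisted keyword appears in the text
print(pattern_hit)  # True  -- phrase patterns catch the expressed intent
```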

But the task becomes much more difficult when you consider LGBTQ students, or those from other marginalized groups, who rely on the internet for health information and positive communities.

Valerie Steeves, a criminologist at the University of Ottawa, has researched the effects of school surveillance on children extensively. She’s currently gathering data from students exposed to similar tools in Eastern and Central Canada.

“The trans and LGBTQ kids we talk to … they articulate very clearly that these kinds of technologies (internet forums and social media) have been great for them because they need some kind of place to find community and someplace to go to find health information,” Steeves told Motherboard. “And yet, they find they’re under so much surveillance that it affects them in ways that shuts them out of those resources. They learn not to look. They learn not to trust online public spaces.”

Jolley acknowledged that Securly is grappling with just that problem.

“It’s hard because students do use derogatory slang … and they say ‘Johnny you’re gay,’ and they may mean that in a bullying aspect,” he said. “We are actively working on ways to continue [improving our algorithms]. We have made efforts.”

“I feel like we’re doing a lot of positive things for student learning and how things are working at the school but I don’t have hard data,” he added.

There is no definitive study proving students perform worse when schools monitor their web activity and personal messages—nor are there any that show monitoring makes them safer, according to experts.

But there are real incidents that justify students’ fears—like the ones that prompted Woodbridge high schoolers to stick tape over their webcams. Woodbridge Superintendent Robert Zega initially agreed to an interview for this article, but did not speak to Motherboard before publication.

Nine years before the Woodbridge students spoke at their local board of education meeting, sophomore Blake Robbins was called into an assistant principal’s office in nearby Lower Merion, Pennsylvania. She accused him of dealing drugs. The evidence: a photo of Robbins sitting in his room with brightly colored pill-like objects, taken when the district remotely activated the webcam on his school-issued laptop using device monitoring software called LANrev.

The picture was part of a cache of 56,000 photographs the district took of students without their knowledge. The cache included sensitive material, such as images of Robbins standing shirtless in his room.

The “drugs” in the picture turned out to be candy. Following a federal class action lawsuit, the Lower Merion School District settled for $610,000. Robbins received $175,000 and a second student who joined the case received $10,000. The rest of the settlement covered their lawyers’ fees.

But the spyware that enabled the covert surveillance was bought and rebranded by Vancouver-based Absolute Software. It is the precursor to software that is now tracking devices in a number of school districts, including Baltimore Public Schools.

Egregious invasions of students’ privacy, like in the Lower Merion case, will grab headlines. But school communities should be equally worried about the long-term effects of using surveillance software on children, said Andrew Hope, a sociologist at Federation University, in Australia, who studies youth surveillance.

“Our contemporary surveillance technologies indoctrinate our students, our citizens … into a culture of observation in which they learn to be watched and are accepting of unremitting surveillance as a norm,” he said. “There is a behavioral modification that happens, but we’re not entirely sure what the outcomes of such a modification might be. Are we teaching them to be surveilled? To be producers of data in a surveillance economy?”