The most innocent clues can crack a case. In 2012, a holiday photo of a woman and child holding freshly caught fish ended up being a key lead in a child pornography investigation.
Found within a cache of illegal, explicit material, the photo would eventually point detectives to an outdoor camping site in Richville, Minnesota, and result in the victims’ rescue and the suspect’s conviction in December 2012.
But first, detectives had to determine where the photo was taken. To do that, they cropped out the fish, sanitized the image, and sent it to Cornell University for identification, Jim Cole, the National Program Manager for Victim Identification at US Immigration and Customs Enforcement (ICE), an agency within the Department of Homeland Security (DHS), recalled to Motherboard in a phone call.
The university determined the species of the fish, which is found in a particular region. Investigators then edited the suspect and victim out of the photo, Cole said, and distributed it to campground advertisers in the area, one of whom recognized the location.
When detectives arrived, the same photo was on the wall of the campground office, Cole added.
“It’s all about making the haystack smaller, so we can find the needle,” he said.
That’s just one example of how the Department of Homeland Security and other teams across the world are using image, video, and audio manipulation tools, some of them commercial, off-the-shelf software, to solve child pornography crimes.
The National Center for Missing and Exploited Children (NCMEC), a nonprofit that works with law enforcement, has reviewed an average of 500,000 files every week in 2016, according to a spokesperson.
The files that make up this “mind-blowing amount of data,” as Cole describes it, can surface during investigations, be sent in by communications and internet service providers, or arrive via tip-offs from the public.
Naturally, Cole has seen many of those traumatic images as part of his job. He and others have become somewhat desensitized to them, but “I still see material that shocks me,” he said.
The first step after receiving an image is determining whether the victim has already been identified. That might be done by comparing hashes, the cryptographic signatures of files.
“But that’s not 100 percent reliable,” Cole said. Indeed, pornographic images could be traded between people thousands of times, and different versions uploaded to sharing sites. If one pixel is different, a piece of metadata has been changed, or perhaps the file was compressed in a different way, the hashes may not line up.
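That brittleness is easy to demonstrate. Here is a minimal sketch in Python, where a byte string stands in for a real image file and flipping a single bit plays the role of a one-pixel edit:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of a byte string as hex."""
    return hashlib.sha256(data).hexdigest()

# A stand-in for the raw bytes of an image file.
original = b"...imaginary image bytes..."
tampered = bytearray(original)
tampered[0] ^= 0x01  # flip one bit, the equivalent of a one-pixel change

print(sha256_hex(original))         # one digest
print(sha256_hex(bytes(tampered)))  # a completely unrelated digest
```

The two digests share nothing in common, which is exactly what cryptographic hashes are designed to do, and exactly why they fail to match near-duplicate copies of the same abusive image.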
Instead, Cole might use PhotoDNA, a tool from Microsoft that creates a “fingerprint” based on the image, which then allows law enforcement to compare it to other similar examples, perhaps ones that have been resized or otherwise manipulated, and still get a match. If the victim is a previously unidentified one, investigators will upload the material to INTERPOL’s International Child Sexual Exploitation image database.
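PhotoDNA itself is proprietary, but the general idea of perceptual fingerprinting can be illustrated with the open-source imagehash library; the file names and the distance threshold below are hypothetical:

```python
from PIL import Image
import imagehash  # open-source perceptual hashing, a stand-in for PhotoDNA

# Hypothetical files: the same photo, one copy resized and recompressed.
h1 = imagehash.average_hash(Image.open("original.jpg"))
h2 = imagehash.average_hash(Image.open("resized_copy.jpg"))

# Unlike cryptographic hashes, perceptual hashes of similar images differ
# in only a few bits; subtracting two hashes gives their Hamming distance.
distance = h1 - h2
print(f"Hamming distance: {distance}")
if distance <= 5:  # an illustrative threshold, not a forensic standard
    print("Likely the same underlying image")
```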
When it comes to actually digging through the photo or video itself for clues, Cole uses a variety of programs for different stages of the investigation.
To start, there’s Analyze DI, which Cole says is useful for seeing different parts of the file in an intuitive way. From there, Cole can easily see which photos may have investigative value, and take them into Photoshop for enhancement or clarification. There’s also Adobe Premiere for video, and Adobe Audition for audio, programs that everyday computer users are already pretty familiar with. Other products from different companies are probably less well known, like Amped Five, a video enhancement suite, and the Forensics Image Analysis System (FIAS).
“Photoshop is kind of the workhorse,” Cole said.
Cole has a close relationship with Adobe, and has even suggested features or been given early access to new tools.
“We are going to pick apart that image. I’m going to look at every factor, I’m going to look at everything in it,” he added. That’s not just the content, but also EXIF data, which may reveal the type of camera used.
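EXIF fields are machine-readable, so pulling them out takes only a few lines. Here is a short sketch using the Pillow library, with a hypothetical file name:

```python
from PIL import Image, ExifTags

# "photo.jpg" is a hypothetical file name.
exif = Image.open("photo.jpg").getexif()

for tag_id, value in exif.items():
    # Map Pillow's numeric tag IDs to readable names like "Make" and "Model".
    name = ExifTags.TAGS.get(tag_id, tag_id)
    print(f"{name}: {value}")
```

When a camera or phone hasn’t stripped the metadata, fields like Make and Model point straight at the device that took the picture.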
In one set of images involving two victims, the suspect was wearing a charcoal gray sweatshirt with a hard-to-read navy blue logo on the left breast, Cole said. To pick out the text, Cole brought up the exposure, but that wasn’t enough. He then reduced the blur, applied a sharpening filter, and used a tool to manipulate the color gradient. Layer upon layer of technique, until the logo was revealed as that of a heating, plumbing, and electrical outfit in Maryland. The suspect turned out to be a former employee, and four victims were saved from future abuse.
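The exact Photoshop moves aren’t documented, but the same kind of layered enhancement can be sketched with the Pillow library; the file names and parameter values here are purely illustrative:

```python
from PIL import Image, ImageEnhance, ImageFilter

# "logo_crop.png" is a hypothetical crop of the region of interest.
img = Image.open("logo_crop.png")

# Stack adjustments the way an analyst would stack layers:
img = ImageEnhance.Brightness(img).enhance(1.8)  # bring up the exposure
img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150))  # sharpen
img = ImageEnhance.Contrast(img).enhance(1.5)    # stretch the tonal range

img.save("logo_enhanced.png")
```

Each step alone recovers little, but applied in sequence they can pull faint text out of a dark, blurry patch of fabric.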
Other cases have involved picking out details on a prescription pill bottle, or highlighting the regional fast food restaurant logo on two soda cups in the image background. But every instance is different, Cole said. Some can be solved in a matter of hours, some are still open over a decade later.
Some suspects even deploy the same tools as Cole to cover up their tracks—or what Cole calls “counter victim identification techniques.” On some dark web forums, child pornographers trade advice on how to make the best use of common software.
“There’s a tutorial out there on how to use Premiere Pro, and the tracking feature to basically obscure your own face [in a video],” Cole said.
And it’s the dark web where a lot of Cole’s cases originate. In fact, Cole dealt with imagery from Playpen, a child pornography site that the FBI seized in February 2015, and then used to deploy a hacking tool to suspected child pornographers.
“Every time I think I’ve seen it all, something comes along and proves that that’s not the case.”