It sounds like a joke, but the idea actually makes sense: More bugs, not fewer, could theoretically make a system safer. Carefully scatter non-exploitable decoy bugs through software, and attackers will waste time and resources trying to exploit them. The hope is that attackers will get bored, overwhelmed, or run out of time and patience before finding an actual vulnerability.
Computer science researchers at NYU suggested this strategy in a study published August 2, and they call these fake vulnerabilities “chaff bugs.”
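To make the idea concrete, here is a minimal, hypothetical sketch in C of what a hand-planted chaff bug could look like. It is not taken from the researchers’ prototype, which injects such bugs automatically; it just shows the core property: a genuine out-of-bounds write whose reach is quietly constrained so it can only ever clobber bytes that no code reads.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical illustration of a "chaff bug" (not from the paper):
 * a real out-of-bounds write that is, by construction, harmless. */
struct request {
    char name[16];     /* the copy below can overflow this field...   */
    char scratch[240]; /* ...but only into this never-read dead space */
    int  is_admin;     /* security-relevant state stays out of reach  */
};

static void handle(const char *input, size_t len) {
    struct request r = { .is_admin = 0 };

    /* An inconspicuous cap keeps the overflow inside name + scratch
     * (256 bytes total), so is_admin can never be reached. */
    if (len > sizeof r.name + sizeof r.scratch)
        len = sizeof r.name + sizeof r.scratch;

    /* Deliberate bug: no bounds check against name[16]. A fuzzer will
     * happily report this out-of-bounds write; only after triage does
     * an exploit writer learn the clobbered bytes are dead data and
     * the "vulnerability" was a decoy. */
    memcpy(r.name, input, len);

    printf("hello %.15s (admin=%d)\n", r.name, r.is_admin);
}

int main(void) {
    const char probe[] = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";
    handle(probe, sizeof probe);  /* overflows name[16], harms nothing */
    return 0;
}
```

The planted flaw is real in the sense that bug-finding tools will flag it, but the cap on `len` guarantees the overwritten bytes never influence program behavior, which is roughly the property the researchers’ techniques aim to enforce automatically.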
Brendan Dolan-Gavitt, an assistant professor at NYU Tandon and one of the researchers on the study, told me in an email that they’ve been working on techniques to automatically put bugs into programs for the past few years as a way to test and evaluate different bug-finding systems. Once they had a way to fill a program with bugs, they started to wonder what else they could do with it.
“I also have a lot of friends who write exploits for a living, so I know how much work there is in between finding a bug and coming up with a reliable exploit—and it occurred to me that this was something we might be able to take advantage of,” he said. “People who can write exploits are rare, and their time is expensive, so if you can figure out how to waste it you can potentially have a great deterrent effect.”
Exploiting software bugs is a long, time-consuming process: it involves assessing the bugs a system might have (a step known as triage), determining whether they’re exploitable (that is, whether they can actually be used to harm the system), building a working exploit, and then deploying that exploit against the system. The researchers illustrate the attacker-bug relationship in a very scientific figure in the paper.
“Our prototype, which is already capable of creating several kinds of non-exploitable bug and injecting them in the thousands into large, real-world software, represents a new type of deceptive defense that wastes skilled attackers’ most valuable resource: time,” the researchers write.
“I’ve been really surprised (and gratified!) by how much interest there’s been in the paper since we posted it,” Dolan-Gavitt told me. “I think people like the sort of ‘so-dumb-it’s-smart’ angle – it’s really counterintuitive, but could actually work.” He said that his favorite reaction so far was this semi-viral tweet about how high the researchers must have been when they came up with this. “Being known as Prof. Huge Bong Rip has always been a life goal, really.”
Dolan-Gavitt said that because of its many limitations, the method probably won’t see widespread use anytime soon, and it might never be practical. To name a few of those limitations: It can’t be used on open-source software, you have to be positive that the chaff bugs are in fact harmless, it only works if it’s acceptable for the program to crash on malicious inputs, and you have to make sure the faux bugs are indistinguishable from naturally occurring ones. “But I think it’s still an idea that’s worth exploring, and it may find practical use in some environments,” he said.
Whether this is smarter or more efficient than trying to write airtight code in the first place remains to be seen. But as automated systems get smarter and faster at coming up with ways to defeat each other, it might be better to join them if we can’t beat them.