Image: Submarine Cable Map
A few weeks ago, more NSA leak-related news broke: it turned out the US government was tapping into undersea cables. The operation’s code name? Fairview. The revelation was met with some skepticism. Was it just hyperbole?
At Motherboard, we wondered whether this kind of surveillance would be hopelessly inefficient, and whether it would even be possible to filter the several terabytes of information that pulse through the cables each second. As it turns out, despite the skepticism, monitoring and recording a significant chunk of the information exchanged through the cables, or perhaps all of it, is possible. The hardware and software are out there, and acquiring the capability and scale is really just a matter of money.
Globally, there are around 200 fiber optic cables linking every continent but Antarctica. Through them, countries jack in. Without them, a near-total blackout. With the vastness of Earth’s oceans comes the logistical nightmare of patrolling the cables to prevent divers or submersibles from severing or tapping them. Sound far-fetched? Historically, submarine cables have been targets in times of war. In World War I, British and German forces engaged in cable sabotage. And when it comes to surveilling the data flowing between continents, what we don’t know is how easily the metadata (and other data) can be tapped and filtered. Indeed, most of us know very little about the technology it would take to pull it off.
We spoke to security expert Anton Kapela at 5NINES about the hardware, software, and storage capacity required to tap and filter the amount of information flowing through the undersea cables. Kapela knows a thing or two about hacking internet communications. Several years ago at Defcon, he and Alex Pilosov, CEO of Pilosoft, proved that they could intercept internet traffic on an NSA-level scale. Kapela and Pilosov massaged the internet routing protocol BGP (Border Gateway Protocol) in a way that would allow a hacker to peer into unencrypted communication anywhere around the globe. What’s more, the two proved that they could alter communications in the process. Their epic hack was profiled in Wired, and scared the wits out of internet users the world over.
(Full Disclosure: I grew up with Mr. Kapela, and jammed with him on more than one occasion. That is, when he wasn’t mastering the art of computer programming, and I wasn’t reading Vonnegut or Stephen King’s The Dark Tower series.)
Tap, Tap, Tap
I asked Kapela if it would be inefficient to tap undersea cables, and he almost laughed.
“I think the notion of efficiency might be a misnomer or mischaracterization when talking about tapping undersea cables,” he said in a phone interview. “Let’s say you want to tap the trans-Atlantic, trans-Pacific cables and all other pathways—and you want to tap all of them—the number of cables is pretty small. It’s a four-digit number or less.” In other words, there aren’t many undersea cables on the planet, and their locations are mapped. In March of this year the Egyptian navy caught three divers attempting to sever the submarine cable SEA-ME-WE 4, which runs from France to Singapore and connects Egypt to the internet. These were men in a dinghy. Think of what well-budgeted government agencies can do.
“We casually refer to these things as giant gobs of connectivity in that there is a lot of delivery and carrying capacity, but it’s physically not that extensive,” he said. He noted that the biggest system he is aware of features eight to sixteen individual fibers—fewer than those used between metro cities or in any land operation. This has to do, according to Kapela, with the nature of undersea engineering, which is very expensive and, in his words, a “quirky thing to get right.”
“The constraints are many,” said Kapela. “By constraints I primarily mean power. The way they build these things demands that systems feature some sort of mechanism to transport power to regeneration elements across the span.”
He pointed to the cable systems between Nova Scotia and the United Kingdom, which are run by several different operators. Amplifiers, or signal regeneration mechanisms, are placed at specific intervals along the cable. Think of them as little units that give a burst of speed to the fiber optic signal, kind of like a turbo boost to a vehicle in a video game, or the various stages of a rocket. This happens all along the cable until the data reaches the continental landing station. The intervals can be as short as 70 km, for example, or stretch to 1,000 km or more on recent designs.
“When you regenerate, you have to open that bundle up, patch it into some chunk of hardware, and they’re large and complicated, and limited by the power you can deliver,” said Kapela. “The power is literally delivered end to end, from continent to continent. Thousands of volts of potential exist across the US side and UK side, powering all of the optics, regeneration hardware, and laser devices in these periodic cut-ins.”
“The spans are getting bigger and bigger as amplifiers get better and better and the fiber can take higher power,” Kapela said.
The incentive is high for undersea cable system owners to cram as much information as possible into the smallest number of fiber optic strands. And these strands, few in number, carry hundreds of gigabits of data bi-directionally.
Kapela says that if you have an aggressively engineered undersea system today, you would be able to engineer something in the 10 to 40 gigabit per second range per sub-band of fiber. That means a lot of wavelengths of light. Multiplexed systems have somewhere between 40 and 80 separate channels of light operating in parallel, and each channel can carry 40, maybe even 100 gigabits per second now.
“The fibers start with simple flashlight concepts—light goes on, light goes off,” said Kapela. “1 is light on, 0 is light off. This flashing gets faster and faster and faster as you tend to drive more data down the cable.”
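As a toy illustration of that on-off flashing, here’s a short Python sketch. It has nothing to do with real line coding or modulation on undersea systems; it just renders the bits of a couple of bytes as pulses:

```python
# Toy sketch of the "light on / light off" idea: render each bit of a byte
# stream as a pulse. Real systems use far more sophisticated modulation.

def to_pulses(data: bytes) -> str:
    """Render each bit as '|' (light on, 1) or '.' (light off, 0)."""
    return "".join("|" if bit == "1" else "."
                   for byte in data
                   for bit in format(byte, "08b"))

print(to_pulses(b"Hi"))  # prints .|..|....||.|..| -- one pulse per bit
```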
A photo of the innards of undersea cables (Image: Energy.Hawaii.gov)
Kapela’s conservative estimate of how much a fiber might carry—in one direction—is 3,200 gigabits. Four fibers transmitting and four receiving works out to roughly 12,800 gigabits, or nearly 13 terabits, per cable. “That would be a lot of gigs to look at, but not all of these are fully utilized,” said Kapela. “Looking at peak capacity of the cable and looking at what has been sold on it are very different.”
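As a rough sanity check on those figures, here is a back-of-the-envelope sketch in Python. The channel count, per-channel rate, and four-fiber-pair layout come from the ranges quoted above; nothing here describes any specific cable:

```python
# Back-of-the-envelope capacity estimate using the figures quoted above.
# These numbers are illustrative, not specs for any particular cable.

channels_per_fiber = 80        # parallel channels of light per fiber (40-80 quoted)
gbps_per_channel = 40          # per-channel rate in Gbit/s (40-100 quoted)
fibers_per_direction = 4       # four fibers transmitting, four receiving

per_fiber_gbps = channels_per_fiber * gbps_per_channel      # 3,200 Gbit/s
per_direction_gbps = per_fiber_gbps * fibers_per_direction  # 12,800 Gbit/s

print(f"Per fiber:     {per_fiber_gbps:,} Gbit/s")
print(f"Per direction: {per_direction_gbps:,} Gbit/s (~{per_direction_gbps/1000:.1f} Tbit/s)")
```

Plugging in the high end of both ranges pushes the total toward the low tens of terabits per second; either way the peak stays in the same ballpark.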
Take the US to UK cable span, for instance. Kapela says there is plenty of opportunity for useless, discardable data, or 0’s. If the cable is only 40 percent sold (that is, carrying useful information), the rest—idle frames that don’t come from any user, or the spaces between packets—can be discarded as useless. Someone analyzing the data could then rapidly chop up what remains and sort it into very small boxes.
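Continuing the same back-of-the-envelope math, and assuming the illustrative 40 percent figure above, the volume actually worth capturing shrinks considerably before any clever filtering even starts:

```python
# Rough sketch: how much of a cable's peak capacity carries traffic worth
# capturing, using the illustrative 40% "sold" figure from above.

peak_tbps = 12.8        # peak capacity per direction, from the earlier estimate
fraction_sold = 0.40    # share of capacity carrying real user traffic (illustrative)

useful_tbps = peak_tbps * fraction_sold
idle_tbps = peak_tbps - useful_tbps

print(f"Useful traffic: ~{useful_tbps:.1f} Tbit/s")
print(f"Idle/filler:    ~{idle_tbps:.1f} Tbit/s (discardable before any analysis)")
```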
“So, if someone is looking at the highest level, or superficial features, like a heat-map of internet traffic, you’re not talking about a lot of data,” said Kapela, who used 5NINES’s system as a point of comparison.
“To put it in perspective, here at 5NINES, we serve one city and not even that many people; we have 16,000 or so individual IP addresses on our network, and those aren’t even all utilized,” said Kapela. “Right now it’s maybe a few thousand. The communication volume that our entire network exchanges—and I’m watching it right now on a program designed to aggregate and look at all these connections coming in and out of our network borders—is so easily aggregated and reduced that I only need one computer to look at every piece of communication between every computer on our network and the internet at large, and vice versa.”
In other words, Kapela—one man with one computer—can store eight months of traffic logs of everything exchanged across a network that spans an entire city. “That’s not even trying, and I have no budget by the way,” said Kapela, who noted that every ISP does this type of stuff.
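For a sense of why a single machine is enough, here is a rough, assumption-heavy estimate of what months of flow records might occupy. The record size and flow rate below are guesses for illustration, not figures from 5NINES or FlowTraq:

```python
# Rough storage estimate for months of flow records (metadata, not packet
# contents). All inputs are illustrative guesses, not 5NINES or FlowTraq figures.

bytes_per_record = 64          # a NetFlow-style flow record is tens of bytes
flows_per_second = 5_000       # assumed average flow rate for a small-city ISP
seconds = 8 * 30 * 24 * 3600   # roughly eight months

total_bytes = bytes_per_record * flows_per_second * seconds
print(f"~{total_bytes / 1e12:.1f} TB of flow logs")  # single-digit terabytes
```

Even with generous margins on those guesses, the result lands in single-digit terabytes, which is why flow metadata for a whole city fits comfortably on one box.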
If he had a budget of billions of dollars, Kapela said there would be a lot of possibility in doing a “very comprehensive approach to not just the recording of the communication, but also the actual content of the communication.” That’s right: not just metadata, but internet user content—emails, text messages, etc. He said the scaling out of this technology is pretty trivial. One really just needs to buy and multiply hardware.
“The software that I am currently running [the commercial application FlowTraq] has built-in clustering features, so if I have a bigger network, it would support that,” said Kapela. “Now I can ask bigger, crazier, and maybe even more privacy-busting questions of this data, which has already been produced and scavenged by the routers in our network, and recorded centrally; but, now all of a sudden it has a certain analysis power it didn’t have before.”
Getting Content
“I would argue—and there is data to support it—that you can monitor and record a substantial part or maybe all of the exchanged information,” said Kapela, in a particularly startling moment in our conversation. This would only require a small amount of electrical power, making a surveillance operation incredibly efficient.
This could be done on a large scale with something like an ASIC, a chip dedicated to a specific searching, lookup, or comparison task. Such a chip can work at low levels of electrical power. What’s more, it is readily available.
Kapela also pointed to the energy efficiency of Juniper Networks routers when considering how easy it would be to tap and filter undersea cable data. “On their website they claim 3.38 watts per gigabit of traffic moved through their routers, and the analysis that the router does can be huge chains of events just to pass one packet into one interface and out another,” said Kapela, who pointed this out not because it’s some remarkable feat. Quite the contrary: if Juniper can build a full-scale router with all of the usual ISP features, and it can pass a gigabit of traffic through with less than 4 watts spent looking up those packets, then one knows it can be done easily.
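To put that wattage in context, here’s a quick back-of-the-envelope calculation combining the 3.38 watts-per-gigabit figure Kapela cites with the earlier, purely illustrative cable-capacity estimate:

```python
# Rough power budget for processing one cable's worth of traffic, combining the
# 3.38 W per Gbit/s figure Kapela cites with the earlier capacity estimate.

watts_per_gbps = 3.38
cable_gbps = 12_800   # estimated peak capacity per direction, from above

total_watts = watts_per_gbps * cable_gbps
print(f"~{total_watts / 1000:.0f} kW to process the cable at full capacity")  # ~43 kW
```

Tens of kilowatts is a modest load by data-center standards, which is essentially Kapela’s point: the lookups themselves are not the hard part.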
“You can argue, and justifiably I think, that a majority of the data is going to be excluded and uninteresting to begin with,” said Kapela. “And when the capacities aren’t all maxed out, which is what people are often quoting, I think you start to see a different picture.”
And what is the picture that takes shape? “One where [tapping and filtering] is very, very practical and incredibly achievable, to the degree that I would be shocked if someone didn’t try it just to say they did.”
If You Could, You Would
Kapela thinks that if you were a defense contractor that had designed something like this, you would take the risk and try it out. “I think you would, and you would probably get pretty far down the road and probably monitor as much as you want, with an approach that is power efficient, while drawing from learned, published, academic methods,” added Kapela.
Cable landing stations, not undersea regeneration points, are the likely locations for tapping, according to Kapela. “That’s the obvious place to intercept, or near that place—upstream of it.” Here, user data can be electronically copied in a way that is completely invisible. “There are a lot of possibilities there, so many such that you’d have to argue with special pleading to say that it is not practical,” said Kapela, adding that one should assume it is going to happen. Lovely news.
“If you’re Raytheon, Northrop Grumman, or Booz Allen Hamilton, and you’re a prime on a large contract, you’re going to get results,” added Kapela, who reemphasized that without doing any real work, he can get great results looking at his network’s data.
“I would argue traffic analysis extrapolates super-linearly to almost perfect knowledge,” said Kapela. “This is what defense contractors do. This is their job—well, not all of it. They do make useful stuff that blows up, and there has been a lot of great knowledge gleaned by DOD and DOE research work over the decades. I mean, we have them to thank for packetized communication, or maybe we have them to blame.”
“But, there it is. And packetized communication makes this a very procedural, computer science question,” he added.
Another interesting point that Kapela raises is that of the rights over the data. New York, for example, is a place where a bunch of these undersea cables enter the country before being piped over to data center buildings. “Just because the fiber goes between the UK and the US means nothing about the US rights put over it,” said Kapela. “The ownership of these cable systems is usually a consortium format, but that has nothing to do with the rights of the use of the information, and whose stuff can get spied upon.”
“Casual folks like me can do a lot with a little,” he added. “I would argue that our government is anything but casual, so extrapolate that how you will. If you’re asking the right questions, and have some targets in mind, you’re going to be a leg up. Extrapolate that to billions of dollars in budget—say, the budget of the NSA—and you’re going to have profoundly complete results and likely very accurate ones.
“Nothing is physically stopping anyone.”