Last month, the commercial satellite company Planet completed its 149-satellite constellation with the launch of the final 88 satellites into low Earth orbit. According to Planet's website, the company was founded with one goal in mind: figure out a way to take a high resolution image of the entire planet every day. At the moment, earth observation satellites like Sentinel-2 or Landsat have satellite revisit times (the time that passes between a satellite's successive observations of the same point on the planet) of 10 and 16 days, respectively, and far lower resolution imagery than the Planet constellation.
Earth observation is a booming industry, expected to reach $3.5 billion globally by 2024, so Planet's single-minded effort to get that daily Earth selfie makes sense: those pictures will be worth a lot of moolah. The problem, however, is how to sort through these massive troves of satellite imagery in such a way that the images can actually be useful (as National Defense Magazine put it, when it comes to satellite imagery, there is too much information and not enough intelligence).
Enter GeoVisual Search, a new machine learning platform that puts this space data to use by allowing anyone to run a visual search across the entire globe for similar-looking objects.
A sort of Google Images for public satellite imagery, GeoVisual Search is the latest technology demonstration from Descartes Labs, a spinoff founded by deep learning scientists from Los Alamos National Laboratory. Building on lessons learned from the company's flagship product, which combines machine learning and plant physics to analyze satellite imagery and predict crop growth patterns, GeoVisual is less a finished product than a showcase of what Descartes' AI platform is capable of.
Drawing from a trove of years' worth of publicly and privately available satellite data, Descartes Labs created three high-resolution composite maps: one of the entire globe based on imagery from NASA's Landsat 8 earth observation satellite, one of just the United States using images from the National Agriculture Imagery Program (NAIP), and a third of China based on images from the Planet satellite constellation.
Each of these maps is then divided into small tiles measuring 128 pixels to a side; the NAIP map of the US is the highest resolution, at roughly 2 billion 128×128-pixel tiles. These tiles are fed into a neural net, a type of machine learning system loosely modeled on the human brain. The neural net analyzes each tile along 512 features that capture visual elements like colors and edges, distilling what is visually distinct about that tile. This compresses the roughly 390,000 bits of information in each tile down to a 512-bit description of its features, making the tile searchable.
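To make those numbers concrete, here is a minimal sketch in Python of what such a pipeline could look like. The tiny network and all names below are illustrative assumptions, not Descartes Labs' actual model: it simply maps a 128×128 RGB tile (128 × 128 × 3 channels × 8 bits ≈ 390,000 bits) down to a 512-bit binary signature.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a toy encoder that turns a 128x128 RGB tile
# (~390,000 bits of raw pixels) into a 512-bit binary feature vector.
class TileEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),   # 128 -> 64
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global average pool -> 64 values
        )
        self.fc = nn.Linear(64, 512)        # project to 512 feature scores

    def forward(self, tile):
        x = self.features(tile).flatten(1)  # (batch, 64)
        scores = self.fc(x)                 # (batch, 512) real-valued scores
        return scores > 0                   # binarize: 512 bits per tile

tile = torch.rand(1, 3, 128, 128)           # one fake 128x128 RGB tile
bits = TileEncoder()(tile)                  # shape (1, 512), dtype bool
```

In practice a network like this would be trained first (the article doesn't say how Descartes learns its features); the point of the sketch is the shape of the data, not the architecture.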
When a user clicks anywhere on the map, the algorithm searches the relevant map (whether of the globe, the US, or China) for tiles containing objects visually similar to those in the selected tile. The algorithm isn't perfect yet, but it can return hundreds of strikingly similar results, especially for distinctive objects like wind turbines or the solar panels in solar fields.
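Searching then reduces to comparing those 512-bit signatures. Here is a minimal sketch, assuming Hamming distance (the count of differing bits) as the similarity measure, which the article doesn't confirm; the function names and the small stand-in index are hypothetical:

```python
import numpy as np

# Illustrative sketch: rank tiles by visual similarity using Hamming
# distance between 512-bit binary feature vectors.
def most_similar(query_bits, all_bits, k=10):
    """query_bits: (512,) bool array; all_bits: (n_tiles, 512) bool array."""
    distances = np.count_nonzero(all_bits != query_bits, axis=1)
    return np.argsort(distances)[:k]         # indices of the k closest tiles

# Tiny random stand-in for the ~2 billion 512-bit NAIP tile signatures.
all_bits = np.random.rand(100_000, 512) > 0.5
hits = most_similar(all_bits[42], all_bits)  # tiles resembling tile 42
```

At NAIP's real scale, brute-force scanning 2 billion signatures would call for an approximate nearest-neighbor index, but the per-pair comparison stays this simple.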
For now, outside of being an amusing way to kill some time, the applications of GeoVisual Search are an open question. Descartes' CEO Mark Johnson imagines it mostly being used by professionals who spend their days searching through aerial imagery by hand, such as geographic information systems experts, who could use a boost from Descartes' AI making suggestions based on what they're looking for in an image.
“GeoVisual Search is really about building that initial infrastructure to enable global scale machine learning and start doing global scale analysis,” Johnson told me. “There’s a lot of potential once we’ve begun to identify objects on the planet.”