Charles Babbage’s Analytical Engine Takes on Deep Learning

Charles Babbage’s Analytical Engine was never completed, but that’s not really the point. The point was the idea: a machine capable of computing anything that could possibly be computed. That’s more or less the definition of Turing completeness, but Alan Turing wouldn’t be born for nearly another 80 years. Babbage was laying his plans as far back as 1834, still decades before electric power would even come to the UK.

The machine basically consisted of stacks of gear-based addition units arranged into columns, cleverly enough that together they could handle the four basic operations of arithmetic. Crucially, it could store results on punch cards, and it could accept programs (“formulae”) on punch cards too. Much as the device you’re reading this on ultimately runs on primitive machine instructions, it was these atomic operations that opened up a theoretically unlimited scope of computation.
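
To get a feel for how general-purpose computation falls out of such simple parts, here’s a rough Python sketch, purely illustrative and not Babbage’s actual notation, of a “store” of numbered columns plus a handful of primitive arithmetic instructions composed into a program:

```python
# Illustrative sketch only: a "store" of numbered columns and the four
# primitive arithmetic instructions, composed into a small program.
def run(program, store):
    """Execute a list of (op, src_a, src_b, dest) instructions against the store."""
    ops = {
        "add": lambda a, b: a + b,
        "sub": lambda a, b: a - b,
        "mul": lambda a, b: a * b,
        "div": lambda a, b: a // b,  # integer division here for simplicity
    }
    for op, a, b, dest in program:
        store[dest] = ops[op](store[a], store[b])
    return store

# Compute (column 0 + column 1) * column 2 purely from primitive operations.
store = {0: 3, 1: 4, 2: 5, 3: 0}
program = [
    ("add", 0, 1, 3),  # column 3 <- column 0 + column 1
    ("mul", 3, 2, 3),  # column 3 <- column 3 * column 2
]
print(run(program, store)[3])  # 35
```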

This limitlessness is at the heart of the problem British combinatorialist Adam P. Goucher set out to solve: implementing a convolutional neural network, a form of deep learning, on a simulated Analytical Engine.

Fortunately, it’s easy enough to simulate programming Babbage’s proposed machine: just limit yourself to the same primitive instructions, the same (RAM-like) memory, and the same speeds the Analytical Engine would have offered.
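
As a flavor of what such a simulation involves, here’s a minimal Python sketch that bounds the store and counts arithmetic operations, so you can estimate how long a program would take at whatever per-operation timings you care to assume. The 1,000-column store and the timings in the example are illustrative guesses, not Goucher’s figures or Babbage’s specifications:

```python
# Minimal sketch of "limit yourself to the Engine's resources": a bounded
# store plus a running tally of arithmetic operations, so runtime can be
# extrapolated under assumed per-operation timings (assumptions, not facts).
class EngineSim:
    def __init__(self, columns=1000):
        self.store = [0] * columns  # bounded memory, like the Engine's store
        self.counts = {"add": 0, "sub": 0, "mul": 0, "div": 0}

    def execute(self, op, a, b, dest):
        fns = {"add": lambda x, y: x + y, "sub": lambda x, y: x - y,
               "mul": lambda x, y: x * y, "div": lambda x, y: x // y}
        self.store[dest] = fns[op](self.store[a], self.store[b])
        self.counts[op] += 1

    def estimated_seconds(self, seconds_per_op):
        """seconds_per_op maps each op to an assumed time on the real hardware."""
        return sum(self.counts[op] * seconds_per_op[op] for op in self.counts)

sim = EngineSim()
sim.store[0], sim.store[1] = 6, 7
sim.execute("mul", 0, 1, 2)
# Purely illustrative timings: say an addition takes ~3 s and a multiplication ~60 s.
print(sim.store[2], sim.estimated_seconds({"add": 3, "sub": 3, "mul": 60, "div": 60}))
```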

The details are in the lecture above, which is reasonably short and accessible. (You’ll probably learn a thing or two about neural networks in the process.) Ultimately, Goucher’s program learned to recognize handwritten digits with about 96 percent accuracy from a training set of about 20,000 images.
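
For readers who want to see the convolutional building block such a network leans on, here’s a small NumPy illustration of a single filter sliding over an image to produce a feature map. It’s ordinary floating-point code for clarity, not Goucher’s Engine-friendly implementation, which has to build the same arithmetic out of the machine’s primitive operations:

```python
# Illustration of the core CNN operation: slide a filter over the image
# and apply a nonlinearity to get a feature map. Not Goucher's code.
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (really cross-correlation, as in most CNN code)."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# A toy 8x8 "digit" (a single vertical stroke) and a 3x3 vertical-edge filter.
image = np.zeros((8, 8))
image[:, 3] = 1.0
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
feature_map = relu(conv2d(image, kernel))
print(feature_map.shape)  # (6, 6)
```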

Via Goucher’s blog, here’s an image of the thing running, where dots represent failed recognition attempts and asterisks represent successes (that’s learning: trial, error, and memory).