How he’d describe his job to a 10-year-old: “We’re working to understand the operation of the brain and copy it on a computer chip.”
Bobbleheads and brains: Mike Davies recently demonstrated the promise of neuromorphic computing: building computing systems that work more like our brains. He used Intel’s new self-learning research chip — codenamed Loihi — which mimics how the brain learns based on feedback from the environment. Loihi was able to rapidly distinguish — in just four seconds, and based on a handful of photos — among a rubber duck, an elephant figurine and a bobblehead of scientist Rosalind Franklin. “It’s a small but exciting example of how neuromorphic computing could deliver more efficient artificial intelligence,” Mike says. “While this is a proof-of-concept that uses less than 1 percent of the chip’s resources, it shows that the architecture works, and we expect to see orders of magnitude gains in efficiency as the networks are scaled up to larger problems.”
How we will experience neuromorphic in the future: The list is long. Mike predicts that robotics will be the killer app for neuromorphic computing. He foresees smart surveillance cameras that can trigger an alarm if an intruder enters a room. He envisions industrial applications that will monitor everything from ball bearings to bridges. “These neuromorphic chips will one day provide productivity benefits to otherwise tedious, time-intensive jobs for humans,” he says.
A new approach to computer architecture: Here’s the thing about central processing units (CPUs): Human brains still possess far more raw computational power than the most advanced supercomputers. Traditional computing architecture has long rested on two distinct elements: processor and memory. Neuromorphic computing upends that model. Its design is more like the 86 billion neurons inside our brains, which use data to learn, make inferences and get smarter over time. “It’s a complete rethinking of computer architecture,” Mike says.
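To make the brain-inspired model concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of spiking unit neuromorphic chips are loosely modeled on. This is an illustrative toy, not Loihi’s actual design; the parameter names and values are assumptions chosen for clarity. Unlike a CPU fetching instructions against a separate memory, the neuron’s state (its membrane voltage) and its computation live together, and output happens only when an event (a spike) occurs.

```python
# Illustrative sketch of a leaky integrate-and-fire (LIF) neuron --
# not Loihi's actual implementation. The membrane "voltage" integrates
# weighted input, leaks over time, and fires a spike at a threshold.

def simulate_lif(input_current, leak=0.9, threshold=1.0):
    """Return the spike train (0/1 per step) for a sequence of inputs."""
    voltage = 0.0
    spikes = []
    for current in input_current:
        voltage = voltage * leak + current  # integrate with leak
        if voltage >= threshold:
            spikes.append(1)
            voltage = 0.0                   # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady weak input accumulates until the neuron fires, then it resets
# and the cycle repeats -- computation driven by events, not a clock.
print(simulate_lif([0.4] * 6))  # → [0, 0, 1, 0, 0, 1]
```

The key design point this illustrates: no spikes means no work, which is one reason spiking architectures are expected to be far more power-efficient than conventional ones for sparse, event-driven data.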
Restless and ready: After earning a master’s degree in electrical engineering from Caltech, Mike, who holds five patents, came to Intel in 2011 through the acquisition of Fulcrum Microsystems, a firm that commercialized asynchronous design research for Ethernet switch silicon. After working on five generations of switches, “I started getting restless and wanted a new challenge,” he says. So he decided to shop the asynchronous approach around Intel. “It turned out that asynchronous design was really well suited for neuromorphic chips,” Mike says. Intel Labs, the company’s research arm, immediately put Mike and his team to work.
Pioneering brains: Replicating the power of the human brain has been a goal of computer science from its earliest days. Mike points out that more than half a century ago both Alan Turing, the father of theoretical computer science and artificial intelligence, and John von Neumann, the mathematician and computer scientist, used the language of neurons and brains in their work. Von Neumann’s 1958 classic was entitled “The Computer and the Brain.” Neuromorphic computing researchers are now in the early stages of learning to mimic the brain’s basic processes. “We’re not trying to build a high-level architectural copy of the brain with a hippocampus and a neocortex,” Mike says, adding, “Not yet.”