
Brain-inspired chips could soon help power autonomous robots and self-driving cars

Hillsboro, Oregon—Though he catches flak for it, Garrett Kenyon, a physicist at Los Alamos National Laboratory, calls artificial intelligence (AI) “overhyped.” The algorithms that underlie everything from Alexa’s voice recognition to credit card fraud detection typically owe their skills to deep learning, in which the software learns to perform specific tasks by churning through vast databases of examples. These programs, Kenyon points out, don’t organize and process information the way human brains do, and they fall short when it comes to the versatile smarts needed for fully autonomous robots, for example. “We have a lot of fabulous devices out there that are incredibly useful,” Kenyon says. “But I would not call any of that particularly intelligent.”

Kenyon and many others see hope for smarter computers in an upstart technology called neuromorphic computing. In place of standard computing architecture, which processes information sequentially, neuromorphic chips emulate the way our brains process information, with myriad digital neurons working in parallel to send electrical impulses, or spikes, to networks of other neurons. Each silicon neuron fires when it receives enough spikes, passing along its excitation to other neurons, and the system learns by reinforcing connections that fire regularly while paring away those that don’t. The approach excels at spotting patterns in large amounts of noisy data, which can speed learning. Because information processing takes place throughout the network of neurons, neuromorphic chips also require far less shuttling of data between memory and processing circuits, boosting speed and energy efficiency.
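
To make the spiking mechanism concrete, here is a minimal sketch in Python of a single leaky integrate-and-fire neuron, the kind of unit described above: it accumulates incoming spikes, leaks charge over time, and fires once its potential crosses a threshold. The constants, the fixed input weight, and the simple reset rule are illustrative assumptions, not Loihi’s actual neuron model.

    # A single leaky integrate-and-fire neuron: accumulate weighted input spikes,
    # leak charge each time step, and fire once a threshold is crossed.
    # All constants are illustrative, not Loihi's actual neuron model.
    import numpy as np

    def simulate_lif(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
        v = 0.0                          # membrane potential
        output = []
        for s in input_spikes:           # s is 1 (a spike arrived) or 0 (silence)
            v = leak * v + weight * s    # integrate the input and leak charge
            if v >= threshold:           # enough excitation: fire and reset
                output.append(1)
                v = 0.0
            else:
                output.append(0)
        return output

    rng = np.random.default_rng(0)
    incoming = (rng.random(20) < 0.5).astype(int)    # a random input spike train
    print(simulate_lif(incoming))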

Neuromorphic computing isn’t new. Yet progress has been slow: chipmakers have been reluctant to invest in the technology without a proven market, and algorithm developers have struggled to write software for an entirely new computer architecture. But the field appears to be maturing as the capabilities of the chips increase, attracting a growing community of software developers.

This week, Intel released the second generation of its neuromorphic chip, Loihi. It packs in 1 million artificial neurons, six times more than its predecessor, connected to one another through 120 million synapses. Other companies, such as BrainChip and SynSense, have also recently rolled out new neuromorphic hardware, with chips that speed tasks such as computer vision and audio processing. Neuromorphic computing “is going to be a rock star,” says Thomas Cleland, a neurobiologist at Cornell University. “It won’t do everything better. But it will completely own a fraction of the field of computing.”

Intel’s venture into neuromorphic architecture edges the chip giant away from its famous general-purpose computer chips, known as central processing units (CPUs). In recent years, the pace of advances in CPUs’ silicon technology has begun to slow. That has led to a proliferation of specialty computer chips, such as graphics processing units (GPUs) and dedicated memory chips, each tailored to a specific job. Neuromorphic chips may extend this trend. They excel at processing the vast data sets needed to give computers senses, such as vision and smell, says Mike Davies, who leads Intel’s neuromorphic research. That, along with their energy efficiency, appears to make them ideally suited for mobile devices that have a limited power supply and are untethered from traditional computer networks.

Intel’s Mike Davies thinks neuromorphic computing will help computers learn like people. (Photo: Mason Trinca)

Intel’s effort is centered here at the company’s Jones Farm campus, a research and development complex just west of Portland, Oregon. Bustling in nonpandemic times, one four-story Jones Farm building now has room after room of empty cubicles, as software and hardware engineers work from home. At the neuromorphic lab, where Davies and a skeleton staff test the pinkie-nail–size Loihi-2 chips, the wall clock is stuck at 7:43. Remote work likely slowed the rollout of the new chip by up to 6 months, Davies says.

As with Loihi-1, individual neurons in Loihi-2 can be programmed to amplify or inhibit the propagation of electrical spikes from neighboring neurons. Collaboration with neuroscientists such as Cleland prompted Intel engineers to add another brainlike feature to the new Loihi. Studies of olfactory processing in the brain have shown that the interval between spikes can encode additional information, and in 2020, Cleland and his colleagues demonstrated the power of adding temporal information to neuromorphic computing.
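
A toy sketch of that idea in Python: a sensor reading can be encoded in when a neuron spikes rather than merely whether it spikes, so the interval itself carries information. The latency scheme and the numbers below are assumptions for illustration, not the encoding Cleland’s group or Intel actually uses.

    # Toy temporal (latency) code: a stronger sensor reading produces an earlier
    # spike, so the timing itself carries the information. The scheme and the
    # constants are assumptions for illustration only.

    def encode_latency(value, t_max=100):
        """Map a reading in [0, 1] to a spike time; larger value -> earlier spike."""
        return int(round((1.0 - value) * t_max))

    def decode_latency(spike_time, t_max=100):
        """Recover the approximate reading from its spike time."""
        return 1.0 - spike_time / t_max

    readings = [0.05, 0.5, 0.93]                       # e.g. three sensor channels
    spike_times = [encode_latency(r) for r in readings]
    print(spike_times)                                 # earlier spike = stronger signal
    print([round(decode_latency(t), 2) for t in spike_times])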

They set out to train a first-generation Loihi chip to recognize the scent of 10 hazardous chemicals in a mix of background compounds. The researchers recorded readings from 72 chemical sensors in a wind tunnel as scents, including acetone, methane, and ammonia, wafted through it. They fed the data to Loihi, which used an algorithm to represent and analyze the odorants as streams of electrical pulses that varied in their temporal pattern. Loihi was able to identify each odor after only a single sample; deep learning approaches required training on up to 3000 samples to reach the same level of accuracy.

That success, Davies says, prompted Intel to equip its Loihi-2 chips with the ability to produce and analyze complex temporal spike patterns. “We are trying to establish a new flexible and versatile, general purpose, intelligent computing chip,” Davies says.

Two groups have already shown neuromorphic chips can match the capabilities of some of the most advanced AI programs on the market. Today’s workhorse AI software relies on a deep learning algorithm known as a backpropagation neural network (BPNN), which enables AI systems to learn from their mistakes as they are trained. In a preprint posted on arXiv in August, Andrew Sornborger, a physicist at Los Alamos, and colleagues reported programming the first-generation Loihi to carry out backpropagation. The chip learned to interpret a commonly used visual data set of handwritten numerals as quickly as conventional BPNNs, while drawing just 1/100 as much power.
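
For readers unfamiliar with the algorithm, here is a minimal, conventional backpropagation loop in Python with NumPy: a tiny two-layer network repeatedly measures its error and pushes that error backward to adjust its weights. This is the standard rate-based form, not the spiking, on-chip variant Sornborger’s team reported; the toy data and hyperparameters are invented for illustration.

    # Minimal backpropagation on a tiny two-layer network: measure the error,
    # push it backward through each layer, and nudge the weights accordingly.
    # Toy data and hyperparameters are illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.random((64, 4))                            # toy inputs
    y = (X.sum(axis=1) > 2).astype(float)[:, None]     # toy binary labels
    n = len(X)

    W1 = rng.normal(scale=0.5, size=(4, 8))            # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(8, 1))            # hidden -> output weights
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        h = sigmoid(X @ W1)                            # forward pass
        p = sigmoid(h @ W2)
        delta_out = (p - y) * p * (1 - p) / n          # error signal at the output
        delta_hid = (delta_out @ W2.T) * h * (1 - h)   # error propagated back a layer
        W2 -= 0.5 * (h.T @ delta_out)                  # gradient-descent updates
        W1 -= 0.5 * (X.T @ delta_hid)

    print("training accuracy:", float(((p > 0.5) == y).mean()))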

Likewise, in unpublished work, Wolfgang Maass, a computer scientist at the Graz University of Technology, and his colleagues have developed a neuromorphic system that carries out BPNN learning with 1/1000 as much power as standard GPU-driven AI. “It’s not clear what the killer app will be for neuromorphic computing,” Maass says, but he thinks robotic devices that need to consume minimal power to sense their surroundings and navigate through them are a likely prospect.

Kenyon says that having benefited from an understanding of biology, neuromorphic processors may soon return the favor, helping neuroscientists better understand the evolution and workings of the brain. Standard AI systems aren’t much help, because they tend to be black boxes that don’t reveal how their learning takes place. But Loihi and similar chips are a better model because they behave like biological networks of neurons. Researchers can track the firing patterns in the silicon-based systems to reveal how they learn to process visual, auditory, and olfactory information—and hopefully gain new insights into how biology does similar jobs.

Last year, for example, when Kenyon and his colleagues were studying how a spiking neural network software program learns to see, they used a process known as unsupervised dictionary training. It involves classifying objects without having prior examples to compare them to. The researchers found that their network became unstable over time, its neurons firing continuously as it lost track of visual features it had learned. The unstable state, Kenyon says, “only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself.”
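
Dictionary training of this kind is usually formulated as sparse coding: the network learns a small set of reusable features from unlabeled data and describes each input as a sparse combination of them. Below is a bare-bones, rate-based sketch in Python; it is not the spiking implementation Kenyon’s group studies, and the data and hyperparameters are invented for illustration.

    # Bare-bones sparse dictionary learning: alternately infer sparse codes for
    # the data and update the dictionary of learned features. Rate-based and
    # purely illustrative; data and hyperparameters are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((200, 16))                  # unlabeled data, e.g. image patches
    n_atoms, lam, lr = 8, 0.1, 0.01

    D = rng.normal(size=(n_atoms, 16))         # dictionary of feature "atoms"
    D /= np.linalg.norm(D, axis=1, keepdims=True)

    for it in range(200):
        # Sparse-coding step: a few ISTA iterations to infer codes A for X.
        A = np.zeros((len(X), n_atoms))
        L = np.linalg.norm(D @ D.T, 2)         # step-size bound (Lipschitz constant)
        for _ in range(20):
            grad = (A @ D - X) @ D.T
            A = A - grad / L
            A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft-threshold
        # Dictionary step: gradient descent on the reconstruction error.
        D -= lr * (A.T @ (A @ D - X)) / len(X)
        D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-12

    print("mean reconstruction error:", float(np.mean((A @ D - X) ** 2)))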

In hopes of getting their algorithm back on track, the researchers exposed their network to a type of noise they believe mimics the input that biological neurons receive during sleep. The noise reset the network and improved the accuracy of its object classification. “It was as though we were giving the neural networks the equivalent of a good night’s rest,” says team member Yijing Watkins. Now at Pacific Northwest National Laboratory, she is working to implement the algorithm on Loihi to see whether the AI form of shut-eye helps the chip stably process information from a retinal camera in real time.
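
A conceptual sketch of the sleep idea in Python: between phases of data-driven activity, the same kind of spiking unit is driven with weak random noise instead of input, the digital analogue of a nap. The noise model and every constant here are assumptions for illustration; this is not the team’s actual algorithm or its Loihi implementation.

    # Interleaving "awake" (data-driven) and "asleep" (noise-driven) phases
    # for a leaky integrate-and-fire unit. Purely conceptual; the noise model
    # and constants are assumptions, not the researchers' actual method.
    import numpy as np

    rng = np.random.default_rng(42)

    def run_phase(v, drive, leak=0.9, threshold=1.0):
        """Advance one LIF unit through a phase; return its state and spike count."""
        spikes = 0
        for current in drive:
            v = leak * v + current
            if v >= threshold:
                spikes += 1
                v = 0.0
        return v, spikes

    v = 0.0
    for epoch in range(3):
        awake = rng.random(100) * 0.5                # stand-in for sensory input
        v, awake_spikes = run_phase(v, awake)
        nap = np.abs(rng.normal(0.1, 0.05, 200))     # weak noise drive: the "nap"
        v, nap_spikes = run_phase(v, nap)
        print(f"epoch {epoch}: awake spikes = {awake_spikes}, nap spikes = {nap_spikes}")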

Future neuromorphic chips might revolutionize computing. But they might need naps to do it.

Source: Science Mag