Press "Enter" to skip to content

Smarter AIs could help us understand how our brains interpret the world

Researchers showed humans, monkeys, and computer models an odd assortment of objects and scenes. Credit: Jonas Kubilius

By Kelly Servick

PHILADELPHIA, PENNSYLVANIA—While artificial intelligence (AI) has been busy trouncing humans at Go and spawning eerily personable Alexas, some neuroscientists have harbored a different hope: that the types of algorithms driving those technologies can also yield some insight into the squishy, wet computers in our skulls. At the Conference on Cognitive Computational Neuroscience here this month, researchers presented new tools for comparing data from living brains with readouts from computational models known as deep neural networks. Such comparisons might offer up new hypotheses about how humans process sights and sounds, understand language, or navigate the world.

“People have fantasized about that since the 1980s,” says Josh McDermott, a computational neuroscientist at the Massachusetts Institute of Technology (MIT) in Cambridge. Until recently, AI couldn’t come close to human performance on tasks such as recognizing sounds or classifying images. But deep neural networks, loosely inspired by the brain, have logged increasingly impressive performances, especially on visual tasks. That “brings the question back to mind,” says neuroscientist Chris Baker of the National Institute of Mental Health in Bethesda, Maryland.

Deep neural networks work by passing information between computational “nodes” that are arranged in successive layers. The systems hone skills on huge sets of data; for networks that classify images, that usually means collections of labeled photos. Performance improves with feedback as the systems repeatedly adjust the strengths of the connections between nodes.
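
To make that concrete, here is a minimal sketch of such a network written with PyTorch (an assumption; the article ties the work to no particular framework). The three stacked layers, the random tensors standing in for labeled photos, and the training loop that repeatedly adjusts connection strengths are illustrative stand-ins, not a model from the studies described.

```python
# Minimal sketch of a deep neural network classifier, assuming PyTorch.
# Random tensors stand in for a collection of labeled photos.
import torch
import torch.nn as nn

# Three successive layers of computational "nodes".
model = nn.Sequential(
    nn.Linear(32 * 32 * 3, 256), nn.ReLU(),   # early layer
    nn.Linear(256, 64), nn.ReLU(),            # intermediate layer
    nn.Linear(64, 10),                        # output layer: 10 image classes
)

images = torch.randn(128, 32 * 32 * 3)        # stand-in "photos"
labels = torch.randint(0, 10, (128,))         # stand-in labels
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Performance improves with feedback: each pass adjusts the strengths
# of the connections (weights) between nodes.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                           # feedback signal
    optimizer.step()                          # nudge connection strengths
```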

The complexity of these models makes it devilishly hard to figure out how they make decisions; speakers at the meeting variously described their innards as “mush” and “goo.” But they’re not completely inscrutable, McDermott says. “You can still look at different parts of a network—say, different layers—and ask what kinds of information can be read out.”
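
One way to "look at different parts of a network" in practice is to capture a layer's activations during a forward pass and fit a simple readout on them. The sketch below does this with PyTorch forward hooks on a toy network; the layer choices, the linear probe, and the random input are all hypothetical illustrations, not anything from the work reported here.

```python
# Sketch: capture intermediate activations with forward hooks, then attach
# a simple linear probe to ask what information each layer carries.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(100, 50), nn.ReLU(),
    nn.Linear(50, 20), nn.ReLU(),
    nn.Linear(20, 5),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attach hooks to two different layers.
net[1].register_forward_hook(save_activation("early"))
net[3].register_forward_hook(save_activation("late"))

x = torch.randn(64, 100)
net(x)  # a single forward pass fills `activations`

# A linear probe per layer: can some (hypothetical) property be read out?
for name, act in activations.items():
    probe = nn.Linear(act.shape[1], 2)
    print(name, tuple(act.shape), "-> probe output", tuple(probe(act).shape))
```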

The answers might give scientists clues about how the brain breaks apart and processes the world around it, says cognitive neuroscientist Elissa Aminoff at Fordham University in New York City. For example, a human observer looking at a forest is aware of features such as shades of green or abundant vertical lines, she explains. But other statistical features that people are not conscious of and can’t easily describe in words might also help the brain recognize a forest. If a neural network identifies a forest by picking up on those same features, monitoring its activity might help neuroscientists determine which brain regions use what kinds of information.



At the meeting, Aminoff and collaborators at Carnegie Mellon University in Pittsburgh, Pennsylvania, presented a new publicly available data set that may encourage such comparisons. It contains functional magnetic resonance imaging (fMRI) scans of brain activity from four people observing about 5000 images of natural scenes—a dog; rolling hills; people playing tennis. The scenes come from image collections that computer vision researchers commonly use to train and test deep neural networks, which should make it easier to compare how computer models and brains represent the images. It could also allow machine learning experts to include fMRI data alongside the labeled images when they train a neural network. Such models might do “much more sophisticated tasks” with the images, Aminoff says, such as using the content of an image to reason and make future decisions.
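
A common way to make such comparisons is representational similarity analysis: summarize how differently each pair of images is encoded in the brain and in a network layer, then correlate the two patterns. The sketch below illustrates that idea with NumPy and SciPy; the random arrays stand in for fMRI voxel responses and model features, and nothing here comes from the data set itself.

```python
# A minimal representational-similarity sketch. Random arrays stand in for
# fMRI voxel patterns and network-layer features for the same set of images.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_images = 50
fmri = np.random.randn(n_images, 2000)   # stand-in voxel responses per image
layer = np.random.randn(n_images, 512)   # stand-in model-layer features

# Representational dissimilarity matrices: how differently each pair of
# images is encoded, in the brain and in the model layer.
rdm_brain = pdist(fmri, metric="correlation")
rdm_model = pdist(layer, metric="correlation")

# Rank-correlate the two RDMs: higher means a more brain-like representation.
rho, _ = spearmanr(rdm_brain, rdm_model)
print(f"brain-model representational similarity: {rho:.3f}")
```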

Other neuroscientists remain ambivalent about the value of deep neural networks. “I question what exactly we learn about the brain by using them,” Baker says. He’s particularly wary of trying to make direct comparisons between network layers and brain regions. “You wouldn’t want to argue that a pitching machine is a model for the biomechanics of throwing a baseball,” he says.

AI researcher Jonas Kubilius of MIT hopes his work will help win over neuroscientists. At the meeting, he and MIT Ph.D. student Martin Schrimpf presented Brain-Score, a method for judging whether image-classifying neural networks are good models for the brain. The test relies on data the group collected from monkeys and humans as they viewed images of floating objects embedded in unrelated scenes. In the monkeys, an array of implanted electrodes recorded activity from the visual cortex. The humans saw the images for just one-tenth of a second and then had to choose which of two objects they had just seen.

A neural network’s score depends on how well it predicts both the pattern of activity from the cortical electrodes and the human response on the test—including wrong answers. The team hopes neuroscientists will submit new brain data that challenge the best models’ performance, revealing ways that they could become more like the brain.
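
The sketch below illustrates the two ingredients of such a score, but it is not the actual Brain-Score implementation: a cross-validated linear mapping from model activations to electrode responses, plus a per-image comparison of model and human choices so that shared wrong answers count toward the match. The use of scikit-learn, NumPy, and SciPy, and all of the random stand-in data, are assumptions made for illustration.

```python
# Illustrative sketch of a brain-likeness score (not the Brain-Score code):
# 1) predict neural recordings from model activations, 2) match behavior.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from scipy.stats import pearsonr

n_images, n_features, n_sites = 200, 256, 96
model_acts = np.random.randn(n_images, n_features)  # stand-in network activations
neural = np.random.randn(n_images, n_sites)         # stand-in electrode recordings

# Cross-validated linear mapping from model features to each recording site,
# scored by how well predictions correlate with the measured activity.
predicted = cross_val_predict(Ridge(alpha=1.0), model_acts, neural, cv=5)
neural_score = np.mean([pearsonr(predicted[:, i], neural[:, i])[0]
                        for i in range(n_sites)])

# Behavioral match: compare per-image model choices with human choices,
# so answers the model and the humans get wrong together still count.
model_choice = np.random.randint(0, 2, n_images)    # stand-in model decisions
human_choice = np.random.randint(0, 2, n_images)    # stand-in human decisions
behavior_score = (model_choice == human_choice).mean()

print(f"neural predictivity ~ {neural_score:.3f}, "
      f"behavioral match ~ {behavior_score:.3f}")
```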

So far, Brain-Score’s leaderboard, which went online this month, suggests the neural networks that best identify images aren’t necessarily the most brainlike. So Kubilius’s team also built a set of deep neural networks that earn higher Brain-Scores than many of the top-performing models. These relatively simple networks “are more penetrable and much easier to work with” than most neural networks, Kubilius says, and they have a brainlike feature that many models lack: They retain information in memory and feed it back from later layers to earlier ones. He hopes researchers put off by the inscrutability of many neural networks will buy in. His message to neuroscientists: “Don’t be scared of deep nets!” A sketch of that feedback idea follows.
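
The recurrence described above can be pictured as a block whose own output is fed back into its input over a few time steps, rather than information flowing strictly forward once. The PyTorch sketch below illustrates that idea only; it is not the team’s architecture, and the channel sizes and step count are arbitrary.

```python
# Minimal sketch of a recurrent block: the block's output is fed back and
# combined with its input on later time steps, unlike a feedforward layer.
import torch
import torch.nn as nn

class RecurrentBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x, steps=3):
        state = torch.zeros_like(x)
        for _ in range(steps):
            # Information retained in `state` is fed back to the same layer.
            state = self.relu(self.conv(x + state))
        return state

block = RecurrentBlock(channels=16)
out = block(torch.randn(1, 16, 32, 32))
print(out.shape)  # torch.Size([1, 16, 32, 32])
```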

Source: Science Mag