Press "Enter" to skip to content

Computer programs can learn what other programs are ‘thinking’

By Matthew Hutson

Anyone who’s had a frustrating interaction with Siri or Alexa knows that digital assistants just don’t get humans. What they need is what psychologists call theory of mind, an awareness of others’ beliefs and desires. Now, computer scientists have created an artificial intelligence (AI) that can probe the “minds” of other computers and predict their actions, the first step to fluid collaboration among machines—and between machines and people.

“Theory of mind is clearly a crucial ability” for navigating a world full of other minds, says Alison Gopnik, a developmental psychologist at the University of California, Berkeley, who was not involved in the work. By about the age of 4, human children understand that the beliefs of another person may diverge from reality, and that those beliefs can be used to predict the person’s future behavior. Some of today’s computers can label facial expressions such as “happy” or “angry”—a skill associated with theory of mind—but they have little understanding of human emotions or what motivates us.

The new project began as an attempt to get humans to understand computers. Many algorithms used by AI aren’t fully written by programmers, but instead rely on the machine “learning” as it sequentially tackles problems. The resulting computer-generated solutions are often black boxes, with algorithms too complex for human insight to penetrate. So Neil Rabinowitz, a research scientist at DeepMind in London, and colleagues created a theory of mind AI called “ToMnet” and had it observe other AIs to see what it could learn about how they work.

ToMnet comprises three neural networks, each made of small computing elements and connections that learn from experience, loosely resembling the human brain. The first network learns the tendencies of other AIs based on their past actions. The second forms an understanding of their current “beliefs.” And the third takes the output from the other two networks and, depending on the situation, predicts the AI’s next moves.
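
To make that three-part layout concrete, here is a minimal sketch of how such a model could be wired together in Python with PyTorch. It is an illustration based only on the description above, not DeepMind's published code: the module names, the recurrent encoders, and all dimensions are assumptions.

```python
# Illustrative sketch only; names, sizes, and encoders are assumptions,
# not DeepMind's published ToMnet implementation.
import torch
import torch.nn as nn

class CharacterNet(nn.Module):
    """Summarises an agent's past episodes into a 'character' embedding."""
    def __init__(self, obs_dim, embed_dim):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, embed_dim, batch_first=True)

    def forward(self, past_trajectories):          # (batch, steps, obs_dim)
        _, h = self.rnn(past_trajectories)
        return h.squeeze(0)                        # (batch, embed_dim)

class MentalStateNet(nn.Module):
    """Encodes the current, partially observed episode into a 'belief' embedding."""
    def __init__(self, obs_dim, embed_dim):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, embed_dim, batch_first=True)

    def forward(self, current_trajectory):
        _, h = self.rnn(current_trajectory)
        return h.squeeze(0)

class PredictionNet(nn.Module):
    """Combines both embeddings with the current state to predict the next move."""
    def __init__(self, embed_dim, state_dim, n_actions):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * embed_dim + state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, char_emb, mental_emb, state):
        x = torch.cat([char_emb, mental_emb, state], dim=-1)
        return self.mlp(x).softmax(dim=-1)         # distribution over next actions

if __name__ == "__main__":
    char_net = CharacterNet(obs_dim=8, embed_dim=16)
    mental_net = MentalStateNet(obs_dim=8, embed_dim=16)
    pred_net = PredictionNet(embed_dim=16, state_dim=8, n_actions=5)
    past = torch.randn(2, 10, 8)     # 2 observed agents, 10 past steps each
    current = torch.randn(2, 4, 8)   # 4 steps of the current episode
    state = torch.randn(2, 8)        # current state of the room
    probs = pred_net(char_net(past), mental_net(current), state)
    print(probs.shape)               # torch.Size([2, 5])
```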

The AIs under study were simple characters moving around a virtual room collecting colored boxes for points. ToMnet watched the room from above. In one test, there were three “species” of character: One couldn’t see the surrounding room, one couldn’t remember its recent steps, and one could both see and remember. The blind characters tended to follow along walls, the amnesiacs moved to whatever object was closest, and the third species formed subgoals, strategically grabbing objects in a specific order to earn more points. After some training, ToMnet could not only identify a character’s species after just a few steps, but it could also correctly predict its future behavior, researchers reported this month at the International Conference on Machine Learning in Stockholm.
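
The differences between the species come down to what each agent can perceive and remember. The toy sketch below is again purely illustrative, not the paper's environment; the grid coordinates, helper policies, and tie-breaking rules are invented here to show how the three behaviors described above might be expressed as simple movement rules.

```python
# Toy movement rules mirroring the behaviours described above; the gridworld
# details (coordinates, walls, boxes) are assumptions for illustration.
import random

def blind_policy(position, walls):
    """Blind agents can't see the room, so they tend to follow along walls."""
    x, y = position
    moves = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    hugging = [m for m in moves if m not in walls
               and any(abs(m[0] - wx) + abs(m[1] - wy) == 1 for wx, wy in walls)]
    return random.choice(hugging or moves)

def amnesiac_policy(position, boxes):
    """Amnesiac agents forget their plans and step toward whichever box is closest."""
    x, y = position
    tx, ty = min(boxes, key=lambda b: abs(b[0] - x) + abs(b[1] - y))
    dx = (tx > x) - (tx < x)
    dy = (ty > y) - (ty < y)
    return (x + dx, y) if dx != 0 else (x, y + dy)

def planner_policy(position, subgoals):
    """Sighted, remembering agents work through an ordered list of subgoals."""
    return subgoals[0] if subgoals else position

if __name__ == "__main__":
    walls = [(0, 0), (0, 1), (0, 2)]
    boxes = [(3, 3), (1, 2)]
    print(blind_policy((1, 1), walls))
    print(amnesiac_policy((2, 2), boxes))
    print(planner_policy((2, 2), [(1, 2), (3, 3)]))
```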

A final test revealed ToMnet could even understand when a character held a false belief, a crucial stage in developing theory of mind in humans and other animals. In this test, one type of character was programmed to be nearsighted; when the computer altered the landscape beyond its vision halfway through the game, ToMnet accurately predicted that it would stick to its original path more frequently than better-sighted characters, who were more likely to adapt.

Gopnik says this study—and another at the conference that suggested AIs can predict other AIs’ behavior based on what they know about themselves—are examples of neural networks’ “striking” ability to learn skills on their own. But that still doesn’t put them on the same level as human children, she says, who would likely pass this false-belief task with near-perfect accuracy, even if they had never encountered it before.

Josh Tenenbaum, a psychologist and computer scientist at the Massachusetts Institute of Technology in Cambridge, has also worked on computational models of theory of mind capacities. He says ToMnet infers beliefs more efficiently than his team’s system, which is based on a more abstract form of probabilistic reasoning rather than neural networks. But ToMnet’s understanding is more tightly bound to the contexts in which it’s trained, he adds, making it less able to predict behavior in radically new environments, as his system or even young children can do. In the future, he says, combining approaches might take the field in “really interesting directions.”

Gopnik notes that the kind of social competence computers are developing will improve not only cooperation with humans, but also, perhaps, deception. If a computer understands false beliefs, it may know how to induce them in people. Expect future pokerbots to master the art of bluffing.

Source: Science Mag