
It’s Called Artificial Intelligence—but What Is Intelligence?


Elizabeth Spelke, a cognitive psychologist at Harvard, has spent her career testing the world’s most sophisticated learning system—the mind of a baby.

Gurgling infants might seem like no match for artificial intelligence. They are terrible at labeling images, hopeless at mining text, and awful at videogames. Then again, babies can do things beyond the reach of any AI. By just a few months old, they’ve begun to grasp the foundations of language, such as grammar. They’ve started to understand how the physical world works and how to adapt to unfamiliar situations.

Yet even experts like Spelke don’t understand precisely how babies—or adults, for that matter—learn. That gap points to a puzzle at the heart of modern artificial intelligence: We’re not sure what to aim for.

Consider one of the most impressive examples of AI, AlphaZero, a program that plays board games with superhuman skill. After playing thousands of games against itself at hyperspeed, and learning from winning positions, AlphaZero independently discovered several famous chess strategies and even invented new ones. It certainly seems like a machine eclipsing human cognitive abilities. But AlphaZero needs to play millions more practice games than a person does to learn a game. Most tellingly, it cannot take what it has learned from the game and apply it to another area.
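
To make the self-play idea concrete, here is a minimal sketch in Python. It is not AlphaZero's actual method (AlphaZero pairs a neural network with Monte Carlo tree search); everything here, from the toy game to the learning rate, is invented purely to show the loop the article describes: play a game against yourself many times and adjust your estimates of each move using only the final result.

```python
# A minimal sketch of learning from self-play on a toy game (Nim with 7 stones,
# take 1-3 per turn; whoever takes the last stone wins). This is NOT AlphaZero's
# algorithm (no neural network, no tree search); it only illustrates the loop
# described above: play against yourself, then nudge move values toward the
# final outcome of each game.
import random
from collections import defaultdict

values = defaultdict(float)   # (stones_left, stones_taken) -> value for the mover
ALPHA = 0.1                   # learning rate, an illustrative choice

def choose(stones, explore=0.2):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: values[(stones, m)])

for _ in range(20_000):                       # self-play games at "hyperspeed"
    stones, player, history = 7, 0, []
    while stones > 0:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    winner = 1 - player                       # the player who just moved won
    for p, s, m in history:                   # learn only from the outcome
        outcome = 1.0 if p == winner else -1.0
        values[(s, m)] += ALPHA * (outcome - values[(s, m)])

# The learner should now prefer leaving a multiple of 4 stones.
print(choose(7, explore=0.0))   # expect 3, i.e. 7 -> 4
```

Even on this trivial game, the sketch needs thousands of self-play games to settle on the winning first move, a small-scale echo of the sample inefficiency described above.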

To some members of the AI priesthood, that calls for a new approach. “What makes human intelligence special is its adaptability—its power to generalize to never-seen-before situations,” says François Chollet, a well-known AI engineer and the creator of Keras, a widely used framework for deep learning. In a November research paper, he argued that it’s misguided to measure machine intelligence solely according to its skills at specific tasks. “Humans don’t start out with skills; they start out with a broad ability to acquire new skills,” he says. “What a strong human chess player is demonstrating isn’t the ability to play chess per se, but the potential to acquire any task of a similar difficulty. That’s a very different capability.”

Chollet posed a set of problems designed to test an AI program’s ability to learn in a more generalized way. Each problem requires arranging colored squares on a grid based on just a few prior examples. It’s not hard for a person. But modern machine-learning programs—trained on huge amounts of data—cannot learn from so few examples. As of late April, more than 650 teams had signed up to tackle the challenge; the best AI systems were getting about 12 percent correct.
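
To give a sense of what these problems look like, here is a hedged, toy version in Python. The grids, the hidden rule, and the tiny hypothesis space are all made up for illustration; a real solver would have to search a vastly richer space of transformations, which is exactly what makes the benchmark hard for systems trained on pattern matching.

```python
# A toy, invented example in the spirit of an ARC-style task (the real tasks are
# in Chollet's Abstraction and Reasoning Corpus). Each task shows a few
# input -> output grid pairs; the solver must infer the rule and apply it to a
# new input. Here the hidden rule is simply "recolor every 1 to 2".

train_pairs = [
    ([[0, 1], [1, 0]], [[0, 2], [2, 0]]),
    ([[1, 1, 0]],      [[2, 2, 0]]),
]
test_input = [[1, 0, 1]]

def candidate_rules():
    # A deliberately tiny hypothesis space: single-color substitutions.
    for src in range(10):
        for dst in range(10):
            yield (src, dst), lambda g, s=src, d=dst: [
                [d if cell == s else cell for cell in row] for row in g
            ]

def solve(pairs, grid):
    for name, rule in candidate_rules():
        if all(rule(x) == y for x, y in pairs):
            return name, rule(grid)
    return None, None

print(solve(train_pairs, test_input))   # ((1, 2), [[2, 0, 2]])
```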

It isn’t yet clear how humans solve these problems, but Spelke’s work offers a few clues. For one thing, it suggests that humans are born with an innate ability to quickly learn certain things, like what a smile means or what happens when you drop something. It also suggests we learn a lot from each other. One recent experiment showed that 3-month-olds appear puzzled when someone grabs a ball in an inefficient way, suggesting that they already appreciate that people cause changes in their environment. Even the most sophisticated and powerful AI systems on the market can’t grasp such concepts. A self-driving car, for instance, cannot intuit from common sense what will happen if a truck spills its load.


Josh Tenenbaum, a professor in MIT’s Center for Brains, Minds & Machines, works closely with Spelke and uses insights from cognitive science as inspiration for his programs. He says much of modern AI misses the bigger picture, likening it to a Victorian-era satire about a two-dimensional world inhabited by simple geometrical people. “We’re sort of exploring Flatland—only some dimensions of basic intelligence,” he says. Tenenbaum believes that, just as evolution has given the human brain certain capabilities, AI programs will need a basic understanding of physics and psychology in order to acquire and use knowledge as efficiently as a baby. And to apply this knowledge to new situations, he says, they’ll need to learn in new ways—for example, by drawing causal inferences rather than simply finding patterns. “At some point—you know, if you’re intelligent—you realize maybe there’s something else out there,” he says.
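
To illustrate the distinction Tenenbaum draws between finding patterns and drawing causal inferences, here is a small invented simulation in Python. A pattern-finder looking at the observed data would conclude that x predicts y; simulating an intervention on x shows it has no causal effect at all.

```python
# An invented toy simulation contrasting a pattern (observed association) with a
# causal inference (what happens under an intervention). A hidden common cause z
# drives both x and y, so they move together, but x has no effect on y.
import random

def slope_of_y_on_x(n=100_000, intervene=False):
    xs, ys = [], []
    for _ in range(n):
        z = random.gauss(0, 1)                              # hidden common cause
        x = random.gauss(0, 1) if intervene else z + random.gauss(0, 0.1)
        y = z + random.gauss(0, 0.1)                        # y depends on z, never on x
        xs.append(x)
        ys.append(y)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    var = sum((a - mx) ** 2 for a in xs) / n
    return cov / var                                        # slope of y on x

print(f"observed association:       {slope_of_y_on_x():.2f}")                 # ~1.0
print(f"effect of intervening on x: {slope_of_y_on_x(intervene=True):.2f}")   # ~0.0
```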


This article appears in the June issue.





