
The problem with anthropomorphizing artificial intelligence



This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

Last week, in an essay for The New York Times, famous mathematician Steven Strogatz praised the recently published performance results of AlphaZero, the board game–playing AI developed by DeepMind, a British AI company acquired by Google in 2014. While his examination of AlphaZero's performance is an interesting read, some of the conclusions Strogatz draws about the general advances in AI are problematic.

“[AlphaZero] clearly displays a breed of intellect that humans have not seen before, and that we will be mulling over for a long time to come,” Strogatz writes early in the article.

Further down, Strogatz writes, “By playing against itself and updating its neural network as it learned from experience, AlphaZero discovered the principles of chess on its own and quickly became the best player ever.”

Strogatz also stated that AlphaZero “seemed to express insight” and described its gameplay as intuitive, beautiful and romantic.

Strogatz’s praise for AlphaZero’s innovation is understandable. The achievements of AlphaZero were among the most impressive AI developments of 2017. However, the problem with his essay is that he describes artificial intelligence and deep learning in terms of human characteristics.

This kind of thinking can lead to wrong interpretations of technological achievements and unrealistic expectations of AI innovations. While I think it unlikely that Strogatz, as a leading voice in mathematics, harbors inflated illusions about the limits of the current breed of AI, his writing can certainly contribute to painting a distorted picture of where AI stands today.

Anthropomorphizing deep learning


Anthropomorphizing AI is a problem that has been all too common in the history of computers and artificial intelligence. For decades, we have tried to create correspondences between the functionalities of artificial intelligence and the human brain. We like to think that sometime in the future, AI will be able to replicate the abstract thinking of the human mind.

We like to think of AI algorithms as beings that can love (Her and Wall-E), hate (HAL 9000), harbor evil ambitions (The Matrix), make sacrifices for friends (Big Hero 6, Terminator 2) and manifest many other types of human emotions and behaviors.

Those examples all come from works of fiction, and the audience knows beyond a shadow of a doubt that what they’re watching and reading isn’t remotely possible today. But when it comes to contemporary technology, anthropomorphizing AI can have more direct consequences.

To be fair, there’s ample reason to humanize machine learning and deep learning, the most popular subsets of artificial intelligence. Deep learning and its underlying technology, artificial neural networks, have been able to solve problems that have historically been challenging for classical approaches to creating software. Neural networks are especially good at detecting and classifying objects in images and video, recognizing faces and speech, transcribing audio and synthesizing natural-sounding artificial voices. Neural networks can also interact with humans in their own language in ways that were previously impossible.

Thanks to advances in deep learning and the use cases it has unlocked, the interactions between humans and computers have changed immensely. We now have digital assistants that we talk to in natural language and personify by calling them names such as Siri, Alexa and Cortana.

For those who remember the days when computer software was a rigid set of instructions that could only perform tasks according to explicitly defined rules, these feats are impressive enough to warrant a reminder of Arthur C. Clarke’s third law:

Any sufficiently advanced technology is indistinguishable from magic.

In this regard, AlphaZero has even more merit than many other achievements of deep learning. First, AlphaZero uses zero input from humans (hence the name) and “learns” board games from scratch by playing against itself. This is contrary to the general practice in deep learning, which involves meticulous labeling and classification of training data by human operators, a discipline that has given rise to a labor industry of its own.

Second, AlphaZero has, after a fashion, overcome one of the known limits of deep learning. Most deep learning algorithms can become very good at the task they’ve been trained for but terrible at anything that falls outside their narrow domain. For instance, a neural network trained to play chess is of no use in playing Go; you need to retrain it from scratch. AlphaZero, on the other hand, has managed to generalize, to a certain degree, the automation of board games. DeepMind’s scientists succeeded in using the same algorithm to play chess, shogi and Go, three board games with totally different rulesets.

Why it’s wrong to humanize deep learning


But in spite of all its marvels, AlphaZero is nowhere comparable to the human mind. There’s nothing intuitive, beautiful and romantic about its gameplay.

As AI expert and venture capitalist Kai-Fu Lee explains in his acclaimed book “AI Superpowers,” “With all of the advances in machine learning, the truth remains that we are still nowhere near creating AI machines that feel any emotions at all. Can you imagine the elation that comes from beating a world champion at the game you’ve devoted your whole life to mastering? AlphaGo did just that, but it took no pleasure in its success, felt no happiness from winning, and had no desire to hug a loved one after its victory.”

AlphaZero did, in a sense, “discover the principles of chess,” but not in the same way a human grandmaster would.

Board-game AIs generally have two components: a value function and a tree search algorithm. The value function helps the AI estimate the probability that a given board position will lead to a win for each player. The tree search algorithm helps the AI navigate the space of possible moves and their associated values in an optimal fashion.
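To make that concrete, here is a minimal sketch (not AlphaZero’s or Stockfish’s actual code) of how the two components fit together: a value function scores a position for the side to move, and a simple depth-limited search uses those scores to pick a move. The Board interface and value_fn used below are hypothetical placeholders.

```python
# Minimal sketch: a hypothetical Board interface (legal_moves, apply, ...)
# plus any value_fn that scores a position for the side to move.

def negamax(board, value_fn, depth):
    """Return the best value the side to move can achieve within 'depth' plies."""
    moves = board.legal_moves()
    if depth == 0 or not moves:
        return value_fn(board)                      # evaluate the leaf position
    best = float("-inf")
    for move in moves:
        child = board.apply(move)                   # position after the move
        best = max(best, -negamax(child, value_fn, depth - 1))
    return best

def choose_move(board, value_fn, depth=3):
    """Pick the move whose resulting position the search rates highest."""
    return max(board.legal_moves(),
               key=lambda m: -negamax(board.apply(m), value_fn, depth - 1))
```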

Stockfish, the strongest chess engine before AlphaZero, is much closer to the game as human players understand it: programmers have meticulously hand-coded its value function with the principles and strategies that human experts have developed.

In contrast, AlphaZero uses a neural network to develop its value function. It examines millions of chessboard states and their resulting outcomes, and develops a mathematical function that can assign values to new board arrangements based on their similarity to examples it has seen before.
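As a rough illustration only (DeepMind’s actual network is a deep convolutional model with both policy and value heads), a learned value function can be as simple as a small network that maps an encoded board position to a score between -1 (certain loss) and 1 (certain win):

```python
import torch
import torch.nn as nn

class ValueNetwork(nn.Module):
    """Toy value function: encoded board position -> value in [-1, 1]."""

    def __init__(self, n_features=8 * 8 * 12):       # e.g. 12 one-hot piece planes
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
            nn.Tanh(),                                # squash output to [-1, 1]
        )

    def forward(self, encoded_board):
        # encoded_board: (batch, n_features) float tensor
        return self.net(encoded_board)
```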

AlphaZero also uses reinforcement learning, which means it requires no input or training by human operators. It plays against itself numerous times, starting with random moves and gradually updating its value function as it tries different sequences. While reinforcement learning is a very exciting and advanced subset of deep learning, it still has many hurdles to overcome. When left to their own devices, neural networks can get stuck or develop irrational behavior.
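Schematically, the self-play loop looks something like the sketch below: the current value network plays a game against itself (with some random exploration), every visited position is labeled with the final outcome, and the network is then fit to those labels. This is a deliberate caricature of the idea; AlphaZero’s real training couples self-play with Monte Carlo tree search and a joint policy/value network, and the board_cls interface here is a hypothetical placeholder.

```python
import random
import torch
import torch.nn.functional as F

def self_play_game(value_net, board_cls, exploration=0.1):
    """Play one game against itself; return (encoded_state, outcome) pairs."""
    board, history = board_cls(), []
    while not board.is_terminal():
        moves = board.legal_moves()
        if random.random() < exploration:
            move = random.choice(moves)               # occasionally explore
        else:
            # otherwise pick the move whose resulting position the current
            # value net rates worst for the opponent (i.e. best for us)
            move = max(moves, key=lambda m: -value_net(
                board.apply(m).encode().unsqueeze(0)).item())
        history.append(board.encode())
        board = board.apply(move)
    z = board.outcome()                               # +1 / 0 / -1 for the first player
    # label each stored position with the result from the mover's point of view
    return [(s, z if i % 2 == 0 else -z) for i, s in enumerate(history)]

def train_step(value_net, optimizer, samples):
    """One gradient step fitting the value net to self-play outcomes."""
    states = torch.stack([s for s, _ in samples])
    targets = torch.tensor([[float(z)] for _, z in samples])
    loss = F.mse_loss(value_net(states), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```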

In one well-known example, a reinforcement learning agent playing the boat-racing game CoastRunners decided that running around in circles and hitting objects was more rewarding than staying on course and trying to finish the race.

The ingenuity of AlphaZero is that its creators managed to develop techniques that helped it get through self-play without getting stuck. But again, it’s not magic. It’s the right tuning of neural networks and Monte Carlo tree search. AlphaZero doesn’t appreciate its wins. It isn’t using tactics in the sense that humans do. It doesn’t have a mental model of the game. It isn’t trying to read its opponent’s mind. It’s just optimizing to produce a certain type of result, which in this case is a winning move.
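For the curious, much of that “right tuning” comes down to how the search balances the network’s value estimates against exploring moves it has visited less often. Below is a sketch of just the selection step, using the PUCT rule that AlphaZero-style search is built on; the node fields (children, visit_count, value_sum, prior) are hypothetical placeholders, not DeepMind’s actual data structures.

```python
import math

def puct_select(node, c_puct=1.5):
    """Pick the child that balances its current value estimate (exploitation)
    against a prior-weighted bonus for rarely visited moves (exploration)."""
    total_visits = sum(child.visit_count for child in node.children)

    def score(child):
        q = child.value_sum / child.visit_count if child.visit_count else 0.0
        u = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visit_count)
        return q + u

    return max(node.children, key=score)
```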

And let’s not forget that board games are nowhere near as complex as some of the other domains that neural networks and deep learning algorithms have ventured into. In board games, players have full knowledge of the entire environment and take turns making moves. This is a common denominator among chess, shogi and Go, the three games AlphaZero has mastered. Basically, you can train the same network on representations of the board states of different games and obtain acceptable results.

The same can’t be said of other areas where deep learning is being applied, such as self-driving cars or even other games being explored by AI algorithms such as poker or real-time strategy video games.

None of this means that AlphaZero or other applications of deep learning and neural networks should be underestimated or devalued. They’re among the most important and powerful developments of our age.

But that doesn’t mean we should start humanizing deep learning and drawing wrong conclusions. At the end of his article, Strogatz suggests AlphaZero may eventually evolve “into a more general problem-solving algorithm.” But AlphaZero is a statistical beast, and it can master board games because they can be well represented in statistical terms.

AlphaZero may have generalized board games, but general problem-solving requires common sense and abstract thinking, characteristics that are still exclusive to the human mind. The leading voices in artificial intelligence believe that we’re nowhere near creating “general AI,” computers that can match the intellectual and thinking skills of humans.

Then again, when you describe deep learning and neural networks in terms that apply to humans, it’s easy to think they will soon be able to solve any possible problem.




