Artificial intelligence tools like GPT-3 can mimic human writing fairly convincingly. But fundamentally, they're learning patterns and emulating them; they don't understand what they're actually saying.
To train AI to use language with intent, scientists from Facebook and the Georgia Institute of Technology dropped an algorithm into the middle of a roleplaying game, MIT Technology Review reports. The idea was that by giving the AI a specific quest to complete, they could teach it to pursue goals like a human gamer, and maybe even move toward a deeper understanding of language.
In the game, the AI agent played as a dragon who was tasked with specific missions like hoarding gold. To succeed, it had to speak to other AI agents or humans playing the game by inputting specific commands, just like in any other text-based adventure.
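A text-based adventure of this kind boils down to a simple loop: the agent emits a command string, the game engine updates its state, and a textual observation comes back. The sketch below is a minimal, generic illustration of that loop, assuming invented names (`World`, `execute`, the "say"/"get gold" commands); it is not the researchers' actual environment or API.

```python
class World:
    """A toy text-adventure engine tracking the dragon's gold hoard.

    Illustrative only: the real research environment is far richer,
    with multiple agents, dialogue, and quest tracking.
    """

    def __init__(self):
        self.gold = 0          # how much gold the dragon has hoarded
        self.log = []          # record of utterances, (speaker, text)

    def execute(self, command):
        """Apply one typed command and return the textual observation."""
        if command.startswith("say "):
            # Dialogue action: the dragon speaks to other characters.
            self.log.append(("dragon", command[4:]))
            return "You speak."
        if command == "get gold":
            # Physical action toward the quest goal of hoarding gold.
            self.gold += 1
            return f"You hoard gold. Total: {self.gold}"
        return "Nothing happens."


world = World()
print(world.execute("say Give me your gold or face my fire!"))
print(world.execute("get gold"))  # advances the hoarding quest
```

The key point the toy captures is that speech ("say ...") and game actions ("get gold") flow through the same command channel, so the agent must learn when talking, rather than acting, is the path to its goal.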
The results were a bit clunky: the dragon would make nonsensical threats to get other characters to do what it wanted. But in the end, the AI agent succeeded, according to the team's research.
The dragon did need some help along the way. The researchers preprogrammed it with an understanding of how different characters might interact, which they described as giving it a sort of common-sense understanding of the game.
But even with that assist, the results are an exciting glimpse of future AI that might be able to process what words actually mean — and what impact they may have in the real world.
READ MORE: How role-playing a dragon can teach an AI to manipulate and persuade [MIT Technology Review]
More on AI games: This Roleplaying AI Makes a Great Dungeon Master