Artificial Intelligence

Artificial intelligence needs global ground rules


In 1986, the nanotechnologist Eric Drexler invented the concept of grey goo: the idea that a nano-replicator, a minuscule machine, could make copies of itself at an exponential pace, covering the earth and obliterating life. “Dangerous replicators could easily be too tough, small and rapidly spreading to stop,” he wrote.

The runaway-tech scenario was updated for artificial intelligence in 2003 by the Oxford philosopher Nick Bostrom. This time it featured a superintelligent AI programmed to make paper clips, single-mindedly using all available resources on earth to do so and outwitting any human attempt to hinder its paper clip-maximising goal.

The world is no more likely to be covered in paper clips now than it was by grey goo in the 1980s. The point of such thought experiments is to focus attention on how to use and control powerful new technologies, while underlining the fears they always arouse.

In the AI case, the fear has a geopolitical twist, thanks to the uncomfortable broader relationship between the two AI superpowers, the US and China. Beijing has been working through the UN’s International Telecommunication Union to set standards for facial recognition technology that give advantages to Chinese groups and raise concerns about how the technology may be used.

Standard-setting through international bodies occurs in all industries, and standards wars are not uncommon — the battle between the VHS and Betamax videotape formats (remember those?) is a well-known example. Generally, industry competitors and regulators tussle over international standards until a winner emerges and the technical details are codified. Sometimes — as with the smartphone operating systems iOS and Android — different standards coexist for long periods.

But the grey goo/paper clip scenarios concern the use and misuse of powerful new technologies, rather than common rules about how they work. The US might not want to buy Chinese facial recognition technology for commercial or economic reasons. But if it did, America could in principle set its own terms for the technology’s use. Although there is concern that the ITU draft goes beyond technical details to use cases, it seems unlikely that an international standard would force any government to allow what it considers to be serious breaches of citizens’ rights.

Still, the controversy over facial recognition technology underlines the need to develop common rules for future uses of AI. Several general statements of principles already exist, including one from the OECD and the G20. That raises the questions of whether additional international regulation is necessary and whether it could ever be agreed, given how much China and the US seem to diverge on acceptable uses.

There are at least two reasons for cautious optimism. One is that the general deployment of AI is not really — as it is often described — an “arms race” (although its use in weapons is indeed a ratchet in the global arms race). In the context of a commercial product, the metaphor should not be taken literally. Indeed, economists generally think competition — a less fraught term for arms races — is a good thing, spurring better products and innovation.

The other reason for thinking some consensus on limiting adverse uses of AI may be possible is that there are relatively few parties to any discussions, at least for now.

At the moment, only big companies and governments can afford the hardware and computing power needed to run cutting-edge AI applications; others are renting access through cloud services. This size barrier could make it easier to establish some ground rules before the technology becomes more accessible.

Even so, previous scares offer a hopeful lesson: fears of bioterrorism or nano-terrorism — labelled “weapons of knowledge-enabled mass destruction” in a famous 2000 Wired article by Bill Joy — seem to have overlooked the fact that advanced technology use depends on complex organisational structures and tacit knowhow, as well as bits of code.

There are other reasons why more AI regulation is desirable. Businesses want clarity on the rules of use. Recent research suggests that the more managers learn about the complexity of the emerging regulatory landscape, the less they want to bother with using AI. To assuage fears and to enable businesses to deploy AI for benign uses, we need to move the global regulatory conversation beyond high-minded statements of principles and sharp-elbowed fights over standards. It is time for some serious conversations about this powerful new technology.

 
The writer is Bennett Professor of Public Policy at the University of Cambridge


