The American public is already worried about AI catastrophe


For decades, some researchers have been arguing that general artificial intelligence will, if improperly deployed, harm and possibly endanger humanity. For a long time, these worries were either unfamiliar to most people or on the back burner — AI looked to be a long way away.

In the past few years, though, that has been changing. We’ve seen major advances in what AI systems can do — and a new report from the Center for the Governance of AI suggests that many people are concerned about where AI will lead next.

“People are not convinced that advanced AI will be to the benefit of humanity,” Allan Dafoe, an associate professor of international politics of artificial intelligence at Oxford and a co-author of the report, told me.

The Center for the Governance of AI is part of Oxford University’s Future of Humanity Institute. The new report, by Dafoe and Yale’s Baobao Zhang, is based on a 2018 survey of 2,000 US adults.

There are two big surprises in the report. The first is that the median respondent expects big developments in AI within the next decade — and, relatedly, the median respondent is nearly as concerned with "big picture" issues like artificial general intelligence as with nearer-term ones like data privacy and cyberattacks.

The second is that concern about risks from AI, often stereotyped as a concern exclusively of Silicon Valley software engineers inflating the importance of their own work, is actually common at all income levels and for all backgrounds — with low-income people and women being the most concerned about AI.

Surveying the general public isn't a good way to learn whether AI is actually a risk, of course — small differences in phrasing can affect responses dramatically, and especially on a topic as contentious as AI, the public can simply be misinformed. But surveys like this still matter for AI policy work — they help researchers identify which AI safety concerns are now mainstream and which are misunderstood, and they paint a clearer picture of how the public views transformative technology on the horizon.

What concerns about AI do people have?

The word AI is used to refer both to present-day technology like Siri, Google Translate, and IBM’s Watson and to transformative future technologies that surpass human capabilities in all areas. That means surveying people about “risks from AI” is a fraught project — some of them will be thinking about Facebook’s News Feed, and some of them, like Stephen Hawking, about technologies that exceed our intelligence “by more than ours exceeds that of snails.”

The survey handled this by identifying 13 possible challenges from AI systems. Each respondent saw five of the 13 and was asked to rate each of those five on a scale from 0 (not at all important) to 3 (very important).
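To make that design concrete, here is a minimal sketch of how such a partially randomized questionnaire could be simulated and aggregated. It is written in Python with invented issue labels and random ratings; the labels, sample size, and numbers are placeholders for illustration, not the survey's actual items or data.

```python
import random
from collections import defaultdict

# Hypothetical stand-ins for the 13 governance challenges in the survey;
# these labels and the simulated ratings are illustrative, not the real data.
ISSUES = [
    "data privacy", "digital manipulation", "cyberattacks", "surveillance",
    "autonomous weapons", "technological unemployment", "value alignment",
    "critical AI safety failures", "algorithmic bias", "hiring discrimination",
    "autonomous vehicles", "disease diagnosis errors", "criminal justice bias",
]

def simulate_survey(n_respondents=2000, seed=0):
    """Each simulated respondent rates 5 randomly chosen issues from 0 to 3."""
    rng = random.Random(seed)
    totals, counts = defaultdict(float), defaultdict(int)
    for _ in range(n_respondents):
        for issue in rng.sample(ISSUES, 5):   # each person sees 5 of the 13
            rating = rng.randint(0, 3)        # 0 = not important, 3 = very important
            totals[issue] += rating
            counts[issue] += 1
    # Average importance per issue, sorted from highest to lowest
    return sorted(((totals[i] / counts[i], i) for i in ISSUES), reverse=True)

for mean_rating, issue in simulate_survey():
    print(f"{issue:30s} {mean_rating:.2f}")
```

Because each respondent sees a uniformly random subset, every issue ends up with roughly the same number of ratings, which is what makes the per-issue averages comparable across the 13 challenges.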

Among the concerns respondents rated most likely to impact large numbers of people, and most urgent for tech companies and governments to tackle, were data privacy, digital manipulation (for example, fake images), AI-enhanced cyberattacks, and surveillance. But the most striking result was that respondents were also deeply concerned with more “long-term” concerns.

Many people who talk about AI safety distinguish between the problems we’re already having today — with algorithmic bias, transparency, and interpretability of AI systems — and the problems that won’t arise until AI systems are vastly more capable than they are today, like extinction risks from general artificial intelligence. Other experts think this is a false dichotomy — the reason general artificial intelligence will be so dangerous is that the machine learning systems we have today often pursue their goals in unexpected ways, and their behavior can get more unpredictable as they get more powerful.

Survey respondents on average ranked tomorrow’s AI concerns — like technological unemployment, failures of “value alignment” (failing to design systems that share our goals), and “critical AI safety failures” that kill at least 10 percent of people on Earth — as nearly as important as present-day concerns. “The public regards as important the whole space of AI governance issues, including privacy, fairness, autonomous weapons, unemployment, and other risks that may arise from advanced AI,” Dafoe told me. That might suggest that policymakers should be trying to address all these issues hand in hand — and that it’d be a mistake to ignore any.

Who’s afraid of risks from advanced artificial intelligence?

Fears of risks from advanced artificial intelligence are often attributed to Silicon Valley, and sometimes covered as if they’re yet another fad out of the Bay Area tech community. “If Silicon Valley Types Are Scared of A.I., Should We Be?” wondered an article in Slate in 2017, worrying that risks from AI might be “a grandiose delusion, on the part of computer programmers and tech entrepreneurs and other cloistered egomaniacal geeks.”

The report suggests that gets it exactly wrong. An overwhelming majority of Americans — 82 percent — “believe that robots and/or AI should be carefully managed,” Zhang and Dafoe write, noting this is “comparable to survey results from EU respondents.” Men are less concerned than women, high-income Americans are less concerned than low-income Americans, and programmers are less concerned than people working in other fields.

Not only are high-income programmers and tech entrepreneurs far from the only ones concerned with AI risk, they are, as a group, more optimistic about AI than most respondents. “People who have CS or engineering degrees or CS or programming experience seem to be more supportive of developing AI and seem to be less concerned with these AI governance challenges we ask about,” Zhang said. (Of course, many prominent computer scientists and machine learning researchers are also among those calling for AI safety research.)

Do different demographic groups fear different AI scenarios, though? For example, is it the case that programmers and tech entrepreneurs are more concerned with disastrous AI system deployments, while low-income respondents fear technological unemployment?

Slicing a survey into demographic cross-sections like that can introduce spurious results, so a definitive answer to this question will need a lot more data, but there's no indication that's what's going on here. Fears of disastrous system deployments and fears of data privacy problems aren't held by disparate groups of people; most respondents rated both highly. It might be time to lay the "AI is a rich techie concern" stereotype to rest — AI will affect everyone, and this poll suggests that almost everyone has some reservations about it.

The public expects huge advances in AI — soon

Expert estimates of when we can next expect big advances in AI vary immensely. While some expect to keep building on the momentum of recent years and deploy world-altering systems within the next few decades, others have argued that general AI might be centuries off.

The general public, according to the new report, expects progress quickly. The survey asked respondents to predict “when machines are able to perform almost all tasks that are economically relevant today better than the median human.” That would be a sea change in the global economy. The median respondent predicted a 54 percent chance of AI with those capabilities by 2028.

This, as the report notes, is "considerably sooner than the predictions by experts in two previous surveys. In Müller and Bostrom (2014), expert respondents predict a 50 percent probability of high-level human intelligence being developed by 2040-2050 and 90 percent by 2075. In Grace et al. (2018), experts predict that there is a 50 percent chance that high-level machine intelligence will be built by 2061."

Part of the difference might be that Zhang and Dafoe ask about an AI that surpasses median human capabilities, while Grace et al. asked about an AI that surpasses most human capabilities — but Zhang and Dafoe still found a gap between popular and expert opinion when they asked the exact same question Grace et al. had put to experts.

Some machine learning researchers worry that high public expectations about AI could actually kill the industry: if results don't arrive as quickly as people expect, the public may grow disillusioned, and there will be less public pressure for good AI policy once people have dismissed the technology.

If this survey is right — and, again, it's just one survey — it looks like the public is paying attention to advances in AI and is apprehensive about future advances. That doesn't mean public expectations necessarily match machine learning researchers' best understanding of which problems are the key ones ahead. Progress toward safe deployment of AI systems takes more than public interest, but that interest nonetheless suggests that AI safety may be starting to go mainstream.

