
Ex-Google exec describes 4 top dangers of artificial intelligence


California’s Senate last week advanced a bill that would force Amazon (AMZN) to reveal details behind the productivity-tracking algorithm used in its warehouses; meanwhile, Facebook (FB) this week faced criticism over a Wall Street Journal report finding it knows its Instagram feed makes some teenage girls feel worse about themselves.

These developments represent a backlash not so much against big tech itself as against its algorithms, which use artificial intelligence (AI) to adapt their behavior to individual users or employees.

In a new interview, AI expert Kai-Fu Lee — who worked as an executive at Google (GOOG, GOOGL), Apple (AAPL), and Microsoft (MSFT) — explained the top four dangers of burgeoning AI technology: externalities, personal data risks, inability to explain consequential choices, and warfare.

“The single largest danger is autonomous weapons,” he says.

“That’s when AI can be trained to kill, and more specifically trained to assassinate,” adds Lee, the co-author of a new book entitled “AI 2041: Ten Visions for Our Future.” “Imagine a drone that can fly itself and seek specific people out either with facial recognition or cell signals or whatever.”

‘It changes the future of warfare’

A ban on autonomous weapons has drawn support from 30 countries, though an in-depth report commissioned by Congress advised the U.S. to oppose a ban, since it could prevent the country from using weapons already in its possession. 

In 2015, prominent figures in tech like Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, as well as thousands of AI researchers, signed an open letter calling for a ban on such weapons.


Autonomous weapons will transform warfare since their affordability and precision will make it easier to wreak havoc and near-impossible to identify who committed the crime, Lee said.

“I think that changes the future of terrorism, because no longer are terrorists potentially losing their lives to do something bad,” he says. “It also allows a terrorist group to use 10,000 of these drones to perform something as terrible as genocide.”

“It changes the future of warfare,” he adds. “We need to figure out how to ban or regulate it.”

The second significant risk posed by artificial intelligence is the unintended negative consequences that result when AI fixates on a single goal but excludes other concerns, Lee said.

“Externalities happen when AI is told to do something, and it’s so good at doing that thing that it forgets — or actually ignores — other externalities or negative impacts that it may cause,” he says.

“So when YouTube keeps sending us videos that we’re most likely to click on, it’s not only not thinking about serendipity, it’s also potentially sending me very negative views or very one-sided views that might shape my thinking,” he adds.

This risk may be at play in the user feed presented on Instagram. According to a report from the Wall Street Journal, internal Facebook research from 2019 found that Instagram made body image issues worse for one in three teenage girls who were on the app and experiencing such issues.

Teens attributed increased rates of anxiety and depression to Instagram, the internal Facebook report found, according to the Journal. For its part, Facebook is testing a way to ask users whether they want to take a break from Instagram, two of its researchers told the Journal. The researchers also noted that some of their studies involved a small number of users, and that in some cases the causality of their findings was unclear.

Activists from the Campaign to Stop Killer Robots, a coalition of non-governmental organisations opposing lethal autonomous weapons or so-called ‘killer robots’, stage a protest at Brandenburg Gate in Berlin, Germany, March 21, 2019. REUTERS/Annegret Hilse

Kai-Fu Lee has been at the center of AI development for decades, ever since he helped develop speech recognition and automated speech technology as a doctoral student at Carnegie Mellon University.

Since 2009, he has served as the CEO of Sinovation Ventures, a tech-focused venture capital firm in China with over $2.5 billion in assets under management.

Speaking to Yahoo Finance, Lee cited a final set of AI dangers around vulnerable personal data and an inability to explain consequential decisions made by the technology. 

Decisions made by AI are especially crucial in life-or-death situations, Lee said, such as the thought experiment known as the trolley problem, in which a decision-maker must choose whether to divert a runaway trolley away from the many people in its path, at the cost of killing fewer people on an alternate track.

“Can AI explain to us why it made decisions that it made?” he says. “In four key things like driving autonomous vehicles, the trolley problem, medical decision-making, surgeries.”

“It gets serious,” he adds. 





