
DARPA to Tackle Ethics of Artificial Intelligence


Defense Advanced Research Projects Agency (DARPA) officials will include a panel discussion on ethical and legal issues at the Artificial Intelligence (AI) Colloquium being held March 6-7 in Alexandria, Virginia.

“We’re looking at the ethical, legal and social implications of our technologies, particularly as they become powerful and democratized in a way,” reveals John Everett, deputy director of DARPA’s Information Innovation Office.

Questions abound regarding the ethical and legal implications of AI, such as who is responsible if a self-driving automobile runs over a pedestrian, or whether military weapon systems should have a “human in the loop” controlling unmanned systems to prevent mistakes on the battlefield. Those questions become more acute as AI becomes more prevalent. “A lot of the technology of the 20th century was not widely accessible to people. That’s not true anymore. You have high school students editing genes,” Everett notes. “AI is accessible to people with a laptop, an Internet connection and maybe some cloud [computing] time. So, we need to be thinking about the broader implications of these technologies.”

For the military, autonomous weapons are the “perennial concern,” he adds. “There are actually other concerns that are probably going to become more important. If you see an autonomous tank driving across a plain toward troops, in some sense, does it matter if it’s driven by a human or computer? It’s autonomous. It’s got either a computer or a biological computer—a human—running it.”

Defense Department policy requires a human in the loop for any lethal action, but adversaries such as Russia or China are exploring fully autonomous weapon systems that could prove to be faster and more lethal than human-operated systems. “Then there’s an ethical question of force protection. How do we make sure we’re protecting our forces, particularly if they have an adversary that may not be as concerned about collateral damage? They have a more aggressive stance toward autonomous systems, so we need to understand if it’s possible to provide adequate force protection,” Everett says.  

Another concern is that systems built and programmed by humans will exhibit human biases. “The more subtle ethical issues will arise in systems where they’re composed of many different machine learning systems, and the results are not catastrophically wrong, but they’re biased, and they’re systematically biased,” Everett offers. “They’re biased in such a way that they have subtle impacts on groups of people, for example. And that could have a substantial societal cost if we don’t understand ways to test for bias, to detect it, prove that it’s there and to correct it.”
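Everett’s call to test for bias has a concrete counterpart in machine learning practice. As a minimal illustration only, and not a method DARPA has endorsed, the Python sketch below computes one common fairness metric, the demographic parity gap: the spread in a model’s favorable-decision rates across groups. The loan-approval data, group labels and 0.1 flagging threshold are all hypothetical.

```python
# Minimal sketch of one common bias test: the demographic parity gap.
# All data here is toy/hypothetical; the 0.1 threshold is an illustrative
# convention, not a standard from the article or from DARPA.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group rates), where gap is the largest difference
    in favorable-decision rates between any two groups.

    predictions: iterable of 0/1 model outputs (1 = favorable decision)
    groups: iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: a loan-approval model's decisions for two groups.
preds  = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"approval rates: {rates}")  # {'A': 0.8, 'B': 0.2}
print(f"parity gap: {gap:.2f}")    # 0.60 -- a large, systematic disparity
print("flag for review:", gap > 0.1)
```

Detecting a gap like this is the easy part; proving it is systematic rather than sampling noise, and correcting it without degrading accuracy, are the harder open problems Everett alludes to.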

Privacy also is an issue to consider, says Valerie Browning, director of DARPA’s Defense Sciences Office. “As AI becomes more prevalent, AI tools are used for diagnostics or in any form where it’s taking something particular about you to make a recommendation. It could all be completely well intentioned, but there is that issue of you putting some information about yourself somewhere that it’s very accessible [to others],” she notes.

Additionally, AI-enabled prosthetics and implanted medical devices could be accessible to nefarious hackers. Former Vice President Dick Cheney revealed in 2013 that the wireless function on his heart implant had to be disabled for fear terrorists might hack into it. “We have prosthetics that are being controlled by the brain. AI is going to play an increasing role in training these systems,” Browning explains. “There could be vulnerabilities, attack surfaces in AI. There’s a cyber aspect to that [that] we have to worry about.”

During the AI Colloquium, DARPA researchers and program managers will discuss work that is advancing the fundamentals of AI, as well as programs exploring the technology’s application to defense-relevant challenges, from cyber defense and software engineering to aviation and spectrum management. The event also will feature an update on DARPA’s AI Exploration program, a series of high-risk, high-reward projects that seek to establish the feasibility of new AI concepts within 18 months of award, along with poster sessions that give attendees the opportunity to engage with researchers actively involved in DARPA’s AI programs.


