Artificial Intelligence

Pentagon issues artificial intelligence (AI) ethics guidelines to safeguard access to the latest technology

WASHINGTON – Thousands of Google employees protested in 2018 when they found out about their company’s involvement in Project Maven — a controversial U.S. military effort to develop artificial intelligence (AI) to analyze surveillance video, MIT Technology Review reports.

The Military & Aerospace Electronics take:

9 Dec. 2021 — Officials of the U.S. Department of Defense (DOD) know they have a trust problem with Big Tech — something they must tackle to maintain access to the latest technology.

In a bid to promote transparency, the Defense Innovation Unit, which awards DOD contracts to companies, has released what it calls “responsible artificial intelligence” guidelines that it will require third-party developers to use when building AI for the military, whether that AI is for an HR system or target recognition.

The AI ethics guidelines provide a step-by-step process for companies to follow during planning, development, and deployment. They include procedures for identifying who might use the technology, who might be harmed by it, what those harms might be, and how they might be avoided—both before the system is built and once it is up and running.

Related: Artificial intelligence (AI) in unmanned vehicles

Related: Artificial intelligence and machine learning for unmanned vehicles

Related: Ethical artificial intelligence (AI) must be responsible; equitable; traceable; reliable; and governable

John Keller, chief editor
Military & Aerospace Electronics



