War and artificial intelligence: Who’s to blame when something goes wrong?

To pull the trigger—or, as the case may be, not to pull it. To hit the button, or to hold off. Legally—and ethically—the role of the soldier’s decision in matters of life and death is preeminent and indispensable. Fundamentally, it is these decisions that define the human act of war.

It should be of little surprise, then, that states and civil society have taken up the question of intelligent autonomous weapons—weapons that can select and fire upon targets without any human input—as a matter of serious concern. 

Meanwhile, intelligent systems that merely guide the hand that pulls the trigger have been gaining purchase in the warmaker's tool kit. And they've quietly become sophisticated enough to raise novel questions—ones that are trickier to answer than the well-covered wrangles over killer robots and, with each passing day, more urgent: What does it mean when a decision is only part human and part machine? And when, if ever, is it ethical for that decision to be a decision to kill?

Of the US Department of Defense's five "ethical principles for artificial intelligence," which are phrased as qualities, the one that's always listed first is "Responsible." In practice, this means that when things go wrong, someone—a human, not a machine—has to be left holding the bag.

This is an excerpt from a longer article.