
Ethical Implications of Artificial Intelligence and the Role of Governments


It’s not surprising when a popular musician sells out a concert in less than fifteen minutes, but would anyone believe a technology conference selling out in just over eleven? That happened this month for the Conference on Neural Information Processing Systems (NIPS), the premier artificial intelligence (AI) conference.

This insatiable appetite exists because AI will undoubtedly dominate the future of every industry, every service and every user experience. Companies and countries alike are competing for AI supremacy, and AI researchers are in high demand, commanding seven-figure salaries around the globe.

As we consider the implications of AI, there is a bright future ahead for countries that invest and participate. China, the United Kingdom, France, the United Arab Emirates and Canada are just some of the 18 regions or countries that have already published national AI strategies to prepare their citizens to compete in the economy of the future. At least five other countries have strategies in the works. Many of these strategies focus on research and development and on reaping the economic benefits of a welcoming environment for AI-focused companies. Some raise concerns about potential ethical issues with AI.

The U.S. is currently not among the countries with a public national AI strategy, and that gap could have lasting implications if we don’t better prepare. Of particular importance is the role of governments in providing guidance, not just for economic security, but for the ethical implications AI will have as it is refined. This is especially true when it comes to warfare and our criminal justice system. The nations choosing to invest in AI now are deciding these norms for the rest of the world, with or without the U.S.

The future of warfare will almost certainly involve AI, but we must determine when, or even if, life-or-death decisions may be made without human oversight. The Pentagon’s current policy requires “appropriate levels of human judgment over the use of force,” which allows for flexibility in determining human oversight. But is that enough? It may be, but we need a national strategy to understand the implications of that decision in a broader context.

In the fall, the United Nations convened a meeting about lethal autonomous weapon systems, and a small number of countries, including Russia, Israel, South Korea, Australia and the U.S., prevented consensus on a ban on what some call ‘killer robots.’ The U.S. argued that outlawing technology with great potential to also improve civilian safety is premature. If that potential is real, the U.S. must also put its money where its mouth is and take leadership in AI development to serve the greater good in this context.

Not only can AI improve civilian safety, it will vastly improve battlefield logistics, creating better conditions for men and women in uniform. There is a scenario in which AI keeps U.S. soldiers off the battlefield entirely, whether by serving as a strategic advantage that prevents conflict or by finding the critical piece of information within surveillance footage. At a minimum, it can be used to improve the maintenance of equipment, the movement of supplies and other logistics that otherwise could leave our soldiers lacking in moments of critical need.

The recent uproar from Google employees over Project Maven, the Department of Defense’s project that uses AI to help analyze large amounts of captured surveillance footage, highlights how the benefit of this technology can be interpreted differently. According to reports, Project Maven was being used to distinguish vehicles from people, but many Google employees did not want to be part of a program that could potentially be used in “targeted killing,” so the company decided not to seek an additional contract. It is possible to see why Project Maven could be viewed in that light, but this technology could also reduce civilian deaths by distinguishing between a truck of terrorists with weapons and a family traveling from one place to the next. Failing to capitalize on these capabilities, and lacking a national strategy that defines their limitations, will result in collateral damage.

Next, consider the benefits of incorporating AI in our criminal justice system. It will take a fraction of the time to process rape kits and DNA samples, reducing investigation times from months to days. Simultaneously, we will need to thoughtfully determine when predictive analytics for policing, sentencing and parole decisions infringe on individual privacy rights or are vulnerable to entrenched systemic bias. For example, a recent report from ProPublica found that AI risk assessments used to predict the likelihood of criminal recidivism, which inform probation, treatment and sentencing decisions, were biased against African-American offenders. In addition to racial disparities, the algorithm was unreliable in forecasting violent crime. The study raises serious concerns about the use of the technology and illustrates the critical need for further testing and refinement.

As we continue to apply AI to new fields, ethical dilemmas will arise and the answers will not be clearly defined. The path forward requires leadership from governments around the world to wrestle with these challenges and chart a course that balances progress with the protections that humanity deserves.

The U.S. government, both the executive and legislative branches, has an incredibly important role to play in this discussion, which is why the House IT Subcommittee that I chair held a series of hearings on AI and released a white paper with recommendations and lessons learned. Should we abdicate leadership in this arena, at best we miss out on a piece of the economic pie, and at worst we leave a vacuum for authoritarian countries to put their spin on global standards for warfare, privacy and bias.

At a time when China is already assigning its citizens social scores that can prohibit travel and Russia is building robots for combat, the U.S. is behind and has the most to lose.

Originally published in The Hill.




