MEPs back plans for artificial intelligence and robotics, but ethical concerns remain – EURACTIV.com


MEPs in the European Parliament’s Committee on Industry, Research and Energy backed plans on Monday evening (14 January) for a comprehensive policy framework on artificial intelligence (AI) and robotics, weeks after ethical concerns in the field were highlighted in an EU report.

Parliament’s report, though not legally binding, gives a clear signal that MEPs will seek to pressure the Commission to draw up an industrial policy for artificial intelligence and robotics.

“This is a key area and I am pleased that we have been able to make some strong suggestions on AI,” British Conservative MEP Ashley Fox said on Tuesday evening. “The technology is not confined to the boundaries of the single market and it is imperative that the EU work at the international level to agree on standards.”

MEPs noted the potential for AI and robotics to transform a range of sectors, from health and energy to manufacturing and transport, and urged member states to develop new training programmes that cultivate skills in areas likely to be affected by future autonomous technologies.

They also stressed the need for the future development of AI to be governed by a code of ethics that takes into account the importance of liability and transparency.

“It is vital that we ensure that we are able to understand why AI has arrived at certain decisions and that is why we have emphasised in this report the need for explainability. Without such safeguards in place we will not achieve a human centred approach,” Fox said.

AI and ethics

A draft report published by the Commission’s High-Level Expert Group on Artificial Intelligence at the end of last year echoed Fox’s stance on the need for ‘human-centred’ development of AI.

However, the report, which surveyed potential ethical hurdles in the future evolution of AI, also drew attention to a number of critical concerns, such as citizen scoring, covert artificial intelligence, identification technologies – including facial-recognition software – and lethal autonomous weapons systems, otherwise known as ‘killer robots.’

The ‘killer robots’ conundrum has frustrated many in Brussels after the European Parliament adopted a resolution in September 2018 calling for an international ban, stressing that “machines cannot make human-like decisions” and that humans should remain accountable for decisions taken during the course of war.

However, the United Nations has failed to reach a consensus on a blanket ban on the use of killer robots. A bloc of states headed by the US and Russia, and including South Korea and Israel, is said to be against a ban.

Increasing trust in AI

The overarching aim of the Commission’s High Level Group is to foster AI that is trustworthy. Speaking after the publication of the draft report in December, Vice-President for the Digital Single Market Andrus Ansip said: “AI can bring major benefits to our societies, from helping diagnose and cure cancers to reducing energy consumption.”

“But for people to accept and use AI-based systems, they need to trust them, know that their privacy is respected, that decisions are not biased.”

The papers currently on the table constitute a draft of the Commission’s ethical guidelines. The final edition of the report is set to be published in March 2019.
