The report, adopted on Wednesday with 364 votes in favour, 274 against and 52 abstentions, calls for an EU legal framework on AI with definitions and ethical principles, including for its military use. It also calls on the EU and its member states to ensure AI and related technologies are human-centred (i.e. intended for the service of humanity and the common good).
Military use and human oversight
MEPs stress that human dignity and human rights must be respected in all EU defence-related activities. AI-enabled systems must allow humans to exert meaningful control, so they can assume responsibility and accountability for their use.
The use of lethal autonomous weapon systems (LAWS) raises fundamental ethical and legal questions about human control, say MEPs, reiterating their call for an EU strategy to prohibit them and for a ban on so-called “killer robots”. The decision to select a target and take lethal action using an autonomous weapon system must always be made by a human exercising meaningful control and judgement, in line with the principles of proportionality and necessity.
The text calls on the EU to take a leading role in creating and promoting a global framework governing the military use of AI, alongside the UN and the international community.
AI in the public sector
The increased use of AI systems in public services, especially healthcare and justice, should not replace human contact or lead to discrimination, MEPs assert. People should always be informed if they are subject to a decision based on AI and be given the option to appeal it.
When AI is used in matters of public health (e.g. robot-assisted surgery, smart prostheses, predictive medicine), patients’ personal data must be protected and the principle of equal treatment upheld. While the use of AI technologies in the justice sector can help speed up proceedings and lead to more rational decisions, final court decisions must be taken by humans, be strictly verified by a person and be subject to due process.
Mass surveillance and deepfakes
MEPs also warn of threats to fundamental human rights and state sovereignty arising from the use of AI technologies in mass civil and military surveillance. They call for public authorities to be banned from using “highly intrusive social scoring applications” (for monitoring and rating citizens). The report also raises concerns over “deepfake technologies”, which have the potential to “destabilise countries, spread disinformation and influence elections”. Creators should be obliged to label such material as “not original”, and more research should be done into technologies to counter this phenomenon.
Rapporteur Gilles Lebreton (ID, FR) said: “Faced with the multiple challenges posed by the development of AI, we need legal responses. To prepare the Commission’s legislative proposal on this subject, this report aims to put in place a framework which essentially recalls that, in any area, especially in the military field and in those managed by the state such as justice and health, AI must always remain a tool used only to assist decision-making or help when taking action. It must never replace or relieve humans of their responsibility”.