Artificial Intelligence

European Union mulls tougher new rules for artificial intelligence


The European Union is considering new, legally binding requirements for developers of artificial intelligence in an effort to ensure the technology is developed and used ethically.

The EU’s executive arm is set to propose that the new rules apply to “high-risk sectors,” such as health care and transport, and to suggest that the bloc update its safety and liability laws, according to a draft of a so-called “white paper” on artificial intelligence obtained by Bloomberg. The European Commission is due to unveil the paper in mid-February, and the final version is likely to change.

The paper is part of the EU’s broader effort to catch up to the U.S. and China on advancements in AI, but in a way that promotes European values such as user privacy. While some critics have long argued that stringent data protection laws like the EU’s could hinder innovation around AI, EU officials say harmonizing rules across the region will boost development.

European Commission President Ursula von der Leyen has pledged that her team will present a new legislative approach to artificial intelligence within the first 100 days of her mandate, which started Dec. 1, and has handed coordination of the task to the EU’s digital chief, Margrethe Vestager.

A spokesman for the Brussels-based Commission declined to comment on leaks but added: “To maximize the benefits and address the challenges of Artificial Intelligence, Europe has to act as one and will define its own way, a human way. Trust and security of EU citizens will therefore be at the center of the EU’s strategy.”

The EU is also considering new obligations for public authorities around the deployment of facial recognition technology and more detailed rules on the use of such systems in public spaces. However, the provision on facial recognition isn’t among the three policy options officials recommend that the Commission pursue.

The provision suggests prohibiting use of facial recognition by public and private actors in public spaces for several years to allow time to assess the risks of such technology.

“Such a ban would be a far-reaching measure that might hamper the development and uptake of this technology,” the commission says in the document, adding that it’s therefore preferable to focus on implementing relevant provisions in the EU’s existing data protection laws.

As part of the recommended policy measures, the EU also wants to urge its member states to appoint authorities to monitor the enforcement of any future rules governing the use of AI, according to the document.

In the draft, the EU defines high-risk applications as “applications of artificial intelligence which can produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage for the individual or the legal entity.”

Artificial intelligence is already subject to a variety of European regulations, including rules on fundamental rights such as privacy and nondiscrimination, as well as product safety and liability laws, but those rules may not fully cover all the specific risks posed by new technologies, the Commission says in the document. For instance, product safety laws currently wouldn’t apply to services based on AI.