Artificial Intelligence

Orgalim input into the European Commission consultation on “Artificial Intelligence – ethical and legal requirements”

Brussels, 10 September 2020

Orgalim strongly shares the view that, by powering digital transformation across sectors, Artificial Intelligence (henceforth ‘AI’) can drive economic growth while enabling new solutions to challenges in areas from energy to healthcare, from manufacturing to mobility, and beyond. Europe is already a leader in industrial AI, and an enabling framework for future technological developments will be the key to building on these strengths and securing our competitive edge in the world. This view has already been expressed in our AI Manifesto of January 2020, which provided insights into what industrial AI entails for the European economy.

Orgalim supports Option 1

With this paper, we aim to provide input into the public consultation launched by the European Commission on “Artificial Intelligence – ethical and legal requirements”. We strongly endorse the overall policy objective of ensuring the development and uptake of trustworthy AI across the Single Market. As the inception impact assessment outlines various options, Orgalim would like to affirm its support for Option 1 of the alternative options to the baseline scenario – i.e. an EU ‘soft law’, non-legislative approach to facilitate and encourage industry-led intervention (with no EU legislative instrument). The arguments in favour of this option are detailed further below. In addition, we believe that a ‘soft law’ approach could build upon existing national initiatives and encourage a quicker transformation of the industrial world by using AI-driven systems to automate and reinvent fundamental industrial processes, from product development and manufacturing to supply-chain and field operations.

Orgalim could also envisage the potential of implementing Option 4. However, this requires more detailed analysis and discussion with industry, especially when it comes to the different levels of risk generated by AI applications. We would see this option only as a combination of the other options, built on a ‘soft law’ foundation.

The need for careful analysis

Orgalim believes that, before choosing any option, existing regulation needs to be carefully analysed, potential gaps precisely formulated, and the right tools adequately proposed, based on a realistic definition of AI. For the manufacturing sector, the most important aspect to keep in mind is that AI is not a product, but a technology embedded in products (applications), which puts all concerns related to AI into another perspective. More AI requirements within the EU product legislation should be avoided.

Orgalim represents Europe’s technology industries, comprising 770,000 companies that innovate at the crossroads of digital and physical technology. Our industries develop and manufacture the products, systems and services that enable a prosperous and sustainable future. Ranging from large globally active corporations to regionally anchored small and medium-sized enterprises, the companies we represent directly employ 11.5 million people across Europe and generate an annual turnover of over €2,100 billion. Orgalim is registered under the European Union Transparency Register – ID number: 20210641335-88.

Orgalim aisbl | BluePoint Brussels | Boulevard A Reyers 80 | B1030 Brussels | Belgium

+32 2 206 68 83 | secretariat@orgalim.eu | www.orgalim.eu | VAT BE 0414 341 438

Industries all over the world will face a great challenge in becoming carbon neutral. Data and AI will be the major tools for adjusting and reforming processes to make them more efficient. Regulation should limit neither the development nor the deployment of new technologies that will make our industries not only greener but also more productive. European companies should be the pioneers of this development.

Not one AI, not one approach

Today AI is often perceived by the general public and policymakers alike as one broad umbrella notion, creating the impression that there is a single, integral ‘AI’ that can be tackled uniformly, and eventually regulated as such. However, a closer look at the technologies being developed and classified as AI shows this perception to be incorrect. The applications that might be deemed AI-based or AI-operated systems are very diverse, ranging from a driverless car to a smart toothbrush, a robot companion, or a non-embedded expert system for medical diagnosis.

AI and other emerging digital technologies, such as the Internet of Things or distributed ledger technologies, have the potential to transform our societies and economies for the better. However, their rollout must not automatically be accompanied by new regulation. New regulation should be introduced only where it is necessary and delivers clear benefits (e.g. supporting the uptake of new technologies by creating a level playing field, ensuring safety, etc.), and with reference to industry standards that reflect the state of the art.

Recognition of the challenges, varying degrees of risk

Orgalim stands for an enabling and innovation-friendly AI framework in Europe, anchored in our democratic values. We recognise that AI applications might bring new challenges, including for the industrial sectors represented by Orgalim. It is important for policymakers to differentiate between the varying degrees of risk linked to the use of AI technologies in their different applications. An indiscriminate understanding of AI technologies risks hampering innovation and creating uncertainty, and should be avoided.

Clear criteria should be established for identifying critical areas in a way that is legally certain. In Orgalim’s view, the quality of any future regulation will depend on the ability to identify a common, transparent and easily applicable understanding of ‘high-risk’. High-risk situations should be defined in cooperation with industry, based on risk-benefit considerations, and adjusted when necessary. A clear definition of the criteria for perceived high-risk applications and the degree of autonomy is crucial, in order to avoid over-regulation of completely harmless automation. In mechanical engineering, for example, risk assessments show that AI applications are generally safe and uncritical.

When something has been identified as a high-risk application (which we believe will be a minority of industrial AI applications), a targeted approach to risk management could be the right one. Taking this into account, it can for instance be concluded that most industrial AI use cases have entirely different ethical implications compared to AI solutions for end-consumers. It is crucial that the framework for identifying high-risk use cases is predictable and proportionate in order to create a stable environment for investment.

Defining AI

From a policy-making perspective, clearly identifying the object to be regulated is essential. In the absence of a precise definition, which is currently the case for AI, the scope of any intended regulation would be uncertain, potentially being either over- or under-inclusive, and triggering litigation. In such a sensitive field of technological innovation, this uncertainty might also hamper the very development of desirable technologies, and ultimately harm the market that well-conceived regulatory intervention otherwise aims to foster. A correct understanding of AI is also necessary to help the general public to fully understand what these technologies entail, what changes they might bring about, and how they might affect – where relevant – our ways of life and our rights.


Orgalim would like to highlight a definition of AI, as outlined in our previous position papers1: AI refers to computer systems based on algorithms designed by humans that, given a complex task, operate by processing the structured or unstructured data collected in their environment according to a set of instructions, determining the best step(s) to take to perform the given task, via software or hardware actuators. AI computer systems can also adapt their actions by analysing how the environment is affected by their previous actions.

This definition is similar to the definition given by the Commission’s High-Level Expert Group on AI2, as it insists on the human origin of any AI and highlights the fact that a machine can only perform an action assigned from the outset by a human – whether a designer, computer specialist or manufacturer. This ‘narrow AI’ has been deployed effectively and safely in manufacturing for several decades.

Safety and liability

As of today, the only possible fundamental and universal consideration about AI systems is that there are no philosophical, technological or legal grounds to consider them anything other than artifacts generated by human intellect. Embedded AI-based applications are considered products both under (1) EU product safety legislation (before the placing on the market) and (2) the Product Liability Directive (PLD – after the placing on the market). While the first set of rules imposes essential safety requirements for products to be certified and thus distributed on the market, the latter aims at compensating victims for the harm suffered from the use of defective goods. The first body of norms, product safety legislation, is essential and plays an important role in ensuring the safety of users and consumers within the European market. We could build on this ex ante detailed regulation and technical standardisation by potentially looking at the development of specifically tailored and appropriate norms for emerging technologies, where necessary.

When it comes to machine safety, we do not see a need for action by the EU legislator. State-of-the-art testing technology, standards and validation methods must be further developed, but at the legislative level – such as in the EU Machinery Directive – the safety requirements are already formulated in a technology-neutral manner and also apply to machines with AI elements. Therefore, factories with AI are just as safe as those without, as all safety requirements must be fulfilled in the same way.

Despite the fact that there are several views on the PLD3, Orgalim supports the Commission’s assessment, which stipulates that the current PLD is “coherent with the EU legislation protecting consumers, relevant and future-proof” and “fit for purpose”4. In addition, as regards the Product Liability Directive of 1985, Orgalim has assessed that, thanks to its technology-neutral specifications, its provisions remain valid even in the digitalised domain. Therefore, its scope should not be extended to services or stand-alone software; the concept of “joint liability” should not be introduced; the concept of “defect” should remain interpreted on a case-by-case basis by the courts; and the concept of “damage” should be limited to material damage. More generally on liability, Orgalim fully supports paragraph 29.7 of the High-Level Expert Group on AI’s Policy and Investment Recommendations urging “policy-makers to refrain from establishing legal personality for AI systems or robots”. Moreover, there is also no need to adjust the national liability regimes; they provide a legal framework within which AI problems can be solved.

  1. See https://www.orgalim.eu/position-papers/orgalim-comments-upcoming-impact-assessment-machinery-directive (p. 2) and https://orgalim.eu/position-papers/digital-transformation-orgalim-manifesto-european-agenda-industrial-ai (p. 3)
  2. See the “Ethics Guidelines for Trustworthy AI” of 8 April 2019 of the High-Level Expert Group on AI: “Artificial intelligence (AI)
  3. See https://ec.europa.eu/info/sites/info/files/report-safety-liability-artificial-intelligence-feb2020_en_1.pdf
  4. See European Commission (2018), Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee on the application of Council Directive 85/374/EEC on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, Brussels, European Commission, pp. 34 and 70.


Regulatory sandboxes for a harmonised Single Market

Finally, Orgalim believes that a framework for the definition and governance of regulatory sandboxes should be developed by the EU and stakeholders to offer a harmonised approach across Member States, as suggested by the High-Level Expert Group on AI in its Policy and Investment Recommendations5. A framework for AI needs to include this aspect, which would allow for controlled regulatory experimentation and the exchange of information between developers, users and regulators. This would facilitate the introduction of new cutting-edge and responsible AI solutions, while allowing close monitoring and assistance during a trial period. Orgalim calls for an EU framework, including a clear definition, for regulatory sandboxes in which certain exceptions could be granted to Member States.

Conclusion

Orgalim believes that the views expressed in this position paper would contribute to stimulating the uptake of trustworthy AI in the EU economy. We look forward to working with the Commission and all stakeholders involved to build a future in which Europe can take full advantage of the tremendous opportunities offered by this fast-evolving and strategic technology.

5 High-Level Expert Group on AI, Policy and Investment Recommendations, para 29.2.

