This article is an extract from GTDT Market Intelligence Artificial Intelligence 2021.
1 What is the current state of the law and regulation governing AI in your jurisdiction (including any legislation, non-binding guidance and case law)? How would you compare the level of regulation with that in other jurisdictions?
There is currently no law or other specific regulation in force in Germany that explicitly and exclusively deals with AI. The closest provision is section 1a of the German Road Traffic Act (StVG), in force since 2017, which permits vehicles with highly or fully automated (note: not autonomous) driving functions. Beyond that, generally applicable law forms the framework for the development and use of AI systems, so a large number of laws can play a role when developing and using AI systems. The following areas are particularly noteworthy.
Both the training of AI systems and their actual use regularly involve the automated processing of personal data. Aside from the European General Data Protection Regulation (GDPR), the German federal and state data protection acts must be observed. Moreover, the individual systems grouped under the label AI may be protected under national copyright law as computer programs, and legal literature also discusses whether their output (such as digital art) may be deemed protectable. Furthermore, where products containing AI components are placed on the market and prove deficient, specific product liability rules as well as general liability rules may apply. Legal literature also discusses how the rules on legal transactions (particularly contracts) apply where an AI system is deployed and involved in legally relevant activities. Current contract law is generally regarded as adequate here, although this is debated in the case of increasingly autonomous AI systems.
Beyond that, there are still few court rulings specifically relating to AI, but their number is slowly yet continuously growing, for example with regard to algorithmic decision-making. For instance, according to a recent decision by the Higher Regional Court of Dresden, the malfunction of algorithm-based filter software cannot be attributed to the operator if the mistake is duly rectified (decision dated 5 October 2021, ref. 4 U 1407/21).
The absence of a comprehensive (federal) AI law, and the resulting application of other laws, is not unique to Germany and can also be encountered in other countries (eg, the United States). The uniform legal framework created by the GDPR in the EU is an important achievement for AI development, as it works towards a level playing field within the EU. The same is expected from the EU Artificial Intelligence Act, proposed by the European Commission in April 2021.
2 Has the government released a national strategy on AI? Are there any national efforts to create data sharing arrangements?
On 15 November 2018, the federal government released its ‘Artificial Intelligence Strategy’. It was prepared under the joint leadership of the Ministry of Education and Research, the Ministry for Economic Affairs and the Ministry of Labour and Social Affairs. Against the backdrop of the dynamic development of this technology field, the strategy is intended as the federal government’s framework for action and forms part of its digitisation implementation strategy. The AI Strategy pursues three main goals: to make Germany and Europe a leading location for the development and application of AI technologies and to secure Germany’s future competitiveness; to ensure responsible, public good-oriented development and use of AI; and to embed AI ethically, legally, culturally and institutionally in society through a broad societal dialogue and active political shaping. The Strategy focuses on 12 fields of action, in which funding programmes, initiatives, collaborations and the like have been started to make Germany a leading location for AI. In December 2020, the AI Strategy was updated to respond to new developments in the field of AI.
Also, in its so-called bureaucracy relief package of 13 April 2021, the German government decided to examine in the future, for each new law, whether regulatory sandboxes can be made possible by including an experimentation clause. These experimentation clauses – which have yet to be enacted – may allow AI to be tested in specified circumstances.
In general, owing to the fundamental importance of data sharing for the creation of AI systems, there are many different efforts to facilitate and improve data sharing, both in the private and in the public sector. Data sharing is also mentioned in various federal strategies, such as the Data Strategy, the Open Data Strategy and the AI Strategy, although these contain little more than declarations of intent regarding the promotion and creation of data spaces.
However, there is currently no further concretisation of data sharing specifically for AI, nor are there detailed legal rules on the exchange of data for AI purposes; the existing general rules therefore apply. If the question is whether there are national efforts to share data relating exclusively to AI developments, the answer is likewise in the negative.
3 What is the government policy and strategy for managing the ethical and human rights issues raised by the deployment of AI?
On the one hand, these issues are addressed in the federal government’s AI Strategy, in which the federal government states that it relies on an ‘ethics by, in and for design’ approach throughout the process of AI development and application. Although current case law and regulation are considered a stable foundation in the AI Strategy, the federal government wants to review the regulatory framework for gaps concerning algorithm- and AI-based decisions, services and products and, if necessary, adapt it so that such systems can be reviewed for possible inadmissible discrimination.
To develop standards on ethical aspects, the federal government is in dialogue with national and international bodies such as the German ‘Data Ethics Commission’ or the EU Commission’s ‘High-Level Expert Group on AI’ and has stated that it will consider their recommendations. The federal government also wants to examine how AI systems can be made transparent, traceable and verifiable to ensure effective protection against distortion, discrimination, manipulation or other misuse, especially where algorithm-based forecasting and decision-making systems are used. To that end, it plans to examine the establishment or expansion of government agencies and private review institutions for the control of algorithmic decisions. Lastly, the federal government states that it supports the development of innovative applications that promote self-determination, social and cultural participation and the protection of citizens’ privacy.
Besides that, a Data Ethics Commission was set up by the federal government in July 2018. The Data Ethics Commission is an independent and autonomous body of experts, which delivered its final report in October 2019. Among other things, it proposes a risk-based regulatory approach for algorithmic systems. This should include control instruments, transparency requirements and traceability of the results as well as regulations on the allocation of responsibility and liability for the use of algorithmic systems.
Likewise, the Enquete Commission ‘Artificial Intelligence – Social Responsibility and Economic, Social and Ecological Potential’ also dealt with the topic. The German Parliament appointed the Commission on 28 June 2018 at the request of various parliamentary parties. The Commission consisted of members of Parliament and experts proposed by the parties. It was mandated to examine the opportunities and potential of AI as well as the associated challenges, and to develop answers to the multitude of technical, legal, political and ethical questions in the context of AI. The final report was submitted on 28 October 2020. The Commission frames its work under the guiding principle of ‘human-centred AI’: AI applications should primarily be geared towards the well-being and dignity of people and bring societal benefits.
In addition, Germany is actively involved in the development of international ethical standards for AI use.
4 What is the government policy and strategy for managing the national security and trade implications of AI? Are there any trade restrictions that may apply to AI-based products?
In its AI Strategy, the federal government aims to increase the resilience of AI systems against attacks and to further expand AI as a basis for IT security in general. Ensuring IT security is seen as a key prerequisite for the product safety of AI applications and of products that use AI. In the federal government’s view, the current focus on operators of critical IT infrastructures, for example in the IT, health or energy sectors, is no longer sufficient. It therefore aspires to an appropriate obligation for hardware and software manufacturers that promotes the principle of security by design.
The Federal Office for Information Security (BSI) plays a pioneering role here. The BSI established an AI unit in 2019. As a first work result, the unit published the AI Cloud Service Compliance Criteria Catalogue (AIC4), which helps users evaluate the security of AI systems operated in the cloud. In addition, the BSI conducts basic research and develops requirements, test criteria and test methodologies that are both needs-oriented and practical, to make the use of AI safe for the benefit of the general public.
Currently, Germany has not imposed any trade restrictions on AI-based products. Trade restrictions do exist, including for software, but none apply to a product specifically because it is intelligent software, namely AI.
5 How are AI-related data protection and privacy issues being addressed? Have these issues affected data sharing arrangements in any way?
As already mentioned, data protection requirements also apply to the processing of personal data by AI. In particular, sections 31 and 54 of the German Federal Data Protection Act (BDSG), which prohibit automated decisions and regulate so-called scoring, should be emphasised. The question of whether these requirements are sufficient for the processing of personal data by AI systems or whether new regulation is necessary has been addressed by various entities.
In the AI Strategy, the federal government announced that it would review the legal framework for the use of data in AI applications. In this connection, a roundtable with data protection supervisory authorities and business associations was convened to discuss AI-specific questions on the application of the GDPR and to establish a regular exchange. The constituent meeting took place on 29 September 2019, with another in January 2020. The results of these meetings have not been published, and the further procedure is not known.
The Enquete Commission considers the existing provisions to be a solid legal basis under data protection law for the processing of personal data by AI systems. However, there is not yet a settled, uniform interpretation and application of these provisions when assessing individual use cases in connection with the training or use of AI systems.
In the Hambacher Declaration, issued on 3 April 2019, the German ‘Data Protection Conference’ (a body formed by the German data protection supervisory authorities) has set out seven data protection requirements for artificial intelligence:
- AI must not treat people like objects;
- AI may only be used for constitutionally legitimised purposes and not override the purpose limitation requirement;
- AI must be transparent, comprehensible and explainable;
- AI must avoid discrimination;
- the principle of data minimisation applies to AI;
- AI needs accountability; and
- AI needs technical and organisational standards.
The declaration is a recommendation by the authorities which, although not legally binding, can serve as an aid to interpretation and may, for example, also be used as such by the courts.
In addition, the German data protection supervisory authorities regularly take the view that a data protection impact assessment must be carried out for a large number of areas of data processing using AI. Indeed, article 35 GDPR provides that, where data processing is likely to present a high risk to the rights and freedoms of natural persons by virtue of its nature, scope, context and purposes, the controller must carry out a prior assessment of the impact of the envisaged processing operations on the protection of personal data. To concretise this obligation, the Data Protection Conference has published a ‘Black List’ of the corresponding use cases, for example the use of artificial intelligence to process personal data to control interaction with the data subject or to evaluate personal aspects of the data subject.
No impact of AI-related data protection issues on national efforts to launch data sharing programmes is apparent. However, judging by various comments submitted by companies and business associations as part of a consultation by the federal government on the AI Strategy, the general mood in the market seems to be that the high data protection requirements are an obstacle to AI-related innovation and a competitive disadvantage compared to countries that process and use data for AI in a non-GDPR-compliant way. The GDPR is perceived as a law with numerous undefined legal terms and high bureaucratic hurdles, whose correspondingly high implementation costs tend to have negative effects on innovation and digital business models.
6 How are government authorities enforcing and monitoring compliance with AI legislation, regulations and practice guidance? Which entities are issuing and enforcing regulations, strategies and frameworks with respect to AI?
In line with the lack of comprehensive legal regulation of AI, no single federal level, authority or ministry bears sole responsibility. Instead, different federal authorities may be responsible depending on the area of application. For example, the Federal Office for Information Security (BSI) may be responsible where operators of critical infrastructures use AI, or the Federal Financial Supervisory Authority (BaFin) where AI is used in decision-making processes by financial service providers. The states alone can regulate the use of AI in their own administrative bodies.
Digitisation as a whole is the responsibility of both the Federal Ministry for Economic Affairs and Climate Action and the Federal Ministry for Digital and Transport. However, both ministries have so far refrained from issuing independent frameworks for AI.
The federal government’s AI Strategy is merely a framework for action for the federal government itself; consequently, it lacks enforceability.
7 Has your jurisdiction participated in any international frameworks for AI?
The German government is, inter alia, actively involved in the work of the G7 and G20, the Council of Europe, the OECD and the Global Partnership on AI (GPAI) initiated by Canada and France, of which Germany is a founding member.
The GPAI is a global initiative to promote the responsible, people-centred development and use of AI. It creates a body of experts to regularly monitor AI developments and to bundle global debates on AI (economy, work and society). The aim of the initiative is to facilitate and coordinate international cooperation in the field of AI, bringing together experts from research, politics, business and civil society around the world to monitor developments and independently develop recommendations for policymakers.
In May 2019, the OECD adopted recommendations on artificial intelligence, which were adopted by the G20 countries as joint, non-binding AI principles.
8 What have been the most noteworthy AI-related developments over the past year in your jurisdiction (eg, regarding cybersecurity, privacy, intellectual property and competition)?
Significant developments have already been mentioned in the answers above. The most notable AI developments, however, have arguably taken place in the area of autonomous driving.
The Automated Driving Act (an amendment of the Road Traffic Act) came into force as early as 21 June 2017. At its core, it changed the rights and obligations of the vehicle driver during the automated driving phase: automated systems (level 3) are allowed to take over the driving task under certain conditions. A driver is still necessary, but may turn away from traffic and vehicle control while in automated mode. With a new law on autonomous driving, which came into force on 28 July 2021, the legal framework has now been created for autonomous motor vehicles (level 4) to operate regularly in defined operating areas on public roads.
This will make Germany the first country in the world to take vehicles without drivers out of research and into everyday use.
Also this year, on 2 December 2021, the Federal Motor Transport Authority (KBA) granted the world’s first type-approval in the field of automated driving for an Automated Lane Keeping System (ALKS), for a model of the German manufacturer Mercedes-Benz. The automatic lane-keeping system is assigned to automation level 3, making Mercedes-Benz the first vehicle manufacturer in the world to receive approval for highly automated driving. This marks a significant step in taking AI-based technology into real-world use.
9 Which industry sectors have seen the most development in AI-based products and services in your jurisdiction (eg, financial services, healthcare and defence)?
In keeping with the absence of a single authority with overall competence for AI, there is no official breakdown of the sector-specific development of AI-related products or services. However, the appliedAI Initiative publishes an annual ‘German AI Startup Landscape’, which shows all companies founded since 2009 that focus on or make significant use of machine learning. The 2021 landscape shows continuous growth of AI start-ups in the following key industries: manufacturing, transport and mobility, and healthcare.
10 Are there any pending or proposed legislative or regulatory initiatives in relation to AI?
So far, there is no pending or planned draft legislation for a uniform national AI law. Considering the proposal for an Artificial Intelligence Act (AIA) presented by the European Commission in April 2021, it appears unlikely that any drafts for national laws or regulations will emerge beforehand.
In the 2020 update of the AI Strategy, the federal government expressed its preference for EU-wide harmonised principles and mentioned its active participation in the processes and initiatives that have already been launched. The coalition agreement of the new federal government (of November 2021) also expresses support for the AI Act.
The AI Act pursues the following objectives: to ensure that AI systems placed and used on the Union market are safe and respect existing fundamental rights and EU values; to ensure legal certainty in order to promote investment in AI and AI innovation; to strengthen governance and the effective enforcement of existing law on fundamental rights and of safety requirements for AI systems; and to facilitate the development of a single market for lawful, safe and trustworthy AI applications while preventing market fragmentation. The draft follows a risk-based approach, under which AI applications are grouped into four categories according to their potential risk: ‘unacceptable risk’, ‘high risk’, ‘low risk’ and ‘minimal risk’. While the draft provides for strong interventions – prohibiting systems with unacceptable risk and extensively regulating high-risk systems – other AI applications, namely those with low or minimal risk, are deliberately to remain largely unregulated, reflecting the EU Commission’s intention to create innovation-friendly conditions.
11 What best practices would you recommend to assess and manage risks arising in the deployment of AI?
Responsible stakeholders in the AI field should be aware of the fragmentation and uncertainty surrounding the regulation of AI. Many specific questions of application have not yet been clarified by the legislator and case law.
A company that wants to use AI should first consider the purpose for which it wants to use it. To reduce risk, the scope of application should be narrowly defined. This is especially true for sensitive areas where discrimination can quickly occur, such as recruiting. It is also very important to understand the AI being used; this understanding should be built up company-wide as part of AI deployment management. The company must further take responsibility for the AI: protective measures should be put in place, and it must be possible to shut the system down in an emergency. This includes an AI governance and compliance scheme that takes particular account of legal requirements.
The Inside Track
What skills and experiences have helped you to navigate AI issues as a lawyer?
Given the constant development of the field, a willingness to keep educating oneself is a basic requirement in IT law in general, and especially so for legal questions concerning AI.
We follow technical developments very closely and regularly exchange views with our clients on them. It is important to understand the advantages and disadvantages in order to be able to address and advise on the issues appropriately and understandably. We appreciate the challenge of often applying ‘old law’ to ‘new technologies’ and developing legal solutions for which there is no template.
Which areas of AI development are you most excited about and which do you think will offer the greatest opportunities?
AI can be a great asset to society. From the healthcare sector to early detection systems in disaster response, there are countless examples of the benefits of AI, so it is hard to single out specific developments.
If we had to choose, we would say that we are most excited about developments in the field of autonomous driving and in smart homes. Especially in view of our ageing society, AI could enable us to live at home and remain independent in old age, as AI makes our everyday lives easier.
What do you see as the greatest challenges facing both developers and society as a whole in relation to the deployment of AI?
Striking a balance between data protection aspects and technical innovation will be a major challenge for the developers of AI systems. In addition, we need to ensure that a basic understanding of AI systems prevails in society to ensure that citizens have faith in new AI systems.
With regard to recent developments in China (eg, the Draft Cross-Border Data Rules; the Shanghai Data Exchange) and the EU (eg, the Draft Data Governance Act), developers will have to develop a strategy for how they can acquire, import or export the data needed to train AI. Data protection and data export issues may, for example, require the use of synthetic data.