What Are The Practical Implications?




On 19 February 2020, the European Commission published a White
Paper, “On Artificial Intelligence: A European approach to
excellence and trust”. The purpose of this White Paper on
artificial intelligence (AI), drafts of which had already been
circulating in January 2020, is to discuss policy options for
achieving two objectives: (i) promoting the uptake of AI and (ii)
addressing the risks associated with certain uses of AI.

Europe aspires to become a “global leader in innovation in
the data economy and its applications”, and would like to
develop an AI ecosystem that brings the benefits of that technology
to citizens, businesses and the public interest.

The European Commission identifies two key components that will
allow such an AI ecosystem to develop in a way that benefits EU
society as a whole: excellence and trust. It highlights the
EU’s “Ethics Guidelines for Trustworthy Artificial
Intelligence” of April 2019 as a core element relevant to both
of those components.

As with many White Papers, however, the practical implications
appear far off in the future. We have therefore included a few
notes (“Did you know?”) with additional information to
illustrate them or to show what already exists, and we conclude
with some guidance on what you can already do today.

1. Ecosystem of excellence

The European Commission identifies several key aspects that will
help create an ecosystem of excellence in relation to artificial
intelligence:

  • Funding: Investment in research and innovation in Europe
    is but a fraction of the investment in other parts of the world
    (EUR 3.2 billion in Europe in 2016, versus EUR 12.1 billion in
    North America and EUR 6.5 billion in Asia). The European
    Commission therefore aims to help significantly increase this
    level of investment in artificial intelligence projects, with an
    objective of EUR 20 billion per year over the next decade.
    [Did you know? You can already find certain funding
    opportunities for artificial intelligence on the European
    Commission’s website.]

  • Research & innovation: The White Paper highlights the
    issue of fragmentation of centres of competence. The European
    Commission wishes to counter this by encouraging synergies and
    networks between research centres and greater coordination of
    efforts.

  • Skills: Creating AI solutions requires skillsets that are
    currently underdeveloped, and the deployment of AI solutions
    transforms the workplace. Upskilling will therefore be
    important, notably through greater awareness of AI among all
    citizens and a greater focus on AI (including the Ethics
    Guidelines) in higher education and universities. The European
    Commission specifically mentions the importance of increasing
    the number of women trained and employed in this area, as well
    as the need to involve social partners in ensuring a
    human-centred approach to AI at work.
    [Did you know? A recent Gartner survey revealed that a lack of
    skills was the top challenge to adopting AI for respondents
    (56%), followed by understanding AI use cases (42%) and concerns
    over data scope or quality (34%). In Belgium, KU Leuven has been
    offering a Master’s degree in Artificial Intelligence since
    1988.]

  • Adoption across sectors and organisation sizes: The White
    Paper then discusses various topics such as SMEs and start-ups,
    partnerships with the private sector, and public-sector use of
    AI. The essence is that the private and public sectors must both
    be involved, and both be encouraged to adopt AI solutions.
    Specifically in relation to SMEs and start-ups, the European
    Commission recognises access to (i) the technology and (ii)
    funding as key challenges, and suggests strengthening the
    digital innovation hubs in each Member State to foster
    collaboration between SMEs.

  • Access to data and computing infrastructures: Without
    data, says the European Commission, “the development of AI
    and other digital applications is not possible”. This
    approach to AI therefore comes in addition to the European data
    strategy, which aims at “setting up a true European data
    space, a single market for data, to unlock unused data, allowing
    it to flow freely within the European Union and across sectors
    for the benefit of businesses, researchers and public
    administrations”.

2. Ecosystem of trust

Where AI is developed and deployed, it must address concerns
that citizens might have in relation to, for example, unintended
effects, malicious use or a lack of transparency. In other words,
it must be trustworthy.

In this respect, the White Paper refers to the (non-binding) Ethics
Guidelines, and in particular the seven key requirements for AI
that were identified in those guidelines:

  • Human agency and oversight,

  • Technical robustness and safety,

  • Privacy and data governance,

  • Transparency,

  • Diversity, non-discrimination and
    fairness,

  • Societal and environmental wellbeing,
    and

  • Accountability.

These guidelines, however, do not constitute a legal framework.

a) Existing laws & AI

There is today no specific legal framework aimed at regulating
AI. However, AI solutions are subject to a range of existing laws,
like any other product or solution: legislation on fundamental
rights (e.g. data protection, privacy, non-discrimination),
consumer protection, and product safety and liability rules.
[Did you know? AI-powered chatbots used for customer support are
not rocket science in legal terms, but the answers they provide
are deemed to stem from the organisation and can thus make the
organisation liable. Because such a chatbot needs initial data to
learn how to respond, organisations typically “feed” it
previous real-life customer support chats and telephone exchanges,
but the use of those chats and conversations is subject to data
protection rules and to rules on the secrecy of electronic
communications.]

According to the European Commission, however, current
legislation may sometimes be difficult to enforce in relation to
AI solutions, for instance because of AI’s opaqueness (the
so-called “black box” effect), complexity, unpredictability
and partially autonomous behaviour. The White Paper therefore
highlights the need to examine whether legislative adaptations or
even new laws are required.

The main risks identified by the European Commission are (i)
risks for fundamental rights (in particular data protection, due to
the large amounts of data being processed, and non-discrimination,
due to bias within the AI) and (ii) risks for safety and the
effective functioning of the liability regime. On the latter, the
White Paper highlights safety risks, such as an accident that an
autonomous car might cause by wrongly identifying an object on the
road. According to the European Commission, “[a] lack of
clear safety provisions tackling these risks may, in addition to
risks for the individuals concerned, create legal uncertainty for
businesses that are marketing their products involving AI in the
EU”.

[Did you know? Data protection rules do not
prohibit e.g. AI-powered decision processes or data collection for
machine learning, but certain safeguards must be taken into account
– and it’s easier to do so at the design stage.]
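
To make that design-stage point tangible, below is a minimal
Python sketch of one possible safeguard: stripping obvious
identifiers from historical support chats before reusing them as
chatbot training material. The patterns and names are our own
hypothetical illustration, and regex-based redaction alone would
of course not make such reuse compliant.

    import re

    # Hypothetical illustration: remove obvious personal identifiers
    # (e-mail addresses, phone-like number sequences) from transcripts
    # before they are reused as training data.
    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE_RE = re.compile(r"\+?\d[\d\s/.-]{7,}\d")

    def redact_transcript(text: str) -> str:
        """Replace e-mail addresses and phone numbers with placeholders."""
        text = EMAIL_RE.sub("[EMAIL]", text)
        text = PHONE_RE.sub("[PHONE]", text)
        return text

    chat = "Hi, I'm Ann (ann@example.com), call me on +32 2 123 45 67."
    print(redact_transcript(chat))
    # -> Hi, I'm Ann ([EMAIL]), call me on [PHONE].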

The European Commission recommends examining how legislation can
be improved to take these risks into account and to ensure
effective application and enforcement, despite AI’s opaqueness.
It also suggests that it may be necessary to examine and
re-evaluate existing limitations in the scope of legislation (e.g.
general EU safety legislation only applies to products, not
services), the allocation of responsibilities between the different
operators in the supply chain, the very concept of safety, etc.

b) A future regulatory framework for AI

The White Paper includes lengthy considerations on what a new
regulatory framework for AI might look like, from its scope (the
definition of “AI”) to its impact. A key element highlighted
is the need for a risk-based approach (as in the GDPR), notably in
order not to create a disproportionate burden, especially for
SMEs. Such a risk-based approach, however, requires solid criteria
to distinguish high-risk AI solutions from others, which might be
subject to fewer requirements. According to the European
Commission, an AI application should be considered high-risk where
it meets the following two cumulative criteria:

  • Inherent risk based on sector: “First, the AI
    application is employed in a sector where, given the
    characteristics of the activities typically undertaken,
    significant risks can be expected to occur”. This might
    include the healthcare, transport and energy sectors, for
    instance.

  • Solution-created risks: “Second, the AI application in
    the sector in question is, in addition, used in such a manner
    that significant risks are likely to arise”. The White Paper
    uses the example of appointment scheduling systems in the
    healthcare sector, stating that they “will normally not pose
    risks of such significance as to justify legislative
    intervention”. However, this example creates, in our view,
    precisely the level of uncertainty the White Paper aims to
    avoid, as it is not hard to see how surgery appointments can be
    crucial to patient safety.

Yet the White Paper immediately lists certain exceptions that
would be “high-risk” irrespective of the sector, stating that
this would be relevant only in certain “exceptional
instances”.

In the absence of actual legislative proposals, the merit of this
principle-exception combination is difficult to judge. However, it
would not surprise us to see a broader sector-independent
criterion for “high-risk” AI solutions emerge: situations that
are high-risk irrespective of the sector because of their impact
on individuals or organisations.
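
For readers who parse decision rules more easily in code, the
following minimal Python sketch shows how the two cumulative
criteria and the sector-independent exceptions would combine.
Every identifier below (the sector list, the risk flag, the
exception category) is a hypothetical placeholder of our own; the
White Paper defines none of this.

    # Illustrative sketch only: the White Paper's two cumulative
    # criteria, plus an exception that applies irrespective of sector.
    HIGH_RISK_SECTORS = {"healthcare", "transport", "energy"}  # examples cited
    SECTOR_INDEPENDENT_EXCEPTIONS = {"remote_biometric_identification"}

    def is_high_risk(sector: str, use_poses_significant_risk: bool,
                     use_category: str = "") -> bool:
        """Apply the two cumulative criteria, then the exceptions."""
        if use_category in SECTOR_INDEPENDENT_EXCEPTIONS:
            return True  # "high-risk" irrespective of the sector
        # Criterion 1: sector where significant risks can be expected.
        # Criterion 2: use likely to give rise to significant risks.
        return sector in HIGH_RISK_SECTORS and use_poses_significant_risk

    # The White Paper's own example: appointment scheduling in
    # healthcare is in a high-risk sector, but would normally not be
    # a high-risk use.
    assert not is_high_risk("healthcare", use_poses_significant_risk=False)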

Those high-risk AI solutions would then likely be subject to
specific requirements in relation to the following topics:

  • Training data: There could be requirements in relation to
    the safety of training data (i.e. the data used to train the
    AI), its non-discriminatory nature (e.g. sufficiently
    representative data sets) and its compliance with privacy and
    data protection rules.

  • Keeping of records & data: To allow compliance to be
    verified (e.g. decisions to be traced back and checked), there
    might be requirements to maintain accurate records of the
    characteristics and selection process of the training data,
    perhaps even the data itself, as well as documentation on the
    programming and training methodologies, taking into account the
    protection of confidential information such as trade secrets
    (see the sketch after this list).

  • Provision of information: There could be transparency
    requirements, such as the provision of information on the AI
    system’s capabilities and limitations to the customer or
    person deploying the system, but also information to citizens
    whenever they are interacting with an AI system rather than a
    human being, if that is not immediately obvious.

  • Robustness & accuracy: There might be requirements that
    (i) AI systems are robust and accurate, or at least correctly
    reflect their level of accuracy; (ii) outcomes are
    reproducible; (iii) AI systems can adequately deal with errors
    or inconsistencies; and (iv) AI systems are resilient against
    both overt attacks and more subtle attempts to manipulate the
    data or the algorithms themselves (with mitigation measures
    taken).

  • Human oversight:
    Some form of human oversight is deemed crucial to the creation of
    trustworthy, ethical and human-centric AI, in particular for
    high-risk AI solutions. Without prejudice to the data protection
    rules on automated decision-making, the White Paper states that
    different forms of human intervention might be appropriate,
    depending on the circumstances. In some cases, for instance, human
    review would be needed prior to any decision (e.g. rejection of
    someone’s application for social security benefits); for
    others, it might merely be the ability to intervene afterwards or
    in the context of monitoring.

  • Facial recognition in public places & similar remote
    biometric identification: Specifically mentioning the
    deployment of facial recognition in public places as an
    illustration, the White Paper states that “AI can only be
    used for remote biometric identification purposes where such
    use is duly justified, proportionate and subject to adequate
    safeguards”. The European Commission will in this context
    launch a “broad European debate” on the topic, with a view
    to defining when such use can be justified and which common
    safeguards should apply.
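
As promised above, here is a minimal, purely hypothetical Python
sketch of the kind of structured record the record-keeping
requirement might translate into in practice. The field names are
our own; the White Paper prescribes no particular format. The
point is simply that the requirement maps naturally onto routine
engineering documentation.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical audit record for a training data set; all field
    # names are illustrative assumptions, not prescribed anywhere.
    @dataclass
    class TrainingDataRecord:
        dataset_name: str
        selection_process: str  # how and why this data was selected
        characteristics: dict   # e.g. size, coverage, known gaps
        methodology_docs: list = field(default_factory=list)
        created_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    record = TrainingDataRecord(
        dataset_name="support_chats_2019",
        selection_process="Random sample of anonymised transcripts",
        characteristics={"rows": 120000, "languages": ["nl", "fr", "en"]},
    )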

In practice, these requirements would cover a range of aspects of
the development and deployment cycle of an AI solution, and they
are therefore not meant solely for the developer or the person
deploying the solution. Instead, according to the European
Commission, “each obligation should be addressed to the actor(s)
who is (are) best placed to address any potential risk”. The
question of liability might still be dealt with differently: under
EU product liability law, “liability for defective products is
attributed to the producer, without prejudice to national laws
which may also allow recovery from other parties”.

Because the aim would be to impose such requirements on
“high-risk” AI solutions, the European Commission anticipates
that a prior conformity assessment will be required, which could
include procedures for testing, inspection or certification, and
checks of the algorithms and of the data sets used in the
development phase. Some requirements (e.g. the information to be
provided) might not be covered by such a prior conformity
assessment. Moreover, depending on the nature of the AI solution
(e.g. if it evolves and learns from experience), it may be
necessary to carry out repeated assessments throughout its
lifetime.

The European Commission also wishes to open up the possibility
for AI solutions that are not “high-risk” to benefit from a
voluntary labelling scheme, demonstrating voluntary compliance
with some or all of those requirements.

3. In practice: anticipating future rules

The White Paper sets out ambitious objectives, but also gives an
idea of the direction in which the legal framework applicable to AI
might evolve in the coming years.

We do feel it is important to stress that this future framework
should not be viewed as blocking innovation. Too many
organisations already have the impression that the GDPR prevents
them from processing data, when it is precisely a tool that
enables better and more responsible processing of personal data.
The framework described by the European Commission in relation to
AI appears to have a similar aim: these rules would help
organisations build better AI solutions and use AI solutions more
responsibly.

In this context, organisations working on AI solutions today
would do well to consider building the White Paper’s
recommendations into their solutions already. While there is no
legal requirement to do so yet, anticipating those requirements
might give those organisations frontrunner status and a
competitive edge when the rules materialise.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.


