Artificial Intelligence

Artificial Intelligence and how the courts approach its legal implications


Introduction

Artificial intelligence (AI) and automation are continually changing the way we do business. Organisations across all industries and sectors are deploying machine learning and natural language processing (NLP) technologies to automate processes in almost every part of their operations. For businesses, AI means improved efficiency, amplified productivity and reduced cost. But while there are many advantages, AI also presents a wide range of legal challenges – especially in areas such as regulatory compliance, liability, risk, privacy and ethics.

To compound matters, regulation of AI is slow to develop, leaving businesses with no choice but to navigate the unknown. The by-product of this will inevitably be an increasing number of disputes concerning AI. These will pose novel and challenging issues – even for lawyers!

While there has already been some significant case law on issues such as data privacy[1], personal injury and product liability, there has been relatively little treatment of the contractual liability that can arise in relation to AI.

This article is the first in a three-part series taking a closer look at the contractual implications of AI and how it can give rise to legal liability, exposing businesses to financial and reputational risk.

  • In this first article, we look at a selection of significant cases that demonstrate how courts internationally have dealt with the complexities of reconciling new AI technology with long-established legal principles.
  • Our second article will examine a number of key legal and contractual topics that are likely to arise for companies in a constantly evolving AI landscape.
  • In our final article, we will set out some practical measures that businesses can adopt to safeguard their position on these AI issues when entering into contracts or a contractual supply chain.

A bad first impression – Go2Net, Inc. v C I Host, Inc

One of the earlier cases to consider the impact of AI on contractual obligations was the 2003 Washington State appellate judgment in Go2Net, Inc. v C I Host, Inc.[2]

In this case, C I Host retained Go2Net to display advertisements for C I Host on websites in its network. The agreements were based on Go2Net delivering a certain number of “impressions” (i.e. the number of times an ad is viewed). However, it transpired that Go2Net was counting “views” by artificial intelligence agents (such as search engine crawlers) as “impressions” for the purposes of billing C I Host.
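
For readers curious about the mechanics underlying the dispute, the short Python sketch below illustrates how the two readings of “impression” diverge. It is purely hypothetical: the AdEngine class, bot markers and user-agent strings are our own illustration of a generic ad counter, not a reconstruction of Go2Net’s actual system.

    # Purely illustrative: a toy impression counter showing how automated
    # agents can inflate billable counts when bot traffic is not filtered.
    # The AdEngine class, bot markers and user-agent strings are hypothetical.

    KNOWN_BOT_MARKERS = ("googlebot", "bingbot", "crawler", "spider")

    def is_automated_agent(user_agent: str) -> bool:
        """Crude heuristic: treat a request whose user-agent matches a
        known bot marker as an automated view rather than a human one."""
        ua = user_agent.lower()
        return any(marker in ua for marker in KNOWN_BOT_MARKERS)

    class AdEngine:
        def __init__(self, filter_bots: bool):
            self.filter_bots = filter_bots
            self.impressions = 0  # the count the advertiser is billed on

        def record_view(self, user_agent: str) -> None:
            # Under the disputed clause, every ad-engine count was billable;
            # filter_bots=False reproduces that behaviour.
            if self.filter_bots and is_automated_agent(user_agent):
                return
            self.impressions += 1

    views = [
        "Mozilla/5.0 (Windows NT 10.0)",  # human browser
        "Googlebot/2.1",                  # search engine crawler
        "Mozilla/5.0 (X11; Linux)",       # human browser
    ]
    billed = AdEngine(filter_bots=False)     # counts every view, human or not
    human_only = AdEngine(filter_bots=True)  # what the advertiser expected
    for ua in views:
        billed.record_view(ua)
        human_only.record_view(ua)
    print(billed.impressions, human_only.impressions)  # 3 vs 2

Under the clause the court ultimately enforced, the unfiltered counter was the contractual measure of impressions, whatever C I Host believed it was buying.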

C I Host argued that the purpose and intent of the advertising agreements was to place ads “where they would reach human consumers with the capacity to purchase C I Host’s services”. However, C I Host’s downfall was a clause in the contract providing that “all impressions billed are based on Go2Net’s ad engine count of impressions”. The court held that it did not matter whether C I Host believed it was paying for human impressions: the objective mutual intent of the parties was to be bound by Go2Net’s method of counting impressions, and qualifying what constitutes an impression would have required the court to alter the plain language of the agreement.

The legal rationale behind this point will be no surprise to parties familiar with the common law tradition of contractual interpretation. But we will never know what the parties actually intended regarding AI impressions when they entered into these agreements. The judgment highlights the risks involved where a court must interpret an agreement after the event by reference to facts that the parties may never have contemplated when the agreement was entered into. From a lawyer’s perspective, it would have been interesting to hear the court’s views on whether AI agents could ever constitute a legitimate substitute for human targets. However, this question has recently been the subject of proceedings in a number of jurisdictions (albeit in a context different from that of contract law).

One percent inspiration, ninety-nine percent automation – Thaler v Comptroller-General of Patents, Designs and Trade Marks

Only last year, the UK High Court decided in Stephen L Thaler v The Comptroller-General of Patents, Designs and Trade Marks[3] that an AI did not constitute a natural person under the Patents Act 1977 and therefore could not be named as an “inventor” for the purposes of a patent application. By contrast, in the last couple of weeks, South Africa and Australia have both permitted an artificial intelligence to be named as the inventor on a patent. The decision in South Africa did not follow a formal examination of the issues and is therefore still open to objection (while decisions in other jurisdictions refusing such recognition are reported to be under appeal).

Comments from Marcus Smith J in the UK High Court decision are insightful and suggest that the UK courts remain open on the central question:

“I would wish to make clear that I in no way regard the argument that the owner/controller of an artificially intelligent machine is the “actual deviser of the invention” as an improper one. Whether the argument succeeds or not is a different question and not one for this appeal: but it would be wrong to regard this judgment as discouraging an applicant from at least advancing the contention, if so advised.”

Outside the context of IP, the decision raises questions about how parties should deal with AI actors in their contractual documentation and, crucially, whether it should be possible for AI to “own” rights to its inventions (or, conversely, whether humans will have to “claim” inventorship on its behalf). The importance of this, in the context of disputes, lies in the extent to which humans are responsible for the conduct of AI when things go wrong. On the flip side, disputes could equally arise over who may claim the benefit of actions ultimately carried out by AI.

This case suggests not only that reform of the current law on AI is on the horizon, but also that there may be a degree of frustration in the UK courts at the pace of legislative development and at the uncertainty over how the law will deal with the increasing use of AI.

Conclusion

The law in this area is still very much in its infancy and we expect further cases on the implications of integrating AI with the principles of contractual liability.

In our next article, we will discuss the types of issues we expect to arise with regard to AI and contractual liability, and how these might develop before the courts.

Mitigating the risks relating to AI requires early engagement with experienced lawyers who understand the cultural, legal and regulatory landscapes, and who will drive relentlessly to deliver results for their clients when a dispute or regulatory intervention is unavoidable.


