Artificial intelligence (AI) is the newest golden child in technology. While definitions abound, we will define artificial intelligence as “the theory and development of computer systems able to perform tasks that normally require human intelligence.” This article addresses licensing issues that are unique to AI, including legal compliance of AI-based decisions, allocation of intellectual property rights, ownership and use rights in the components of AI, and data use and privacy.
AI Licenses and Service Agreements
Licensing AI capability from a third party is likely the fastest way for a business to obtain AI. The license may take the form of an on-premise license of AI that will be installed, trained and operated by the business, or the license may be part of a software-as-a-service (SaaS) solution hosted in the cloud by the provider.
A unique aspect of licensing AI is that the output, value and performance of an AI system generally cannot be accurately predicted before an AI solution is implemented. Too much is new or in the hands of the user for the licensor to make strong promises. Moreover, AI with machine-learning capabilities will change over time, generally but not necessarily improving, so it may be impossible to test at the outset how the product will work over the license or subscription term.
Many businesses are turning to a collection of AI providers to test the waters. A good, lower-risk way to do this is through a proof-of-concept arrangement. A proof-of-concept arrangement is a short-term agreement that allows a company to test and a supplier to prove the value of an AI product or service.
Once the proof-of-concept is complete, the business may license the AI from a provider. Businesses should seek to satisfy the usual requirements for license and SaaS agreements in their AI license and services agreements, with particular attention to the following four unique areas.
Legal Compliance: AI-based decisions must satisfy the laws and regulations that apply to businesses. This requires a business to apply the same level of diligence to the AI tool or service that the business applies to its other third-party products and services. Of particular concern is that AI-based decisions may discriminate because they rely on data that reflects a discriminatory past or looks only at correlation instead of causal factors. Businesses that use AI tools in credit decisions or fraud detection, for example, must ensure that these tools do not discriminate against certain protected classes of applicants or employees.
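As a rough illustration of the kind of disparate-impact screen a licensee might run over an AI tool's decisions, the sketch below applies the well-known “four-fifths” rule of thumb (used by U.S. regulators as an adverse-impact indicator) to approval rates by group. The group names and decision data are hypothetical, and a real compliance review would of course go well beyond this single statistic:

```python
def selection_rates(outcomes):
    """Approval rate per group; outcomes maps group -> list of 0/1 decisions."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag any group whose selection rate falls below 80% of the
    highest group's rate (the 'four-fifths' adverse-impact screen).
    Returns group -> True (passes) or False (warrants review)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate / best) >= threshold for group, rate in rates.items()}

# Hypothetical audit sample of the AI tool's credit decisions
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}
print(four_fifths_check(decisions))  # group_b falls below the threshold
```

A failing screen does not by itself establish unlawful discrimination, but it is the sort of auditable check a business can run periodically against a licensed AI tool's output.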
AI is an increasing focus for regulators. For example, the New York Department of Financial Services recently issued requirements on the use of “unconventional sources or types of external data” to address the risk of unlawful discrimination and a lack of data transparency in insurance decisions. If a business uses AI for a decision that it may need to explain, the licensee should look for AI systems that produce output that is transparent, auditable and capable of being explained, sometimes called “explainable AI.”
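One simple form of explainable output is a decision score broken down into per-feature contributions that can be logged, audited and communicated to the affected individual. The sketch below assumes a hypothetical linear credit-scoring model; the feature names and weights are purely illustrative, not drawn from any real system:

```python
# Hypothetical linear scoring model: weight per feature (illustrative only)
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}

def score_with_explanation(features):
    """Return the overall score together with each feature's signed
    contribution, so the decision can be audited and explained."""
    contributions = {f: WEIGHTS[f] * value for f, value in features.items()}
    return sum(contributions.values()), contributions

applicant = {"income": 0.9, "debt_ratio": 0.5, "years_employed": 0.3}
score, why = score_with_explanation(applicant)
print(score)
print(why)  # e.g., debt_ratio contributes negatively to the score
```

More complex models need dedicated attribution techniques, but the licensing point is the same: the contract should oblige the provider to deliver output in a form the licensee can explain to a regulator or an applicant.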