Artificial Intelligence

INSIGHT: ‘Discovering’ Artificial Intelligence


Given the use of artificial intelligence by businesses and governmental entities, it won’t be long before discovery is sought into the operation and results of a particular AI tool.

Using a hypothetical based on a pending civil action, we take a look at what that AI discovery might look like and how it might be opposed.

In August 2019, the American Bar Association House of Delegates adopted a resolution urging courts and attorneys to “address the emerging ethical and legal issues” related to the use of AI. Noting the varied definitions of AI, the ABA adopted the following: “AI at its core encompasses tools that are trained rather than programmed. It involves teaching computers how to perform tasks that typically require human intelligence such as perception, pattern recognition, and decision-making.”

Michigan Ruling on Using AI to Intercept Tax Refunds

We will use this definition of AI and the facts of Bauserman v. Unemployment Insurance Agency, a pending civil action, for our hypothetical. In that case, on remand from the Michigan Supreme Court, the Michigan Court of Appeals affirmed the denial of a motion to dismiss and, in so doing, discussed the allegations of the operative complaint.

Bauserman is a putative class action brought against an agency of the state of Michigan. The representative plaintiffs alleged that the defendant violated their due process rights and committed constitutional torts by “unlawfully intercept[ing] their state and federal tax refunds, garnish[ing] their wages, and forc[ing] plaintiffs to repay unemployment benefits that they had lawfully received.”

The plaintiffs further alleged that the defendant had done so by, among other things, “using an automated computerized system ‘for the detection and determination of [alleged] fraud cases,’ which does not comport with due process.”

The plaintiffs alleged that the defendant uses “‘an automated decision-making system’ to spot suspected fraud in the receipt of unemployment benefits,” and that the system “ ‘initiates an automated process’ that can result in an individual being disqualified from receiving benefits, as well as having penalties imposed and being subjected to criminal prosecution. All of this, plaintiffs allege, occurs without plaintiffs being provided with notice, an opportunity to be heard and being allowed to present evidence in their defense.”

With these allegations as background—and assuming that the “automated computerized system” in Bauserman meets the definition of AI—here is the hypothetical:

  1. A state has developed a computerized system to predict which individuals are most likely to engage in unemployment benefits fraud.
  2. The system looks at various “inputs,” including ZIP code of residence, to generate predictions based on algorithms designed by a group of researchers. (A simplified sketch of such a system appears after this list.)
  3. Now operational, the system is “self-teaching” and “improves” its predictive ability by taking into consideration other inputs such as income level and length of residence.
  4. A civil action has been brought against the state for alleged systematic discrimination against minorities, who have been targeted disproportionately for investigation and denial of benefits.

What Discovery Might Be Sought

Assume that the action has been filed in a federal district court, with the relief sought in the nature of prospective injunctive relief. What might the plaintiffs request in discovery and what might be the objections to those requests?

Recall the scope of discovery under Fed. R. Civ. P. 26(b)(1): “Parties may obtain discovery regarding any nonprivileged matter that is relevant to any party’s claim or defense * * *.”

Under our hypothetical, given the alleged discrimination and the central role of the automated system in enabling that discrimination, the plaintiffs should be entitled to seek discovery into the design, operation, and output of the system in issue. Presumably, this would include noticing and conducting depositions of the designers, perhaps by way of a Rule 30(b)(6) notice.

How might the state respond to such discovery requests? First, to argue a lack of relevance would likely be a non-starter given the claim in issue. However, assuming that there is a sufficient factual basis to do so, the state could argue that the discovery sought would be “difficult” to assemble and/or produce and thus would run afoul of the proportionality requirement of Rule 26(b)(1), which requires discovery to be “proportional to the needs of the case, considering the importance of the issues at stake in the action, the amount in controversy, the parties’ relative access to relevant information, the parties’ resources, the importance of the discovery in resolving the issues, and whether the burden or expense of the proposed discovery outweighs its likely benefit.”

That argument, of course, would likely lead to discovery into the bases for the alleged difficulty and to a hearing at which the state and the plaintiffs would proffer testimony and exhibits, and which might require expert testimony (leading to significant expense).

Then there’s the question of discovery into the “self-teaching” nature of the automated system. What information might the plaintiffs seek? What evidence might be derived from the system itself? Should source code (or whatever it might be called) that reflects or records such self-teaching be sought? All this raises, again, questions of relevance and proportionality.
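What such evidence might look like is easier to see with another invented example. The sketch below, which continues the toy model above, shows one plausible form of “self-teaching”: a retraining step that adjusts the model’s weights as new case outcomes arrive and appends each adjustment to a log file. Again, nothing here describes the actual system in Bauserman; it only illustrates the kinds of artifacts (parameter snapshots, retraining logs, version histories) that a request aimed at the system’s self-teaching might seek.

# Hypothetical sketch only, continuing the invented model above. A "self-teaching" system
# might periodically adjust its weights from newly confirmed case outcomes and record each
# adjustment; the resulting log and parameter snapshots are the kinds of artifacts that
# discovery into the system's self-teaching might target.

import json
from datetime import datetime, timezone

weights = {"zip_risk": 0.6, "low_income": 0.3, "short_residence": 0.1}
LEARNING_RATE = 0.05   # invented value controlling how far each outcome moves the weights

def retrain(current_weights: dict, new_cases: list, log_path: str = "retraining_log.jsonl") -> dict:
    """Raise weights for features present in confirmed-fraud cases, lower them for cleared
    cases, and log the change."""
    updated = dict(current_weights)
    for case in new_cases:
        direction = 1.0 if case["confirmed_fraud"] else -1.0
        for name, present in case["features"].items():
            if present:
                updated[name] = max(0.0, updated[name] + direction * LEARNING_RATE)
    # Each retraining run leaves a record: the old weights, the new weights, and when it happened.
    with open(log_path, "a") as log:
        log.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "old_weights": current_weights,
            "new_weights": updated,
            "cases_used": len(new_cases),
        }) + "\n")
    return updated

# Example: one confirmed-fraud case shifts weight toward the features present in that case.
weights = retrain(weights, [{"confirmed_fraud": True,
                             "features": {"zip_risk": True, "low_income": False,
                                          "short_residence": True}}])

Framing requests around identifiable artifacts of this kind, rather than “the algorithm” in the abstract, may also make the relevance and proportionality analysis more concrete for both sides.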

Finally, putting aside arguments related to scope under Rule 26(b)(1), might the state have a basis to seek a protective order under Fed. R. Civ. P. 26(c) if, for example, it were to argue that the system constitutes “a trade secret or other confidential research, development, or commercial information”?

This might be the case if, for example, rather than develop the system “in house,” the state were to have secured a license to operate the system from a third-party developer that insisted on a confidentiality provision in the license agreement.

AI Discovery Disputes Will Occur

AI can pose a number of challenges should it be relevant to a claim or defense of a civil action and fall within the scope of discovery. Existing rules establish the framework for parties to confer about discovery of AI and for courts to resolve disputes about its production.

Whatever those disputes might be, two considerations should be kept in mind: (1) those disputes will occur and (2) they are likely to be time-consuming and expensive given the apparent need for expert testimony and findings of fact.

This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.

Author Information

Ronald J. Hedges is a senior counsel with Dentons US LLP. He served as a U.S. Magistrate Judge in the District of New Jersey from 1986 to 2017. He is a frequent writer and speaker on various topics related to electronic information and is the principal author of Managing Discovery of Electronic Information: A Pocket Guide for Judges, Third Edition (Federal Judicial Center: 2017).

Gail L. Gottehrer’s practice focuses on emerging technologies, including autonomous vehicles, connected vehicles, AI, the Internet of Things, biometrics, robots and facial recognition technology, and the privacy and security laws and ethical issues associated with the data collected and used by these technologies.


