Getting Smart: U.S. Financial Regulators Seek Input On Artificial Intelligence



On March 29, the Federal Reserve Board, the Consumer Financial
Protection Bureau, the Federal Deposit Insurance Corporation, the
Office of the Comptroller of the Currency, and the National Credit
Union Administration (the “Federal Agencies”) issued a
request for information (“RFI”) seeking comment
from financial institutions, trade associations, consumer groups,
and other stakeholders on the financial industry’s use of
artificial intelligence (“AI”). The RFI broadly seeks
insight into the industry’s use of AI in the provision of
financial services to customers and appropriate AI governance, risk
management, and controls. While the RFI should not come as a
surprise (for several years, regulators have highlighted the
growing use of AI and machine learning by financial institutions
and technology firms), it is the most coordinated effort to date by
the Federal Agencies to better understand the potential benefits
and risks of AI. It follows a speech earlier this year in which
Federal Reserve Board Governor Lael Brainard previewed the potential for additional
“supervisory clarity” in this area.

Risks and Rewards

The Federal Agencies acknowledge in the RFI the importance of AI
to the industry and its customers, including with respect to
AI’s use in flagging unusual transactions, personalization of
customer services, credit decision-making, risk management, textual
analysis (handling unstructured data and obtaining insights from
that data or improving efficiency of existing processes), and
cybersecurity. The RFI also notes the potential safety and
soundness risks of AI, including operational vulnerabilities, cyber
threats, information technology lapses, third-party risk, and model
risk. Consumer risks are also identified, such as risks of unlawful
discrimination; unfair, deceptive, or abusive acts or practices;
and privacy concerns. In addition, the RFI discusses the importance
of “explainability,” which refers to “how an AI
approach uses inputs to produce outputs.” Some AI approaches
exhibit a “lack of explainability” for their overall
functioning or how they arrive at individual outcomes, which can
give rise to challenges in legal compliance, audit, and other
contexts.


Request for Information

The RFI seeks comment on the following areas:

  • explainability;

  • risks from broader or more intensive data processing and
    usage;

  • “overfitting,” which occurs when an algorithm
    “learns” from idiosyncratic patterns in the training data
    that are not representative of the population as a whole;

  • cybersecurity risk;

  • “dynamic updating,” which refers to an AI approach’s
    ability to learn or evolve over time as it captures new training
    data;

  • AI use by community institutions;

  • oversight of third parties that have developed or provide AI;
    and

  • fair lending.
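The “overfitting” item above lends itself to a brief illustration. The sketch below uses entirely hypothetical toy data and model names (none of it drawn from the RFI): a model that memorizes idiosyncratic training examples scores perfectly on the data it has seen but generalizes poorly, while a simple rule that reflects the population-level pattern holds up on unseen cases.

```python
# Hypothetical sketch of "overfitting": a model that "learns"
# idiosyncratic patterns in its training data performs perfectly on
# that data but poorly on the population as a whole.

# Toy credit data: (income_band, on_time_ratio) -> repaid (1) or not (0).
# The last training pair is an idiosyncratic outlier.
train = {(3, 0.9): 1, (1, 0.2): 0, (2, 0.8): 1, (1, 0.9): 0}
test = {(3, 0.95): 1, (1, 0.25): 0, (2, 0.85): 1, (1, 0.92): 1}

def memorizer(x):
    # "Overfit" model: exact lookup of training examples, with an
    # arbitrary fallback for anything unseen.
    return train.get(x, 0)

def simple_rule(x):
    # Population-level pattern: a high on-time ratio predicts repayment.
    return 1 if x[1] >= 0.5 else 0

def accuracy(model, data):
    # Fraction of examples the model labels correctly.
    return sum(model(x) == y for x, y in data.items()) / len(data)

print(accuracy(memorizer, train))    # perfect on the training data
print(accuracy(memorizer, test))     # poor on unseen applicants
print(accuracy(simple_rule, test))   # the simple rule generalizes
```

The memorizer's gap between training and test performance is the signature of overfitting that validation practices are designed to catch.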

Fair lending appears poised to be a central supervisory concern
of the Federal Agencies when evaluating AI design and usage. More
questions are posed concerning fair lending than any other area in
the RFI. In particular, the Federal Agencies seek input on the
following questions:

  • What techniques are available to facilitate or evaluate the
    compliance of AI-based credit determination approaches with fair
    lending laws or mitigate risks of non-compliance?

  • What are the risks that AI can be biased and/or result in
    discrimination on prohibited bases? Are there effective ways to
    reduce risk of discrimination, whether during development,
    validation, revision, and/or use? What are some of the barriers to
    or limitations of those methods?

  • To what extent do model risk management principles and
    practices aid or inhibit evaluations of AI-based credit
    determination approaches for compliance with fair lending
    laws?

  • What challenges, if any, do financial institutions face when
    applying internal model risk management principles and practices to
    the development, validation, or use of fair lending risk assessment
    models based on AI?

  • What approaches can be used to identify the reasons for taking
    adverse action on a credit application when AI is employed? Do
    existing rules under the Equal Credit Opportunity Act provide
    sufficient clarity for the statement of reasons for adverse action
    when AI is used?
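On the last question, one long-standing approach for score-based underwriting models is to rank each input by how far it falls short of its maximum possible contribution to the score, and report the largest shortfalls as adverse-action reason codes. The sketch below is a minimal, hypothetical version of that idea; the feature names and weights are invented for illustration and are not drawn from the RFI or any regulation.

```python
# Hypothetical "points below max" sketch for identifying adverse-action
# reasons from a linear credit score. Feature names and weights are
# illustrative only.

WEIGHTS = {"on_time_ratio": 40, "income_band": 20, "utilization_inverse": 30}
MAX_VALUE = 1.0  # each normalized input lies in [0, 1]

def score(applicant):
    # Linear score: weighted sum of normalized inputs.
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def adverse_action_reasons(applicant, top_n=2):
    # Shortfall = points lost versus a feature's maximum contribution;
    # the largest shortfalls become the stated reasons for denial.
    shortfalls = {f: WEIGHTS[f] * (MAX_VALUE - applicant[f]) for f in WEIGHTS}
    return sorted(shortfalls, key=shortfalls.get, reverse=True)[:top_n]

applicant = {"on_time_ratio": 0.5, "income_band": 0.9, "utilization_inverse": 0.3}
print(score(applicant))                   # about 47 points (20 + 18 + 9)
print(adverse_action_reasons(applicant))  # the two largest shortfalls
```

For more complex, less interpretable models, firms often substitute model-agnostic attribution methods for the simple shortfall calculation, which is precisely where the RFI's explainability and ECOA adverse-action questions intersect.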

The Upshot

The RFI reflects an increasing interest in AI by the Federal
Agencies, especially as it relates to the risks posed to consumers
and the safety and soundness of financial institutions. We remain
attentive to trends and developments in this ever-evolving space
and would be happy to discuss any questions or concerns with
respect to the use of AI in financial services.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
