High tech continues to make inroads into financial services, and now regulators want more insight — and input — into how technologies such as artificial intelligence (AI) can be, and are being, used by enterprises.
In a joint statement this week from the Federal Deposit Insurance Corporation, the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency, the Consumer Financial Protection Bureau and the National Credit Union Administration, those agencies are seeking information and commentary on how AI is being used by financial institutions (FIs) in activities as varied as lending and risk management.
As reported by PYMNTS earlier in the year, Amyn Dhala, Mastercard vice president and global head of product for AI Express and credit risk, told Karen Webster in an interview that technology such as AI can make real-time risk management attainable. It can help banks reduce losses by tens of millions of dollars, which will get the attention of every financial services company on the planet. AI can also improve customers’ interactions with their FIs, he said.
“Banks can actually benefit from looking at how a customer’s behavior has been over a longer period of time, and then act accordingly rather than just at a single point of time,” Dhala said.
In the March announcement of the request for information, the agencies said that the “use of new technologies, such as AI, has the potential to augment decision-making and enhance services available to consumers and businesses. Likewise, as with any activity or process in which a bank engages, identifying and managing risks are key.”
The request ultimately might help the agencies work with FIs to ensure that AI is being used in a compliant manner, tied in part to the Bank Secrecy Act, anti-money laundering investigations and detecting data anomalies.
The agencies added in supplementary materials that AI technologies “such as voice recognition and natural language processing are used to improve customer experience and to gain efficiencies in the allocation of financial institution resources.”
One example lies with the use of chatbots to automate routine customer interactions, such as account opening activities and general customer inquiries. Also, said the filing, “an AI approach might be used to complement and provide a check on another, more traditional credit model. Financial institutions may also use AI to enhance credit monitoring (including through early warning alerts), payment collections, loan restructuring and recovery, and loss forecasting.”
The Risks And Challenges
The request also asks for insight into some of the challenges and risks tied to AI. The regulators noted that at least some “AI approaches can exhibit a ‘lack of explainability’ of how they function” or arrive at individual outcomes, and that a “lack of explainability can also inhibit financial institution management’s understanding of the conceptual soundness of an AI approach.”
Broader and more intensive data usage also may increase risk tied to cybercriminals and data theft. And even within the data that is collected, according to the document, “overfitting” can occur when an algorithm “learns” from idiosyncratic patterns in the training data — but does not deliver outputs or analysis that are representative of the population as a whole.
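The overfitting dynamic the regulators describe can be illustrated with a small sketch (a hypothetical example, not drawn from the filing): a model with more capacity than the data warrants “learns” the noise in its training sample, fits that sample nearly perfectly, yet predicts the broader population worse than a simpler model. Here NumPy is used to fit a straight line and a degree-9 polynomial to ten noisy observations of a linear relationship.

```python
import numpy as np

rng = np.random.default_rng(0)

# True relationship is a straight line, y = 2x, observed with noise.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.2, size=x_train.size)

# Held-out points from the same population, noise-free for scoring.
x_test = np.linspace(0.0, 1.0, 100)
y_test = 2.0 * x_test

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

simple = np.polyfit(x_train, y_train, deg=1)    # matches the true structure
flexible = np.polyfit(x_train, y_train, deg=9)  # enough capacity to memorize the noise

# The flexible model scores better on the data it was trained on...
train_gap = mse(flexible, x_train, y_train) < mse(simple, x_train, y_train)
# ...but the simple model predicts the population better: overfitting.
test_gap = mse(simple, x_test, y_test) < mse(flexible, x_test, y_test)
```

The degree-9 fit passes almost exactly through the ten training points, including their idiosyncratic noise, which is precisely why it generalizes worse than the line that reflects the underlying relationship.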
“Overfitting is not unique to AI, but it can be more pronounced in AI than with traditional models. Undetected overfitting could result in incorrect predictions or categorizations,” said the request for comments. The commentary period will last 60 days after the publication of the request in the Federal Register.