Artificial Intelligence in Financial Services: The Canadian Regulatory Landscape


Introduction

Artificial intelligence (AI) promises to transform the financial services sector dramatically and is increasingly being used by financial services providers.

Although Canada currently has no AI-specific regulatory framework, federal legislation to regulate AI is presently before the House of Commons. In addition, there are a number of financial services regulatory initiatives that will impact the use of AI, along with privacy and other laws of general application that apply to the use of AI. This bulletin provides a snapshot of Canadian AI regulation and initiatives relevant to the financial services sector.

Bill C-27: The Digital Charter Implementation Act, 2022

Artificial Intelligence and Data Act

The Digital Charter Implementation Act, 2022 (Bill C-27) is currently under review in the House of Commons. The Artificial Intelligence and Data Act (AIDA), a component of Bill C-27, is Canada’s first comprehensive attempt at regulating AI. Under the Bill, an AI system is “a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.” AIDA is notably intended to mitigate risk related to high-impact AI systems. Note that the government intends to propose substantial amendments to AIDA (read about those changes in our previous bulletin on Bill C-27).

It is proposed that AI systems used to determine whether to extend services to an individual, assess service costs and types, and prioritize the provision of services be deemed high-impact systems. Thus, we expect that AI systems used to determine whether to extend credit, provide insurance, or price financial products would be classified as high-impact systems. Indeed, Industry Minister Champagne has previously identified such systems as targets for AI regulation. The government also plans to add specific obligations for generative AI systems (e.g., ChatGPT), which could impact the use of AI for customer service.

Since many details of the law are left to regulation, AIDA’s full impact on the financial services sector is unclear. For example, key terms, such as “biased output” and “material harm,” remain undefined. The term “harm,” however, includes “economic loss to an individual,” which is relevant to financial services. The government’s amendments to the Bill could also be tabled in the coming weeks, shedding additional light on AIDA’s application to the financial services industry. Ultimately, the question is not whether financial services will fall within AIDA’s ambit, but the extent to which the legislation, once passed, will impact financial services providers.

Consumer Privacy Protection Act: Automated Decision-Making

Bill C-27 would also overhaul the federal private-sector privacy regime, replacing the privacy portions of the existing law (PIPEDA) with the Consumer Privacy Protection Act (CPPA). Although the CPPA does not focus solely on AI, it would regulate the use of “automated decision systems,” defined as any technology that assists or replaces the judgment of human decision-makers through the use of a rules-based system, regression analysis, predictive analytics, machine learning, deep learning, a neural network, or other techniques. In particular, organizations will have to make available a general account of their use of automated decision systems to make predictions, recommendations, or decisions about individuals that could significantly impact them. On request, they must also provide an affected individual with an explanation of the prediction, recommendation, or decision made by the system, including the types of personal information used, the source of the information, and the reasons or principal factors that led to the result. These provisions would presumably encompass systems used to make credit or other financial determinations about individuals.
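To make the CPPA’s explanation obligation concrete, a compliance team might log, for each automated decision, a record capturing the elements the provision names. The following is a minimal sketch only; the DecisionExplanation class and its field names are hypothetical, not statutory language.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DecisionExplanation:
    """Illustrative record of the explanation the CPPA would require on request.

    Field names are hypothetical; the Bill requires the types of personal
    information used, the source of that information, and the reasons or
    principal factors behind the prediction, recommendation, or decision.
    """
    decision: str                   # e.g., "credit application declined"
    personal_info_types: list[str]  # types of personal information used
    info_sources: list[str]         # where that information came from
    principal_factors: list[str]    # main reasons driving the outcome
    generated_at: datetime = field(default_factory=datetime.utcnow)

# Hypothetical example for a credit decision:
explanation = DecisionExplanation(
    decision="credit application declined",
    personal_info_types=["income", "credit history", "existing debt"],
    info_sources=["application form", "credit bureau report"],
    principal_factors=["debt-to-income ratio above lender threshold"],
)
```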

Current Legislation Applicable to AI

Quebec’s Law 25

With its Act to modernize legislative provisions as regards the protection of personal information (Law 25, formerly Bill 64), Quebec was the first province to overhaul its privacy legislation (see our Resource Centre on Law 25). Law 25 amended some 20 statutes, including the province’s public and private sector privacy laws, and the Act to establish a legal framework for information technology (ALFIT). Most provisions took effect in September 2023. Key requirements apply to AI tools that rely on the use of personal information. In particular, Law 25 mandates privacy impact assessments, enhanced transparency, and the reporting of biometric authentication tools and biometric databases.

Privacy by Design. Inspired by the concept of privacy by design, Law 25 calls for privacy to be considered throughout the engineering of a project involving personal information. As such, it requires organizations to carry out a privacy impact assessment (PIA) for any project to acquire, develop, or overhaul information systems or electronic service delivery systems that involve collecting, using, communicating, keeping, or destroying personal information. In other words, before acquiring or developing an AI system involving personal information, organizations must conduct a risk analysis, considering all the positive and negative consequences of such a system for the privacy of the individuals concerned. Further, Quebec privacy laws now require that the confidentiality parameters of technological products or services provide the highest degree of confidentiality by default, with no action required by the individual.

Transparency Requirements. Two transparency requirements are particularly relevant to AI. First, an organization that uses technology involving profiling (i.e., the collection and use of personal information to assess certain characteristics of a natural person, in particular to analyze that person’s work performance, economic situation, health, personal preferences, interests or behaviour) must:

  • inform the individual of the use of such a tool; and
  • provide the individual with the means to activate the profiling function (i.e., profiling functions must be deactivated by default, as sketched below).
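In practice, “deactivated by default” means the profiling setting must start in the off position and flip on only through an explicit action by the individual. A minimal sketch, assuming a hypothetical UserPreferences object:

```python
from dataclasses import dataclass

@dataclass
class UserPreferences:
    """Hypothetical settings object. Law 25 requires the most
    privacy-protective configuration by default, with no action
    required by the individual."""
    profiling_enabled: bool = False  # deactivated by default

    def activate_profiling(self) -> None:
        # Only an explicit, informed action by the individual turns this on.
        self.profiling_enabled = True

prefs = UserPreferences()
assert prefs.profiling_enabled is False  # default state satisfies the rule
prefs.activate_profiling()               # the individual opts in
```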

Second, where an organization uses a decision-making system based exclusively on automated processing (e.g., via an algorithm), upon informing the person of the decision, it must:

  • advise the individual that their personal information is being used to make a decision based exclusively on automated processing; and
  • give the individual the opportunity to present their observations to a person in a position to review the decision.

An organization implementing a decision system based exclusively on automated processing of personal information must first carry out a PIA, as explained above. This may include an algorithmic impact analysis, especially to identify the risks of algorithmic bias or discriminatory effects.
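One common quantitative check in such an analysis is an adverse impact ratio, which compares favourable-outcome rates across groups. The sketch below assumes simple outcome counts per group; neither the check nor the 0.8 benchmark is prescribed by Law 25.

```python
def adverse_impact_ratio(favourable: dict[str, int],
                         total: dict[str, int]) -> dict[str, float]:
    """Compare each group's favourable-outcome rate to the best group's rate.

    A ratio well below 1.0 (e.g., under the informal 0.8 "four-fifths"
    benchmark used in some jurisdictions) flags a possible discriminatory
    effect worth investigating in the impact assessment.
    """
    rates = {group: favourable[group] / total[group] for group in total}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical loan-approval counts by group:
ratios = adverse_impact_ratio(
    favourable={"group_a": 480, "group_b": 310},
    total={"group_a": 600, "group_b": 500},
)
print(ratios)  # {'group_a': 1.0, 'group_b': 0.775} -> group_b warrants review
```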

Biometrics. ALFIT now requires organizations to report to the Commission d’accès à l’information:

  • When they verify or confirm an individual’s identity using a system that captures biometric characteristics or measurements. Such biometric systems could be fingerprint, voice, or facial recognition systems, often involving AI. This will likely apply to businesses that rely on voice recognition to authenticate customers when they call, regardless of whether the underlying database is centralized or decentralized.
  • If they create a centralized database of biometric characteristics or measurements – in this case, the disclosure must be made at least 60 days before the database goes into service.
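Because the disclosure must precede go-live by at least 60 days, the latest filing date is simple date arithmetic. A minimal sketch with a hypothetical go-live date:

```python
from datetime import date, timedelta

def latest_disclosure_date(go_live: date, notice_days: int = 60) -> date:
    """Latest date to notify the Commission d'accès à l'information
    before a centralized biometric database goes into service."""
    return go_live - timedelta(days=notice_days)

# Hypothetical go-live date:
print(latest_disclosure_date(date(2025, 9, 1)))  # 2025-07-03
```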

Regulatory Initiatives

OSFI Guideline E-23

Technology-related risk – including the risks of AI – is a significant focus of the Office of the Superintendent of Financial Institutions (OSFI). In 2017, OSFI issued Guideline E-23: Enterprise-Wide Model Risk Management for Deposit-Taking Institutions, which set out the regulator’s expectations for establishing sound policies and practices for enterprise-wide model risk frameworks and model management cycles at federally regulated deposit-taking institutions.

On November 20, 2023, OSFI released an updated draft of Guideline E-23 and launched a public consultation that runs until March 22, 2024. The final guideline is set to take effect on July 1, 2025. The revised guideline covers models used to forecast economic conditions, estimate financial risks, price products and services, and optimize business strategies, as well as models used for non-financial risks such as climate, cyber, and technology and digital innovation risks. It recognizes that the surge in AI and machine learning (ML) analytics increases the risk arising from the use of models, and the definition of “model” in the updated draft expressly includes AI/ML methods. Notably, the updated draft of Guideline E-23 would apply to federally regulated insurers and federally regulated private pension plans in addition to federally regulated deposit-taking institutions.

OSFI will expect (i) models to be adequately managed at each stage of their lifecycle, (ii) model risks to be managed in proportion to the organization’s model risk profile, complexity, and size, and (iii) organizations to establish a well-defined enterprise-wide Model Risk Management Framework. The updated guideline will also address issues of model bias, fairness, and privacy that could lead to reputational risk.
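For institutions taking stock against the draft guideline, a central inventory that tracks each model (including AI/ML models) through its lifecycle is a natural starting point. The stage names and fields below are one illustrative reading of the guideline, not OSFI-prescribed terminology.

```python
from dataclasses import dataclass
from enum import Enum

class LifecycleStage(Enum):
    # Illustrative stages; draft E-23 expects management at each stage.
    DESIGN = "design"
    REVIEW = "review"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"
    DECOMMISSION = "decommission"

@dataclass
class ModelRecord:
    """Hypothetical entry in an enterprise-wide model inventory."""
    name: str
    purpose: str       # e.g., pricing, credit adjudication, climate risk
    uses_ai_ml: bool   # draft E-23 expressly includes AI/ML methods
    risk_rating: str   # should be proportional to the model risk profile
    stage: LifecycleStage

inventory = [
    ModelRecord("retail-credit-scorer", "credit adjudication", True,
                "high", LifecycleStage.MONITORING),
]
```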

Industry Thought Leadership: OSFI and AMF

OSFI has issued a number of discussion papers and reports on AI. In 2020, it published Developing Financial Sector Resilience in a Digital World, which discusses the impact of advanced analytics on model risk and principles for the responsible use of AI and ML. 

More recently, in collaboration with the Global Risk Institute (GRI), OSFI hosted a Financial Industry Forum on Artificial Intelligence and published a summary of the views expressed. The forum was an opportunity to discuss appropriate safeguards and risk management for AI use by financial institutions. These topics were discussed under the general titles of explainability, data, governance, and ethics (collectively called the “EDGE” principles). While the summary states that the report should not be interpreted as guidance from OSFI, it highlights key regulatory considerations.

In November 2021, Quebec’s Autorité des marchés financiers (AMF) published a report on the use of AI in finance, issuing ten recommendations for framing its use from a financial regulatory perspective. Of these recommendations, the following are the most relevant for financial institutions:

  • Financial institutions should adopt an AI governance framework that includes human liability and accountability for certain decisions made by AI systems and for the adoption of their recommendations.
  • Financial institutions must ensure that AI systems are resilient, efficient, robust and secure, in order to contribute to the stability of the financial system.
  • To the extent that the use of AI significantly increases the volume of decisions and decreases consumer control, financial institutions must adapt their dispute and redress procedures to facilitate consumer action. In the event of disputes, they must offer fast and flexible dispute resolution mechanisms, including mediation.
  • Financial institutions must ensure that the use of AI systems does not undermine equity, i.e., the equal treatment of consumers (their current or potential customers). In particular, they must avoid reinforcing discrimination and economic inequality.
  • Financial institutions must ensure that the use of AI systems respects consumer autonomy by providing all the information required for free and informed consent, by justifying decisions made with the help of algorithms using clear language, and by respecting the diversity of lifestyles.




