Artificial Intelligence

How Artificial Intelligence Can Comply with the Federal Trade Commission Act


Organizations that use artificial intelligence applications should be aware that the Federal Trade Commission has started to take a greater interest in how such applications are used and whether such uses constitute “unfair or deceptive acts or practices in or affecting commerce” under the Federal Trade Commission Act. This article outlines general practices that organizations can adopt that will help them minimize their potential for violating the Act.

There is general agreement that the authority of the Federal Trade Commission Act (the “Act”) is broad enough to govern algorithmic decision-making and other forms of artificial intelligence (“AI”).[1] Section 5(a) of the Act declares “unfair or deceptive acts or practices in or affecting commerce” unlawful.[2] The Federal Trade Commission (the “FTC”) is authorized to challenge such acts or practices through administrative adjudication and to promulgate regulations addressing unfair or deceptive practices that are widespread among multiple parties in the market.[3]

The FTC has a department that focuses on algorithmic transparency, the Office of Technology Research and Investigation, and has requested public comment on and scheduled hearings about algorithmic decision-making and AI.[4] The FTC has already started adjudicating complaints related to unfair practices involving AI, such as the complaint the Electronic Privacy Information Center (“EPIC”) filed against Universal Tennis Rating (“UTR”), in which EPIC alleged that UTR relied on “a secret algorithm to score children” tennis players, which “created a substantial risk of harm because children’s development, educational, scholarship, and employment opportunities may be unfairly hindered by low and inaccurate scores, the calculation of which is secret and the validity of which parents are not permitted to dispute.”[5]

The FTC has not yet issued clear guidance on what qualifies as an unfair or deceptive, and therefore unlawful, method of using AI. However, there are general practices that organizations can adopt that will help them minimize their potential for violating the Act:

  1. Establish a governing structure;
  2. Establish policies, internal and external, addressing the use and/or sale of AI and AI-reliant products;
  3. Establish notice procedures;
  4. Assess AI and algorithms for bias; and
  5. Ensure third party agreements properly allocate liability and responsibility.

Below, I briefly outline each recommended practice and provide suggestions for how organizations can adopt each one.

I. Governing Structure

The first step that I recommend to clients is to establish the group or the individual within the organization that will review AI implementation with an eye toward complying with the Act. Some organizations approach AI like any other technology or software update, but I believe that is a mistake. AI is much more likely to raise novel issues involving business operations, customer relations, and branding; organizations should implement a governing structure that creates a rubric for reviewing each AI proposal. That rubric can include the organization’s philosophical concerns, legal interpretations, operations concerns, marketing and branding concerns, etc.

The governing structure does not have to be complicated. Rather, the size and composition of the governing structure should reflect the size and composition of the organization. Large companies that have sophisticated AI programs should have a group composed of key stakeholders. That might be the board of directors, a board committee, a committee formed from C-Suite officers, etc. A smaller organization with more limited AI needs may designate only the president or the vice president of information technology to review each AI proposal in light of established principles.

Fortunately, there are plenty of resources an organization can rely on when drafting those principles. For example, the Partnership on AI, an organization founded by several technology companies, is working to develop best practices for fair, transparent, and accountable AI; it has committed to making its research into the ethical, social, economic, and legal implications of AI open to the public.[6] Similarly, the Software and Information Industry Association has published a brief on ethical principles for AI and data analytics,[7] and the Institute of Electrical and Electronics Engineers (“IEEE”) has published a treatise that attempts to provide recommendations on best practices, philosophies, and legal and ethical considerations for AI.[8] Any organization can review these to determine which principles and considerations are important to it and its compliance with the Act.

II. Policies

I recommend that companies that incorporate AI into their business operations consider adopting a public-facing privacy policy that discloses to and educates their customers about their AI practices. A well-written policy can be an organization’s first public effort to demonstrate compliance with the Act to the FTC and consumers. When drafting a public-facing AI policy, an organization should consider whether it needs to do the following:

  1. Include a statement disclosing the existence of any chatbots that interact with customers and explain the requirements of the California Bot Bill;[9]
  2. Explain how your AI complies with Article 22 of the GDPR and does not subject any consumer to decisions based solely on automated processing, including profiling, which produce legal effects concerning customers or similarly significantly affect them;
  3. Affirm that your AI does not rely on special categories of data and disclose the categories of data your AI relies on; and
  4. Provide an explanation of how your AI relies on data categories to reach its decisions, consistent with Article 13(2)(f) of the GDPR.[10]

Internal policies are important as well. Properly written employee policies will establish how the governing structure incorporates elements of outside guidance and will clearly state how the organization makes decisions about AI. These policies need not be lengthy documents, but they should be detailed enough to be useful to the staff implementing AI. By following them, employees will comply with the Act and carry out the organization’s vision and beliefs for AI.

III. Notice

Notice is a key element of complying with the Act. To avoid using AI in an unfair or deceptive manner, an organization must inform consumers and other affected individuals. The extent of the notice matters too, as notifying the individuals whose data appears in an AI’s training dataset can be just as important as informing customers who may interact with an organization’s AI. A loan applicant should know if the lender relies on algorithmic decision-making to approve a mortgage, but IBM has run into trouble because it did not inform the relevant people when the company used their Flickr photos to train its facial recognition AI.[11] Even though the FTC has not issued an affirmative regulation on this point, the industry trend is clearly running toward greater disclosure of AI usage.[12] For example, IEEE has suggested a “government-approved labeling system like the skull and crossbones found on household cleaning supplies that contain poisonous compounds could be used for this purpose to improve the chances that users are aware when they are interacting with” AI.[13] While this is a ridiculously loaded comparison (“skull and crossbones,” “poisonous compounds”), the point is clear.

Notice can take a variety of forms, depending on the AI in question. If the organization’s website relies on AI to analyze website usage, a pop-up directing visitors to the organization’s AI policy, like pop-ups that deliver privacy policies, is appropriate.[14] Similarly, if an organization uses autonomous chatbots as part of its customer service, the bot should clearly state to the consumer that it is a bot.[15] Organizations that rely on AI in other forms or in other stages of their business operations should consider the most effective form of notice. In the example of the loan applicant above, the lender should include a clear statement in the application form, whether that is online or in hard copy, that the final decision will incorporate algorithmic decision-making.

Informing individuals whose data is being used to train an AI application can be much more difficult, as IBM’s experience demonstrates. If an organization is collecting the data itself to train a specific application, it can provide notice directly to the data subjects. If an organization is obtaining datasets from a third party vendor, it will likely need to rely on the third party to notify the participants; agreements with those vendors should address this, as Section V below discusses. Alternatively, the organization can notify the data subjects the vendor used, but that may be impossible if the data is anonymized or the vendor does not have their contact information.

The FTC is likely to view notice in this context as a balancing act, weighing the interest of the data subjects to be notified that their data is being used to train the organization’s AI against the cost and difficulty of informing them. The key issue is whether the use of the information in the dataset is unfair or deceptive. If there is no notice to data subjects in a dataset, the FTC will look at whether the data subjects were disadvantaged by the AI training and the extent to which each data subject would have behaved differently in providing his or her data if he or she knew it would be used to train the AI. At the time of this writing, the FTC is not considering any complaints against IBM regarding its use of Flickr photographs, but it is easy to see both how the issue could have been avoided with better notice and the difficulty in providing that notice.[16]

Similar to providing notice, organizations should attempt to design and implement algorithms that allow key stakeholders – consumers, employees, vendors, leadership, etc. – to understand how and why AI applications make decisions, e.g., the factors the AI weighs more heavily than others, the data the AI does not consider, etc. Admittedly, this is an easy concept to express and a difficult one to execute, but it is important that organizations can show they are making good faith efforts to avoid AI that acts in an unfair or deceptive manner. Trying to make their AI more understandable to all parties involved is a good way to do so.
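By way of illustration, the following is a minimal sketch of one way an organization might surface which factors a model weighs most heavily, using permutation importance from scikit-learn. The model, feature names, and data are hypothetical placeholders for illustration only, not a prescribed or FTC-endorsed method.

```python
# A minimal sketch: measure which features a hypothetical model leans on most,
# using permutation importance. Model, features, and data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["income", "credit_history_years", "loan_amount", "zip_code"]

# Hypothetical historical data: each row is an applicant, y is approve/deny.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop suggests the model weighs that feature heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(
    zip(feature_names, result.importances_mean), key=lambda pair: -pair[1]
):
    print(f"{name}: {importance:.3f}")
```

A plain-language summary of output like this (e.g., “income and credit history drive most decisions; zip code does not”) is the kind of explanation stakeholders can actually use.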

IV. Assessing for Bias

One of the greatest concerns even the strongest supporters of AI have is its tendency to incorporate bias into decisions, including hiring,[17] criminal sentencing,[18] and lending.[19] Bias is, in and of itself, a problem, but bias in AI is particularly troublesome because consumers typically have no access to how AI makes its decisions. This is commonly referred to as the “black box” problem: data enters the AI’s black box, the algorithm in the black box analyzes the data, and the black box produces a decision based on the data. Except for a small number of key people in the organization, no one knows how the AI makes the decision. Although there are no regulations under the Act governing this directly, the FTC is actively exploring rules, and organizations need to be careful.[20]

Absent specific regulations, the best strategy to avoid FTC action due to impermissible bias is to conduct regular tests. If there is an investigation, an organization wants to be able to show a history of checking its AI for bias. I also recommend involving outside counsel in that process, both to comply with the Act (as well as other federal and state laws, as state attorneys general are also investigating bias under their states’ consumer protection acts) and to protect the test results from regulators and from discovery during litigation under attorney-client privilege.

In testing for bias, the organization should first identify the types of bias that might be a problem: race, gender, age, etc. It should then create test datasets that will demonstrate whether or not the AI can properly incorporate data regarding the areas of concern without evidencing bias. If the AI is used to make hiring decisions, but the organization is worried it will evidence a preference for hiring men, the test dataset should be designed to show how the AI application incorporates gender into its final hiring decisions.
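As a rough illustration of what such a test might look like, the sketch below compares selection rates by gender over a hypothetical test dataset and applies the four-fifths (80%) rule as a screening threshold. The records, decisions, and threshold are assumptions for illustration; they are not FTC-mandated metrics, and counsel should help choose the appropriate measures.

```python
# A minimal sketch of a disparate-impact check on a hiring model's decisions
# over a purpose-built test dataset. Model and data are hypothetical; the
# four-fifths (80%) rule is used only as a rough screening threshold.
from collections import defaultdict

def selection_rates(records, decisions):
    """Positive-outcome (hire) rate for each gender group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for record, decision in zip(records, decisions):
        totals[record["gender"]] += 1
        selected[record["gender"]] += int(decision)
    return {group: selected[group] / totals[group] for group in totals}

def four_fifths_check(rates):
    """Flag groups whose selection rate falls below 80% of the highest rate."""
    best = max(rates.values())
    return {group: rate / best >= 0.8 for group, rate in rates.items()}

# Hypothetical test records and the model's hire/no-hire decisions,
# e.g., model_decisions = model.predict(test_features)
test_records = [{"gender": "F"}, {"gender": "F"}, {"gender": "M"}, {"gender": "M"}]
model_decisions = [0, 1, 1, 1]

rates = selection_rates(test_records, model_decisions)
print(rates)                     # {'F': 0.5, 'M': 1.0}
print(four_fifths_check(rates))  # {'F': False, 'M': True} -> flag for review
```

Runs like this, documented and repeated over time, are exactly the testing history an organization would want to show an investigator.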

If the test dataset returns results indicating that impermissible bias is baked into the AI’s algorithm, the organization needs to show efforts to reduce and eliminate that bias in order to comply with the Act. This involves retraining the AI using other datasets, which are designed to teach the AI how to incorporate data without evidencing impermissible bias, i.e., further machine learning. In the hiring example, the datasets should train the AI to ignore applicants’ gender or to favor women in order to counteract the existing training that led the AI to favor men.
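One hedged sketch of such a remediation step appears below: it reweights training examples so that gender and the favorable outcome are statistically independent before the model is retrained, in the spirit of the “reweighing” technique from the fairness literature. The column names, sample data, and learner interface are hypothetical assumptions, not the only acceptable approach.

```python
# A minimal sketch of one remediation approach: reweight training examples so
# gender and the favorable outcome are independent before retraining.
# Column names and data are hypothetical.
import pandas as pd

def reweigh(df, group_col="gender", label_col="hired"):
    """Weight = (expected joint probability) / (observed joint probability)."""
    weights = []
    for _, row in df.iterrows():
        p_group = (df[group_col] == row[group_col]).mean()
        p_label = (df[label_col] == row[label_col]).mean()
        p_joint = (
            (df[group_col] == row[group_col]) & (df[label_col] == row[label_col])
        ).mean()
        weights.append((p_group * p_label) / p_joint)
    return pd.Series(weights, index=df.index)

train = pd.DataFrame(
    {"gender": ["F", "F", "F", "M", "M", "M"], "hired": [0, 0, 1, 1, 1, 0]}
)
train["sample_weight"] = reweigh(train)
print(train)

# Most learners accept per-example weights at training time, e.g.:
# model.fit(X_train, y_train, sample_weight=train["sample_weight"])
```

After retraining, the organization should rerun its bias tests to confirm the remediation actually moved the results.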

An organization that can show a history of testing its AI and attempting to remediate any impermissible bias it discovers will have a strong defense against any FTC action.

V. Third Party Agreements

Part of ensuring that an organization’s AI complies with the Act is ensuring that its vendors and contractual partners comply with the Act. It is not enough to assume that they do. Organizations need to include language in their contracts in which the appropriate parties (a) represent that the relevant individuals have received notice or given consent; (b) provide proof of that notice or consent; and (c) indemnify the other party for losses and costs caused by the relevant AI at issue.

Aggressive and/or sophisticated organizations may also seek to assign most or all liability to the other party, even when that is not appropriate given the responsibilities of the parties. For example, in a contract where an organization agrees to provide AI analysis of website usage for a third party, the party that maintains the website should generally represent that it provides notice of the AI, while the organization performing the analysis should indemnify the third party for losses and damages caused by the AI. However, if the analyzing organization is aggressive, it might attempt to assign all liability for the AI to the website operator under the theory that the AI is only being used on behalf of the website.

At this point, that type of assignment is permitted. It is possible that some assignments of liability associated with AI will be prohibited in the future. Similar statutes and regulations govern liability in other contexts. Some states require landlords to accept liability for their negligence and willful misconduct, making void any lease clause that would force the tenant to release the landlord from such liability.[21] Under the European Union’s General Data Protection Regulation, the processor of an individual’s personal data is liable to individuals for a subprocessor’s violations; it cannot contract that liability away.[22] Until similar prohibitions exist for liability associated with AI, organizations may try to aggressively limit their own risk exposure. For this reason alone, organizations should review their contracts with third parties to ensure AI representations and liability are properly addressed.

But reviewing those contracts is also part of complying with the Act. For example, an organization that obtains datasets from a third party should review the contract to ensure that the dataset provider represents it has given notice to or obtained consent from the relevant individuals, that the organization can review documentation to confirm such notice or consent, and that the provider indemnifies the organization for losses and costs caused by the provider failing to give notice or obtain consent. If there is a complaint against the organization, its failure to take such precautions could lead the FTC to determine that it had engaged in unfair or deceptive trade practices because it aided and abetted the third party dataset provider.

VI. Conclusion

By following these practices, organizations will have a strong defense in the event a consumer files a complaint with the FTC. Even in the absence of the FTC and the Act, I recommend the above to clients as best practices that organizations should adopt. They help the organization make thoughtful decisions about AI, allow the organization to develop a desirable brand in AI management with consumers, and give consumers appropriate notice and protection regarding potentially harmful AI.

John Frank Weaver is a member of McLane Middleton’s Corporate Department and Privacy and Data Security Practice Group.  He has a diverse practice that focuses on land use, real estate, privacy, telecommunications, and emerging technologies, including artificial intelligence, self-driving vehicles, and drones.  He can be reached at (781) 904-2685 or john.weaver@mclane.com. https://www.mclane.com/staff/john-weaver


