
Significance of FTC guidance on artificial intelligence in health care


November 24, 2021 – The Federal Trade Commission has issued limited guidance in the area of artificial intelligence and machine learning (AI), but through its enforcement actions and press releases it has made clear its view that AI may pose issues that run afoul of the FTC Act’s prohibition against unfair and deceptive trade practices. In recent years it has pursued enforcement actions involving automated decision-making and results generated by computer algorithms and formulas, common uses of AI in the financial sector that may also be relevant in other contexts such as health care.

In FTC v. CompuCredit Corp., FTC Case No. 1:08-CV-1976 (2008), the FTC alleged that subprime credit marketer CompuCredit violated the FTC Act by deceptively failing to disclose that it used a behavioral scoring model to reduce consumers’ credit limits. If cardholders used their credit cards for cash advances or to make payments at certain venues, such as bars, nightclubs and massage parlors, their credit limit might be reduced.

The company, the FTC alleged, did not inform consumers that these purchases could reduce their credit limits, either when consumers signed up or when the limits were actually reduced. Because consumers were never told about these automated decisions, the FTC alleged, CompuCredit’s conduct was deceptive under the FTC Act.


In its April 8, 2020, press release titled “Using Artificial Intelligence and Algorithms,” the FTC recommends that the use of AI tools be transparent, explainable, fair and empirically sound, while fostering accountability.

The FTC noted, for example, that research “recently published in Science revealed that an algorithm used with good intentions — to target medical interventions to the sickest high-risk patients — ended up funneling resources to a healthier, white population, to the detriment of sicker, black patients.” See Obermeyer Z., Powers B., Vogeli C. and Mullainathan S., “Dissecting racial bias in an algorithm used to manage the health of populations,” Science, 366(6464): 447–453 (2019); see also Röösli E., Rice B. and Hernandez-Boussard T., “Bias at warp speed: how AI may contribute to the disparities gap in the time of COVID-19,” Journal of the American Medical Informatics Association (JAMIA), Volume 28, Issue 1, pages 190–192 (January 2021) (summary available on PubMed).


According to Röösli, Rice and Hernandez-Boussard, the algorithm had used “healthcare spending as a seemingly unbiased proxy to capture disease burden, [but] did not account for or ignored how systemic inequalities created from poorer access to care for Black patients resulted in less healthcare spending on Black patients relative to equally sick White patients.”
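To make the mechanism Röösli, Rice and Hernandez-Boussard describe concrete, the sketch below simulates it with purely hypothetical numbers (the groups, costs and access factor are illustrative assumptions, not the study’s actual model or data): a risk score built on spending under-ranks equally sick patients whose spending is suppressed by poorer access to care.

```python
# Hypothetical illustration of proxy bias, not the actual algorithm or data from the study.
# Assumption: spending stands in for illness, but one group's spending is suppressed by
# unequal access to care, so equally sick patients receive lower "risk" scores.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups of patients with identical (simulated) illness distributions.
group = rng.choice(["A", "B"], size=n)        # illustrative group labels
illness = rng.poisson(lam=3.0, size=n)        # true disease burden (not seen by the model)

# Access barriers suppress spending for group B, even at equal illness.
access = np.where(group == "A", 1.0, 0.6)     # assumed access factor
spending = illness * 1_000 * access + rng.normal(0, 500, size=n)

# A proxy-trained "risk score" effectively predicts spending, not illness.
risk_score = spending

# Refer the top 25% of risk scores to a care-management program.
threshold = np.quantile(risk_score, 0.75)
referred = risk_score >= threshold

for g in ("A", "B"):
    mask = group == g
    print(
        f"group {g}: mean illness {illness[mask].mean():.2f}, "
        f"referral rate {referred[mask].mean():.1%}"
    )
# Despite identical illness distributions, group B is referred far less often,
# because the proxy (spending) encodes unequal access rather than medical need.
```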

The FTC’s April 19, 2021, press release titled, “Aiming for truth, fairness, and equity in your company’s use of AI,” reiterated this concern, noting that research has highlighted how apparently “neutral” AI technology can “produce troubling outcomes — including discrimination by race or other legally protected classes.”

The FTC highlighted a study published in the Journal of the American Medical Informatics Association (the Röösli, Rice and Hernandez-Boussard article cited above). The study suggested that AI used to assess the effects of the COVID-19 pandemic, although ultimately meant to benefit all patients, relies on models built with data that reflect existing racial bias in health care delivery, and may therefore worsen health care disparities for people of color. The FTC advises companies using big data analytics and machine learning to reduce the opportunity for such bias.

Where the data used to develop an algorithm (used for AI) was not properly acquired or used (e.g., without proper notice to, or consent from, the appropriate individuals), the FTC has required the deletion of both that data and the algorithm itself.

In the FTC action titled In the Matter of Everalbum, Inc., Docket No. 1923172 (2021), the FTC claimed that Everalbum, the developer of a now-defunct photo storage app, told users who uploaded photos to its platform that they could opt in to Everalbum’s facial recognition feature to organize and sort their photos, when in fact the feature was already activated by default.

Everalbum, the FTC claimed, combined millions of facial images extracted from users’ photos with publicly available datasets to create proprietary datasets that it used to develop its facial recognition technology. It used this technology not only for the app’s facial recognition feature but also to develop Paravision, its facial recognition service for enterprise customers, which, although not mentioned in the FTC’s complaint, reportedly included military and law enforcement agencies. The FTC also claimed that Everalbum misled users into believing that it would delete the photos of users who deactivated their accounts, when in fact it did not delete them.


In a Jan. 11, 2021, settlement, the FTC required Everalbum to delete (i) the photos of users who deactivated their accounts; (ii) all face embeddings (data reflecting facial features that can be used for facial recognition purposes) derived from the photos of users who did not give their express consent for this use; and (iii) any facial recognition models or algorithms developed with users’ photos.

The final point may have significant implications for developers of AI, to the extent the FTC requires deletion of the algorithm itself when it was developed using data that was not appropriately acquired or used for that purpose.

The FTC recommends that the use of AI tools be transparent, explainable, fair and empirically sound, while fostering accountability. Specifically, the FTC recommends that companies be transparent:

• about how automated tools are used;

• when sensitive data is collected;

• if consumers are denied something of value based on algorithmic decision-making;

• if algorithms are used to assign risk scores to consumers;

• if the terms of a deal might be changed based on automated tools.

Consumers should also be given access and an opportunity to correct information used to make decisions about them.

The FTC warns that consumers should not be discriminated against based on membership in a protected class. To that end, the focus should be not only on inputs but also on outcomes, to determine whether a model appears to have a disparate negative impact on people in a protected class. Companies using AI and algorithmic tools should consider whether they should engage in self-testing of AI outcomes, to help in assessing the consumer protection risks inherent in using such models. AI models should be validated and revalidated to ensure that they work as intended and do not illegally discriminate.
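One common form of such self-testing is an outcome audit that compares favorable-outcome rates across groups, for example using the widely cited “four-fifths” disparate impact ratio. The sketch below is a minimal illustration with assumed data, field names and threshold; it is not an FTC-prescribed or legally sufficient test.

```python
# Minimal, hypothetical outcome audit: compare favorable-outcome rates across groups
# and flag a possible disparate impact using the common "four-fifths" rule of thumb.
from collections import defaultdict

def disparate_impact_report(records, group_key="group", outcome_key="approved", threshold=0.8):
    """records: iterable of dicts with a group label and a boolean favorable outcome."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        favorable[r[group_key]] += bool(r[outcome_key])

    rates = {g: favorable[g] / totals[g] for g in totals}
    best = max(rates.values())
    report = {}
    for g, rate in rates.items():
        ratio = rate / best if best else 0.0
        report[g] = {"rate": rate, "ratio_to_best": round(ratio, 3), "flag": ratio < threshold}
    return report

# Illustrative data only: group B's approval rate is well below group A's.
decisions = (
    [{"group": "A", "approved": True}] * 80 + [{"group": "A", "approved": False}] * 20 +
    [{"group": "B", "approved": True}] * 55 + [{"group": "B", "approved": False}] * 45
)
for g, stats in disparate_impact_report(decisions).items():
    print(g, stats)
```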


The inputs (e.g., the data used to develop and refine the algorithm/AI) should be properly acquired and, if they include personal data, should be collected and used in a transparent manner (e.g., upon proper notice to and/or consent from the appropriate individuals).

The FTC recommends that, to avoid bias or other harm to consumers, an operator of an algorithm should ask four key questions (an illustrative check for the first question is sketched after this list):

• How representative is your data set?

• Does your data model account for biases?

• How accurate are your predictions based on big data?

• Does your reliance on big data raise ethical or fairness concerns?
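As one illustrative way to approach the first question, the sketch below compares the group composition of a training data set against a reference population and flags groups that fall short; the groups, counts and tolerance are hypothetical assumptions rather than any FTC-endorsed method.

```python
# Hypothetical representativeness check: compare group shares in a training set
# against a reference population and flag groups that appear under-represented.
# Groups, counts and tolerance are illustrative assumptions only.
from collections import Counter

def representativeness(sample_groups, population_shares, tolerance=0.05):
    counts = Counter(sample_groups)
    total = sum(counts.values())
    findings = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        findings[group] = {
            "observed_share": round(observed, 3),
            "expected_share": expected,
            "under_represented": observed < expected - tolerance,
        }
    return findings

# Illustrative data: a training set drawn mostly from one group.
training_groups = ["A"] * 750 + ["B"] * 180 + ["C"] * 70
population = {"A": 0.60, "B": 0.25, "C": 0.15}

for group, result in representativeness(training_groups, population).items():
    print(group, result)
```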

Finally, the FTC encourages companies to consider how to hold themselves accountable, and whether it would make sense to use independent standards or independent expertise to step back and take stock of their AI. In the case of the algorithm discussed above that ended up discriminating against Black patients, well-intentioned employees were trying to use the algorithm to target medical interventions to the sickest patients, but it was objective outside observers who independently tested the algorithm and discovered the problem. Such outside tools and services are increasingly available as AI is used more frequently, and companies may want to consider using them.


Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias. Westlaw Today is owned by Thomson Reuters and operates independently of Reuters News.

Linda A. Malek is a partner at Moses & Singer LLP and chair of the firm’s Healthcare and Privacy & Cybersecurity practices.

Blaze Waleski is counsel at Moses & Singer LLP and practices privacy, data protection, cyber security and technology law.


