Artificial Intelligence

3 ways the life sciences industries can eliminate bias in AI


Advances in the life sciences over the past decade have made it possible to manipulate the genetic code at the molecular level. Devices can see inside the human body with a level of precision that seemed unfathomable just a few years ago. And in a matter of months, researchers have created a multitude of vaccines for the novel coronavirus that has devastated the world.

Yet despite many amazing advancements, the artificial intelligence (AI) platforms that help enable many of these triumphs continue to show astonishing levels of bias.

It doesn’t take much digging to find examples across industries of deep negative bias within purportedly objective measurements:

  • A popular health care algorithm cut the number of Black patients identified for extra care by more than half.
  • Several AI-based facial recognition programs have repeatedly displayed bias, in one case mistaking criminal mug shots for pictures of professional athletes.
  • Amazon spent years using artificial intelligence to develop an automated hiring program only to learn it discriminated against women.
  • A Florida county’s criminal justice algorithm, one widely used across the country, erroneously labeled African-American defendants as “high risk” for committing future felonies at nearly twice the rate of white defendants.

Today, AI modeling often depends on data that come from biased sources that don’t fully capture diversity and are steeped in socioeconomic and racial inequality. The reality is this: No matter how advanced technology becomes, bias will be with us until there is thoughtful human intervention in the artificial intelligence process.

Patient information such as medical history, which is readily collected in digitized and reusable form at hospitals in affluent areas, is hard to obtain from health care facilities in underserved neighborhoods. Researchers may build algorithms on variables such as education level or employment status, which often correlate with race or economic standing, without correcting for those correlations. Even when such variables are excluded, other factors, such as the frequency of doctor visits, can carry subtle biases.
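To make the proxy problem concrete, here is a minimal sketch of screening candidate features for correlation with race before they enter a model. Everything in it, the dataset, the column names, and the 0.3 cutoff, is a hypothetical assumption for illustration, not a standard:

```python
import pandas as pd

# Hypothetical patient records; column names and values are invented
# for illustration, not drawn from any real dataset.
df = pd.DataFrame({
    "education_years": [12, 16, 10, 18, 11, 14],
    "visits_per_year": [1, 4, 1, 5, 2, 3],
    "race": ["B", "W", "B", "W", "B", "W"],
})

# Encode the protected attribute numerically so we can measure association.
race_indicator = (df["race"] == "W").astype(int)

# Flag any candidate feature whose correlation with race crosses a threshold;
# the 0.3 cutoff is a judgment call, not an established standard.
for feature in ["education_years", "visits_per_year"]:
    r = df[feature].corr(race_indicator)
    if abs(r) > 0.3:
        print(f"{feature}: correlation with race = {r:.2f}; review before use")
```

A real pipeline would work with far larger data and more robust association measures, but the principle, checking potential proxies against protected attributes before training, is the same.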

Data bias will continue to be a problem in the artificial intelligence ecosystem until the people working in it do something to fix it. This is why the Alliance for Artificial Intelligence in Healthcare, an organization I cofounded, is working to make bias and fairness a central theme in its work with regulators around the world.

Acknowledging the issue is only a first step. I see three critical ways to address data bias within the health care industry:

Create agreed-upon standards that ensure fairness across a wide range of contexts, and put them under the direction of the FDA. While it is tempting to rely on technology to do all of the work, the most effective way to improve artificial intelligence is to increase human participation throughout the process. As advanced as AI is, it cannot make ethical judgments on its own. If a study generates data from a largely white population, for instance, that needs to be made transparent, and the data must be reweighted to address correlations with race so the resulting AI model accurately reflects the makeup of the U.S. population.
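To illustrate what that reweighting might look like, here is a minimal sketch; the group labels and population shares are invented for illustration, not actual census figures:

```python
import pandas as pd

# Hypothetical study sample that over-represents one group.
sample = pd.DataFrame({"race": ["W"] * 80 + ["B"] * 20})

# Assumed target population shares (illustrative, not actual census data).
population_share = {"W": 0.60, "B": 0.40}

# Weight each record by (population share) / (sample share) for its group,
# so the weighted sample mirrors the target population.
sample_share = sample["race"].value_counts(normalize=True)
sample["weight"] = sample["race"].map(
    lambda group: population_share[group] / sample_share[group]
)

# Many model-fitting routines accept per-record weights, e.g.
# scikit-learn's model.fit(X, y, sample_weight=sample["weight"]).
print(sample.groupby("race")["weight"].first())
```

The idea is simply to upweight under-represented records and downweight over-represented ones so that the model's training signal matches the population it will serve.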

Invest in health care facilities and outreach in underserved areas to increase the caliber of care as well as to establish high-quality datasets. Better medical care is, of course, a paramount goal in and of itself. It will also have the welcome side effect of building trust, an essential element of the relationship between underserved communities and health care providers.

This deeper engagement will also give AI developers access to data from a wider spectrum of people. The broader and more inclusive the datasets, the better they represent everyone, and the more effective future treatments will be for the entire population.

At Valo Health, the company I work for, we aim to use patient-based datasets that represent more diverse patient populations while also controlling for factors like socioeconomic status. We take these factors into consideration when developing models of disease and recruiting for clinical trials so that our data come from the widest, most representative subsets of the population.
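As a generic illustration of what recruiting to a representative mix can look like, here is a sketch with invented socioeconomic brackets and figures; it is not a description of Valo Health's actual pipeline:

```python
import pandas as pd

# Hypothetical candidate pool for a trial; the socioeconomic brackets
# and their counts are invented for illustration.
pool = pd.DataFrame({
    "patient_id": range(1000),
    "ses_bracket": ["low"] * 250 + ["mid"] * 550 + ["high"] * 200,
})

# Assumed target mix for the recruited cohort (illustrative figures).
target_share = {"low": 0.35, "mid": 0.40, "high": 0.25}
cohort_size = 200

# Sample each bracket up to its quota so the cohort matches the target mix
# rather than simply reproducing whoever is easiest to reach.
recruited = pd.concat(
    pool[pool["ses_bracket"] == group].sample(
        n=int(cohort_size * share), random_state=0
    )
    for group, share in target_share.items()
)
print(recruited["ses_bracket"].value_counts(normalize=True))
```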

Create a more diverse workforce to build greater awareness of bias issues and reduce blind spots around them. Diverse teams also bring the added benefits of different perspectives, better problem solving, and greater creativity.

Bias in artificial intelligence affects everyone. It dilutes the quality of scientific analyses and reduces the effectiveness of medical work, both of which are mission critical. On a deeper level, bias in AI increases mistrust across society and harms countless individuals and families who depend on medical advances.

So when a researcher cannot find a single photo in the medical literature of Covid-19-caused rashes on darker-skinned patients, it is because people of color are dramatically underrepresented in datasets throughout the medical industry, even though they have been disproportionately affected by Covid-19. That is unacceptable.

Until AI systems can directly evaluate and correct for bias, those of us in the health care industry must work actively to remove bias from all of our work, while AI applications are still in their early stages and our efforts can truly make a difference.

Brandon Allgood is senior vice president and head of data science and artificial intelligence at Valo Health and the cofounder and chair of the Alliance for Artificial Intelligence in Healthcare.
