Artificial Intelligence

Artificial intelligence engines have implicit biases


Technology has found a way to seep into every nook and cranny of the human experience. Throughout this digital shift, one universal language has pushed society forward and made everyday life easier: coding.

Computer science is an increasingly important area of study, as the digital and physical worlds blend seamlessly together, like a beautifully choreographed dance. But like all things that humans touch, the mathematical sequences that create our digital existence are flawed by the systems of inequality that thrive within the physical world.

Each person, regardless of their background, has adopted a set of subconscious biases. The prejudice each person holds can come across in both superficial and significant ways, and when an engineer is creating an algorithm for a new artificial intelligence system, those biases will undoubtedly affect the outcome.

This is not to say that humans are intentionally or deliberately ingraining their biases into these systems, but traces of those biases break through in the data that is fed into these new creations.

Take, for instance, the widely used image data set ImageNet.

In September, thousands of people uploaded their photos to a website called ImageNet Roulette, which used AI software trained on ImageNet to analyze a person’s face and describe what it saw.

This seemingly amusing game churned out a plethora of responses from “nerd” to “nonsmoker.” However, when Tabong Kima, a 24-year-old African American man, uploaded his smiling photo, the software analyzed him as an “offender” and “wrongdoer,” according to the New York Times.

To add insult to injury, the software also labeled the man with him, another person of color, as a “spree killer.”

At first glance, some may write off this flawed social media trend as unimportant in the grand scheme of things, but that is far from the truth.

ImageNet is one of many data sets that have been extensively used by tech giants, start-ups and academic labs to train new forms of artificial intelligence. This means that any flaws in this one data set, such as the racist labeling in Kima’s case, have already spread far and wide throughout the digital realm.

As engineers increasingly drift toward producing AI software, with the goal of lifting tedious responsibilities from the shoulders of busy individuals, it is important to ensure that the systemic inequality that has historically permeated the world does not find its way into the powerful realm of software.

Software has progressively shaped each person’s ability to thrive in the world. Whether someone is applying for a credit card or a job, those applications are increasingly reviewed with less of a hands-on approach.

This past week, Apple came under fire for its credit card’s allegedly sexist algorithm.

The conversation first surfaced when David Heinemeier Hansson, a prominent software engineer, tweeted about the issues he and his wife were having with Apple’s credit card.

“The @AppleCard is such a f—— sexist program. My wife and I filed joint tax returns, live in a community-property state, and have been married for a long time. Yet Apple’s black box algorithm thinks I deserve 20x the credit limit she does. No appeals work,” Hansson tweeted.

Not long after, Apple co-founder Steve Wozniak weighed in on the issue, saying that he and his wife had experienced a similar problem with the Apple credit card.

To explain simply, a black box algorithm is a system whose inputs and outputs can be viewed by an observer, but whose internal workings cannot. Although the system produces an output, no one outside of it knows how that output was created.
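
To make that concrete, here is a minimal, hypothetical sketch in Python. The function, the weights and the numbers are invented for illustration only; the point is that an outside observer sees nothing but the application going in and the limit coming out.

```python
# Hypothetical sketch: to an applicant, a credit-scoring model is a black box.
# Only the input (the application) and the output (a limit) are ever visible;
# the weights and logic inside are invented here purely for illustration.

def black_box_credit_limit(application: dict) -> int:
    """Stand-in for a proprietary model whose internals are hidden in practice."""
    # A real system might weight hundreds of features in ways that no one
    # outside the company (and sometimes no one inside it) can fully explain.
    score = 0.4 * application["income"] - 0.2 * application["existing_debt"]
    return max(1000, int(score))

# An outside observer only ever sees this pairing of input and output:
print(black_box_credit_limit({"income": 90000, "existing_debt": 20000}))  # 32000
```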

In response to the deeply unsettling prospect of Apple limiting users in a way that hints at sexist black box practices, Sen. Ron Wyden (D-Oregon) tweeted, “The risks of unaccountable, black box algorithms shouldn’t be underestimated. As companies increasingly rely on algorithms to handle life-changing decisions and outcomes, federal regulators must do their part to stamp out discrimination before it’s written into code.”

Although Goldman Sachs, the financial company that handles credit limits for the Apple Card, has denied the use of black box algorithms, an even more important issue has surfaced from its response.

Even if black box algorithms are not in place, and a credit card application does not explicitly ask for the applicant’s gender, these refined machine-learning algorithms can still analyze the information they are fed and infer what gender they are evaluating. That inference can then be applied to determine credit limits.

For instance, the machines could learn that applicants who have credit cards open at a particular women’s clothing store are a bad financial risk. The system could then provide lower credit limits to those who carry these cards, which results in women receiving lower credit limits than men, according to Forbes.
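
A toy simulation makes that mechanism visible. Everything in the sketch below is invented, including the store-card rates, the penalty and the limits; it simply shows how a rule that never sees gender can still produce gendered outcomes through a correlated feature.

```python
# A toy simulation of proxy discrimination. All numbers here are invented:
# gender is never given to the pricing rule, but a correlated feature
# (a store card) carries the same signal into the outcome.
import random

random.seed(0)

def make_applicant():
    gender = random.choice(["F", "M"])
    # Illustrative assumption: the store card is far more common among women.
    has_store_card = random.random() < (0.8 if gender == "F" else 0.1)
    return {"gender": gender, "has_store_card": has_store_card}

def credit_limit(applicant):
    # The rule never looks at gender; it only penalizes the store card,
    # which it may have "learned" to treat as a sign of financial risk.
    base = 20000
    return base // 2 if applicant["has_store_card"] else base

applicants = [make_applicant() for _ in range(10_000)]
for group in ("F", "M"):
    members = [a for a in applicants if a["gender"] == group]
    average = sum(credit_limit(a) for a in members) / len(members)
    print(group, round(average))  # women end up with a much lower average limit
```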

Another instance of gender-biased algorithms came to light in 2018, after Amazon engineers created an AI engine with the sole purpose of vetting more than 100 resumes at a time to help choose the top candidates to hire.

The tech giant realized that the engine was not rating women applying for software engineering roles fairly, because the resume patterns the engine had been taught to replicate reflected the stark gender gap within the tech industry.

In a male-dominated industry, Amazon’s system taught itself that male applicants were preferred over women.

Even after the engineers reprogrammed the system to ignore explicitly gendered words, like “women’s,” the system still picked up on implicitly gendered words and used them to rate applicants.
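
A simplified, hypothetical sketch shows why that filtering falls short. The words and weights below are made up, but they illustrate how a scorer that learned its weights from a male-dominated resume history keeps penalizing terms that merely correlate with gender.

```python
# Hypothetical resume scorer. The terms and weights are invented, but they show
# why removing one explicitly gendered word does not neutralize a model that
# learned its weights from a skewed resume history.

LEARNED_WEIGHTS = {
    "executed": +0.6,    # verbs more common on the historical (mostly male) resumes
    "captured": +0.5,
    "women's": -0.8,     # explicitly gendered, later filtered out by engineers
    "softball": -0.4,    # implicitly gendered proxies that survive the filter
    "volunteer": -0.3,
}

EXPLICITLY_GENDERED = {"women's"}

def score(resume_text: str, remove_explicit: bool = True) -> float:
    words = resume_text.lower().split()
    total = 0.0
    for term, weight in LEARNED_WEIGHTS.items():
        if remove_explicit and term in EXPLICITLY_GENDERED:
            continue  # the "reprogramming" step: ignore the obvious terms
        total += weight * words.count(term)
    return total

resume = "captain of the women's softball team, volunteer tutor, executed projects"
print(score(resume))  # still negative: the implicit proxies carry the bias
```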

As these new systems are trained to “learn” from historical decisions made by humans, it must come as no surprise that the race and gender-based inequality that has plagued society for so long has now found a new home within the digital realm.

Discrimination is entangled in our private lives; that’s just the truth. The tango of privilege continues to strut across all facets of human existence, and as that experience dives deeper into the world of artificial intelligence, engineers must ensure that they are not deepening the discrimination that minority communities have historically faced.

“We’re all beginning to understand better that algorithms are only as good as the data that gets packed into them,” said Sen. Elizabeth Warren (D-Massachusetts) in an interview with Bloomberg News. “And if a lot of discriminatory data gets packed in, in other words, if that’s how the world works, and the algorithm is doing nothing but sucking out information about how the world works, then the discrimination is perpetuated.”

There’s no easy fix to this problem. How bias affects the livelihood of individuals, and how to combat it fairly, have long been questions for social scientists and philosophers. Extending that issue into technology, where concepts have to be defined in mathematical terms, illustrates the hard work that must be done to create a truly fair digital environment.
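
One common way researchers put fairness into mathematical terms is demographic parity, a check on whether two groups receive the favorable outcome at similar rates. The sketch below uses invented numbers; a real audit would be far more involved.

```python
# Demographic parity in miniature: do two groups get approved at similar rates?
# The decisions below are invented, grouped by a protected attribute only for
# the purpose of the audit itself.

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 1 = approved, 0 = denied; 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

gap = approval_rate(group_a) - approval_rate(group_b)
print(f"Demographic parity gap: {gap:.2f}")  # 0.38, a red flag worth investigating
```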

While fixing these computing errors will require an enormous amount of trial and error, it is the responsibility of software engineers to ensure that these new technologies do not cause more harm and discrimination toward people.
