Artificial Intelligence in Hiring is Subject to Bias and Discrimination


In 1963, Martin Luther King Jr. delivered his “I Have a Dream” speech, words that reflected the thoughts and attitudes of civil rights activists at the time and lit a torch that lives on in the hearts and minds of those who fight for civil liberties and equality in the Western Hemisphere.

While the world has advanced since Dr. King spoke those words, it’s hard to deny that discrimination still rears its ugly head in modern society, and not just against people of color but against minorities from all walks of life.

Racial discrimination in the workplace is illegal across most of the United States and Europe. And yet, in the US alone, statistics show that hiring outcomes for Black and Hispanic candidates have barely improved over the last 25 years.

Companies worried about diversity and unbiased hiring practices are turning to the kind of AI-based solutions that an offshore software development company can provide, for example https://www.bairesdev.com/expertise/nearshore-software-development/.

But why offshore? In theory, AI-assisted hiring is built on an underlying model that makes unbiased decisions as long as the data itself isn’t biased.

In practice, the underlying model is made by humans, and their unconscious biases can make their way into the AI, creating a machine that reflects attitudes programmers themselves may not have been aware of.

Take, for example, Amazon’s sexist AI hiring tool, which downgraded resumes containing the word “women’s” and favored candidates who used verbs more typical of male engineers’ resumes, such as “executed” or “captured”.

One of the engineers who spoke to Reuters about it said the bias likely came from the more than 10,000 resumes used to train the hiring assistant, which came predominantly from men. The worrying part? It took the engineers almost a year to figure out that something was off.

 

Bias is unconscious

Amazon’s sexist AI is a perfect example of how biases can creep into the development of tools that aim to be unbiased. The data itself was heavily skewed toward male candidates, which is unsurprising, since the tech industry’s “brogrammer” culture keeps many talented women away from the field.

The unfortunate effect is that the AI was biased from the get-go. Every software developer worth their salt knows that an AI is only as good as the data used to train it. Microsoft’s Tay chatbot showed as much: all it took to turn a harmless AI chatbot into a full-blown Nazi was 24 hours of social media trolling.
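
To make that concrete, here is a deliberately tiny, hypothetical sketch (this is not Amazon’s system, and every resume snippet and label below is made up): a plain text classifier trained on historically biased hiring decisions ends up assigning a negative weight to a gendered token, simply because that is the pattern in the labels it was fed.

```python
# Toy illustration, not a real screening tool: a classifier trained on
# biased historical hiring decisions learns to penalize a gendered token.
# All resumes, labels, and tokens here are fabricated for the example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed migration projects, captained debate team",
    "captured requirements, led backend rewrite",
    "women's chess club captain, built data pipelines",
    "women's coding society organizer, shipped ML features",
    "executed performance tuning, managed releases",
    "captured market data, automated reporting",
]
# 1 = historically hired, 0 = historically rejected (biased labels).
hired = [1, 1, 0, 0, 1, 1]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect what the model actually learned about individual tokens.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for token in ("women", "executed", "captured"):
    print(token, round(weights[token], 3))
# "women" ends up with a negative weight, "executed" and "captured" with
# positive ones: the model faithfully reproduces the bias in its labels.
```

The model isn’t malicious; it is simply optimizing against labels that encode past discrimination, which is exactly the trap the Amazon team fell into.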

On the other hand, it’s rather telling that no one realized what was happening for almost a year. This doesn’t mean the engineers were consciously racist or sexist; it means they didn’t notice anything wrong because the output matched what they were used to seeing.

And herein lies the problem: it’s very difficult to be aware of one’s own biases, especially in an environment that shares them. In fact, people often hold unconscious biases that run counter to their own values and conscious beliefs.

One solution is to actively seek out specialists and run diversity training, helping software developers become aware of the implications of what they say and do. Another is to work with multicultural teams.

Diversity hiring and offshore outsourcing services aren’t just a form of tokenism. They introduce new perspectives into the design process, perspectives that can catch biases that would otherwise go unchecked by a team drawn from a single culture.

 

AI hiring is still on the cards

Amazon’s flop is just one fish in a sea of opportunities. HR experts are confident that nearly everyone will be using some kind of AI to assist with recruiting within the next five years. That said, they also agree that, for now, fully automated hiring is out of the question: what human recruiters bring to the table can’t be replaced by an algorithm just yet.

How can AI help, then?

  1. Gathering resumes from potential candidates faster and more efficiently.
  2. Organizing and prioritizing resume databases, and flagging previously discarded candidates who were in fact eligible (see the sketch after this list).
  3. Analyzing ad engagement and helping recruiters run better, more efficient hiring campaigns.
  4. Evaluating whether recruiters are making the right choices by analyzing the performance of new hires.
  5. Predicting when the company will need new recruits by analyzing market trends.
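
As a rough sketch of point 2 above (the candidate records, skill list, and matching rule are all hypothetical; a real system would use far richer signals), re-surfacing previously discarded candidates who match a new role could start as a simple skill match over the resume database:

```python
# Hypothetical sketch of point 2: flag previously discarded candidates
# whose resumes still match a new role's requirements.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    resume_text: str
    discarded: bool  # rejected in an earlier hiring round

def matching_skills(resume_text: str, required_skills: set) -> set:
    words = set(resume_text.lower().split())
    return {skill for skill in required_skills if skill in words}

def flag_eligible_discards(candidates, required_skills, min_matches=2):
    """Return discarded candidates who meet the (made-up) eligibility bar."""
    flagged = []
    for c in candidates:
        hits = matching_skills(c.resume_text, required_skills)
        if c.discarded and len(hits) >= min_matches:
            flagged.append((c.name, sorted(hits)))
    return flagged

# Example run with fabricated data.
pool = [
    Candidate("A. Gomez", "python sql airflow dashboards", discarded=True),
    Candidate("B. Chen", "java spring kubernetes", discarded=True),
    Candidate("C. Okoro", "python pandas sql", discarded=False),
]
print(flag_eligible_discards(pool, {"python", "sql", "airflow"}))
# -> [('A. Gomez', ['airflow', 'python', 'sql'])]
```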

And yes, when done correctly, AI can be a powerful tool to help recruiters deal with unconscious bias. For example, a recruiter who makes a choice that differs from the AI’s prediction may realize they favored someone because of a halo effect, not because that person was the best candidate for the job.
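
One way that check could look in practice (a sketch with made-up candidates, scores, and thresholds, not a production audit tool): log each decision next to the model’s score and flag the cases where recruiter and model disagree strongly, so a human can revisit the reasoning.

```python
# Hypothetical bias-check sketch: surface decisions where the recruiter
# and the model strongly disagree, so someone can review why.
def flag_disagreements(decisions, threshold=0.35):
    """decisions: (candidate, recruiter_hired, model_score in [0, 1]) tuples."""
    flagged = []
    for candidate, hired, score in decisions:
        if hired and score < threshold:
            flagged.append((candidate, "hired despite low model score", score))
        elif not hired and score > 1 - threshold:
            flagged.append((candidate, "rejected despite high model score", score))
    return flagged

decisions = [
    ("Candidate 17", True, 0.22),   # possible halo effect?
    ("Candidate 23", False, 0.81),  # possibly overlooked?
    ("Candidate 31", True, 0.74),   # recruiter and model agree
]
for candidate, reason, score in flag_disagreements(decisions):
    print(f"{candidate}: {reason} ({score:.2f})")
```

Neither the recruiter nor the model gets the final say; the disagreement itself is the signal that a second look is worthwhile.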

In the end, even if the potential of AI is huge, the technology is still in its infancy, and behind every piece of software there is a human being who made it.
