
Ghosts in the Machine


With a gigantic bottleneck in tech talent, companies are turning to tech to speed up the recruitment process. But baked into hiring startups’ algorithms are biases and prejudices as old as time – and it’s unclear whether more AI is making things better or worse.

Of the many revelations to come from The Inventor, HBO’s blockbuster film about the rise and fall of medtech startup Theranos, one of the more personal is how founder Elizabeth Holmes artificially lowered her voice to a baritone, occasionally slipping out of character and giving the game away.

While many found the detail funny, others understood exactly why Holmes did it: she was mimicking a man. And despite what tech’s luminaries say, their world is still a man’s – and a white man’s, at that.

Tech has a huge diversity problem. A recent Pew poll found that women comprise just a quarter of all so-called “computer” roles. Since 1990, female representation in the field has actually declined. The industry’s biggest players, frequent virtue signalers, are among its worst offenders: in 2017 Google’s technical workforce was 80% male, 3% Latino and just 1% black. According to a 2016 Center for Investigative Reporting report, ten large Silicon Valley companies employed no black women. Three had no black employees at all.

Tech has become such an uneven playing field that Britain’s Royal Statistical Society recently named it among professions that are currently “virtual economic ghettos”. 

That persistent bias coincides with an acute shortage of tech talent. The US government projects that by next year there will be over a million domestic computer science jobs without sufficiently qualified people to fill them. Eighty-nine percent of UK employers expect a shortage of talent in 2020, and counterparts across most western European nations anticipate similar shortfalls.

With applications continuing to pour into startup HR departments, finding the right candidate for a role is like finding a shrinking needle in a growing haystack. Little wonder, then, that more and more companies are turning to artificial intelligence to streamline and optimize their hiring process.

On its face this would seem a good thing. Humans are inherently biased, and the recruitment process is long and painstaking. But companies are finding that algorithms often carry the same prejudices, or worse.

In 2015 Google’s facial recognition software tagged two black Americans as gorillas. A year later, journalists at ProPublica found that a widely used criminal risk-scoring algorithm deemed black defendants far likelier to commit additional crimes than their white counterparts. Black and Latino loan applicants are often rejected by algorithms in favor of whites. Joy Buolamwini, founder of the Algorithmic Justice League, found that facial recognition software could not detect her face until she donned a white mask. And in 2018 it emerged that Amazon had scrapped an AI hiring system after finding that it favored men over women.

This has nothing to do with a skills gap. One recent study showed that when female engineers hid their gender, their code was accepted 4% more often than that of their male counterparts.

“AI isn’t being used just to make decisions about which products we want to buy, or which show we want to binge-watch next,” AI technologist Kriti Sharma recently told a TED audience. “We’ve reinforced our own bias into the AI.” 

It gets worse: in the latter half of the 20th century most countries moved towards laws making the workplace more equal. With the advent of machines, many of those rules have been rolled back – albeit often unintentionally. “Somehow AI has become above the law, because a machine made the decision,” adds Sharma.

The reasons are manifold. Technologists point to the old computing adage, “garbage in, garbage out”: when AI engineers plug bad data, or datasets that are too selective or too small, into an algorithm, they are likely to see bad results that simply replicate the status quo. More Johns than Juans in senior positions? The AI will pick out Johns for the next role. More white men than black women in engineering roles? The same applies.
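That failure mode is easy to demonstrate. The sketch below trains a toy scikit-learn classifier on synthetic hiring data in which past recruiters favored one group; every name and number is invented for illustration, and no real hiring system is implied:

    # Illustrative only: a toy model trained on biased historical hires
    # learns to penalize one group, even though group membership says
    # nothing about ability.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    skill = rng.normal(0, 1, n)       # true ability, identical across groups
    group = rng.integers(0, 2, n)     # 0 = majority, 1 = minority

    # Historical decisions: skill mattered, but recruiters also
    # systematically favored the majority group.
    hired = (skill + (group == 0) + rng.normal(0, 0.5, n)) > 0.8

    model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

    # Two candidates with identical skill, different groups:
    print(model.predict_proba(np.array([[1.0, 0], [1.0, 1]]))[:, 1])
    # The minority candidate gets a markedly lower "hire" score:
    # garbage in, garbage out.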

Another common problem is wrapping an AI solution built for one hiring problem around another – the same shortcut for which law enforcement has come under fire for embedding racism in predictive policing tools. As more AI recruitment startups enter the market, there is a business incentive to move into new verticals quickly. That creates issues.

“You have to start with a really small subset and then expand: you can’t try to do everything at once,” says Jes Osrow, director of people and culture at TodayTix. Osrow, who is also co-founder of workplace culture organization The Rise Journey, views the parameter-based, impersonal way AI hiring technology currently works as inherently problematic. Women are far less likely than men to overstate their qualifications, for example. Some AI has even been found to favor words used more commonly by men than women when filtering applicants.
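That last failure is straightforward to reproduce. Below is a sketch of how a naive keyword scorer inherits such a skew; the words and weights are invented for illustration, not drawn from any real screening product:

    # Illustrative only: a resume scorer whose word weights were
    # "learned" from past hires. If past hires skewed male, words more
    # common in men's resumes end up with higher weights than equally
    # meaningful alternatives.
    LEARNED_WEIGHTS = {   # hypothetical weights from a biased history
        "executed": 1.4, "captured": 1.2, "led": 1.1,
        "collaborated": 0.6, "supported": 0.5, "organized": 0.6,
    }

    def score(resume_text: str) -> float:
        return sum(LEARNED_WEIGHTS.get(w, 0.0)
                   for w in resume_text.lower().split())

    a = "executed the roadmap and captured a new market segment"
    b = "collaborated on the roadmap and supported a new market launch"
    print(score(a), score(b))   # same substance, very different scores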

“Something that’s been frustrating for me with my own team is figuring out the right balance of technical interview, versus gut instinct, versus previous work,” says Osrow. “If you’re trained to believe in the technical interviews, you’re trained to believe in the technical interview. It’s harder to see those other pieces of the puzzle as just as important. For me, it’s more about who they are and what they’ve done.”

One answer is to tailor data to remove its inherent biases. But there is a basic economic disconnect: investors want a return on their cash, founders want proof of concept, and engineers want an effective product. Removing bias from datasets requires a step back, and requires engineers to doctor their data – things few are willing to do.
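What that doctoring can look like in practice: one well-known approach from the fairness literature is “reweighing” (Kamiran and Calders), which weights training examples so that group membership and the hired/rejected label become statistically independent. A simplified sketch on synthetic data – nothing here describes any particular startup’s pipeline:

    # Illustrative only: reweigh a biased training set so a model
    # fitted with these sample weights no longer sees a correlation
    # between group membership and the historical hiring label.
    import numpy as np

    def reweigh(group, label):
        """Per-example weight = P(group) * P(label) / P(group, label)."""
        group, label = np.asarray(group), np.asarray(label)
        weights = np.empty(len(label))
        for g in np.unique(group):
            for y in np.unique(label):
                mask = (group == g) & (label == y)
                expected = (group == g).mean() * (label == y).mean()
                weights[mask] = expected / mask.mean()
        return weights   # pass as sample_weight when fitting a model

    # Majority group hired far more often in the historical data:
    group = np.array([0] * 80 + [1] * 20)
    label = np.array([1] * 60 + [0] * 20 + [1] * 5 + [0] * 15)
    w = reweigh(group, label)
    # Minority hires now get weights above 1, majority hires below 1,
    # balancing the signal the model learns from.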

Mo Moubarak is head of business development at MoBerries, a Berlin-based hiring startup that aims to save jobseekers and employers time finding the right match. A Canadian of Palestinian descent, Moubarak grew tired of listening to tech leaders parrot liberal values while creating solutions that do the exact opposite in practice. “I go to these events for women in business,” he says. “But are we only talking about women from certain neighborhoods and kinds of cultural backgrounds? What about these African ladies? What about these Asian women… It’s like, how can we level that playing field?”

“Everyone wants to have smoke blown up their ass,” Moubarak adds. “And they just want to hear what they want to hear, about how everyone’s going in the right direction. But actually it takes a lot of work, and a lot of deep technical knowledge, to get that to happen. 

“Let’s say I’m a sniper, and I use a rifle,” he adds. “And I want to kill a lot of quail. But now I want to kill deer, but I keep getting it wrong. I keep bringing you quail. You need to do the exact same thing in AI. As much as it’s about the sniper rifles, who’s operating that gun? The problem is a cultural problem.”

Moubarak has been involved in projects in his native Toronto aimed at helping refugees and migrants get a foothold in the local jobs market. In Germany, where more than a million migrants have arrived since Chancellor Angela Merkel launched her open-doors migration policy in 2015, several such programs have emerged. One is CodeDoor, an NGO that trains new arrivals in the skills needed to join Berlin’s “Silicon Allee” and other regional tech hubs.

Bunch.ai, a fellow Berlin AI startup, thinks it has successfully modified its own algorithmic weapon by adding more complexity. “Unfortunately, we can’t eliminate all kinds of potential bias, but we’ve taken steps to quantify bias so we can spot it and correct for it,” says co-founder and CEO Darja Gutnick.
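Gutnick doesn’t spell out Bunch’s internal metrics, but one standard way to quantify this kind of bias is the disparate impact ratio behind the US “four-fifths rule”. A minimal sketch, with hypothetical screening numbers:

    # Illustrative only: the disparate impact ratio compares selection
    # rates between groups; values below 0.8 are a common red flag
    # (the US "four-fifths rule").
    def disparate_impact(sel_minority, total_minority,
                         sel_majority, total_majority):
        return (sel_minority / total_minority) / (sel_majority / total_majority)

    ratio = disparate_impact(5, 50, 30, 150)   # hypothetical screen results
    print(f"{ratio:.2f}")   # 0.50 - well under 0.8, so the screen needs review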

Bunch’s core tech predicts characteristics of somebody’s working profile from data such as their LinkedIn profile, which is far more idiosyncratic than a resumé. Its algorithms are also based on self-assessments, rather than peer or recruiter assessments, to reduce social bias and stereotyping.

“I do believe that the biggest challenge is indeed ensuring a fair and equality-driven process, in a world where with humans that’s not possible,” adds Gutnick. “The datasets we have are full of discrimination, and so misguiding a learning model is definitely a huge challenge for machine learning-based hiring platforms.”

Gutnick and Moubarak agree that current AI technology cannot entirely eliminate societal bias from its models. The problem runs far deeper than AI, or even the tech industry. Last year’s Fortune 500 list featured just 24 female CEOs, down from 32 the previous year. Sexual harassment and discrimination are rife at companies like Tesla, Uber and Amazon – but they are prevalent in almost every other industry, too.

Education often dissuades women and people of color from STEM (science, technology, engineering and mathematics) professions, which creates a negative feedback loop: few role models, and far fewer women and PoC at the top of tech hierarchies. Venture capital firms are even more biased towards white men – all of which goes some way to explaining why, shockingly, just 2% of all US VC funding in 2017 went to female founders.

“The gaps are in education, the people who need to be building (education in tech), and diverse groups of people building it,” says Osrow. “We’re getting more women engineers…but they’re not at the level we need them to be and not being mentored and educated in the way that we need them to be.”

“The pain is universities, educational institutions,” adds Moubarak, “because for so long they’ve been doing such a poor job of educating people about the workplace.”

AI is making our lives easier, quicker and more entertaining, picking the right shows to watch, the right apps to download and the right things to buy. But when it comes to jobs, algorithms still have a long way to go until they stop reinforcing the prejudiced status quo–especially in tech–and begin to redress imbalances that have plagued society for millennia. 


