By Jeff Mills, Director, Solution Marketing at SAP SuccessFactors
It’s no longer a secret that getting past the robot résumé readers to a human – let alone landing an interview – can seem like trying to get in to see the Wizard of Oz. As the résumés of highly qualified applicants are rejected by the initial automated screening, job seekers suddenly find themselves learning résumé submission optimization to please the algorithms and beat the bots for a meeting with the Wizard.
Many enterprise businesses use Artificial Intelligence (AI) and machine learning tools to screen résumés when recruiting and hiring new employees. Even small and midsize companies that use recruiting services are subject to whatever algorithm-driven or search-driven automated résumé screening those services employ.
Why don’t human beings read résumés anymore? Well, they do, but usually later in the process – after the bots produce the initial shortlist. Unfortunately, desirable soft skills and unquantifiable experience can go unnoticed by even the best-trained algorithms. So far, the only solution is human interaction.
Despite the view from outside the organization, HR has good reason to use automated résumé screening. To efficiently manage the hundreds or even thousands of applications submitted for a single position, companies have adopted automated AI screening tools not only to save time and human effort but also to find qualified, desirable candidates before they move on or someone else gets to them first.
Nobody’s ever seen the Great Oz!
The wealth of impressive time-saving and turnover-reduction metrics equates to success and big ROI for organizations that automate recruiting and hiring processes. Meanwhile, the tales of headache and frustration from the many thousands of qualified applicants whose résumés somehow failed to tickle the algorithm just right go largely untold.
This trend is changing, however, as the bias built into AI and machine learning algorithms – unintentional or otherwise – becomes more glaringly apparent and undeniable. Sure, any new technology will have its early adopters, zealous promoters, and apologists, as well as its naysayers and skeptics. But when that technology shows promise to transform an industry and increase profit, criticism can be drowned out and ignored.
The problem of bias in AI is not a new concern. For several years, scientists and engineers have warned that because AI is created and developed by humans, the likelihood of bias finding its way into the program code is high if not certain. And the time to think about that and address it as much as possible is during the design, development, and testing process. Blind spots are inevitable. Once buy-in is achieved and business ecosystems integrate that technology, the recursive and reciprocal influences of technology, commerce, and society can make changing course slow and/or costly.
Consider the recent trouble Amazon found itself in when it was determined that its AI recruiting tool was biased against women. AI in itself is not biased; it performs only as it is instructed and adapts to new information. Rather, the bias comes from the way human beings program and develop how machines learn and execute commands. And if the outputs of the AI are taken at face value and never corrected by ongoing human interaction, the system never adapts.
Bias enters in a few ways. One source is rooted in the data sets used to train algorithms for screening candidates. Other sources of bias enter when certain criteria are privileged, such as growing up in a certain area, attending a top university, or falling within a preferred age range. By using the data of existing employees as a model for qualified candidates, the screening process can become a kind of feedback loop of biased criteria.
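To see how that feedback loop can arise, here is a deliberately simplified sketch. The data, field names, and scoring function are invented for illustration – no real screening product works this crudely – but the mechanism is the same: a model trained on current employees echoes their demographics back as “fit.”

```python
from collections import Counter

# Invented toy data: profiles of existing employees (the training set).
current_employees = [
    {"university": "State Tech", "years_exp": 5},
    {"university": "State Tech", "years_exp": 7},
    {"university": "State Tech", "years_exp": 4},
    {"university": "Other U", "years_exp": 6},
]

def fit_score(candidate, employees):
    """Naive 'culture fit' score: the fraction of current employees who
    share the candidate's university. This encodes the feedback loop
    directly -- whoever already dominates the workforce gets favored."""
    alma_maters = Counter(e["university"] for e in employees)
    return alma_maters[candidate["university"]] / len(employees)

a = {"university": "State Tech", "years_exp": 3}
b = {"university": "Other U", "years_exp": 10}

print(fit_score(a, current_employees))  # 0.75 -- favored despite less experience
print(fit_score(b, current_employees))  # 0.25
```

Note that nothing in the code mentions a protected attribute; the bias rides in entirely on the historical data.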
A few methods and practices can help correct or avoid this problem. One is to use broad swaths of data, including data from outside your company and even your industry. Also, train algorithms continually, incorporating new data and monitoring algorithm behavior and results. Set benchmarks for measuring data quality, and have humans screen résumés as well. Active management of automated recruiting and screening solutions can go a long way toward minimizing bias and reducing the number of qualified candidates whose résumés get rejected.
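One concrete way to set such benchmarks is the “four-fifths” (80%) rule used in US adverse-impact analysis: flag the screening model when any group’s pass rate falls below 80% of the highest group’s pass rate. A minimal monitoring sketch, with invented group names and numbers:

```python
def adverse_impact(outcomes):
    """outcomes: dict mapping group -> (passed_screening, total_screened).
    Returns each group's pass rate as a ratio of the best group's rate."""
    rates = {g: passed / total for g, (passed, total) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Invented screening results for two applicant groups.
screened = {"group_a": (30, 100), "group_b": (18, 100)}

ratios = adverse_impact(screened)
flagged = [g for g, r in ratios.items() if r < 0.8]  # four-fifths threshold
print(flagged)  # ['group_b'] -- 0.18 vs 0.30 is a 0.6 ratio, below 0.8
```

A check like this doesn’t remove bias by itself, but run regularly against live screening results, it tells the humans managing the system when to intervene.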
Bell out of order, please knock
As mentioned earlier, change takes time once these processes are in place and embedded. Until it is widely accepted that problems exist, and steps are taken to address them, the best job seekers can do is adapt.
With all of the possible ways that programmers’ biases influence the bots screening résumés, what can people applying for jobs do to improve their chances of getting past the AI gatekeepers?
The good news is that these moves will not only help eliminate false negatives and keep your résumé out of the abyss, but they are likely to make things easier for the human beings it reaches.
Well, why didn’t you say so? That’s a horse of a different color!
So, what are they looking for? How do you beat the bots?
- Tailor your résumé to each position
- Focus on ensuring skills, competencies, and major completed projects are all current. AI leverages this information at deeper levels than just keywords.
- Avoid company-specific titles. Use common terminology for jobs and responsibilities. Some companies have internal job titles that don’t translate clearly to other companies, which may use different names for the same experience.
- Pay attention to simple things like correct spelling and grammar. While AI is generally trained to take common misspellings into consideration, there can be misses, and misspellings and mangled grammar don’t look good to human eyes, either.
- Spell out acronyms and initialisms that aren’t the most popular or well-known. This goes for industry jargon as well.
- Just as with SEO, keyword stuffing will not be rewarded here either, but do include common alternative terms and synonyms, since AI may interpret each variant differently.
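To make the keyword advice above concrete, here is a toy screener that scores a résumé purely on word overlap with the job posting. Real screening tools are far more sophisticated, and the texts below are invented, but the principle is the same: a résumé tailored to the posting’s vocabulary scores higher than one written in company-specific jargon.

```python
import re

def keyword_overlap(resume_text, job_posting):
    """Fraction of the posting's distinct words that also appear
    in the resume -- a crude stand-in for ATS keyword matching."""
    tokenize = lambda s: set(re.findall(r"[a-z]+", s.lower()))
    resume, posting = tokenize(resume_text), tokenize(job_posting)
    return len(resume & posting) / len(posting)

posting = "Seeking a project manager with budgeting and stakeholder experience"

# Same background, two phrasings: internal jargon vs. the posting's terms.
generic = "Led initiatives as Delivery Ninja, owned spend and partner comms"
tailored = "Project manager experienced in budgeting and stakeholder reporting"

print(keyword_overlap(generic, posting))   # low overlap
print(keyword_overlap(tailored, posting))  # much higher overlap
```

The “Delivery Ninja” résumé describes the same work but shares almost no vocabulary with the posting, which is exactly why company-specific titles are risky.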
In the big picture, AI is still young, and we are working out the kinks and bugs – not only at a basic code and function level, but also on the human level. We are still learning how to navigate and account for our roles and responsibilities in the overall ecosystem of human-computer interaction.
The bottom line is that AI, machine learning, and automation can eliminate bias or reinforce it. That separation may never be pure, but it’s an ideal that is not only worth striving for, it is absolutely necessary to work toward. The impact and consequences of our choices today will leave long-lasting effects on every area of human life.
And the bright side is that we’re already beginning to see how those theoretical concerns can play out in the real world, and we have an opportunity to improve a life-changing technological development whose reach and impact we can still only dimly imagine. In the meantime, job seekers looking to beat the bots are not entirely powerless, but can do what human beings have done well for ages: adapt.
Interested in how to deliver a great candidate experience? Read our guide on how to Transform the Candidate Experience.