The movement toward skills-based hiring aims to improve outdated practices that often fail to predict success on the job and exclude some candidates due to unintentional bias. Advocates of AI-powered recruiting technologies claim they can help make hiring fairer and faster. On the surface, the potential for artificial intelligence to help fix what’s broken in hiring sounds promising. But can it really solve our decades-old hiring problems? Let’s dig into the research.

The Evolution of AI in Hiring

Early use of AI primarily included automating repetitive or rote tasks—productivity boosters for recruiters and hiring managers. Think chatbots sending automatic replies to candidates for scheduling or software that scans resumes for keywords and phrases to identify qualified candidates.
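To make concrete what that early screening software is doing, here is a minimal, purely illustrative sketch in Python. The keyword list and threshold are hypothetical, not any vendor’s actual logic.

```python
# A minimal sketch of keyword-based resume screening, purely
# illustrative: real applicant tracking systems are far more involved,
# and the keywords and threshold below are hypothetical.
REQUIRED_KEYWORDS = {"python", "sql", "stakeholder management"}

def keyword_score(resume_text: str) -> float:
    """Return the fraction of required keywords found in the resume."""
    text = resume_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits / len(REQUIRED_KEYWORDS)

def passes_screen(resume_text: str, threshold: float = 0.67) -> bool:
    """Flag a resume as 'qualified' if enough keywords appear."""
    return keyword_score(resume_text) >= threshold

# The fragility is easy to see: a candidate who writes "Postgres"
# instead of "SQL" scores lower, even though the skill is the same.
print(passes_screen("Built SQL pipelines in Python for stakeholder management"))  # True
```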

Today, AI is doing much more. According to a 2022 SHRM survey, “Nearly 1 in 4 organizations use automation and/or AI to support HR-related activities,” including (in order of prevalence) communicating with candidates, screening resumes, automating candidate searches, creating job descriptions and targeting postings to certain groups, selecting candidates for interviews, administering and scoring skills tests, and administering automated interviews.

AI is moving into human decision-making territory. And that’s where things get complicated.

You Can’t Bypass the Bias Problem

Depending on where you find your news, AI has the potential to either reduce or amplify bias in hiring. But it’s more of a “both/and” situation.

A McKinsey article outlined this duality well. The authors assert that because machine learning algorithms only consider the variables that improve predictive accuracy, AI can reduce bias by taking subjective human perspectives out of evaluation. At the same time, all machine learning is trained on existing data. Data that is created by humans. And therein lies the potential for bias … amplified by AI at scale.

Prasanna Tambe, Peter Cappelli, and Valery Yakubovich note in a study on AI and HR management, “Given the uncertain quality of performance evaluations by humans, can we use them for training AI algorithms? Doing so might well mean scaling up arbitrary or outright discriminatory human decisions.” The authors also point out that AI needs to learn from large data sets to be effective. A small organization with a small data set will get less value from machine learning. AI vendors offer comparative data sets from companies of a similar size or industry, but the impact of using data that doesn’t reflect your organization’s history is unclear.

In addition to bias from data that is incomplete or reflects historical inequities, algorithms may select candidates based on information unrelated to skills or the ability to succeed on the job, like their name or the file type they used to submit a resume. An article from the Harvard Business Review on the ethical implications of AI offers the example that “Facebook ‘likes’ can be used to infer sexual orientation and race with considerable accuracy. Political affiliation and religious beliefs are just as easily identifiable.”

An algorithm could make biased job performance predictions based on these types of proxy variables. And feedback loops can exacerbate the bias further: candidates the system screens out never generate performance data, so the model has no way to learn that its predictions were wrong, and the error reinforces itself over time.
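To make the proxy-variable mechanism concrete, here is a toy sketch built on entirely synthetic data (the group, proxy, and skill variables are invented for illustration). Even when the protected attribute is withheld from the model, a correlated proxy lets the historical bias back in:

```python
# A toy sketch of proxy-variable bias using hypothetical data.
# The protected attribute is excluded from the features, but a
# correlated proxy (think zip code or alma mater) leaks it back in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)             # protected attribute (hidden from model)
proxy = group + rng.normal(0, 0.3, n)     # proxy feature correlated with group
skill = rng.normal(0, 1, n)               # what we actually want to measure

# Historical labels reflect biased human decisions, not pure skill:
# group 1 was hired less often at the same skill level.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

# At identical skill, predicted hire probability still differs by group,
# because the model learned the historical bias through the proxy.
for g in (0, 1):
    p = model.predict_proba([[0.0, float(g)]])[0, 1]
    print(f"group {g}: predicted hire probability at average skill = {p:.2f}")
```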

For Now, AI Is Risky Business

The rapid rise of AI is exciting. As this technology improves, it has the potential to accelerate some much-needed changes in hiring. But as with many technologies, we become enamored with the shiny newness of it all and think about safeguards and repercussions later. The reality is that organizations using AI to screen and evaluate candidates now are exposed to increased legal liability.

New York City is the first city to enact legislation restricting AI hiring tools, requiring organizations to conduct bias audits and to notify employees and candidates when AI is used to evaluate them.

The Federal Trade Commission Chair and officials from the Department of Justice, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission released a joint statement in April stating, “Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”

AI Will Need To Become an HR Job Competency

As regulations expand and AI technologies evolve, HR professionals will have to acquire the knowledge and skills to manage and monitor AI systems and keep them in compliance.

They will also need to develop an understanding of how the quality and characteristics of training data affect outcomes. Using AI for hiring will not be a set-it-and-forget-it model.

In a Q&A on AI and intelligent tools, John Sumser advised, “The health of the underlying data is important to monitor. Imagine, for example, that one of your intelligent tools estimates the risk that a person will leave their job. The models are built on historical data from an extraordinarily good economic climate with serious talent shortages. That data won’t be inherently useful in a downturn with high unemployment.”
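What might that monitoring look like in practice? A minimal sketch, assuming a hypothetical unemployment-rate feature and an illustrative drift threshold: compare the distribution the model was trained on against what it sees today.

```python
# A minimal sketch of the data-health check Sumser describes:
# compare the distribution a model was trained on against current
# inputs. The feature, numbers, and threshold are all hypothetical.
import numpy as np
from scipy.stats import ks_2samp

# Training era: strong economy, low local unemployment rates.
train_unemployment = np.random.default_rng(1).normal(3.5, 0.5, 10_000)
# Today: a downturn shifts the same feature well outside the training range.
current_unemployment = np.random.default_rng(2).normal(8.0, 1.0, 1_000)

result = ks_2samp(train_unemployment, current_unemployment)
if result.statistic > 0.1:  # drift threshold: a judgment call per feature
    print(f"Drift detected (KS statistic {result.statistic:.2f}); "
          "recalibrate before trusting attrition-risk scores.")
```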

Using the Right Tool for the Right Job

Full disclosure: we use AI tools (sort of).

Our Results-Based Hiring® Process is designed to systematically reduce bias in the selection process while accurately predicting success on the job. Because our team has honed its judgment through hundreds of competency-driven searches, we don’t see AI adding value to the candidate screening process.

But we do use LinkedIn Recruiter and hireEZ as part of our sourcing process to widen the candidate pool as much as possible. For a single search, the AI usually identifies fewer than ten candidates who match our vetting criteria. We typically reach out to 150 to 200 candidates, so the AI surfaces only a small fraction of the pool. And it isn’t very good at learning which data to look for. For example, it may scan a job description and prioritize keywords that don’t represent the most relevant job competencies.

We have found that AI produces better sourcing results when the tool allows us to enter more specific search criteria—based on the job competencies identified by our team and the client. Better data in, better data out.
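As a hypothetical illustration of that principle (the competencies and weights are invented, and this is not how LinkedIn Recruiter or hireEZ work internally), contrast naive keyword extraction from a job description with explicit, weighted competency criteria:

```python
# Hypothetical contrast between naive JD keyword extraction and
# explicit competency criteria; not any vendor's actual algorithm.
from collections import Counter

def naive_keywords(job_description: str, top_n: int = 5) -> list[str]:
    """Most frequent longer words in the JD, which often surface
    boilerplate ('organization', 'candidates') over real competencies."""
    words = [w.strip(".,;").lower() for w in job_description.split() if len(w) > 5]
    return [w for w, _ in Counter(words).most_common(top_n)]

def competency_score(profile_text: str, competencies: dict[str, float]) -> float:
    """Score a profile against team-defined competencies, weighted by
    importance, i.e. the 'more specific search criteria' in practice."""
    text = profile_text.lower()
    return sum(w for term, w in competencies.items() if term in text)

# Invented competencies for an example development-director search:
criteria = {"major gifts": 3.0, "capital campaign": 2.0, "donor stewardship": 1.5}
print(competency_score("Led a capital campaign; built major gifts program", criteria))  # 5.0
```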

We also sometimes use ChatGPT to craft elements of position overviews and blog posts. It is a great thesaurus and can be helpful as a quick brainstorming tool to jumpstart more in-depth research.

Let’s Improve Our Systems First

AI may look like an attractive shortcut to addressing some of the bigger challenges in hiring, but the research doesn’t support that. If bias in human judgment is the problem we are trying to solve, machine learning built from imperfect and limited human data won’t solve it (at least not yet). Let’s improve our human systems first.

AI is most useful in adding to what is already working. We recommend moving to a skills-first approach before embedding AI-driven technologies into your candidate evaluation and screening processes. It’s important to understand which biases may be present, what strategies work to reduce them, and how to identify and evaluate job competencies at all stages of hiring.

If you look to AI to take over a broken process, you risk perpetuating the same failing practices we are trying to change.


Before applying a skills-based approach to hiring, it’s helpful to understand the underlying principles of why it works (and why it’s needed).