
IST researchers to examine bias in AI recruiting, hiring tools

NSF grant will support the study of potential discrimination in automated job screening and interview platforms

According to Lynette Yarger, associate professor of information sciences and technology, artificial intelligence software is being used in the recruiting and hiring process by a growing number of companies, but the tools raise concerns about bias and transparency. Credit: Adobe Stock. All Rights Reserved.

UNIVERSITY PARK, Pa. — Chances are that artificial intelligence played a role in your last job search — and possibly even determined whether or not you got an offer — without you even knowing.

According to Lynette Yarger, associate professor of information sciences and technology, nearly all Fortune 500 companies use some form of automation to support the hiring process. But are these tools biased?

Yarger and her research team will explore the answer through a new project funded by a $225,000 grant from the National Science Foundation.

“Companies are using AI assessment tools to attract, screen and hire employees in new ways,” she said. “These tools have not been thoroughly tested under the law and raise concerns about the potential for bias, fairness, transparency and accuracy.”

These tools include software programs that match online resumes with job openings and then identify, call and interview candidates; analyze job candidates’ expressions, voices and behavior captured on video during an online interview; and examine applicants’ social media presence to measure compatibility for a job.

“When algorithms use social media profiles as a proxy for measuring organizational fit and predicting the ability of an individual to perform the job, people from significantly different cultural backgrounds may be systemically disadvantaged,” said Yarger.

In their project, the research team will study candidates seeking jobs in information technology and the industry employers who make hiring decisions. For job seekers, the team will use hiring scenarios and AI software user studies to elicit candidates’ perceptions of fairness when a human makes employment decisions compared with when artificial intelligence does. The researchers will also interview employers to understand their motivations for using AI software tools and how the use of these tools has affected the candidate pool.

“When algorithms make inferences about applicants’ age, race, religion and sex, it is difficult to determine if firms are adhering to federal laws that protect job applicants against discrimination,” said Yarger. “I want companies, software designers and particularly job seekers who might be adversely affected by this software to understand these potential risks for bias.”

Yarger said that her research is motivated by the long-standing underrepresentation of women and African-, Hispanic- and Native Americans in the information technology industry.

“One of the most troubling aspects of this longstanding pattern of underrepresentation is that it has been concurrent with actual increases in minority worker protections, anti-discrimination law and interventions aimed at diversification in the field,” she continued. “AI software is being used by a growing number of companies, but we don’t have a good handle on how this software is helping or hindering efforts to diversify the field.”

“If bias exists in an AI hiring platform, the potential for widespread harm is far greater than the impact of a biased human hiring manager,” Yarger concluded.
