From AI recruiting tools to industrial automation and robotic assistants, new digital technologies are transforming the modern workplace. Many of these systems promise to improve efficiency, productivity, and well-being — but how are they actually affecting the people who interact with them every day?
It’s a complicated question with no cut-and-dried answers. But a growing body of research has begun to explore the nuanced ways in which technology is influencing the workplace and the workforce, shedding light on both its many benefits and its substantial risks.
How Is AI Transforming Hiring?
One of the most significant ways technology has transformed the workplace comes before new candidates even get in the door. AI tools can help recruiters sift through resumes, review cover letters, and even conduct virtual interviews. But these tools can also introduce new complexities and biases into the hiring process.
AI hiring tools can influence who applies: In one study, researchers asked more than 500 U.S.-based adults to imagine applying to a job through a system that used AI. They found that candidates who were already excited about the prospective employer and felt positively about AI in general were more likely to complete an application. Candidates who were anxious or distrustful of AI, or who were less enthusiastic about the employer, were less likely to complete their applications if interaction with AI was required. This suggests that incorporating automated tools into the hiring process can shape different candidates’ experiences in different ways, influencing who ends up applying in potentially surprising ways.
Automated screening can perpetuate bias: While the potential for AI-based systems to perpetuate human biases is well-known, a new study found that even when explicitly gendered information (such as names or pronouns) is removed, today’s sophisticated machine learning models can still accurately determine a candidate’s gender. Furthermore, the study found that after controlling for job-relevant traits, candidates whose resumes did not line up with their gender — for example, a woman whose resume included traditionally masculine characteristics — were less likely to get called back for an interview.
People are less offended by algorithmic than human discrimination: Given the prevalence of AI-driven bias, will companies feel pressure to stop using these tools? At least one paper suggests they might not: Through a series of eight studies, researchers found that people tend to be far less outraged when they learn that an algorithm discriminates than when a human makes the same discriminatory decision, meaning they’re less likely to …