Could artificial intelligence (A.I.) and machine learning make ageism in tech—already a widely recognized problem—even worse?
That’s an issue that companies of all sizes will have to confront as A.I. and machine learning are integrated more tightly into the recruiting and hiring process. Imagine a scenario in which a firm’s A.I.-powered résumé-screening software begins excluding people who graduated from school before a certain date, or exhibits bias based on gender or race.
This scenario isn’t implausible. In late 2018, reports emerged that Amazon had experimented with integrating machine-learning techniques into a recruiting platform. But this supposedly “smart” tool, after analyzing a decade’s worth of résumés, began displaying a bias toward male candidates.
When Amazon’s engineers attempted to diagnose the issue, they found that, because the majority of the résumés belonged to male candidates, the platform’s underlying algorithm decided that males were preferable hires. Application materials from women were downgraded.
“Amazon edited the programs to make them neutral to these particular terms,” Reuters reported at the time. “But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory.” Eventually, Amazon decided to cancel the project rather than continue tweaking it.
It’s easy to see how a similar problem could occur in the context of candidates’ age. For example, if a company had a tendency to hire candidates who graduated from school (or landed their first job) by a certain date, it might introduce a bias toward younger candidates. The company’s software developers would need to actively monitor the system to ensure that something like that wasn’t happening.
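What would that monitoring look like in practice? One common heuristic is the “four-fifths rule” from U.S. employment-selection guidelines: if one group’s selection rate falls below 80 percent of another’s, the process deserves scrutiny. The sketch below is purely illustrative — the age cutoff, data, and function names are invented for this example, and nothing here reflects how any particular company’s screening software actually works:

```python
# Hedged sketch (invented data and names): auditing a résumé screener's
# logged decisions for age-related disparate impact via the four-fifths rule.

AGE_CUTOFF = 40  # assumption: compare candidates 40+ against under-40

def selection_rates(candidates):
    """candidates: list of (age, advanced) tuples from the screener's log.
    Returns (rate for under-cutoff group, rate for at-or-over-cutoff group)."""
    younger = [advanced for age, advanced in candidates if age < AGE_CUTOFF]
    older = [advanced for age, advanced in candidates if age >= AGE_CUTOFF]
    rate = lambda group: sum(group) / len(group) if group else 0.0
    return rate(younger), rate(older)

def flags_age_bias(candidates, threshold=0.8):
    """True if the lower group's selection rate is below `threshold`
    times the higher group's rate (the conventional four-fifths red flag)."""
    young_rate, old_rate = selection_rates(candidates)
    low, high = sorted((young_rate, old_rate))
    return high > 0 and low / high < threshold

# Example: 8 of 10 younger candidates advanced, but only 3 of 10 older ones.
sample = ([(30, True)] * 8 + [(30, False)] * 2 +
          [(50, True)] * 3 + [(50, False)] * 7)
print(flags_age_bias(sample))  # 0.3 / 0.8 = 0.375, below 0.8 → True
```

A check like this only catches bias after the fact, and only along dimensions someone thought to measure — which is exactly the human-oversight problem the Amazon episode illustrates.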
Despite those concerns, companies are still pushing hard to automate more parts of recruitment and hiring. “[A.I. allows us to] get rid of processes that don’t work well or are redundant. And we can give candidates a better experience by giving them real-time feedback throughout the process,” Eric Sydell, executive vice president of innovation at Shaker International, which develops A.I. algorithms, recently told USA Today.
But much of A.I.’s effectiveness still depends on a human element—as with the Amazon example, someone actually has to sit down and realize, “Hey, the system isn’t behaving as it should.” When the system is massive, though, and the software is making decisions totally obscured behind a dashboard, how can a human being identify that something’s going wrong before it blows up into a major incident? There’s the potential for serious legal trouble here.
“The basic premise on which this technology is based is that humans are flawed, computers can do things better,” Raymond Berti, an employment attorney at Akerman LLP, told Quartz in October 2018. “Obviously things aren’t that simple. We’re not at a point where employers can sit back and let computers do all the work.”
Meanwhile, ageism remains a very serious issue in tech. Earlier this year, a study by Hired found that ageism within tech tends to set in once technologists are around 40 years old; at that point, they begin to hit a ceiling in their earnings.
That pessimistic projection aligns with other studies. Last year, a survey of startup founders from First Round Capital had ageism in tech starting to kick in around age 36 (roughly the point at which Hired’s data shows salary expectations and actual offers beginning to diverge).
Then you have our 2018 Dice Diversity and Inclusion Survey, in which technologists told us that ageism is prevalent in the industry: 29 percent of respondents reported “experiencing or witnessing” age-based discrimination in the workplace, outpacing gender discrimination (21 percent), political-affiliation discrimination (11 percent), and bias based on sexual orientation (6 percent).
While tech firms deny any bias against older workers, they still end up embroiled in age-discrimination lawsuits filed by hundreds of applicants. For those older technologists concerned about their jobs, a willingness to keep learning—and a little flexibility—can sometimes help fend off unfair demotions or termination. But what can they do against an algorithm?