New York City has adopted new regulations for when and how businesses can use artificial intelligence (A.I.) in hiring and promoting workers.
The regulations prohibit employers and recruiting agencies from using “an automated employment decision tool unless the tool has been subject to a bias audit within one year of the use of the tool, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates.” Companies violating this rule will be subject to fines.
However, critics argue that the definition of “automated employment decision tool” isn’t precise enough, and that employers can still use biased tools so long as a human makes the ultimate decision about a hire. “What could have been a landmark law was watered down to lose effectiveness,” Alexandra Givens, president of the Center for Democracy & Technology, told The New York Times.
Companies have been trying to automate the hiring process for years, and potential bias has been a problem from the start. In 2018, for example, it was reported that Amazon had experimented with integrating machine-learning techniques into a recruiting platform, using a decade’s worth of résumés as a training dataset. The platform quickly displayed a bias against female candidates: because the majority of those résumés came from men, the underlying algorithm concluded that male candidates were preferable hires. Amazon eventually killed the program.
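To see how that kind of bias creeps in, here’s a minimal, hypothetical sketch (synthetic data and scikit-learn, not Amazon’s actual system or dataset): a model trained on historically skewed hiring decisions learns to penalize a résumé feature that merely correlates with gender, even when gender itself is never fed to the model.

```python
# Hypothetical illustration, NOT Amazon's actual system: a classifier trained
# on historically biased hiring outcomes absorbs that bias via proxy features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic résumés: one "proxy" feature that correlates with gender
# (say, membership in a women's professional organization) plus a
# genuine skill score. Gender itself is not a model input.
is_female = rng.random(n) < 0.2                                   # pool is 80% male
proxy = (rng.random(n) < np.where(is_female, 0.9, 0.05)).astype(float)
skill = rng.normal(0, 1, n)

# Past hiring decisions favored men regardless of skill.
hired = (skill + np.where(is_female, -1.0, 0.5) + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([proxy, skill])
model = LogisticRegression().fit(X, hired)

# The proxy feature gets a strongly negative weight: the model has
# learned the historical bias, not anything about candidate quality.
print("coefficients [proxy, skill]:", model.coef_[0])
```

Run it and the proxy coefficient comes out sharply negative: the model downgrades anyone whose résumé carries the gender-correlated feature, which is essentially what Amazon’s tool was reported to have done with terms like “women’s.”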
But that didn’t stop the urge to integrate A.I. into hiring; according to a 2022 report from the Society for Human Resource Management (SHRM), roughly one in four organizations use some kind of automation or A.I. to support HR activities. The genie is out of the bottle on this one, and the big question is whether cities, states, and the federal government will eventually put regulations in place to ensure that A.I. isn’t harmful to folks’ chances of landing a job. New York City is enacting its version of those laws; what will other municipalities do?