
NYC’s Anti-Bias AI Hiring Law Has Largely Failed So Far


— May 15, 2024

Although other jurisdictions have shown interest in passing laws in a similar vein to LL144, lawmakers have been hesitant to move forward with them after seeing that NYC’s efforts in this area have largely flopped.


Last year, New York City became the first U.S. jurisdiction to regulate automated and artificial intelligence (AI) tools in the hiring process. So far, however, employers have paid this law little, if any, attention. The law (Local Law 144) is designed to prevent potential race and gender bias in AI hiring algorithms by requiring employers to regularly audit these tools. However, because employers have the freedom to decide whether or not their systems are covered by the law, and because enforcement has been lax overall, the law has largely gone ignored. Yet despite the failure of LL144, employers should still anticipate new and more comprehensive regulation of AI in employment coming into play in the near future.

Transparency surrounding employer use of AEDTs 

Local Law 144 requires employers that use automated employment decision tools (AEDTs) in the hiring process — which include résumé scanners that search for keywords and chatbots that interview candidates — to conduct regular audits to check for gender and race bias, and then post the findings publicly on their website. A mere 18 out of 391 NYC employers have posted audit reports to their websites, new research from Cornell University reveals. And although the law also says employers should let employees and applicants know about their AEDT usage, the Cornell study found that only 13 employers have so far done so.

Too vague in scope 

LL144 “grants near total discretion for employers to decide if their system is within the scope of the law,” explains Jacob Metcalf, co-author of the Cornell study. “And there are multiple ways for employers to escape that scope.” Most AEDTs were covered in early drafts of the law, but the final version was narrowed to include only tools used without any human involvement. Employers are saying that humans ultimately oversee the hiring process even if AI is used in some capacity, which means they’re not required to conduct audits. However, “there are many automated tools being used at the early stages of the hiring process to screen applications and reject candidates, and those deserve scrutiny,” AI expert Hilke Schellmann tells SHRM Today.

Employers are also not likely to want to publish their audit results, which further deters them from complying with the law. “Good bias audits are nuanced and can require deeper analyses; publishing raw impact ratios are not really helpful and can be misinterpreted by the public,” AI expert Guru Sethupathy tells SHRM. “So, you have a very narrow scope, a strange request to publish bias audit results and very small fines. It’s a perfect combination to elicit noncompliance.”
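To make those “raw impact ratios” concrete: under the city’s rules, a bias audit compares each demographic category’s selection rate against that of the most-selected category. The sketch below is a minimal illustration with made-up numbers; the function and category names are hypothetical, not taken from the law or the Cornell study.

```python
# Illustrative sketch of the "impact ratio" that LL144 bias audits report.
# Under the NYC rules, a category's impact ratio is its selection rate
# divided by the selection rate of the most-selected category.
# All numbers and category labels below are made up for demonstration.

def impact_ratios(selected: dict, applied: dict) -> dict:
    """Map each category to (its selection rate / highest selection rate)."""
    rates = {cat: selected[cat] / applied[cat] for cat in applied}
    highest = max(rates.values())
    return {cat: rate / highest for cat, rate in rates.items()}

# Hypothetical screening outcomes from an AEDT:
applied = {"category_a": 400, "category_b": 300, "category_c": 300}
selected = {"category_a": 120, "category_b": 60, "category_c": 45}

for cat, ratio in impact_ratios(selected, applied).items():
    print(f"{cat}: selection rate {selected[cat] / applied[cat]:.2f}, "
          f"impact ratio {ratio:.2f}")
# category_a: 1.00, category_b: 0.67, category_c: 0.50
```

A ratio near 1.0 suggests parity, and the EEOC’s informal four-fifths rule treats ratios below 0.8 as a red flag for adverse impact. As Sethupathy suggests, though, a raw ratio alone says nothing about sample sizes or why a gap exists, which is part of why publishing these numbers without context can mislead.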

No complaints lodged by job candidates

Now Hiring sign on lawn; image by Free To Use Sounds, via Unsplash.com.

It’s essentially up to job seekers and employees to enforce the law by filing a complaint with a city agency if AI has been unfairly used in the application process. However, because employers aren’t posting audits, job candidates have no way of knowing whether AI was used at all. Not a single complaint has been filed so far. And although employees could also hypothetically complain if they suspect they experienced AI hiring bias, none have done so either. “Job applicants are often forced consumers,” Schellmann adds. “If you want the job, will you turn down the process just because there is AI or automation in it? Probably not.”

Indeed, people are generally more concerned about job security with the advent of AI: almost 50% of U.S. employees now worry that AI will take their jobs in the near future. As such, it may be useful for job hunters to consider relocating to a state at low risk of AI-driven job losses. Nevada, North Dakota, and New Jersey, for example, are among the states with the safest jobs, in industries like agriculture, energy production, and tourism.

Lawmakers are watching

Although other jurisdictions have shown interest in passing laws in a similar vein to LL144, lawmakers have been hesitant to move forward with them after seeing that NYC’s efforts in this area have largely flopped. For example, New York state, California, and Washington, D.C., have all considered similar laws in recent years. “Lawmakers are watching to see what the efficacy of the New York City law is — if people are found to be harmed by AI tools, then we will see reactionary laws as a result,” Amanda Blair, an attorney at Fisher Phillips in NYC, tells SHRM. “I would expect additional guidance this year.”

Although LL144 has so far been mostly unsuccessful, the Cornell researchers say it is still a step in the right direction. “Anyone working on these laws is experimenting on accountability structures — we don’t know yet what works,” says Metcalf. “Nearly everything that civil society critics said [about LL144] has come true, but we learned things in this paper that other people can pick up [for future enforcement efforts].”
