Last year, New York City passed a law requiring organizations that do business in the city to perform annual audits of their AI tools to confirm the tools are free of bias. The groundbreaking law is the first of its kind in the U.S., and legal experts say similar measures could soon reach other cities and, eventually, entire states.
What do HR leaders need to know about this law that goes into effect on Jan. 1? Here’s a quick guide.
What is the law?
Passed by the New York City Council, it is designed to protect employees during the hiring and promotion processes from unlawful bias by employers that rely on automated employment decision tools. These include recruitment tools that read and select a job candidate’s resume and job application.
The law compels employers to conduct AI tool audits to ensure that these HR solutions do not have biases that might impede the hiring and promotion of workers. In an effort to provide transparency, employers are required to disclose the data either publicly on the company’s website or upon request.
The new regulation mandates that “at least 10 business days prior to using a tool, an employer or employment agency must notify each candidate or employee who resides in New York City that an automated employment decision tool will be used in connection with the assessment or evaluation of the candidate or employee,” according to a blog from the law firm Ogletree Deakins.
What is a bias audit?
A bias audit, according to Ogletree Deakins, is “an impartial evaluation by an independent auditor” that tests a recruitment or employee evaluation tool to determine whether the AI could negatively affect a job candidate’s hiring or a current employee’s pursuit of a promotion. The audit specifically considers a person’s race, gender and ethnicity.
Is NYC trying to outlaw AI tools?
No, the New York City law is not designed to prevent businesses from using AI, assures Simone Francis, an attorney with the technology practice of Ogletree Deakins. Instead, it aims to root out biases that may have been inadvertently built into these tools.
“There’s certainly been a lot of conversation about the ability of AI to potentially eliminate biases, but the law is intended to put certain processes in place to ensure that AI is being used in a way that does not lead to unintended results, including results that would conflict with existing anti-discrimination laws,” she says.
Who performs the bias audit?
The responsibility for performing the audit resides with the organization using these tools, not the AI solution providers. However, it cannot be performed by the departments that use the AI tool.
“The New York City law specifically says that you have to have an independent audit, which means you cannot just rely on the vendor and the vendor’s assurances,” says Francis.
Should HR leaders expect to perform these audits on a regular basis? If so, how often?
Francis expects that these audits will not be a “one and done proposition.” Instead, HR leaders should assume that they must be performed on a regular cadence.
“We’re still trying to develop some understanding of what the city means by that,” says Francis.
Does this law apply to only companies headquartered in New York City?
No, this law applies to any business that has offices and employees in any of New York City’s five boroughs and uses AI decision tools for hiring and promoting employees. If a business based in, say, North Carolina or Silicon Valley has a New York City office, it must comply with this law.
What are the penalties for not performing the audits in an open and timely manner?
So far, rather light. According to Francis, the penalties range from as little as $500 to $5,000. This does not include the potential damage to a company’s reputation, she adds.
Is this law set in stone?
Yes, but the details are still being worked out. The New York City Council will hold a public hearing on Oct. 24 following a comment period.
Could laws like this pop up in other cities and states and nationally?
It’s reasonable to assume so, says Francis, noting that anti-discrimination laws tend to start in one city or state and are then adopted by others. The federal government is also taking an interest. The EEOC issued guidance this spring instructing employers to evaluate AI tools for bias against people with disabilities; Democrats introduced a bill in Congress focused on automation, though it hasn’t advanced; and the National Artificial Intelligence Advisory Committee held its inaugural meeting in May to discuss AI’s use in several areas, including those related to the workforce, according to the Brookings Institution.
What should HR leaders and the IT teams that serve them consider when dealing with the new law?
It’s important to understand how AI tools are used, says Francis. HR and HRIS must “get their arms around that because how they’re actually used could either trigger application of this law in NYC or in other jurisdictions in the future,” she says.