
NYC employers: Do you use automated software to screen job applicants?


Artificial intelligence (AI) is a powerful tool for business operations, including screening and recruiting potential employees. However, AI tools may rely on flawed data or models that produce biased results for certain demographic groups. A new New York City law seeks to counteract these problems, requiring employers to independently audit their AI programs for bias against protected groups.

AI tools that may require audits

AI-driven tools can wade through countless sites, applications, and talent pools to find candidates who are good fits for your company. However, when these tools are not built, used, or managed properly, their selection criteria may fail to fairly assess candidates with atypical work experience, or may assign more value to some groups of people than others, producing discriminatory results against members of protected categories.

AI tools may substantially assist or replace human discretion in screening candidates by:

  • Adjusting the language of job listings
  • Sourcing candidates from specific groups
  • Assessing body language
  • Using algorithms to prioritize candidates
  • Using Natural Language Processing (NLP) to analyze context and language
  • Screening candidates based on text-based interactions

If an employer’s AI tools have bias built into their code or training data, the software may assign lower rankings to qualified candidates, or screen them out entirely, based on their race, sex, cultural background, or other protected characteristics.

The new law’s audit requirement is aimed at reducing or eliminating these biases before they affect hiring decisions.
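The law itself does not dictate how an audit must be performed, but one common statistical check an independent auditor might run is a selection-rate comparison across demographic groups, along the lines of the “four-fifths rule” long used in U.S. employment-discrimination analysis. The short Python sketch below is purely illustrative: the group names, applicant counts, and 0.8 threshold are hypothetical assumptions, not requirements of the NYC law.

```python
def impact_ratios(outcomes):
    """For each group, compute its selection rate and its impact ratio
    relative to the group with the highest selection rate."""
    rates = {
        group: selected / applicants
        for group, (applicants, selected) in outcomes.items()
    }
    top_rate = max(rates.values())
    return {group: (rate, rate / top_rate) for group, rate in rates.items()}

# Hypothetical screening outcomes: group -> (applicants, candidates advanced)
outcomes = {
    "Group A": (200, 90),   # selection rate 0.45
    "Group B": (180, 45),   # selection rate 0.25
    "Group C": (150, 60),   # selection rate 0.40
}

for group, (rate, ratio) in impact_ratios(outcomes).items():
    flag = " <- below 0.8; review for adverse impact" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

In this hypothetical data, one group’s impact ratio falls below 0.8, which is the kind of result that would prompt a closer review of a tool’s selection criteria.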

Issues you may face

Even employers who do not intentionally engage in discriminatory hiring practices could be accused of serious legal violations if they do not audit their AI programs as the new law requires.

While the law does not give individuals a private right of action, the city may be able to bring a class action complaint against an employer in federal court if, for instance, it finds that the employer used discriminatory AI tools. Employers may also face civil penalties of $500 for a first violation and up to $1,500 for each subsequent one, multiplied by the number of discriminatory AI tools in use and the number of days each issue went uncorrected. Under that formula, a single noncompliant tool left uncorrected for 30 days could generate roughly $44,000 in penalties ($500 for the first day plus $1,500 for each of the remaining 29).

As Bloomberg has reported, employers are awaiting further guidance from the city on how the law will work in practice. In the meantime, employers who use technology in their recruiting and hiring processes should review their software and speak with an employment attorney to ensure that they will be in compliance when the new law takes effect in January.
