Today employers have at their disposal many applications and automated solutions to help them make employment decisions such as hiring, firing, promotion, and discipline. However, using these tools does not shield employers from potential violations of the law, including Title VII of the Civil Rights Act of 1964. To advise employers of the risks of incorporating artificial intelligence into employment decisions, the Equal Employment Opportunity Commission has waded into the arena and issued new guidance.
The technical guidance reminds employers that a decision made by software rather than by a human does not immunize them from liability for discrimination. The same rules that govern hiring, promotion, and firing decisions made by humans also apply to decisions made with the assistance of technology.
As a refresher, the EEOC issued its Uniform Guidelines on Employee Selection Procedures in 1978 to address how employers should determine whether their selection processes have a disparate impact. Disparate impact means that the processes used discriminate against people in a protected class, even if there is no intent to do so. With the increasing use of AI to streamline employment selection processes, the EEOC’s updated guidance explains how these AI implementations might result in a disparate impact. The guidance reminds us that Title VII prohibits employers from using tests or selection procedures (those used to make decisions regarding hiring, firing, and promotions) that “have a disproportionately large negative effect on a basis prohibited by Title VII.”
The EEOC provides several examples of what types of AI might be incorporated into the decision-making process. The following examples of AI tools may implicate issues under Title VII:
- Resume scanners that prioritize applications using certain keywords.
- Employee monitoring software that rates employees on the basis of their keystrokes or other factors.
- “Virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements.
- Video interviewing software that evaluates candidates based on their facial expressions and speech patterns.
- Testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test.
Employers should understand the legal risks that accompany the expanded use of AI and assess whether such use could have an adverse impact on a particular protected group. In particular, employers should check whether a procedure results in a selection rate for individuals in one group that is substantially lower than the selection rate for individuals in another group. The EEOC further advises that employers are not off the hook when they use tools created by third parties; they would likely remain responsible under Title VII even when relying on an outside vendor’s tools. Employers should therefore conduct their own assessment of the risks of using AI rather than rely solely on vendor representations or warranties. Many vendors advertise that their tools are “bias free,” but that representation provides little defense under Title VII.
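For illustration, the selection-rate comparison described above can be expressed in a few lines of code. The sketch below applies the "four-fifths" (80%) rule of thumb from the EEOC's 1978 Uniform Guidelines, under which a group's selection rate below 80% of the highest group's rate may indicate adverse impact. The function name and applicant numbers are hypothetical examples, not data from the guidance, and this heuristic is only one screen among the statistical approaches an employer might use.

```python
def four_fifths_check(groups: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate falls below 80% of the highest rate.

    `groups` maps a group label to (number selected, number of applicants).
    A True flag means the group's rate may indicate adverse impact under
    the four-fifths rule of thumb; it is a screening heuristic, not a
    legal conclusion.
    """
    # Selection rate = fraction of applicants in the group who were selected.
    rates = {g: selected / applicants for g, (selected, applicants) in groups.items()}
    highest = max(rates.values())
    return {g: rate < 0.8 * highest for g, rate in rates.items()}

# Hypothetical pool: 48 of 80 selected in one group (60% rate),
# 12 of 40 in another (30% rate). 0.30 / 0.60 = 0.50 < 0.80, so flagged.
flags = four_fifths_check({"group_a": (48, 80), "group_b": (12, 40)})
print(flags)  # {'group_a': False, 'group_b': True}
```

In this example, group_b's 30% selection rate is only half of group_a's 60% rate, well below the four-fifths threshold, so the tool's output for that group would warrant closer scrutiny.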
While the EEOC’s updated guidance is just that, guidance, it is key to remember that updates such as these help employers gain insight on the agency’s enforcement priorities. As a result, employers should take steps to assess any tools that utilize AI in employment decision-making, make their own independent assessments as to whether they have any adverse impact, and adjust their use of such tools as necessary.