This article originally appeared as a Gavel to Gavel guest column in the Journal Record on July 9, 2025.
By: Phoebe M. Barber
Artificial Intelligence (“AI”) is rapidly changing the landscape of employment law. Employers can now use AI tools for many traditional human resources functions, including screening resumes and drafting performance reviews. However, employers should do so with caution, as the use of AI could implicate federal anti-discrimination laws – such as Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA) – or their state equivalents.
For example, when using AI to screen resumes, employers will often input the resumes of existing employees as source data. Consider a scenario where the majority of employees in a certain position are male. The system may then infer that a male-coded resume is superior to a female-coded resume and, without any explicit instruction to do so, may downgrade resumes containing female-coded terms (such as “Society of Women Engineers”). In essence, feeding the AI resumes from the past could train it that male applicants are preferable – simply because most past employees were male. Such an algorithm could create liability for sex discrimination under Title VII. Replace “women” with “workers over 40,” “people with disabilities” or any other protected class, and the same risk attaches under the ADEA or ADA.
Title VII prohibits discrimination by employers on the basis of race, color, religion, sex and national origin. The statute (like other federal anti-discrimination laws) applies whether a human or a machine makes the decision – and applies equally to an employer who uses a third-party vendor to make such decisions. Further, while traditional discrimination cases often focus on disparate treatment – meaning, intentional discrimination – plaintiffs’ cases involving AI will likely focus on disparate impact – meaning, adverse effects on protected classes caused by unintentionally biased algorithms.
To avoid liability for unintentional discrimination through the use of AI tools, employers should proactively address the use of AI in hiring and other employment decisions. Employers should adopt and implement a policy on AI use that specifically addresses prohibited and acceptable uses and fosters transparency with regard to the company’s use of AI. Employers should also conduct frequent AI audits to evaluate AI tools for potential (and even unintentional) biases. If an employer uses a vendor for its AI tools, it should review vendor agreements to ensure it understands how the AI tools function and to confirm compliance with anti-discrimination laws. Additionally, it is crucial that employers implement human review and oversight of all AI recommendations – treating AI as a tool, rather than a decision-maker. Lastly, employers should rely on their employment counsel to monitor developments in the ever-changing landscape of potential discrimination claims arising from AI.
About the author:
Phoebe M. Barber is a litigation attorney who represents both individuals and public companies in a wide range of civil litigation matters in both state and federal court.
CONTACT: pmbarber@phillipsmurrah.com | 405.606.4711