Chatbots, Video Screens, and Lawsuits? California Employers Brace for New AI Rules

You’ve hired a recruiter to streamline your candidate intake process. That vendor uses AI automation, including an Applicant Tracking System (“ATS”) that screens 100 resumes before lunch, chatbots that line up interviews, and ChatGPT-generated interview questions for candidates. The interview is a Zoom screening in which AI analyzes response times and facial expressions and takes notes on everything said. With all this automation and outside help, your hiring process is moving faster and more smoothly than ever before. You sit back in your chair and smile. With your vendor handling recruiting, you may even have some free time for those things you’ve been putting off for ages… yes, even updating your employee handbook or revamping your job descriptions.

AI and hiring technology are advancing quickly, and with those advances, potential liability looms in the rearview mirror. Remember the warning printed on that mirror: the object (liability) may be closer than it appears. Although California’s new regulations were approved on June 27, 2025, they do not take effect until October 1, 2025, giving you a bit of time to get up to speed.

Under these new regulations, ATS platforms, chatbots, video interview filters, and scoring tools fall squarely under the same Fair Employment and Housing Act (“FEHA”) rules that already govern your other candidate selection and equal employment practices. If an AI tool affects who gets seen, who moves forward, or who receives an offer, it falls under FEHA.

What’s New?

California did not create a new cause of action; rather, it pulled AI tools into its existing FEHA rules. Specifically, new regulations recently issued by the Civil Rights Department’s (“CRD”) Civil Rights Council (“CRC”) define “automated decision systems” (“ADS”) broadly to include tools that parse resumes, score interviews, analyze reaction times, or analyze facial expressions, word choice, and/or voice in interviews. These tools now sit squarely within the scope of FEHA if they have the potential to contribute to candidate hiring, employee promotion, or evaluation outcomes.

The rules also make clear that if you use vendors and staffing partners to conduct your candidate screenings, they are considered agents of your business for purposes of FEHA liability. Said another way, your company remains on the liability hook for discriminatory effects even if a third-party vendor runs the AI tools; outsourcing does not shift that risk.

The CRC regulations also require enhanced recordkeeping. Under the new rules, be sure to keep applications, selection criteria, and ADS data for at least four years (and, if a complaint is filed, through final disposition of the matter). Also expect heightened scrutiny of any AI tools that elicit disability information before you make a conditional offer of employment. That includes games, prompts, or tools that measure dexterity or reaction time and can implicate disability.

Why This Matters

Several lawsuits and administrative claims alleging adverse hiring impact tied to a company’s use of automated tools are already pending. With these new CRD rules, we expect a significant increase in failure-to-hire and discriminatory-impact claims. For example, if a resume ranker or video filter reduces pass rates for a protected group, you will need clear evidence that the tool’s process is not discriminatory in nature, but is job-related and consistent with business necessity. That could include, for example, evidence of anti-bias testing (its quality, recency, and scope, and your response to the results). You also need to be ready to address less discriminatory alternatives.

This proof lives in your validation file, your testing log, your contracts, and your ability to export inputs and outputs quickly. Thus, data access becomes a defense issue and something to review in your contracts with any vendors you use. Candidly, you cannot evaluate impact or answer regulators if a vendor holds the keys and refuses to provide that information. So now is the time to audit your contracts. A few must-haves: audit rights, fast exports, explainability detail, and set timelines to fix flagged issues. Also be sure to build “pause rights” into agreements, so you can suspend an AI tool during an investigation, should an issue arise.

Turn Policy Into Practice

If you’re wondering where to start, make an inventory. List every ADS (not just your ATS) touching your candidate recruiting, employee promotion, or evaluation practices. Note each tool’s purpose, inputs, outputs, owners, and risk notes. Next, map where each tool sits in your candidate or employee workflow, identify which decisions flow from the tool’s output, and determine who at the company has authority to override an AI decision. From there, make assignments to Human Resources or Legal. Finally, move record retention to a four-year default for applications, selection criteria, and ADS data. If a lawsuit arises, ensure that your document retention is set up to run through the end of the matter. (One way to structure an inventory record is sketched below.)
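For teams that track this inventory in a structured file or lightweight script rather than a spreadsheet, here is a minimal sketch in Python of what one inventory record and the four-year retention default might look like. This is purely illustrative: the field names and the `ADSInventoryRecord` structure are our own shorthand, not terms drawn from the regulations.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative only: field names are hypothetical shorthand,
# not categories defined by the CRD regulations.
@dataclass
class ADSInventoryRecord:
    tool_name: str          # e.g., the ATS or video-screening product
    vendor: str             # who runs it (you remain on the hook either way)
    purpose: str            # which decision it touches: screening, promotion, evaluation
    inputs: list            # data fed in: resumes, video, reaction times
    outputs: list           # what it produces: scores, rankings, pass/fail
    owner: str              # who at the company can override its output (HR or Legal)
    risk_notes: str = ""    # e.g., "measures reaction time; may implicate disability"
    last_used: date = field(default_factory=date.today)
    litigation_hold: bool = False  # set True once a complaint or lawsuit is filed

def retention_expires(record: ADSInventoryRecord) -> date | None:
    """Four-year default; no expiration while a litigation hold is active."""
    if record.litigation_hold:
        return None  # retain through final disposition of the matter
    return record.last_used + timedelta(days=4 * 365)  # approximate four years
```

Even a spreadsheet with these same columns accomplishes the goal; the point is that each tool’s purpose, inputs, outputs, owner, and retention status live in one place you can export quickly if a regulator or plaintiff comes calling.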

Review and adjust your contracts with staffing partners and software providers that might use AI tools in their onboarding process. Do you have audit rights? In addition to the items above, be sure to seek cooperation duties for adverse-impact reviews and a clear fix-it timeline for when testing flags any issues.

As Bob Dylan famously sang, “The times they are a-changin’.” Don’t get left behind.

Contact EmployLaw Group with any questions related to workplace compliance.