AI in Hiring Is Now Regulated: What Every Employer Needs to Know in 2026

As of January 1, 2026, a wave of new state laws has transformed how employers can use artificial intelligence in hiring and employment decisions. Whether you’re a small business using an applicant tracking system or a large enterprise with sophisticated talent analytics, these regulations likely apply to you—and the penalties for non-compliance can be severe.

Here’s what you need to know to protect your organization.

The New Legal Landscape: A State-by-State Patchwork

Unlike data privacy, where GDPR provided a single (if complex) framework, AI employment regulation in the United States has emerged as a patchwork of state and local laws. Notably, there is no federal preemption. According to Stinson LLP, on July 1, 2025, the U.S. Senate voted 99-1 to strike from the One Big Beautiful Bill Act a proposed moratorium that would have prevented state and local governments from enforcing AI regulations. That means employers must navigate multiple, sometimes conflicting state requirements.

Laws Now in Effect

California (Effective October 1, 2025)

According to Ballard Spahr, on June 30, 2025, California’s Civil Rights Department announced final regulations under the Fair Employment and Housing Act addressing employment discrimination in automated-decision systems. These regulations clarify that it is unlawful under California law to use automated-decision systems in hiring or personnel decisions that discriminate against applicants or employees based on protected characteristics such as age, gender, race, or disability.

The regulations also clarify that an automated-decision system that elicits information about an applicant’s disability may constitute an unlawful medical inquiry. Employers using these systems must preserve the data and related records for four years from the later of the date the data was created or the date of the personnel action involved.

Illinois (Effective January 1, 2026)

Illinois House Bill 3773, passed in August 2024, amends the Illinois Human Rights Act to regulate AI in employment decisions. According to SHRM, the law reinforces that employers remain responsible for ensuring AI tools used in hiring, promotion, discipline, and termination do not produce unlawful discriminatory outcomes—whether intentional or not.

Ogletree Deakins reports that the Illinois Department of Human Rights has released draft rules requiring employers to provide notice when AI is used in covered employment decisions. Required notifications must include the AI product’s name, the employment decisions it affects, its purpose, the data it collects, targeted job positions, and contact details for inquiries.

Per the Connecticut Employment Law Blog, enforcement is through the Illinois Department of Human Rights, with remedies including back pay, reinstatement, emotional distress damages, and attorney’s fees.

Texas (Effective January 1, 2026)

According to SHRM, the Texas Responsible Artificial Intelligence Governance Act establishes new expectations around transparency, risk evaluation, and governance for AI systems deployed in high-impact settings, including employment.

New York City (In Effect Since July 2023)

NYC’s Local Law 144 was among the first AI hiring regulations in the nation. According to HR Defense Blog, the law requires employers to post audit summaries online, notify candidates and employees at least 10 business days before using an automated employment decision tool (AEDT), and offer an alternative selection process if requested.

Fines range from $500 to $1,500 per violation, multiplied by each day of non-compliance and each affected applicant—potentially reaching millions for systematic violations. Importantly, NYC Department of Consumer and Worker Protection guidance confirms that the law applies even when humans make final decisions based on AI rankings or scores.
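
For illustration only (these figures are hypothetical and not drawn from any enforcement action): an employer that used a non-compliant AEDT on 100 applicants over 30 days, at $1,500 per subsequent violation, would face exposure on the order of 100 × 30 × $1,500 = $4.5 million.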

Coming Soon

Colorado (Effective February 1, 2026—Subject to Potential Modification)

The Colorado Artificial Intelligence Act (SB 24-205), passed in May 2024, is among the most comprehensive state AI laws. Per GovDocs, the law requires employers with more than 50 employees using AI in employment practices to establish risk management policies, conduct annual impact assessments, and implement mitigation measures.

However, Ballard Spahr reports that Governor Jared Polis called a special session in August 2025 to address the law’s implementation, and there have been efforts to modify or delay the effective date. Employers should monitor developments closely.

What Counts as “AI” Under These Laws?

The definitions are intentionally broad. According to The HR Digest, these laws generally cover any technology used for candidate selection and employer decision-making, including:

  • Applicant tracking systems with automated screening
  • Resume parsing and ranking tools
  • Video interview analysis software
  • Algorithmic assessments for promotions, discipline, or termination
  • Any automated system that “influences or facilitates” employment decisions

Critical point: Per the HR Defense Blog, NYC guidance confirms that having a human make the final call does not eliminate compliance obligations; if AI tools are influencing the decision-making process, the requirements still apply.

The Real Risks: Litigation and Enforcement Are Already Here

This isn’t theoretical. Employers and AI vendors are already facing legal consequences.

Mobley v. Workday Inc. — According to Stinson LLP, in Mobley v. Workday Inc., 740 F. Supp. 3d 796 (N.D. Cal. 2024), a federal court allowed claims to proceed against a major HR software vendor. The plaintiff, a graduate of a historically Black college, alleged that Workday’s AI applicant screening software discriminated based on age, race, and disability. The complaint alleged the vendor trained its AI using data from employers’ existing workforces, leading the AI to replicate historical biases.

EEOC Settlement — Stinson LLP also reports that the Equal Employment Opportunity Commission secured a $365,000 settlement from a virtual tutoring company following claims that the company’s AI recruitment tool screened out candidates based on age.

These cases signal that both employers and the vendors they rely on face exposure.

The “Black Box” Problem

Why are regulators so concerned? Stinson LLP explains two interconnected risks:

  1. Lack of transparency: AI systems may generate decisions whose reasoning their programmers cannot explain—the “black box” problem.
  2. Embedded bias: AI-driven outputs can replicate biases in training data. Even when programmers ensure training data does not include protected characteristics, the data may contain proxies for such characteristics (like ZIP codes serving as proxies for race).

Because employment decisions remain subject to existing anti-discrimination laws (Title VII, the ADA, the ADEA), algorithmic bias can create significant legal exposure.

Five Steps to Protect Your Organization

1. Inventory Your AI Tools

You cannot manage what you haven’t identified. SHRM recommends that employers begin by inventorying AI tools currently in use. Many employers are surprised to discover AI embedded in systems they use daily.
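
A simple starting point (not an exhaustive checklist) is a spreadsheet that records, for each tool:

  • The vendor and product name
  • The employment decisions it touches (screening, ranking, promotion, discipline, termination)
  • The data it collects or ingests
  • The jurisdictions where affected applicants and employees are located
  • An internal owner responsible for vendor questions and compliance follow-up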

2. Understand Your Geographic Exposure

The laws that apply depend on where your employees and applicants are located, not just where your company is headquartered. A Texas employer hiring a remote worker in Illinois must comply with Illinois law for that worker.

3. Review Vendor Contracts

SHRM advises employers to review vendor practices as part of compliance preparation. Examine your contracts: Do they include representations about bias testing? Indemnification for discrimination claims? Access to audit data?

4. Implement Notice and Transparency Protocols

Multiple jurisdictions now require advance notice before AI is used in employment decisions. Per Ogletree Deakins, Illinois’s draft rules would require notice including the AI tool’s name, purpose, data collected, and affected positions. Develop standardized disclosures now.
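
As a working template (adapt it with counsel before use), a candidate-facing disclosure might cover:

  • The name of the AI tool and its vendor
  • The employment decisions it will inform and its purpose
  • The data it collects
  • The positions or processes it applies to
  • A contact for questions and, where required, for requesting an alternative selection process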

5. Conduct Proactive Assessments

Ogletree Deakins notes that, unlike NYC and Colorado, Illinois does not require formal bias audits. Even so, proactive assessments may reveal whether an employer’s use of AI produces discriminatory outcomes, including unintentional ones, and documentation of good-faith compliance efforts matters in enforcement actions.
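
As a minimal sketch of what such an assessment can look like, the snippet below compares selection rates across applicant groups and flags any group whose rate falls below four-fifths of the highest rate, a common screening heuristic drawn from the EEOC’s Uniform Guidelines. The data and group labels are hypothetical; this is an illustration, not a substitute for a formal bias audit or legal advice.

  # Hypothetical adverse-impact screen (Python).
  # Compares selection rates by group and flags any group whose rate is
  # below 80% of the highest group's rate (the "four-fifths rule").

  applicants = {  # hypothetical counts
      "Group A": {"applied": 200, "selected": 60},
      "Group B": {"applied": 150, "selected": 30},
  }

  rates = {group: c["selected"] / c["applied"] for group, c in applicants.items()}
  highest_rate = max(rates.values())

  for group, rate in rates.items():
      impact_ratio = rate / highest_rate
      status = "review with counsel" if impact_ratio < 0.8 else "no flag"
      print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {status}")

In this hypothetical, Group B’s impact ratio is about 0.67, which would warrant a closer look and documentation of any follow-up.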

The Bottom Line

As HR Defense Blog aptly summarizes: “The age of unregulated AI in employment is over.” Employers who take compliance seriously now will be positioned to use these tools effectively and legally. Those who don’t may find themselves explaining to a court—or a regulatory agency—why their algorithm rejected qualified candidates.


Need help navigating AI compliance in your hiring process? Contact Kohan LLC for a confidential assessment of your current practices and a roadmap to compliance.

Sharam Kohan is an HR consultant and the founder of Kohan LLC, providing strategic human resources guidance to California employers. Connect with him on LinkedIn.


Sources

  • Ballard Spahr LLP, “Dueling Federal and State Directives on AI Hiring Technology Bring Compliance Challenges for Employers” (August 2025)
  • Connecticut Employment Law Blog, “AI & Hiring – The Laws Are Coming” (October 2025)
  • GovDocs, “Compliance Conundrum – New State Laws Governing the Use of AI in Employment Practices” (September 2024)
  • HR Defense Blog, “AI in Hiring: Emerging Legal Developments and Compliance Guidance for 2026” (November 2025)
  • Ogletree Deakins, “Illinois Unveils Draft Notice Rules on AI Use in Employment Ahead of Discrimination Ban” (December 2025)
  • SHRM, “New Year Brings New AI Regulations for HR” (December 2025)
  • Stinson LLP, “With Federal Restrictions Removed, A Wave of State Laws Highlights Risks of Using Artificial Intelligence in Hiring and Employment Decisions” (July 2025)
  • The HR Digest, “Revisiting 2026 State AI Laws That Aim to Regulate AI in Employment” (December 2025)

Related Topics: AI hiring compliance, automated employment decision tools, AEDT audit, California AI regulations, Illinois HB 3773, Colorado AI Act, HR compliance 2026, algorithmic discrimination, employment law AI


DISCLAIMER: The information in this article is provided for general informational and educational purposes only and does not constitute legal, tax, or other professional advice. No consultant-client or attorney-client relationship is created by your use of this content. Kohan LLC is an HR consulting firm, not a law firm. Before taking any action based on this information, consult with a licensed attorney or other qualified professional who can evaluate your specific circumstances. Kohan LLC makes no representations regarding the accuracy or completeness of this information.

