As the 2026 legislative session progresses, Connecticut is positioned to enact one of the nation's most comprehensive regulatory frameworks governing the use of artificial intelligence in the workplace. Senate Bill 435, "An Act Concerning Automated Decision Systems Protections for Employees," represents a fundamental shift in the legal landscape, moving from a largely hands-off approach to algorithmic management toward a model of mandatory transparency, human accountability, and rigorous bias testing. If enacted, the bill will impose significant new compliance burdens on any employer doing business in the state that uses automated tools to assist in personnel decisions, from initial recruiting and hiring through ongoing productivity monitoring and termination.

What the bill does

At its core, SB 435 regulates the use of AI and algorithmic tools in employment decisions. Any employer doing business in Connecticut that uses a computational process to assist in hiring, firing, promotions, compensation, performance evaluations, scheduling, or productivity monitoring is in scope. The definition appears deliberately broad, encompassing resume screeners, skills assessments, tools that analyze facial expressions or voice during video interviews, targeted recruiting ad algorithms, and any product that analyzes third-party data about applicants or employees.

The key employer obligations

  • Disclose AI use. Before an applicant or employee interacts with a covered AI system, employers must tell them they are dealing with an automated process.
  • Written notice before data collection and before any AI-assisted decision. The pre-decision notice must include what the system does, how the decision can be appealed, and a link to the most recent bias audit.
  • Mandatory human review of all final employment decisions. The bill states that "[n]o automated employment-related decision process shall be used by a deployer in making a final or determinative employment-related decision without human review." That reviewer must have actual authority to change the outcome and cannot simply rubber-stamp the AI's recommendation.
  • Pre-deployment (and annual) bias audits by an independent, state-approved auditor. Audits must assess disparate impact against protected classes and test for less discriminatory alternatives. Results must be filed with the Labor Commissioner and published on the employer's website. Critically: "No automated employment-related decision process shall be deployed or continue to be deployed if the most recent bias audit identified any disparate impact," unless business necessity is shown and corrective actions are implemented. (An illustrative sketch of the kind of selection-rate analysis such audits typically involve appears after this list.)
  • Adverse decision rights. If AI contributes to a rejection, termination, or demotion, the affected individual is entitled to a plain-language explanation, access to the underlying data, the ability to correct errors, and an appeal with human review.
  • Union bargaining. For unionized workplaces, AI tool deployment becomes a mandatory subject of bargaining, and employers cannot use AI in any way that modifies or impairs an existing collective bargaining agreement.
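
SB 435 leaves the audit methodology itself to the state-approved auditors and future Labor Department rulemaking rather than prescribing a formula. Purely as an illustration of the kind of selection-rate comparison bias audits commonly involve, the sketch below applies the EEOC's familiar "four-fifths" rule of thumb to hypothetical screening data; the group labels, the numbers, and the 0.8 threshold are assumptions for illustration, not standards drawn from the bill.

```python
# Illustrative sketch only: a hypothetical disparate-impact screen using the
# EEOC "four-fifths" rule of thumb. SB 435 does not prescribe this method;
# audit standards would come from state-approved auditors and CTDOL rules.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants that the tool advanced or selected."""
    return selected / applicants if applicants else 0.0

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {group: selection_rate(s, n) for group, (s, n) in outcomes.items()}
    top = max(rates.values(), default=0.0)
    return {group: (rate / top if top else 0.0) for group, rate in rates.items()}

# Hypothetical screening results for a resume-screening tool:
# (candidates advanced, total applicants) for each group.
outcomes = {
    "group_a": (120, 300),  # 40% selection rate
    "group_b": (45, 150),   # 30% selection rate
}

for group, ratio in impact_ratios(outcomes).items():
    status = "flag for disparate-impact review" if ratio < 0.8 else "within 4/5 guideline"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```

Because the bill also requires auditors to test for less discriminatory alternatives, an actual audit would presumably go beyond a single ratio check of this kind and compare alternative models or selection criteria.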

Employment law implications for clients

  • Discrimination and disparate impact liability. The bill amends the Connecticut Fair Employment Practices Act directly, making it an explicit discriminatory practice to use an AI tool in a manner that has the effect of causing discriminatory hiring, termination, or compensation decisions based on any protected class, such as race, color, sex, age, disability, or national origin. Courts adjudicating discrimination claims "shall consider any evidence, or lack of evidence, of anti-bias testing or similar proactive efforts to avoid such discriminatory practice, including … the quality, efficacy, recency and scope of such testing." In practical terms, if a client uses an AI hiring tool and is sued for discriminatory hiring, the absence of a bias audit, or a completed audit that flagged problems the employer ignored, is evidence the court must weigh, and the latter can amount to powerful proof of liability.
  • Wrongful termination and adverse action claims. Any AI-assisted termination or demotion triggers mandatory post-decision disclosure requirements, an appeal right, and a human review obligation. An employer that skips those steps and gets sued will be obligated to explain why it did not have a human actually review the AI's recommendation before firing someone.
  • Wage and hour/scheduling tools. Employers increasingly use algorithmic tools for shift scheduling, productivity tracking, and overtime management, all of which fall within SB 435's definition of covered "employment-related decisions." Accordingly, clients using these platforms will need to bring them within their bias audit, notice, and human-review processes.
  • Retaliation exposure. The bill creates a standalone anti-retaliation provision: no employer can discipline or discharge any applicant or employee because they filed a complaint, cooperated with an investigation, or exercised any right under the act. This runs parallel to existing retaliation protections but creates a separate cause of action specifically tied to AI complaints.
  • Third-party vendor liability. One of the more pointed concerns the business community has raised is that employers are on the hook for AI tools developed and sold to them by third parties. A client that buys an off-the-shelf hiring platform from a major HR tech vendor can still be liable as a "deployer" if that vendor's product generates disparate impact. The bill does allow the developer to contractually assume the deployer's obligations, but only if the contract explicitly says so. Clients should consider reviewing vendor agreements now and pushing for indemnification provisions and audit cooperation obligations.
  • Unionized employers face the sharpest obligations. In addition to everything above, any employer with a collective bargaining agreement must provide written notice and bargain in good faith before deploying, testing, or materially modifying any covered AI system. Further, AI cannot be used in any way that "modifies or impairs" the CBA, including uses that reduce wages, fringe benefits, or overtime hours, or that effectively assume the duties and functions of bargaining unit members. For clients with significant union workforces, this will likely require pre-deployment legal review of every new AI tool before rollout.

Enforcement

The Connecticut Attorney General can bring Connecticut Unfair Trade Practices Act (CUTPA) claims for disclosure and audit violations; employees and applicants have a private right of action in Superior Court for damages, equitable relief, and attorney's fees; and the anti-retaliation provision creates additional individual claims. 

Effective date and implementation

If passed in its current form, the provisions of SB 435 are scheduled to take effect on October 1, 2026.

  • Regulatory runway: While the statutory date is set for late 2026, the Connecticut Department of Labor (CTDOL) has signaled in recent testimony that establishing the necessary oversight infrastructure and independent audit approval processes may push practical enforcement deadlines into 2027.
  • Compliance preparation: Employers should expect a compressed timeline for disclosure compliance, while the more technical requirements, such as the state-approved audit standards, will likely be clarified through agency rulemaking shortly after the bill's enactment.

Strategic takeaway and legislative outlook  

The central implication is that SB 435 effectively ends the era of set-and-forget HR automation in Connecticut, mandating a permanent human-in-the-loop for every AI-assisted decision and exposing employers to direct litigation for algorithmic bias, even when they rely on third-party vendor platforms. There are ongoing discussions that the final version of the bill, should it proceed to passage, may be streamlined, with certain provisions trimmed to address stakeholder concerns. We will continue to monitor these legislative developments closely and will provide further updates to readers as the bill's final language and scope are established.