
Practical Impact of President Trump’s New Executive Order Curbing State AI Laws — Why Your AI Governance Program Still Matters

The Trump Administration’s new Executive Order (EO 14365) on artificial intelligence seeks to reset the U.S. AI regulatory landscape by curtailing state-specific AI laws and charting a path toward a single, “minimally burdensome” national framework. If successful, EO 14365 could reshape how companies approach multistate AI compliance, particularly where state laws demand bias audits, truthful output alterations, impact assessments, or extensive disclosures. 

But the near-term effect is procedural, not substantive. State AI laws remain in force unless and until courts enjoin them or Congress passes preemptive legislation. More importantly, EO 14365 does not change the core legal, ethical, and commercial reasons to build transparent, accurate, and well-controlled AI systems. The bottom line for organizations using or developing AI: stay the course on compliance, strengthen AI governance, and prepare for a period of regulatory uncertainty.

Core Directives and Preemption Mechanics

The executive order directs a coordinated federal effort to challenge and constrain state AI regulations that conflict with national AI policy. It does so by: 

  • Creating a Department of Justice (DOJ) AI Litigation Task Force to challenge unconstitutional or preempted state AI laws
  • Instructing the Department of Commerce (DOC) to identify "onerous" state AI laws within 90 days (with referrals to DOJ for litigation) and to condition certain broadband funds on the absence of such laws, to the extent permitted by law
  • Directing the Federal Trade Commission (FTC) to issue a policy statement on when federal prohibitions on unfair or deceptive acts or practices preempt state rules requiring alteration of truthful AI outputs
  • Directing the Federal Communications Commission (FCC) to consider a federal AI reporting/disclosure standard that could preempt conflicting state requirements
  • Requiring the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to prepare legislative recommendations for a uniform federal framework preempting conflicting state AI laws, with carve-outs (e.g., child safety, certain infrastructure, state procurement)

EO 14365 does not automatically invalidate or challenge any specific state AI statutes or regulations. At present, state and local laws (e.g., comprehensive regimes like Colorado’s algorithmic discrimination law) remain enforceable. 

Practical Implications Across Industries

In the immediate term, companies should expect intensified federal-state friction and a shifting compliance horizon. Employers and other organizations operating nationally will need to track three parallel threads: (1) DOJ Task Force litigation, (2) DOC's identification of onerous laws and related funding conditions, and (3) potential preemptive effects from FTC and FCC policy actions.

For life sciences and health care organizations, for example, a single federal standard could ultimately simplify compliance and bolster preemption defenses in litigation. Yet the executive order's emphasis on "truthful outputs" sits uneasily with the technical reality that mitigating bias and correcting for incomplete or skewed clinical data is often necessary to ensure accurate, safe, and clinically reliable AI outputs. Regulators, institutional review boards, payers, and global authorities will continue to expect rigorous bias, safety, and fairness evaluations for patient-facing AI systems regardless of federal-state dynamics.

In short, EO 14365 may reduce some state-level burdens over time, but it also introduces interim uncertainty. Companies using AI should continue program-by-program review of high-risk AI deployments (e.g., hiring, promotion, monitoring, safety, eligibility determinations, medical decision support, and other consequential use cases) and maintain robust documentation in anticipation of both state enforcement and evolving federal standards.

Why AI Governance Programs Still Matter

There are compelling reasons to develop and maintain strong AI governance frameworks beyond compliance with state AI laws. Even if specific state laws are challenged or narrowed, common-law exposure and civil rights and consumer protection frameworks remain. Title VII, the ADA, the ADEA, laws restricting unfair or deceptive acts or practices, contract and tort theories, and privacy laws all provide avenues for claims tied to AI-mediated decisions.

Building standardized, right-sized compliance programs that cover the majority of existing state requirements is not wasted effort, even if preemption narrows some obligations later. The technical controls, documentation, and governance discipline organizations put in place now will reduce litigation risk, improve regulator trust, support customer diligence, and accelerate adaptation to any future federal standard.

The Takeaway

EO 14365 reflects a strong federal interest in harmonizing AI policy and favors a single national standard over state-specific regimes. For now, state laws remain in effect, core federal and common-law risks persist, and the business case for responsible AI use has not changed. Rather than pausing existing compliance efforts, organizations should consider staying the course: build standardized compliance programs that address common state requirements, formalize AI governance, and maintain a sustained focus on accuracy and transparency. This foundation travels well across legal scenarios and will support ongoing AI compliance preparedness as policy evolves.

Reed Smith will continue to track developments regarding the regulation of artificial intelligence. If you have any questions about this executive order or any related topic, please reach out to the authors of this post or other lawyers at Reed Smith.