Post-Election AI Policy Outlook: Federal Pivot, State Acceleration

The 2024 federal election produced a unified Republican government taking office in January. We are not in the business of political prediction here, but we are in the business of legal prediction, and the implications for AI policy are clear enough to flag now while clients are doing 2025 planning.

This post is structured around three theses: the Biden Executive Order is going to be rescinded; federal AI rulemakings already in flight will mostly stall or be redirected; and state-level AI regulation will accelerate, both in volume and in litigation. None of these are particularly bold predictions individually. The combination matters because it changes the practical question of where the binding constraints on AI development and deployment will come from in 2025-26.

The Biden EO is going to be rescinded

Executive Order 14110, signed October 30, 2023, is the centerpiece of the current federal AI policy posture. It directs more than fifty agency actions, establishes the dual-use foundation model reporting regime under the Defense Production Act, and creates the AI Safety Institute at NIST.

The President-elect's transition team and 2024 platform have been explicit that EO 14110 will be rescinded "on day one." That is plausible. The rescission itself takes a stroke of a pen; the harder question is what happens to the agency work product that has already been produced.

Our read of how that lands:

Federal rulemakings: stall or redirect

Several federal rulemakings touching on AI are mid-flight. Consistent with the thesis above, we expect most of them to stall under new leadership or be redirected toward the incoming administration's priorities.

The exception worth flagging: defense and national-security AI work will almost certainly accelerate. The DoD's AI initiatives, the IC's AI investments, and CISA's AI-related cybersecurity work all align with the incoming administration's stated priorities. If you advise clients in those sectors, expect more activity, not less.

State acceleration is the bigger story

The federal pivot creates a vacuum that state legislators and AGs are already preparing to fill. Three things to watch:

  1. The 2025 legislative session. Twenty-plus states have AI legislation drafted or pre-filed. Texas (TRAIGA), Connecticut, Virginia, New York (RAISE Act), and Illinois are the ones we are watching most closely. The Colorado model — risk-based, AG-enforced, NIST-anchored — is going to be the dominant template. California will pass another frontier-model bill, probably in a different shape than SB 1047 given the working group's likely recommendations.
  2. State AG enforcement of existing law. Multiple state AGs have already signaled that consumer-protection statutes (UDAP) and existing anti-discrimination statutes apply to AI deployments. The Texas AG, for example, has been particularly active on AI healthcare claims. Expect more of this — state AG enforcement of existing statutes, with AI-specific allegations, even where AI-specific statutes do not yet exist.
  3. The patchwork problem becomes acute. Multistate compliance is going to get noticeably harder in 2025. The differences between Colorado, the eventual Texas TRAIGA, and California's expected successor to SB 1047 will be more than cosmetic. Practitioners advising national operators should start preparing for parallel-compliance work.

EU divergence

Without federal coordination, U.S. policy will drift further from the EU AI Act as it phases in. We do not see this as a near-term enforcement risk for U.S. companies — the EU AI Act applies based on EU market presence, not on home-country regulation — but it does mean that the U.S./EU diplomatic architecture around AI policy will be less productive in 2025-26 than it was in 2023-24. The U.S.-EU Trade and Technology Council (TTC) has already lost momentum; we expect that to continue.

A practical implication: the EU's code of practice for general-purpose AI models with systemic risk (the Article 51 classification), which is being negotiated this fall and winter, is going to have to land in a context where U.S. labs face less domestic pressure than they did six months ago. That changes the negotiating dynamic and could push the code in either direction depending on which labs choose to engage.

What this means for compliance programs

Our advice to clients planning for 2025 is to plan around the states: assume Colorado-style, AG-enforced obligations in multiple jurisdictions, and begin the parallel-compliance mapping now rather than waiting for federal clarity that is unlikely to arrive.

The next eighteen months are going to be messier than the last eighteen, but the mess will be dispersed across more jurisdictions rather than concentrated in fewer.