2025 in AI Law: Five Trends That Defined the Year

2025 was the year AI law stopped being primarily aspirational and became operational. Compliance deadlines actually arrived, regulators started bringing enforcement actions, and the first wave of agent-era litigation reached the merits. This is our year-end retrospective. Five trends, each with implications for how 2026 plays out.

1. The federal pivot, mostly absorbed

The most-anticipated story of the year was the rescission of EO 14110 and the federal AI policy reset. The actual impact has been more bounded than the day-one headlines suggested. The pieces that mattered most — NIST work product, sectoral agency rulemakings with independent statutory bases, export controls — survived. The pieces that were lost — DPA-anchored reporting, OMB M-24-10 uniformity, the safety-focused framing — have been partially replaced by alternative governance structures.

The Frontier Artificial Intelligence Safety and Innovation Act, in Senate markup as we go to publication, is the most important federal AI legislation since the 2022 CHIPS Act. If it passes (we think it will, in modified form), 2026 will be the year federal AI safety regulation has a coherent statutory shape. If it does not, the federal landscape stays fragmented and the state-level acceleration continues.

Either way, the predicted catastrophic federal-policy collapse did not happen. The base level of U.S. federal AI engagement has continued; the framing has shifted; specific programs have been reorganized. Practitioners should plan for normal regulatory engagement rather than for a deregulatory free-fire zone.

2. The GPAI compliance cycle: harder than expected, manageable

The August 2 GPAI obligations were the largest operational milestone in AI compliance to date. The frontier providers met the deadline. The documentation that landed was, on average, less detailed than the regulators wanted and more detailed than the providers had hoped to publish. Equilibrium is going to be reached gradually through Article 91 information requests and quiet compliance dialogues.

The most consequential unsolved questions:

For non-frontier providers — the smaller GPAI players who do not exceed the systemic-risk threshold — the obligations have been onerous but workable. The infrastructure costs are real but predictable.

3. State law fragmentation became the dominant U.S. regulatory story

Colorado SB 24-205 takes effect February 1, 2026. Texas TRAIGA takes effect January 1, 2026. California's AB 2013, SB 942, and now-operative ADM regulations all took effect or expanded their footprint during 2025. Connecticut, Virginia, and New York all have meaningful AI legislation in late stages or freshly enacted. The patchwork is no longer hypothetical.

The patchwork's main feature is that the structural elements are similar across states (developer/deployer split, AG enforcement, NIST-anchored affirmative defense) while the substantive details diverge (covered systems, consequential decisions, behavioral manipulation prongs, sandboxes, government coverage). Multistate compliance has become a real undertaking.

For multinational compliance, a useful triangulation is emerging: build to the EU AI Act for most substantive controls, layer Colorado's affirmative-defense framework for U.S. risk management, and track Texas-style additions (behavioral manipulation, government coverage) where applicable. Doing this well is hard but achievable.

4. The first generation of AI-era litigation reached the merits

Several long-running cases moved to substantive postures in 2025. NYT v. OpenAI survived a motion to dismiss and is in discovery toward summary judgment. The first AI-agent product-liability cases produced their first appellate filings (no published merits decisions yet, but expected in 2026). The SEC brought its first major AI-washing case against an issuer. State AG enforcement of existing UDAP statutes against AI-specific deceptive practices expanded considerably.

What we are learning from this body of litigation:

5. The agent layer arrived (legally)

The most genuinely new substantive trend of 2025 was AI agents moving from concept to legal subject matter. We have written about it from the product-liability angle (September), but the legal questions span more terrain than that:

The agent layer is going to be the most fertile area of AI-legal development in 2026-27. We are going to spend a lot of time on it.

What's coming in 2026

Looking ahead, the milestones we are watching:

2024 was the framing year. 2025 was the operational year. 2026 is going to be the consequence year — when the obligations that have been built start producing measurable enforcement, the first appellate decisions consolidate or fragment doctrine, and the bigger statutory questions get answered.

Thank you for reading. We will be back in mid-January with our first 2026 post.