2025 in AI Law: Five Trends That Defined the Year
2025 was the year AI law stopped being primarily aspirational and became operational. Compliance deadlines actually arrived, regulators started bringing enforcement actions, and the first wave of agent-era litigation reached the merits. This is our year-end retrospective. Five trends, each with implications for how 2026 plays out.
1. The federal pivot, mostly absorbed
The most-anticipated story of the year was the rescission of EO 14110 and the federal AI policy reset. The actual impact has been more bounded than the day-one headlines suggested. The pieces that mattered most — NIST work product, sectoral agency rulemakings with independent statutory bases, export controls — survived. The pieces that were lost — DPA-anchored reporting, OMB M-24-10 uniformity, the safety-focused framing — have been partially replaced by alternative governance structures.
The Frontier Artificial Intelligence Safety and Innovation Act, in Senate markup as we go to publication, is the most important federal AI legislation since the 2022 CHIPS Act. If it passes (we think it will, in modified form), 2026 will be the year federal AI safety regulation has a coherent statutory shape. If it does not, the federal landscape stays fragmented and the state-level acceleration continues.
Either way, the predicted catastrophic federal-policy collapse did not happen. The base level of U.S. federal AI engagement has continued; the framing has shifted; specific programs have been reorganized. Practitioners should plan for normal regulatory engagement rather than for a deregulatory free-fire zone.
2. The GPAI compliance cycle: harder than expected, manageable
The August 2 deadline for GPAI-provider obligations under the EU AI Act was the largest operational milestone in AI compliance to date. The frontier providers met the deadline. The documentation that landed was, on average, less detailed than regulators wanted and more detailed than providers had hoped to publish. Equilibrium will be reached gradually through Article 91 information requests and quiet compliance dialogues.
The most consequential unresolved questions:
- How aggressive will the AI Office be on the "state-of-the-art" obligation for honoring rightsholders' opt-outs, especially on retroactive removal from already-trained models?
- How will the systemic-risk evaluations that have been submitted privately translate into public expectations and enforcement?
- How will the downstream-deployer documentation regime hold up under stress, particularly for high-risk-system deployers working toward the August 2026 deadline?
For non-frontier providers — the smaller GPAI players who do not exceed the systemic-risk threshold — the obligations have been onerous but workable. The infrastructure costs are real but predictable.
3. State law fragmentation became the dominant U.S. regulatory story
Colorado SB 24-205 takes effect February 1, 2026. Texas TRAIGA takes effect January 1, 2026. California's AB 2013, SB 942, and now-operative ADM regulations all took effect or expanded their footprint during 2025. Connecticut, Virginia, and New York all have meaningful AI legislation freshly enacted or in late legislative stages. The patchwork is no longer hypothetical.
The patchwork's defining feature is that the structural elements are similar across states (developer/deployer split, AG enforcement, NIST-anchored affirmative defense) while the substantive details diverge (covered systems, consequential decisions, behavioral manipulation prongs, sandboxes, government coverage). Multistate compliance has become a real undertaking.
For multinational compliance, a useful triangulation is emerging: build to the EU AI Act for most substantive controls, layer Colorado's affirmative-defense framework for U.S. risk management, and track Texas-style additions (behavioral manipulation, government coverage) where applicable. Doing this well is hard but achievable.
4. The first generation of AI-era litigation reached the merits
Several long-running cases moved to substantive postures in 2025. NYT v. OpenAI survived the motion to dismiss and is in discovery headed toward summary judgment. The first AI-agent product-liability cases produced their first appellate filings (no published merits decisions yet, but expected in 2026). The SEC brought its first major AI-washing case against an issuer. And state AG enforcement of existing UDAP statutes against AI-specific deceptive practices expanded considerably.
What we are learning from this body of litigation:
- Doctrinal frameworks that already existed are doing more of the work than we would have predicted. Common-law negligence, products-liability principles, Sections 10(b) and 17(a), UDAP statutes — none of them were designed for AI, and they are absorbing AI cases adequately.
- Discovery is expensive and will drive settlement in many cases. The cases being litigated to a merits resolution are the ones where neither side can afford to concede the doctrinal precedent.
- The first appellate decisions in the AI-agent product-liability space, due in 2026, will be the most consequential common-law developments of the next few years.
5. The agent layer arrived (legally)
The most genuinely new substantive trend of 2025 was AI agents moving from concept to legal subject matter. We have written about it from the product-liability angle (September), but the legal questions span more terrain than that:
- Securities-law treatment of AI-agent-driven trading — early SEC inquiries, no formal action yet, but the question is being asked.
- Healthcare regulatory treatment of AI agents that interact with patients — FDA has an open advance notice of proposed rulemaking (ANPR).
- Employment-law questions about AI agents acting as substitutes for or supplements to human workers — EEOC and DOL have both started workshopping the issues.
- Common-law agency doctrine extension — the question we identified in September of whether AI agents fit into existing agency-law doctrines.
- Fiduciary-duty analyses — early in 2026 we will write about a Texas state court decision that flirted with fiduciary characterization of a consumer-facing AI agent, which may foreshadow more.
The agent layer is going to be the most fertile area of AI legal development in 2026-27. We are going to spend a lot of time on it.
What's coming in 2026
Looking ahead, the milestones we are watching:
- January 1: Texas TRAIGA takes effect; California's new laws take effect.
- February 1: Colorado SB 24-205 takes effect.
- February-March: First merits-stage decisions in AI-agent product-liability cases (expected).
- Q1: Senate vote on the Frontier AI Safety and Innovation Act (expected).
- Spring 2026: First public AI Office enforcement decisions on Article 5 violations.
- Summer 2026: Summary judgment in NYT v. OpenAI (expected).
- August 2, 2026: EU AI Act high-risk system regime takes effect.
- Throughout: state AG enforcement of state AI statutes; ongoing patchwork-management work.
2024 was the framing year. 2025 was the operational year. 2026 is going to be the consequence year — when the obligations that have been built start producing measurable enforcement, the first appellate decisions consolidate or fragment doctrine, and the bigger statutory questions get answered.
Thank you for reading. We will be back in mid-January with our first 2026 post.