The DEEPFAKES Accountability Act, Take Three

After two failed runs in prior Congresses, a slimmed-down DEEPFAKES Accountability Act passed the House in February and now awaits Senate markup. The 2026 version drops the criminal provisions that doomed earlier iterations and instead leans on disclosure obligations and a private right of action. This post compares the three versions side by side and assesses what is likely to make it through.

A short legislative history

The DEEPFAKES Accountability Act has been introduced by Rep. Yvette Clarke in the 116th, 118th, and 119th Congresses, with predecessor versions going back to 2018. The 2019 version (introduced as H.R. 3230) was a comprehensive disclosure-and-criminal regime. It did not advance. The 2023 version (H.R. 5586) retained the criminal provisions but introduced more granular distinctions across deepfake types. It also did not advance. The 2026 version (H.R. 1244) is the most successful iteration: it cleared the House on a 271-160 vote with substantial bipartisan support, and it is the one we have to take seriously.

What the 2026 bill does

Three substantive provisions:

Disclosure obligations. The bill requires that "advanced technological false personations" — defined as audiovisual records that have been substantially edited or generated by AI in ways that depict an identifiable person doing something they did not do — bear an "irremovable visual disclosure" and "embedded digital watermark" identifying them as such. The technical specifications would be set by NIST in implementing regulations. Disclosures must be present from the point of creation; downstream removal is prohibited.
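The bill leaves the technical specifications to NIST rulemaking, so any concrete format is speculative. Still, the structure of the requirement can be sketched: a machine-readable record that carries the disclosure and is bound to the media content, so that stripping the record downstream (or altering the media out from under it) is detectable. Everything here is hypothetical; the field names and binding scheme are illustration, not drawn from the bill or any standard.

```python
# Illustrative sketch only: the actual technical specification will
# come from NIST implementing regulations. All names and fields here
# are hypothetical.
import hashlib
from dataclasses import dataclass, asdict

@dataclass
class DisclosureRecord:
    """A machine-readable 'embedded watermark' payload (hypothetical)."""
    generator: str      # tool that produced or edited the media
    created_at: str     # ISO 8601 timestamp, present from point of creation
    ai_generated: bool  # the core disclosure flag
    visible_label: str  # text of the on-screen "irremovable visual disclosure"

def attach_disclosure(media_bytes: bytes, record: DisclosureRecord) -> dict:
    """Bind the disclosure to the media via a content hash, so that
    removing the record or editing the media breaks verification."""
    payload = asdict(record)
    payload["content_sha256"] = hashlib.sha256(media_bytes).hexdigest()
    return payload

def verify_disclosure(media_bytes: bytes, payload: dict) -> bool:
    """Check that the embedded record still matches the media."""
    return (payload.get("content_sha256")
            == hashlib.sha256(media_bytes).hexdigest()
            and payload.get("ai_generated") is True)

media = b"...raw video bytes..."
rec = DisclosureRecord(
    generator="example-video-model",
    created_at="2026-02-14T12:00:00Z",
    ai_generated=True,
    visible_label="This video has been generated or altered by AI.",
)
payload = attach_disclosure(media, rec)
print(verify_disclosure(media, payload))         # intact record verifies: True
print(verify_disclosure(media + b"x", payload))  # altered media fails: False
```

The hash binding is the interesting design point for lawyers: it is what turns "downstream removal is prohibited" from an honor-system rule into something a platform can mechanically check.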

Sex-related deepfake provisions. The bill makes it unlawful to create or distribute non-consensual intimate-imagery deepfakes. This piece survives substantially from the 2019 version but is narrower in some respects (clearer scienter requirements) and broader in others (covers attempts).

Private right of action. The bill creates a federal private right of action for individuals depicted in non-consensual deepfakes, allowing recovery of actual damages, statutory damages of up to $150,000 per work, attorney's fees, and injunctive relief. This is the major affirmative addition over earlier versions.

What is gone: the criminal provisions. The 2019 version included felony criminal penalties for production and distribution of certain deepfake categories. The 2023 version retained criminal penalties only for non-consensual intimate-imagery deepfakes. The 2026 version retains no criminal penalties; the sex-deepfake prohibition is enforced civilly through the AG and private litigants.

Why this version is moving

Three reasons:

  1. The criminal-provision drag is gone. The criminal provisions in earlier versions drew opposition from civil-liberties advocates concerned about prosecutorial overreach and from the platforms concerned about the secondary-liability implications. The disclosure-and-civil-liability structure draws much narrower opposition.
  2. The state-law landscape has matured. By February 2026, all 50 states have some form of deepfake legislation in place. The patchwork has become difficult enough to navigate that industry support for federal preemption has grown. The 2026 bill includes a partial-preemption provision that displaces state criminal regimes (mostly redundant under the federal civil regime) while leaving state civil regimes in place.
  3. The technical infrastructure now exists. The C2PA Content Credentials standard, in widespread use by major content platforms by 2025, makes the embedded-watermark requirement operationally feasible in a way it was not in 2019.

Comparing the three versions

The 2019 version (H.R. 3230, 116th Congress): a comprehensive disclosure regime backed by felony criminal penalties for production and distribution of certain deepfake categories. It did not advance.

The 2023 version (H.R. 5586, 118th Congress): retained the disclosure framework, narrowed the criminal penalties to non-consensual intimate-imagery deepfakes, and introduced more granular distinctions across deepfake types. It also did not advance.

The 2026 version (H.R. 1244, 119th Congress): disclosure obligations with NIST-set technical specifications, a civilly enforced prohibition on non-consensual intimate-imagery deepfakes, a federal private right of action with statutory damages of up to $150,000 per work, partial preemption of state criminal regimes, and no criminal penalties. It passed the House 271-160.

Senate markup: what is likely to change

The Senate Commerce Committee has scheduled markup for early April. From conversations with people closer to the process, some changes are likely to land before the committee reports the bill, though the amended text is not yet clear.

I expect the bill to clear Senate Commerce by late spring and reach the Senate floor in summer. Final passage is plausible but not certain. If it passes, the operational effective date will probably be twelve to eighteen months after enactment to allow the NIST rulemaking and platform implementation work.

How it interacts with the state regimes

The partial-preemption framework matters. As drafted, the bill preempts state criminal deepfake statutes, which the drafters treat as mostly redundant once the federal civil regime is in place.

The bill does not preempt state civil regimes, including state private rights of action, which continue to operate alongside the federal claim.

For practitioners, the multi-layer compliance question will remain. The federal regime, if it lands, will be one layer; state regimes will continue to be another. The tactical implication: a single federal compliance posture will satisfy most of the labeling-related requirements, but state private-right-of-action exposure will remain. Plan accordingly.

For platforms and AI providers

The bill's NIST-anchored safe harbor will be the practical center of gravity for compliance. Platforms that implement C2PA-compatible content credentials and accept the implementing regulations will have a clear operational path; platforms that have not built that infrastructure face harder choices. AI tool providers, particularly providers of image and video generation tools, face an emerging expectation that they will supply watermarking or other provenance signals that downstream platforms can rely on.

This is a story we will return to as the Senate markup process produces clearer text.