California's New AI Laws Take Effect: AB 2013, SB 942, and Friends

January 1 brought a stack of new California obligations into effect. The training-data disclosure rule (AB 2013), the AI content labeling regime (SB 942), the automated decision-making rules under the CCPA regulations, and the AI healthcare provider disclosure rule (AB 3030) all started to bite at once. The cumulative compliance load is significant. This post walks through what changed, what remains unclear, and where compliance teams should focus first.

AB 2013: training-data disclosure

Effective January 1, AB 2013 requires developers of generative AI models or services made available to Californians on or after January 1, 2022 to publish, on their websites, a "high-level summary of the datasets used in the development of the generative artificial intelligence system or service." The enumerated elements include, among others, the datasets' sources or owners, a description of each dataset, the number of data points, the time period of collection, and whether the datasets include copyrighted, licensed, or personal information.

The disclosure obligations apply broadly: not just to frontier models, but to any developer of a "generative AI system or service" available to Californians. The retroactive January 2022 reach captures most current commercial generative AI systems. The "high-level" qualifier is doing some protective work — providers will lean on it to limit detail — but the enumerated items leave less room for general summary than the EU AI Act template.

Compared to the EU AI Act training-data summary obligations we covered in August 2025, AB 2013 is meaningfully more granular in some respects (sources or owners; whether the datasets include copyrighted content as a yes/no rather than aggregated category) and less granular in others (no specific filtering or processing details). For multinational compliance, the right move is a single summary that satisfies both regimes — but the union is larger than either.
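To make the "union" approach concrete, a compliance team might keep a single machine-readable summary whose fields cover both regimes and lint it for the California-required elements before publication. The sketch below is illustrative only; every field name and dataset value is an assumption for illustration, not statutory language:

```python
# Illustrative skeleton of a combined training-data summary covering both
# AB 2013's enumerated items and an EU AI Act-style public summary.
# All field names and values are assumptions, not statutory text.
combined_summary = {
    "datasets": [
        {
            "name": "example-web-corpus",           # hypothetical dataset
            "sources_or_owners": ["Common Crawl"],  # AB 2013: sources or owners
            "description": "Filtered public web text",
            "data_point_count": "~1B documents",    # AB 2013: number of data points
            "collection_period": "2019-2023",       # AB 2013: time period
            "contains_copyrighted": True,           # AB 2013: yes/no, vs. EU aggregate
            "contains_personal_information": False,
            "purchased_or_licensed": False,
        }
    ],
    # EU-style processing description (not an AB 2013 enumerated item)
    "processing_details": "Deduplication and quality filtering applied",
}

def missing_fields(summary: dict, required: list[str]) -> list[str]:
    """Return required fields absent from any dataset entry."""
    return [
        field
        for field in required
        for dataset in summary["datasets"]
        if field not in dataset
    ]

# California-required elements the EU template alone would not guarantee.
REQUIRED = ["sources_or_owners", "data_point_count",
            "collection_period", "contains_copyrighted"]
print(missing_fields(combined_summary, REQUIRED))  # -> []
```

Running a check like this before publication catches summaries that have drifted toward the EU template and dropped California-specific elements.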

SB 942: AI content labeling

The California AI Transparency Act, effective January 1, requires "covered providers" of generative AI systems with one million or more monthly users to provide manifest disclosures identifying AI-generated content, latent disclosures embedded in that content, and a free public AI detection tool.

The "manifest" and "latent" framing is the operationally important part. Manifest disclosures are user-facing labels; latent disclosures are embedded metadata that survives downstream use. Both are required.

The C2PA Content Credentials standard has been the de facto template for latent-disclosure compliance. Major providers — including all the U.S. frontier labs — were already implementing it for image and video outputs by late 2024. Implementation for text outputs has been slower; SB 942's effective date is forcing the issue. Several providers are using statistical watermarking schemes (provenance-tracking signals embedded in model outputs) for text; how these schemes hold up under adversarial use is being tested.
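For readers unfamiliar with how statistical text watermarks work, here is a minimal sketch in the spirit of published green-list schemes (e.g., the Kirchenbauer et al. approach): each generation step derives a pseudorandom "green" subset of the vocabulary from the previous token, and sampling is biased toward green tokens. Everything here (the toy vocabulary, the green fraction, the hash seeding, the always-green sampling) is an illustrative assumption, not any provider's actual scheme:

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary
GAMMA = 0.5  # fraction of the vocabulary marked "green" at each step

def green_list(prev_token: str) -> set[str]:
    """Deterministically derive this step's green list from the previous token."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(GAMMA * len(VOCAB))))

def generate_watermarked(n_tokens: int, start: str = "tok0") -> list[str]:
    """Toy 'model' that always samples from the green list, embedding a strong
    signal. A real scheme instead softly biases the model's logits toward
    green tokens, trading signal strength against output quality."""
    out = [start]
    rng = random.Random(42)
    for _ in range(n_tokens):
        out.append(rng.choice(sorted(green_list(out[-1]))))
    return out
```

The adversarial-robustness question flagged above is visible even in the toy: paraphrasing replaces tokens, which dilutes the green-token excess the detector relies on.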

The detection-tool obligation is genuinely novel. Providers must publish a free tool that the public can use to test content against the provider's manifest and latent disclosures. The tool is essentially an inverse of the labeling obligation — a way for downstream users to verify provenance. Several providers have rushed implementations to launch by January; quality varies.
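For text outputs carrying a statistical watermark of the green-list variety mentioned above, the verification side reduces to a hypothesis test: recompute each step's pseudorandom "green" vocabulary subset and z-score the fraction of tokens that landed in it. A toy, self-contained sketch, not SB 942-compliant tooling (vocabulary, green fraction, and seeding are illustrative assumptions):

```python
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary
GAMMA = 0.5  # expected green-token fraction for unwatermarked text

def green_list(prev_token: str) -> set[str]:
    """Deterministically derive each step's green list from the previous token."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(GAMMA * len(VOCAB))))

def detect(tokens: list[str]) -> float:
    """z-score of observed green-token hits against the GAMMA baseline.
    Large positive values indicate a watermark; near zero indicates none."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))
```

A public detection tool wraps a test like this (or, for C2PA-style media, a manifest signature check) behind an upload form; the statutory obligation is the free public access, not any particular statistic.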

CCPA ADM regulations

The California Privacy Protection Agency's automated decision-making technology regulations took final effect January 1 after several years of revision. For businesses using ADM technology to make "significant decisions" about consumers, the substantive obligations center on pre-use notices, consumer opt-out rights, and access rights to information about how the technology was used.

The "significant decisions" definition is similar to but narrower than Colorado's "consequential decisions" — it covers financial services, housing, employment, healthcare services, education, and access to essential goods or services.

The intersection with the broader patchwork is going to require careful navigation. A national operator using ADM may now need to satisfy the CPPA regulations for California consumers, Colorado's consequential-decision requirements, and the other state and local automated-decision rules layered on top. No two of these regimes are identical, so the unified-compliance build is non-trivial.

AB 3030: healthcare AI disclosure

AB 3030 requires healthcare providers using generative AI to communicate with patients about clinical information to disclose that the communication was AI-generated and provide instructions for how to contact a human provider. The obligation applies to physicians, surgeons, hospitals, and clinics. It does not apply to AI use that is reviewed and signed off by a licensed provider before transmission.

The exception is the operationally consequential part. Most current AI-assisted clinical messaging in California uses a human-review-before-send architecture; those workflows are not subject to AB 3030's disclosure obligation. Workflows that use AI in front-line patient communication without human review are subject to the disclosure obligation. Most institutional providers have been using the human-review architecture for liability reasons independent of AB 3030; the bill formalizes the de facto standard.
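In engineering terms, the carve-out is a routing rule: the disclosure attaches only when a message is AI-generated and was not reviewed pre-send by a licensed provider. A schematic sketch (class and field names are illustrative assumptions, and the disclosure string is placeholder text, not statutory language):

```python
from dataclasses import dataclass

# Placeholder disclosure text for illustration, not the statutory wording.
AI_DISCLOSURE = (
    "This message was generated by artificial intelligence. "
    "To reach a human provider, contact the clinic directly."
)

@dataclass
class ClinicalMessage:
    body: str
    ai_generated: bool
    human_reviewed: bool  # reviewed and approved by a licensed provider pre-send

def prepare_for_send(msg: ClinicalMessage) -> str:
    """Attach the disclosure only when the exemption does not apply:
    AI-generated clinical communication with no pre-send licensed review."""
    if msg.ai_generated and not msg.human_reviewed:
        return AI_DISCLOSURE + "\n\n" + msg.body
    return msg.body
```

Note that the `human_reviewed` flag is exactly what needs audit-grade documentation: a workflow that sets it without a recorded reviewer identity and timestamp will struggle to support the carve-out later.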

Other January 1 changes worth flagging

Action items

For California-exposed clients, the right approach is a coordinated compliance review across all of the new obligations. The most common errors I am seeing in the first two weeks:

  1. AB 2013 disclosure summaries that look like EU AI Act summaries but miss specific California-required elements (especially the data-points-count and the data-period information).
  2. SB 942 latent-disclosure implementations that satisfy C2PA but fail to surface the underlying generation source in the manifest disclosure to ordinary users.
  3. CCPA ADM compliance that satisfies the CPPA regulations but contradicts representations made in privacy notices, creating UDAP exposure.
  4. AB 3030 reliance on the human-review carve-out without documenting the review workflow sufficiently to support the carve-out under audit.

Most of these are correctable in the first quarter. The cost is meaningful but the compliance shape is now clear.