California's New AI Laws Take Effect: AB 2013, SB 942, and Friends
January 1 brought a stack of new California obligations into effect. The training-data disclosure rule (AB 2013), the AI content labeling regime (SB 942), the automated decision-making rules under the CCPA regulations, and the AI healthcare provider disclosure rule (AB 3030) all started to bite at once. The cumulative compliance load is significant. This post walks through what changed, what remains unclear, and where compliance teams should focus first.
AB 2013: training-data disclosure
Effective January 1, AB 2013 requires developers of generative AI systems or services made available to Californians on or after January 1, 2022, to publish, on their websites, a "high-level summary of the datasets used in the development of the generative artificial intelligence system or service." The summary must include:
- Sources or owners of the datasets.
- Description of how the datasets further the system's intended purpose.
- The number of data points.
- A description of the types of data points.
- Whether the datasets include any data protected by copyright, trademark, or patent, or whether they are entirely in the public domain, and whether they include personal information.
- Whether the datasets were purchased or licensed by the developer, and whether they were cleaned, processed, or otherwise modified.
- Time period during which data was collected, and the date of first use.
- Whether the datasets include data generated by AI.
The disclosure obligations apply broadly: not just to frontier models, but to any developer of a "generative AI system or service" available to Californians. The retroactive January 2022 reach captures most current commercial generative AI systems. The "high-level" qualifier is doing some protective work — providers will lean on it to limit detail — but the enumerated items leave less room for general summary than the EU AI Act template.
Compared to the EU AI Act training-data summary obligations we covered in August 2025, AB 2013 is meaningfully more granular in some respects (sources or owners; whether the datasets include copyrighted content as a yes/no rather than aggregated category) and less granular in others (no specific filtering or processing details). For multinational compliance, the right move is a single summary that satisfies both regimes — but the union is larger than either.
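For teams operationalizing the AB 2013 list, it can help to track the enumerated items as a structured per-dataset checklist. A minimal sketch in Python (the field names are my own shorthand, not statutory language):

```python
from dataclasses import dataclass, fields

@dataclass
class DatasetSummary:
    """One entry per dataset; fields track the AB 2013 enumerated items.

    Field names are illustrative shorthand, not the statute's wording.
    """
    sources_or_owners: str
    purpose_description: str
    data_point_count: str           # ranges are common where exact counts are impractical
    data_point_types: str
    includes_ip_protected_data: bool
    includes_personal_information: bool
    purchased_or_licensed: bool
    modified: bool
    collection_period: str
    first_used: str
    includes_synthetic_data: bool

def missing_fields(draft: dict) -> list[str]:
    """Flag enumerated items absent from a draft summary before publication."""
    required = {f.name for f in fields(DatasetSummary)}
    return sorted(required - draft.keys())
```

A pre-publication check like this is one way to catch the commonly missed elements (data-point counts, collection periods) before the summary goes live.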
SB 942: AI content labeling
The California AI Transparency Act, effective January 1, requires "covered providers" of generative AI systems with more than one million monthly visitors or users to provide:
- A free-to-use AI detection tool that allows users to assess whether content was generated by the provider's system.
- Manifest disclosures (visible to ordinary users) on AI-generated content.
- Latent disclosures (machine-readable metadata) on AI-generated content.
The "manifest" and "latent" framing is the operationally important part. Manifest disclosures are user-facing labels; latent disclosures are embedded metadata that survives downstream use. Both are required.
The C2PA Content Credentials standard has been the de facto template for latent-disclosure compliance. Major providers — including all the U.S. frontier labs — were already implementing it for image and video outputs by late 2024. Implementation for text outputs has been slower; SB 942's effective date is forcing the issue. Several providers are using statistical watermarking schemes (provenance-tracking signals embedded in model outputs) for text; how these schemes hold up under adversarial use is being tested.
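To make the statistical-watermarking idea concrete, here is a toy sketch of one widely discussed family of schemes (a keyed "green-list" partition of the vocabulary, not attributed to any particular provider): generation is biased toward "green" tokens chosen pseudo-randomly from the context, and detection counts the green fraction in suspect text.

```python
import hashlib

def is_green(prev_token: str, token: str, key: str = "demo-key") -> bool:
    """Pseudo-random 50/50 vocabulary split, keyed on the preceding token."""
    digest = hashlib.sha256(f"{key}:{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str], key: str = "demo-key") -> float:
    """Fraction of adjacent-token transitions landing in the green list."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(prev, tok, key) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

def looks_watermarked(tokens: list[str], threshold: float = 0.75) -> bool:
    """Ordinary text hovers near 0.5; watermark-biased generation runs higher."""
    return green_fraction(tokens) >= threshold
```

Real schemes run proper hypothesis tests over model tokenizations rather than a fixed threshold; the adversarial-robustness question noted above is precisely whether paraphrasing pushes the green fraction back toward 0.5.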
The detection-tool obligation is genuinely novel. Providers must publish a free tool that the public can use to test content against the provider's manifest and latent disclosures. The tool is essentially an inverse of the labeling obligation — a way for downstream users to verify provenance. Several providers have rushed implementations to launch by January; quality varies.
CCPA ADM regulations
The California Privacy Protection Agency's automated decision-making technology regulations took final effect January 1 after several years of revision. The substantive obligations on businesses using ADM technology to make "significant decisions" about consumers include:
- Pre-use notice describing the purpose, the technology, and the consumer's rights.
- Right to access information about how the ADM operates with respect to the consumer.
- Right to opt out of the ADM use, with limited exceptions.
- Risk assessment requirements for businesses processing personal information for ADM.
The "significant decisions" definition is similar to but narrower than Colorado's "consequential decisions" — it covers financial services, housing, employment, healthcare services, education, and access to essential goods or services.
The intersection with the broader patchwork is going to require careful navigation. A national operator using ADM may now need to:
- Comply with California's pre-use notice and opt-out rules under the CCPA regulations.
- Comply with Colorado SB 24-205's impact assessment and explanation rules.
- Comply with Texas TRAIGA's notification rules; the bill we covered in April 2025 likewise took effect January 1.
- For multinational operators: comply with the EU AI Act's parallel obligations.
No two of these regimes are identical. The unified-compliance build is non-trivial.
AB 3030: healthcare AI disclosure
AB 3030 requires healthcare providers using generative AI to communicate with patients about clinical information to disclose that the communication was AI-generated and provide instructions for how to contact a human provider. The obligation applies to physicians, surgeons, hospitals, and clinics. It does not apply to AI use that is reviewed and signed off by a licensed provider before transmission.
The exception is the operationally consequential part. Most current AI-assisted clinical messaging in California uses a human-review-before-send architecture; those workflows are not subject to AB 3030's disclosure obligation. Workflows that use AI in front-line patient communication without human review are subject to the disclosure obligation. Most institutional providers have been using the human-review architecture for liability reasons independent of AB 3030; the bill formalizes the de facto standard.
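For providers relying on the carve-out, the review step is only as defensible as its record-keeping. A hypothetical per-message audit record (field names are illustrative, not drawn from the statute):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ClinicalMessageReview:
    """Audit record for one AI-drafted patient message; illustrative only."""
    message_id: str
    draft_generated_at: datetime
    reviewer_license_id: str        # licensed provider who read the draft
    reviewed_at: datetime
    edited_before_send: bool
    sent_at: datetime

    def supports_carve_out(self) -> bool:
        """A licensed reviewer must have reviewed the draft before transmission."""
        return bool(self.reviewer_license_id) and self.reviewed_at <= self.sent_at
```

The key property is the ordering check: a record showing review after transmission, or no licensed reviewer at all, does not support the carve-out.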
Other January 1 changes worth flagging
- SB 1001 amendments expanding the chatbot disclosure obligation beyond the original electoral and commercial contexts.
- AB 1008 clarifying that the CCPA covers personal information stored in or generated by AI systems, resolving an interpretive question that had been litigated.
- Amendments to the California Consumer Privacy Act regulations addressing employee and B2B data, with specific carve-outs for AI training-related uses.
Action items
For California-exposed clients, the right approach is a coordinated compliance review across all of the new obligations. The most common errors I am seeing in the first two weeks:
- AB 2013 disclosure summaries that look like EU AI Act summaries but miss specific California-required elements (especially the data-points-count and the data-period information).
- SB 942 latent-disclosure implementations that satisfy C2PA but fail to surface the underlying generation source in the manifest disclosure to ordinary users.
- CCPA ADM compliance that satisfies the CPPA regulations but contradicts representations made in privacy notices, creating UDAP exposure.
- AB 3030 reliance on the human-review carve-out without documenting the review workflow sufficiently to support the carve-out under audit.
Most of these are correctable in the first quarter. The cost is meaningful but the compliance shape is now clear.