Luiza's Newsletter
California's Two Most Important AI Laws

Plus: Test your knowledge about global AI regulation | Edition #241

Luiza Jarovsky, PhD
Oct 15, 2025

👋 Hi everyone, Luiza Jarovsky here. Welcome to our 241st edition, trusted by over 81,200 subscribers worldwide. It's great to have you here!

🔥 Paid subscribers have full access to all my previous essays and curations here, and can send me questions to cover in future editions.


🎓 This is how I can support your learning and upskilling journey in AI:

  • Join my AI Governance Training [apply for a discounted seat here]

  • Strengthen your team's critical AI expertise with a group plan

  • Receive our job alerts for open roles in AI governance and privacy

  • Sign up for weekly educational resources in our Learning Center

  • Discover your next read in AI and beyond with our AI Book Club


👉 A special thanks to Privatemode, this edition's sponsor:

AI workloads often include highly sensitive data, yet providers still expect you to simply trust them. Privatemode runs AI models entirely inside encrypted environments, keeping even the cloud provider blind to your data. It’s confidentiality that’s proven by cryptography – not by policy. Explore it with a free plan that includes 1M tokens per month.


*To support us and reach over 81,200 subscribers, become a sponsor.


California’s Two Most Important AI Laws

Two weeks ago, California, which hosts 32 of the world’s top 50 AI companies, became the first U.S. state to regulate the safety of the most powerful AI models. Two days ago, it became the first U.S. state to regulate AI companions.

These are not California's only AI laws, but they are key pieces of legislation that regulate challenging topics and may ultimately set the tone for AI regulation efforts in the U.S. and globally. Everybody should be aware of them.

In today's edition, I discuss these two AI laws, as well as some of the open questions, challenges, and gray areas that remain unresolved.

-

SB 53: Frontier AI Models

Let's start with SB 53. Also called the “Transparency in Frontier Artificial Intelligence Act,” it was approved on September 29 and covers large frontier developers. It takes effect on January 1, 2026.

The main areas covered are:

Transparency: It requires large frontier developers to publish a framework on their websites describing how they have incorporated national standards, international standards, and industry-consensus best practices into their frontier AI frameworks.

Innovation: It establishes a new consortium within the Government Operations Agency to develop a framework for creating a public computing cluster.

Safety: It creates a new mechanism for frontier AI companies and the public to report potential critical safety incidents to California’s Office of Emergency Services.

Accountability: It protects whistleblowers who disclose significant health and safety risks posed by frontier models, and creates a civil penalty for noncompliance, enforceable by the Attorney General’s office.

Responsiveness: It directs the California Department of Technology to annually recommend appropriate updates to the law based on multistakeholder input, technological developments, and international standards.

-

From a global perspective, the intersection of SB 53 with the EU AI Act will create an interesting compliance puzzle for AI governance professionals to solve, as many companies will be subject to both laws.

For example, regarding scope, the EU AI Act adopts the term “general-purpose AI model,” which it defines as:

“an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”

General-purpose AI models can be classified as posing systemic risk, according to the law’s criteria.

SB 53, on the other hand, opted for the term “foundation model,” which it defines as:

“An AI model that is all of the following:
(1) Trained on a broad data set.
(2) Designed for generality of output.
(3) Adaptable to a wide range of distinctive tasks.”

And “frontier model,” which it defines as:

“a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations.”

The definitions intersect, but do not match.

The European Commission has published guidelines on the scope of the obligations for providers of general-purpose AI models, clarifying which models are considered “general-purpose” AI models.

Californian authorities should also publish more technical guidelines on what models will be classified as “foundation models,” especially as the field continues to evolve.
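
In the meantime, to make the compute side of these definitions concrete, here is a minimal, purely illustrative Python sketch. It relies on the common scaling-law rule of thumb that training compute is roughly 6 × parameters × training tokens (my assumption; neither law prescribes an estimation method) and compares the estimate against SB 53's 10^26 frontier-model threshold and the EU AI Act's 10^25 FLOP presumption for general-purpose AI models with systemic risk:

    SB53_FRONTIER_THRESHOLD_FLOPS = 1e26      # SB 53: "frontier model" compute line
    EU_SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25   # EU AI Act: systemic-risk presumption

    def estimate_training_flops(parameters: float, training_tokens: float) -> float:
        """Back-of-the-envelope estimate: ~6 FLOPs per parameter per training token."""
        return 6 * parameters * training_tokens

    def threshold_check(parameters: float, training_tokens: float) -> dict:
        flops = estimate_training_flops(parameters, training_tokens)
        return {
            "estimated_flops": flops,
            "above_eu_systemic_risk_presumption": flops > EU_SYSTEMIC_RISK_THRESHOLD_FLOPS,
            "above_sb53_frontier_threshold": flops > SB53_FRONTIER_THRESHOLD_FLOPS,
        }

    # Hypothetical model: 400B parameters trained on 15T tokens -> ~3.6e25 FLOPs,
    # above the EU presumption but below SB 53's frontier-model threshold.
    print(threshold_check(400e9, 15e12))

In this toy example, the same model would trigger the EU systemic-risk presumption while remaining below California's frontier line, which is one reason the two regimes will have to be tracked separately. Remember, too, that SB 53's definition also requires the qualitative “foundation model” criteria to be met, not just the compute threshold.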

-

SB 243: Companion Chatbots

The second California AI law I would like to discuss today is SB 243, covering AI chatbot safeguards, which was approved two days ago and takes effect on January 1, 2026.

SB 243 focuses on protecting children interacting with AI companion chatbots. Key provisions include:

“(a) If a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human.”

-

“(b) An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including, but not limited to, by providing a notification to the user that refers the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide, or self-harm.”

-

“(c) An operator shall, for a user that the operator knows is a minor, do all of the following:
- Disclose to the user that the user is interacting with artificial intelligence.
- Provide by default a clear and conspicuous notification to the user at least every three hours for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human.
- Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.”
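
These duties map fairly directly onto product logic. Below is a minimal, purely illustrative Python sketch of how an operator might schedule the required notifications; the function and message names are my own, the self-harm check is assumed to come from a separate classifier, and the sketch covers only the crisis referral and the minor-specific notices quoted above, not the entire statute:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    BREAK_REMINDER_INTERVAL = timedelta(hours=3)  # "at least every three hours" for minors

    AI_DISCLOSURE = "You are talking with an AI companion chatbot, not a human."
    BREAK_REMINDER = "Reminder: consider taking a break. This chatbot is AI-generated, not human."
    CRISIS_REFERRAL = ("If you are thinking about suicide or self-harm, help is available: "
                       "in the U.S., call or text 988 (Suicide & Crisis Lifeline).")

    @dataclass
    class SessionState:
        user_is_known_minor: bool
        ai_disclosure_shown: bool = False
        last_break_reminder: datetime | None = None

    def notices_for_turn(state: SessionState, now: datetime, self_harm_detected: bool) -> list[str]:
        """Return the notifications that should accompany this chatbot turn."""
        notices = []

        # Subdivision (b): crisis referral whenever the user expresses suicidal
        # ideation or self-harm (detection itself is handled elsewhere).
        if self_harm_detected:
            notices.append(CRISIS_REFERRAL)

        if state.user_is_known_minor:
            # Subdivision (c): disclose the AI nature of the chatbot...
            if not state.ai_disclosure_shown:
                notices.append(AI_DISCLOSURE)
                state.ai_disclosure_shown = True
            # ...and remind the minor to take a break at least every three hours.
            if state.last_break_reminder is None:
                state.last_break_reminder = now
            elif now - state.last_break_reminder >= BREAK_REMINDER_INTERVAL:
                notices.append(BREAK_REMINDER)
                state.last_break_reminder = now

        return notices

The genuinely hard parts, such as how an operator comes to “know” a user is a minor and what counts as an expression of suicidal ideation, are precisely what this sketch takes for granted as inputs.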

-

The law’s definition of “companion chatbot,” as well as the exceptions it establishes, might limit its applicability. The law defines a companion chatbot as:

“(…) an AI system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions.”

-

My question here is: will ChatGPT and other general-purpose AI chatbots be covered by this law, even if they are not marketed as companions but people use them as such?

This question is extremely important, as horrifying cases such as the Adam Raine suicide make clear. If you remember this case, ChatGPT helped the teenager plan a “beautiful suicide”; his parents are now suing OpenAI. I wrote about it here.

The law’s exceptions do not make this any clearer, in part because they do not specify objective standards for assessing whether an AI chatbot “generates outputs that are likely to elicit emotional responses in the users.”

AI companies have realized that making AI chatbots more friendly, agreeable, and sycophantic increases user loyalty and dependency, so this is how various general-purpose AI systems (such as ChatGPT) are being fine-tuned today.

It is extremely important that laws aiming to regulate AI chatbot harm take that into consideration and do not frame “AI companions” too narrowly.

-

We know that AI regulation (or AI deregulation) is a key topic for the Trump Administration, as the recently launched America's AI Action Plan makes clear.

In parallel to the federal government's actions aimed at ‘winning the AI race’ and consolidating its global influence network, including mega projects, investments, partnerships, and tariffs, California appears to be setting itself apart. Why?

In recent months, it has accelerated its internal regulatory efforts, including those related to topics considered central to AI progress, such as the regulation of frontier models and AI chatbots, as I wrote above.

Will the federal administration attempt to curtail these efforts?

I will keep you posted.



Weekly Quiz: Global AI Regulation

AI regulation is a core topic for those who want to stay ahead and lead in AI. Even if you are not a lawyer or a compliance professional, if you are involved in AI development or deployment, AI rules will directly shape your work.

With that in mind, this week, I prepared 10 questions covering recent global AI regulation developments, which will offer you a glimpse into the AI governance zeitgeist and how different countries are choosing to approach the topic.

I have recently written about most of the topics covered in the quiz (paid subscribers can explore the archive to read more).

Are you ready to put your AI knowledge to the test?

⏰ This quiz will be available until next Wednesday.

👉 Paid subscribers can access it here:
