News

California: First US Law on AI Chatbots (SB 243)

Article Highlights:
  • California becomes the first US state to regulate AI companion chatbots, with SB 243 taking effect in 2026
  • Mandatory age verification and anti-suicide protocols for platforms like OpenAI, Meta and Character AI
  • Fines of up to $250,000 per violation for illegal deepfakes; sexually explicit content blocked for minors
  • Legislation prompted by the deaths of teenagers following conversations with AI chatbots
  • OpenAI implements parental controls and self-harm detection systems
  • AI chatbots cannot present themselves as healthcare professionals under new law
  • States like Illinois, Nevada and Utah ban AI chatbots as substitutes for mental health care

Introduction

California has taken a historic step in artificial intelligence regulation, becoming the first American state to impose specific rules on AI companion chatbots. Governor Gavin Newsom signed bill SB 243, groundbreaking legislation that requires tech companies to implement stringent safety protocols to protect vulnerable users, particularly minors. This regulation represents a direct response to a series of tragic incidents that have highlighted the risks associated with unregulated use of AI-based virtual assistants.

Context: Why Regulation Is Needed

SB 243 is legislation that mandates companies operating in the AI companion chatbot sector to adopt mandatory safety measures to protect children and vulnerable users from risks associated with these emerging technologies. The regulation was introduced in January by Senators Steve Padilla and Josh Becker, gaining urgency after dramatic events that shocked public opinion. Among these was the tragic death of young Adam Raine, who took his own life after engaging in prolonged suicidal conversations with OpenAI's ChatGPT.

The legislative push was further strengthened by leaked internal documents that reportedly revealed how Meta's chatbots were programmed to engage in "romantic" and "sensual" conversations with underage users. More recently, a Colorado family filed a lawsuit against Character AI after their 13-year-old daughter took her own life following a series of problematic and sexualized exchanges with the platform's chatbots. These episodes highlighted the urgency of establishing clear boundaries for a rapidly expanding industry that remains largely unsupervised.

Key Provisions of SB 243

The California regulation, which will take effect on January 1, 2026, establishes specific operational requirements that companies must meet. Among the most significant measures is the obligation to implement age verification systems to prevent minors from accessing inappropriate content. Platforms must also display explicit warnings regarding the use of chatbots and social media, making it clear to users that they are interacting with artificial intelligence systems and not real human beings.

A crucial aspect concerns the prevention of self-harming behaviors: companies will be required to develop specific protocols to address situations involving suicidal thoughts or self-harm. These protocols must be shared with the state's Department of Public Health, along with detailed statistics on how the service provides notifications to crisis prevention centers. The law explicitly prohibits chatbots from presenting themselves as qualified healthcare professionals, a fundamental distinction to prevent users from relying on artificially generated medical advice.

Regarding child protection, SB 243 requires platforms to offer break reminders to minor users and prevent them from viewing sexually explicit images generated by chatbots. The law also introduces stricter penalties for those who profit from illegal deepfakes, with fines up to $250,000 per violation. This package of measures aims to create a safer digital environment while maintaining space for responsible technological innovation.

"Emerging technology like chatbots and social media can inspire, educate, and connect — but without real guardrails, technology can also exploit, mislead, and endanger our kids. We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability."

Gavin Newsom, Governor of California

Tech Companies' Response

Several AI chatbot platforms have already begun adopting preventive measures in anticipation of the new legislation. OpenAI recently implemented parental controls, content protections, and a self-harm detection system for minors using ChatGPT. Replika, a platform designed for adult users over 18, has stated it dedicates "significant resources" to safety through content filtering systems and guardrails that direct users to trusted crisis resources, committing to comply with current regulations.

Character AI has communicated that its chatbot includes a disclaimer warning users that all chats are AI-generated and fictional. A company spokesperson emphasized the company's openness to working with regulators and lawmakers as they develop regulations for this emerging sector, ensuring compliance with SB 243. These responses demonstrate how the industry is beginning to recognize the need to balance innovation and social responsibility, though the practical effectiveness of these measures remains to be seen once the law becomes fully operational.

National Implications and Future Outlook

Senator Padilla called the bill "a step in the right direction" toward establishing guardrails on "an incredibly powerful technology." He emphasized the urgency of acting before the window of opportunity closes, expressing hope that other states will recognize the risks and follow California's example. The conversation about AI chatbot regulation is indeed active across the country, but the federal government has not yet acted, leaving states with the burden of protecting the most vulnerable citizens.

SB 243 represents the second significant AI regulation enacted by California in recent weeks. On September 29, Governor Newsom signed SB 53, which establishes new transparency requirements for large AI companies such as OpenAI, Anthropic, Meta, and Google DeepMind. This law mandates major AI labs to be transparent about safety protocols and ensures whistleblower protections for employees of these companies.

Other American states are already moving in the same direction. Illinois, Nevada, and Utah have passed laws that restrict or completely ban the use of AI chatbots as substitutes for professional mental health care. This legislative patchwork suggests growing consensus on the need to regulate an industry that, while offering significant potential, presents concrete risks when left unsupervised. California, with its technological leadership and economic size, could serve as a model for future federal regulation.

Conclusion

The approval of SB 243 marks a crucial moment in the evolution of artificial intelligence regulation in the United States. California has set an important precedent, demonstrating that it is possible to promote technological innovation while keeping the protection of the most vulnerable users at the center. With implementation scheduled for January 2026, the AI chatbot industry will need to adapt to new legal responsibilities and safety standards. The real challenge will be ensuring these measures prove effective in practice, genuinely protecting minors without stifling the development of technologies that can offer concrete benefits when used responsibly. The path toward balanced AI regulation has just begun, and California's approach could inspire similar regulations across the country and beyond.

FAQ

What does California's SB 243 law require for AI chatbots?

SB 243 requires AI chatbot companies to implement age verification, artificial interaction warnings, protocols to prevent self-harm and suicide, break reminders for minors, and blocking of sexually explicit content, with fines up to $250,000 for illegal deepfakes.

When does California's AI chatbot regulation take effect?

The SB 243 law will take effect on January 1, 2026, giving AI chatbot companies sufficient time to implement the required safety protocols.

Which companies are affected by the AI chatbot law?

The regulation applies to all companies operating AI companion chatbots, from major labs like OpenAI, Meta, Google DeepMind, and Anthropic, to specialized startups like Character AI and Replika.

Why did California decide to regulate AI chatbots?

The decision was motivated by tragedies including teenagers' deaths after problematic conversations with AI chatbots and documents showing inappropriate interactions between chatbots and minors, highlighting concrete risks for vulnerable users.

Can AI chatbots replace mental health professionals?

No, according to SB 243 AI chatbots cannot present themselves as qualified healthcare professionals. States like Illinois, Nevada, and Utah have also banned or restricted the use of AI chatbots as substitutes for professional mental health care.

How are AI chatbot companies preparing for the new law?

OpenAI has introduced parental controls and self-harm detection systems, Replika has implemented content filters and crisis guardrails, while Character AI has added disclaimers specifying the artificial nature of conversations.
