The National Times - California enacts AI safety law targeting tech giants

California enacts AI safety law targeting tech giants



California Governor Gavin Newsom has signed into law groundbreaking legislation requiring the world's largest artificial intelligence companies to publicly disclose their safety protocols and report critical incidents, state lawmakers announced Monday.


Senate Bill 53 marks California's most significant move yet to regulate Silicon Valley's rapidly advancing AI industry while also maintaining its position as a global tech hub.

"With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails," State Senator Scott Wiener, the bill's sponsor, said in a statement.

The new law represents a successful second attempt by Wiener to establish AI safety regulations; Newsom vetoed his previous bill, SB 1047, following furious pushback from the tech industry.

It also comes after a failed attempt by the Trump administration to block states from enacting AI regulations, on the argument that a patchwork of state rules would create regulatory chaos and slow US innovation in the race with China.

The new law requires major AI companies to publicly disclose their safety and security protocols, in redacted form to protect intellectual property.

They must also report critical safety incidents -- including model-enabled weapons threats, major cyber-attacks, and loss of model control -- to state officials within 15 days.

The legislation also establishes whistleblower protections for employees who reveal evidence of dangers or violations.

According to Wiener, California's approach differs from the European Union's landmark AI Act, which requires private disclosures to government agencies.

SB 53, meanwhile, mandates public disclosure to ensure greater accountability.

In what advocates describe as a world-first provision, the law requires companies to report instances where AI systems engage in dangerous deceptive behavior during testing.

For example, if an AI system lies about the effectiveness of controls designed to prevent it from assisting in bioweapon construction, developers must disclose the incident if it materially increases catastrophic harm risks.

The working group behind the law was led by prominent experts including Stanford University's Fei-Fei Li, known as the "godmother of AI."

C.Blake--TNT
