Technology

The controversy behind SB 53, the California bill trying to prevent AI from building nukes

Madisony
Last updated: September 12, 2025 1:23 pm


When it comes to AI, as California goes, so goes the nation. The largest state in the US by population is also the central hub of AI innovation for the entire globe, home to 32 of the world’s top 50 AI companies. That size and influence have given the Golden State the weight to become a regulatory trailblazer, setting the tone for the rest of the country on environmental, labor, and consumer protection rules, and more recently on AI as well. Now, following the dramatic defeat of a proposed federal moratorium on state AI regulation in July, California policymakers see a limited window of opportunity to set the stage for the rest of the nation’s AI laws.

This week, the California State Assembly is set to vote on SB 53, a bill that would require transparency reports from the developers of highly powerful “frontier” AI models. The models targeted represent the cutting edge of AI: extremely capable generative systems that require vast amounts of data and computing power, like OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok, and Anthropic’s Claude. The bill, which has already passed the state Senate, must pass the California State Assembly before it goes to the governor to be either vetoed or signed into law.

AI can offer tremendous benefits, but as the bill is meant to address, it’s not without risks. And while there is no shortage of present-day risks from issues like job displacement and bias, SB 53 focuses on possible “catastrophic risks” from AI. Such risks include AI-enabled biological weapons attacks and rogue systems carrying out cyberattacks or other criminal activity that could conceivably bring down critical infrastructure. These catastrophic risks represent widespread disasters that could plausibly threaten human civilization at the local, national, and global levels. They are the kind of AI-driven disasters that have not yet occurred, rather than already-realized, more personal harms like AI deepfakes.

Exactly what constitutes a catastrophic risk is up for debate, but SB 53 defines it as a “foreseeable and material risk” of an event that causes more than 50 casualties or over $1 billion in damages, and that a frontier model plays a meaningful role in bringing about. How fault is determined in practice would be up to the courts to interpret. It’s hard to define catastrophic risk in law when the definition is far from settled, but doing so can help us defend against both near- and long-term consequences.
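
Read mechanically, that definition amounts to a two-part test: a severity threshold plus a material-contribution requirement. Here is a minimal sketch of that reading in Python; the function and its inputs are illustrative stand-ins, not statutory language:

```python
# Illustrative sketch of SB 53's "catastrophic risk" trigger as described
# above. Names and inputs are hypothetical, not taken from the bill text.

def is_catastrophic(casualties: int, damages_usd: float,
                    model_materially_contributed: bool) -> bool:
    """An event qualifies if it exceeds either severity threshold
    (more than 50 casualties or over $1 billion in damages) AND a
    frontier model played a meaningful role in causing it."""
    exceeds_severity = casualties > 50 or damages_usd > 1_000_000_000
    return exceeds_severity and model_materially_contributed

# Example: a $2B AI-assisted cyberattack with no deaths would still qualify.
assert is_catastrophic(0, 2_000_000_000, True)
```

Everything difficult lives in that second input: deciding whether a model “materially contributed” to an event is exactly the question the courts would have to interpret.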

On its own, a single state bill focused on increased transparency will probably not be enough to prevent devastating cyberattacks and AI-enabled chemical, biological, radiological, and nuclear weapons. But the bill represents an effort to regulate this fast-moving technology before it outpaces our efforts at oversight.

SB 53 is the third state-level bill to specifically focus on regulating AI’s catastrophic risks, after California’s SB 1047, which passed the legislature only to be vetoed by the governor, and New York’s Responsible AI Safety and Education (RAISE) Act, which recently passed the New York legislature and is now awaiting Gov. Kathy Hochul’s approval.

SB 53, which was introduced by state Sen. Scott Wiener in February, requires frontier AI companies to develop safety frameworks that specifically detail how they approach catastrophic risk reduction. Before deploying their models, companies would have to publish safety and security reports. The bill also gives them 15 days to report “critical safety incidents” to the California Office of Emergency Services, and it establishes whistleblower protections for employees who come forward about unsafe model deployment that contributes to catastrophic risk. SB 53 aims to hold companies publicly accountable for their AI safety commitments, with a financial penalty of up to $1 million per violation.

In many ways, SB 53 is the spiritual successor to SB 1047, also introduced by Wiener.

Both cover large models trained at 10^26 FLOPS, a measure of immense computing power used across a range of AI legislation as a threshold for significant risk, and both bills strengthen whistleblower protections. Where SB 53 departs from SB 1047 is in its focus on transparency and prevention.
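
To get a feel for how demanding the 10^26 bar is, you can translate it into model scale using the common rule of thumb that training a dense transformer costs roughly 6 × parameters × training tokens in floating-point operations. A rough sketch, using invented model sizes rather than any company’s actual figures:

```python
# Back-of-the-envelope check of which training runs would cross the
# compute thresholds discussed in this article. The ~6*N*D estimate is a
# standard approximation; the model sizes below are invented examples.

SB53_THRESHOLD = 1e26       # FLOPS threshold used by SB 53 (and SB 1047)
EU_AI_ACT_THRESHOLD = 1e25  # the EU AI Act's lower "systemic risk" bar

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * params * tokens

runs = [
    ("7B params, 2T tokens", 7e9, 2e12),
    ("70B params, 15T tokens", 70e9, 15e12),
    ("400B params, 30T tokens", 400e9, 30e12),
    ("1T params, 60T tokens", 1e12, 60e12),
]

for name, params, tokens in runs:
    flops = estimated_training_flops(params, tokens)
    sb53 = "covered" if flops >= SB53_THRESHOLD else "not covered"
    eu = "covered" if flops >= EU_AI_ACT_THRESHOLD else "not covered"
    print(f"{name}: ~{flops:.1e} FLOPs | SB 53: {sb53} | EU AI Act: {eu}")
```

Under this estimate, a very large run can clear the EU’s lower bar while staying under SB 53’s, a gap the article returns to below.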

Whereas SB 1047 aimed to hold companies liable for catastrophic harms caused by their AI systems, SB 53 formalizes the sharing of safety frameworks, which many frontier AI companies, including Anthropic, already do voluntarily. It focuses squarely on the heavy hitters, with its rules applying only to companies that generate $500 million or more in gross revenue.

“The science of how to make AI safe is rapidly evolving, and it’s currently difficult for policymakers to write prescriptive technical rules for how companies should manage safety,” said Thomas Woodside, the co-founder of Secure AI Project, an advocacy group that aims to reduce extreme risks from AI and is a sponsor of the bill, over email. “This light-touch policy prevents backsliding on commitments and encourages a race to the top rather than a race to the bottom.”

Part of the logic of SB 53 is the ability to adapt the framework as AI progresses. The bill authorizes the California Attorney General to change the definition of a large developer after January 1, 2027, in response to AI advances.

Proponents of the bill are optimistic about its chances of being signed by the governor should it pass the legislature, which it is expected to. On the same day that Gov. Gavin Newsom vetoed SB 1047, he commissioned a working group focused solely on frontier models. The group’s resulting report provided the foundation for SB 53. “I would bet, with roughly 75 percent confidence, that SB 53 will be signed into law by the end of September,” said Dean Ball, a former White House AI policy adviser, vocal SB 1047 critic, and SB 53 supporter, to Transformer.

But several industry organizations have rallied in opposition, arguing that additional compliance regulation would be expensive, given that AI companies should already be incentivized to avoid catastrophic harms. OpenAI has lobbied against it, and the technology trade group Chamber of Progress argues that the bill would require companies to file unnecessary paperwork and needlessly stifle innovation.

“These compliance costs are merely the beginning,” Neil Chilson, head of AI policy at the Abundance Institute, told me over email. “The bill, if passed, would feed California regulators truckloads of company information that they’ll use to design a compliance industrial complex.”

By contrast, Anthropic enthusiastically endorsed the bill in its current form on Monday. “The question isn’t whether we need AI governance – it’s whether we develop it thoughtfully today or reactively tomorrow,” the company explained in a blog post. “SB 53 offers a solid path toward the former.” (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI, while Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic. Neither organization has editorial input into our content.)

The debate over SB 53 ties into broader disagreements about whether states or the federal government should drive AI safety regulation. But since the overwhelming majority of these companies are based in California, and nearly all do business there, the state’s legislation matters for the entire country.

“A federally led transparency approach is far, far, far preferable to the multi-state alternative,” where a patchwork of state regulations can conflict with one another, said Cato Institute technology policy fellow Matthew Mittelsteadt in an email. But “I love that the bill has a provision that would allow companies to defer to a future alternative federal standard.”

“The natural question is whether a federal approach can even happen,” Mittelsteadt continued. “In my view, the jury is out on that, but the possibility is far more likely than some suggest. It’s been less than 3 years since ChatGPT was released. That’s hardly a lifetime in public policy.”

But in a time of federal gridlock, frontier AI developments won’t wait for Washington.

The catastrophic risk divide

The bill’s focus on, and framing of, catastrophic risks is not without controversy.

The idea of catastrophic risk comes from the fields of philosophy and quantitative risk analysis. Catastrophic risks are downstream of existential risks, which threaten humanity’s very survival or else permanently reduce our potential as a species. The hope is that if these doomsday scenarios are identified and prepared for, they can be prevented or at least mitigated.

But if existential risks are clear (the end of the world, or at least the world as we know it), what falls under the catastrophic risk umbrella, and the best way to prioritize those risks, depends on who you ask. There are longtermists, people focused primarily on humanity’s far future, who place a premium on things like multiplanetary expansion for human survival. They are often mainly concerned with risks from rogue AI or extremely lethal pandemics. Neartermists are more preoccupied with present-day risks, like climate change, mosquito-borne disease, or algorithmic bias. These camps can blend into each other: neartermists would also like to avoid being hit by asteroids that could wipe out a city, and longtermists don’t dismiss risks like climate change. The best way to think of them is as two ends of a spectrum rather than a strict binary.

You can think of the AI ethics and AI safety frameworks as the near- and longtermism of AI risk, respectively. AI ethics is about the moral implications of the ways the technology is deployed in the present, including concerns like algorithmic bias and human rights. AI safety focuses on catastrophic risks and potential existential threats. But, as Vox’s Julia Longoria reported in the Good Robot series for Unexplainable, there are interpersonal conflicts leading these two factions to work against each other, much of which has to do with emphasis. (AI ethics people argue that catastrophic risk concerns overhype AI capabilities and ignore AI’s impact on vulnerable people right now, while AI safety people worry that if we focus too much on the present, we won’t have ways to mitigate larger-scale problems down the line.)

But behind the question of near- versus long-term risks lies another one: What, exactly, constitutes a catastrophic risk?

SB 53 initially set the standard for catastrophic risk at 100 rather than 50 casualties, similar to New York’s RAISE Act, before halving the threshold in an amendment to the bill. While the average person might consider, say, many people driven to suicide after interacting with AI chatbots to be catastrophic, such a risk is outside the bill’s scope. (The California State Assembly just passed a separate bill to regulate AI companion chatbots by preventing them from engaging in discussions about suicidal ideation or sexually explicit material.)

SB 53 focuses squarely on harms from “expert-level” frontier AI model assistance in developing or deploying chemical, biological, radiological, and nuclear weapons; in committing crimes like cyberattacks or fraud; and in “loss of control” scenarios where AIs go rogue, behaving deceptively to avoid being shut down and replicating themselves without human oversight. For example, an AI model could be used to guide the creation of a new deadly virus that infects millions and kneecaps the global economy.

“The 50 to 100 deaths or a billion dollars in property damage is just a proxy to capture really widespread and substantial impact,” said Scott Singer, lead author of the California Report for Frontier AI Policy, which helped inform the basis of the bill. “We do look at, like, AI-enabled or AI potentially [caused] or correlated suicide. I think that’s, like, a very serious set of issues that demands policymaker attention, but I don’t think it’s the core of what this bill is trying to address.”

Transparency is useful in preventing such catastrophes because it can help raise the alarm before things get out of hand, allowing AI developers to correct course. And in the event that such efforts fail to prevent a mass casualty incident, enhanced safety transparency can help law enforcement and the courts figure out what went wrong. The challenge there is that it can be difficult to determine how much a model is responsible for a particular outcome, Irene Solaiman, the chief policy officer at Hugging Face, a collaboration platform for AI developers, told me over email.

“These risks are coming and we need to be ready for them and have transparency into what the companies are doing,” said Adam Billen, the vice president of public policy at Encode, an organization that advocates for responsible AI leadership and safety. (Encode is another sponsor of SB 53.) “But we don’t know exactly what we’re going to need to do once the risks themselves appear. But right now, when these things aren’t happening at a large scale, it makes sense to be sort of focused on transparency.”

Still, a transparency-focused bill like SB 53 is insufficient for addressing already-existing harms. Once we know something is a problem, the focus needs to be on mitigating it.

“Maybe four years ago, if we had passed some kind of transparency legislation like SB 53 but focused on those harms, we would have had some warning signs and been able to intervene before the widespread harms to kids started happening,” Billen said. “We’re trying to sort of correct that mistake on these issues and get some kind of forward-facing information about what’s happening before things get crazy, basically.”

SB 53 risks being both overly narrow and unclearly scoped. We have not yet faced these catastrophic harms from frontier AI models, and the most devastating risks might take us completely by surprise. We don’t know what we don’t know.

It’s also certainly possible that models trained below 10^26 FLOPS, which aren’t covered by SB 53, have the potential to cause catastrophic harm under the bill’s definition. The EU AI Act sets its threshold for “systemic risk” at the smaller 10^25 FLOPS, and there’s disagreement about the utility of computational power as a regulatory standard at all, especially as models become more efficient.

As it stands right now, SB 53 occupies a different niche from bills focused on regulating AI use in mental healthcare or data privacy, reflecting its authors’ desire not to step on the toes of other legislation or bite off more than it can reasonably chew. But Chilson, the Abundance Institute’s head of AI policy, is part of a camp that sees SB 53’s focus on catastrophic harm as a “distraction” from the real near-term benefits and problems, like AI’s potential to accelerate the pace of scientific research or to create nonconsensual deepfake imagery, respectively.

That said, deepfakes could certainly cause catastrophic harm. For instance, imagine a hyper-realistic deepfake impersonating a bank employee to commit fraud at a multibillion-dollar scale, said Nathan Calvin, the vice president of state affairs and general counsel at Encode. “I do think some of the lines between these things in practice can be a bit blurry, and I think in some ways…that’s not necessarily a bad thing,” he told me.

It could be that the ideological debate around what qualifies as a catastrophic risk, and whether that is worthy of our legislative attention, is just noise. The bill is meant to regulate AI before the proverbial horse is out of the barn. The average person isn’t going to worry about the possibility of AI sparking nuclear war or biological weapons attacks, but they do think about how algorithmic bias might affect their lives in the present. Yet in trying to prevent the worst-case scenarios, perhaps we can also avoid the “smaller,” nearer harms. If they’re effective, forward-facing safety provisions designed to prevent mass casualty events will also make AI safer for individuals.

If SB 53 passes the legislature and gets signed into law by Gov. Newsom, it could inspire other states to attempt AI regulation through a similar framework, and ultimately encourage federal AI safety legislation to move forward.

How we think about risk matters because it determines where we focus our prevention efforts. I’m a firm believer in the value of defining your terms, in law and in debate. If we’re not on the same page about what we mean when we talk about risk, we can’t have a real conversation.
