
Elon Musk supports California AI safety law

The first attempt to codify AI regulations anywhere in the United States has just gained the support of a powerful voice at a critical moment.

Elon Musk, CEO of Tesla and founder of Grok chatbot parent company xAI, championed California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (Senate Bill 1047).

Should it pass the State Assembly and receive final approval from Governor Gavin Newsom before the legislative session ends this week, it would set initial guardrails for the technology. The bill would require developers to create safety protocols, be able to shut down an AI model that gets out of control, report security incidents, grant whistleblower protections to employees at AI companies, take measures to prevent their AI from being exploited by malicious hackers, and accept liability if their AI software causes serious harm.

However, the bill faces resistance from venture capitalists such as Marc Andreessen, and it is hotly debated even among AI luminaries: Meta Chief AI Scientist Yann LeCun opposes the bill, while AI pioneer and AlexNet co-author Geoffrey Hinton supports it.

“This is a difficult decision and will upset some people, but overall I think California should probably pass the AI safety bill SB 1047,” Musk wrote in a post published on Monday, pointing to the “risk to the public” posed by AI.

So far, the only regulatory framework focuses solely on the largest models, those trained with more than 10^26 floating-point operations at a cost exceeding $100 million. However, this is not federal legislation but an executive order from the Biden administration, which could easily be undone by his successor next year.

This bill would at least partially mitigate that uncertainty and provide some legal clarity for Big Tech players like Microsoft-backed OpenAI, Amazon-backed Anthropic, and Google, even if they don’t necessarily agree with it.

“SB 1047 is a straightforward, common-sense bill that builds on President Biden’s executive order,” California State Senator Scott Wiener, who sponsored the bill, said earlier this month.

California has one last week to pass the bill before the legislative session ends

If any state were to take the lead, California would make the most sense. Its $4 trillion economy rivals those of Germany and Japan in absolute dollar terms, thanks largely to the thriving technology sector in Silicon Valley. California arguably contributes far more to innovation than either of those G7 nations.

Speaking to Bloomberg TV, Wiener said he could understand the argument that Washington should have acted first, but he also pointed to a number of issues, including privacy laws, social media, and net neutrality, that the U.S. Congress has never addressed coherently.

“I agree, it should be regulated at the federal level,” Wiener told the broadcaster on Friday. “But Congress has a very poor record of regulating the technology sector, and I don’t see that changing, so California should take the lead.”

This month is the last opportunity to pass SB 1047. After the end of the week, the state legislature will go into recess ahead of the November election. If the bill passes, it must be approved by Newsom before the end of September, and last week members of the U.S. House of Representatives urged him to veto the law should it reach his desk.

But regulating technology can be a futile exercise, as policy always lags behind the speed of innovation. Intervention in the free market can inadvertently suppress innovation, and that is the main criticism of SB 1047.

Former OpenAI researcher says his colleagues are giving up

Just a year ago, the industry’s champions could largely fend off any outside attempt to intervene in the sector. Most policymakers recognized that America was engaged in a high-stakes AI arms race with China, and that neither side could afford to lose. If the US imposed restrictions on its domestic industry, it could tip the balance in Beijing’s favor.

A rash of recent departures among senior AI safety experts at OpenAI, the company that sparked the AI gold rush, has raised fears that executives, including CEO Sam Altman, are throwing caution to the wind in an effort to commercialize the expensive technology.

Former OpenAI safety researcher Daniel Kokotajlo told Fortune on Monday that nearly half of the company’s AGI safety staff have voluntarily left the former nonprofit, dismayed at the direction it has taken.

“It’s just people giving up one by one,” he said in an exclusive interview. Kokotajlo renounced any equity he had in the company rather than sign a sweeping confidentiality agreement that would have barred him from speaking about his former employer.

Musk would also likely be personally affected by the legislation. Last year he founded his own artificial general intelligence startup, xAI. It has just launched a brand-new supercomputer cluster in Memphis, powered by AI training chips and staffed by experts he effectively poached from Tesla.

But Musk is no ordinary challenger: He knows the technology well, having co-founded OpenAI in December 2015 and helped recruit its former chief scientist. The Tesla CEO and entrepreneur later fell out with Altman and ultimately decided to sue the company, not once but twice.
