
UK opens San Francisco office to address AI risk

Before the start of the AI Safety Summit in Seoul, South Korea later this week, its co-host, the United Kingdom, is expanding its own efforts in this field. The AI Safety Institute, a UK body created in November 2023 with the ambitious goal of assessing and addressing risks in AI platforms, said it will open a second location… in San Francisco.

The idea is to get closer to what is currently the epicenter of AI development, with the Bay Area being home to OpenAI, Anthropic, Google and Meta, among others, building foundational AI technology.

Foundation models are the building blocks of generative AI services and other applications, and it is notable that, although the UK has signed a memorandum of understanding with the US for the two countries to collaborate on AI safety initiatives, the UK is still choosing to invest in building a direct presence in the United States to address the issue.

“By having people in San Francisco, it will give them access to the headquarters of many of these AI companies,” Michelle Donelan, the UK’s secretary of state for science, innovation and technology, said in an interview with TechCrunch. “A number of them have bases here in the UK, but we think it would be very useful to have a base there as well, and access an additional pool of talent, and be able to work even more collaboratively and hand in hand with the United States.”

Part of the reason is that, for the UK, being closer to that epicenter is useful not only for understanding what is being built, but because it gives the UK more visibility with these companies, which matters given that AI and technology in general are seen by the UK as a huge opportunity for economic growth and investment.

And given the latest drama at OpenAI around its superalignment team, it seems like a particularly opportune time to establish a presence there.

The AI Safety Institute, launched in November 2023, remains a relatively modest affair. The organization currently has just 32 people working at it, a veritable David to the Goliath of AI tech, considering the billions of dollars of investment riding on the companies building AI models, and therefore their own economic motivations for getting their technologies out the door and into the hands of paying users.

One of the AI Safety Institute’s most notable developments was the launch, earlier this month, of Inspect, its first set of tools for testing the safety of foundation AI models.
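For a sense of what an evaluation built with Inspect looks like, here is a minimal sketch in Python drawn from the framework's public documentation at launch; treat the specific names (the inspect_ai package, example_dataset, model_graded_fact, the plan parameter) as assumptions, since they may differ between versions.

```python
# A minimal Inspect-style evaluation, sketched from the publicly
# documented `inspect_ai` API at launch; exact names may vary by version.
from inspect_ai import Task, eval, task
from inspect_ai.dataset import example_dataset
from inspect_ai.scorer import model_graded_fact
from inspect_ai.solver import chain_of_thought, generate, self_critique

@task
def theory_of_mind():
    # An evaluation combines a dataset of samples, a plan of solver
    # steps that elicit the model's answer, and a scorer that grades
    # the output (here, another model grades factual correctness).
    return Task(
        dataset=example_dataset("theory_of_mind"),
        plan=[chain_of_thought(), generate(), self_critique()],
        scorer=model_graded_fact(),
    )

# Run the evaluation against a particular model under test.
eval(theory_of_mind(), model="openai/gpt-4")
```

The split between dataset, solver steps and scorer is what makes such a toolkit composable: each piece can be swapped out or extended independently for different models and risk areas.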

Donelan today referred to that launch as a “phase one” effort. Not only has it proven challenging to date to benchmark models, but for now engagement is largely a voluntary and inconsistent arrangement. As a senior figure at a UK regulator noted, companies are under no legal obligation to have their models vetted at this point, and not every company is willing to have models tested before release. That could mean that, in cases where a risk might be identified, the horse may have already bolted.

Donelan said the AI Safety Institute was still working out how best to engage with AI companies to evaluate their models. “Our evaluation process is an emerging science in itself,” she said. “So with each evaluation, we will develop the process and refine it further.”

Donelan said one aim in Seoul would be to present Inspect to regulators convening at the summit, with the hope of getting them to adopt it, too.

“Now we have an evaluation system. The second phase must also be about making AI safe throughout society,” she stated.

In the long term, Donelan believes the UK will develop more AI legislation, although, echoing what Prime Minister Rishi Sunak has said on the issue, it will resist doing so until it better understands the extent of AI risks.

“We don’t believe in legislating before we have a proper grip and full understanding,” she said, noting that the recent international report on AI safety published by the institute, which focused primarily on trying to get a comprehensive picture of research to date, “highlighted that there are major gaps and that we should be incentivizing and encouraging more research globally.”

“And also, legislation takes about a year in the UK. And if we had just started legislating when we started, instead of [organizing] the AI Safety Summit [held in November last year], we would still be legislating now, and we wouldn’t actually have anything to show for it.”

“From day one of the Institute, we have been clear about the importance of taking an international approach to AI safety, sharing research and working collaboratively with other countries to test models and anticipate the risks of frontier AI,” said Ian Hogarth, chair of the AI Safety Institute. “Today marks a pivotal moment that allows us to continue advancing this agenda, and we are proud to expand our operations in an area brimming with technology talent, adding to the incredible experience our staff in London have brought to the table from the beginning.”