The cause of open source artificial intelligence — the idea that the inner workings of AI models should be openly available for anyone to inspect, use and adapt — has just crossed an important threshold.
This week Mark Zuckerberg, Meta's chief executive, claimed that his company's latest open-source Llama model is the first to achieve "frontier-level" status, meaning it is essentially on a par with the most powerful AI from companies such as OpenAI, Google and Anthropic. From next year, Zuckerberg says, future Llama models will be the most advanced in the world.
Whether or not that happens, the positive and negative effects of opening up such a powerful technology to general use have become clear. Models like Llama are the best hope of stopping a small group of big tech companies from consolidating their dominance over advanced AI, but they could also put powerful technology in the hands of disinformation spreaders, scammers, terrorists and rival nation-states. If anyone in Washington has been thinking about challenging the open dissemination of advanced AI, now would probably be the time.
Meta's emergence as a leading advocate for open source in the AI world has an unexpected ring to it. The company formerly known as Facebook long ago changed course from being an open platform, where any developer could build services, to one of the most closed "walled gardens" on the internet. Nor is Meta's open source AI exactly open source: the Llama models have not been released under a license recognized by the Open Source Initiative, and Meta reserves the right to prevent other large companies from using its technology.
However, the Llama models meet many of the tests of openness (most people can inspect or adapt the "weights" that determine how they work), and Zuckerberg's claim to be an open source convert out of enlightened self-interest rings true.
Unlike Google or Microsoft, Meta does not make its money selling direct access to AI models, and it would find it hard to compete head-to-head in this technology. Yet relying on other companies' technology platforms carries risks of its own, as Meta discovered to its cost in the smartphone world, when Apple changed its privacy rules for the iPhone in ways that devastated Meta's advertising business.
The alternative, fostering an open source substitute that can gain broader support across the tech industry, is a well-worn strategy. The list of companies that lined up behind the latest Llama model this week suggests it is starting to have an effect. They include Amazon, Microsoft and Google, all of which offer access to it through their clouds.
By claiming that open source is in many ways more secure than the traditional proprietary alternative, Zuckerberg has tapped into a powerful force. Many users want to see the inner workings of the technology they rely on, and much of the world’s core infrastructure software is open source. In the words of computer security expert Bruce Schneier: “Openness = security. It’s just the tech giants who want to convince you otherwise.”
Yet for all the advantages of the open source approach, is it simply too dangerous to release powerful AI in this form?
Meta's chief executive argues that it is a myth to believe the most valuable technology can be kept safe from nation-state rivals: China, he says, will steal the secrets anyway. To a national security establishment wedded to the idea that some things can be kept secret, that argument probably rings hollow.
As for less powerful adversaries, Zuckerberg argues that the experience of running a social network shows that combating malign uses of AI is a winnable arms race: as long as the good guys have more powerful machines at their disposal than the bad guys, everything will be fine. But that assumption may not hold. In theory, anyone can rent powerful computing on demand through one of the public cloud platforms, putting much the same firepower within reach of the bad guys as the good.
It is possible to imagine a future in which access to such massive computing power is regulated. Like banks, cloud companies might be forced to follow "know your customer" rules. There have also been suggestions that governments should directly control who has access to the chips needed to build advanced AI.
That may be the world we are heading toward, but if so, it is still a long way off, and openly available AI models such as Llama are already making great strides.