
UK antitrust watchdog announces initial review of generative AI


Well that was fast. The UK competition watchdog has announced an initial review of “foundation models”, such as the large language models (LLMs) that underpin OpenAI’s ChatGPT and Microsoft’s New Bing. Generative AI models that power AI art platforms, such as OpenAI’s DALL-E or Midjourney, will also likely fall within the scope.

The Competition and Markets Authority (CMA) said its review will look at competition and consumer protection considerations in the development and use of foundation models, with the aim of understanding “how foundation models are developing and producing an assessment of the conditions and principles that will best guide the development of foundation models and their use in the future”.

It plans to publish the review in “early September”, with a June 2 deadline for interested parties to submit responses to inform its work.

“Foundation models, including large language models and generative artificial intelligence (AI), which have emerged over the past five years, have the potential to transform much of what people and businesses do. To ensure that AI innovation continues in a way that benefits UK consumers, businesses and the economy, the government has asked regulators, including the [CMA], to think about how the innovative development and deployment of AI can be supported against five overarching principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress,” the CMA wrote in a press release.

The Center for Research on Foundation Models at Stanford University’s Center for Human-Centered Artificial Intelligence is credited with coining the term “foundation models”, in 2021, to refer to AI systems trained on a large amount of data that can be adapted to a wide range of applications.

“The development of AI touches on a number of important issues, including safety, security, copyright, privacy and human rights, as well as the ways markets work. Many of these issues are being considered by government or other regulators, so this initial review will focus on the questions the CMA is best placed to address: what are the likely implications of the development of foundation models for competition and consumer protection?” the CMA added.

In a statement, its CEO, Sarah Cardell, also said:

AI has burst into the public consciousness in recent months, but it has been on our radar for some time. It is a rapidly developing technology that has the potential to transform the way businesses compete, as well as drive substantial economic growth.

It is crucial that the potential benefits of this transformative technology are easily accessible to UK businesses and consumers, while individuals remain protected from issues such as false or misleading information. Our goal is to help this rapidly expanding new technology develop in a way that ensures open and competitive markets and effective consumer protection.

Specifically, the UK competition regulator said its initial review of foundation models will:

  • examine how competitive markets for foundation models and their use might evolve
  • explore what opportunities and risks these scenarios could bring for competition and consumer protection
  • produce guiding principles to support competition and protect consumers as foundation models develop

While it may seem quick for the antitrust regulator to be conducting a review of such a fast-moving emerging technology, the CMA is acting on government instructions.

An AI white paper published in March noted ministers’ preference to avoid setting any bespoke rules (or oversight bodies) to govern uses of artificial intelligence at this stage. Instead, ministers said existing UK regulators, including the CMA, which the white paper name-checks directly, would be expected to issue guidance to encourage safe, fair and responsible uses of AI.

The CMA says its initial review of foundation models is in line with instructions in the white paper, where the government talked about existing regulators carrying out “detailed risk analysis” in order to be in a position to undertake potential enforcement, that is, against dangerous, unfair and irresponsible applications of AI, using their existing powers.

The regulator also points to its core mission, supporting open and competitive markets, as another reason to take a look at generative AI now.

In particular, the competition watchdog is set to gain additional powers to regulate Big Tech in the coming years, under plans Prime Minister Rishi Sunak’s government took off the shelf last month, when ministers said they would move forward with a long-trailed (but long-delayed) ex ante reform targeting the market power of the digital giants.

The expectation is that the CMA’s Digital Markets Unit, up and running in shadow form since 2021, will (finally) gain legislative powers in the coming years to apply proactive “pro-competition” rules to platforms deemed to have “strategic market status” (SMS). So we can speculate that, in the future, providers of powerful foundation models may be judged to have SMS, meaning they could expect to face bespoke rules on how they must operate towards rivals and consumers in the UK market.

The UK’s data protection watchdog, the ICO, also has its eye on generative AI. It is another existing oversight body the government has tasked with paying special attention to AI, as part of its plan for context-specific guidance to steer the development of the technology through the application of existing laws.

In a blog post last month, Stephen Almond, the ICO’s executive director of regulatory risk, offered some advice, and a bit of a warning, for developers of generative AI when it comes to complying with UK data protection rules. “Organisations developing or using generative AI should be considering their data protection obligations from the outset, taking a data protection by design and by default approach,” he suggested. “This isn’t optional: if you’re processing personal data, it’s the law.”

Meanwhile, across the English Channel in the European Union, lawmakers are in the process of deciding on a set of rules that are likely to apply to generative AI.

Negotiations towards a final text for the EU’s incoming AI rulebook are ongoing, but the current focus is on how to regulate foundation models via amendments to the risk-based framework for regulating uses of AI that the bloc published in draft more than two years ago.

It remains to be seen where the EU’s co-legislators will end up on what is sometimes also called general purpose AI. But, as we recently reported, parliamentarians are pushing for a layered approach to tackle safety issues with foundation models; the complexity of responsibilities across AI supply chains; and to address specific content concerns (such as copyright) associated with generative AI.

Add to that, EU data protection law already applies to AI, of course. And privacy-focused investigations of models like ChatGPT are underway in the bloc, including in Italy, where an intervention by the local watchdog led OpenAI to rush out a series of privacy disclosures and controls last month.

The European Data Protection Board also recently set up a working group to support coordination between different data protection authorities on investigations of the AI chatbot. Others investigating ChatGPT include Spain’s privacy watchdog.
