
AI desperately needs global oversight


Every time you post a photo, reply on social media, build a website, or possibly even send an email, your data is extracted, stored, and used to train generative AI technology that can create text, audio, video, and images with just a few words. This has real consequences: OpenAI researchers studying the labor market impact of their language models estimated that approximately 80 percent of the US workforce could have at least 10 percent of their job tasks affected by the introduction of large language models (LLMs) such as ChatGPT, while around 19 percent of workers could see at least half of their tasks affected. We are also seeing an immediate shift in the job market from image generation. In other words, the data you created could put you out of a job.

When a company builds its technology on top of a public resource, the internet, it is reasonable to say that the technology should be available and open to everyone. But critics have noted that GPT-4 lacked any clear information or specifications that would enable anyone outside the organization to replicate, test, or verify any aspect of the model. Some of these companies have received vast sums of funding from other major corporations to build commercial products. For some in the AI community, this is a dangerous sign that these companies will pursue profit above public benefit.

Code transparency alone is unlikely to ensure that these generative AI models serve the public good. There is little conceivable immediate benefit to a journalist, policy analyst, or accountant (all “high exposure” professions according to the OpenAI study) if the data underpinning an LLM is available. We increasingly have laws, like the Digital Services Act, that would require some of these companies to open their code and data for expert auditor review. And open source code can sometimes enable malicious actors, allowing hackers to subvert the safety precautions companies are putting in place. Transparency is a laudable goal, but that alone will not ensure that generative AI is used to better society.

To truly create public benefit, we need accountability mechanisms. The world needs a generative AI global governance body to address these social, economic, and political disruptions beyond what any individual government is capable of, what any academic or civil society group can implement, or what any corporation is willing or able to do. There is already precedent for global cooperation by companies and countries to take responsibility for technological outcomes. We have examples of well-funded, independent organizations and think tanks that can make decisions on behalf of the public good. An entity like this is tasked with thinking of the benefit to humanity. Let’s build on these ideas to tackle the fundamental problems that generative AI is already surfacing.

In the post-World War II era of nuclear proliferation, for example, there was a credible and significant fear that nuclear technologies would get out of hand. The widely held belief that society had to act collectively to avert global disaster echoes many of the current discussions surrounding generative AI models. In response, countries around the world, led by the US and under the guidance of the United Nations, came together to form the International Atomic Energy Agency (IAEA), an independent body free of government and corporate affiliations that would provide solutions to the far-reaching ramifications and seemingly endless capabilities of nuclear technologies. It operates in three main areas: nuclear energy, nuclear safety and security, and safeguards. For example, after the Fukushima disaster in 2011, it provided critical resources, education, testing, and impact reports, and helped ensure continued nuclear safety. However, the agency is limited: it relies on member states to voluntarily comply with its rules and guidelines, and on their cooperation and assistance to carry out its mission.

In technology, Facebook’s Oversight Board is a practical attempt to balance transparency with accountability. Board members are a global interdisciplinary group, and their judgments, such as overturning a decision by Facebook to remove a post depicting sexual harassment in India, are binding. This model is not perfect either; there are allegations of corporate capture, as the board is funded solely by Meta, albeit through a separate trust, and is primarily concerned with content removal.


