
Why stop AI research when we already know how to make it safer?


Last week, the Future of Life Institute published an open letter proposing a six-month moratorium on “dangerous” AI development. Since then, it has been signed by more than 3,000 people, including some influential members of the AI community. But while it is good that the risks of AI systems are gaining visibility within the community and across society, both the problems described and the actions proposed in the letter are unrealistic and unnecessary.

The call for a pause in AI work is not only vague, but also unfeasible. While the training of large language models by for-profit companies gets the lion’s share of attention, it is far from the only type of AI work being done. AI research and practice are happening in companies, in academia, and in Kaggle competitions around the world, on a multitude of topics ranging from efficiency to safety. This means there is no magic button anyone can press that would halt “dangerous” AI research while allowing only the “safe” kind. And the AI risks named in the letter are all hypothetical, rooted in a longtermist mindset that tends to overlook real problems like algorithmic discrimination and predictive surveillance, which are harming people now, in favor of speculative existential risks to humanity.

Instead of focusing on the ways AI might fail in the future, we should focus on clearly defining what constitutes AI success today. This path is eminently clear: rather than halting research, we need to improve transparency and accountability while developing guidelines for the deployment of AI systems. Policy, research, and user-led initiatives along these lines have existed for decades in different sectors, and we already have concrete proposals to draw on to address the current risks of AI.

Regulatory bodies around the world are already drafting laws and protocols to manage the use and development of new AI technologies. The US Senate’s Algorithmic Accountability Act and similar initiatives in the EU and Canada are among those helping to define what data can and cannot be used to train AI systems, address copyright and licensing issues, and weigh the special considerations required for using AI in high-risk settings. A critical part of these rules is transparency: requiring the creators of AI systems to provide more information about technical details like the provenance of the training data, the code used to train models, and how features like safety filters are implemented. Both AI model developers and their downstream users can support these efforts by engaging with their representatives and helping to shape legislation around the issues described above. After all, it is our data being used and our livelihoods being affected.

But making this kind of information available is not enough on its own. Companies developing AI models must also allow external audits of their systems and be held accountable for addressing risks and deficiencies when they are identified. For example, many of the newest AI models, such as ChatGPT, Bard, and GPT-4, are also the most restricted, available only through an API or gated access that is fully controlled by the companies that created them. This essentially turns them into black boxes whose output can change from day to day or produce different results for different people. While there has been some company-approved red-teaming of tools like GPT-4, there is no way for researchers to access the underlying systems, making scientific analysis and audits impossible. This runs counter to the approaches for auditing AI systems proposed by scholars such as Deborah Raji, who has called for oversight at different stages of the model development process so that risky behaviors and harms are detected before models are deployed in society.

Another crucial step toward safety is collectively rethinking the way we create and use AI. AI developers and researchers can begin to establish norms and guidelines for AI practice by listening to the many people who have been advocating for more ethical AI for years. This includes researchers such as Timnit Gebru, who proposed a “Slow AI” movement, and Ruha Benjamin, who stressed the importance of creating guiding principles for ethical AI during her keynote presentation at a recent AI conference. Community-driven initiatives, such as the Code of Ethics being implemented at the NeurIPS conference (an effort I am chairing), are also part of this movement and aim to establish guidelines on what is acceptable in AI research and how its broader impacts on society should be considered.
