
AI needs super smart regulation


Powerful AI systems can be of enormous benefit to society and help us tackle some of the world’s biggest problems. Machine learning models are already playing a significant role in diagnosing diseases, accelerating scientific research, increasing economic productivity and reducing energy consumption, for example by optimizing electricity flows on power grids.

It would be a tragedy if those gains were compromised by a backlash against the technology. But that danger is growing as abuses of AI multiply, in areas such as unfair discrimination, disinformation and fraud, as Geoffrey Hinton, one of the “godfathers of AI”, warned last month on resigning from Google. This makes it imperative that governments move quickly to regulate the technology appropriately and proportionately.

How to do this will be one of the greatest governance challenges of our age. Machine learning systems, which can be deployed in millions of use cases, defy easy categorization and raise numerous problems for regulators. The rapidly evolving technology can also be used invisibly, ubiquitously and at scale. Encouragingly, though, regulators around the world are finally starting to address these issues.

Last week, the White House summoned the leaders of the largest AI companies to explore the benefits and dangers of the technology before outlining future guidelines. The EU and China are already well advanced in developing rules and regulations to govern AI. And the UK’s Competition and Markets Authority is conducting a review of the AI market.

The first step is for the tech industry itself to agree on and implement some common principles regarding transparency, accountability and fairness. For example, companies should never pass off chatbots as humans. The second step would be for all regulators, in areas such as labor law, financial and consumer markets, competition policy, data protection, privacy and human rights, to modify existing rules to take account of the specific risks raised by AI. The third is for government agencies and universities to deepen their technological expertise to reduce the risk of regulatory capture by industry.

Beyond that, two general regulatory regimes for AI should be considered, although neither alone is adequate to the scale of the challenge. One regime, based on the precautionary principle, would require algorithms used in critical, life-or-death areas, such as healthcare, the justice system and the military, to be pre-approved before use. This could work in much the same way as the US Food and Drug Administration, which screens drugs before release and has a broader mandate to protect and promote public health.

The second, more flexible model could be based on “governance by accident”, as occurs in the aviation sector. Alarming as it may sound, this has worked extremely effectively in raising aviation safety standards over the past few decades. International aviation authorities have the power to mandate changes across all airplane manufacturers and airlines once a fault is detected. Something similar could apply when harmful flaws are found in consumer-facing AI systems, such as self-driving cars.

Several leading industry researchers have called for a moratorium on the development of cutting-edge generative AI models. But pauses are useless unless clearer governance regimes can be put in place. The tech industry also accepts that it needs clearer rules of the road and must work constructively with governments and civil rights organizations to help them write them. After all, cars can drive faster around corners when equipped with effective brakes.

