
Regulating artificial intelligence is a 4D challenge



The writer is founder of Sifted, an FT-backed site on European startups

The leaders of the G7 nations addressed many global concerns over sake-steamed Nomi oysters in Hiroshima last weekend: the war in Ukraine, economic resilience, clean energy and food security, among others. But they also threw one extra item into their well-intentioned parting goody bag: the promotion of inclusive and trustworthy artificial intelligence.

While acknowledging AI’s innovative potential, the leaders worried about the damage it could cause to public safety and human rights. Launching the Hiroshima AI process, the G7 commissioned a working group to examine the impact of generative AI models, such as ChatGPT, and prepare the leaders’ discussions for later this year.

The initial challenges will be how best to define AI, categorize its dangers, and frame an appropriate response. Is regulation best left to existing national agencies? Or is the technology so consequential that it demands new international institutions? Do we need the modern equivalent of the International Atomic Energy Agency, founded in 1957 to promote the peaceful development of nuclear technology and discourage its military use?

One can debate how effectively the UN body has fulfilled that mission. Besides, nuclear technology involves radioactive material and massive infrastructure that is physically easy to detect. AI, by contrast, is comparatively cheap, invisible, ubiquitous, and has infinite use cases. At the very least, it presents a four-dimensional challenge that must be approached in a more flexible way.

The first dimension is discrimination. Machine learning systems are designed to discriminate: to detect outliers in patterns. That is good for spotting cancer cells on radiological scans. But it is bad if black-box systems trained on faulty data sets are used to hire and fire workers or authorize bank loans. Bias in, bias out, as they say. Banning these systems in areas of unacceptably high risk, as the EU’s forthcoming AI Act proposes, is a strict, precautionary approach. Creating independent, expert auditors might be a more adaptable way to go.

Second, misinformation. As the academic Gary Marcus warned the US Congress last week, generative AI could endanger democracy itself. Such models can generate plausible lies and fake humans at lightning speed and on an industrial scale.

The onus should fall on the tech companies themselves to flag content and minimize misinformation, just as they suppressed spam. Failure to do so will only amplify calls for more drastic intervention. The precedent may have been set in China, where a draft law places responsibility for the misuse of AI models on the producer, not the user.

Third, dislocation. No one can accurately forecast the overall economic impact AI will have. But it seems pretty certain that it will lead to the “de-professionalization” of swaths of white-collar jobs, as the entrepreneur Vivienne Ming told the FT Weekend festival in DC.

Computer programmers have widely adopted generative AI as a productivity tool. By contrast, Hollywood’s striking screenwriters may be the first of many trades to fear that their core skills will be automated. This messy story defies simple solutions. Nations will have to adapt to the social challenges in their own ways.

Fourth, devastation. The incorporation of AI into lethal autonomous weapons systems (LAWS), or killer robots, is a terrifying prospect. The principle that human beings should always remain in the decision-making loop can only be established and enforced through international treaties. The same applies to the debate around artificial general intelligence, the (possibly fictional) day when AI surpasses human intelligence in every domain. Some activists dismiss this scenario as a distracting fantasy. But it is surely worth heeding the experts who warn of possible existential risks and call for international research collaboration.

Others may argue that trying to regulate AI is as futile as praying for the sun not to set. Laws evolve only incrementally, while AI develops exponentially. But Marcus says he was encouraged by the bipartisan consensus for action in the US Congress. Perhaps fearful that EU regulators could set global rules for AI, as they did with data protection five years ago, US tech companies are also publicly backing regulation.

The G7 leaders have done well to launch a competition for good ideas. Now they need to spark a regulatory race to the top, rather than preside over a terrifying slide to the bottom.
