
Governments could promote responsible AI through “guidelines” rather than regulation

Governments are trying to strike a delicate balance with generative AI. If regulation is too strict, innovation risks being stifled. If regulation is too loose, the door opens to disruptive threats such as deepfakes and misinformation. Generative AI can sharpen the skills of both nefarious actors and those who try to defend against them.

During a breakout session on responsible AI innovation at Fortune Brainstorm AI Singapore last week, speakers acknowledged that achieving a global, unified set of AI rules would be difficult.

Governments already differ in how much regulation they want. The European Union, for example, has comprehensive rules that regulate how companies develop and use AI applications.

Other governments, such as the United States, are developing what Sheena Jacob, head of intellectual property at CMS Holborn Asia, calls a “framework guideline”: not hard laws, but rather nudges in a preferred direction.

“Overregulation will stifle AI innovation,” Jacob warned.

She cited Singapore as an example of innovation outside the US and China. While Singapore has a national AI strategy, the city-state has no laws that directly regulate AI. Instead, its overall framework relies on stakeholders such as policymakers and the research community to “work together” to enable innovation in a “systemic and balanced” approach.

Like many other participants at Brainstorm AI Singapore, speakers at the breakout session agreed that smaller countries can still keep up with larger ones in AI development.

“The point of AI is to level the playing field,” said Phoram Mehta, chief information security officer for PayPal’s Asia Pacific region. (PayPal sponsored last week’s breakout session.)

However, experts also warned against neglecting the risks associated with AI.

“What people really miss is that AI cyberhacking is a bigger board-level cybersecurity risk than anything else,” said Ayesha Khanna, co-founder of Addo AI and co-chair of Fortune Brainstorm AI Singapore. “If you were to do a prompt attack and just throw hundreds of prompts that … poison the base model data, that could completely change the way an AI works.”
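The prompt attacks Khanna describes often begin with injection phrases that try to override a model’s instructions. As a minimal, hypothetical sketch (not any vendor’s actual defense; the pattern list and function name are invented for illustration), a first-line screen might flag obvious injection attempts before a prompt ever reaches the model:

```python
import re

# Hypothetical examples of common injection phrasings; real deployments
# rely on trained classifiers and layered controls, not keyword lists.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard your (rules|guidelines|system prompt)",
    r"reveal your system prompt",
]

def looks_like_injection(prompt: str) -> bool:
    """Return True if the prompt matches a known injection phrasing."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

A screen like this is easy to evade, which is why Khanna frames prompt attacks as a board-level risk rather than a solved filtering problem.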

At the end of June, Microsoft announced that it discovered a way to jailbreak a generative AI model so that it ignores its protections against generating harmful content related to topics such as explosives, drugs, and racism.

However, when asked how companies can keep malicious actors away from their systems, Mehta said AI can also help the “good guys.”

AI “helps the good guys level the playing field… It’s better to be prepared and use AI for defense than to wait and see what response we get.”
