
Why CEOs Should Not Be Trusted to Regulate AI

The Need for Independent Oversight in AI Regulation

The race to dominate artificial intelligence (AI) is no longer confined to the marketplace; it now extends to Washington and Brussels. Tech companies have recognized that the rules governing the development and integration of AI products will have an existential impact on their businesses. Their CEOs are now trying to set the tone, arguing that they are best placed to regulate the technologies they produce. Lawmakers should not be distracted by their talking points.

Alarmingly, lawmakers are eager to defer to companies and to seek their guidance on regulation, even though these are the very companies they should be regulating. Industry calls for AI regulation have bordered on the apocalyptic, with scientists warning that their creations are too powerful and could go rogue, and that AI poses a threat to the survival of humanity comparable to nuclear warfare. But these alarms should not lead lawmakers to take their advice from CEOs.

The Importance of Independent Oversight

Eric Schmidt, the former CEO of Google, argues that companies are the only ones equipped to develop guardrails and that governments lack the expertise. But the argument does not hold water: lawmakers and executives are not experts in agriculture, crime-fighting, or drug prescribing either, yet they regulate all of these activities.

It is therefore crucial that policymakers move beyond pleasantries and define specific goals and methods for AI regulation. Policymakers must not let tech CEOs shape and control the narrative, let alone the process. A decade of technological upheaval has highlighted the importance of independent oversight, a principle that is even more critical when power over technologies like AI is concentrated in a handful of companies.

Breaking Monopolies on Access to Proprietary Information

Schmidt’s argument also points to the first challenge: breaking monopolies on access to proprietary information. With independent research, realistic risk assessments, and enforcement guidelines for existing regulations, a debate about the need for new measures would be grounded in facts.

It is also important to remember that business people are primarily concerned with profit rather than social impact; their incentives alone make self-regulation untrustworthy.

Guardrails for AI Regulation

In the absence of clear regulatory guidelines, companies have free rein to regulate themselves, which results in self-serving rules: companies are neither independent nor capable of creating counterweights to their own power. Regulators therefore need to think about the type of guardrails AI requires.

The industry and regulators also need to address questions about AI discrimination and bias. Prioritizing the prevention of existential risks overshadows the much-needed anti-discrimination and anti-bias work that should be happening today.

Guardrails for AI regulation should include:

1. Public accountability and transparency – Companies should be open about where they use AI, how they use it, and its expected impact on their business.

2. Independent oversight – Independent regulators should oversee the industry’s use of AI and hold companies accountable for how their systems behave.

3. Fairness and non-discrimination – Protecting individuals from discrimination by AI systems and programs should be a priority.

4. Explainability of AI decisions – Companies must be able to explain how their AI systems reach decisions, including the algorithms and key factors behind them (see the sketch after this list).

5. Ethical frameworks – AI should be guided by principles aligned with ethical guidelines and constitutional provisions.

6. Risk and benefit analysis – Companies need to conduct a thorough risk and benefit analysis of their AI systems and programs.

7. Ongoing improvement – Companies should continuously monitor their AI systems, evaluate their impact, and make the changes needed to raise ethical standards over time.
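To make guardrail 4 concrete, here is a minimal sketch of what decision explainability could look like in practice, using scikit-learn’s permutation importance. The lending scenario and feature names are hypothetical illustrations, not a description of any real system; a real audit would run against a company’s production model and data.

```python
# A minimal sketch of decision explainability (guardrail 4), assuming
# scikit-learn is available. The "loan approval" framing and feature
# names are hypothetical stand-ins for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "age", "zip_code_index"]
X = rng.normal(size=(1000, 4))
# Synthetic label driven mostly by income and debt_ratio.
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much accuracy drops when a feature is
# shuffled, i.e. which inputs the model's decisions actually depend on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda item: -item[1]):
    print(f"{name}: {score:.3f}")
```

A report like this, alongside plain-language explanations of individual decisions, is the kind of artifact regulators could require companies to produce.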

How Regulating AI is Different

AI is different from other regulated sectors. Some industries do self-regulate, but the record is not encouraging: self-regulation in the financial sector proved ineffective in the run-up to the 2008 global financial crisis, and the risks posed by AI are of a different kind.

AI is a transformative technology whose impact on workplaces, society, and individuals is hard to overstate. It is also not foolproof: the technology is known to make mistakes, and it inherits bias because the data used to train these models is itself not free of bias. Hence it is essential to regulate AI and its usage.
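As a hedged illustration of how such bias can be measured rather than merely asserted, the following sketch computes a demographic parity gap, the difference in positive-prediction rates between groups. The protected attribute and model outputs are synthetic stand-ins, assuming only NumPy.

```python
# A minimal sketch of a bias audit, assuming only NumPy. The protected
# attribute and predictions are synthetic stand-ins; a real audit would
# use the deployed model's outputs on real cases.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=2000)  # hypothetical 0/1 protected attribute
# Synthetic predictions that skew against group 1.
preds = (rng.random(2000) < np.where(group == 0, 0.55, 0.40)).astype(int)

rate_0 = preds[group == 0].mean()
rate_1 = preds[group == 1].mean()
print(f"positive rate, group 0: {rate_0:.2f}")
print(f"positive rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap: {abs(rate_0 - rate_1):.2f}")
# A large gap is one measurable signal of the disparate impact that
# independent oversight could require companies to report.
```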

But regulation of AI by the industry itself is doomed to fail. Companies may embrace self-regulation for marketing purposes, but their responsibility to shareholders will always take precedence. Independent regulatory oversight of AI is therefore necessary.

Summary

In conclusion, the need for independent oversight of AI regulation cannot be overstated. Tech CEOs should not set the agenda for regulating AI, and lawmakers must not be distracted by their talking points.

Guardrails for AI regulation should include public accountability and transparency, independent oversight, fairness and non-discrimination, explainability of AI decisions, ethical frameworks, risk and benefit analysis, and ongoing improvement. AI is a transformative technology, and regulation is necessary to ensure its ethical use.

Industry self-regulation of AI is doomed to fail because companies’ responsibility to their shareholders will always take priority. Independent regulatory oversight of AI is therefore necessary.

—————————————————-


The author is director of international policy at the Cyber Policy Center at Stanford University and a special adviser to Margrethe Vestager.

Tech companies recognize that the race to dominate AI is being decided not just in the marketplace but also in Washington and Brussels. The rules governing the development and integration of their AI products will have an existential impact on them, but are still up in the air. So executives are trying to get ahead of the process and set the tone, arguing that they are best placed to regulate the very technologies they produce. AI may be new, but the talking points are recycled: they are the same ones Mark Zuckerberg has used about social media and Sam Bankman-Fried offered regarding cryptocurrencies. Such statements should not distract democratic lawmakers again.

Imagine the CEO of JPMorgan explaining to Congress that, because financial products are too complex for lawmakers to understand, banks should decide for themselves how to prevent money laundering, enable fraud detection, and set liquidity-to-loan ratios. He would be laughed out of the room. Angry voters would point out how well self-regulation worked out in the global financial crisis. From big tobacco to big oil, we have learned the hard way that corporations cannot be trusted to make selfless rules; they are neither independent nor capable of creating counterweights to their own power.

Somehow that basic truth has been lost when it comes to AI. Lawmakers are eager to defer to companies and want their guidance on regulation; senators even asked OpenAI CEO Sam Altman to name potential industry leaders to oversee a putative national AI regulator.

Within industry circles, calls for AI regulation have bordered on the apocalyptic. Scientists warn that their creations are too powerful and could go rogue. A recent letter, signed by Altman and others, warned that artificial intelligence posed a threat to the survival of humanity similar to nuclear war. One might think these fears would spur executives to action, but despite signing, virtually none have changed their behavior. Perhaps defining how we think about guardrails around AI is their real goal. How we understand the technology also greatly influences our ability to address questions about the kind of regulation needed. The statements have focused attention on AI’s existential risk. But critics argue that prioritizing the prevention of such risk overshadows the much-needed anti-discrimination and anti-bias work that should be happening today.

Warnings about the catastrophic risks of AI, coming from the very people who could stop pushing their products into society, are disorienting. The open letters make the signatories appear powerless in their desperate pleas, yet those sounding the alarm already have the power to slow or pause the potentially dangerous progression of artificial intelligence.

Former Google CEO Eric Schmidt argues that companies are the only ones equipped to develop guardrails, while governments lack the expertise. But lawmakers and executives aren’t experts in agriculture, crime-fighting, or drug prescribing either, yet they regulate all of these activities. They should certainly not be put off by the complexity of AI; if anything, it should encourage them to take responsibility. And Schmidt has involuntarily reminded us of the first challenge: breaking monopolies on access to proprietary information. With independent research, realistic risk assessments, and enforcement guidelines for existing regulations, a debate about the need for new measures would be based on facts.

Executives’ actions speak louder than their words. Just days after Sam Altman welcomed AI regulation in his testimony before Congress, he threatened to pull the plug on OpenAI’s operations in Europe because of it. When he realized that EU regulators did not take kindly to the threat, he reverted to a charm offensive, pledging to open an office in Europe.

Lawmakers need to remember that business people are primarily concerned with profit rather than social impact. The time has come to move beyond the pleasantries and define specific goals and methods for AI regulation. Policymakers must not let tech CEOs shape and control the narrative, let alone the process.

A decade of technological upheaval has highlighted the importance of independent oversight. This principle is even more important when power over technologies like AI is concentrated in a handful of companies. We should listen to the powerful people who run them, but never take their words at face value. Instead, their grand claims and ambitions should prompt regulators and legislators to act on their own experience: that of the democratic process.


https://www.ft.com/content/5f8b74f7-68b1-4a6c-88bf-d0dd03579149
—————————————————-