
Procedural fairness can address the trust and legitimacy problem of generative AI



The much-hyped advent of generative AI has reignited a familiar debate about trust and safety: Can tech executives be trusted to keep society’s best interests at heart?

Because its training data is created by humans, AI is inherently prone to bias and therefore subject to our own imperfect and emotional ways of viewing the world. We are well aware of the risks, from reinforcing discrimination and racial inequities to promoting polarization.

OpenAI CEO Sam Altman has asked for our “patience and good faith” as the company works to “get it right.”

For decades, we’ve patiently placed our trust in tech executives at our peril: They created the technology, so we believed them when they said they could fix it. Yet trust in tech companies continues to plummet, and according to the 2023 Edelman Trust Barometer, 65% of people globally worry that technology will make it impossible to know whether what they see or hear is real.

It’s time for Silicon Valley to take a different approach to earning our trust, one that has proven its worth in the nation’s legal system.

A procedural fairness approach to trust and legitimacy

Grounded in social psychology, procedural justice is based on research showing that people view institutions and decision makers as more trustworthy and legitimate when they are heard and experience neutral, unbiased, and transparent decision-making.

The four key components of procedural justice are:

  • Neutrality: Decisions are impartial and guided by transparent reasoning.
  • Respect: Everyone is treated with respect and dignity.
  • Voice: Everyone gets a chance to tell their side of the story.
  • Trustworthiness: Decision makers convey trustworthy motives and concern for those affected by their decisions.

Using this framework, police have improved trust and cooperation in their communities, and some social media companies are beginning to use these insights to shape governance and moderation approaches.

Here are some ideas on how AI companies can adapt this framework to build trust and legitimacy.

Build the right team to address the right questions

As UCLA professor Safiya Noble argues, the questions surrounding algorithmic bias cannot be resolved by engineers alone, because they are systemic social problems that require humanistic perspectives from outside any single company to ensure broad conversation, consensus, and ultimately regulation, both self-imposed and governmental.

In “System Error: Where Big Tech Went Wrong and How We Can Reboot,” three Stanford professors critique computer science education and engineering culture for their obsession with optimization, which often neglects the core values of a democratic society.

In a blog post, OpenAI says it values society’s input: “Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”

However, the company’s hiring page and founder Sam Altman’s tweets show that OpenAI is recruiting large numbers of machine learning engineers and computer scientists, because “ChatGPT has an ambitious roadmap and is bottlenecked by engineering.”

Are these computer scientists and engineers equipped to make decisions that, as OpenAI has said, “will require much more caution than society tends to apply to new technologies”?

Technology companies should hire multidisciplinary teams that include social scientists who understand the human and social impacts of technology. With a range of perspectives on how to train AI applications and implement safety guardrails, companies can articulate transparent reasoning for their decisions. This can, in turn, strengthen the public’s perception of the technology as neutral and trustworthy.

Include external perspectives

Another element of procedural fairness is giving people the opportunity to take part in the decision-making process. In a recent blog post about how OpenAI is addressing bias, the company said it seeks “outside input on our technology,” pointing to a recent red-teaming exercise, a process of assessing risk through an adversarial approach.

While red teaming is an important process for assessing risk, it must include outside input. In OpenAI’s red-teaming exercise, 82 of the 103 participants were employees. Of the remaining 21 participants, the majority were computer science academics from predominantly Western universities. To gain genuinely diverse points of view, companies must look beyond their own employees, disciplines, and geography.

They can also enable more direct feedback into AI products by giving users greater control over how the AI performs. They might also consider providing opportunities for public comment on new policies or product changes.

Ensure transparency

Companies must ensure that all security rules and related processes are transparent and provide credible reasons for how decisions were made. For example, it is important to provide the public with information about how applications are trained, where the data is drawn from, what role humans play in the training process, and what layers of security are in place to minimize misuse.

Allowing researchers to audit and understand AI models is key to building trust.

Altman got it right in a recent ABC News interview when he said, “I think society has a limited amount of time to figure out how to react to that, how to regulate it, how to handle it.”

Through a procedural fairness approach, rather than the opacity and blind-faith approach of their technology predecessors, companies building AI platforms can engage society in the process and earn, not demand, trust and legitimacy.



