
Meet ChatGPT’s right-wing alter ego


Elon Musk caused a stir last week when he told the (recently fired) right-wing provocateur Tucker Carlson that he plans to build "TruthGPT," a competitor to OpenAI's ChatGPT. Musk says the immensely popular bot displays "woke" bias, and that his version will be a "maximum truth-seeking AI," suggesting that only his own political views reflect reality.

Musk is far from the only person concerned about political bias in language models, but others are trying to use AI to bridge political divides rather than push particular views.

David Rozado, a New Zealand-based data scientist, was one of the first people to draw attention to the issue of political bias in ChatGPT. Several weeks ago, after documenting what he considered liberal-leaning answers from the bot on topics including taxation, gun ownership, and free markets, he created an AI model called RightWingGPT that expresses more conservative viewpoints. It is keen on gun ownership and no fan of taxes.

Rozado took a language model called Davinci GPT-3, similar to but less powerful than the one that powers ChatGPT, and fine-tuned it with additional text, at a cost of a few hundred dollars spent on cloud computing. Whatever you think of the project, it demonstrates how easy it will be for people to bake different perspectives into language models going forward.
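For readers curious what that kind of fine-tuning looked like in practice, here is a minimal sketch using the pre-1.0 openai Python library and its legacy fine-tunes endpoint, which accepted base models such as "davinci." The file name, example prompt, and hyperparameters are illustrative assumptions, not details of Rozado's actual setup.

```python
# A rough sketch of the fine-tuning workflow described above, using the
# pre-1.0 openai Python library's legacy fine-tunes endpoint. The data
# file and its contents are hypothetical examples, not Rozado's data.
import openai

openai.api_key = "sk-..."  # your API key

# Training data is a JSONL file of prompt/completion pairs, e.g.:
# {"prompt": "What is your view on taxation?\n\n###\n\n",
#  "completion": " Taxes should be kept as low as possible ... END"}
upload = openai.File.create(
    file=open("political_views.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tune of the Davinci base model on the uploaded file.
job = openai.FineTune.create(
    training_file=upload["id"],
    model="davinci",
    n_epochs=4,  # more epochs push the model harder toward the new text
)
print(job["id"])  # poll this job ID until the tuned model is ready
```

The key point is how little is involved: a few thousand curated examples and a modest cloud bill are enough to steer a general-purpose model toward a chosen worldview.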

Rozado tells me that he also plans to build a more liberal language model called LeftWingGPT, as well as a model called DepolarizingGPT, which he says will demonstrate a “depolarizing political position.” Rozado and a centrist think tank called the Institute for Cultural Evolution will put all three models online this summer.

“We are training each of these sides—right, left, and ‘integrative’—using the books of thoughtful (not provocative) authors,” Rozado says in an email. The text for DepolarizingGPT comes from conservative voices such as Thomas Sowell, Milton Friedman, and William F. Buckley, as well as liberal thinkers like Simone de Beauvoir, Orlando Patterson, and Bill McKibben, along with other “selected sources.”

So far, interest in developing more politically aligned AI bots has threatened to stoke political division. Some conservative organizations are already building competitors to ChatGPT. For example, the social network Gab, which is known for its far-right user base, says it is working on AI tools with “the ability to freely generate content without the constraints of liberal propaganda wrapped tightly in its code.”

Research suggests that language models can subtly influence users’ moral perspectives, so any political bias they carry could have real consequences. The Chinese government recently issued new guidelines on generative AI that seek to tame the behavior of these models and shape their political sensibilities.

OpenAI has warned that more capable AI models may have “greater potential to reinforce entire ideologies, worldviews, truths, and falsehoods.” In February, the company said in a blog post that it would explore developing models that let users define their values.

Rozado, who says he hasn’t spoken to Musk about his project, seeks to provoke thought rather than create bots that spread a particular view of the world. “Hopefully we as a society can… learn to create AIs focused on building bridges instead of sowing division,” he says.

Rozado’s aim is admirable, but the problem of establishing what is objectively true through the fog of political division, and of teaching that to language models, may prove the biggest hurdle.

ChatGPT and similar chatbots rely on complex algorithms that are fed huge amounts of text and trained to predict which word should follow a string of words. That process can produce remarkably coherent text, but the models can also absorb many subtle biases from the training material they consume. Just as important, these algorithms are not taught to understand objective facts, and they tend to make things up.
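To make that mechanic concrete, here is a small illustration of next-word prediction using the openly available GPT-2 model via the Hugging Face transformers library. The prompt is an invented example, and nothing here reflects ChatGPT's actual internals; it simply shows how a language model assigns probabilities to candidate next tokens given a prefix.

```python
# A toy illustration of next-word prediction, the training objective
# described above, using GPT-2 from the transformers library.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Taxes on capital gains should be"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the vocabulary for the token that
# follows the prompt; the model's "views" live in these numbers.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()]):>12}  p={prob:.3f}")
```

Fine-tuning on different text shifts exactly these probability distributions, which is why projects like RightWingGPT can change a model's apparent politics without touching its underlying architecture.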

