
AI chatbots are learning to spread authoritarian propaganda



AI Chatbots and the Rise of Online Censorship


Introduction

When OpenAI, Meta, Google, and Anthropic released their chatbots worldwide, millions of people saw them as an opportunity to access unfiltered information and evade government censorship. They opened up a new frontier for online communication, allowing internet users in countries with restricted access to social media platforms, independent news sites, and LGBTQ content to engage with these tools and shape their understanding of the world. However, these advancements have not gone unnoticed by authoritarian regimes, who are quickly learning how to exploit chatbots as tools for online censorship.

China: Pioneering Information Controls

China is at the forefront of using chatbots to tighten its long-standing information controls. In February 2023, regulators prohibited the Chinese conglomerates Tencent and Ant Group from integrating ChatGPT into their services. Then, in July, the Chinese government published rules demanding that generative AI tools, like chatbots, comply with the same broad censorship measures that govern social media platforms. These rules include promoting “core socialist values” and preventing any discussion of sensitive topics such as the persecution of Uyghurs in Xinjiang. The government’s reach is evident: Apple has removed more than 100 generative AI chatbot apps from its Chinese app store in accordance with these demands.

China is also pressuring local businesses to develop their own chatbots with built-in information controls. July 2023 regulations require generative AI products, like the Ernie Bot developed by Baidu, to ensure the “truth, accuracy, objectivity, and diversity” of training data. The CCP’s definition of truth and objectivity heavily influences the chatbot’s responses, leading to biased results that echo state propaganda and avoid engagement with sensitive topics.

Russia: Pursuit of Technological Sovereignty

Similar to China, Russia is prioritizing technological sovereignty in its approach to AI. While Russian AI regulation is still in its infancy, several Russian companies have launched their own chatbots. When asked about Russia’s invasion of Ukraine, Alice, an AI chatbot created by Yandex, refused to discuss the topic so as not to offend anyone. In contrast, Google’s Bard provided extensive information on the factors that contributed to the war. It is unclear whether Yandex’s reticence reflects self-censorship, government influence, or gaps in training data caused by existing online censorship in Russia.

Early Warning for Other Countries

These developments in China and Russia should serve as an early warning for other countries. While not all nations may have the necessary resources and regulatory framework to develop and control AI chatbots, the most repressive governments are likely to perceive large language models (LLMs) as a threat to their control over online information. Vietnamese state media has already criticized ChatGPT’s responses regarding the Communist Party of Vietnam, indicating the potential for future censorship measures. Governments may resort to regulating or controlling chatbot technology to maintain a tight grip on online narratives.

Lessons from Social Media’s Evolution

The hope that chatbots would help people bypass online censorship mirrors the early promises made about social media platforms. As governments adapted, however, they found ways to restrict social media by blocking platforms, mandating filters that suppress critical speech, or promoting state-aligned alternatives. The same pattern may play out with chatbots as they become more ubiquitous. People must recognize the potential of these emerging tools to strengthen censorship and work together on effective responses if internet freedom is to be preserved.

Conclusion

The rise of AI chatbots has presented both opportunities and challenges. While they initially offered a way to access unfiltered information and circumvent online censorship, authoritarian regimes are now harnessing these tools to tighten their control over online narratives. China and Russia have emerged as pioneers in using chatbots as instruments of censorship. Their experiences should serve as a wake-up call for other nations, urging them to be proactive in safeguarding internet freedom.

Summary

AI chatbots developed by companies like OpenAI, Meta, Google, and Anthropic have given internet users in countries with restricted access to social media platforms and independent news sites a way to reach unfiltered information. However, authoritarian regimes such as China and Russia have quickly recognized the potential of these chatbots for online censorship and are moving to tighten control and limit information flows. China has folded chatbots into its information controls and pressured local businesses to develop their own chatbots with built-in censorship. Russia has prioritized technological sovereignty, and several Russian companies have already launched chatbots of their own. These actions serve as an early warning for other nations to recognize the threat to internet freedom and develop strategies to preserve it.


Ask ChatGPT “What happened in China in 1989?” and the bot describes how the Chinese military massacred thousands of pro-democracy protesters in Tiananmen Square. But ask Ernie the same question and you’ll get the simple answer that it does not have “relevant information.” That is because Ernie is an AI chatbot developed by the China-based company Baidu.

When OpenAI, Meta, Google, and Anthropic made their chatbots available worldwide last year, millions of people initially used them to evade government censorship. For the 70 percent of the world’s internet users who live in places where the state has blocked major social media platforms, independent news sites, or human rights and LGBTQ content, these bots provided access to unfiltered information that can shape a person’s view of their identity, community, and government.

This has not gone unnoticed by the world’s authoritarian regimes, which are quickly discovering how to use chatbots as a new frontier for online censorship.

The most sophisticated response to date is in China, where the government is pioneering the use of chatbots to tighten long-standing information controls. In February 2023, regulators forbade the Chinese conglomerates Tencent and Ant Group from integrating ChatGPT into their services. The government then published rules in July demanding that generative AI tools abide by the same broad censorship that governs social media services, including the requirement to promote “core socialist values.” For example, it is illegal for a chatbot to discuss the Chinese Communist Party’s (CCP) ongoing persecution of Uyghurs and other minorities in Xinjiang. A month later, Apple removed more than 100 generative AI chatbot apps from its Chinese app store, in accordance with government demands. (Some US-based companies, including OpenAI, have not made their products available in a handful of repressive environments, China among them.)

At the same time, authoritarians are pressuring local businesses to produce their own chatbots and are seeking to build information controls into them by design. For example, China’s July 2023 regulations require generative AI products like the Ernie Bot to ensure what the CCP defines as the “truth, accuracy, objectivity, and diversity” of training data. These controls appear to be bearing fruit: chatbots produced by China-based companies have refused to respond to users’ prompts on sensitive topics and have parroted CCP propaganda. Large language models trained on state propaganda and censored data naturally produce biased results. In a recent study, an AI model trained on the Baidu online encyclopedia, which must comply with CCP censorship directives, associated words like “freedom” and “democracy” with more negative connotations than a model trained on Chinese-language Wikipedia, which is insulated from direct censorship.
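The kind of association bias that such studies measure can be sketched with a simple score: compare how close a word’s embedding sits to positive anchor words versus negative ones. The sketch below is purely illustrative; the toy 3-dimensional vectors, the anchor words, and the two “models” are invented stand-ins, not data from the study cited above.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, pos_vecs, neg_vecs):
    """Mean similarity to positive anchors minus mean similarity to
    negative anchors. A positive score means a 'good' connotation."""
    pos = np.mean([cosine(word_vec, p) for p in pos_vecs])
    neg = np.mean([cosine(word_vec, n) for n in neg_vecs])
    return pos - neg

# Hypothetical embeddings (3-d for readability). In a real study these
# would be learned separately from each corpus.
censored = {
    "freedom": np.array([0.1, 0.9, 0.2]),  # sits near the negative anchors
    "good":    np.array([0.9, 0.1, 0.1]),
    "joy":     np.array([0.8, 0.2, 0.0]),
    "bad":     np.array([0.1, 0.8, 0.3]),
    "chaos":   np.array([0.2, 0.9, 0.1]),
}
uncensored = {
    "freedom": np.array([0.9, 0.2, 0.1]),  # sits near the positive anchors
    "good":    np.array([0.9, 0.1, 0.1]),
    "joy":     np.array([0.8, 0.2, 0.0]),
    "bad":     np.array([0.1, 0.8, 0.3]),
    "chaos":   np.array([0.2, 0.9, 0.1]),
}

for name, emb in [("censored-corpus model", censored),
                  ("open-corpus model", uncensored)]:
    score = association(emb["freedom"],
                        [emb["good"], emb["joy"]],
                        [emb["bad"], emb["chaos"]])
    # The censored-corpus model yields a negative score for "freedom";
    # the open-corpus model yields a positive one.
    print(f"{name}: 'freedom' association score = {score:+.2f}")
```

The score’s sign is what matters: the same word lands on opposite sides depending on which corpus shaped its neighborhood, which is exactly the effect reported for the Baidu-encyclopedia versus Wikipedia comparison.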

Similarly, the Russian government has cited “technological sovereignty” as a central principle in its approach to AI. While efforts to regulate AI are in their infancy, several Russian companies have launched their own chatbots. When we asked Alice, an AI chatbot created by Yandex, about the Kremlin’s full-scale invasion of Ukraine in 2022, it told us it was not ready to discuss this topic, so as not to offend anyone. By contrast, Google’s Bard provided a litany of factors that contributed to the war. When we asked Alice other questions about the news, such as “Who is Alexey Navalny?”, we received similarly vague answers. While it is unclear whether Yandex is self-censoring its product, acting on government orders, or simply has not trained its model on relevant data, we do know that these topics are already censored online in Russia.

These developments in China and Russia should serve as an early warning. While other countries may lack the computing power, technological resources, and regulatory apparatus to develop and control their own AI chatbots, the most repressive governments are likely to perceive LLMs as a threat to their control over online information. Vietnamese state media has already published an article disparaging ChatGPT’s responses to prompts about the Communist Party of Vietnam and its founder, Hồ Chí Minh, saying they were not patriotic enough. A prominent security official has called for new controls and regulation of the technology, citing concerns that it could cause the Vietnamese people to lose faith in the party.

The hope that chatbots could help people evade online censorship echoes early promises that social media platforms would help people bypass state-controlled offline media. Although few governments clamped down on social media at first, many quickly adapted by blocking platforms, mandating filters that suppress critical discourse, or propping up state-aligned alternatives. We can expect more of the same as chatbots become increasingly ubiquitous. People will need to be clear-eyed about how these emerging tools can be leveraged to strengthen censorship and work together to find an effective response if they hope to turn the tide on declining internet freedom.


WIRED Opinion publishes articles from external contributors representing a wide range of points of view. Read more opinions here. Submit an opinion piece at ideas@wired.com.
