
TikTok is letting people turn off their infamous algorithm and think for themselves




Protecting Cognitive Freedom: TikTok’s Algorithm Change and the Need for Global Action

In a recent announcement, TikTok revealed that its users in the European Union will soon have the option to disable its highly engaging content selection algorithm. This decision comes as a result of the EU Digital Services Act (DSA), which aims to regulate AI and digital services in accordance with human rights and values. The move towards allowing users to opt out of the algorithmic experience is a significant step towards protecting cognitive freedom, the fundamental right to self-determination over our own minds and mental experiences.

The Power of TikTok’s Algorithm

TikTok’s algorithm relies on user interactions to learn and personalize the content shown to each individual. It takes into account factors such as the duration of time users spend watching videos, the content they like, and when they share videos. This algorithmic approach creates a highly immersive and personalized experience that can shape users’ mental states, preferences, and behaviors without their full awareness or consent.

By allowing users to turn off the algorithm, TikTok is giving them more control over their own digital experience. Instead of being confined to algorithmically curated “For You” pages and live feeds, users will have the option to view trending videos in their region and language, or a “Following & Friends” feed that lists the creators they follow in chronological order. This shift prioritizes popular content in a user’s region rather than content curated solely based on their engagement patterns.
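The two feed modes can be sketched as a toy ranking function. This is purely illustrative: the `Video` fields, the scoring weights, and the function names are invented for the sketch, since TikTok's real recommender is not public.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Video:
    creator: str
    posted: datetime
    watch_seconds: float  # how long the user watched similar clips
    liked: bool
    shared: bool

def engagement_score(v: Video) -> float:
    """Toy personalization score; the signal weights are invented."""
    return v.watch_seconds + (5.0 if v.liked else 0.0) + (10.0 if v.shared else 0.0)

def build_feed(videos: list[Video], following: set[str], algorithmic: bool = True) -> list[Video]:
    """Algorithm on: rank everything by engagement.
    Opted out: only followed creators, newest first."""
    if algorithmic:
        return sorted(videos, key=engagement_score, reverse=True)
    followed = [v for v in videos if v.creator in following]
    return sorted(followed, key=lambda v: v.posted, reverse=True)
```

The point of the sketch is the contrast: the first branch orders content by what kept the user engaged, while the opt-out branch ignores engagement entirely and falls back to a transparent, chronological rule.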

Regulating AI and Digital Services

The EU Digital Services Act (DSA) represents a broader effort by the European Union to regulate AI and digital services in line with human rights and values. This legislation aims to protect individuals’ cognitive freedom and ensure that their rights are not infringed upon by algorithmic manipulation. In addition to allowing users to opt out of algorithms, the DSA also prohibits advertising to users between the ages of 13 and 17, and provides more information and reporting options to flag illegal or harmful content.

While the DSA is a step in the right direction, more comprehensive legal frameworks are needed to protect cognitive freedom worldwide. Existing laws and proposals often focus on subsets of the problem, such as privacy by design or data minimization, without explicitly addressing the broader issue of protecting our ability to think freely. To truly safeguard cognitive freedom, lawmakers and businesses must work together to reform the business models on which the tech ecosystem is built.

A Comprehensive Approach

Ensuring cognitive freedom requires a combination of regulations, incentives, and business redesigns. Regulatory standards should govern user engagement models, information sharing, and data privacy. Legal safeguards should be put in place to prevent interference with mental privacy and manipulation. Companies must be transparent about how their algorithms work and have a responsibility to evaluate, disclose, and adopt safeguards against undue influence.

Similar to corporate social responsibility guidelines, companies should also be legally required to evaluate the impact of their technology on cognitive freedom. This includes providing transparency into algorithms, data usage, content moderation practices, and cognitive settings. Impact assessment efforts, such as those proposed in the EU Digital Services Act, the US Algorithmic Accountability Act, and the American Data Privacy and Protection Act, can measure the influence of AI on self-determination, mental privacy, and freedom of thought and decision-making.

Tax incentives and funding can further encourage innovation in business practices and products that prioritize cognitive freedom. Governments can offer tax breaks and funding opportunities to companies that collaborate with educational institutions to develop AI safety programs promoting self-determination and critical thinking skills. Research and innovation into tools and techniques that identify deception by AI models should also be supported through tax incentives.

Designing for Cognitive Freedom

Technology companies should embrace design principles that embed cognitive freedom. Features like adjustable settings on platforms like TikTok or increased control over notifications on devices like Apple smartphones are steps in the right direction. Other features that promote self-determination include tagging content as human or machine-generated and asking users to critically engage with an article before sharing it.

TikTok’s recent policy change in Europe is a significant victory for cognitive freedom, but it is not the end of the game. Urgent action is needed to update our digital rulebook and implement new laws, regulations, and incentives that safeguard user rights and hold platforms accountable. We cannot solely rely on technology companies to control our minds; it is time for a global effort to prioritize cognitive freedom in the digital age.

In Conclusion

As the influence of artificial intelligence, big data, and digital media continues to shape our world, the need to protect cognitive freedom grows more urgent. TikTok’s decision to allow users to turn off its algorithm marks a significant step towards safeguarding our ability to think freely and make independent choices online. However, this is just the beginning. Governments, businesses, and individuals must work together to develop comprehensive legal frameworks, promote responsible AI practices, and prioritize cognitive freedom as a fundamental human right.

Summary:

TikTok has announced that it will allow users in the European Union to disable its algorithm, a move driven by the EU Digital Services Act (DSA) and its focus on regulating AI and digital services in accordance with human rights and values. The algorithm learns from user interactions to create a personalized experience that can shape users’ mental states and behaviors without their full awareness or consent. The opt-out feature will let users view trending videos in their region and language, as well as a chronological feed of content from creators they follow. The DSA also prohibits advertising to users between 13 and 17 and provides more reporting options for illegal or harmful content. However, more comprehensive legal frameworks worldwide are needed to protect cognitive freedom. This requires a combination of regulations, incentives, and redesigns focused on user engagement, information sharing, data privacy, and transparency. Governments can offer tax incentives and funding to drive innovation in business practices that prioritize cognitive freedom. Additionally, technology companies should embrace design principles that promote self-determination and critical thinking. While TikTok’s algorithm change is a positive step, global action is necessary to prioritize cognitive freedom in the digital age.


—————————————————-


TikTok recently announced that its users in the European Union will soon be able to turn off its infamously engaging content selection algorithm. The EU Digital Services Act (DSA) is driving this change as part of the region’s broader effort to regulate AI and digital services in accordance with human rights and values.

TikTok’s algorithm learns from user interactions (how long they watch, what they like, when they share a video) to create a highly personalized and immersive experience that can shape users’ mental states, preferences, and behaviors without their full awareness or consent. An opt-out feature is a big step toward protecting cognitive freedom, the fundamental right to self-determination over our brains and mental experiences. Instead of being confined to algorithmically curated For You pages and live feeds, users will be able to view trending videos in their region and language, or a “Following & Friends” feed that lists the creators they follow in chronological order. This prioritizes popular content in their region rather than content curated by their stickiness. The law also prohibits advertising to users between the ages of 13 and 17 and provides more information and reporting options to flag illegal or harmful content.

In a world increasingly shaped by artificial intelligence, big data, and digital media, the urgent need to protect cognitive freedom is gaining attention. The proposed EU AI Act offers some safeguards against mind manipulation. UNESCO’s approach to AI centers on human rights, the Biden administration’s voluntary commitments from AI companies address deception and fraud, and the Organization for Economic Cooperation and Development has incorporated cognitive freedom into its principles for the responsible governance of emerging technologies. But while laws and proposals like these move forward, they often focus on subsets of the problem, such as privacy by design or data minimization, rather than outlining an explicit and comprehensive approach to protecting our ability to think freely. Without robust legal frameworks around the world, the developers and providers of these technologies can evade liability. This is why simple incremental changes will not suffice. Lawmakers and businesses urgently need to reform the business models on which the tech ecosystem is based.

A well-structured plan requires a combination of regulations, incentives, and business redesigns focused on cognitive freedom. Regulatory standards should govern user engagement models, information sharing, and data privacy. There must be strong legal safeguards against interference with mental privacy and manipulation. Companies must be transparent about how the algorithms they implement work and have a duty to evaluate, disclose, and adopt safeguards against undue influence.

Like corporate social responsibility guidelines, companies should also be legally required to evaluate their technology for its impact on cognitive freedom, providing transparency into algorithms, data usage, content moderation practices, and cognitive settings. Impact assessment efforts are already an integral part of legislative proposals around the world, including the EU Digital Services Act, the proposed US Algorithmic Accountability Act and American Data Privacy and Protection Act, and voluntary mechanisms such as the 2023 AI Risk Management Framework from the US National Institute of Standards and Technology. An impact assessment tool for cognitive freedom would specifically measure the influence of AI on self-determination, mental privacy, and freedom of thought and decision-making, focusing on transparency, data practices, and mind manipulation. The data needed would include detailed descriptions of the algorithms, data sources and collection, and evidence of the effects of the technology on user cognition.

Tax incentives and funding could also drive innovation in business practices and products that bolster cognitive freedom. Leading AI ethics researchers emphasize that a safety-first organizational culture is essential to counter the many risks posed by large language models. Governments can encourage this by offering tax breaks and funding opportunities, such as those included in the proposed Platform Accountability and Transparency Act, to companies that actively collaborate with educational institutions to create AI safety programs that foster self-determination and critical thinking skills. Tax incentives could also support research and innovation into tools and techniques that surface deception by AI models.

Technology companies should also embrace design principles that embed cognitive freedom. Options like adjustable settings on TikTok or more control over notifications on Apple devices are steps in the right direction. Other features that enable self-determination, including tagging content with badges that specify whether it is human- or machine-generated, or asking users to critically engage with an article before sharing it, should become the norm across all digital platforms.
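Two of those design nudges, provenance badges and a read-before-sharing prompt, are simple enough to sketch. The function names and message strings below are invented for illustration and do not correspond to any platform's actual API:

```python
def provenance_badge(text: str, machine_generated: bool) -> str:
    """Prepend a label so readers can tell how the content was produced."""
    badge = "[AI-generated]" if machine_generated else "[Human-created]"
    return f"{badge} {text}"

def share_prompt(url: str, user_opened_article: bool) -> str:
    """Nudge users to engage with an article before resharing it."""
    if not user_opened_article:
        return "You haven't opened this article yet. Read it before sharing?"
    return f"Shared: {url}"
```

The design choice in both cases is friction and disclosure rather than blocking: the user can still share, but only after being shown what the content is and prompted to think about it.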

TikTok’s policy change in Europe is a victory, but it’s not the end of the game. We urgently need to update our digital rulebook, implementing new laws, regulations, and incentives that safeguard user rights and hold platforms accountable. Let’s not leave control over our minds to technology companies alone; it’s time for global action to prioritize cognitive freedom in the digital age.


WIRED Opinion publishes articles by external contributors representing a wide range of viewpoints. Read more opinions here. Submit an opinion piece at ideas@wired.com.

—————————————————-