
The dangers of disinformation lurk in the EU’s media freedom law


The author is an intern at the Digital Civil Society Lab at Stanford University and a former senior content moderator at Twitter

Twitter recently came under fire over changes to its verification system that applied the labels “state-affiliated media” and “government-funded media” to accounts run by the US public service broadcasters PBS and NPR and the UK’s BBC. After an uproar over press freedom, the social media company was forced to change the labels to “publicly funded media.”

This furor not only attracted ridicule (a former BBC managing director commented that it looked as if “the guy with work experience” had been left to do the tagging), it also exposed the pitfalls of the politics of platform content moderation. A self-regulatory system without accountability or transparency has led the United States down a dangerous path. In contrast, Europe’s regulatory efforts, embodied in the Digital Services Act, look promising. But the DSA’s attempt to address systemic risks such as disinformation could be undermined before it has a chance to work: amendments proposed in the EU’s draft European Media Freedom Act would grant a full exemption from content moderation obligations to any organization classified as “media”.

I know the problems this could cause because I used to work in Twitter’s trust and safety department, writing and enforcing moderation policies. During my time there, from 2019 to 2021, I had a unique vantage point on momentous world events. I saw firsthand the devastating role social media played in fueling the violent attack on the US Capitol, and I later provided evidence as a whistleblower to the US Congress.

The danger is that Article 17 of the EMFA could create a potentially limitless category of bad actors who can simply declare themselves “media” entities. It would be a cheap and easy way for disinformation campaigns to legitimize themselves. At a time when deepfake news outlets are establishing themselves, generative artificial intelligence is running rampant in news content, and hostile states are actively exploiting social media features such as the recent changes to Twitter verification, we should not be creating new weapons for information warfare.

If the media exemption in the EMFA passes, content moderators like me will have their hands tied by law. For example, when Vladimir Putin invaded Ukraine, the Russian state broadcaster RT opened a Twitch account. At Twitch, where I was working at the time, conversations immediately began about how to apply the company’s new policy on harmful disinformation, and RT’s account was swiftly banned.

Under the EMFA, companies would no longer have this freedom. Disinformation outlets claiming to be legitimate “media” would be exempt from swift action, and their propaganda would be amplified.

Instead, regulators should work with moderators who have spent years wrangling with how to verify news outlets and define concepts such as trustworthiness. They have developed metrics and made mistakes along the way. Their knowledge and experience are key to getting this right. Through the DSA’s transparency requirements, we can begin to fix platforms, hold them accountable, and ensure redress mechanisms are effective.

If there’s one thing I’ve learned in my career, it’s that you can write any words you want and call them “policy”. But if you can’t consistently enforce those policies in a principled manner, they are meaningless.

The whole world is watching to see how the DSA works: EU institutions must focus on ensuring this historic regulation succeeds. Allowing the EMFA to create a dangerous and unworkable loophole, on the other hand, is a surefire path to failure.

