
Linda Yaccarino’s EPIC clapback to the EU over 700 community notes, 5K+ images, and MAJOR content purge in Israel-Hamas war – You won’t believe the jaw-dropping details!

The Challenges Faced by X in Addressing Disinformation and Illegal Content

In a recent turn of events, X, the social platform formerly known as Twitter, has come under fire in Europe for its shortcomings in cracking down on disinformation and illegal content. European Commissioner Thierry Breton sent a strongly worded open letter to the company, expressing concern about its failure to handle such content effectively after the deadly Hamas terrorist attack against Israel. X responded with a lengthy letter in which CEO Linda Yaccarino described some of the actions taken, but offered few specific numbers and stopped short of acknowledging the extent of the problem.

X’s Response: Redeploying Resources and Refocusing Teams

Linda Yaccarino’s letter stated that X has “redeployed resources” and “refocused teams” to tackle the issue at hand. However, the response remained high-level and lacked precise figures. Yaccarino mentioned that “shortly after” the attack (no exact time was given), a leadership group convened to discuss X’s response. She highlighted that “tens of thousands” of pieces of content had been removed, that user-generated community notes now appeared on “thousands” of posts, and that “hundreds” of accounts linked to terrorist groups or promoting violence and extremism had been removed. However, she did not estimate the overall amount of content that still required moderation and verification.

Law Enforcement Cooperation and Europol’s Request

The CEO mentioned that X is responding to requests from law enforcement agencies. However, at the time of writing, the company had not received any requests from Europol, the European Union’s law enforcement agency.

Omission of User Concerns and Elon Musk’s Involvement

One significant flaw in X’s response was the failure to acknowledge or address the concerns raised by users. Many users reported graphic videos of terrorist attacks on civilians and misleading posts related to the Israel-Gaza conflict. The response also sidestepped the fact that X’s owner, Elon Musk, had recommended following an account known for spreading anti-Semitic content. While the specific post may have been taken down, numerous screenshots and mentions of it still circulated, highlighting the platform’s struggle to address disinformation effectively.

Media Outlets and EU’s Letter to Meta

The disorder on the platform, including the proliferation of disinformation, didn’t go unnoticed by media outlets. Wired, among others, described X as “drowning in disinformation,” and such reports likely contributed to the EU’s decision to send the open letter in the first place. It’s worth noting that Commissioner Breton had sent a similar letter to Meta a day earlier, and Meta said it had assembled a team to respond and was actively working to keep harmful content off its platforms. The bulk of X’s four-page letter, meanwhile, focused on explaining the company’s existing policies in areas such as its basic rules, public interest exceptions, and illegal content removal.

The Role of Community Notes in Content Control

In the letter, Yaccarino emphasized the role of Community Notes in controlling what is said on the platform. She revealed that more than 700 community notes related to the attacks were appearing on posts, out of tens of millions of posts with community notes seen overall in the last four days (a figure covering all topics, not just Israel). It is unclear whether this number was meant to downplay the volume of content related to Israel and Hamas or to underscore the scale of activity. Additionally, more than 5,000 posts were flagged as carrying matching videos and other media through the “media notes” feature, a number that grows as those posts are shared. According to the letter, notes are published roughly five hours after they are created, and the company is working to speed this up; media notes are approved faster.
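X has not published how the “media notes” feature matches videos and images across posts, but perceptual fingerprinting is the standard technique for this kind of near-duplicate detection. The Python sketch below illustrates the idea under that assumption; the functions, the Post type, and the distance threshold are all hypothetical names and values, not X’s actual implementation.

```python
# A sketch of near-duplicate media matching via perceptual hashing.
# X's real "media notes" pipeline is not public; every name and
# threshold here is hypothetical.

from dataclasses import dataclass

def average_hash(pixels: list[list[int]]) -> int:
    """Fingerprint a small grayscale thumbnail: one bit per pixel,
    set when that pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

@dataclass
class Post:
    post_id: str
    media_hash: int  # fingerprint of the attached image or video keyframe

def posts_sharing_media(noted: Post, candidates: list[Post],
                        max_distance: int = 4) -> list[str]:
    """IDs of posts whose media is a near-duplicate of an already-noted
    post, so the existing community note can be attached to them too."""
    return [c.post_id for c in candidates
            if hamming(noted.media_hash, c.media_hash) <= max_distance]
```

Unlike an exact checksum, a perceptual fingerprint changes only slightly when a clip is re-encoded, cropped, or screenshotted, which is why near-duplicate matching can follow the same footage across thousands of re-uploads.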

EU’s Content Moderation Policies and X’s Obligations

Commissioner Breton’s letter is an early example of the EU implementing the content moderation rules of its recently established Digital Services Act (DSA), which imposes special requirements on large online platforms like X. Disinformation itself is not illegal in the EU, but companies like X now have a legal obligation to mitigate the risks associated with fake news. This includes responding swiftly to reports of illegal content.
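The DSA’s notice-and-action rules require platforms to act expeditiously on reports of illegal content without fixing a single deadline, so platforms typically set their own internal review targets. The sketch below shows one way such a report queue could be prioritized; the categories and target times are invented for illustration and are not figures from the regulation or from X.

```python
# A minimal sketch of a notice-handling queue of the kind the DSA's
# notice-and-action rules push platforms toward. The deadlines below
# are hypothetical internal targets, not figures from the regulation.

import heapq
import time

# Hypothetical triage targets (seconds) by reported severity.
REVIEW_TARGETS = {
    "terrorist_content": 1 * 3600,   # review within an hour
    "illegal_other": 24 * 3600,      # review within a day
}

class ReportQueue:
    def __init__(self) -> None:
        self._heap: list[tuple[float, str]] = []

    def file_report(self, post_id: str, category: str) -> None:
        """Queue a user or authority report, ordered by review deadline."""
        deadline = time.time() + REVIEW_TARGETS.get(category, 24 * 3600)
        heapq.heappush(self._heap, (deadline, post_id))

    def next_for_review(self) -> str | None:
        """Hand moderators the report whose deadline is nearest."""
        return heapq.heappop(self._heap)[1] if self._heap else None
```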

Increasing Challenges for Social Media Platforms in the Face of Disinformation

Over the years, social media platforms have faced mounting challenges in combating disinformation and illegal content. As platforms have become integral parts of people’s lives, the responsibility to ensure a safe and accurate flow of information falls on their shoulders. However, the scattered nature of disinformation and the sheer volume of content pose significant hurdles to effective moderation. Some key challenges faced by social media platforms include:

1. Scale of Content

Social media platforms host an immense amount of content, from posts and articles to videos and comments. Moderating each piece of content manually is practically impossible. Algorithms and automated systems help with the initial screening, but striking the right balance between accuracy and speed remains a challenge. The scale of disinformation and illegal content circulating on platforms like X makes it difficult to stay on top of the situation.
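As a concrete illustration of that first-pass screening, the sketch below routes content by a classifier’s confidence score: clear-cut cases are handled automatically while borderline ones go to human reviewers. The classify stub and the thresholds are hypothetical stand-ins, not any platform’s real model or policy.

```python
# A minimal sketch of first-pass automated triage. The classifier is a
# hypothetical stand-in for whatever model a platform actually runs;
# the point is the routing: only uncertain cases reach scarce humans.

def classify(text: str) -> float:
    """Stand-in for a trained model returning P(content is violating)."""
    flagged_terms = ("fake footage", "graphic attack")  # illustrative only
    return 0.9 if any(t in text.lower() for t in flagged_terms) else 0.1

def triage(text: str, remove_above: float = 0.95,
           review_above: float = 0.5) -> str:
    """Route content: high-confidence violations are auto-actioned,
    uncertain cases go to humans, the rest pass through untouched."""
    score = classify(text)
    if score >= remove_above:
        return "auto_remove"
    if score >= review_above:
        return "human_review"
    return "allow"
```

Raising the thresholds trades speed for accuracy: fewer wrongful removals, but a larger human review queue, which is exactly the tension the paragraph above describes.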

2. Rapid Spread of Disinformation

Disinformation can spread rapidly on social media platforms, thanks to the ease of sharing content. This speed poses a challenge for moderation efforts. By the time a false post or misleading information is identified, it may have already reached a wide audience. Social media platforms must develop effective strategies to detect and address disinformation as quickly as possible to minimize its impact.
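One common detection strategy is to watch share velocity rather than content: a post that suddenly accumulates shares far faster than normal gets fast-tracked to reviewers or fact-checkers before it saturates the network. Below is a minimal sketch of that idea; the window size and threshold are invented for illustration.

```python
# A minimal sketch of velocity-based flagging. Thresholds and window
# size are illustrative, not any platform's real values.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 600        # look at the last 10 minutes of shares
SHARES_PER_WINDOW = 500     # hypothetical "unusually fast" threshold

_shares: dict[str, deque[float]] = defaultdict(deque)

def record_share(post_id: str, now: float | None = None) -> bool:
    """Log one share; return True if the post should be fast-tracked
    for review because it is spreading unusually quickly."""
    now = time.time() if now is None else now
    window = _shares[post_id]
    window.append(now)
    while window and window[0] < now - WINDOW_SECONDS:
        window.popleft()  # drop shares that fell out of the window
    return len(window) >= SHARES_PER_WINDOW
```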

3. The Gray Area Between Free Speech and Harmful Content

Striking the right balance between allowing free speech and curbing harmful content is a complex task for social media platforms. While platforms must respect users’ right to express their opinions, they also have a responsibility to prevent the spread of content that incites violence, promotes hate, or disseminates false information. Determining what falls within these boundaries can be challenging, and platforms must constantly adapt their policies and guidelines accordingly.

4. Evolving Tactics of Disinformation

Those spreading disinformation are not static; they adapt and evolve their tactics to avoid detection. This means that social media platforms must continuously update their moderation strategies to identify and combat new forms of disinformation. The dynamic nature of disinformation requires platforms to invest in ongoing research and development to effectively detect and counter false narratives.

5. Balancing Freedom of Expression and Global Regulations

Social media platforms operate globally, catering to diverse cultures, legal frameworks, and political systems. Navigating this complex landscape requires platforms to strike a delicate balance between respecting different jurisdictions’ regulations and upholding principles of freedom of expression. What may be considered illegal content in one country may be protected speech in another. Social media platforms must carefully consider these factors to maintain a consistent and responsible approach to content moderation.
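One mechanism platforms use here is per-jurisdiction visibility, familiar from “withheld in this country” labels: content stays up globally but is hidden where a local law or court order applies. The sketch below illustrates the lookup; the rule table and country codes are invented examples, not any platform’s real data.

```python
# A minimal sketch of per-jurisdiction visibility: one global takedown
# isn't required when content is illegal only somewhere. The rules
# below are invented examples.

# Hypothetical rule table: post -> countries where a legal order applies.
WITHHELD_IN: dict[str, set[str]] = {
    "post_123": {"DE", "FR"},   # e.g. removed under a local court order
}

def visible_to(post_id: str, viewer_country: str) -> bool:
    """A post stays up globally but is hidden where it is illegal."""
    return viewer_country not in WITHHELD_IN.get(post_id, set())

# Example: hidden for a viewer in Germany, visible from the US.
assert not visible_to("post_123", "DE")
assert visible_to("post_123", "US")
```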

Summary

X, the social platform formerly known as Twitter, has faced criticism in Europe for failing to effectively address disinformation and illegal content. In a response letter, X’s CEO described actions taken to address the issue but offered few specific numbers and did not acknowledge user concerns. The letter also omitted any mention of Elon Musk’s role in promoting an account known for spreading anti-Semitic content. Media reports describing X as “drowning in disinformation” likely prompted the EU’s open letter. X’s letter primarily explained existing policies and highlighted the role of Community Notes in controlling content. Social media platforms face numerous challenges in combating disinformation, including the scale of content, its rapid spread, the gray area between free speech and harmful content, evolving disinformation tactics, and divergent global regulations. Meeting these challenges will require continuous adaptation and innovation to promote a safe and accurate online environment.

