
The thorny art of deepfake tagging


Last week, the Republican National Committee issued a video ad attacking Biden that featured a small disclaimer at the top left of the frame: “Built entirely with AI images.” Critics questioned the disclaimer’s small size and suggested it was of limited value, particularly as the ad marks the first substantive use of AI in political attack advertising. As AI-generated media becomes more common, many have argued that text-based tags, captions, and watermarks are crucial to transparency.

But do these tags really work? Maybe not.

For a label to work, it must be readable. Is the text big enough? Are the words accessible? A label must also give audiences meaningful context about how media has been created and used. And at its best, it also reveals intent: why has this piece of communication been put into the world?

Journalism, documentary media, industry, and scientific publishing have long relied on disclosures to give audiences and users the necessary context. Journalistic and documentary films generally use superimposed text to cite sources. Warning labels and tags are ubiquitous on manufactured goods, food, and medicines. In scientific reports, disclosing how data were captured and analyzed is essential. But labeling synthetic media, AI-generated content, and deepfakes is often seen as an unwanted burden, especially on social media platforms. It’s slapped on as an afterthought: boring compliance in an age of misinformation.

As a result, many existing AI media disclosure practices, such as watermarking and tagging, can be easily removed. Even when they remain, audiences’ eyes, trained on quick visual input, often fail to notice watermarks and disclosures. For example, in September 2019, the well-known Italian satirical TV show Striscia la Notizia posted on social media a lo-fi face-swap video of former Prime Minister Matteo Renzi sitting at a desk, insulting his then coalition partner Matteo Salvini with exaggerated hand gestures. Despite a Striscia watermark and a clear text-based disclaimer, some viewers believed the video to be genuine, according to deepfake researcher Henry Ajder.

This is called context switching: once any piece of media, even one tagged and watermarked, is distributed among closed, politicized social media groups, its creators lose control of how it is framed, interpreted, and shared. As we found in a joint research study between Witness and MIT, mixing satire with deepfakes often creates confusion, as in the case of the Striscia video. Simple text-based tags can also create the mistaken impression that anything without a label is authentic, when in reality that may not be true.

Technologists are working on ways to quickly and accurately trace the origins of synthetic media, such as cryptographic provenance and detailed file metadata. Beyond such technical methods, artists and human rights activists are offering promising alternative approaches to identifying this kind of content, reframing tagging as a creative act rather than an add-on.
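Provenance systems of the kind mentioned above generally work by cryptographically binding a disclosure to the exact bytes of a media file, so that any later edit, including cropping out a visible watermark, breaks verification. The sketch below is a minimal illustration, not any real standard’s implementation: it assumes a shared-secret HMAC for simplicity (real schemes such as C2PA use public-key certificates and embed the signed manifest in the file itself), and every name and key in it is hypothetical.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # hypothetical; real systems use public-key certificates

def make_manifest(media_bytes: bytes, tool: str) -> dict:
    """Bind a creation disclosure to the exact bytes of the media file."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": tool,
        "disclosure": "AI-generated content",
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; any edit to the media breaks both."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(media_bytes).hexdigest() != claimed["sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

video = b"\x00\x01fake-video-bytes"  # stand-in for a real media file
m = make_manifest(video, tool="example-generator")
print(verify_manifest(video, m))          # True: untouched file verifies
print(verify_manifest(video + b"x", m))   # False: any edit fails verification
```

The point of the design is that the disclosure travels with the file as verifiable data rather than as a visual overlay: stripping the label is detectable, because the manifest no longer matches the bytes.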

When a disclosure is built into the media itself, it cannot be removed, and it can even be used as a tool to encourage the public to understand how a piece of media was created and why. For example, in David France’s documentary Welcome to Chechnya, vulnerable interviewees were digitally disguised with the help of inventive synthetic media tools like those used to create deepfakes. In addition, subtle halos appeared around their faces: a clue to viewers that the images they were watching had been manipulated, and that these subjects were taking an immense risk by sharing their stories. And in Kendrick Lamar’s 2022 music video “The Heart Part 5,” the directors used deepfake technology to transform Lamar’s face into those of deceased and living celebrities such as Will Smith, O.J. Simpson, and Kobe Bryant. The use of the technology is written directly into the song’s lyrics and choreography, as when Lamar glides his hand over his face, clearly signaling the deepfake edit. The resulting video is a meta-commentary on deepfakes themselves.



