If you’ve recently had trouble determining whether an image of a person is real or generated by artificial intelligence (AI), you’re not alone.
A new study by researchers at the University of Waterloo found that people had a harder time than expected distinguishing real people from artificially generated ones.
In the Waterloo study, 260 participants were shown 20 unlabeled images: 10 of real people obtained from Google searches, and 10 generated by Stable Diffusion or DALL-E, two commonly used AI image-generation programs.
Participants were asked to label each image as real or AI-generated and explain why they made their decision. Only 61 percent of participants could distinguish between AI-generated people and real people, well below the 85 percent threshold the researchers expected.
“People are not as adept at making distinctions as they think,” said Andreea Pocol, a doctoral candidate in computer science at the University of Waterloo and lead author of the study.
Participants paid attention to details such as fingers, teeth, and eyes as possible indicators when searching for AI-generated content, but their evaluations were not always correct.
Pocol noted that the nature of the study allowed participants to examine the photographs closely, while most Internet users glance at the images in passing.
“People who are just commuting or don’t have time won’t pick up these cues,” Pocol said.
Pocol added that the extremely rapid pace at which AI technology is developing makes it particularly difficult to understand the potential for malicious or nefarious uses of AI-generated images. Academic research and legislation often cannot keep up: AI-generated images have become even more realistic since the study began in late 2022.
These AI-generated images are particularly threatening as a political and cultural tool, as any user could create fake images of public figures in embarrassing or compromising situations.
“Disinformation is not new, but the tools of disinformation have been constantly changing and evolving,” Pocol said. “It can get to a point where people, no matter how trained they are, will still have difficulty differentiating real images from fake ones. That’s why we need to develop tools to identify and counter this. It’s like a new AI arms race.”
The study, “Seeing Is No Longer Believing: A Survey on the State of Deepfakes, AI-Generated Humans, and Other Nonveridical Media,” appears in the journal Advances in Computer Graphics.