Using memes, social media users have become red teams for half-baked AI features

“Running with scissors is a cardio exercise that can increase heart rate and requires concentration and attention,” says Google’s new AI search feature. “Some say it can also improve pores and give you strength.”

Google’s AI feature pulled this response from a website called Little Old Lady Comedy, which, as its name makes clear, is a comedy blog. But the gaffe is so ridiculous that it has been circulating on social media, along with other obviously incorrect AI overviews on Google. Effectively, everyday users are now red teaming these products on social media.

In cybersecurity, some companies hire “red teams” (ethical hackers) who try to breach their products as if they were bad actors. If a red team finds a vulnerability, the company can fix it before the product ships. Google certainly conducted a form of red teaming before launching an AI product on Google Search, which is estimated to process billions of queries per day.

It’s surprising, then, that a company with deep pockets like Google still ships products with obvious flaws. That’s why it has now become a meme to clown on the failures of AI products, especially at a time when AI is becoming more ubiquitous. We’ve seen this with bad spelling in ChatGPT, video generators’ inability to understand how humans eat spaghetti, and Grok AI news summaries on X that, like Google, don’t understand satire. But these memes could actually serve as useful feedback for the companies developing and testing AI.

Despite the egregious nature of these failures, technology companies often downplay their impact.

“The examples we’ve seen are generally very rare queries and are not representative of most people’s experiences,” Google told TechCrunch in an emailed statement. “We conducted extensive testing before launching this new experience and will use these isolated examples as we continue to refine our overall systems.”

Not all users see the same AI results, and by the time a particularly bad AI suggestion gets around, the problem has often already been fixed. In a more recent case that went viral, Google suggested that if the cheese isn’t sticking to your pizza, you could add about an eighth of a cup of glue to the sauce to “make it more sticky.” As it turned out, the AI pulled this response from an eleven-year-old Reddit comment by a user named “f––smith.”

Beyond being an egregious mistake, it also suggests that AI content deals may be overvalued. Google has a $60 million contract with Reddit to license its content for training AI models, for example. Reddit signed a similar agreement with OpenAI last week, and Automattic properties WordPress.com and Tumblr are rumored to be in talks to sell data to Midjourney and OpenAI.

Admittedly, many of the errors circulating on social media come from unconventional queries designed to trip up the AI. At least, I hope no one is seriously searching for the “health benefits of running with scissors.” But some of these mistakes are more serious. Science journalist Erin Ross posted on X that Google was spitting out incorrect information about what to do if you get bitten by a rattlesnake.

Ross’s post, which garnered more than 13,000 likes, shows that the AI recommended applying a tourniquet to the wound, cutting it, and sucking out the venom. According to the U.S. Forest Service, these are all things you should not do if you get bitten. Meanwhile, over on Bluesky, author T Kingfisher amplified a post showing Google’s Gemini misidentifying a poisonous mushroom as a common white mushroom; screenshots of the post have spread to other platforms as a warning.

When a bad AI answer goes viral, the AI can become even more confused by the new content on the topic that springs up as a result. On Wednesday, New York Times journalist Aric Toler posted a screenshot on X showing a query asking whether a dog has ever played in the NHL. The AI’s answer was yes: for some reason, it called Calgary Flames player Martin Pospisil a dog. Now, when you run the same query, the AI pulls up an article from the Daily Dot about how Google’s AI keeps insisting that dogs play sports. The AI is being fed its own errors, poisoning it even further.

This is the inherent problem with training these large-scale AI models on the internet: sometimes, people on the internet lie. But just as there’s no rule against a dog playing basketball, unfortunately there’s no rule against big tech companies shipping bad AI products.

As the saying goes: garbage in, garbage out.