
Prepare for a tidal wave of ChatGPT email scams


Here is an experiment being run by college computer science students everywhere: ask ChatGPT to generate phishing emails, and test whether these are better at persuading victims to respond or click the link than the usual spam. It’s an interesting experiment, and the results are likely to vary wildly depending on the details of the setup.

But while it’s an easy experiment to run, it misses the real risk of large language models (LLMs) writing scam emails. Today’s human-run scams aren’t limited by the number of people who respond to the initial email contact. They’re limited by the labor-intensive process of persuading those people to send the scammer money. LLMs are about to change that.

A decade ago, one type of spam email had become a punchline on every late-night show: “I am the son of the late king of Nigeria in need of your assistance…” Nearly everyone had gotten one, or a thousand, of those emails, to the point that it seemed everyone must have known they were scams.

So why did scammers keep sending such obviously dubious emails? In 2012, researcher Cormac Herley offered an answer: the emails weeded out all but the most gullible. A smart scammer doesn’t want to waste time with people who reply and then realize it’s a scam when asked to wire money. By using an obviously fake scam email, the scammer can focus on the potentially most profitable marks. It takes time and effort to engage in the back-and-forth communications that nudge marks, step by step, from interlocutor to trusted acquaintance to pauper.

Long-running financial scams are now known as pig butchering: fattening up the potential mark until their final and sudden slaughter. Such scams, which require gaining trust and infiltrating a target’s personal finances, take weeks or even months of personal time and repeated interactions. It is a high-touch, low-probability game that the scammer is playing.

This is where LLMs will make a difference. Much has been written about the unreliability of OpenAI’s GPT models and others like them: they frequently “hallucinate,” making things up about the world and spouting nonsense with confidence. For entertainment this is fine, but for most practical uses it’s a problem. It is a feature, though, not a bug, when it comes to scams: LLMs’ ability to confidently roll with the punches, no matter what a user throws at them, will prove useful to scammers as they navigate hostile, bemused, and gullible scam targets by the billions. AI chatbot scams may ensnare more people, because the pool of victims who will fall for a subtler and more flexible scammer, one trained on everything ever written online, is much larger than the pool of those who believe the king of Nigeria wants to give them a billion dollars.

Personal computers are already powerful enough to run compact LLMs. After Facebook’s new model, LLaMA, was leaked online, developers tuned it to run fast and cheaply on powerful laptops. Numerous other open-source LLMs are under development, with a community of thousands of engineers and scientists.
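To make the point concrete, here is a minimal sketch (not from the essay) of what running a compact model locally looks like, using the open-source llama-cpp-python bindings; the model filename is a placeholder for any quantized LLaMA-family checkpoint:

```python
# A minimal sketch of local LLM inference with llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder:
# any quantized GGUF checkpoint downloaded separately will work.
from llama_cpp import Llama

# Load a ~4 GB quantized 7B model; this runs on an ordinary laptop CPU.
llm = Llama(model_path="./models/llama-7b.Q4_K_M.gguf", n_ctx=2048)

# Generate text entirely offline: no cloud API, no rate limits, no logs.
output = llm("Q: What is a large language model? A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])
```

Nothing in that sketch requires a data center or an account with anyone; that is precisely why locally run models change the economics.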

A single scammer, from their laptop anywhere in the world, can now run hundreds or thousands of scams in parallel, night and day, with marks all over the world, in every language under the sun. The AI chatbots will never sleep and will keep adapting along the path to their objectives. And new mechanisms, from ChatGPT plugins to LangChain, will enable composition of AI with thousands of API-based cloud services and open-source tools, allowing LLMs to interact with the internet as humans do. The personas in such scams are no longer just princes offering their country’s riches. They are forlorn strangers looking for romance, hot new cryptocurrencies that are soon to skyrocket in value, and seemingly sound new financial websites offering amazing returns on deposits. And people are already falling in love with LLMs.
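As a rough illustration of what that composition looks like, here is a sketch using LangChain’s 2023-era agent API; the specific tools and the required API keys (OPENAI_API_KEY, SERPAPI_API_KEY) are illustrative assumptions, not anything described in the essay:

```python
# A minimal sketch of composing an LLM with external services via LangChain,
# assuming the classic 2023-era agent API. Requires OPENAI_API_KEY (and, for
# the search tool, SERPAPI_API_KEY) in the environment.
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, load_tools

llm = OpenAI(temperature=0)

# Give the model web search and a calculator; the agent decides on its own
# when to invoke each tool while working toward the stated goal.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)

agent.run("Look up today's weather in Lagos and convert the temperature to Fahrenheit.")
```

The benign goal here stands in for anything: the same pattern lets a model query services, fill in forms, and carry on a conversation without a human in the loop.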

This is a change in both scope and scale. LLMs will change the scam pipeline, making them more profitable than ever. We don’t know how to live in a world with a billion or 10 billion scammers who never sleep.

There will also be a change in the sophistication of these attacks. This is due not only to advances in AI, but also to the internet’s business model, surveillance capitalism, which produces vast amounts of data about all of us, available for purchase through data brokers. Targeted attacks against individuals, whether for phishing or data collection or scams, were once only within the reach of nation-states. Combine the digital dossiers that data brokers have on all of us with LLMs and you have a tool tailor-made for personalized scams.

Companies like OpenAI try to prevent their models from doing bad things. But with the release of each new LLM, social media sites buzz with new AI jailbreaks that evade the new restrictions put in place by the AI’s designers. ChatGPT, then Bing Chat, then GPT-4 were each jailbroken within minutes of their release, and in dozens of different ways. Most protections against bad uses and harmful output are only skin-deep, easily evaded by determined users. Once a jailbreak is discovered, it usually can be generalized, and the community of users pries the LLM open through the chinks in its armor. And the technology is advancing too fast for anyone to fully understand how these models work, even their designers.

This is an old story, though: it reminds us that many of the bad uses of AI are a reflection of humanity more than a reflection of AI technology itself. Scams are nothing new: simply the intent and then the action of one person deceiving another for personal gain. And sadly, using others as unwitting laborers to run scams is nothing new or rare: organized crime in Asia, for example, currently kidnaps or indentures thousands of people in scam sweatshops. Is it better that organized crime will no longer see the need to exploit and physically abuse people to run its scam operations, or worse that it and many others will be able to scale up scams to an unprecedented level?

Defense can and will catch up, but before it does, our signal-to-noise ratio is going to drop dramatically.


