
The Truth About WormGPT – Why You Shouldn’t Panic

Exploring the Use of AI Language Models for Malicious Purposes

As the tools for building artificial intelligence systems, particularly large language models (LLMs), become easier and cheaper to use, some people are putting them to unsavory purposes, such as generating malicious code or phishing campaigns. However, the threat from AI-accelerated hackers isn’t as dire as some headlines suggest.

The Rise of Dark Web LLM Creators

The creators of dark web LLMs like “WormGPT” and “FraudGPT” advertise their models as capable of perpetrating phishing campaigns, generating messages designed to pressure victims into commercial email compromise schemes, and writing malicious code. The creators also claim the models can build custom hacking utilities, identify leaks and vulnerabilities in code, and write fraudulent web pages.

It’s easy to assume that this new generation of LLMs heralds a terrifying trend of AI-enabled mass hacking. However, the reality is more nuanced.

Examining WormGPT’s Capabilities

WormGPT, released in early July, is reportedly based on GPT-J, an LLM made available by the open research group EleutherAI in 2021. However, WormGPT is devoid of safety guardrails, meaning it won’t hesitate to answer questions that GPT-J would normally reject, specifically those related to hacking.
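
To underline how freely available the underlying model is: GPT-J’s weights are published openly, and anyone can load them with a few lines of Python using the Hugging Face transformers library. The snippet below is a minimal sketch, not WormGPT itself; it assumes the transformers and torch packages are installed and that the machine has enough memory for the roughly 24 GB full-precision checkpoint.

    # Minimal sketch: loading the openly published GPT-J-6B checkpoint.
    # Assumes `pip install transformers torch` and sufficient RAM;
    # the model ID is the public one on the Hugging Face Hub.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

    prompt = "Large language models are"
    inputs = tokenizer(prompt, return_tensors="pt")
    # Generate a short continuation; the sampling settings are illustrative.
    output = model.generate(**inputs, max_new_tokens=40, do_sample=True)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

In other words, the base model costs nothing; what WormGPT’s creator is selling is, at most, the removal of guardrails and some undisclosed fine-tuning.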

While two years might not sound like a long time, in the fast-moving world of AI research GPT-J is practically ancient history, and certainly not as capable as today’s more sophisticated LLMs like OpenAI’s GPT-4. Alberto Romero, writing for Towards Data Science, reports that GPT-J performs “significantly worse” than GPT-3, the predecessor of GPT-4, on tasks other than coding, including writing plausible-sounding text.

Given that WormGPT is based on GPT-J, it’s reasonable to expect that it would not excel at generating convincing phishing emails or sophisticated hacking scripts.

The Effectiveness of WormGPT

While I was unable to access WormGPT myself, researchers at the cybersecurity firm SlashNext ran the model through a variety of tests and published the results. One test asked it to generate a “convincing email” that could be used in a commercial email compromise attack. The resulting copy wasn’t particularly impressive: it was grammatically correct, a step above most phishing emails, but it made enough mistakes (for example, referring to a non-existent email attachment) and was generic enough to raise suspicion in any recipient who read it carefully.

The same goes for WormGPT’s code. It was more or less correct, but basic in its construction and similar to improvised malware scripts that already exist on the web. Moreover, the code didn’t address the most difficult part of the hacking equation: obtaining the credentials and permissions needed to compromise a system.

Furthermore, a user on an obscure web forum claimed that WormGPT is “broken most of the time” and struggles to output “simple stuff.” That could be a result of the model’s dated architecture, but the training data may also play a role: WormGPT’s creator claims to have fine-tuned the model on a “diverse variety” of data sources concentrated on malware-related data, but doesn’t say which specific datasets were used. Until more testing is done, there’s no way to know to what extent WormGPT was fine-tuned, or whether it was fine-tuned at all.

The Marketing Hype Around FraudGPT

FraudGPT, another dark web LLM, is described by its creator as “cutting edge” and capable of creating undetectable malware and identifying websites vulnerable to credit card fraud. But the creator reveals little about the model’s architecture beyond that it is a variant of GPT-3, leaning instead on hyperbolic language. It’s the same sales move some legitimate companies make: slapping “AI” on a product to stand out or attract press attention, and counting on customer ignorance.

In a demo video seen by Fast Company, FraudGPT generates a message intended to convince Bank of America customers to click on a malicious link: “Dear Bank of America Member, Please refer to this important link to ensure the security of your online banking account.” The result is generic and not particularly convincing.

Difficulties in Accessing Rogue LLMs

Unlike widely accessible LLMs like GPT-3 and GPT-4, WormGPT and FraudGPT are not easy to obtain. Their creators reportedly charge tens to hundreds of dollars for subscriptions and restrict access to the codebases, so users can’t look under the hood to modify or distribute the models themselves.

Access to FraudGPT has become even more difficult recently after the creator’s threads were removed from a popular dark web forum for violating its policies. Users now need to take the extra step of contacting the creator through the Telegram messaging app.

Understanding the Limitations of Malicious LLMs

While WormGPT and FraudGPT can generate sensational headlines and, yes, malware, they are far from capable of bringing down corporations or governments. At most, they may enable scammers with limited English skills to generate targeted business email compromise messages, as SlashNext’s analysis suggests.

However, it’s crucial to recognize that these rogue LLMs have their limitations. The generated content is often unconvincing, and the code is basic and similar to existing malware scripts. Additionally, the lack of accessibility and reliability renders them less powerful than openly available and well-developed LLMs such as GPT-3 and GPT-4.

Ultimately, these malicious LLMs may only serve as a means for their creators to make quick profits rather than pose a significant threat to cybersecurity.

Expanding Perspectives on AI Language Models and Cybersecurity

The emergence of AI language models has undoubtedly raised concerns regarding their potential misuse for malicious purposes. However, it is important to delve deeper into the topic and explore related concepts to gain a comprehensive understanding.

Firstly, the field of AI research is evolving rapidly, with new advancements and models being developed frequently. While WormGPT and FraudGPT may lack the sophistication of more recent models like OpenAI’s GPT-4, it is crucial to remain vigilant and address any potential vulnerabilities.

Furthermore, it is worth considering the responsibility of AI developers and researchers in ensuring the ethical use of language models. As AI technology becomes more accessible, it becomes essential to establish clear guidelines and regulations to prevent the misuse of these powerful tools.

Collaboration between cybersecurity experts, AI researchers, and policymakers is crucial in addressing the challenges presented by AI-accelerated hackers. By working together, it is possible to develop robust security measures and stay one step ahead of potential threats.

Additionally, organizations and individuals must remain proactive in implementing strong cybersecurity practices and regularly updating their security systems. While AI language models may pose certain risks, they are just one piece of the cybersecurity puzzle, and a multi-layered approach is necessary to ensure comprehensive protection.
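
As one concrete example of such layering, defenders can check whether the domain in a suspicious sender address publishes SPF and DMARC records, which brand-spoofing phishing campaigns of the kind discussed above frequently fail. The snippet below is a minimal sketch of one such check, not a complete defense; it assumes the third-party dnspython package, and the domain name is a placeholder.

    # Minimal sketch: does a sender domain publish SPF and DMARC records?
    # Assumes `pip install dnspython`; the domain below is a placeholder.
    import dns.resolver

    def has_txt_record(name: str, marker: str) -> bool:
        """Return True if any TXT record at `name` starts with `marker`."""
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        return any(
            b"".join(record.strings).decode(errors="replace").startswith(marker)
            for record in answers
        )

    domain = "example.com"  # placeholder sender domain
    print("SPF:  ", has_txt_record(domain, "v=spf1"))
    print("DMARC:", has_txt_record("_dmarc." + domain, "v=DMARC1"))

A missing or misconfigured record is not proof of fraud, but it is one cheap signal in the multi-layered approach described above.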
