WormGPT, "ChatGPT's evil twin," has no morals and costs just €60 a month on the Dark Web


For as little as €60 a month on the dark web, you can bypass the pesky ethical restrictions of services like ChatGPT using a new, guardrail-free Large Language Model (LLM) known as WormGPT.

Designed by a lone hacker, WormGPT can undertake any nefarious task, including, as the developer puts it, malware creation and "everything black hat-related," without regard to moral boundaries (via PC Mag).

Built on the 2021 open source LLM GPT-J, WormGPT was trained on malware creation data. Its main purpose is to give threat actors a place to generate malware and related content, such as phishing email templates.

WormGPT works similarly to ChatGPT in many respects. It processes requests made in natural language and outputs whatever is asked of it, from stories to summaries to code. Unlike ChatGPT and Bard, however, WormGPT is not bound by the legal and ethical obligations that large companies like OpenAI and Google must observe.

SlashNext had the opportunity to test WormGPT in practice. They asked the application to design a phishing email, also known as a business email compromise (BEC) attack. And WormGPT succeeded with flying colors: it produced something that was "not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks."

Adrianus Warmenhoven, a cybersecurity expert at NordVPN who calls the application "ChatGPT's evil twin," believes WormGPT emerged from a "cat-and-mouse game" between OpenAI's continued tightening of restrictions on ChatGPT and threat actors' increasingly desperate attempts to circumvent them.

Its arrival follows a sharp rise in use of the so-called Grandma Exploit, in which "illegal information is sought indirectly, wrapped up in a more innocent request like a letter to a relative."

We have already seen YouTubers bypass ChatGPT's ethical constraints to have it generate Windows 95 keys, and more recently Windows 11 keys. There is even a universal LLM jailbreak prompt that lets chatbots carry out your evil requests.

Warmenhoven observes, "The emergence of WormGPT shows that cybercriminals are no longer content to simply abuse existing AI tools, but are looking to advance this technology and lead it down their own dark path." And the stormy waters of AI development churn on.
