ChatGPT may be hacked to provide malicious AI-generated content, including malware

Remember a while back when OpenAI CEO Sam Altman said that the misuse of artificial intelligence could be "lights out for everyone"? That warning no longer seems so far-fetched, now that hackers are selling tools that break through ChatGPT's restrictions and generate malicious content.

Check Point (via Ars Technica) reports that cybercriminals have found a fairly easy way to bypass ChatGPT's content moderation barriers and make a quick buck: for less than $6, you can have ChatGPT generate malicious code and convincing copy for phishing emails.

These hackers used the OpenAI API to build a bot on the popular messaging app Telegram that provides access to an unrestricted version of ChatGPT. They charge as little as $5.50 per 100 queries and advertise the service with examples of the harmful content it can produce.

Other hackers bypassed ChatGPT's protections with a script (also built on the OpenAI API) published on GitHub. This restriction-free version of ChatGPT can generate templates for phishing emails that impersonate companies and banks, and can even suggest where best to place the phishing links within those emails.

Even more frightening is the ability to create malware, or improve existing malicious code, simply by asking the chatbot to do so. Check Point has previously written about how even people with no coding experience can generate some pretty nasty malware this way, particularly with earlier versions of ChatGPT, which only recently gained tighter restrictions on creating malicious content.

OpenAI's ChatGPT technology is also set to be built into the next update of Microsoft's Bing search engine, which brings its own set of issues regarding the use of copyrighted material.

We have previously written about how easy it is to exploit AI tools, so it was only a matter of time before bad actors found new ways to do their dirty work beyond having the AI do their homework for them.
