ChatGPT Cybercrime Surge Revealed in 3000 Dark Web Posts

Summary:
Kaspersky researchers are warning of a notable surge in dark web discussions about using ChatGPT and other large language models (LLMs) to bolster cyberattacks. The researchers identified nearly 3000 dark web posts covering a spectrum of threats, from creating malicious chatbot variants to exploring alternative projects such as XXXGPT and FraudGPT. While the chatter peaked in March 2023, discussions about exploiting AI technologies for illegal activity have continued.

“Threat actors are actively exploring various schemes to implement ChatGPT and AI. Topics frequently include the development of malware and other types of illicit use of language models, such as processing of stolen user data, parsing files from infected devices and beyond,” explained Alisa Kulishenko, digital footprint analyst at Kaspersky.

Security Officer Comments:
LLM technologies typically have restrictions in place to prevent threat actors from prompting the tools for malicious advice. However, Kaspersky notes that threat actors are sharing “jailbreaks” across dark web channels: special sets of prompts designed to unlock additional functionality or bypass these restrictions.

“Another concerning aspect revealed by Kaspersky is the market for stolen ChatGPT accounts, with an additional 3000 posts advertising these accounts for sale across the dark web. This market poses a significant threat to users and companies, with posts either distributing stolen accounts or promoting auto-registration services that mass-create accounts on request” (Kaspersky, 2024).

Suggested Corrections:
From a phishing perspective, LLMs may allow adversaries to generate more believable, better-punctuated phishing emails. Traditional phishing emails were rife with grammar mistakes that made them easy to spot. Users will need to be more vigilant in distinguishing phishing attempts by looking for other clues, such as strange domains, unexpected communications, and social engineering tactics that prey on a sense of urgency. As always, multi-factor authentication can stop many phishing attempts and should be leveraged wherever possible.
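
As a rough illustration, the sketch below flags some of these non-grammatical clues in an email. It is a toy heuristic under stated assumptions: the allowlisted domain, the digit-for-letter lookalike rule, and the urgency keywords are all hypothetical examples, not a vetted detection product.

    # Illustrative only: a toy heuristic for phishing indicators that survive
    # LLM-polished grammar, such as lookalike domains and urgency cues.
    # TRUSTED_DOMAINS and URGENCY_CUES are hypothetical examples.
    TRUSTED_DOMAINS = {"example.com"}
    URGENCY_CUES = ("act now", "verify immediately", "account suspended")

    def phishing_signals(sender: str, body: str) -> list[str]:
        """Return human-readable warning signs for an email."""
        signals = []
        domain = sender.rsplit("@", 1)[-1].lower()
        if domain not in TRUSTED_DOMAINS:
            signals.append(f"sender domain '{domain}' is not on the allowlist")
        # Lookalike check: e.g. 'examp1e.com' differs from a trusted domain
        # only by a common digit-for-letter substitution.
        normalized = domain.replace("1", "l").replace("0", "o")
        if normalized != domain and normalized in TRUSTED_DOMAINS:
            signals.append(f"'{domain}' looks like a spoof of '{normalized}'")
        lowered = body.lower()
        signals.extend(
            f"urgency cue: '{cue}'" for cue in URGENCY_CUES if cue in lowered
        )
        return signals

    if __name__ == "__main__":
        print(phishing_signals(
            "it-support@examp1e.com",
            "Your account suspended. Act now to restore access.",
        ))

Running the example prints four warnings: an unrecognized sender domain, a likely spoof of example.com, and two urgency cues, none of which depend on spotting grammatical errors.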

Malware development using LLMs is a more challenging problem. AI companies will need to monitor how threat actors abuse their systems and implement additional safeguards. Organizations should track trending tactics, techniques, and procedures (TTPs) and share indicators when they observe something malicious; a minimal example of acting on shared indicators follows.
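
As a minimal sketch of putting shared indicators to work, the snippet below matches log lines against a small indicator list. The indicator values are hypothetical, and a real deployment would consume structured threat intelligence feeds (e.g., STIX/TAXII) rather than hard-coded strings.

    # Illustrative sketch only: matching log lines against a small,
    # hypothetical set of indicators shared by peer organizations.
    SHARED_INDICATORS = {
        "badguy.example.net": "C2 domain reported by a peer organization",
        "Invoice_Q3.xlsm": "attachment name seen in recent phishing campaigns",
    }

    def scan_log(lines):
        """Yield (line_number, indicator, context) for each indicator hit."""
        for lineno, line in enumerate(lines, start=1):
            for indicator, context in SHARED_INDICATORS.items():
                if indicator in line:
                    yield lineno, indicator, context

    if __name__ == "__main__":
        sample = [
            "GET http://badguy.example.net/beacon 200",
            "user opened Invoice_Q3.xlsm",
        ]
        for hit in scan_log(sample):
            print(hit)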

Link(s):
https://www.infosecurity-magazine.com/news/chatgpt-cybercrime-revealed-dark/
https://dfi.kaspersky.com/blog/ai-in-darknet