FraudGPT: A New Malicious Generative AI Tool Appears in the Threat Landscape

Cyber Security Threat Summary:
Generative AI models are becoming attractive to criminals. Netenrich researchers recently spotted a new platform, dubbed FraudGPT, that has been advertised on multiple dark web marketplaces and Telegram channels since July 22, 2023. According to Netenrich, this generative AI bot was trained for offensive purposes, such as crafting spear-phishing emails, conducting business email compromise (BEC) attacks, creating cracking tools, and carding.

Crooks can choose from several subscription packages: $200 per month, $1,000 for six months, or $1,700 for twelve months. The tool's author claims it can develop undetectable malware and find vulnerabilities in targeted platforms.

Below are some features supported by the chatbot:

  • Write malicious code
  • Create undetectable malware
  • Find non-VBV bins
  • Create phishing pages
  • Create hacking tools
  • Find groups, sites, markets
  • Write scam pages/letters
  • Find leaks, vulnerabilities
  • Learn to code/hack
  • Find cardable sites
  • Escrow available 24/7
  • 3,000+ confirmed sales/reviews

Security Officer Comments:
While the large language model (LLM) used to develop the system is not disclosed, the author claims to have more than 3,000 confirmed sales and reviews. The service is promoted on a Telegram channel created on June 23, 2023. The author claims to be a verified vendor on various dark web marketplaces, including EMPIRE, WHM, TORREZ, WORLD, ALPHABAY, and VERSUS.

Suggested Correction(s):
Researchers have long warned about the threat of cybercriminals using generative AI technologies to bolster phishing and business email compromise attacks. The companies behind ChatGPT and other LLMs have combated this abuse by blocking certain prompts, but threat actors have found ways to circumvent those protections.
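
A minimal sketch of that prompt-blocking idea, in Python, appears below. Everything in it is hypothetical: the blocklist, the function name, and the keyword matching are illustrative stand-ins, since real providers rely on trained policy classifiers rather than simple keyword lists.

    # Illustrative only: a toy, keyword-based guardrail of the kind that can
    # sit in front of a model. The blocklist and names are hypothetical, not
    # any vendor's actual implementation.
    BLOCKED_TOPICS = ("undetectable malware", "phishing page", "carding")

    def is_prompt_allowed(prompt: str) -> bool:
        """Return False when the prompt matches a blocked topic."""
        lowered = prompt.lower()
        return not any(topic in lowered for topic in BLOCKED_TOPICS)

    if __name__ == "__main__":
        print(is_prompt_allowed("Build me a phishing page for a bank"))  # False
        print(is_prompt_allowed("Explain how TLS certificates work"))    # True

Filtering this shallow is easy to defeat by rephrasing a request, which is one reason a model distributed with no safeguards at all, as FraudGPT is advertised, appeals to criminals.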

“As time goes on, criminals will find further ways to enhance their criminal capabilities using the tools we invent. While organizations can create ChatGPT (and other tools) with ethical safeguards, it isn’t a difficult feat to reimplement the same technology without those safeguards,” the report concludes. “In this case, we are still talking about phishing, which is the initial attempt to get into an environment. Conventional tools can still detect AI-enabled phishing and, more importantly, we can also detect subsequent actions by the threat actor.”
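
As a concrete example of the “conventional tools” the report refers to, one classic mail-filter heuristic flags hyperlinks whose visible text shows one domain while the underlying href points to another. The Python sketch below illustrates that single signal only; it is a hypothetical toy, not the detection logic of any particular gateway, and production filters combine many such signals.

    # Illustrative only: flag links whose anchor text advertises one domain
    # while the href actually resolves to another host. Real mail gateways
    # combine many signals; this sketch is not any product's logic.
    import re
    from urllib.parse import urlparse

    LINK_RE = re.compile(r'<a\s+[^>]*href="([^"]+)"[^>]*>([^<]+)</a>', re.IGNORECASE)

    def mismatched_links(html_body: str) -> list[tuple[str, str]]:
        """Return (visible_text, real_host) pairs whose domains disagree."""
        suspicious = []
        for href, text in LINK_RE.findall(html_body):
            real_host = (urlparse(href).hostname or "").lower()
            shown = text.strip().lower()
            if "." in shown and real_host and real_host not in shown and shown not in real_host:
                suspicious.append((text.strip(), real_host))
        return suspicious

    if __name__ == "__main__":
        body = '<p>Verify now: <a href="http://evil.example/login">www.yourbank.com</a></p>'
        print(mismatched_links(body))  # [('www.yourbank.com', 'evil.example')]

A signal like this fires on the phishing message itself, regardless of whether a human or an LLM wrote the lure, which is the report's point about detecting subsequent actions with conventional tooling.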

Link(s):
https://netenrich.com/blog/fraudgpt-the-villain-avatar-of-chatgpt
https://securityaffairs.com/148829/cyber-crime/fraudgpt-cybercrime-generative-ai.html