Simple Hacking Technique Can Extract ChatGPT Training Data

Cyber Security Threat Summary:
Researchers from Google DeepMind, Cornell University, and other institutions found that the widely used generative AI chatbot, ChatGPT, is susceptible to data leaks. By prompting ChatGPT to repetitively say words like "poem," "company," and others, the researchers were able to make the chatbot regurgitate memorized portions of its training data. While initially compliant, after repeating a word hundreds of times, ChatGPT generated "often nonsensical" output, including memorized data such as email signatures and personal contact information. Some words, like "company," were more effective at extracting data.

Security Officer Comments:
The researchers obtained personally identifiable information, explicit content, paragraphs from books and poems, URLs, user identifiers, bitcoin addresses, and programming code. The study highlights potential privacy issues and warns against deploying language models without robust safeguards for privacy-sensitive applications. The attack appears specific to ChatGPT and emphasizes the need for vigilance in addressing privacy concerns when using large language models.

Link(s):
https://www.darkreading.com/cyber-risk/researchers-simple-technique-extract-chatgpt-training-data