NIST Warns of Security and Privacy Risks from Rapid AI System Deployment

Summary:
NIST highlights the rising security and privacy concerns linked to the growing usage of artificial intelligence (AI) systems. These challenges encompass threats such as tampering with training data, exploiting model vulnerabilities, or even malicious interaction or manipulation to extract sensitive information.

As AI systems, notably generative AI systems, become more integrated into online services, they face multifaceted threats throughout the machine learning lifecycle. These threats include corrupted training data, software component vulnerabilities, data model poisoning, supply chain weaknesses, and privacy breaches caused by prompt injection attacks. NIST’s computer scientist, Apostol Vassilev, underscores the risks associated with AI systems. He emphasized that while software developers seek broader exposure to improve their products, there’s no guarantee that such exposure will be positive. For instance, a chatbot could disseminate inaccurate or harmful information when prompted with carefully crafted language.

Security Officer Comments:
NIST noted that threat actors could execute these attacks with varying levels of understanding about the AI system full knowledge (white box), minimal knowledge (black-box), or partial comprehension (gray-box). The agency stressed the lack of robust mitigation measures to counter these risks, urging the tech community to devise more effective defenses. The technical intricacies of these attacks result in potential impacts on the availability, integrity and privacy of AI systems.

Link(s):
https://thehackernews.com/2024/01/nist-warns-of-security-and-privacy.html

PDF: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-2e2023.pdf