We use cookies for analytics and to identify companies visiting our site (not individuals). Essential cookies are always active. Learn more
The definitive list of critical security risks for Large Language Model applications. Understand the threats and protect your AI systems from attack.
The OWASP Top 10 for Large Language Model Applications is a community-driven project that identifies the most critical security vulnerabilities in LLM-powered systems. Created by the Open Worldwide Application Security Project (OWASP), it serves as an essential awareness document for developers, security teams, and organizations deploying AI applications.
Unlike traditional application security risks, LLM vulnerabilities emerge from the unique characteristics of language models: their ability to interpret natural language, generate content, and take actions based on prompts. This creates novel attack surfaces that require specialized understanding and defenses.
The list is regularly updated as the threat landscape evolves. Version 2.0, released in 2025, reflects the latest attack patterns observed as LLM deployment has accelerated across industries.
Critical vulnerabilities every organization using LLMs must understand and address
Attackers manipulate LLMs through crafted inputs, leading to unauthorized actions or data exposure.
Insufficient validation of LLM outputs can lead to XSS, CSRF, SSRF, or code execution.
Manipulation of training data introduces vulnerabilities, biases, or backdoors into models.
Resource-intensive operations or crafted inputs cause service degradation or outages.
Compromised components, models, or datasets in the AI supply chain introduce risks.
LLMs may reveal confidential data, PII, or proprietary information in responses.
Plugins with inadequate access controls can lead to unauthorized actions or data exposure.
LLMs with too much autonomy can take unintended actions with real-world consequences.
Excessive dependence on LLMs without oversight leads to misinformation or vulnerabilities.
Unauthorized access, copying, or extraction of proprietary LLM models or capabilities.
Created by security practitioners specifically for LLM application security
Updated regularly (v2.0 released 2025) to address emerging threats
Practical, actionable guidance from real-world attack patterns
Widely referenced by security teams and auditors globally
Companion to traditional OWASP Top 10 for web application security
Essential baseline for any organization deploying LLM-powered applications
LLM attack techniques evolve rapidly. New vulnerabilities like prompt injection variants, jailbreaks, and inference attacks are discovered regularly. Continuous security testing is essential for LLM applications.
LLMs introduce entirely new vulnerability classes that traditional security tools don't detect. Prompt injection alone has no equivalent in web security.
LLMs can inadvertently leak training data, user information, or system prompts through carefully crafted queries.
Foundation models, fine-tuning datasets, plugins, and third-party integrations create complex supply chains with multiple attack vectors.
Security-conscious buyers increasingly require evidence of LLM security testing before procurement of AI-powered solutions.
Comprehensive adversarial testing of your LLM applications against all OWASP LLM Top 10 risks using the latest attack techniques.
Specialized testing for direct and indirect prompt injection vulnerabilities, including jailbreaks, system prompt extraction, and context manipulation.
Testing for sensitive information disclosure, training data extraction, and PII exposure in model outputs.
Actionable recommendations for addressing identified vulnerabilities, including input validation, output filtering, and architectural controls.
Our experts test your AI systems against real-world attack patterns.