OWASP Top 10 for Large Language Models¶
Estimated time to read: 3 minutes
In an era where digital advancements permeate every aspect of our lives, large language models (LLMs) stand out as transformative technology. These AI models, known for their ability to generate human-like text, have various applications, from virtual assistance to content creation, language translation, and beyond. However, along with the benefits come many cybersecurity challenges that warrant urgent attention.
Similarly, as the Open Web Application Security Project (OWASP) highlights the most critical security risks to web applications, this article introduces the OWASP Top 10 for Large Language Models. This set of risks addresses LLMs' unique challenges, recognising that vulnerabilities in these AI systems can have serious implications, just as with traditional software.
These risks range from Prompt Injections (LLM01), akin to the well-known SQL injection attacks but adapted to LLMs, to Training Data Poisoning (LLM03), a threat uniquely relevant to machine learning models. Issues such as Denial of Service (LLM04), a well-established risk in traditional systems, appear alongside newer threats like Excessive Agency (LLM08), reflecting the evolving interaction between AI and other systems.
The potential for security breaches, propagation of misinformation, data leakage, and the compromise of system integrity, among other threats, necessitates a robust and comprehensive approach to the security of LLMs. Consequently, the cybersecurity strategies applied to traditional software systems must be reimagined and reinforced to cater to the nuances of LLMs.
This list of the Top 10 risks is not only a call to action for researchers, developers, and users of LLMs but also serves as a roadmap for prioritising security efforts in the realm of AI. The overall aim is to ensure that as we reap the benefits of AI advancements, we do not compromise on security, privacy, and trust. The ethical use of AI, anchored on a strong cybersecurity foundation, is paramount to harnessing its full potential responsibly and safely.
You can find the original list of OWASP Top 10 for Large Language Models
Prompt Injections (LLM01) Similar to SQL injection, this vulnerability involves users providing input that is directly inserted into a command, potentially allowing unauthorised actions.
Insecure Output Handling (LLM02) If the output from an LLM is not handled securely, it could lead to serious security risks like cross-site scripting (XSS) or server-side request forgery (SSRF).
Training Data Poisoning (LLM03) This involves intentionally manipulating training data to make the model behave in specific ways, often to serve malicious purposes.
Denial of Service (LLM04) If a user can cause an LLM to perform highly resource-intensive tasks, it can slow down or even halt the service for other users.
Supply Chain (LLM05) The elements used to train and implement the LLM, from data sources to plugin extensions, can be compromised, leading to security breaches or biases in the model's output.
Permission Issues (LLM06) Similar to privilege escalation in traditional security contexts, this vulnerability involves unauthorised users gaining access to sensitive functions or information.
Data Leakage (LLM07) If not handled carefully, LLMs could inadvertently reveal sensitive information or proprietary data, leading to privacy breaches.
Excessive Agency (LLM08) Unrestricted interaction between LLMs and other systems can lead to harmful actions, such as the execution of unauthorised operations.
Overreliance (LLM09): Overdependence on LLMs can lead to misinformation or inappropriate content being displayed, which could result in legal issues or harm a company's reputation.
Insecure Plugins (LLM10): Plugins that connect LLMs to external resources can be vulnerable to attacks if they accept free-form text inputs, potentially leading to undesirable behaviour or even the execution of malicious commands.