Quiet Drift

Inside OWASP’s Top 10 for LLM 2025

Speaking of GenAI, OWASP has released, late last year, its much-anticipated Top 10 for LLM Applications 2025, a comprehensive guide to the most pressing security risks facing Large Language Model (LLM) applications. This update builds upon the foundation laid in 2023, reflecting a rapidly evolving landscape where AI is embedded deeper into critical workflows and decision-making processes.

The 2023 list was a big success in raising awareness and building a foundation for secure LLM usage, but we’ve learned even more since then. – OWASP Project Leads

The 2025 edition incorporates real-world feedback from researchers, developers, and security experts, highlighting emerging threats and offering new mitigation strategies. Let’s explore the key takeaways from this year’s report.

1. The Rising Threat of Prompt Injection

If there’s one vulnerability that continues to plague LLMs, it’s prompt injection. This attack manipulates an AI model’s behavior through crafted inputs, causing it to leak sensitive information, bypass restrictions, or execute unintended actions.

OWASP distinguishes between direct and indirect prompt injections:

Multimodal AI, which processes multiple data types, introduces new attack surfaces. Malicious actors could hide instructions in images accompanying benign text.

The implications? A future where AI jailbreaks are automated, and attackers can exploit models through seemingly innocent interactions.

Mitigation strategies include:

2. Data and Model Poisoning: AI’s Silent Killer

Imagine training a model to recognize financial fraud—what if attackers subtly altered the training data to make fraud harder to detect? Data poisoning is one of the most insidious threats to AI security, and it’s becoming more sophisticated.

OWASP warns that poisoning attacks can:

The challenge? These attacks are hard to detect. Once a poisoned dataset is incorporated, the entire system may be compromised.

Defense mechanisms include:

3. System Prompt Leakage: Exposing the AI’s Brain

A surprising addition to the 2025 list is System Prompt Leakage—a vulnerability that occurs when an LLM inadvertently reveals its own hidden instructions.

In the past year, researchers have demonstrated how users can extract system prompts from AI models. This is dangerous because these prompts often contain:

Many applications assumed prompts were securely isolated, but recent incidents have shown that developers cannot safely assume that information in these prompts remains secret.

The best protection? Strict input/output sanitization, coupled with differential access controls that prevent AI models from inadvertently revealing their system prompts.

Moving Forward: AI Security is an Ongoing Battle

The OWASP Top 10 for LLM Applications 2025 isn’t just a static list—it’s a call to action for developers, security researchers, and AI practitioners. As models become more autonomous and interconnected, new vulnerabilities will emerge. The key takeaway? We can’t afford to be reactive—securing AI systems must be a proactive effort.

With the rapid evolution of agentic AI architectures, multimodal models, and real-time LLM integrations, now is the time to bake security into the foundation of AI development.

As Steve Wilson, OWASP Project Lead, puts it:

Like the technology itself, this list is a product of the open-source community’s insights and experiences.

The future of AI security is being written today—are you prepared?