AI agents are revolutionizing how we interact with technology, but their autonomous nature introduces a new frontier of security challenges. While traditional cybersecurity measures remain essential, they are often insufficient to fully protect these sophisticated systems. This blog post will delve into the unique security risks of programming AI agents, focusing on how the OWASP Top 10 for LLMs can be interpreted for agentic AI and examining the surprising vulnerabilities introduced by “vibe coding.”
The Rise of Agentic AI and Its Unique Security Landscape
The landscape of artificial intelligence is undergoing a profound transformation. What began with intelligent systems responding to specific queries is rapidly evolving into a world populated by AI agents – autonomous entities capable of far more than just generating text or images.
What are AI Agents? At their core, AI agents are sophisticated software systems that leverage artificial intelligence to independently understand their environment, reason through problems, devise plans, and execute tasks to achieve predetermined goals. Critically, they often operate with minimal or no human intervention, making them distinct from traditional software applications.
Imagine an agent analyzing images of receipts, extracting and categorizing expenses for a travel reimbursement system. Or consider a language model that can not only read your emails but also automatically draft suggested replies, prioritize important messages, and even schedule meetings on its own. These are not mere chatbots; they are systems designed for independent action. More advanced examples include self-driving cars that use sensors to perceive their surroundings and automatically make decisions that control the vehicle’s operations. The realm of autonomous action even extends to military drones that observe, select targets, and can initiate attacks on their own initiative.
Why are they different from traditional software? The fundamental difference lies in their dynamic, adaptive, and often opaque decision-making processes. Unlike a static program that follows predefined rules, AI agents possess memory, sometimes can learn from interactions, and can utilize external tools to accomplish their objectives. This expansive capability introduces entirely new attack surfaces and vulnerabilities that traditional cybersecurity models were not designed to address. It gets very hard to enumerate all the different ways an AI agent may react to user input, which depnding on the application can come in different modalities such as text, speech, images or video.
The Paradigm Shift: From Request-Response to Autonomous Action This shift marks a critical paradigm change in software. We are moving beyond simple request-response interactions to systems that can autonomously initiate actions. This dynamic nature means that security threats are no longer confined to static vulnerabilities in code but extend to the unpredictable and emergent behaviors of the agents themselves. A manipulated agent could autonomously execute unauthorized actions, leading to privilege escalation, data breaches, or even working directly against its intended purpose. This fundamental change in how software operates necessitates a fresh perspective on security, moving beyond traditional safeguards to embrace measures that account for the agent’s autonomy and potential for self-directed harmful actions.
Without proper guardrails, an AI agent may act in unpredictable ways.
OWASP Top 10 for LLM’s
OWASP has published its “top 10” security flaws for large language models and GenAI. These apply to most agentic systems.
The rapid proliferation of Large Language Models (LLMs) and their integration into various applications, particularly AI agents, has introduced a new class of security vulnerabilities. To address these emerging risks, the OWASP Top 10 for LLMs was created. OWASP (Open Worldwide Application Security Project) is a non-profit foundation that works to improve the security of software. Their “Top 10” lists are widely recognized as a standard awareness document for developers and web application security. The OWASP Top 10 for LLMs specifically identifies the most critical security weaknesses in applications that use LLMs, providing a crucial framework for understanding and mitigating these novel threats.
While the OWASP Top 10 for LLMs provides a critical starting point, its application to AI agents requires a deeper interpretation due to their expanded capabilities and autonomous nature. For agentic AI systems built on pre-trained LLMs, three security weaknesses stand out as particularly critical:
1. LLM01: Prompt Injection (and its Agentic Evolution)
- Traditional Understanding: In traditional LLM applications, prompt injection involves crafting malicious input that manipulates the LLM’s output, often coercing it to reveal sensitive information or perform unintended actions. This is like tricking the LLM into going “off-script.”
- Agentic Evolution: For AI agents, prompt injection becomes significantly more dangerous. Beyond merely influencing an LLM’s output, malicious input can now manipulate an agent’s goals, plans, or tool usage. This can lead to the agent executing unauthorized actions, escalating privileges within a system, or even turning the agent against its intended purpose. The agent’s ability to maintain memory of past interactions and access external tools greatly amplifies this risk, as a single successful injection can have cascading effects across multiple systems and over time. For example, an agent designed to manage cloud resources could be tricked into deleting critical data or granting unauthorized access to an attacker.
2. LLM04: Insecure Plugin Design and Improper Output Handling (and Agent Tool Misuse)
- Traditional Understanding: This vulnerability typically refers to weaknesses in plugins or extensions that broaden an LLM’s capabilities, where insecure design could lead to data leakage or execution of arbitrary code.
- Agentic Implications: AI agents heavily rely on “tools” or plugins to interact with the external world. These tools enable agents to perform actions like sending emails, accessing databases, or executing code. Insecure tool design, misconfigurations, or improper permissioning for these tools can allow an attacker to hijack the agent and misuse its legitimate functionalities. An agent with access to a payment processing tool, if compromised through insecure plugin design, could be manipulated to initiate fraudulent transactions. The risk isn’t just in the tool itself, but in how the agent is allowed to interact with and command it, potentially leveraging legitimate functionalities for malicious purposes.
3. LLM05: Excessive Agency (The Core Agentic Risk)
- Traditional Understanding: While not explicitly an “LLM-only” vulnerability in the traditional sense, the concept of an LLM generating responses beyond its intended scope or safety guidelines can be loosely related.
- Agentic Implications: This becomes paramount for AI agents. “Excessive Agency” means an agent’s autonomy and ability to act without adequate human-in-the-loop oversight can lead to severe and unintended consequences if it deviates from its alignment or is manipulated. This is the ultimate “runaway agent” scenario. An agent designed to optimize logistics could, if its agency is excessive and it’s subtly compromised, autonomously reroute critical shipments to an unauthorized location, or even overload a system in an attempt to “optimize” in a harmful way. This vulnerability underscores the critical need for robust guardrails, continuous monitoring of agent behavior, and clear kill switches to prevent an agent from taking actions that are detrimental or outside its defined boundaries.
An example of excessive agency
Many LLM based chat applications allow the integration of tools. Le Chat from Mistral now has a preview where you can grant the LLM access to Gmail and Google Calendar. As a safeguard, the LLM is not capable of directly writing and sending emails, but it can create drafts that you as a human will need to read and manually send.
The model can also look at your Google calendar, and it can schedule appointments. For this the same safeguard is not in place, so you can ask the agent to read your emails, and respond automatically to any requests for meeting with a meeting invite. You can also ask it to include any email addresses included in that request in the meeting invite. This allows the agent quite a lot of agency, and potentially makes it also vulnerable to prompt injection.
To test this, I send myself an email from another email account, asking for a meeting to discuss purchase of soap, and included a few more email addresses to also invite to the meeting. Then I made a prompt where I asked the agent to check if I have any requests for meetings, and use the instructions in the text to create a reasonable agenda and to invite people to a meeting directly without seeking confirmation from me.
It did that – but not with th email I just sent myself: it found a request for meeting from a discussion about a trial run of the software Cyber Triage (a digital forensics tool) from several years ago, and set up a meeting invite with their customer service email and several colleagues from the firm I worked with at the time.
Excessive agency: direct action, with access to old data irrelevant to the current situation.
The AI agent called for a meeting – it just picked the wrong email!
What can we do about unpredictable agent behavior?
To limit prompt injection risk in AI agents, especially given their autonomous nature and ability to utilize tools, consider the following OWASP-aligned recommendations:
Prompt injection
These safeguards are basic controls for any LLM-based interaction but more important for agents that can execute their own actions.
- Isolate External Content: Never directly concatenate untrusted user input with system prompts. Instead, send external content to the LLM as a separate, clearly delineated input. For agents, this means ensuring that any data or instructions received from external sources are treated as data, not as executable commands for the LLM’s core directives.
- Implement Privilege Control: Apply the principle of least privilege to the agent and its tools. An agent should only have access to the minimum necessary functions and data to perform its designated tasks. If an agent is compromised, limiting its privileges restricts the scope of damage an attacker can inflict through prompt injection. This is crucial for agents that can interact with external systems.
- Establish Human-in-the-Loop Approval for Critical Actions: For sensitive or high-impact actions, introduce a human review and approval step. This acts as a final safeguard against a successful prompt injection that might try to coerce the agent into unauthorized or destructive behaviors. For agents, this could mean requiring explicit confirmation for actions that modify critical data, send emails to external addresses, or trigger financial transactions.
The human-in-the-loop control was built into the Gmail based tool from Le Chat, but not in the Google Calendar one. Such controls will reduce the risk of the agent performing unpredictable actions, including based on malicious prompts in user input (in this case, emails sent to my Gmail).

Improper output handling
To address improper output handling and the risk of malicious API calls in AI agents, follow these OWASP-aligned recommendations, with an added focus on validating API calls:
- Sanitize and Validate LLM Outputs: Always sanitize and validate LLM outputs before they are processed by downstream systems or displayed to users. This is crucial to prevent the LLM’s output from being misinterpreted as executable code or commands by other components of the agent system or external tools. For agents, this means rigorously checking outputs that might be fed into APIs, databases, or other applications, to ensure they conform to expected formats and do not contain malicious payloads.
- Implement Strict Content Security Policies (CSP) and Output Encoding: When LLM output is displayed in a user interface, implement robust Content Security Policies (CSPs) and ensure proper output encoding. This helps mitigate risks like Cross-Site Scripting (XSS) attacks, where malicious scripts from the LLM’s output could execute in a user’s browser. While agents often operate without a direct UI, if their outputs are ever rendered for human review or incorporated into reports, these measures remain vital.
- Enforce Type and Schema Validation for Tool Outputs and API Calls: For agentic systems that use tools, rigorously validate the data types and schemas of all outputs generated by the LLM before they are passed to external tools or APIs, and critically, validate the API calls themselves. If an LLM is expected to output a JSON object with specific fields for a tool, ensure that the actual output matches this schema. Furthermore, when the LLM constructs an API call (e.g., specifying endpoint, parameters, headers), validate that the entire API call adheres to the expected structure and permissible values for that specific tool. This prevents the agent from sending malformed or malicious data or initiating unintended actions to external systems, which could lead to errors, denial of service, or unauthorized operations.
- Limit External Access Based on Output Intent: Carefully control what external systems or functionalities an agent can access based on the expected intent of its output. If an agent’s output is only meant for informational purposes, it should not have the capability to trigger sensitive operations. This reinforces the principle of least privilege, ensuring that even if an output is maliciously crafted, its potential for harm is contained.
Excessive agency
To manage the risk of excessive agency in AI agents, which can lead to unintended or harmful autonomous actions, consider these OWASP-aligned recommendations:
- Implement Strict Function and Tool Use Control: Design the agent with fine-grained control over which functions and tools it can access and how it uses them. The agent should only have the capabilities necessary for its designated tasks, adhering to the principle of least privilege. This prevents an agent from initiating actions outside its intended scope, even if internally misaligned.
- Define Clear Boundaries and Constraints: Explicitly define the operational boundaries and constraints within which the agent must operate. This includes setting limits on the types of actions it can take, the data it can access, and the resources it can consume. These constraints should be enforced both at the LLM level (e.g., via system prompts) and at the application level (e.g., via authorization mechanisms).
- Incorporate Human Oversight and Intervention Points: For critical tasks or scenarios involving significant impact, design the agent system with clear human-in-the-loop intervention points. This allows for human review and approval before the agent executes high-risk actions or proceeds with a plan that deviates from expected behavior. This serves as a safety net against autonomous actions with severe consequences.
- Monitor Agent Behavior for Anomalies: Continuously monitor the agent’s behavior for any deviations from its intended purpose or established norms. Anomaly detection systems can flag unusual tool usage, excessive resource consumption, or attempts to access unauthorized data, indicating potential excessive agency or compromise.
- Implement “Emergency Stop” Mechanisms: Ensure that there are robust and easily accessible “kill switches” or emergency stop mechanisms that can halt the agent’s operation immediately if it exhibits uncontrolled or harmful behavior. This is a critical last resort to prevent a runaway agent from causing widespread damage.
Where traditional security tooling falls short with AI agent integrations
Static analysis (SAST), dynamic analysis (DAST) and other traditional security practices remain important. They can help us detect insecure implementation of integrations, lacking validation and other key elements of code security that apply equally well to AI related code as to more static data contexts.
Where traditional tools fall short, are for safeguarding the unpredictable agent part:
Securing AI agents requires a multi-layered approach to testing, acknowledging both traditional software vulnerabilities and the unique risks introduced by autonomous AI. While traditional security testing tools play a role, they must be augmented with AI-specific strategies.
The Role and Limits of Traditional Security Testing Tools:
- Static Application Security Testing (SAST): SAST tools analyze source code without executing it. They are valuable for catching vulnerabilities in the “glue code” that integrates the AI agent with the rest of the application, such as SQL injection, XSS, or insecure API calls within the traditional software components. SAST can also help identify insecure configurations or hardcoded credentials within the agent’s environment. However, SAST struggles with prompt injection as it’s a semantic vulnerability, not a static code pattern. It also cannot predict excessive agency or other dynamic, behavioral flaws of the agent, nor can it analyze model-specific vulnerabilities like training data poisoning.
- Dynamic Application Security Testing (DAST): DAST tools test applications by executing them and observing their behavior. For AI agents in web contexts, DAST can effectively identify common web vulnerabilities like XSS if the LLM’s output is rendered unsanitized on a web page. It can also help with generic API security testing. However, DAST lacks the semantic understanding needed to directly detect prompt injection, as it focuses on web protocols rather than the LLM’s interpretation of input. It also falls short in uncovering excessive agency or other internal, behavioral logic of the agent, as its primary focus is on external web interfaces.
Designing AI-Specific Testing for Comprehensive Coverage:
Given the shortcomings of traditional tools, a robust testing strategy for AI agents must include specialized runtime tests, organized across different levels:
- Unit Testing (Focus on Agent Components): At this level, focus on testing individual components of the AI agent, such as specific tools, prompt templates, and output parsing logic. For example, test individual tools with a wide range of valid and invalid inputs to ensure they handle data securely and predictably. Critically, unit tests should include adversarial examples to test the resilience of prompt templates against prompt injection attempts. Test output validation routines rigorously to ensure they catch malicious payloads or malformed data before it’s passed to other systems or API calls.
- Integration Testing (Focus on Agent Flow and Tool Chaining): This level assesses how different components of the agent work together, particularly the agent’s ability to select and chain tools, and its interaction with external APIs. Integration tests should simulate real-world scenarios, including attempts to manipulate the agent’s decision-making process through prompt injection across multiple turns or by feeding malicious data through one tool that affects the agent’s subsequent use of another tool. Validate that the API calls the LLM generates for tools are correctly structured and within permissible bounds, catching any attempts by the LLM to create malicious or unwanted API calls. Test for excessive agency by pushing the agent’s boundaries and observing if it attempts unauthorized actions when integrated with real (or simulated) external systems.
- End-to-End Testing (Focus on Abuse Cases and Behavioral Security): This involves testing the entire AI agent system from the user’s perspective, simulating real-world abuse cases. This is where red teaming and adversarial prompting become critical. Testers (human or automated) actively try to bypass the agent’s safeguards, exploit prompt injections, and trigger excessive agency to achieve malicious goals. This includes testing for data exfiltration, privilege escalation, denial of service, and unintended real-world consequences from the agent’s autonomous actions. Continuous monitoring of agent behavior in pre-production or even production environments is also vital to detect anomalies that suggest a compromise or an emerging vulnerability.
Finding the right test cases can be quite difficult, but threat modeling is still useful as a framework to find possible agent related vulnerabilities and possible generation of unwanted states:
Threat modeling is an indispensable practice for securing AI agents, serving as a proactive and structured approach to identify potential abuse cases and inform test planning. Unlike traditional software, AI agents introduce unique attack vectors stemming from their autonomy, interaction with tools, and reliance on LLMs.
The process involves systematically analyzing the agent’s design, its data flows, its interactions with users and other systems, and its underlying LLM. For AI agents, threat modeling should specifically consider:
- Agent Goals and Capabilities: What is the agent designed to do? What tools does it have access to? This helps define its legitimate boundaries and identify where “excessive agency” might manifest.
- Input Channels: How does the agent receive information (user prompts, API inputs, sensor data)? Each input channel is a potential prompt injection vector.
- Output Channels and Downstream Systems: Where does the agent’s output go? How is it used by other systems or displayed to users? This identifies potential “improper output handling” risks, including the generation of malicious API calls to tools.
- Tools and External Integrations: What external tools or APIs does the agent interact with? What are the security implications of these interactions, especially concerning “insecure plugin design” and potential misuse?
By walking through these aspects, security teams can brainstorm potential adversaries, their motivations, and the specific attack techniques they might employ (e.g., crafting prompts to trick the agent, poisoning training data, or exploiting a vulnerable tool). This structured approach helps in detailing concrete abuse cases – specific scenarios of malicious or unintended behavior – that can then be translated directly into test cases for unit, integration, and end-to-end testing, ensuring that the security validation efforts directly address the most critical risks.
Key take-aways
- Agents with excessive privilege can be dangerous – don’t grant them more access and authority than you need
- Use threat modeling to understand what can go wrong. This is input to your safeguards and test cases.
- Expand your test approach to cover agentic AI specific abuse cases – traditional tools won’t cover this for you out of the box
- Context is critical for LLM behavior – make the effort to create good system prompts when using pre-trained models to drive the agent behavior