Outputs Become Inputs Elsewhere
Why LLM outputs that enter other systems create downstream attack surfaces
The Conventional Framing
LLM outputs often feed into other systems: databases, APIs, other models, business processes. These downstream integrations enable powerful automation.
The focus is usually on whether the LLM output is useful, not on its security implications downstream.
Why Outputs Are Inputs to Other Attack Surfaces
An LLM output that becomes a database query, an API call, an email, or input to another model can carry an injection payload into that channel. The LLM becomes an injection laundering service: adversarial input goes in, seemingly trustworthy output comes out.
Downstream systems may not expect adversarial input from "their own AI system." Trust assumptions about internally-generated data create vulnerabilities.
The trust cascade:
User → LLM (untrusted) → Database (trusted internal?) → Reporting (trusted internal?) → Executive dashboard. Injection propagates through trust boundaries that don't recognize model output as potentially adversarial.
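The laundering step can be sketched in a few lines of Python with the standard-library `sqlite3` module. The adversarial model output below is invented for illustration; the point is that concatenating model output into SQL lets it rewrite the query, while a parameterized query keeps it inert data.

```python
import sqlite3

# Hypothetical adversarial model output: the user's prompt steered the
# model into emitting a SQL payload inside an otherwise normal answer.
model_output = "x' OR '1'='1"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Unsafe: string concatenation lets the payload rewrite the query.
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{model_output}'"
).fetchall()

# Safe: a parameterized query treats the model output as data, not SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (model_output,)
).fetchall()

print(leaked)  # [('s3cret',)] -- the OR clause matched every row
print(safe)    # [] -- the literal string matched nothing
```

The downstream database never sees the user; it only sees "our AI system," which is exactly why the concatenated version passes unquestioned through the trust boundary.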
Architecture
Components:
- LLM output — generated content from the model
- Database integration — outputs stored and queried
- API calls — outputs trigger external actions
- Model chaining — outputs feed other models
- Business processes — outputs embedded in workflows
Trust Boundaries
- Input → Model — injection enters
- Model → Output — injection in generated content
- Output → Downstream — injection reaches other systems
- Downstream → Effect — injection executes/persists
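One way to enforce the Output → Downstream boundary is an explicit gate: a model-proposed action executes only if it appears on an allowlist. This is a sketch under assumed names (`gate` and `ALLOWED_ACTIONS` are illustrative, not from any particular framework):

```python
# Hypothetical gate at the Output -> Downstream boundary: an action the
# model proposes runs only if it is on an explicit allowlist.
ALLOWED_ACTIONS = {"lookup_ticket", "summarize_ticket"}

def gate(action: str, args: dict) -> str:
    """Refuse any model-proposed action that is not explicitly allowed."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"model-proposed action {action!r} is not allowed")
    # Dispatch to the real downstream API would happen here.
    return f"executed {action} with {sorted(args)}"

print(gate("lookup_ticket", {"id": 42}))

try:
    gate("delete_all_tickets", {})  # injected action is refused
except PermissionError as e:
    print(e)
```

Failing closed at this boundary means an injected action stops before the Downstream → Effect step, where it would execute or persist.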
Threat Surface
| Threat | Vector | Impact |
|---|---|---|
| SQL injection via LLM | LLM output contains SQL that's executed | Database compromise through model output |
| XSS via LLM | LLM output contains scripts rendered in browsers | Client-side attacks through model content |
| API abuse via LLM | LLM output triggers unintended API calls | External actions from injection |
| Chained model attacks | LLM output injects into downstream models | Cascading compromise through model chain |
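The XSS row can be made concrete with Python's `html.escape`: escaping for the HTML destination turns a model-emitted script into inert text. The payload is invented for illustration.

```python
import html

# Hypothetical model output destined for a browser-rendered dashboard.
model_output = "<script>steal(document.cookie)</script>"

# Escape for the HTML destination before rendering.
rendered = html.escape(model_output)
print(rendered)  # &lt;script&gt;steal(document.cookie)&lt;/script&gt;
```

The same principle applies per row: each destination in the table calls for its own destination-specific encoding or parameterization, not a single generic filter.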
The ZIVIS Position
- LLM output is untrusted. Treat model output like user input for downstream systems. The model may have been influenced by adversarial input.
- Sanitize for the destination. Every destination (SQL, HTML, API, and so on) has its own injection concerns; sanitize LLM output appropriately for each.
- Map your data flows. Know where LLM outputs go. Each destination is a potential attack surface that needs appropriate protection.
- Defense in depth at every boundary. Don't assume internal systems are safe because the input came from "our AI." The AI processed untrusted content.
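"Sanitize for the destination" can be sketched as a registry that refuses to route model output anywhere it lacks a sanitizer for. The registry and function names are hypothetical; the sanitizers shown are Python standard library.

```python
import html
import json
import shlex

# Hypothetical registry: one sanitizer per destination type.
SANITIZERS = {
    "html": html.escape,   # browser rendering
    "shell": shlex.quote,  # command-line argument
    "json": json.dumps,    # embedding in a JSON document
}

def sanitize_for(destination: str, model_output: str) -> str:
    """Fail closed: model output goes nowhere we cannot sanitize for."""
    try:
        sanitizer = SANITIZERS[destination]
    except KeyError:
        raise ValueError(f"no sanitizer registered for {destination!r}")
    return sanitizer(model_output)

print(sanitize_for("shell", "; rm -rf /"))  # quoted as one inert argument
```

An unmapped destination raising an error is the "map your data flows" point in code form: any new place model output flows to must be registered, and therefore noticed, before it works.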
What We Tell Clients
LLM outputs entering other systems carry injection potential. Databases, APIs, other models, and business processes that receive model output are downstream attack surfaces.
Treat LLM output as untrusted for all downstream purposes. Sanitize for each destination. The LLM processed potentially adversarial input—its output inherits that adversarial potential.
Related Patterns
- Structured Output — format without sanitization
- Prompt Chaining — model-to-model data flow