Outputs Become Inputs Elsewhere

Why LLM outputs that enter other systems create downstream attack surfaces

The Conventional Framing

LLM outputs often feed into other systems: databases, APIs, other models, business processes. These downstream integrations enable powerful automation.

The focus is usually on whether the LLM output is useful, not on its security implications downstream.

Why Outputs Are Inputs to Other Attack Surfaces

An LLM output that becomes a database query, an API call, an email, or input to another model carries potential injection through that channel. The LLM becomes an injection laundering service.

Downstream systems may not expect adversarial input from "their own AI system." Trust assumptions about internally generated data create vulnerabilities.

The trust cascade:

User → LLM (untrusted) → Database (trusted internal?) → Reporting (trusted internal?) → Executive dashboard. Injection propagates through trust boundaries that don't recognize model output as potentially adversarial.

Architecture

Components:

  • LLM output: generated content from the model
  • Database integration: outputs stored and queried
  • API calls: outputs trigger external actions
  • Model chaining: outputs fed to other models
  • Business processes: outputs embedded in workflows

Trust Boundaries

User input: "Summarize my feedback: Great product! <script>stealAdminCookie()</script>"
LLM output: "Summary: Great product! <script>stealAdminCookie()</script>"
The output is stored in the database → retrieved for the admin dashboard → rendered without sanitization → XSS executes in the admin's browser.
Injection traversed: LLM → DB → Admin dashboard.

The boundary crossings:

  1. Input → Model: injection enters
  2. Model → Output: injection appears in generated content
  3. Output → Downstream: injection reaches other systems
  4. Downstream → Effect: injection executes or persists
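The four crossings can be sketched end to end. This is a hypothetical illustration, not a real integration: the function names (`summarize`, `store`, `render_dashboard`) are invented, and the "LLM" is simulated by a function that echoes untrusted input into its output, which is the behavior that makes the traversal possible.

```python
import sqlite3

def summarize(user_feedback: str) -> str:
    # Crossings 1-2: untrusted input enters the "model" and reappears
    # verbatim in its output. (A real LLM call would go here.)
    return f"Summary: {user_feedback}"

def store(db: sqlite3.Connection, summary: str) -> None:
    # Crossing 3: model output lands in a "trusted internal" database.
    # (Parameterized here, so the DB itself is not the weak link.)
    db.execute("INSERT INTO summaries (text) VALUES (?)", (summary,))

def render_dashboard(db: sqlite3.Connection) -> str:
    # Crossing 4: stored output is rendered without HTML escaping --
    # this is where the XSS fires, two systems away from the attacker.
    rows = db.execute("SELECT text FROM summaries").fetchall()
    return "<ul>" + "".join(f"<li>{text}</li>" for (text,) in rows) + "</ul>"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE summaries (text TEXT)")
store(db, summarize("Great product! <script>stealAdminCookie()</script>"))
page = render_dashboard(db)
assert "<script>stealAdminCookie()</script>" in page  # payload survives intact
```

Note that every individual step looks reasonable in isolation; the vulnerability only exists in the composition, which is why mapping data flows matters.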

Threat Surface

| Threat | Vector | Impact |
| --- | --- | --- |
| SQL injection via LLM | LLM output contains SQL that's executed | Database compromise through model output |
| XSS via LLM | LLM output contains scripts rendered in browsers | Client-side attacks through model content |
| API abuse via LLM | LLM output triggers unintended API calls | External actions from injection |
| Chained model attacks | LLM output injects into downstream models | Cascading compromise through model chain |
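The first row of the table can be made concrete. In this hypothetical sketch, a model asked to extract a username is steered into emitting a SQL payload instead; interpolating that output into a query executes it, while parameterizing treats it as inert data.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('alice')")

# Hypothetical model output: the attacker got the LLM to emit a payload
# instead of the plain username the prompt asked for.
llm_output = "alice' OR '1'='1"

# Vulnerable: model output interpolated directly into the SQL string,
# so the payload becomes part of the query and bypasses the filter.
unsafe = f"SELECT name FROM users WHERE name = '{llm_output}'"
leaked = db.execute(unsafe).fetchall()
assert leaked == [("alice",)]  # WHERE clause defeated

# Safer: parameterized, so the model output is data, never SQL.
safe = db.execute("SELECT name FROM users WHERE name = ?", (llm_output,)).fetchall()
assert safe == []  # no user is literally named the payload string
```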

The ZIVIS Position

  • LLM output is untrusted. Treat model output like user input for downstream systems. The model may have been influenced by adversarial input.
  • Sanitize for the destination. Every destination (SQL, HTML, API, etc.) has its own injection concerns. Sanitize LLM output appropriately for each.
  • Map your data flows. Know where LLM outputs go. Each destination is a potential attack surface that needs appropriate protection.
  • Defense in depth at every boundary. Don't assume internal systems are safe because input came from "our AI." The AI processed untrusted content.
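"Sanitize for the destination" means one encoder per sink, because each interpreter has its own syntax: HTML escaping does nothing for a shell, and shell quoting does nothing for a browser. A minimal sketch using Python's standard-library encoders (the payload string is illustrative):

```python
import html
import shlex
import urllib.parse

# Hypothetical model output carrying an injection payload.
llm_output = "Great product! <script>stealAdminCookie()</script>"

for_html  = html.escape(llm_output)         # rendering in a web page
for_url   = urllib.parse.quote(llm_output)  # embedding in a query string
for_shell = shlex.quote(llm_output)         # passing as a shell argument

assert "<script>" not in for_html   # tags neutralized for the browser
assert " " not in for_url           # spaces and specials percent-encoded
assert for_shell.startswith("'")    # whole value single-quoted for the shell
```

(SQL is the one destination where encoding is the wrong tool: use parameterized queries instead of escaping.)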

What We Tell Clients

LLM outputs entering other systems carry injection potential. Databases, APIs, other models, and business processes that receive model output are downstream attack surfaces.

Treat LLM output as untrusted for all downstream purposes. Sanitize for each destination. The LLM processed potentially adversarial input—its output inherits that adversarial potential.

Related Patterns