State Transitions Can Be Forced
Why explicit states and transitions don't eliminate injection vectors
The Conventional Framing
State machine workflows (like LangGraph) define explicit states and valid transitions. Instead of free-form agent reasoning, the system moves through defined states with controlled transitions. This provides structure, predictability, and auditability.
The pattern is positioned as more controllable than ReAct-style agents—you know exactly what states exist and what transitions are allowed.
Why This Is Still Vulnerable
State machines constrain the shape of execution but not the content. An injection can manipulate what happens within a state or influence which transition fires. The state machine executes the attack in an orderly fashion.
Worse, state itself becomes an attack surface. If state is stored and retrieved, it can be poisoned. If state influences LLM decisions, those decisions inherit whatever's in state.
Attack vectors in state machines:
- Forced transitions. Injection that triggers unintended state transitions, skipping validation or approval states.
- State poisoning. Malicious content written to state persists and affects all subsequent states.
- Condition manipulation. Transition conditions are often LLM-evaluated—and therefore injectable.
Architecture
Components:
- States— defined processing stages
- Transitions— rules for moving between states
- State storage— persisted context across states
- Transition conditions— logic determining which transition fires
Trust Boundaries
- Input → First state — injection enters the machine
- State → State data — poisoned data persists
- Condition evaluation → Transition — LLM decides transitions
Threat Surface
| Threat | Vector | Impact |
|---|---|---|
| Forced transition | Injection triggers unintended state change | Skip validation or approval states |
| State poisoning | Malicious content written to persistent state | Compromise propagates to all downstream states |
| Condition manipulation | Injection influences transition condition evaluation | Attacker controls state machine flow |
| State enumeration | Probe to discover available states and transitions | Architecture disclosure, attack planning |
| Rollback exploitation | Force rollback to earlier state with weaker controls | Bypass progressive security checks |
The ZIVIS Position
- •States don't create trust boundaries.Moving to a different state doesn't change the trust level of the data. Poisoned input in state A is still poisoned in state B.
- •Validate at state entry.Each state should validate its inputs, even from previous states. Don't assume upstream states sanitized data.
- •Transition conditions need protection.If an LLM evaluates transition conditions, those evaluations are injectable. Use deterministic conditions where possible.
- •State storage is attack surface.Persistent state can be read and written by multiple states. Treat it like a shared database—with access controls and validation.
- •Limit state machine exposure.Don't leak state names, transition logic, or current state to untrusted inputs. This information helps attackers navigate the machine.
What We Tell Clients
State machines provide structure, not security. They make your agent's behavior more predictable—including how it responds to attacks.
Use state machines for workflow organization. Add security controls at each state boundary, validate state data independently, and don't let LLMs control transitions for security-critical paths.
Related Patterns
- Plan-and-Execute— sequential execution with similar issues
- Working Memory— state storage as scratchpad
- Audit Logging— track state transitions for forensics