State Transitions Can Be Forced

Why explicit states and transitions don't eliminate injection vectors

The Conventional Framing

State machine workflows (like LangGraph) define explicit states and valid transitions. Instead of free-form agent reasoning, the system moves through defined states with controlled transitions. This provides structure, predictability, and auditability.

The pattern is positioned as more controllable than ReAct-style agents—you know exactly what states exist and what transitions are allowed.

Why This Is Still Vulnerable

State machines constrain the shape of execution but not the content. An injection can manipulate what happens within a state or influence which transition fires. The state machine executes the attack in an orderly fashion.

Worse, state itself becomes an attack surface. If state is stored and retrieved, it can be poisoned. If state influences LLM decisions, those decisions inherit whatever's in state.

Attack vectors in state machines:

  • Forced transitions. Injection that triggers unintended state transitions, skipping validation or approval states.
  • State poisoning. Malicious content written to state persists and affects all subsequent states.
  • Condition manipulation. Transition conditions are often LLM-evaluated—and therefore injectable.

Architecture

Components:

  • Statesdefined processing stages
  • Transitionsrules for moving between states
  • State storagepersisted context across states
  • Transition conditionslogic determining which transition fires

Trust Boundaries

┌─────────────────────────────────────────────────────────┐ │ STATE MACHINE │ │ │ │ State A ──[condition]──► State B ──[condition]──► ... │ │ │ │ │ │ ▼ ▼ │ │ [State data] [State data] │ │ (can be poisoned) (inherits poison) │ │ │ │ Transitions are often LLM-evaluated: │ │ "Should we proceed?" ← Injectable decision │ │ │ │ The machine executes orderly, even if compromised │ └─────────────────────────────────────────────────────────┘
  1. Input → First stateinjection enters the machine
  2. State → State datapoisoned data persists
  3. Condition evaluation → TransitionLLM decides transitions

Threat Surface

ThreatVectorImpact
Forced transitionInjection triggers unintended state changeSkip validation or approval states
State poisoningMalicious content written to persistent stateCompromise propagates to all downstream states
Condition manipulationInjection influences transition condition evaluationAttacker controls state machine flow
State enumerationProbe to discover available states and transitionsArchitecture disclosure, attack planning
Rollback exploitationForce rollback to earlier state with weaker controlsBypass progressive security checks

The ZIVIS Position

  • States don't create trust boundaries.Moving to a different state doesn't change the trust level of the data. Poisoned input in state A is still poisoned in state B.
  • Validate at state entry.Each state should validate its inputs, even from previous states. Don't assume upstream states sanitized data.
  • Transition conditions need protection.If an LLM evaluates transition conditions, those evaluations are injectable. Use deterministic conditions where possible.
  • State storage is attack surface.Persistent state can be read and written by multiple states. Treat it like a shared database—with access controls and validation.
  • Limit state machine exposure.Don't leak state names, transition logic, or current state to untrusted inputs. This information helps attackers navigate the machine.

What We Tell Clients

State machines provide structure, not security. They make your agent's behavior more predictable—including how it responds to attacks.

Use state machines for workflow organization. Add security controls at each state boundary, validate state data independently, and don't let LLMs control transitions for security-critical paths.

Related Patterns