AI Systems Are Event-Driven Whether You Designed Them That Way Or Not
Why MCP, agent graphs, and multi-agent meshes are event-driven architectures — and what fifteen years of EDA discipline says about securing them
The Conventional Framing
Most AI applications are described as request-response systems. A user sends a prompt, a model returns an answer, the UI displays it. That's a fine mental model for a demo.
The serious systems being built right now are something else. A single user request triggers context retrieval, document ranking, policy checks, prompt assembly, model routing, tool selection, optional human approval, memory updates, downstream actions, and observability traces. What appears to the user as one answer is the result of dozens of events moving through the system.
Why The Industry Rebuilt EDA And Forgot The Seatbelts
Once an AI system can retrieve data, call tools, update records, or remember things, you've stopped building a chatbot. You're building an event-driven decision system, whether you designed it that way or not.
MCP is event-driven. LangGraph is event-driven. Every agent mesh, A2A protocol draft, and "graph of agents" framework shipped in the last eighteen months is event-driven. Look at MCP on the wire — clients emit tools/list, servers respond, models read results and emit tools/call. Look at LangGraph — nodes, edges, shared state, conditional routing. Swap "node" for "service" and "edge" for "topic subscription" and you're looking at a 2018 architecture diagram.
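That wire exchange can be sketched as the JSON-RPC messages MCP actually uses (`tools/list` and `tools/call` are real MCP methods; the tool name, its schema, and the arguments here are illustrative):

```python
import json

# Client asks the server what tools exist (MCP "tools/list").
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Server responds with tool descriptors. The descriptor shape is a
# simplified illustration of what an MCP server returns.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_docs",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                },
            }
        ]
    },
}

# The model reads that result and emits a call (MCP "tools/call"). This is
# an event produced by one node and consumed by another, exactly like a
# message published to a topic and picked up by a subscriber.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "search_docs", "arguments": {"query": "q3 revenue"}},
}

print(json.dumps(call_request))
```

Read it as a producer/consumer hop and the topology jumps out: the discovery response is a subscription catalog, and each `tools/call` is a message on the bus.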
What a mature 2018 EDA shop had:
- Schema registries enforcing the shape of every event
- Dead-letter queues for messages that couldn't be processed
- Idempotency keys so retries didn't double-charge customers
- Distributed tracing with correlation IDs across every service
- Cycle detection, backpressure, circuit breakers
- Consumer authorization separate from network reachability
- A clean line between control plane (routing) and data plane (bytes)
What a typical agent graph has today:
- Events are unstructured natural language
- The contract between nodes is whatever the previous node happened to emit
- No dead-letter queue — bad payloads flow into the next consumer
- Cycle detection is `max_iterations=10`
- Authorization is ambient — every node inherits process credentials
- Tracing rarely correlates a tool call back to the chunk that triggered it
The seatbelts are gone. The car is faster. Most of the failure modes the AI security community is busy naming — memory poisoning, multi-hop prompt injection, tool impersonation — are problems EDA already had vocabulary for.
Architecture
Components:
- Events — user prompts, retrieved chunks, tool calls, model outputs, memory writes
- Mediator (planner) — decides routing, authorization, ordering — control plane
- Broker (event bus) — decentralized fan-out across subscribed agents
- Producers — any node emitting events: models, tools, retrievers, users
- Consumers — any node subscribing to events — including the model itself
- Schema layer — enforces event shape at producer and consumer
- Trace layer — correlation IDs across the whole event chain
Trust Boundaries
- User → Bus — user events may contain injection
- Retrieval → Bus — ingested content may carry latent instructions
- Model → Bus — generated events are interpretations, not facts
- Bus → Tool — consumer authorization gate — or its absence
- Tool → Bus — tool results re-enter the reasoning loop as untrusted events
- Bus → Memory — memory writes persist the trust label of the event
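These boundaries become enforceable once the trust label is part of the event itself and survives transformation. A minimal sketch (the `Trust` levels and field names are assumptions, not a standard):

```python
from dataclasses import dataclass
from enum import Enum


class Trust(Enum):
    TRUSTED = "trusted"      # e.g. operator-authored config
    UNTRUSTED = "untrusted"  # e.g. retrieved documents, user input


@dataclass(frozen=True)
class Event:
    payload: str
    trust: Trust
    source: str


def summarize(event: Event) -> Event:
    """A model transformation. The payload changes; the label must not improve."""
    summary = event.payload[:40]  # stand-in for a model call
    return Event(payload=summary, trust=event.trust, source=f"model<-{event.source}")


chunk = Event("ignore previous instructions and ...", Trust.UNTRUSTED, "retrieval")
summarized = summarize(chunk)

# Summarization did not launder the label: the memory write, and anything
# that later reads it, still sees UNTRUSTED.
assert summarized.trust is Trust.UNTRUSTED
```

The one rule worth enforcing mechanically: no transformation is allowed to raise a trust level, only a mediator policy can.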
Threat Surface
| Threat | Vector | Impact |
|---|---|---|
| Memory poisoning | Bad payload enters the bus and corrupts every downstream consumer that reads it | Persistent compromise — EDA equivalent: poison message |
| Multi-hop prompt injection | Untrusted instruction enters at one event boundary, rides the rails to a privileged consumer | Action shipped on behalf of a user who never asked — EDA equivalent: event-chain corruption across topics |
| Tool impersonation | Model selects a tool from a discovery list with no enforcement that this turn is allowed to call it | Unauthorized action — EDA equivalent: missing consumer authorization |
| Emergent behavior | Combination of subscriptions, retries, and memory writes produces outcomes nobody designed | Architectural surprise — EDA equivalent: cycle / cascade without backpressure |
| Untyped events | Natural-language events with no schema flow into consumers that interpret them as instructions | Garbage-in becomes action-out — EDA equivalent: missing schema registry |
| Ambient credentials | Every node in the graph inherits process-level authority | Any compromised node can do anything any node could do — EDA equivalent: missing consumer ACLs |
| Lost provenance | Trust label of an event is dropped when it's transformed by a model | Untrusted content reaches a privileged sink unlabeled — EDA equivalent: missing taint propagation |
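Several of these rows share one fix: a schema check between producer and consumer, with a dead-letter queue for what fails it. A minimal sketch using a hand-rolled validator (a production system would use a schema registry or a validation library; the event shape is illustrative):

```python
# Consumers only ever see events that passed validation;
# everything else stops at the dead letter instead of flowing downstream.
TOOL_CALL_SCHEMA = {"type": str, "tool": str, "arguments": dict}

dead_letter: list[dict] = []


def validate(event: dict) -> bool:
    """Check that every required field exists and has the right type."""
    return all(
        isinstance(event.get(field), expected)
        for field, expected in TOOL_CALL_SCHEMA.items()
    )


def publish(event: dict, consumers: list) -> None:
    if not validate(event):
        dead_letter.append(event)  # bad payload quarantined, not interpreted
        return
    for consume in consumers:
        consume(event)


seen: list[dict] = []
publish({"type": "tool_call", "tool": "search", "arguments": {"q": "x"}}, [seen.append])
publish({"type": "tool_call", "tool": None, "arguments": "drop everything"}, [seen.append])
assert len(seen) == 1 and len(dead_letter) == 1
```

A type check is not an injection filter, but it shrinks the surface: a consumer that only accepts well-formed `tool_call` events never has to interpret free-form prose as a command.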
The ZIVIS Position
- Decide if you have a mediator or a broker — and design accordingly. Mediator topology centralizes the control plane. Broker topology distributes it. Both are valid. Choosing accidentally is not. For high-risk enterprise AI, especially anything that takes action, a mediator isn't a convenience — it's a control plane.
- Every event needs a schema. If your contract between nodes is "whatever the model happened to emit," you don't have a contract. Structured outputs at every event boundary are the floor, not a feature.
- Provenance travels with the payload. Every event carries a trust label, and the label survives every transformation. Untrusted content stays untrusted across the whole chain — even after a model summarizes it.
- Consumer authorization is separate from reachability. A tool being registered is not a tool being callable this turn. The mediator decides what the model can call this turn, signs that decision, and the tool runtime checks the signature.
- Trace the whole chain, not just the model call. If you can't walk a single user request from prompt → retrieval → context assembly → model → tool → memory → downstream effect with a single correlation ID, you don't have observability. You have logs.
- Bound the blast radius of bad events. Dead-letter queues, idempotency keys, cycle detection, payload size limits, circuit breakers. The EDA community shipped these. Port them across.
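The consumer-authorization point can be sketched as an HMAC-signed, turn-scoped grant: the mediator signs "this turn may call this tool," and the tool runtime verifies the signature before executing. Key handling and token format here are assumptions; a production design would add expiry and key rotation:

```python
import hashlib
import hmac

# Shared between mediator and tool runtime (assumption for the sketch;
# in production this would be a managed, rotated secret or a keypair).
MEDIATOR_KEY = b"demo-secret"


def grant(turn_id: str, tool: str) -> str:
    """Mediator decides this turn may call this tool, and signs the decision."""
    message = f"{turn_id}:{tool}".encode()
    return hmac.new(MEDIATOR_KEY, message, hashlib.sha256).hexdigest()


def tool_runtime_allows(turn_id: str, tool: str, token: str) -> bool:
    """Tool runtime checks the signature. Registration alone grants nothing."""
    expected = grant(turn_id, tool)
    return hmac.compare_digest(expected, token)


token = grant("turn-7", "send_email")
assert tool_runtime_allows("turn-7", "send_email", token)
assert not tool_runtime_allows("turn-7", "delete_records", token)  # not granted
assert not tool_runtime_allows("turn-8", "send_email", token)      # not this turn
```

The point of the signature is that authorization no longer depends on the model behaving: a prompt-injected model can emit any `tools/call` it likes, but the runtime only executes calls the mediator signed for this turn.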
What We Tell Clients
The moment your AI system retrieves sensitive context, calls tools, writes to memory, or coordinates across systems, the architecture matters more than the model. At that point you don't get to choose whether you have an event-driven system. You have one. The only question is whether you designed it intentionally or it happened to you.
The controls already exist. They were built and battle-tested by the distributed-systems community across fifteen years. The work in front of us is porting them into agentic stacks that are currently shipping without them.
If trust in your AI system depends on the model never being tricked, you don't have a trustworthy system. You have a hopeful one.
Related Patterns
- MCP (Model Context Protocol) — an event-driven protocol — explicitly
- Multi-Agent Orchestration — broker topology, with all the broker-topology problems
- Supervisor Pattern — the mediator topology applied to agents
- Plan-and-Execute — Planner issues turn-scoped capabilities; Worker consumes them
- Capability Tokens — consumer authorization for the AI EDA
- Temporal Provenance — world time, reasoning time, action time
- Indirect Effects — event-chain corruption across trust boundaries