AI Threat Modeling

Threat modeling for systems that reason, remember, and act.

ZIVIS maps threats across the full Agent Trust Protocol Stack — context, meaning, memory, runtime, agency, authority, governance, observability — before they become incidents, failed enterprise reviews, or unsafe automation.

AI systems don’t fail like traditional software.

A malicious instruction can arrive anywhere in the loop — a retrieved document, a tool result, another agent’s output, a memory item from last week’s session, a routing rule the orchestrator decided was safe, or a sub-agent’s plan handed back up the chain.

The agent may interpret it, plan around it, persist it in memory, route it to a tool, or fan it out across a multi-agent system. Every step is an attack opportunity that didn’t exist when AI was just a chat completion.

None of those steps trip a WAF rule. None look like a network boundary crossing. They’re what an autonomous system does correctly — until an adversary shapes the input.

How is this different from STRIDE or PASTA?

Traditional threat modeling was built for systems with hard boundaries. AI doesn’t have those. The question changes.

TRADITIONAL THREAT MODEL ASKS

What components exist, what boundaries separate them, and what can an attacker do across those boundaries?

A boundary-checker. The right model for the last twenty years of web and API.

AI THREAT MODEL ASKS

How can meaning, intent, context, memory, authority, and tool access be manipulated until the system takes an action it should not take?

Attackers don’t only cross network boundaries. They tunnel through meaning.

That’s not a flaw in STRIDE. It’s a different kind of system.

Three ways to engage ZIVIS threat modeling.

One overarching model, scoped models for what you’re shipping next, and ongoing maintenance for the model itself.

Overarching AI Threat Model

A system-wide model of how trust moves through your AI program.

Actors, assets, trust assumptions, AI surfaces, agent layers, and business-critical failure modes. The reference document a board, a regulator, or a Salesforce-style reviewer can read and understand. Updated as your architecture evolves.

FIT FOR

Companies building or scaling an AI program from the ground up, or preparing for a major review.

Immediate-Use Threat Models

Practical models for the launch, RFP, or feature you're shipping next.

Tight-scope models for a specific high-risk feature, customer pilot, enterprise security review, or board concern. Same rigor as the overarching model, narrowed to the surface that matters this quarter. Hands you the attack scenarios, the mitigations, and the evidence trail your reviewer will ask for.

FIT FOR

Teams with an active deal stalled in security review, or a feature you can't ship without proving safe.

Ongoing Threat Model Maintenance

A living model that doesn't go stale.

Threat models drift the moment your AI does. We track actors, scenarios, mitigations, tests, red-team results, and ownership over time — wiring the model into the rest of your continuous testing and remediation cycle. The model that was true today is still true the next time a reviewer asks.

FIT FOR

Companies running ZIVIS as their fractional security team, or anyone who needs the threat model maintained as the AI program grows.

The Agent Trust Protocol Stack (ATPS)

Threat modeling across the agent stack.

ZIVIS models threats across the full Agent Trust Protocol Stack — from model provenance and input context to meaning, memory, runtime, agency, authority, governance, and observability. This lets teams see exactly where a semantic manipulation can become a business-impacting action.

L9
Observability
Can you reconstruct what happened across systems?
L8
Governance / Human Control
Policy, approvals, escalation, human review.
L7
Authority
Who/what the AI is allowed to act as.
L6
Agency
What the AI can do in the world.
L5
Runtime / Execution
Agent loops, orchestration, retries, routing, tool calls.
L4
Memory
Persistent context, vector stores, learned state.
L3
Meaning
Interpretation, intent, semantic manipulation.
L2
Reasoning
Model inference, decision boundaries, planning.
L1
Context
Prompts, RAG, tool descriptions, input data.
L0
Supply Chain / Provenance
Models, datasets, embeddings, evals, dependencies.

Read it bottom-up. Every threat at L3 Meaning that goes unmodeled becomes a surface above it — in memory, in agency, in authority, in observability. L3 is where the new attack plane lives.

THE POSITIONING

ZIVIS doesn’t just diagram AI systems. We model how meaning moves through them — and where that meaning can be manipulated into unsafe action.

Know your attack surface before attackers do

AI systems have a new kind of attack surface. ZIVIS is built to find it. Tell us what you're shipping and we'll map the threat model.

We typically respond within 24 hours.

Your message goes directly to

Jim Goldman

Jim Goldman

Co-Founder & CISO

30+ yrs cybersecurity. Ex-Salesforce VP Enterprise Security. FBI Cyber Crime TFO.

Jake Miller

Jake Miller

Co-Founder & CEO

25+ yrs building secure enterprise systems. First engineer on Salesforce Journey Builder.