10 questions every portfolio company should be able to answer before their AI touches customer data.
Can untrusted inputs steer the system?
PDFs, emails, web pages, Slack messages, and other external content can contain hidden instructions. If your system processes these inputs and passes them to an LLM, attackers may be able to hijack the model's behavior.
What can the agent do, and is it least-privileged?
If your AI can send emails, update CRMs, modify tickets, or push code—ask whether it really needs all those permissions. Every capability is attack surface. Scope down to exactly what's required.
Can one customer's data influence another customer's results?
Shared models, shared vector stores, or shared context windows can leak information across tenant boundaries. Ensure customer data stays isolated through the entire pipeline.
Could keys or tokens leak via logs, browser, prompts, or debugging tools?
API keys, database credentials, and auth tokens have a way of ending up in places they shouldn't—prompt logs, browser dev tools, error messages, or telemetry. Audit where secrets flow.
Can a poisoned document dominate retrieval and shape answers?
If attackers can upload or modify documents in your knowledge base, they may be able to inject content that consistently wins retrieval and influences every response. Consider document provenance and trust levels.
Can model output trigger actions without human confirmation?
When an LLM decides to take action—sending a message, making a purchase, deleting data—is there a human in the loop for high-risk operations? Guardrails should match the blast radius.
Are prompts and contexts stored safely—or building a breach artifact?
Logs containing full prompts, user queries, and model responses can become a treasure trove for attackers. Know what you're storing, where it goes, who can access it, and how long it persists.
Can someone scrape it, jailbreak it, automate queries, or drain the budget?
Without rate limiting, authentication, and monitoring, your AI endpoint is vulnerable to abuse—from competitors scraping your fine-tuned responses to attackers running up your API bill.
When the model is uncertain, does the system degrade safely—or confidently guess?
LLMs don't know what they don't know. When confidence is low or the query is out of scope, does your system admit uncertainty, escalate to a human, or hallucinate an authoritative-sounding answer?
What sensitive data does the model ever see—and what is forbidden?
Define clear boundaries: PII, financial data, health records, credentials. Know exactly what flows into prompts, what gets embedded in vectors, and what the model should never have access to.
Why this matters: Every portfolio company building with GenAI should pass this basic sniff test before touching real customer data or getting pitched to serious buyers. This isn't security theater—it's about protecting valuation: keeping enterprise deals moving, avoiding preventable incidents, and ensuring AI velocity doesn't become AI liability.
Created by Zivis — AI Security for the companies that matter.