Our Threat Model for Local-First AI
Every privacy product has a threat model. Most of them hide it. Here is ours, stated honestly.
What we protect against:
(1) Cloud-side data retention. When an agent reads your calendar or drafts an email, the raw content stays on your machine or passes through our server only transiently, for inference. We do not store agent conversations, attachments, or the output of tool calls in a persistent cloud database (first sketch after this list).
(2) Silent action. Every write happens behind an approval modal. A compromised LLM cannot send an email without you clicking Allow (second sketch after this list).
(3) Tampered audit history. Because a Merkle root of the audit log is anchored on-chain, we cannot delete or rewrite what your agent did without changing the root, which makes any tampering detectable (third sketch after this list).
(4) Credential leaks from the client. API keys live in your browser encrypted with AES-256-GCM. Even an attacker holding your stolen laptop with the disk unlocked still needs your passphrase to use them (fourth sketch after this list).
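"Transient inference" is concrete enough to sketch. A minimal version, assuming a standard Request/Response handler: the body lives in memory only for the duration of the call, goes to the provider, and the reply comes straight back, with no persistence layer anywhere in the path. The handler shape and provider URL are illustrative, not our production code.

```ts
// Sketch of a stateless inference relay. Illustrative only: the handler
// shape and provider URL are assumptions, not Operator Uplift internals.
async function relayInference(req: Request): Promise<Response> {
  const payload = await req.text(); // held in memory, never written to disk
  const upstream = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: req.headers, // forward auth and content-type unchanged
    body: payload,
  });
  // Hand the provider's reply straight back: no logging, no database write.
  return new Response(upstream.body, { status: upstream.status });
}
```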
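The approval gate has an equally simple shape. A minimal sketch, with window.confirm standing in for the real modal; `ToolCall` and the function names are illustrative, not our actual API:

```ts
// Every write funnels through one gate: no approval, no side effect.
type ToolCall = { tool: string; args: Record<string, unknown> };

async function requestApproval(call: ToolCall): Promise<boolean> {
  // The product renders a modal here; confirm() is a stand-in.
  return window.confirm(
    `Allow ${call.tool}?\n${JSON.stringify(call.args, null, 2)}`
  );
}

async function executeWrite(
  call: ToolCall,
  run: (c: ToolCall) => Promise<unknown>
): Promise<unknown> {
  if (!(await requestApproval(call))) {
    throw new Error(`User denied ${call.tool}`);
  }
  return run(call); // the side effect happens only past this line
}
```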
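For the audit trail, the mechanism worth understanding is the Merkle root itself: hash every log entry, pair the hashes up the tree, and publish the single root hash on-chain. Change or delete any entry and the recomputed root no longer matches the published one. The leaf encoding and odd-node rule below are generic conventions, not necessarily our exact scheme:

```ts
// Merkle root over audit-log entries using WebCrypto SHA-256.
// Any edit to any leaf propagates upward and changes the root.
async function sha256(data: Uint8Array): Promise<Uint8Array> {
  return new Uint8Array(await crypto.subtle.digest("SHA-256", data));
}

async function merkleRoot(entries: string[]): Promise<string> {
  if (entries.length === 0) throw new Error("empty audit log");
  let level = await Promise.all(
    entries.map((e) => sha256(new TextEncoder().encode(e)))
  );
  while (level.length > 1) {
    const next: Uint8Array[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i]; // duplicate last node on odd levels
      next.push(await sha256(new Uint8Array([...level[i], ...right])));
    }
    level = next;
  }
  // Hex-encode the root; this is the value anchored on-chain.
  return [...level[0]].map((b) => b.toString(16).padStart(2, "0")).join("");
}
```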
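And the credential storage follows the standard WebCrypto pattern: derive an AES-256-GCM key from your passphrase with PBKDF2, encrypt the API key, and persist only salt, IV, and ciphertext. The iteration count and sizes below are sensible defaults, not a statement of our exact parameters:

```ts
// Passphrase-gated API-key storage via WebCrypto. Parameters are
// illustrative defaults, not necessarily what the product ships.
async function deriveKey(passphrase: string, salt: Uint8Array): Promise<CryptoKey> {
  const material = await crypto.subtle.importKey(
    "raw", new TextEncoder().encode(passphrase), "PBKDF2", false, ["deriveKey"]
  );
  return crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 600_000, hash: "SHA-256" },
    material,
    { name: "AES-GCM", length: 256 },
    false, // non-extractable: the derived key never leaves the crypto module
    ["encrypt", "decrypt"]
  );
}

async function encryptApiKey(apiKey: string, passphrase: string) {
  const salt = crypto.getRandomValues(new Uint8Array(16));
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh GCM nonce per encryption
  const key = await deriveKey(passphrase, salt);
  const ciphertext = new Uint8Array(await crypto.subtle.encrypt(
    { name: "AES-GCM", iv }, key, new TextEncoder().encode(apiKey)
  ));
  return { salt, iv, ciphertext }; // only these three are persisted
}

async function decryptApiKey(
  blob: { salt: Uint8Array; iv: Uint8Array; ciphertext: Uint8Array },
  passphrase: string
): Promise<string> {
  const key = await deriveKey(passphrase, blob.salt);
  const plain = await crypto.subtle.decrypt(
    { name: "AES-GCM", iv: blob.iv }, key, blob.ciphertext
  );
  return new TextDecoder().decode(plain); // wrong passphrase -> GCM auth failure throws
}
```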
What we do NOT protect against (yet):
(1) Compromised LLM provider. If Anthropic or OpenAI is breached, or decides to log your prompts, we cannot stop that. Use Ollama if you need fully local inference (sketch after this list).
(2) Malicious browser extensions. An extension with content-script access can read anything the page can read, including your approval modal. This is a browser-platform problem we inherit.
(3) Social engineering of the human. If you click Allow on every prompt without reading, we cannot save you. The approval modal is a chance to think, not a guaranteed safeguard.
(4) Nation-state adversaries. We are a beta product. Assume a sophisticated adversary can find a flaw in our stack. For anything requiring genuine state-adversary defense, use the self-hosted Tauri build on an air-gapped machine.
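On the Ollama route: it listens on localhost and exposes an OpenAI-compatible chat endpoint, so prompts never cross the network. A minimal sketch, assuming Ollama's default port; the model name is whatever you have pulled locally:

```ts
// Fully local inference against an Ollama instance on its default port.
// The model name is illustrative; use any model you have pulled.
async function localChat(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // nothing left the machine
}
```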
The point of publishing the threat model is not to claim perfection. It is to give you enough information to decide whether our guarantees match your threat profile.
Want to try it?
Operator Uplift is in private beta. Join the waitlist and we'll let you in.