We looked at 0G this week. Here is why we are holding off.
A founder we respect sent us the 0G docs and asked which modules we should fold in. We spent an afternoon on it. The answer is "none yet, and here is exactly what would have to change for us to ship it."
We are writing this in public because we have a rule: every infrastructure decision we make has a public record. Filecoin was a yes (we shipped it). Tauri desktop is a no (the binary does not build). 0G is a no (today), and the reasons matter more than the answer.
What 0G actually offers
Five modules, briefly. We are paraphrasing their docs into a one-line summary each so the rest of this post makes sense.
0G Storage. A decentralized storage network with two layers: permanent archival (Log) plus millisecond-level key-value query (KV). Optimized for AI data.
Compute Network. A decentralized GPU marketplace where providers sell inference, fine-tuning, and training. Pay-as-you-go with cryptographic settlement.
Persistent Memory. Cross-session permanent memory and ultra-large context windows for AI agents. Listed as "coming soon" on their docs.
Agent ID. A standard for tokenizing AI agents, their identity, memory, and behavior, with encrypted metadata, tradable ownership, and composability.
Privacy & Security. Hardware-enforced trusted execution environments for inference, plus monitoring nodes that watch for model drift and bias.
Why each one is a no, in plain English
Storage. Your action receipts already get mirrored to a public storage network (Filecoin via Lighthouse, see last week's post). Swapping that out for 0G Storage is moving from one decentralized storage provider to another. You would not see a difference. Our database (Supabase) stays where it is for the same reason: it works, it is fast, and the dependency is well understood.
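For concreteness, the mirroring step is roughly this shape. This is a minimal sketch, not Lighthouse's real API: `StorageBackend` is a hypothetical stand-in for the client, and content addressing is simulated with a plain SHA-256 hash.

```python
import hashlib
import json

class StorageBackend:
    """Hypothetical pin-and-retrieve interface (stand-in for a Lighthouse client)."""
    def __init__(self):
        self._blobs = {}

    def pin(self, data: bytes) -> str:
        # Content-addressed id: the same bytes always map to the same id.
        cid = hashlib.sha256(data).hexdigest()
        self._blobs[cid] = data
        return cid

    def fetch(self, cid: str) -> bytes:
        return self._blobs[cid]

def mirror_receipt(receipt: dict, backend: StorageBackend) -> str:
    """Serialize a receipt deterministically and hand it to the storage layer."""
    blob = json.dumps(receipt, sort_keys=True).encode()
    return backend.pin(blob)
```

The point of the sketch is that the interface is tiny: pin bytes, get back a content id. Any decentralized storage network that offers this shape is interchangeable from our side, which is exactly why swapping Filecoin for 0G Storage would change nothing a user can see.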
Compute. Our wedge is "use the model you already pay for." The assistant routes your turn to Claude or GPT or Gemini using your own API key. That is the entire premise. If we add a decentralized GPU marketplace as a sixth option, either we replace your provider (breaks the "your key" promise) or we add a new dropdown nobody asked for. Cryptographic settlement of inference is technically interesting and unrelated to anything our users care about.
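To make the "your key" premise concrete, the routing layer is roughly this shape. A minimal sketch: the provider URLs are illustrative, the request format is simplified, and real code would go through each vendor's SDK.

```python
from dataclasses import dataclass

# Illustrative endpoints only; real routing uses each vendor's SDK.
PROVIDERS = {
    "claude": "https://api.anthropic.com",
    "gpt": "https://api.openai.com",
    "gemini": "https://generativelanguage.googleapis.com",
}

@dataclass
class Turn:
    provider: str  # chosen by the user in settings
    api_key: str   # the user's own key; we never hold one of ours
    prompt: str

def route(turn: Turn) -> dict:
    """Build the outbound request for the user's chosen provider."""
    if turn.provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {turn.provider}")
    return {
        "url": PROVIDERS[turn.provider],
        "headers": {"Authorization": f"Bearer {turn.api_key}"},
        "body": {"prompt": turn.prompt},
    }
```

Note what is absent: there is no billing logic and no key of ours, because every request is authenticated with the user's credential. Bolting a GPU marketplace onto this means either replacing `api_key` with settlement machinery or adding a fourth entry nobody asked for.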
Agent ID. 0G's standard is for tokenizing agents so they can be owned and traded. We are not a marketplace for agents. The agents on our platform are scripts we publish. They are not tokens, they are not owned by anyone, and nobody trades them. Adopting a standard built for a different product would muddle a story that currently works.
TEE privacy. Trusted execution environments make inference private from the host machine running it. We do not run inference. Your AI provider does. We are a thin layer that proxies your prompt to a provider you chose. There is no inference for us to protect with hardware. Privacy in our model comes from BYOK plus signed receipts, not from a TEE on a node we do not run.
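Our privacy story is the signature, not the enclave. A minimal sketch of what "signed receipts" means, using an HMAC over the serialized action record; the real scheme, key handling, and receipt fields are simplified here.

```python
import hashlib
import hmac
import json

def sign_receipt(action: dict, secret: bytes) -> dict:
    """Attach an HMAC-SHA256 signature so the receipt is tamper-evident."""
    payload = json.dumps(action, sort_keys=True).encode()
    sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return {"action": action, "sig": sig}

def verify_receipt(receipt: dict, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    payload = json.dumps(receipt["action"], sort_keys=True).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt["sig"], expected)
```

A signature proves the receipt was not altered after the fact. A TEE proves something different: that the machine running inference could not peek at the prompt. Since the inference machine is your provider's, not ours, only the first guarantee is ours to give.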
The one piece that could change our mind
Persistent Memory is the only module worth a second look. The reason: AI assistants forget you when you switch providers. That problem is real, our solution is currently homegrown (browser localStorage plus a Supabase table), and it has obvious limits. If our memory system started straining, if 0G Persistent Memory actually shipped (right now it is "coming soon"), and if they offered either self-hosting or clear pricing, we would re-evaluate.
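For context, the homegrown system is essentially a two-tier store: a fast per-device cache in front of a durable table. A minimal Python sketch, with plain dicts standing in for localStorage and Supabase:

```python
class MemoryStore:
    """Two-tier memory: fast local cache backed by a durable table.
    The dicts are stand-ins for browser localStorage and a Supabase table."""

    def __init__(self):
        self.local = {}    # fast, per-device (localStorage role)
        self.durable = {}  # survives devices and sessions (Supabase role)

    def remember(self, key: str, value: str) -> None:
        # Write-through: both tiers stay in sync on every write.
        self.local[key] = value
        self.durable[key] = value

    def recall(self, key: str):
        if key in self.local:
            return self.local[key]
        value = self.durable.get(key)  # cache miss: fall back to durable tier
        if value is not None:
            self.local[key] = value
        return value
```

The obvious limits show through even in the sketch: no conflict resolution across devices, no eviction policy, no encryption at rest. Those are the seams where a shipped Persistent Memory product would have to beat what we already run.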
Three conditions, all required. Today, zero of them are true.
Why we publish decisions like this
Most infrastructure platforms have an integration story that runs "we shipped support for X." The thing nobody writes is the inverse: which integrations they considered and rejected, and why. That is the more honest signal about how a team makes decisions.
If you are a founder weighing any platform of this kind (0G, but also the various decentralized AI / wallet / agent infrastructure stacks that show up every month), the question is not "do they have cool features." The question is "does my product actually need what they are selling, and would I notice the difference." For us today, the answer is no for everything except Persistent Memory.
If 0G ships Persistent Memory and our memory system hits a wall, this post becomes the "we are integrating it" post. Until then, the decision lives in this post and in docs/0g-integration-decision.md in our repo, where anyone can read the same reasoning we just wrote.
You did not buy us to learn about decentralized infrastructure. You bought us for an assistant that drafts your email and waits for your tap. Every integration we add or skip has to clear that bar. Today, 0G does not.
Want to try it?
Operator Uplift is in private beta. Join the waitlist and we will let you in.
Try it free