Sovereignty
Own your AI memory before a vendor wipes it for you
There's a specific kind of feeling you only get from talking to ChatGPT for six months and then watching it forget your name.
On February 5, 2025, an OpenAI backend update wiped saved memories for thousands of ChatGPT users. People lost months of context they hadn't backed up because there was nothing to back up — they didn't have a database. They had a settings panel. The settings panel was empty when they woke up.
Nine months later, in early November 2025, it happened again. Different bug, same outcome. One user posted on Reddit: "My ChatGPT was writing a recipe to memory, and after it was done, the entire saved memory panel was blank, with no history at all. Everything is just gone." Dozens of replies confirmed the same thing.
OpenAI eventually acknowledged both incidents. There was no recovery path. The memories were gone.
That's the moment the abstract idea of "AI vendor lock-in" becomes a concrete feeling in your chest. You spent months teaching the model about your work, your preferences, the phrasing you like. Then a backend update — not yours, theirs — and it's a stranger again.
The actual problem (it's not a feature, it's a contract)
Hosted AI memory looks like a feature. ChatGPT Memory, Claude Projects, Mem0, Letta, Zep — they all present the same UI: a list of facts the AI "knows" about you. But you don't have the rows. You have a UI. The rows live in someone else's database, on their servers, under their terms of service.
That arrangement is fine until any of these things happen:
- The vendor changes the memory format and old entries become garbage
- The vendor caps total memory size (ChatGPT's is around 1,200–1,400 words; once full, you delete to add)
- The vendor has a backend bug and your memories silently disappear (Feb 2025, Nov 2025)
- Your account gets banned, frozen, or the email you signed up with stops working
- You decide to switch from ChatGPT to Claude and discover there was no "export" button to begin with
- The company gets acquired, deprecated, or shifts to a different pricing model that prices you out
A 2026 Parallels survey of 540 IT professionals found 94% are now concerned about vendor lock-in. AI memory is the worst kind of lock-in because it's emotional. You don't just lose data. You lose the version of the model that knew you.
What "owning" your memory actually means
This isn't philosophical. It's a single, testable question: can you read the rows directly?
With ChatGPT Memory, the answer is no. You see a UI listing.
With Context Hub, the answer is yes. The memories live in a Cloudflare D1 database tied to your own Cloudflare account. You can run wrangler d1 execute context-hub --command "SELECT * FROM memories" and see every row. You can dump the whole thing to JSON in 30 seconds. You can git diff your context the way you'd diff code.
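The "read the rows directly" test is easy to script. A minimal sketch, assuming you've already dumped the memories table to JSON via wrangler — the inline string below stands in for that file, and the flat id/content row shape is an assumption about the schema, not Context Hub's documented format:

```python
import json

# Stand-in for a JSON dump of the memories table
# (replace with open("memories.json").read() in practice).
dump = '[{"id": 1, "content": "prefers Postgres over Supabase"}]'
rows = json.loads(dump)

# grep-style search across your own memory store:
hits = [r for r in rows if "prefer" in r["content"].lower()]
print(hits[0]["content"])
```

The point isn't the five lines of Python; it's that the rows are plain data you can load, search, and diff without asking anyone's permission.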
That sounds nerdy until you remember why it matters: every other piece of your work life lives in a database you can read. Your email. Your code. Your notes. Your calendar. The reason you've never panicked about Notion losing your docs is that Notion gives you exports — and even if they didn't, you write your important things down somewhere else.
AI memory is the only category where we accepted "trust the vendor with your relationship to the model." Two outages in 2025 made it clear that wasn't a great deal.
What I learned the hard way
I had eight months of ChatGPT memory wiped in November 2025. I didn't even get a notification. I noticed because the model started asking me what I was working on — a question it hadn't asked since April.
Three things came out of that experience.
1. The most valuable memories aren't the obvious ones. When you finally look at your stored context after losing it, you realize the facts the model remembered (job, projects, tools) were the easy half. The harder half was the preferences — "Mayank prefers Postgres over Supabase even when Supabase looks easier." "Mayank wants me to push back when his architecture is wrong, not just affirm it." Those are the entries that took months of correction to land. Those are the ones I missed most.
2. Migration tools are always one-way. When Anthropic launched their Import Memory tool in March 2026 to pull context out of ChatGPT and Gemini, I noticed something: it pulls IN, but doesn't push OUT. None of the major vendors offer a clean export back to a portable format. Lock-in by inertia, even when the inbound migration is fixed.
3. Owning the rows is half the work; the other half is discipline about what gets stored. When your memory store is yours, you actually look at it. I read the Context Hub D1 dump monthly. About 5% of the memories are wrong, outdated, or embarrassingly trivial. I delete them. The signal-to-noise ratio stays high because I'm the librarian. With hosted memory you can't do this — you can't even see the librarian.
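Part of that monthly pass can be automated. A sketch under an assumed schema (a bare content field per row; real Context Hub rows may carry more columns) that flags suspiciously short entries for manual review rather than deleting anything — the librarian is still you:

```python
# Hypothetical rows from a dump of the memories table.
memories = [
    {"id": 1, "content": "Mayank prefers Postgres over Supabase"},
    {"id": 2, "content": "Said hi"},
    {"id": 3, "content": "Push back when architecture is wrong"},
]

# Flag trivially short entries as deletion candidates for human review.
flagged = [m for m in memories if len(m["content"].split()) < 3]
for m in flagged:
    print(f'review: {m["id"]} {m["content"]!r}')
```

A three-word threshold is a crude heuristic; the useful habit is having any scripted pass at all, which hosted memory makes impossible.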
What this looks like with Context Hub
The setup is the same as every other Context Hub use case:
$ npx create-context-hub

What changes is where the rows live. Every memory you save through any AI client — Claude.ai, Claude Code, ChatGPT (via Plus connectors), Cursor, Perplexity — lands in a D1 table on your Cloudflare account. Not OpenAI's. Not Anthropic's. Yours.
What that buys you in concrete terms:
- The wipes can't happen to you. A vendor backend bug can only wipe the vendor's own store. Your D1 stays intact regardless of what ChatGPT does to its memory backend on a given Tuesday.
- The migration is free. Switching from ChatGPT to Claude doesn't mean losing context anymore — both clients read from the same Context Hub via MCP. You change the tool. The relationship persists.
- Privacy is a property of the architecture, not a promise. No vendor sees your memory store. They see individual rows when they need them, via MCP, in the moment of the call. Then those rows leave their context window. They don't accumulate in someone else's database.
- You can audit your own memory. wrangler d1 execute, dump to JSON, grep for the preferences you've forgotten you taught the model. Before Context Hub I'd never read my stored context. Now I do it monthly and the AI works better for it.
The migration question (be honest about it)
You probably already have memories invested in ChatGPT or Claude or Cursor. Two questions matter:
Can you get them out? Anthropic's Markdown memory format exports cleanly — Claude.ai gives you the actual files. ChatGPT's format is opaque (vector-backed) and the official export gives you conversation logs, not extracted memories. You can recover most of the important ones by asking ChatGPT to list everything it knows about you, then pasting the response into Context Hub. It's manual. It works.
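That manual recovery step can be half-scripted once you have ChatGPT's reply. A hedged sketch — the bullet styles handled here are guesses at how the model formats its list, so check against your actual output:

```python
# ChatGPT's "list everything you know about me" reply, one memory per
# line, usually bulleted with "-" or "*".
reply = """\
- Prefers Postgres over Supabase
- Wants pushback on weak architecture
* Works on Context Hub
"""

# Strip bullets and whitespace; skip blank lines.
rows = [
    line.lstrip("-* ").strip()
    for line in reply.splitlines()
    if line.strip()
]
print(rows)
```

From there, each entry is one paste (or one insert) into Context Hub.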
Should you migrate them all at once? No. The better path: install Context Hub, point your most-used AI client at it, and let new memories accumulate there. Keep the old vendor memory until your Context Hub is rich enough that you don't miss it. Most people get there in 2–3 weeks of normal usage.
The uncomfortable truth
You've been renting your AI relationship from a company you didn't pick because of how they handle data. You picked them because the model was good. The memory got bundled in.
That arrangement was fine when AI memory was a novelty. It's less fine now that "the model knows me" is approaching critical to how you work.
The Feb 2025 and Nov 2025 wipes weren't the last incidents. They're just the first ones with a public news cycle. There will be more, because building robust memory at consumer scale is hard, and because the vendor's incentives don't fully line up with yours: they want memory to make you stickier; you want memory to be portable.
Owning the rows is how you opt out of that incentive mismatch. Not because Cloudflare is morally superior to OpenAI. Because the deal is different when the database is in your name.
Frequently asked questions
- What happens if Cloudflare goes down or changes their pricing?
- D1 is portable SQLite. You export the entire memory store with one wrangler command and re-host it on Turso, libSQL, or local SQLite. The MCP protocol layer doesn't care what database is underneath. You're never one company's policy change away from losing your context.
- Can I migrate my existing ChatGPT or Claude memories into Context Hub?
- Partially. Anthropic's Markdown-based memory exports cleanly — the Import Memory tool they shipped in March 2026 gets you close to a one-click flow once you adapt its output for Context Hub. ChatGPT Memory exports are messier (vector-backed, opaque format) but doable with some scripting. The good news: once they're in Context Hub, they never have to be migrated again.
- Does Context Hub send my memories to any third party for analysis?
- No. Memories live in a D1 row in your own Cloudflare account. The only entities that ever read them are the AI clients you explicitly connect (Claude.ai, Cursor, etc.) — and they read only at the moment of the call, via MCP. Nothing is logged, indexed, or persisted on Context Hub's side.
- What if I want to go back to a hosted memory product later?
- Export the rows, point your AI clients at the new service, done. The point of owning the rows is that the cost of leaving is roughly equal to the cost of arriving. That's what 'owning' actually means.
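The export-and-re-host path from the FAQ above is short enough to sketch. This assumes the dump is a flat list of id/content rows (a simplification of whatever the real table holds) and uses Python's stdlib sqlite3:

```python
import sqlite3

# Stand-in for a JSON dump of the memories table; in practice, load
# the file your wrangler export produced.
dump = [
    {"id": 1, "content": "prefers Postgres over Supabase"},
    {"id": 2, "content": "push back on weak architecture choices"},
]

# ":memory:" keeps this sketch self-contained; use a file path
# (e.g. "memories.db") for a real local re-host.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT)")
conn.executemany(
    "INSERT INTO memories (id, content) VALUES (:id, :content)", dump
)
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM memories").fetchone()[0]
print(count)
```

Because D1 is SQLite underneath, the same rows load into Turso, libSQL, or a plain local file with no translation layer.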
Ship in one command
Try Context Hub yourself.
One command. Every AI tool you use, finally on the same page.
npx create-context-hub