Sync Claude.ai memories into Claude Code

The thing that kills more AI workflows than bad prompts: telling the same story twice.

You know what kills more AI workflows than bad prompts? Telling the same story twice.

I'd spend 20 minutes in Claude.ai working through a product decision — the constraints, the tradeoffs, the reason we picked option B over A. Then I'd switch to Claude Code in my terminal to actually ship the change, and the model had no idea who I was, what I'd decided, or why.

So I'd paste a 400-word recap. Claude Code would echo back "got it" and then write code that quietly contradicted three things we'd just agreed on in the browser. Because of course it did. It never read the conversation. It just pattern-matched on the recap.

Multiply that by ten decisions a week and you're spending an hour a day re-explaining yourself to your own tools. This is the use case Context Hub was built for first.

The actual problem (not the marketing version)

The real friction isn't "Claude.ai and Claude Code are different products." It's that memory in AI clients is a per-client database.

  • Claude.ai has its own memory toggle, stored in Anthropic's web product
  • Claude Code stores conversation state per-session in your local .claude directory
  • ChatGPT memory lives in OpenAI's account model
  • Cursor remembers things differently than any of them

Each AI client treats your context as proprietary state. They don't talk to each other. They were never designed to. So if you decide something important in one place, you carry it manually to the next. Your brain becomes the integration layer.

That's fine if you use one tool. The minute you use two, the context tax shows up.

What "shared memory" actually means here

Context Hub is a Model Context Protocol (MCP) server. MCP is the protocol Anthropic released that lets any AI client read from and write to a shared external source — files, databases, APIs, your memory. Cursor, Claude Code, ChatGPT (via custom connectors), Perplexity (via spaces with MCP), and Claude.ai all speak it.

The trick is that MCP servers are usually for things like "let the AI read my Notion" or "let it run SQL queries." Context Hub flips that: it's an MCP server whose only job is storing the things you'd otherwise have to repeat.
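
To make that concrete, here's a minimal sketch of a memory-write tool registered with the official MCP TypeScript SDK (@modelcontextprotocol/sdk). The server name, tool name, fields, and storage stub are illustrative assumptions, not Context Hub's actual API:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Stub: the real server would persist to Cloudflare D1 (sketched below).
async function saveToStore(row: {
  content: string;
  project: string;
  source: string;
  createdAt: number;
}): Promise<void> {
  console.log("would persist:", row);
}

const server = new McpServer({ name: "context-hub", version: "1.0.0" });

server.tool(
  "save_memory",
  "Persist a decision or constraint so any MCP client can read it later",
  {
    content: z.string().describe("The decision, in plain language"),
    project: z.string().describe("Project this memory belongs to"),
    source: z.string().describe("Which client wrote it, e.g. 'claude.ai'"),
  },
  async ({ content, project, source }) => {
    await saveToStore({ content, project, source, createdAt: Date.now() });
    return { content: [{ type: "text", text: `Saved to ${project}.` }] };
  }
);

// stdio transport for local testing; Context Hub itself runs on Workers.
await server.connect(new StdioServerTransport());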

You finish a long Claude.ai conversation and type:

"Ok let's lock that in. Postgres over Supabase for the migration tracker, because we want to avoid vendor lock-in. Save that for the migration-tracker project."

Claude reads the request, calls the Context Hub MCP tool that handles memory writes, and lands the decision in a Cloudflare D1 row tagged with the source client (claude.ai), a timestamp, and the project. You don't see the tool call. You don't need to.
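
Under the hood, that write is one parameterized statement against D1. A minimal sketch, assuming a memories table (the table and column names are my guesses; prepare/bind/run is Cloudflare's actual D1 API):

// D1Database comes from @cloudflare/workers-types; DB is bound in wrangler.toml.
interface Env {
  DB: D1Database;
}

async function saveDecision(
  env: Env,
  content: string,
  project: string,
  source: string
): Promise<void> {
  await env.DB.prepare(
    `INSERT INTO memories (content, project, source, created_at)
     VALUES (?1, ?2, ?3, ?4)`
  )
    .bind(content, project, source, new Date().toISOString())
    .run();
}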

Five minutes later, you open Claude Code in your terminal:

"I'm wiring up the migration tracker now — what did we decide about the database?"

Claude Code reads from the same D1 row store, finds the decision you saved in the browser, and quotes the exact reasoning back to you before writing any code. Same row. Same source of truth.
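
The read side is the mirror image: a search over the same table, scoped to the project. Again a sketch with assumed names, reusing the Env binding from the write sketch:

async function searchDecisions(env: Env, project: string, term: string) {
  const { results } = await env.DB.prepare(
    `SELECT content, source, created_at FROM memories
      WHERE project = ?1 AND content LIKE ?2
      ORDER BY created_at DESC`
  )
    .bind(project, `%${term}%`)
    .all();
  return results; // surfaced to the client as the MCP tool result
}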

No paste. No recap. No drift.

What this looks like in practice

Here's a real workflow I run multiple times a day.

Step 1 — Decide something in the browser.

In Claude.ai I'm working through whether to ship a feature flag in the next release. After 15 minutes of back-and-forth, I land on: "Ship behind a feature flag, default off, gradual rollout to power users first. Reason: previous launch broke for 3% of users, can't afford that again."

So I tell Claude:

"Save that as a decision for this project — feature flag, default off, gradual rollout, with the 3% breakage reason."

Claude reads it, writes the decision to Context Hub via the MCP decision-writing tool, and confirms. Done. I didn't look at any syntax. I just talked to it.

Step 2 — Switch to the terminal.

I open Claude Code in the project directory and type:

"I'm implementing the new export feature. What did we decide about the rollout strategy?"

Claude Code searches Context Hub for matching decisions, finds the one I saved 15 minutes ago in the browser, and quotes the exact reasoning back to me before writing any code. I never told it to look in Context Hub. The MCP tool is wired up; it knows.

Step 3 — Ship.

The implementation Claude Code writes uses a feature flag, defaults off, with a comment explaining the gradual rollout reasoning. I didn't have to dictate any of that. It read the decision and acted on it.
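
To give a flavor of the output, here's the rough shape of what lands in the diff. The names are hypothetical, not what Claude Code literally wrote:

// Rollout decision (from Context Hub, saved via claude.ai): feature flag,
// default off, gradual rollout to power users first, because the previous
// launch broke for 3% of users.
const EXPORT_ENABLED_BY_DEFAULT = false;

export function isExportEnabled(user: { isPowerUser: boolean; optedIn: boolean }): boolean {
  // Gradual rollout: power users who opt in get it first; everyone else
  // stays on the default until the flag flips.
  if (user.isPowerUser && user.optedIn) return true;
  return EXPORT_ENABLED_BY_DEFAULT;
}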

The whole interaction took 90 seconds in the terminal. The memory saved 12 minutes of recap I would have otherwise had to write.

What I learned the hard way

Three things broke when I first tried this.

1. Untagged memories rot fast. My first version of Context Hub stored everything as flat text. Within a week I had 200 memories and no way to find anything. The fix was making "project" a first-class field, not a tag — every memory belongs to a project, and the AI client passes the current working-directory project name on every read (see the schema sketch after this list).

2. Source attribution matters more than I expected. I almost shipped without tracking which client wrote each memory. Then I noticed Claude Code occasionally writing memories that contradicted things Claude.ai had said earlier. Without knowing the source, I couldn't tell if I'd changed my mind or if a model had hallucinated. Now every memory carries source: "claude.ai" | "claude-code" | "chatgpt" | … and the UI shows it on every entry.

3. Memory size is a real performance constraint. Claude.ai's MCP integration reads the full memory store at session start. If you have 500 memories, that's a lot of tokens before the user types anything. Context Hub limits the default fetch to the 50 most recent plus anything matching the current project, and exposes a search tool the model can call when it needs to dig deeper into history. This kept first-token latency under 800ms even with thousands of stored items.
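
All three lessons are visible in the storage layer. A sketch under the same assumed names, with project and source as first-class columns and the session-start fetch capped:

// Assumed DDL. Lessons 1 and 2 live in the schema, not in free-form tags.
const SCHEMA = `
CREATE TABLE IF NOT EXISTS memories (
  id         INTEGER PRIMARY KEY AUTOINCREMENT,
  content    TEXT NOT NULL,
  project    TEXT NOT NULL,  -- lesson 1: every memory belongs to a project
  source     TEXT NOT NULL,  -- lesson 2: 'claude.ai' | 'claude-code' | ...
  created_at TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_memories_project ON memories (project);
`;

// Lesson 3: cap the session-start read at the 50 most recent rows plus
// everything in the current project, instead of streaming the whole store.
async function defaultFetch(env: Env, project: string) {
  const { results } = await env.DB.prepare(
    `SELECT content, project, source, created_at FROM memories
      WHERE project = ?1
         OR id IN (SELECT id FROM memories ORDER BY created_at DESC LIMIT 50)
      ORDER BY created_at DESC`
  )
    .bind(project)
    .all();
  return results;
}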

These aren't AI breakthroughs. They're the boring infrastructure decisions that make AI clients usable across a real workflow.

Setup, in one command

$ npx create-context-hub

That command does five things:

  1. Scaffolds a Cloudflare Workers + D1 project in your current directory
  2. Provisions a free D1 database tied to your Cloudflare account
  3. Runs the schema migrations
  4. Deploys the MCP server to Cloudflare's edge
  5. Prints connection instructions for Claude.ai, Claude Code, ChatGPT, Cursor, and Perplexity

Total time on a fresh machine: about 4 minutes. Total cost: $0 if you stay inside Cloudflare's free tier — and you will, because D1's free tier is generous for personal-scale memory.

For Claude.ai, you add the MCP server URL in Settings → Connectors. For Claude Code, you add it via claude mcp add in your terminal. Both clients now read and write the same D1 store.
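
For Claude Code, that looks something like this, with a placeholder URL standing in for whatever the deploy step prints (check claude mcp add --help if the flags have shifted):

$ claude mcp add --transport http context-hub https://context-hub.example.workers.dev/mcp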

What this isn't

This isn't a way to make Claude.ai and Claude Code feel like the same product. They're not. The browser is for thinking. The terminal is for shipping. They have different ergonomics for good reasons.

What Context Hub does is make the decisions and constraints portable between them. The conversation stays in each tool. The conclusions follow you everywhere.

If that sounds like a small thing, you're probably not running this workflow daily. If you are, you already know what getting an hour a day back is worth.

Frequently asked questions

Does this require both Claude.ai Pro and Claude Code?
No. Context Hub speaks the open Model Context Protocol (MCP). Any Claude.ai plan that exposes MCP connectors works (Pro and above as of 2026), and Claude Code is free for personal use. The shared memory is yours either way — it lives in your Cloudflare D1, not in Anthropic's account model.
What if I switch from Claude.ai to ChatGPT mid-project — does my context follow?
Yes. That's the entire point. Context Hub is client-agnostic. ChatGPT, Cursor, Perplexity, and any future MCP client read the same D1 row store. The memory follows you, not the tool.
Where does the memory actually live?
In a Cloudflare D1 database tied to your own Cloudflare account. Not in Anthropic, not in OpenAI, not in a third-party SaaS. The CLI provisions it during install. You own the rows.
What happens if I have hundreds of memories — does Claude.ai slow down?
No. Context Hub limits the default fetch at session start to the 50 most recent memories plus anything tagged to the current project. When the model needs deeper context, it can search the full store on demand. First-token latency stays under 800ms even with thousands of stored items.

Ship in one command

Try Context Hub yourself.

One command. Every AI tool you use, finally on the same page.

$ npx create-context-hub