
Share context between ChatGPT, Perplexity, and Cursor

The dirty secret of every AI workflow that involves more than one client: each tool starts the conversation cold.

My most common workflow is: research a problem in Perplexity → structure a plan in ChatGPT → implement in Cursor. Three tools, three different strengths. Perplexity is best at sourcing fresh information with citations. ChatGPT is best at structuring loose ideas into a plan. Cursor is best at editing code in context.

But by the time I've moved from research → plan → implementation, I've explained the same context three times. Once to Perplexity to ground the research. Once to ChatGPT to set up the plan. Once to Cursor to give it the "why" behind the change.

The frustration isn't that any individual tool is bad. It's that they don't talk to each other, so the human becomes the integration layer. Again.

What multi-client AI actually looks like for me

Last week I was deciding how to add background job scheduling to a Next.js app. Real example, real workflow:

Stage 1 — Perplexity

I asked Perplexity: "What are the current best options for scheduled jobs in Next.js apps deployed on Vercel as of 2026?"

Perplexity does its thing — pulls current sources, surfaces Vercel Cron, Inngest, Trigger.dev, and a few others, with pros/cons and links. I read through, decide based on what I'm seeing that Vercel Cron makes sense for my use case (low-volume, simple, Vercel-deployed already).

Then I tell Perplexity:

"Save that as a decision for the cleanup-cron project — Vercel Cron, because volume is low and we're already on Vercel infra. Inngest considered and ruled out as overkill."

Perplexity writes the decision to Context Hub through the MCP connector. The row lands in D1, tagged with the source client (Perplexity), the project name, and a timestamp. I didn't copy anything. I just told it to save the call I made.
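Under the hood that's one MCP tool call ending in a D1 insert. Here's a sketch of the write path, with table and column names that are my guesses rather than the actual schema:

```ts
import type { D1Database } from "@cloudflare/workers-types";

// Sketch of the write path: an MCP "save decision" tool call ending in a
// D1 insert. Table and column names are guesses, not the real schema.
export async function saveDecision(
  db: D1Database,
  args: { project: string; text: string; source: string }
) {
  await db
    .prepare(
      "INSERT INTO memories (project, text, source, created_at) VALUES (?, ?, ?, ?)"
    )
    .bind(args.project, args.text, args.source, new Date().toISOString())
    .run();
}
```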

Stage 2 — ChatGPT

I switch to ChatGPT to plan the actual implementation. I type:

"I need to add a daily cleanup cron to the cleanup-cron project. What did we already decide about the scheduling infrastructure?"

ChatGPT pulls from Context Hub via the MCP connector, finds the decision Perplexity saved a few minutes earlier, and opens its plan with: "Based on the prior decision to use Vercel Cron for scheduled jobs in this project, here's the implementation plan…"

I didn't paste anything. I didn't recap anything. The decision followed me.
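For the curious, that lookup is a single generic tool call. Roughly this, with illustrative names and a made-up timestamp, not Context Hub's actual wire format:

```ts
// Illustrative only: the call a client issues and the result it gets back.
const call = {
  tool: "get_memories",
  arguments: { project: "cleanup-cron" },
};

const result = [
  {
    project: "cleanup-cron",
    text: "Use Vercel Cron; low volume, already on Vercel. Inngest ruled out as overkill.",
    source: "perplexity",
    createdAt: "2026-01-12T14:03:00Z", // hypothetical timestamp
  },
];
```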

Stage 3 — Cursor

Open Cursor in the project. The CLAUDE.md / Cursor Rules already point at Context Hub. Ask: "Add the daily cleanup cron we planned."
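By "point at" I mean an MCP server entry in the client's config. For Cursor that's a .cursor/mcp.json along these lines (the URL is a placeholder; check your client's MCP docs for the exact shape):

```json
{
  "mcpServers": {
    "context-hub": {
      "url": "https://your-context-hub.example.workers.dev/mcp"
    }
  }
}
```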

Cursor pulls the most recent decisions and the implementation plan ChatGPT just wrote, and produces the actual file changes — using Vercel Cron, with the cleanup logic as planned. Total time from research-end to first-commit: about 12 minutes.
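The output itself is small: a cron entry plus a route handler, roughly like this, with the path, schedule, and cleanup helper as placeholders:

```ts
// vercel.json addition (hypothetical schedule -- daily at 03:00 UTC):
//   { "crons": [{ "path": "/api/cleanup", "schedule": "0 3 * * *" }] }

// app/api/cleanup/route.ts
import { deleteExpiredRows } from "@/lib/cleanup"; // placeholder helper

export async function GET(request: Request) {
  // Vercel sends "Authorization: Bearer <CRON_SECRET>" when the env var is set.
  if (
    request.headers.get("authorization") !== `Bearer ${process.env.CRON_SECRET}`
  ) {
    return new Response("Unauthorized", { status: 401 });
  }
  const deleted = await deleteExpiredRows();
  return Response.json({ ok: true, deleted });
}
```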

Without Context Hub, the same workflow would take 30–40 minutes, half of which is recap-paste-recap.

Why this is structurally hard without MCP

Each AI client has its own opinion about what context means. ChatGPT calls it Memory. Claude.ai calls it Projects. Cursor calls it Rules. Perplexity calls it Spaces. They're all different shapes, stored with different vendors, behind different access models.

What MCP changed is that every client speaks the same tool-call protocol. They might disagree on what local context to keep, but they all agree on how to call an external server that serves context. Context Hub plugs into that single shared interface.

So the integration isn't "ChatGPT calls Perplexity's API." It's "both ChatGPT and Perplexity call the same MCP server, which they each see as a generic context tool." The vendors don't have to coordinate; the protocol does the work.
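To make that concrete, here's a minimal sketch of the server side of a shared context tool, using the TypeScript MCP SDK. Tool names and the in-memory store are placeholders, not Context Hub's actual code, and a hosted version would use a remote transport rather than stdio:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "context-hub", version: "0.1.0" });

// Stand-in for the real D1-backed store.
const memories: { project: string; text: string; source: string; at: string }[] = [];

// One shared tool surface; every MCP client calls it the same way.
server.tool(
  "save_memory",
  { project: z.string(), text: z.string(), source: z.string() },
  async ({ project, text, source }) => {
    memories.push({ project, text, source, at: new Date().toISOString() });
    return { content: [{ type: "text", text: "Saved." }] };
  }
);

server.tool(
  "get_memories",
  { project: z.string() },
  async ({ project }) => ({
    content: [
      {
        type: "text",
        text: JSON.stringify(memories.filter((m) => m.project === project)),
      },
    ],
  })
);

await server.connect(new StdioServerTransport());
```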

The three things I had to get right

Multi-client memory looks easy on a whiteboard. Three things broke when I shipped v0.1 and tried to use it across all five clients daily.

1. Models read more aggressively than they write. ChatGPT and Cursor will happily pull memories every session. They will rarely save them unless you nudge. Claude.ai and Claude Code save more proactively. The result was that ChatGPT-sourced memories were under-represented in the store. The fix wasn't a code change — it was a system-prompt nudge in the Context Hub instructions for ChatGPT specifically: "When the user shares a project decision or preference, save it to Context Hub before answering the rest of their question."

2. Project tagging needs to be automatic, not asked for. If the AI has to ask "which project should I tag this memory to?" the friction kills the flow. The fix: every MCP server response includes a recommended project name derived from the working directory (Cursor, Claude Code) or the current conversation topic (browser clients). The AI uses this default unless explicitly told otherwise (there's a sketch of this after point 3).

3. Source attribution prevents confusion when models disagree. Different models hallucinate differently. If ChatGPT writes a memory that contradicts something Claude.ai wrote two days ago, you need to be able to see who said what when, or you'll spend an hour debugging your own memory store. Every memory carries source + timestamp + project. The CLI viewer shows this on every row. Saved my sanity multiple times.
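Fixes 2 and 3 boil down to a few lines each. A sketch of the row shape and the default project tag, assuming a Node-side client and illustrative names:

```ts
import path from "node:path";

// A memory row as stored: every row carries source + timestamp + project.
interface MemoryRow {
  project: string;
  text: string;
  source: "chatgpt" | "claude" | "claude-code" | "cursor" | "perplexity";
  createdAt: string; // ISO 8601
}

// Default project tag: the working directory's basename for IDE/CLI clients.
// Browser clients fall back to a topic inferred from the conversation.
function defaultProject(cwd: string = process.cwd()): string {
  return path.basename(cwd);
}
```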

What this actually unlocks

The headline feature is the one I described above — research in one tool, plan in another, ship from a third. But the second-order effect surprised me: I started using each tool for what it's actually best at, instead of forcing one tool to do everything.

Before, the friction of switching tools made me overuse whichever one I was already in. I'd plan in Perplexity (which it's bad at) because I didn't want to pay the recap tax of switching to ChatGPT. I'd code in ChatGPT (also a bad fit) because I didn't want to re-explain it all to Cursor.

With shared context, the cost of switching collapsed. So I started switching whenever it made sense. Each tool gets used for its real strength. The work gets better.

That was the unexpected dividend.

Setup is the same as everything else

$ npx create-context-hub

Each client has its own connector setup, but the CLI prints copy-paste instructions for all of them at the end of install. Most take under 60 seconds per client — paste a URL into the settings panel, save, restart the client. Once.

What to try first

If you want to test whether multi-client context is worth the install effort, do this experiment:

  1. Pick a project you've been working on across at least 2 AI clients this week
  2. Count how many times you've recapped context between them
  3. Multiply by your average recap time (mine is 4–6 minutes)

That's your weekly cost of unshared context. Multiply it out over a year. Then decide whether 4 minutes of install was worth that number.
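With made-up but plausible numbers, the arithmetic looks like this:

```ts
// Hypothetical numbers -- substitute your own counts from steps 1-3.
const recapsPerWeek = 8;
const minutesPerRecap = 5; // my average is 4-6
const weeklyMinutes = recapsPerWeek * minutesPerRecap; // 40 minutes
const yearlyHours = (weeklyMinutes * 52) / 60;         // ~35 hours a year
```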

For me it wasn't even close. That's why this exists.

Frequently asked questions

Does this work with ChatGPT's free tier?
Custom MCP connectors require ChatGPT Plus or Team as of 2026. Free-tier ChatGPT can't add custom MCP servers yet. The good news: every other major client in the workflow (Perplexity, Cursor, Claude.ai, Claude Code) supports MCP on their free or low-cost tiers.
What's the difference between this and just pasting between tools?
Speed and accuracy. A 400-word recap takes 5 minutes to write and inevitably loses nuance. Context Hub lets the AI client pull the original decision, in its original wording, in about 200 ms. The compounding gain: each tool can also write back, so the next session in any client gets the latest state automatically.
Do I need to remember to save things, or does it happen automatically?
Both, depending on the client. Claude.ai and Claude Code save proactively when they detect a decision worth remembering. ChatGPT and Cursor are more conservative — they save when you ask them to, with a phrase like 'remember this' or 'save this for next time'. The MCP protocol exposes the save tools to every model; whether the model uses them aggressively is a behavior choice the model makes.
Can I see what's stored without going through an AI client?
Yes. The CLI ships with a 'context-hub list' command that dumps your full memory store as JSON, and 'context-hub web' opens a local read-only viewer at localhost:8788 that shows every memory, decision, and instruction with source attribution. The data is yours; you're never locked out.

Ship in one command

Try Context Hub yourself.

One command. Every AI tool you use, finally on the same page.

$ npx create-context-hub