May 18, 2026 · 10 min

ChatGPT Memory vs Claude Projects vs Gemini Gems Compared

The three big assistants each have a different answer to the same question: "How do I get this AI to remember the context of my work?" ChatGPT has memory. Claude has Projects. Gemini has Gems. They sound interchangeable. They are not. This post breaks down what each one actually does, where each falls short, and the workflow that gets continuity without picking favorites.

TL;DR

  • ChatGPT memory is automatic but opaque. The model decides what to store, and you can't read or move it.
  • Claude Projects are explicit and document-shaped. You add files; the model reads them on every chat in the project.
  • Gemini Gems are persona-shaped. They're customized assistants with persistent instructions, not document memory.

None of them solve cross-platform memory. The fix isn't picking the right one — it's adding a memory layer that works across all three.

Side by Side

| Feature | ChatGPT Memory | Claude Projects | Gemini Gems |
| --- | --- | --- | --- |
| Primary unit | Bullet-point facts | Reference documents | Custom assistant + instructions |
| Who decides what's stored | The model (mostly) | You | You |
| Visibility | Limited — bullet list view | Full — files you uploaded | Full — instructions you wrote |
| Editable | Manual delete only | Add / remove / replace files | Edit instructions any time |
| Carries to other vendors | No | No | No |
| Best for | Personal preferences | Long-running projects | Recurring task templates |
| Worst for | Project-specific memory | Cross-project memory | Document-heavy memory |

ChatGPT Memory in Detail

ChatGPT's memory is a list of short bullet points the model writes about you over time. Things like "User is building a Next.js app with Postgres" or "User prefers concise answers."

What it's good at:

  • Personal preferences that should always apply (tone, style, language).
  • Repeated facts about your work that are stable across projects.

What it's not good at:

  • Project-specific knowledge. Bullet points can't carry the reasoning behind a decision or the structure of a document.
  • Auditability. You can see the bullet list, but you can't see why a fact was stored or when it was last updated.
  • Portability. There's no export of memory itself in a useful format. If you switch to Claude tomorrow, this memory does not come with you.
  • Long-running projects. The bullet format degrades for anything richer than a fact.

The failure mode: you ask ChatGPT about a project you've been working on for weeks, and it knows you exist and what stack you use, but it doesn't know any of the actual decisions you made. That's because none of those decisions ever made it into a bullet point.

Claude Projects in Detail

Claude Projects are bundles of reference material attached to a workspace. You upload files (docs, code, transcripts, PDFs), set instructions, and every chat inside that project sees them.

What it's good at:

  • Long-running, document-heavy work. The model reads your files every chat.
  • Explicit, auditable memory — you literally see and choose what's in the project.
  • Deep, project-specific reasoning, because the source material is right there.

What it's not good at:

  • Cross-project memory. Each project is its own bubble. Knowledge in Project A doesn't help in Project B.
  • Free-form chats. If most of your work isn't structured into projects, you're back to ad-hoc context.
  • Portability. Files live inside Anthropic's product. Moving the project to ChatGPT or Gemini means rebuilding it.
  • Auto-updating. Files don't update themselves; if your project state moves, you're responsible for keeping the docs in sync.

The failure mode: you build a great project, then start a new chat outside the project (or in a different project) and the carefully curated documents are invisible. Same chasm as ChatGPT memory, just shaped differently.

Gemini Gems in Detail

Gemini Gems are custom assistants — a persona plus instructions, optionally with files. Closer to ChatGPT's "Custom GPTs" than to ChatGPT memory.

What it's good at:

  • Recurring tasks with a stable shape. "Code reviewer Gem," "Spanish tutor Gem," "Daily-standup Gem." The instructions are persistent.
  • Sharing a configured assistant with someone else.
  • Layering on top of Gemini's strengths (Google integrations, multimodal, fast).

What it's not good at:

  • Memory in the working sense. A Gem doesn't accumulate knowledge across chats with you the way ChatGPT memory tries to.
  • Project-specific memory at depth — file support exists but is shallower than Claude Projects in practice.
  • Anything cross-vendor. Gems live inside Gemini.

The failure mode: Gems give you durable instructions but not durable knowledge. Every new chat is still a fresh slate beyond the Gem's preset.

What All Three Have in Common

The common shape:

  1. Vendor-specific surface. Each lives inside one product. None work across platforms.
  2. Limited shape. Bullet points (ChatGPT), file bundles (Claude), persona+instructions (Gemini). None of them are "the memory of your work life."
  3. Black-box-ish UX. ChatGPT decides what to store. Claude reads files but the read is opaque. Gemini's Gem behavior depends on the underlying model version.
  4. No portability. Switching providers means starting over.

If the only AI you'll ever use is ChatGPT, ChatGPT memory plus a few Custom GPTs gets you reasonably far. If you'll ever use more than one — and you almost certainly will, because the best model for the job changes — these features stop being enough.

The Pattern That Actually Works

Stop trying to make any one vendor's memory feature do all the work. Keep the canonical memory outside the vendor, and load context into whichever model you're using.

The MindLock workflow:

  1. Capture — save chats from any AI (ChatGPT, Claude, Gemini, Perplexity) with Ctrl + S, or feed in transcripts from official exports.
  2. Import into the Dashboard — local-first storage on your device.
  3. Distill — convert raw transcripts into compact memory documents (profile + topic). Run locally on your GPU via WebLLM, or in the cloud via Gemini.
  4. Search and paste — semantic search across your memory, generate a context block, paste into the next chat on whichever model is right for the task.
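The shape of step 4 can be sketched in a few lines. MindLock's actual implementation isn't shown here, so every name below is hypothetical, and a real version would use embedding vectors rather than token counts — but the mechanics are the same: score your memory documents against a query, take the best matches, and format them as a paste-ready context block.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine(a, b):
    # Cosine similarity between two token-count vectors.
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_context_block(query, memory_docs, top_k=2):
    """Rank memory documents against a query and format the best
    matches as a context block to paste into any model's chat."""
    q = Counter(tokenize(query))
    scored = sorted(
        memory_docs,
        key=lambda d: cosine(q, Counter(tokenize(d["body"]))),
        reverse=True,
    )
    lines = ["## Context from my memory layer"]
    for doc in scored[:top_k]:
        lines.append(f"### {doc['title']}")
        lines.append(doc["body"])
    return "\n".join(lines)
```

Because the output is plain text, the same block pastes equally well into ChatGPT, Claude, or Gemini — which is the whole point of keeping the canonical memory outside any one vendor.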

Why this works where vendor features don't:

  • Vendor-agnostic. The memory document doesn't know or care which model reads it next.
  • Auditable. You can read every memory document as plain text. No black box.
  • Portable. Files on your disk. Move them, version them, back them up.
  • Composable. Claude Projects are great for one project's docs; ChatGPT memory is fine for personal preferences. The MindLock layer lives above both — it's where the canonical record lives, and you copy what you need into either tool.
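For concreteness, a distilled topic memory document might look like the fragment below. This is an illustrative sketch, not a prescribed schema — the point is that it's readable plain text you can audit, version, and paste anywhere:

```
# Topic: billing-service rewrite
Updated: 2026-05-12

Decisions
- Moved from Stripe Checkout to Payment Intents (needed SCA control).
- Postgres over DynamoDB: relational reporting queries won.

Open questions
- Proration handling for mid-cycle downgrades.
```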

Try it now: open the Dashboard, import one ChatGPT chat and one Claude conversation, and watch the same workflow handle both.

For a longer treatment of the cross-platform pattern, see Give ChatGPT, Claude, and Gemini Persistent Memory Across Every Chat.

When to Still Use the Native Features

This isn't an "either/or." Use the vendor features for what they're good at:

  • Use ChatGPT memory for tone and stable preferences. Let it do its bullet-point thing.
  • Use Claude Projects for long-running, document-heavy collaborations where every chat genuinely needs the same source files.
  • Use Gemini Gems for recurring task shapes with stable instructions.
  • Use a personal memory layer for anything you'd be furious to lose, anything project-specific, and anything you want available no matter which model you talk to next.

The mistake is treating vendor memory as your only memory. The fix is layering a portable, auditable record on top.

Privacy Differences

Quick note: each of these features stores data on the vendor's servers under their privacy policy. Different policies, different retention rules, different model-training opt-outs. If your concern is "my work shouldn't live on OpenAI's servers," none of the three solve that — only a local-first memory layer does. See Private AI Memory: Incognito Mode With Full Data Sovereignty.

Decision Matrix

If you're picking a primary tool today:

  • Want depth on a small number of long-running projects? Claude with Projects.
  • Want broad daily use across many small tasks? ChatGPT with memory + a few Custom GPTs.
  • Live inside Google Workspace and want assistants for recurring shapes? Gemini with Gems.
  • Want continuity that survives switching tools? Any of the above plus an external memory layer like MindLock.

Realistically, most heavy users end up using two or three of these, plus something like MindLock to keep the connective tissue intact.

Start

The pattern that ages best is the one that doesn't bet on any single vendor. Open the Dashboard, import a chat from ChatGPT or Claude or Gemini, run a distillation, and see what your memory looks like as a document instead of a black box. From there, the choice between memory, Projects, and Gems stops being an architecture decision — it's just which interface you happen to be in today.

Use-Case Walkthroughs

Three concrete shapes that show the differences in practice.

Walkthrough 1 — Solo developer building a SaaS. ChatGPT memory holds your stack and tone preferences. A Claude Project holds the architecture doc, schema, and key code modules. A Gemini Gem handles your weekly "release-notes-from-commits" task. The MindLock layer holds distilled memory of every meaningful chat across all three, and you paste a project-scoped context block into whichever model you're using today. Each tool does what it's good at; the memory layer is the connective tissue that prevents context loss whenever you switch.

Walkthrough 2 — Researcher synthesizing a literature review. Claude Projects shines here — upload the PDFs, and every chat sees the source material. ChatGPT memory is barely used. Gemini Gems aren't relevant. But the distilled summaries of each Claude session, plus your own notes on what's been ruled in or out, belong in MindLock so the review survives Claude's project-by-project compartmentalization. When you start the writing phase in ChatGPT, you paste the distilled summary and ChatGPT picks up where Claude left off.

Walkthrough 3 — Operations manager running recurring playbooks. Gemini Gems carry the most weight: a "weekly KPI review" Gem, a "vendor follow-up" Gem, a "meeting prep" Gem. ChatGPT memory keeps personal preferences. Claude Projects barely come into play. The MindLock layer holds the outcomes of each playbook run — the actual numbers, the actual decisions — because Gems hold instructions, not history. Run-level memory is portable and searchable; instructions stay in the Gem.

The thing all three walkthroughs have in common: the vendor features are useful but partial. The external memory layer is what makes them add up.

Migrating Between Vendors

If you've been heavily invested in one of these features and want to migrate (or just hedge), a few practical notes:

  • Leaving ChatGPT memory: export your data, distill your most important chats, drop the bullet-style memories. The bullets will rehydrate as a richer profile memory in document form.
  • Leaving Claude Projects: export each project's reference docs and import them as a memory document set. The original chats can be saved with Ctrl + S before you close the project.
  • Leaving Gemini Gems: copy the Gem's instructions into a memory document tagged with the task name. The persona-shaped behavior is now portable as text you can paste into any model's system prompt.
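The ChatGPT migration path can be sketched concretely. A ChatGPT data export includes a conversations.json file; the field names below ("title", "mapping", "message", "author", "content", "parts") match the export format as of this writing, but treat them as assumptions and adjust if the schema has changed:

```python
import json

def export_to_memory_docs(conversations):
    """Flatten a ChatGPT-style export (a list of conversations) into
    plain-text memory documents, one per conversation, ready to be
    distilled or dropped into a memory layer as-is."""
    docs = []
    for conv in conversations:
        lines = [f"# {conv.get('title', 'Untitled chat')}"]
        # Each conversation stores messages in a "mapping" of node id -> node.
        for node in (conv.get("mapping") or {}).values():
            msg = node.get("message")
            if not msg:
                continue
            role = msg.get("author", {}).get("role", "unknown")
            parts = msg.get("content", {}).get("parts") or []
            text = " ".join(p for p in parts if isinstance(p, str)).strip()
            if text:
                lines.append(f"{role}: {text}")
        docs.append("\n".join(lines))
    return docs

# With a real export:
#   docs = export_to_memory_docs(json.load(open("conversations.json")))
```

The defensive `.get()` calls matter here: export schemas drift, and a migration script that dies on one malformed conversation loses the whole batch.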

In every case, the destination is the same — a portable memory layer — even if the starting point is different. That's the upside of the pattern: it doesn't care which vendor's feature you used yesterday.

Picking a Default for Today

If you want a single recommendation:

  • Use ChatGPT for default daily work, with memory on for tone and personal preferences.
  • Use Claude with Projects for any long-running, document-heavy work.
  • Use Gemini Gems for recurring task shapes you'd otherwise re-explain weekly.
  • Keep the canonical record in a memory layer outside any of them, so when one of them changes (and they will), you're not the one starting over.