A Reddit thread posted two days ago asks a question that has been bubbling up more frequently: "I'm seriously doubting Claude's Incognito policy." The thread is full of developers and professionals who turned on incognito mode assuming it worked like a browser's private window, then started reading the fine print and got nervous.
That skepticism is healthy. Claude incognito mode is a real, useful feature. It is also narrower than its name implies, and there are meaningful gaps that matter if you are working with client code, sensitive business data, or anything that falls under an NDA. This is not a hype piece. Let us look at what incognito mode actually does, where it stops, and what your options are when it is not enough.
Key Takeaways
- Claude incognito mode removes chats from your history and opts you out of training data, but does not make you anonymous.
- Incognito chats are retained by Anthropic for ~30 days for safety and legal reasons, so "not in history" does not mean "not stored".
- On Team and Enterprise plans, admins can export incognito conversations, so never assume your manager cannot see them.
- Claude Code has no incognito mode (GitHub issue #9044 still open); full session transcripts are stored locally in `~/.claude/projects/`.
- For stronger privacy guarantees, use AWS Bedrock with Zero Data Retention or an Enterprise contract with explicit data policies.
Learn this hands-on
Want to get more out of Claude? Master Claude Code with 8 video lessons. Check out How to Master Claude Code: Ship Code Faster & Build AI Agents.

How to Enable Claude Incognito Mode
Enabling incognito mode is straightforward. When you start a new chat outside of any project, look for the ghost icon in the upper right corner of the chat interface. Click it, and you will see the chat window take on a black border with an "Incognito chat" label confirming the mode is active.
A few things worth noting:
- Incognito is only available outside of projects. If you are working inside a Claude Project, the option does not appear. You need to start a standalone chat.
- The session ends when you close the window. There is no way to return to an incognito conversation once it is closed.
- You do not need to configure anything else to activate it. The ghost icon is the only toggle.
That is the full setup. Simple enough. Now let us talk about what you actually get.
What Claude Incognito Mode Actually Does
When you use a Claude incognito chat, three things happen that do not happen in a regular chat:
Your conversation is not saved to your chat history. You will not see it in your sidebar, and it will not be accessible after the session ends.
It does not feed into Claude's memory. If you have Claude's memory feature enabled, incognito chats are siloed from it. Starting an incognito chat does not use your existing memory, and the conversation will not show up in future memory summaries.
It is generally not used for training. According to Anthropic's help center, incognito chats are not used to train Claude's models by default. For professionals concerned about their proprietary code or business context ending up in future model weights, this is the primary reason to use it.
These are genuine, meaningful protections for a specific set of concerns. If your goal is to keep a one-off sensitive query out of your conversation history and away from model training, incognito mode delivers on that.
What Claude Incognito Mode Does NOT Do (the Privacy Reality Check)
This is the section the Reddit thread was missing. Incognito mode does not make you private in any comprehensive sense, and Anthropic is transparent about this if you read carefully.
Chats are retained for 30 days
Even in incognito mode, your conversations are retained by Anthropic for approximately 30 days. The official help documentation states this is for "safety purposes and legal requirements." So "not saved to your history" and "not retained at all" are two different things. The chat is gone from your interface, but not from Anthropic's infrastructure during that window.
As Jennifer King, Privacy and Data Policy Fellow at the Stanford Institute for Human-Centered AI, noted, "We have hundreds of millions of people interacting with AI chatbots, which are collecting personal data for training, and almost no research has been conducted to examine the privacy practices for these emerging tools." The same blind spot applies to privacy features like incognito mode: most users activate them without understanding what protections they actually provide.
You are not anonymous
Incognito mode does not mask your identity. Anthropic still knows it is your account making the request. Your IP address, session metadata, and account association are all present. If you are concerned about Anthropic knowing who asked a specific question, incognito mode does not help with that.
Enterprise and Team admins can still see your chats
This is the one that surprises people the most. If you are on a Team or Enterprise plan, your organization's administrators can include incognito chats in data exports. According to Anthropic's documentation, organization data exports may include incognito conversation data. If your employer administers your Claude account, do not assume incognito means your manager cannot see it.
The "Help improve Claude" setting still matters
For maximum privacy, you will want to go to your profile settings and turn off "Help improve Claude." The incognito mode does not automatically disable this. Turning it off ensures your conversations, across all modes, are not used for product improvement purposes beyond what Anthropic's legal and safety obligations require.
According to the Cisco 2026 Data and Privacy Benchmark Study, approximately 15% of employees already paste sensitive information into public AI chatbots without realizing the implications. Claude's incognito mode helps with some of those risks, but it is not a substitute for understanding what data you are actually sharing and with whom.
The Developer Gap: Claude Code Has No Incognito Mode (Yet)
Here is the part of the privacy picture that almost no one talks about, because the conversation is usually focused on the web interface.
Claude Code does not have an incognito mode. GitHub issue #9044, filed in October 2025, is still open as of April 2026. The feature has been requested, acknowledged, and has not shipped.
What this means in practice: every conversation you have with Claude Code is stored locally in `~/.claude/projects/`. These are full conversation transcripts. That includes the code you pasted, the prompts you wrote, the file contents that were read during the session, and any context you provided about the project or the client.
If you are doing client work, that directory may contain things you should not be storing unencrypted on your laptop long-term: API keys that slipped through, proprietary business logic, data that falls under an NDA.
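Before deleting anything, it is worth checking what actually landed in those transcripts. A minimal sketch, where the directory path follows the default location described above and the credential patterns are illustrative, not exhaustive:

```shell
# scan_transcripts DIR: print up to 20 lines that look like leaked credentials.
scan_transcripts() {
  grep -rniE 'api[_-]?key|secret|bearer|BEGIN .*PRIVATE KEY' "$1" 2>/dev/null \
    | head -n 20
}

# Default location of Claude Code's local session transcripts.
scan_transcripts "${HOME}/.claude/projects"
```

If this prints anything, rotate those credentials first; deleting the transcript does not un-leak a key that was already sent to the model.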
Practical workarounds until official support ships:
- Delete the project folder after sensitive work. The path is `~/.claude/projects/[project-name]/`. You can remove it once the work is done. This is blunt but effective.
- Use a separate Claude Code config directory for client work. You can point Claude Code at an alternate config with the `--config-dir` flag, which keeps client sessions isolated from your personal workspace.
- Add `.gitignore` patterns to prevent leaking into repos. At minimum, add `~/.claude/` awareness to your workflow so you are not accidentally committing transcript data. If your project lives in a shared repo, adding `.claude/` to `.gitignore` prevents local session data from being pushed.
- Watch issue #9044 for official support. When it ships, it will likely look similar to the web UI's ghost icon. Until then, manual cleanup is the only option.
Claude Incognito vs ChatGPT Temporary Chat vs Gemini
Here is how the three main tools compare on privacy-related features (for a broader look at how these tools stack up on capability, see the AI coding tools comparison for 2026):
| Feature | Claude Incognito | ChatGPT Temporary Chat | Gemini |
|---|---|---|---|
| Available on free tier | Yes | Yes | No equivalent feature |
| Chat history saved | No | No | N/A |
| Used for training | No (by default) | No | Varies by setting |
| Retention period | ~30 days | Not specified publicly | Not specified publicly |
| Available in desktop app | Yes (claude.ai) | Yes | Limited |
| Available in CLI/code tool | No (GitHub #9044 open) | No | No |
The main differentiator is Claude's transparency about the 30-day retention window. ChatGPT's temporary chat policy is less explicit about retention timelines. None of the three tools offer incognito-style protection in their CLI or code-focused tools, which is a meaningful gap across the board for developers doing sensitive work. If you are specifically weighing Gemini 3.0 Pro against Claude for daily use, that comparison covers the capability tradeoffs in depth.
When You Should Actually Use Incognito Mode
Incognito mode is well-suited to a specific set of scenarios. Use it when:
You have a one-off sensitive query that you do not want in your permanent history. A contract clause you are not sure about. A business decision you are thinking through. A technical question about a client's architecture.
You are brainstorming something you do not want cluttering your history. Exploratory thinking, hypothetical scenarios, draft messaging you are not sure about yet.
You are evaluating a prompt or testing a workflow and do not want that experimental session polluting Claude's memory with context that does not reflect your actual preferences.
You are doing client work and have not yet gotten NDA or data use clearance. If you are starting early-stage work with a new client and the data governance conversation has not happened yet, incognito mode is a reasonable default until it does.
When Incognito Mode Is NOT Enough
If you have stronger privacy requirements than "not in my history and not used for training," incognito mode on claude.ai is not the right tool. Understanding what vibe coding actually means for your workflow helps clarify which privacy controls matter most in practice. Here are the alternatives that actually address deeper concerns:
Claude via AWS Bedrock with Zero Data Retention (ZDR). Bedrock's ZDR option means Anthropic does not retain inputs or outputs at all. No 30-day window. This is the right choice for handling genuinely sensitive data at work, provided your team can set it up and pay for it.
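For teams evaluating the Bedrock route, the request shape is worth seeing. A hedged sketch: the model ID is one published Bedrock identifier used here as an example, the prompt is a placeholder, and ZDR is an account-level arrangement, not an API flag. The actual call (commented out) requires AWS credentials and Bedrock model access:

```shell
# Example Bedrock model identifier; substitute the one your account uses.
MODEL_ID="anthropic.claude-3-5-sonnet-20240620-v1:0"

# Build the request body in the Bedrock Messages format.
cat > /tmp/bedrock-body.json <<'EOF'
{
  "anthropic_version": "bedrock-2023-05-31",
  "max_tokens": 256,
  "messages": [{"role": "user", "content": "Summarize this contract clause: ..."}]
}
EOF

# Validate the payload locally before sending anything.
python3 -m json.tool /tmp/bedrock-body.json >/dev/null && echo "payload ok"

# The real invocation, once credentials and model access are in place:
# aws bedrock-runtime invoke-model --model-id "$MODEL_ID" \
#   --body fileb:///tmp/bedrock-body.json /tmp/bedrock-out.json
```

The privacy property lives in your AWS/ZDR agreement, not in this code; the request itself looks identical either way.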
Claude Enterprise with specific contractual data policies. Enterprise contracts can include specific provisions about data handling, retention windows, and whether your data is used for any Anthropic purposes. If you are at a company where this matters, your legal team should be reviewing the Enterprise terms.
Self-hosted open models. If you need to ensure that zero data leaves your infrastructure, models like Llama, Mistral, or DeepSeek running on your own hardware or private cloud are the only option that fully closes the data egress question. You trade capability for control.
As the VentureBeat coverage of Claude's memory launch noted, incognito mode was positioned as the privacy counterpart to persistent memory. It is useful in that framing. It is not a zero-trust privacy architecture.
Use It Deliberately
Claude's incognito mode is a real, useful feature that solves a specific problem well: keeping sensitive chats out of your history and away from model training. Use it for that. Do not use it as a substitute for understanding what Anthropic actually retains, what your organization's admins can see, or what Claude Code is storing on your local filesystem.
The 30-day retention window, the admin visibility issue on Team and Enterprise plans, and the complete absence of incognito mode in Claude Code are not gotchas buried in fine print. They are just the actual behavior of the system. Understanding them makes you a more deliberate user of a tool that is increasingly present in real professional work.
Push Anthropic on GitHub issue #9044. Incognito mode in Claude Code would close the most significant privacy gap that professional developers face today. Until then, manage your `~/.claude/projects/` directory like it contains exactly what it does: a full record of what you built and how.
If you want to build a stronger technical foundation around using AI tools responsibly in professional workflows, the Master Course at Vibe Coding Academy covers the practical patterns that matter for real client work.

