Sorry for the two-week gap in newsletters. Things got busy, but we’re back.
You’ve probably noticed the headlines. ChatGPT now remembers everything you’ve ever told it. Claude can search through your past conversations. It sounds like AI assistants finally get you. But scratch the surface, and you’ll find something more complicated.
You probably assume your AI is building a comprehensive understanding of you over time, like a friend who remembers your birthday and knows you take your coffee black. That is happening, but the how matters more than you think.
When you’re chatting with an AI in a single conversation, it’s using what’s called a context window—basically its working memory for that session. Everything you say stays accessible until you close that chat. Then it’s gone. The next conversation starts from scratch.
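That session-scoped memory can be sketched in a few lines of Python. The `ChatSession` class here is a hypothetical illustration of the idea, not any vendor's actual API: each conversation keeps its own message list, and a new conversation starts with nothing.

```python
# Toy model of a context window: the model only "sees" the turns
# accumulated in the current session's message list.

class ChatSession:
    """One conversation: every prior turn in this session is visible."""

    def __init__(self):
        self.context = []  # the working memory for this chat

    def say(self, user_message):
        self.context.append({"role": "user", "content": user_message})
        # In a real system, self.context would be sent to the model here.
        return len(self.context)


session_one = ChatSession()
session_one.say("My name is Priya and I take my coffee black.")
session_one.say("What's my name?")   # answerable: the fact is still in context

session_two = ChatSession()          # a brand-new chat
session_two.say("What's my name?")   # context is empty: the model can't know
print(len(session_one.context), len(session_two.context))  # 2 1
```

The point of the sketch: nothing carries over between `session_one` and `session_two` unless something outside the session deliberately moves it.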
The new memory features change this, but ChatGPT and Claude took different approaches. ChatGPT now automatically extracts “memories” from your conversations. It picks out facts it thinks are important and loads those into every future conversation. The system references all your past conversations to deliver more relevant responses. You can view, edit, or delete individual memory entries in your settings, and you can turn off the memory feature entirely if you want. But by default, it’s building that profile.
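The shape of that pipeline, extract durable facts, store them, and load them into every future chat, can be sketched like this. The keyword matcher below is a deliberately crude stand-in for the model-driven extraction the real system uses; all the names are illustrative.

```python
# Toy memory-extraction pipeline: pull facts that look durable out of a
# conversation, keep them in a store the user can inspect or edit, and
# prepend them to every new chat.

MEMORY_TRIGGERS = ("my name is", "i prefer", "i work", "i live")


def extract_memories(conversation):
    """Keep only lines that look like lasting facts about the user."""
    return [line for line in conversation
            if any(t in line.lower() for t in MEMORY_TRIGGERS)]


memory_store = extract_memories([
    "My name is Priya.",
    "What's the weather like?",   # transient question: not stored
    "I prefer short answers.",
])


def start_new_chat(store):
    """Every future conversation begins with the stored profile loaded."""
    profile = "Known about user: " + "; ".join(store)
    return [{"role": "system", "content": profile}]


new_chat = start_new_chat(memory_store)
# Viewing and deleting entries is just operating on the store:
memory_store.remove("I prefer short answers.")
```

The "view, edit, or delete" controls in the settings are, conceptually, operations on that store; turning memory off means skipping the extraction step entirely.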
In April 2025, OpenAI expanded this capability, letting ChatGPT reference all your previous conversations indefinitely. Not just snippets—everything. It’s genuinely impressive from a technical standpoint, and while you maintain control over what’s stored, the system is designed to remember automatically unless you intervene.
Claude went a different direction. When Anthropic rolled out memory in September 2025 for Team and Enterprise users, they built it around transparency and compartmentalization. Claude can recall projects, preferences, and conversations, but users can view, edit, or disable the feature. For business users, there’s an automatic component that can be turned off or restricted by admins. Claude also keeps separate memories for different projects. Your work stuff never touches your personal conversations.
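That compartmentalization boils down to one isolated store per project, with recall never crossing the boundary. A minimal sketch of the idea, not Anthropic's implementation:

```python
# Toy compartmentalized memory: each project gets its own store, and
# recall for one project never surfaces another project's facts.

from collections import defaultdict

memories = defaultdict(list)  # one isolated store per project


def remember(project, fact):
    memories[project].append(fact)


def recall(project):
    # Only this project's memories are loaded; the rest stay invisible.
    return list(memories[project])


remember("work", "Quarterly report is due Friday.")
remember("personal", "Planning a trip to Lisbon.")

assert recall("work") == ["Quarterly report is due Friday."]
assert "Planning a trip to Lisbon." not in recall("work")
```

Disabling memory for a project, or an admin restricting it, amounts to never calling `remember` for that scope in the first place.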
It’s a different philosophy. ChatGPT optimizes for convenience—it wants to feel like it just knows you. Claude optimizes for control and transparency—you decide what it remembers and when, with clear visibility into what’s being recalled.
The data retention story got complicated earlier this year. In May 2025, a federal court ordered OpenAI to preserve all ChatGPT conversations—including deleted ones—as part of The New York Times's copyright lawsuit against the company. For months, people who thought they’d permanently erased sensitive conversations discovered those chats were still on OpenAI’s servers. Enterprise customers and API users with special agreements were exempt. Regular users weren’t.
OpenAI called it a “privacy nightmare” and appealed. On October 9, 2025, the judge lifted the broad preservation order.
OpenAI can now delete most user data again, except for logs linked to plaintiff-flagged accounts. The temporary freeze is over, but it revealed just how much data these systems hold and how little control users had during that window.
So what’s AI actually remembering? In Claude’s case, your exact words from past conversations, retrievable on demand with clear transparency about when and how. In ChatGPT’s case, an ever-growing profile of facts, preferences, and patterns extracted from everything you’ve said, with controls you can adjust if you know where to look. Both are powerful. Both are useful. But they handle your data very differently.
The next time an AI “remembers” something about you, you’ll know what’s really happening—and you can make an informed choice about what you’re comfortable having it store.