3 November 2025

Your AI Doesn't Know You — And That's the Problem

Sometime in the last year, you probably had a genuinely useful conversation with an AI. You explained your situation carefully, asked the right questions, got something back that helped. Then you closed the tab. The next time you opened it, none of that happened. The AI didn't know you'd been there before.

This is the fundamental problem with AI assistants that nobody quite talks about honestly. The tools are getting more capable every month — the reasoning is sharper, the code actually runs — but they still don't know you. Every session is a clean slate. Everything you want to accomplish gets filtered through the overhead of re-establishing who you are and what you're trying to do.

The re-explaining tax

Sophie Leroy, a researcher at the University of Washington, coined the term "attention residue" to describe what happens when you switch tasks mid-flow: part of your cognitive capacity stays stuck on the previous thing, and your performance on the new task suffers for it. Switching costs aren't just time. They're quality.

Something similar happens with AI tools, but the burden runs in the other direction. The AI has no residue — it remembers nothing. You carry the accumulated context of your work, and each new session requires you to pour some of it back in before you get anything useful out. For a quick question, that overhead is trivial. For real thinking — working through a decision, planning something complex, trying to understand something at the edge of your knowledge — it compounds fast.

A 2022 Harvard Business Review study found that knowledge workers spend nearly four hours a week just re-orienting themselves after switching between apps and contexts. That's bad enough for human-to-human work. The AI chat interface, ironically, is just another silo to re-enter — one that knows less about you than your email does.

What "memory" features actually solve

The industry noticed the problem. OpenAI added persistent memory to ChatGPT. Anthropic built Projects, which let you attach documents and standing instructions to a persistent space. These help. But they solve a narrower version of the problem than they appear to.

What these features give you is preference retention. The AI recalls that you like short answers, or that your name is David, or that you're a freelance designer. That's useful. But it's not a model of your knowledge — your current decisions, your relationships, your projects, the things that change week to week and month to month.

That distinction matters. A decision you made in January affects how you think in February. Something you learned in a meeting changes your understanding of a problem. You have a conversation that shifts your direction entirely. None of that gets into the AI's memory unless you explicitly put it there — and even then, it lives in a system you can't inspect, can't read directly, and can't take somewhere else.

The other kind of context

There's a different way to think about this. Instead of asking the AI to remember things about you, build a place where you keep things for the AI to use.

It sounds like a minor distinction. It isn't.

When memory lives inside the AI — in a proprietary format, updated by processes you can't observe — it's a black box. You can't tell what it knows, what it's gotten wrong, or whether it's quietly drifted from reality. When your knowledge lives in a structured document that you own, the situation changes: you can read it, correct it, trust it, and give it to whatever AI you're working with on a given day.

A knowledge base like this isn't a notes app or a productivity system. It's a more specific thing: structured facts about your work and life — your projects, relationships, preferences, decisions — stored in a format that AI can query rather than just skim in bulk. The AI doesn't have to guess what matters or hallucinate context it doesn't have. You've organized it. The conversation starts in the middle, where it should.
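To make the distinction concrete, here is a rough sketch of what "facts the AI can query" might look like, as opposed to a blob of chat history it has to skim. Everything in it — the field names, the example entries, the query helper — is a hypothetical illustration of the idea, not any particular product's format:

```python
# A minimal sketch: structured facts you own, filed under topics,
# so an AI session can pull only the relevant slice of context.
# All topics, entries, and field names here are made up for illustration.

knowledge = [
    {"topic": "project", "fact": "Redesigning client onboarding; target is Q2."},
    {"topic": "decision", "fact": "Chose contractor B for the build (January)."},
    {"topic": "preference", "fact": "Prefers short answers with concrete examples."},
]

def query(topic: str) -> list[str]:
    """Return every stored fact filed under the given topic."""
    return [entry["fact"] for entry in knowledge if entry["topic"] == topic]

# Before a session, hand the AI just what the conversation needs:
print(query("decision"))
```

Because the store is a plain, readable structure, you can inspect it, correct a stale fact, or hand the same file to a different assistant tomorrow — which is the whole point.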

Who holds the context

I've been thinking about why this feels more important than it initially seemed. When the AI knows you — actually knows you — through a system you control, the nature of the tool shifts. It stops being a capable stranger you brief every session and starts being something more useful: a collaborator that picks up where you left off.

That shift depends entirely on where the context lives. There's something quiet but significant in the difference between borrowed context and owned context. Between memory that degrades when you close the tab and memory that's yours to keep, edit, and inspect whenever you want.

The most useful AI assistant probably isn't the most powerful one. It's the one that knows you best.


Asgeir Albretsen is the founder of Harbor.
