Tools That Think With You, Not For You
An MIT Media Lab study from 2025 found that when people write essays with an LLM, their brains show measurably less connectivity than when they write alone. Not different — less. That result stuck with me, because it puts a number on something I'd only felt: that certain AI tools help you think, and others quietly do the thinking instead of you.
The distinction sounds philosophical. It isn't. It has real consequences for which tools you should trust with the things that matter.
The lineage
Douglas Engelbart published "Augmenting Human Intellect: A Conceptual Framework" in 1962. His core argument was not that computers should replace human thought but that they could extend what a human mind can reach. The mouse, hypertext, the modern windowing interface — all of it grew from that one animating idea: tools that make humans smarter, not redundant.
Andy Matuschak and Michael Nielsen picked up that thread in a 2019 essay asking why so few software tools had actually fulfilled Engelbart's vision. Their answer, roughly, was that most software is still fundamentally about storing or retrieving things, not about changing what it's possible to think.
What both of them are pointing at, nearly sixty years apart, is a difference in posture. A calculator extends your reach. A tool that calculates for you and hands over an answer is something else. Both are useful. Only one makes you better at thinking over time.
The autocomplete problem
Most AI productivity tools right now are optimized for that second posture. Describe what you want, the tool produces it, you review and ship. Friction minimized.
But friction is often where thinking happens.
When I sit down to write something hard — a decision I'm uncertain about, a message where the words matter — the act of writing is the thinking. The draft isn't output. It's process. A tool that writes the draft for me doesn't help me work through the decision. It skips it.
This is what the MIT study was capturing. Less brain connectivity means the thinking loop was shorter. The cognitive engagement that normally accompanies writing — choosing words, reconsidering a sentence, arguing with yourself about what you actually mean — was offloaded. Output was produced. Something else didn't happen.
Not every task deserves that loop. Reformatting a document, extracting action items from a transcript, converting data — autocomplete away, gladly. But decisions, plans, what you actually believe about a situation — those benefit from the friction. The ones that matter most are usually the ones where skipping the loop costs you most.
What "thinking with you" looks like in practice
A tool that thinks with you looks different from a tool that thinks for you.
It proposes rather than decides. It raises possibilities and waits. It shows you what it's doing — surfaces the reasoning so you can push back or redirect. It doesn't optimize for reaching an answer; it optimizes for helping you understand the problem better.
I'd rather have an AI that says "here are three ways you could frame this decision — which one matters most to you?" than one that immediately tells me what to do. The first makes me sharper. The second makes me dependent.
This is part of why I built Harbor the way I did. When an AI writes something to my knowledge base — updates a person record, proposes a task, adds a preference — it shows a diff. I review it. The ceremony isn't distrust. It's keeping me in the loop, making sure the decision still passes through a human mind rather than around it.
That review step is friction. Deliberate, useful friction.
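Harbor's internals aren't shown here, but the review loop the paragraph describes can be sketched in a few lines of Python: render a diff of the proposed change, show it to the human, and only apply it on explicit approval. The record contents and function names below are hypothetical illustrations, not Harbor's actual API.

```python
import difflib


def propose_update(current: str, proposed: str) -> str:
    """Render a unified diff of a proposed change for human review."""
    diff = difflib.unified_diff(
        current.splitlines(keepends=True),
        proposed.splitlines(keepends=True),
        fromfile="current",
        tofile="proposed",
    )
    return "".join(diff)


def apply_if_approved(current: str, proposed: str, approved: bool) -> str:
    """Write the proposed text only when a human has approved the diff."""
    return proposed if approved else current


# Hypothetical example: an AI proposes an edit to a person record.
record = "name: Ada\nrole: engineer\n"
proposal = "name: Ada\nrole: staff engineer\n"

print(propose_update(record, proposal))  # the diff is surfaced first
final = apply_if_approved(record, proposal, approved=True)
```

The point of the shape is that the write path has no branch that skips the human: nothing lands in the record until `approved` is true.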
The hard part
I should be honest: this distinction isn't clean. Some things are genuinely better fully automated. Some friction is just friction — the bad kind, the kind that serves nobody. And there are days when the last thing I want is to think carefully. I just want something done.
The problem is that tools built entirely for the "just do it" case tend to flatten everything into that mode. They're optimized for throughput. Over time, you start applying them to things that probably deserved more thought.
Engelbart's original insight was that a tool shapes how you think, not just what you can do. The tools we integrate into our daily work are forming habits and mental models in us, slowly, without announcement. The question isn't only "does this tool get things done?" It's "what kind of thinker does this tool make me?"
I don't have a clean answer. But I think about it more than I probably should.
Asgeir Albretsen is the founder of Harbor.