14 October 2025

AI Agents Are Configuration, Not Magic

When I built the first agent configuration in Harbor, I kept waiting for something to happen. The record saved. There was no process to watch, no thread to monitor, no moment where the system became "agentic." I had written a name, a purpose statement, a system prompt, and a list of tools to a SQLite row. That was the agent.

I'm still not sure why this was surprising. I'd built the thing. I knew what was in it. But somewhere between "AI agent" as a marketed concept and "AI agent" as an implemented thing, I had apparently picked up an expectation that there would be more to it.

There isn't.

What an agent actually is

Strip away the branding and an AI agent is a named configuration that gets loaded into context when a model runs. The configuration has maybe five meaningful parts.

A system prompt that gives the model its instructions — what it's for, how it should behave, what it shouldn't do.

A set of tools it's allowed to call. search_knowledge. propose_patch. add_task. read_document. The list is finite, usually short.

A knowledge scope — which folders, documents, or databases it can see. "This agent can read Plans/ and People/. It cannot read Journal/."

An approval policy — which of its writes require human review before being applied, and which can go through automatically.

A memory path — a document or set of documents it reads at the start of each session, so it carries context across conversations without starting blind.

That's it. In Harbor's database, this fits in one table:

CREATE TABLE agents (
  id                     INTEGER PRIMARY KEY,
  name                   TEXT NOT NULL,
  purpose                TEXT,
  system_prompt          TEXT,
  tool_permissions_json  TEXT,
  knowledge_scope_json   TEXT,
  approval_policy_json   TEXT,
  memory_path            TEXT  -- part five: the memory document(s) read each session
);

The model doesn't live in that record. The autonomy doesn't live there either. What lives there is constraint. The configuration is a set of boundaries — and everything inside those boundaries is the model doing what it always does: reading context, deciding what to say or call next, repeating until it stops.

The loop is also boring

The thing people usually imagine when they hear "agent" is something that persists. A running process. Something that wakes up, decides, acts. What actually happens when an agent runs is a loop. The model reads its context — system prompt, memory files, current conversation, tool results so far — and produces either a response or a tool call. If it produces a tool call, your code runs the tool, appends the result to the context, and calls the model again. Repeat until the model stops calling tools.

Simon Willison described this well in a September 2025 post on designing agentic loops: the LLM doesn't "run" between tool calls. It receives a context, produces output, and stops. The next step is just another inference call, with more context accumulated. The autonomy, such as it is, lives in how the model decides which tool to call — and that's a function of training and the system prompt you wrote, not something embedded in the agent record.
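
Here's the shape of that loop as a few lines of Python. Everything model-facing in it (call_model, the message format, the reply fields) is a stand-in for whichever inference API you're using, not Harbor's actual code; the point is how little the agent itself contributes beyond configuration.

from pathlib import Path

def run_agent(agent, user_message, tools):
    # The agent contributes only configuration: its system prompt,
    # its memory files, and the list of tools it may call.
    messages = [{"role": "system", "content": agent.system_prompt}]
    for path in agent.memory_paths:
        messages.append({"role": "system", "content": Path(path).read_text()})
    messages.append({"role": "user", "content": user_message})

    while True:
        # One inference call. The model does not "run" between these;
        # it reads the accumulated context, produces output, and stops.
        reply = call_model(messages, tools=agent.allowed_tools)  # stand-in API
        if reply.tool_call is None:
            return reply.text  # no tool call means the loop is done

        # Your code runs the tool, appends the result to the context,
        # and calls the model again with more context than before.
        result = tools[reply.tool_call.name](**reply.tool_call.arguments)
        messages.append({
            "role": "tool",
            "name": reply.tool_call.name,
            "content": result,
        })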

This might sound like I'm being reductive about something genuinely impressive. I'm not. Models deciding when to search, when to write, when to ask for clarification — that is impressive. What I'm saying is that the impressive part isn't the agent abstraction. The agent abstraction is the thing you configure.

Why this is useful to know

If an agent is a configuration, then building a better agent means writing clearer instructions. Scoping its knowledge more deliberately. Being explicit about which tools it should and shouldn't have. Putting the right things in its memory files so it doesn't rediscover the same context each session. None of that is magic. It's editing.

It also means agents are inspectable. You can open the agents table and read exactly what any agent is allowed to do. You can change its system prompt. You can expand its knowledge scope for one project and narrow it for another. You can require approval for writes to the People folder while auto-approving writes to Tasks. The configuration is the agent, and the configuration is text you can read.
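
That inspectability is literal. Here's a sketch of reading one back with Python's sqlite3 module; the database filename and the agent's name are placeholders, but the columns are the ones from the table above.

import json
import sqlite3

# "harbor.db" and the agent name are illustrative placeholders.
conn = sqlite3.connect("harbor.db")
row = conn.execute(
    "SELECT system_prompt, tool_permissions_json, knowledge_scope_json,"
    " approval_policy_json FROM agents WHERE name = ?",
    ("research-assistant",),
).fetchone()

system_prompt, tools, scope, approvals = row
print(system_prompt)           # exactly what it was told to do
print(json.loads(tools))       # exactly which tools it may call
print(json.loads(scope))       # exactly which folders it can see
print(json.loads(approvals))   # exactly which writes need review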

Compare this to AI memory that lives inside the model — weights updated somewhere in training, preferences accumulated in a cloud system you can't open. You can't inspect that. You can't edit it. You can't know what it knows about you or why it behaves differently today than it did last month.

The database row version is less impressive-sounding. But it's yours.

The part that is actually hard

Configuration being simple doesn't make configuration easy. The hard part isn't defining what an agent is. It's deciding what it should be for.

A useful agent has a tight purpose — one thing it does well, with exactly the tools and scope that task requires. "Research assistant for Ideas/ and Projects/, with read-only access to GitHub issues, propose-only on documents, and approval required for anything in People/" is a useful agent definition. "General-purpose assistant with access to everything" is just AI chat with an extra naming layer.
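
Written out as data, that definition might look something like this. The scopes and rules come straight from the sentence above; the field shapes and the GitHub tool name are illustrative, not Harbor's actual format.

# An illustrative configuration for the research assistant described
# above. Only the scopes and rules come from the definition in the text;
# the JSON shapes and tool names are assumptions.
research_assistant = {
    "name": "research-assistant",
    "purpose": "Research assistant for Ideas/ and Projects/",
    "system_prompt": (
        "You research topics for documents in Ideas/ and Projects/. "
        "Propose changes as patches; never apply them yourself."
    ),
    "tool_permissions": [
        "search_knowledge",
        "read_document",
        "read_github_issues",   # read-only GitHub access, no writes
        "propose_patch",
    ],
    "knowledge_scope": {"read": ["Ideas/", "Projects/"]},
    "approval_policy": {
        "documents": "propose_only",     # edits land only as proposals
        "People/": "require_approval",   # nothing here without review
    },
}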

The discipline is in narrowing. That discipline requires thinking carefully about what knowledge an agent actually needs, which writes it should be allowed to make without asking, and what it should surface for review. Those are design decisions, not technical ones. And they're the same design decisions you'd face with any other tool you let into your workspace.

The hype around agents focuses on how capable they are. The more useful question is: capable of what, exactly, with access to what, producing what kind of audit trail? Answer those, and you've written your agent schema. The rest is just filling in the fields.


Asgeir Albretsen is the founder of Harbor.
