May 10, 2026 · 7 min

Proactive agents need three primitives

A proactive agent needs three things: a clock, a listener, and an inbox, wired together with durable state. Here's why.

Most agents I see in the wild are reactive. They sit in a loop, waiting for a prompt. When the prompt arrives, they tool-call their way to an answer and go back to sleep. The whole cycle starts when you push a button.

That works for a chat box, but it falls apart for an agent that's supposed to do something for you while you are doing something else. And it's the model nearly every agent SDK still ships by default.

A proactive agent is different. It wakes itself up.

The reactive loop most teams ship.

What "proactive" doesn't mean

Running on a schedule is just cron with extra steps. Watching a webhook is push without persistence. Accepting messages is a chatbot. Each of those is a useful primitive, but none of them alone makes an agent proactive.

The most common shape in the wild right now is "agent + heartbeat" — a loop that wakes every N minutes, polls a few APIs, reasons about what it found, and acts. Some of these heartbeat agents are quite sophisticated; MindStudio's heartbeat pattern is the canonical example. The pattern works, but it papers over the missing bits with prompt engineering and ad-hoc state. Heartbeat agents are still polling. They are still acting on a snapshot that is N minutes out of date. And they still have no canonical place to keep what they learned between runs.

Polling is really just a symptom. What's actually missing is the primitives underneath.

Push beats poll — most of the time.

The first primitive: a way to wake up

A proactive agent needs to know when to run. Not "every five minutes" — when the moment is right. That moment is one of three things:

  1. Time passed. A schedule, an interval, a quiet stretch that lasted long enough to mean something.
  2. Data changed. A record moved, a file appeared, a status flipped from in_review to blocked.
  3. A message arrived. A human, another agent, or a system said something it expects to be heard.

Reactive agents only honor the third one, and they don't even honor it well — they wait to be invoked, not to be spoken to.

The shift is small in description and large in consequence. The agent stops being a function someone calls. It starts being a participant in a system.
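The three wakeup conditions could be modeled as one event envelope that the agent routes on. This is a sketch, not any particular SDK's API; the field names and sources are illustrative:

```python
from dataclasses import dataclass, field
from typing import Any, Literal

@dataclass
class WakeEvent:
    # One envelope for all three wakeup conditions.
    kind: Literal["timer", "change", "message"]
    source: str  # e.g. "cron:digest", "github:issues", "inbox"
    payload: dict[str, Any] = field(default_factory=dict)

def route(event: WakeEvent) -> str:
    """Dispatch on why the agent woke up, not on who called it."""
    if event.kind == "timer":
        return "run scheduled work"
    if event.kind == "change":
        return f"react to change from {event.source}"
    return "handle incoming message"
```

A reactive agent only ever sees the third `kind`; a proactive one treats all three as first-class reasons to run.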

Push isn't free, though. Webhooks fail in ways polling doesn't — provider outages drop events, out-of-order delivery breaks naive consumers, replay storms ruin your queue depth. The honest answer is push usually beats poll for proactive agents, but the failure modes are real and we catalogue them in Where push architectures break.
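Two of those failure modes — duplicate delivery and out-of-order delivery — have cheap first-line defenses on the consumer side. A minimal sketch, assuming the provider sends a delivery id and a per-stream sequence number (both names are hypothetical):

```python
seen_ids: set[str] = set()       # delivery ids already processed
last_seq: dict[str, int] = {}    # per-stream high-water mark

def accept(delivery_id: str, stream: str, seq: int) -> bool:
    """Return True if this webhook event should be processed."""
    if delivery_id in seen_ids:           # replayed delivery: drop it
        return False
    if seq <= last_seq.get(stream, -1):   # stale or out of order: drop it
        return False
    seen_ids.add(delivery_id)
    last_seq[stream] = seq
    return True
```

Dropping stale events is only one policy; buffering and reordering is another, and neither helps with provider outages that never send the event at all — which is why the catalogue of failure modes matters.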

The second primitive: state that survives

The hardest thing about going from reactive to proactive isn't the wakeup. It's the memory.

A reactive agent wakes up from nothing every time. The prompt is the world. When the prompt finishes, the world ends. Nothing has to be remembered — whatever the agent knew is in the conversation it just had.

A proactive agent wakes up into a world. It has to know what it saw last time, what it acted on last time, what it's currently in the middle of. Otherwise every wakeup is a cold start, and a cold-start agent does the same useless thing over and over: it processes whatever is in front of it without context.

The shape of state that actually works is closer to a filesystem than a database. Real-time read, real-time write, conflict detection on concurrent writes, change events when anything moves. Treat it as a place agents live in, not a thing they query.
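An in-memory toy makes the filesystem shape concrete: reads return a version, writes fail on a stale version, and watchers fire when anything moves. This is a sketch of the interface, not an implementation of any real store:

```python
class Workspace:
    """Filesystem-like agent state: versioned reads, conflict-checked
    writes, and change events on every successful write."""

    def __init__(self):
        self._files: dict[str, tuple[int, str]] = {}  # path -> (version, body)
        self._watchers = []

    def read(self, path: str) -> tuple[int, str]:
        return self._files.get(path, (0, ""))

    def write(self, path: str, body: str, expected_version: int) -> int:
        version, _ = self._files.get(path, (0, ""))
        if version != expected_version:  # concurrent writer got there first
            raise RuntimeError(f"conflict on {path}: v{version} != v{expected_version}")
        self._files[path] = (version + 1, body)
        for callback in self._watchers:
            callback(path, version + 1)  # "change events when anything moves"
        return version + 1

    def watch(self, callback):
        self._watchers.append(callback)
```

The version check is what turns "a thing they query" into "a place they live in": two agents writing the same path can't silently clobber each other.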

Three triggers, one shared substrate.

The third primitive: durability

Wakeup and state get you most of the way. The third primitive is the one nobody puts on the slide: the agent has to be able to fail and recover without losing its mind.

A proactive agent is, almost by definition, long-lived. It will be running while you sleep. It will be running while a deploy ships. It will be running while one of your provider integrations rate-limits itself for three hours. The runtime around it has to assume failure is the steady state.

What that means in practice:

  • Wake, work, sleep cycles need to be checkpointed — if the worker dies mid-task, the next worker resumes where it stopped, not where it started.
  • Idempotency on every external call, because the next worker will replay them.
  • Spend observability, because an always-on agent that loops is the most expensive bug in your account.
  • Auth scoped per agent per workspace, because you do not want a runaway agent with a god token.
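The first two bullets — checkpointing and idempotency — can be sketched together. Assume a hypothetical in-memory checkpoint store; `checkpoints`, `completed_calls`, and `do_call` are illustrative names, not a real API:

```python
import hashlib
import json

checkpoints: dict[str, int] = {}   # agent_id -> last completed step
completed_calls: set[str] = set()  # idempotency keys already executed

def idempotency_key(agent_id: str, step: int, call: dict) -> str:
    """Deterministic key so a replayed call can be recognized and skipped."""
    raw = json.dumps({"agent": agent_id, "step": step, "call": call}, sort_keys=True)
    return hashlib.sha256(raw.encode()).hexdigest()

def run(agent_id: str, steps: list[dict], do_call) -> int:
    """Execute steps, resuming from the last checkpoint. Returns the
    number of steps attempted on this run."""
    start = checkpoints.get(agent_id, 0)       # resume where we stopped
    for i in range(start, len(steps)):
        key = idempotency_key(agent_id, i, steps[i])
        if key not in completed_calls:         # replay-safe external call
            do_call(steps[i])
            completed_calls.add(key)
        checkpoints[agent_id] = i + 1          # checkpoint after each step
    return len(steps) - start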

The last bullet, scoped auth, deserves more attention than it usually gets. A reactive agent runs for seconds: it wakes, tool-calls, responds, and its token expires with the session. A proactive agent holds a standing credential that can act while you sleep. If that credential has broad scope, a single bad loop can close every ticket in your Zendesk, post to every Slack channel, or burn through your OpenAI budget before anyone notices.

The fix is narrow scoping at three levels. First, per-agent: the digest agent gets read access to GitHub issues, not write access to your production database. Second, per-provider: a token for Notion doesn't also unlock Slack. Third, per-workspace: an agent operating in the "support" workspace can't reach into "finance." Token rotation matters too. Short-lived tokens with refresh grants mean a compromised agent loses access on the next rotation, not indefinitely. And every action the agent takes needs to be logged and attributable, because "the agent did it" is not a sufficient incident postmortem.

None of this is exotic security engineering. It's the same principle behind IAM policies and service accounts. But most agent frameworks ship with a single API key that has access to everything, because that's easier to demo. And then you wonder why things go sideways at 3am.
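The three scoping levels compose into a simple permission check. The grant shape here is a sketch under the assumptions above, not a real IAM model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    agent: str               # per-agent: "digest", not a shared identity
    workspace: str           # per-workspace: "support" can't reach "finance"
    provider: str            # per-provider: Notion token doesn't unlock Slack
    actions: frozenset[str]  # e.g. {"read"} but not {"write"}

def allowed(grants: set[Grant], agent: str, workspace: str,
            provider: str, action: str) -> bool:
    """An action is permitted only if some grant matches on all three axes."""
    return any(
        g.agent == agent and g.workspace == workspace
        and g.provider == provider and action in g.actions
        for g in grants
    )
```

The default-deny shape is the point: a missing grant fails closed, which is exactly what you want from an agent acting at 3am.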

The runtime: clock, listener, inbox, plus the durability ring.

Wired together: the runtime

What you actually want, as a developer building one of these things, is to write the agent — the part that's yours — and have everything around it be solved by something else.

That something else is what we've been calling the proactive runtime. Concretely:

  • a clock that fires on schedules and intervals
  • a listener that surfaces normalized change events from every provider you care about
  • an inbox that delivers messages between agents, humans, and systems
  • a shared workspace the agent reads from and writes to
  • a durability ring — checkpointing, repair, spend guardrails, scoped auth

The agent code that sits in the middle of that is shockingly small. A handler. A workspace name. The runtime brings the rest.
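As a sketch of how small that middle can be: a hypothetical digest agent as a single handler. The event shape and the plain-dict workspace stand in for whatever the runtime actually passes; none of these names come from a real SDK:

```python
def digest_agent(event: dict, workspace: dict):
    """The part that's yours. Clock, listener, inbox, and durability
    are the runtime's problem; this function only decides what to do."""
    if event["kind"] == "change" and event["source"] == "github:issues":
        seen = workspace.get("seen_issues", [])
        new = [i for i in event["payload"]["issues"] if i not in seen]
        workspace["seen_issues"] = seen + new  # state that survives the run
        return {"notify": new}                 # runtime delivers via the inbox
    return None
```

Everything else — when this fires, what "change" means, where the notification goes, what happens if the process dies mid-write — is the runtime's job.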

The thing we keep coming back to is simpler than labels like "multi-agent coordination" or "agent orchestration" make it sound: give the agent a clock, an inbox, and a place to live, and let it act when something moves.

Why we're writing this down

Because the language we've been using — multi-agent coordination layer, headless Slack for agents, integration filesystem — turned out to be three different people describing the same elephant. The elephant was a runtime for proactive agents. We'd had it the whole time and kept calling it the wrong thing.

If you're building an agent right now and you find yourself reaching for a queue, a cron service, a polling loop, a webhook receiver, and a JSON column to remember what you did last, you're building a proactive runtime by hand. I kept seeing that pattern over and over, and it's why we decided to build the runtime as a standalone layer that sits under whatever agent you happen to be writing.

The next essays in this folio go deeper. Reactive vs proactive: a tour of the difference lays out the architectural divergence with examples. The eight-week webhook tax costs out the build-it-yourself path. Why we stopped saying multi-agent coordination is the longer version of why we relabelled the elephant.

Read them in any order — the clock, the listener, and the inbox show up in all of them.

Posted May 10, 2026 · AgentWorkforce

Issues, PRs, and arguments welcome on GitHub. Or email [email protected].