
The Big Picture

OpenClaw is built around a persistent, loop-driven agent that wakes up, thinks, acts, and remembers — all on your local machine. No cloud required.

SQLite — the only database
Markdown files for config
Loop-driven, not event-only
100% local execution
Model-agnostic LLM brain

The Heartbeat

Proactive autonomy through a scheduled agentic loop

Most AI tools are purely reactive — they only do something when you ask. OpenClaw is different because of the Heartbeat: a scheduled trigger that fires automatically on a regular interval (every 30 minutes by default) and wakes the agent up to check whether there's anything it should be doing proactively.

On each heartbeat, the agent reads HEARTBEAT.md — a simple Markdown checklist of recurring responsibilities you've defined. It might include things like "check if any emails need urgent replies," "verify the build passed," or "remind me of tomorrow's meetings at 6pm." The agent evaluates each item against the current context and decides whether to act.

If there's nothing to do, the agent responds with the special token HEARTBEAT_OK. The Gateway recognises this and suppresses it — you never see it. If there is something to do, the agent acts and delivers the result to your messaging app.

```markdown
# HEARTBEAT.md — Example
# The agent reads this every 30 minutes

- Check email inbox. Flag anything marked urgent.
- If it's after 5pm on a weekday, summarise today's activity.
- If the CI build status changed since last check, notify me on Slack.
- Every morning at 8am, send me today's calendar summary on WhatsApp.
```

⏰ Default interval: 30 minutes

Configurable in your OpenClaw settings. You can set it to 5 minutes for a highly active agent or 2 hours for a lighter-touch setup.
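The scheduling itself amounts to a timer loop that wakes the agent on a fixed interval. A minimal sketch, assuming a `wake_agent` callback that stands in for starting one agentic loop iteration with a heartbeat trigger (the function names here are illustrative, not OpenClaw's actual API):

```python
import time

def heartbeat_loop(wake_agent, interval_minutes=30, max_ticks=None):
    """Wake the agent every `interval_minutes` (30 by default).

    `max_ticks` bounds the loop for testing; a real daemon runs forever.
    """
    results = []
    ticks = 0
    while max_ticks is None or ticks < max_ticks:
        results.append(wake_agent())  # one agentic loop iteration
        ticks += 1
        if max_ticks is None or ticks < max_ticks:
            time.sleep(interval_minutes * 60)
    return results
```

Setting the interval to 5 minutes versus 2 hours only changes the sleep between ticks; each tick is a full read of HEARTBEAT.md and a fresh evaluation.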

🤐 The HEARTBEAT_OK token

When there's nothing to act on, the agent returns this special string. The Gateway silently drops it so you're never spammed with "nothing to do" messages.
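The suppression logic is a simple sentinel check on the outbound path. A sketch of the Gateway-side filter (the function name is illustrative; only the `HEARTBEAT_OK` token comes from the docs):

```python
HEARTBEAT_OK = "HEARTBEAT_OK"

def filter_outbound(agent_response: str):
    """Drop the 'nothing to do' sentinel; pass everything else through.

    Returning None means the Gateway delivers nothing to the user.
    """
    if agent_response.strip() == HEARTBEAT_OK:
        return None
    return agent_response
```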

The Soul — SOUL.md

Persistent identity, personality, and values

SOUL.md is the most foundational file in an OpenClaw installation. It defines who your agent is — its name, personality, communication style, core values, and standing instructions that persist across every single interaction, heartbeat, and session.

Unlike a traditional chatbot that forgets its "personality" when you close the window, OpenClaw loads SOUL.md into the context at the start of every agentic loop iteration. This means your agent behaves consistently — it always knows its name, its tone, what it prioritises, and what boundaries you've set for it.

You write SOUL.md yourself in plain Markdown. It can be as simple as a few lines or as detailed as a full character document. The more specific you are, the more consistently the agent reflects your preferences.

```markdown
# SOUL.md — Example

## Identity
Your name is Max. You are a focused, professional assistant working for Conan.
You are direct, concise, and never verbose.

## Values
- Privacy first: never share personal data with third-party skills
- Always ask before taking irreversible actions
- When uncertain, ask rather than assume

## Communication Style
- Use plain language. No jargon.
- Keep responses under 3 sentences unless asked for detail
- Respond in the same language the user writes in

## Standing Instructions
- Never send emails without showing me a draft first
- Flag anything time-sensitive immediately
```

Think of SOUL.md as a permanent system prompt — it's injected at the top of every conversation and heartbeat cycle, giving your agent a consistent identity across all contexts and time.
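In practice, "injected at the top" means SOUL.md is re-read and placed first in the prompt on every iteration. A minimal sketch of that assembly, using the common system/user chat-message convention (the message shape is an assumption, not necessarily OpenClaw's exact wire format):

```python
from pathlib import Path

def build_context(trigger: str, memories: list) -> list:
    """Assemble one iteration's prompt: SOUL.md always leads as the
    system message, followed by retrieved memories, then the trigger."""
    soul = Path("SOUL.md").read_text(encoding="utf-8")
    messages = [{"role": "system", "content": soul}]
    for m in memories:
        messages.append({"role": "system", "content": f"[memory] {m}"})
    messages.append({"role": "user", "content": trigger})
    return messages
```

Because the file is loaded fresh each time, editing SOUL.md changes the agent's behaviour on the very next loop iteration, with no restart.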

Memory & Storage

SQLite, embeddings, and context compaction

OpenClaw's memory system is deliberately simple and local. There's no external database — no Redis, no Pinecone, no cloud vector store. Everything lives in a local SQLite file on your machine. This keeps the system portable, private, and fast.

Memory is layered. The agent stores every conversation turn, heartbeat action, and result in SQLite. When you ask a question or give a command, the agent retrieves relevant context using embedding-based semantic search — optionally accelerated by the sqlite-vec extension, which enables fast vector similarity queries directly inside SQLite.
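The retrieval step can be pictured as a similarity ranking over stored turn embeddings. A sketch, with the cosine scoring done in plain Python over a SQLite table (sqlite-vec would push this ranking into SQL itself; the table schema here is illustrative):

```python
import json
import math
import sqlite3

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(conn, query_vec, top_k=3):
    """Rank stored turns by similarity to the query embedding."""
    rows = conn.execute("SELECT text, embedding FROM turns").fetchall()
    scored = [(cosine(query_vec, json.loads(emb)), text) for text, emb in rows]
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]
```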

For installations that need both conceptual and exact matching, OpenClaw supports a hybrid search mode that combines embedding similarity with keyword-based search. This means the agent can find memories by meaning ("what did I say about the project deadline?") and by exact terms ("show me messages containing 'invoice #4421'").
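One common way to implement such a hybrid mode is weighted score fusion: each candidate memory gets a semantic score and a keyword score, and a blend of the two decides the ranking. A sketch under that assumption (the `alpha` weight and function names are illustrative; the docs don't specify OpenClaw's fusion formula):

```python
def hybrid_score(semantic: float, keyword: float, alpha: float = 0.7) -> float:
    """Blend an embedding-similarity score with a keyword-match score.
    Both scores are assumed normalised to [0, 1]."""
    return alpha * semantic + (1 - alpha) * keyword

def hybrid_rank(candidates, alpha=0.7, top_k=3):
    """`candidates` is a list of (text, semantic_score, keyword_score)."""
    ranked = sorted(candidates,
                    key=lambda c: hybrid_score(c[1], c[2], alpha),
                    reverse=True)
    return [text for text, _, _ in ranked[:top_k]]
```

With a high `alpha` the ranking favours meaning ("project deadline"); an exact-term hit like `invoice #4421` still surfaces through the keyword component even when its embedding score is weak.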

Conversation History

Every message and response stored as timestamped turns in SQLite. The full record of everything you've said and the agent has done.

Semantic Embeddings

Each memory turn is vectorised and stored. When context is needed, the agent runs a similarity search to surface the most relevant past interactions.

Compacted Summaries

When conversation history grows too large for the context window, older turns are automatically summarised and replaced with compressed entries — preserving meaning, reducing tokens.

SOUL.md & Config Files

Personality, values, and standing instructions stored as Markdown files — loaded fresh into context on every loop iteration.

Context Compaction

How OpenClaw manages the LLM context window limit

Every LLM has a context window limit — a maximum number of tokens it can process at once. For a persistent agent that accumulates thousands of conversation turns over days and weeks, this is a real engineering challenge.

OpenClaw solves this with an automatic compaction process. When loading the full conversation history would exceed the context window, OpenClaw runs a summarisation step over the oldest turns — condensing multiple messages into a single compressed summary entry that preserves the semantic meaning while using far fewer tokens.

The key insight is that recent turns are kept verbatim (the last N interactions stay in full detail), while older history is progressively compacted. This means the agent always has perfect recall of recent events and a semantic approximation of older history — just like human working memory.

Nothing is permanently lost. The original raw turns remain in a separate SQLite table for auditing. Only the working memory used in the context window is compacted.
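The keep-recent-verbatim, summarise-the-rest policy can be sketched in a few lines. Here `summarise` stands in for the LLM summarisation call; the default placeholder and the `keep_recent` parameter name are illustrative:

```python
def compact(turns, keep_recent=5, summarise=None):
    """Keep the last `keep_recent` turns verbatim; collapse all older
    turns into a single compressed summary entry."""
    if summarise is None:
        # Placeholder for the real LLM-backed summarisation step
        summarise = lambda ts: f"[summary of {len(ts)} earlier turns]"
    if len(turns) <= keep_recent:
        return list(turns)
    old, recent = turns[:-keep_recent], turns[-keep_recent:]
    return [summarise(old)] + recent
```

Run repeatedly as history grows, this keeps the working context bounded while the raw turns stay untouched in their own SQLite table.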

The Gateway

The bridge between your messaging apps and the agent

The Gateway is the local server process that bridges your messaging apps and the OpenClaw agent. It listens for incoming messages from WhatsApp, Telegram, Slack, Discord, or iMessage — normalises them into a standard format — and routes them into the Agentic Loop.

On the way back out, the Gateway takes the agent's response and delivers it through the correct messaging platform. It handles authentication, message queueing, and rate limiting automatically.

The Gateway also plays an important filtering role: it recognises the HEARTBEAT_OK token and drops it silently, ensuring that the dozens of "nothing to do" heartbeat results never reach you as noise. Only meaningful responses get delivered.

📥 Inbound handling

  • Receives messages from all connected platforms
  • Normalises message format
  • Queues messages to prevent overload
  • Handles webhook authentication

📤 Outbound handling

  • Routes responses to the correct platform
  • Suppresses HEARTBEAT_OK silently
  • Applies rate limits per platform
  • Logs all activity to SQLite
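The inbound normalisation step maps each platform's payload into one standard shape before it reaches the Agentic Loop. A sketch, assuming two made-up payload layouts (the per-platform field names here are illustrative, not the real WhatsApp/Telegram/Slack schemas):

```python
def normalise(platform: str, raw: dict) -> dict:
    """Convert a platform-specific message payload into the Gateway's
    standard format: platform, sender, text."""
    extractors = {
        # Hypothetical payload shapes for illustration only
        "telegram": lambda r: (str(r["chat_id"]), r["text"]),
        "slack": lambda r: (r["channel"], r["message"]),
    }
    sender, text = extractors[platform](raw)
    return {"platform": platform, "sender": sender, "text": text}
```

Downstream, the loop only ever sees the normalised shape, so adding a new platform means adding one extractor, not touching the agent.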

The Agentic Loop

The core reasoning and execution engine

The Agentic Loop is the heart of OpenClaw's intelligence. It's the process that runs on every trigger — whether from a user message or a heartbeat — and orchestrates reasoning, tool use, and memory updates.

On each iteration, the loop assembles a full context packet: it loads SOUL.md, retrieves relevant memories from SQLite, adds the current trigger (message or heartbeat), and sends everything to the configured LLM. The model responds with either a direct reply or a tool call — an instruction to execute a skill, run a shell command, search the web, and so on.

If the LLM calls a tool, the loop executes it, captures the result, and feeds it back into the model — this continues in a reasoning cycle until the LLM produces a final answer. The result is then returned through the Gateway and written to memory.

```javascript
// Agentic Loop — pseudocode
function agenticLoop(trigger) {
  // 1. Build context
  const soul = loadFile('SOUL.md');
  const hb = loadFile('HEARTBEAT.md');
  const memory = retrieveMemory(trigger, { topK: 20 });
  const context = { soul, hb, memory, trigger };

  // 2. Reason with LLM
  let response = callLLM(context);

  // 3. Tool-use loop: execute tool calls until a final answer emerges
  while (response.hasToolCall()) {
    const result = executeTool(response.toolCall);
    response = callLLM({ ...context, toolResult: result });
  }

  // 4. Persist and deliver
  saveToMemory(trigger, response);
  compactIfNeeded();
  return response.text; // Gateway delivers this
}
```

Tools & Skills

How the agent takes action in the real world

Tools are the built-in capabilities OpenClaw ships with — shell execution, file system access, web browsing, email, and calendar. These are always available to the Agentic Loop with no extra installation needed.

Skills are community-built tool extensions installed from ClawHub. When installed, a skill registers new tool definitions that the LLM can call during the loop — enabling integrations like GitHub, Notion, Spotify, and hundreds of others. From the loop's perspective, a skill is just another callable tool.

All tool definitions are injected into the LLM's context as a tool schema — a structured description of what each tool does, what parameters it takes, and what it returns. The LLM decides autonomously which tools to call and in what sequence based on your instruction.
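A tool schema typically looks like the JSON-schema shape used by mainstream function-calling APIs. The `send_email` tool below and its parameters are hypothetical examples, not OpenClaw's actual definitions:

```python
# Illustrative tool schema in the common "function calling" shape
SEND_EMAIL_TOOL = {
    "name": "send_email",
    "description": "Send an email from the user's configured account.",
    "parameters": {
        "type": "object",
        "properties": {
            "to": {"type": "string", "description": "Recipient address"},
            "subject": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["to", "subject", "body"],
    },
}

def validate_call(schema: dict, arguments: dict) -> bool:
    """Minimal check that a tool call supplies every required parameter."""
    required = schema["parameters"].get("required", [])
    return all(k in arguments for k in required)
```

A skill from ClawHub slots into this same mechanism by registering additional schemas; from the LLM's perspective there is no difference between a built-in tool and an installed skill.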

Security implication: Skills can register arbitrary tool definitions and execute code. A malicious skill can abuse the tool-call mechanism to run harmful commands during the loop. This is why reviewing skill permissions before installing is critical. See the Security Guide.