openclaw.json — Your Agent's Settings File

Everything OpenClaw needs to know about your setup lives in one JSON file. This guide explains exactly what it is, where it goes, and how to configure it — even if you've never touched JSON before.

What is openclaw.json?

openclaw.json is the main configuration file for your OpenClaw agent. It tells OpenClaw which AI model to use, how to connect to the internet, which tools are available, and dozens of other settings.

Think of it like a TV's settings menu: the TV (OpenClaw) works the same way regardless, but the menu controls which channel it starts on, how loud the volume is, and which inputs are active.

📄 Why JSON?

JSON (JavaScript Object Notation) is a simple text format that's easy for both computers and humans to read. It uses curly braces { }, quoted keys, and a colon between each key and its value. OpenClaw uses JSON as the default config format because it's widely supported and works in every code editor.

Where Does the File Live?

Your project folder

Place openclaw.json in the root of your project — the same folder where you'd run openclaw start.

Global fallback (optional)

You can also put a config in ~/.openclaw/openclaw.json to use as a default across all projects.

Project config wins

If a project-level file exists, it takes priority over the global one. Great for testing different setups per project.

Minimal Starter Config

You don't need every setting from day one. Here's the smallest valid openclaw.json to get your agent running:

{
  "llm": {
    "provider": "anthropic",
    "model": "claude-haiku-4-5-20251001",
    "apiKey": "sk-ant-..."
  }
}

⚠️ Keep your API key safe

Never commit your API key to GitHub. Add openclaw.json to your .gitignore file, or use an environment variable like "apiKey": "$ANTHROPIC_API_KEY" — OpenClaw will substitute it automatically.
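A minimal sketch of that workflow (the key value here is a placeholder):

```shell
# Ignore the config file so the key is never committed
echo "openclaw.json" >> .gitignore

# Put the real key in your shell environment instead; OpenClaw
# substitutes $ANTHROPIC_API_KEY wherever the config references it
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
```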

Full Annotated Example

Here's a more complete config showing the most common settings, with every line explained. (The // comments are annotations for this guide; standard JSON doesn't allow comments, so remove them before using the file, or switch to config.yaml if you want commentable config.)

{
  "llm": {
    "provider": "anthropic",       // AI provider: "anthropic", "openai", "ollama"
    "model": "claude-sonnet-4-6", // Which model to use
    "apiKey": "$ANTHROPIC_API_KEY", // Reads from environment variable
    "maxTokens": 4096,          // Max tokens per response
    "temperature": 0.7          // 0 = focused, 1 = creative
  },

  "gateway": {
    "port": 8080,               // Port OpenClaw listens on
    "host": "127.0.0.1",        // localhost only (most secure)
    "timeout": 30000            // Request timeout in milliseconds
  },

  "memory": {
    "type": "sqlite",           // Storage: "sqlite", "postgres", "memory"
    "path": "./agent.db",       // Where the database file is saved
    "maxEntries": 10000         // Max remembered items
  },

  "tools": {
    "enabled": ["web_search", "file_read", "code_run"],
    "webSearch": {
      "provider": "brave",
      "apiKey": "$BRAVE_API_KEY"
    }
  },

  "heartbeat": {
    "enabled": true,             // Keep agent alive between requests
    "intervalMs": 60000          // Ping every 60 seconds
  },

  "voice": {
    "enabled": false,            // Enable voice input/output
    "provider": "elevenlabs",    // TTS provider
    "apiKey": "$ELEVENLABS_KEY"
  }
}

Field Reference

Each section below lists all available fields and what they do.

🤖 llm — Language Model Settings

Controls which AI model powers your agent.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| provider | string | required | AI provider: "anthropic", "openai", "ollama", "groq" |
| model | string | required | Model name, e.g. "claude-sonnet-4-6" or "gpt-4o" |
| apiKey | string | required | Your API key, or an env var reference like "$ANTHROPIC_API_KEY" |
| maxTokens | number | 4096 | Max tokens per response |
| temperature | number | 0.7 | 0 = deterministic, 1 = creative |
| systemPrompt | string | none | Custom system prompt to give your agent a specific persona or instructions |
| baseUrl | string | none | Override the API endpoint (useful for local or proxy setups) |

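For example, a local setup pointing at an Ollama server might look like this (a sketch; the model name is illustrative, and 11434 is Ollama's usual default port):

```json
{
  "llm": {
    "provider": "ollama",
    "model": "llama3.1",
    "baseUrl": "http://localhost:11434",
    "systemPrompt": "You are a concise assistant. Answer in plain language.",
    "temperature": 0.2
  }
}
```
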
🌐 gateway — Server & Port Settings

Controls how OpenClaw exposes its HTTP gateway for apps and tools to connect to.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| port | number | 8080 | Port to listen on |
| host | string | 127.0.0.1 | Host to bind to. Use "0.0.0.0" to accept connections from other devices on your network |
| timeout | number | 30000 | Request timeout in milliseconds |
| cors | boolean | false | Enable CORS headers (needed for browser-based clients) |
| auth | object | none | Authentication config; see the Security guide |

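A sketch of a LAN-accessible gateway for a browser-based client (only do this on a trusted network, and see the Security guide before exposing the port):

```json
{
  "gateway": {
    "host": "0.0.0.0",
    "port": 8080,
    "cors": true,
    "timeout": 60000
  }
}
```
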
🧠 memory — Storage & Persistence

Controls how your agent remembers things between conversations.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| type | string | "memory" | Storage backend: "memory" (no persistence), "sqlite", "postgres" |
| path | string | ./agent.db | File path for the SQLite database |
| connectionString | string | none | Database URL for Postgres |
| maxEntries | number | 10000 | Max items stored in memory |
| ttlSeconds | number | none | Auto-expire memories after N seconds |

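A sketch of a Postgres-backed setup with auto-expiry (values are illustrative; 2592000 seconds is 30 days):

```json
{
  "memory": {
    "type": "postgres",
    "connectionString": "$DATABASE_URL",
    "maxEntries": 50000,
    "ttlSeconds": 2592000
  }
}
```
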
🔧 tools — Capabilities & Integrations

Controls which tools your agent can use — web search, file access, code execution, etc.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | array | [] | List of tool names to activate: "web_search", "file_read", "file_write", "code_run", "browser" |
| webSearch | object | none | Web search config; requires provider and apiKey |
| codeRun | object | none | Code execution sandbox settings |
| maxConcurrent | number | 3 | Max number of tool calls running at the same time |

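A sketch combining these fields: enable a few tools, configure search, and cap concurrency (values are illustrative):

```json
{
  "tools": {
    "enabled": ["web_search", "file_read", "file_write"],
    "webSearch": {
      "provider": "brave",
      "apiKey": "$BRAVE_API_KEY"
    },
    "maxConcurrent": 2
  }
}
```
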
💓 heartbeat — Keep-Alive Settings

Keeps your agent active between requests so it's always ready to respond quickly.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | boolean | false | Turn keep-alive on or off |
| intervalMs | number | 60000 | How often to ping, in milliseconds |
| onIdle | string | "keep" | What to do when idle: "keep", "suspend", "shutdown" |

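A sketch that pings every 30 seconds and suspends the agent when idle (values are illustrative):

```json
{
  "heartbeat": {
    "enabled": true,
    "intervalMs": 30000,
    "onIdle": "suspend"
  }
}
```
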
🎙️ voice — Voice Mode Settings

Enables spoken interaction with your agent. See the Voice Mode guide for setup details.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | boolean | false | Enable voice input/output |
| provider | string | none | TTS provider: "elevenlabs", "openai", "piper" (local) |
| apiKey | string | none | API key for cloud TTS provider |
| voiceId | string | none | Specific voice to use (provider-specific) |
| sttProvider | string | none | Speech-to-text provider: "whisper", "deepgram" |

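A sketch of a fully local voice setup that needs no cloud API key (assuming you have Piper and Whisper available on your machine; see the Voice Mode guide for installation):

```json
{
  "voice": {
    "enabled": true,
    "provider": "piper",
    "sttProvider": "whisper"
  }
}
```
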
🔌 mcpServers — MCP Tool Connections

Connect your agent to external tools via the Model Context Protocol (MCP). Each entry registers one MCP server. See the MCP guide for full details.

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/docs"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "$GITHUB_TOKEN" }
    }
  }
}

| Field | Description |
| --- | --- |
| command | Executable to launch the server (e.g. "npx", "python", "node") |
| args | Array of arguments passed to the command |
| env | Environment variables to inject into the server process |

JSON vs YAML — Which Should You Use?

✅ Use openclaw.json if…

You're comfortable with JSON, your setup is simple, or you're generating the config programmatically. JSON is the default and works everywhere without extra dependencies.

📝 Use config.yaml if…

You want to add comments to explain your settings, prefer a cleaner look without all the quotes and curly braces, or you're managing a more complex multi-environment setup. See the config.yaml guide →

Related pages:

- 📋 config.yaml: the same settings in YAML format, easier to read and comment
- 🔌 MCP Servers: connect external tools via the Model Context Protocol
- 🔒 Security: safely manage API keys and authentication
- 🚀 Getting Started: new to OpenClaw? Start here