What Is Multi-Agent Orchestration?

A single OpenClaw instance (the Orchestrator) breaks a complex goal into sub-tasks and delegates each one to a freshly-spawned child agent. Sub-agents run concurrently, report back, and the orchestrator assembles the final result. Think of it like a project manager assigning tasks across a team — except every team member is also an AI.

Orchestrator Agent

The top-level agent that receives the user goal, decomposes it into sub-tasks, assigns them to child agents, and synthesizes results into a final output.

Sub-Agents

Specialised child instances spawned for individual tasks — research, writing, coding, validation. Each runs its own Agentic Loop independently.

Parallel Execution

Sub-agents work simultaneously rather than sequentially, cutting wall-clock time dramatically on multi-part tasks.

Shared Memory

Agents can read from and write to a shared SQLite memory layer, letting them pass findings to each other without passing the whole context window.

Critic Agent

An optional reviewer agent checks each sub-agent's output for accuracy, completeness, and consistency before the orchestrator merges it.

Permission Bubbling

If a sub-agent needs a permission not granted in its original scope, it pauses and escalates the request to the orchestrator rather than acting unilaterally.
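The escalation flow can be sketched as follows. The `PermissionEscalation` exception and the class names are illustrative assumptions, not OpenClaw internals:

```python
class PermissionEscalation(Exception):
    """Raised when a sub-agent needs a scope it was not granted."""
    def __init__(self, scope: str):
        self.scope = scope
        super().__init__(f"escalating request for scope: {scope}")

class SubAgent:
    def __init__(self, granted: set[str]):
        self.granted = granted

    def use_tool(self, scope: str) -> str:
        if scope not in self.granted:
            # Pause and bubble the request up instead of acting unilaterally.
            raise PermissionEscalation(scope)
        return f"used {scope}"

class Orchestrator:
    def __init__(self, policy: set[str]):
        self.policy = policy  # scopes the orchestrator may delegate

    def run(self, agent: SubAgent, scope: str) -> str:
        try:
            return agent.use_tool(scope)
        except PermissionEscalation as esc:
            if esc.scope in self.policy:
                agent.granted.add(esc.scope)  # grant and retry once
                return agent.use_tool(esc.scope)
            return f"denied {esc.scope}"

orch = Orchestrator(policy={"read_file", "write_file"})
agent = SubAgent(granted={"read_file"})
```

The sub-agent never self-grants: the decision always returns to the orchestrator's policy.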

System Architecture

An orchestrator and its fleet are wired together inside OpenClaw as follows.

Agent Roles at a Glance

Every agent in a multi-agent fleet has a well-defined role that determines what it can do, what memory it can access, and what tools it can call.

| Role | Responsibility | Memory Access | Can Spawn Sub-Agents? | Typical Tools |
| --- | --- | --- | --- | --- |
| Orchestrator | Decomposes goals, delegates, assembles results | Full read/write | ✅ Yes | spawn_agent, memory_write, merge_results |
| Research Sub-Agent | Web search, document retrieval, fact extraction | Write only (own namespace) | ❌ No | web_search, read_file, memory_write |
| Code Sub-Agent | Write, run, and debug code | Read shared + write own | ❌ No | run_code, read_file, write_file |
| Writer Sub-Agent | Draft prose, reports, emails, summaries | Read shared | ❌ No | write_file, memory_read |
| Critic Agent | Review quality, flag inconsistencies, score outputs | Read all namespaces | ❌ No | memory_read, flag_issue |
| Memory Agent | Maintain long-term facts, de-duplicate, compress | Full read/write | ❌ No | memory_read, memory_write, memory_compact |

Orchestration Patterns

OpenClaw supports four core patterns for coordinating agents. Mix and match them for complex workflows.

Fan-Out / Fan-In (Parallel Map-Reduce)

The orchestrator fans out the same task across N agents in parallel, then fans the results back in and merges them. Best for research, analysis, or processing large batches.

ORCHESTRATOR
  ├── spawn(research_agent, topic="climate")
  ├── spawn(research_agent, topic="economy")
  └── spawn(research_agent, topic="policy")
        ↓  (all run concurrently)
MERGE: combine_findings(results_1, results_2, results_3)
OUTPUT → comprehensive_report.md
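The diagram above maps naturally onto concurrent task execution. Here is a minimal sketch using Python's asyncio; `research_agent` and `orchestrate` are stand-ins for this example, not OpenClaw's spawn API:

```python
import asyncio

async def research_agent(topic: str) -> str:
    """Stand-in for a spawned sub-agent; a real one runs its own agentic loop."""
    await asyncio.sleep(0)  # yield control, as a real agent would await I/O
    return f"findings on {topic}"

async def orchestrate(topics: list[str]) -> str:
    # Fan out: one agent per topic, all running concurrently.
    results = await asyncio.gather(*(research_agent(t) for t in topics))
    # Fan in: merge the per-topic results into a single report.
    return "\n".join(results)

report = asyncio.run(orchestrate(["climate", "economy", "policy"]))
```

Because `asyncio.gather` preserves argument order, the merged report lists findings in the same order the topics were fanned out.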

Pipeline (Sequential Handoff)

Output from one agent becomes the input for the next. Ideal when each step requires the completed result of the previous one — e.g., research → write → review → publish.

research_agent → [findings.json]
                        ↓
              writer_agent → [draft.md]
                                  ↓
                       critic_agent → [reviewed_draft.md]
                                              ↓
                              publisher_agent → POST /api/publish
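The handoff above is just function composition: each stage consumes the completed output of the previous one. A minimal sketch, with all agent functions as illustrative stand-ins:

```python
def research_agent(goal: str) -> dict:
    """Stage 1: gather findings for the goal."""
    return {"goal": goal, "facts": ["fact A", "fact B"]}

def writer_agent(findings: dict) -> str:
    """Stage 2: turn structured findings into a draft."""
    return f"Draft on {findings['goal']}: " + "; ".join(findings["facts"])

def critic_agent(draft: str) -> str:
    """Stage 3: a real critic would score and annotate; here we stamp approval."""
    return draft + " [reviewed]"

def pipeline(goal: str) -> str:
    # Each stage blocks on the previous one, unlike the fan-out pattern.
    return critic_agent(writer_agent(research_agent(goal)))

result = pipeline("market trends")
```

The trade-off versus fan-out is latency: total time is the sum of the stages, not the maximum.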

Hierarchical Delegation (Sub-Orchestrators)

A top-level orchestrator spawns mid-level orchestrators, each of which manages its own team of sub-agents. This handles very large tasks that won't fit within a single agent's planning horizon.

MASTER ORCHESTRATOR
  ├── spawn(orchestrator_A, goal="product research")
  │       ├── spawn(web_agent, "competitor pricing")
  │       └── spawn(web_agent, "customer reviews")
  │
  └── spawn(orchestrator_B, goal="write report")
          ├── spawn(writer_agent, "executive summary")
          └── spawn(writer_agent, "detailed findings")
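In code, the structure above is the fan-out pattern applied recursively: the master delegates whole goals, and each mid-level orchestrator delegates tasks. A minimal synchronous sketch with hypothetical agent names:

```python
def web_agent(task: str) -> str:
    return f"data: {task}"

def writer_agent(task: str) -> str:
    return f"text: {task}"

def sub_orchestrator(goal: str, worker, tasks: list[str]) -> dict:
    # A mid-level orchestrator manages its own small team of workers.
    return {goal: [worker(t) for t in tasks]}

def master_orchestrator() -> dict:
    merged = {}
    merged.update(sub_orchestrator("product research", web_agent,
                                   ["competitor pricing", "customer reviews"]))
    merged.update(sub_orchestrator("write report", writer_agent,
                                   ["executive summary", "detailed findings"]))
    return merged

report = master_orchestrator()
```

Each level only plans one layer down, which is what keeps any single planning horizon small.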

Competitive Selection (Best-of-N)

Multiple agents attempt the same task independently, a critic scores each result, and the best output wins. Sacrifices speed for higher quality on high-stakes tasks.

ORCHESTRATOR
  ├── spawn(writer_agent_A, prompt=TASK)  → draft_A.md  (score: 8.2)
  ├── spawn(writer_agent_B, prompt=TASK)  → draft_B.md  (score: 9.1) ✓
  └── spawn(writer_agent_C, prompt=TASK)  → draft_C.md  (score: 7.6)

critic_agent scores each → orchestrator selects draft_B.md
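Selection reduces to scoring every candidate and keeping the maximum. A minimal sketch; the style tags and the toy scoring rubric are assumptions for this example (a real critic would judge the text itself):

```python
def writer_agent(style: str, task: str) -> str:
    """Each competing agent drafts in a different style."""
    return f"[{style}] draft for {task}"

SCORES = {"terse": 7.6, "balanced": 8.2, "vivid": 9.1}

def critic_agent(draft: str) -> float:
    # Toy rubric keyed on the style tag; stands in for a real quality score.
    style = draft[1:draft.index("]")]
    return SCORES[style]

def best_of_n(task: str, styles: list[str]) -> str:
    drafts = [writer_agent(s, task) for s in styles]
    # Score every draft and keep the highest-scoring one.
    return max(drafts, key=critic_agent)

winner = best_of_n("launch email", ["terse", "balanced", "vivid"])
```

The cost is N full drafts for one deliverable, which is why this pattern is reserved for high-stakes outputs.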

Real-World Scenarios

Here are six practical goals where multi-agent orchestration shines over a single-agent approach.

Research & Writing

Market Intelligence Report

  • 3 research agents scrape competitors in parallel
  • Memory agent de-duplicates findings
  • Writer agent drafts the report from shared memory
  • Critic scores and highlights gaps
  • Total time: ~4 min vs ~18 min single-agent

Software Development

Full-Stack Feature Build

  • Frontend agent writes the React component
  • Backend agent writes the API endpoint
  • Test agent writes unit tests in parallel
  • Critic agent runs the tests and flags failures
  • Orchestrator opens the pull request

Data Processing

Batch Document Analysis

  • Orchestrator shards 500 PDFs into 10 batches
  • 10 reader agents process one batch each
  • Findings written to shared memory table
  • Aggregator agent generates summary stats
  • 10× faster than sequential single-agent loop
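The sharding step in this scenario is plain batching. A minimal sketch (file names are placeholders):

```python
def shard(items: list, batch_size: int) -> list[list]:
    """Split items into fixed-size batches; the last batch may be short."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

docs = [f"doc_{i}.pdf" for i in range(500)]
batches = shard(docs, 50)  # 10 batches of 50, one per reader agent
```

Each reader agent then processes one batch and writes its findings to the shared memory table for the aggregator.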

Customer Support

Automated Ticket Triage

  • Classifier agent categorises each new ticket
  • Lookup agent checks knowledge base for answer
  • Writer agent drafts the reply
  • Critic agent checks tone and accuracy
  • Human approval agent queues for final send

Content Creation

Social Media Campaign

  • Strategy agent defines topics and angles
  • Writer agents draft posts for each platform in parallel
  • Image-prompt agent generates visuals
  • Critic reviews brand-voice compliance
  • Scheduler agent queues posts via Buffer API

Finance & Analytics

Weekly Business Digest

  • Sales agent queries CRM for weekly numbers
  • Finance agent pulls spend from accounting API
  • Traffic agent checks analytics dashboard
  • Writer agent assembles the digest
  • Delivered to Slack every Monday at 8 AM

Safety & Guardrails

More agents means more potential for runaway actions. OpenClaw has several built-in safeguards specifically designed for multi-agent operation.

Agent Timeouts

Every sub-agent has a configurable TTL. If it hasn't responded by the deadline, the orchestrator marks it failed and either retries or falls back gracefully.
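The retry-then-fallback flow can be sketched with asyncio deadlines; `run_with_ttl` and its policy are illustrative, not OpenClaw's actual scheduler:

```python
import asyncio

async def slow_agent() -> str:
    await asyncio.sleep(10)  # simulates an agent that never reports back
    return "done"

async def run_with_ttl(coro_factory, ttl: float, retries: int = 1) -> str:
    """Run an agent with a deadline; retry, then fall back gracefully."""
    for _ in range(retries + 1):
        try:
            return await asyncio.wait_for(coro_factory(), timeout=ttl)
        except asyncio.TimeoutError:
            continue  # orchestrator marks this attempt failed and retries
    return "fallback: task marked failed"

outcome = asyncio.run(run_with_ttl(slow_agent, ttl=0.01))
```

Note the factory argument: a fresh coroutine is created per attempt, since a cancelled coroutine cannot be awaited again.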

Token Budget Caps

Each sub-agent is allocated a maximum token budget. Once reached, it must summarise and return — preventing runaway inference costs in large fleets.
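A budget cap is just a counter the agent checks after every model call. A minimal sketch with an assumed `TokenBudget` helper:

```python
class TokenBudget:
    """Tracks a sub-agent's token spend against its cap."""
    def __init__(self, cap: int):
        self.cap = cap
        self.used = 0

    def charge(self, tokens: int) -> bool:
        """Record usage; return False once the agent must summarise and stop."""
        self.used += tokens
        return self.used < self.cap

budget = TokenBudget(cap=8000)       # e.g. the Free-tier per-agent budget
calls = [3000, 3000, 3000]           # token counts of successive model calls
statuses = [budget.charge(c) for c in calls]
```

When `charge` returns False, the agent's next action is forced to be a summary-and-return rather than another tool call.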

Permission Scoping

Sub-agents inherit only the permissions explicitly delegated by the orchestrator. A research sub-agent can't write files unless the orchestrator grants that scope.

Audit Logs

Every agent action — spawn, tool call, memory write, result return — is logged with a timestamp and agent ID, giving you a full traceable audit trail.

Conflict Detection

If two sub-agents write contradictory facts to shared memory, the conflict resolver flags it and surfaces the discrepancy to the orchestrator before merging.
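Detection itself is simple: group writes by key and flag any key with more than one distinct value. A minimal sketch over an in-memory stand-in for the shared store:

```python
def detect_conflicts(shared: dict[str, list[tuple[str, str]]]) -> list[str]:
    """Return keys where two agents wrote contradictory values."""
    conflicts = []
    for key, writes in shared.items():
        distinct_values = {value for _agent, value in writes}
        if len(distinct_values) > 1:
            conflicts.append(key)
    return conflicts

shared_memory = {
    "q3_revenue": [("sales_agent", "$1.2M"), ("finance_agent", "$1.4M")],
    "hq_city": [("research_agent", "Berlin")],
}
flagged = detect_conflicts(shared_memory)
```

Flagged keys are surfaced to the orchestrator, which decides which write wins before the merge proceeds.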

Human Checkpoints

You can define HITL (human-in-the-loop) checkpoints at any stage. The fleet pauses and waits for your approval before continuing past that milestone.
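A checkpoint is a gate between phases: the fleet does not advance until the approval callback says so. A minimal sketch, with the phase names and `approve` callback as assumptions:

```python
def run_fleet(phases: list[str], approve) -> list[str]:
    """Run phases in order, pausing at each HITL gate for approval."""
    completed = []
    for phase in phases:
        if not approve(phase):  # fleet pauses here awaiting a human
            break
        completed.append(f"{phase}: done")
    return completed

# Approve research and writing, but hold the fleet before publishing.
log = run_fleet(["research", "writing", "publish"],
                approve=lambda p: p != "publish")
```

In a real deployment the callback would block on a human response (a Slack button, a CLI prompt) rather than a lambda.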

Concurrency & Limits

OpenClaw's limits depend on your underlying LLM provider tier. Here's what to expect across common configurations.

| Tier / Model | Max Concurrent Agents | Max Depth (Nested) | Shared Memory | Critic Agent | Token Budget per Sub-Agent |
| --- | --- | --- | --- | --- | --- |
| Free (local model) | 2 | 1 | | | 8k tokens |
| Pro (Claude Haiku) | 5 | 2 | Limited | | 32k tokens |
| Team (Claude Sonnet) | 10 | 3 | | | 100k tokens |
| Enterprise (Claude Opus) | Unlimited* | 5 | | | 200k tokens |

* Unlimited concurrency subject to API rate limits of your LLM provider.

Prompting for Multi-Agent Tasks

How you frame your goal to OpenClaw determines whether it runs single-agent or kicks off a full orchestration. These tips help you get the best results.

✅ Do this

"Research the top 5 competitors to my SaaS product, write a 1-page comparison for each, then produce a final summary report. Do the research in parallel and review the drafts before combining them."

❌ Avoid this

"Tell me about competitors."

✅ Do this

"Process the 200 CSV rows in this file. Work in parallel batches of 20. For each row, look up the company on LinkedIn and save the employee count to memory. Once done, export a summary XLSX."

❌ Avoid this

"Go through this CSV and look up each company."

✅ Do this

"Use a critic agent to review the draft before finalising. Flag anything factually uncertain and highlight it with [VERIFY] tags."

❌ Avoid this

"Write a draft and make sure it's accurate."

✅ Do this

"Set a human checkpoint after the research phase. Wait for my approval before starting the writing phase."

❌ Avoid this

"Do everything automatically without stopping."

Ready to Build Your First Fleet?

Explore the architecture behind multi-agent systems or browse pre-built multi-agent skill packs on ClawHub.