learnopenclaw.org is an independent community resource. Not the official OpenClaw site. Visit openclaw.ai for the official project.

OpenClaw is designed to run as a persistent process — always on, always listening, always ready to act. That raises an immediate question most new users hit within the first week: where should it actually live?

You have two main options: run it yourself on hardware you own (self-hosting), or rent a small cloud server (VPS). Both work. Neither is universally better. The right answer depends on what you want out of the setup, and this post will help you figure out which option fits.

Why the hosting decision matters for OpenClaw

OpenClaw isn't a desktop app you open when you need it. It's a server process that handles incoming messages from WhatsApp, Telegram, or Slack; fires scheduled HEARTBEAT.md tasks; and maintains a persistent connection to your AI model. If the process dies, your agent goes silent.

That's why most users eventually move beyond running OpenClaw in a terminal window on their laptop. The question is whether you move to your own always-on hardware or to a rented cloud server.

Quick note: If you're just evaluating OpenClaw or running occasional one-off tasks, running it on your existing machine is completely fine. This guide is for people who want it running reliably, 24/7.

Self-hosting: what it means and when it works

Self-hosting means running OpenClaw on hardware you own and control — a spare laptop, a mini PC, a Raspberry Pi, an old desktop, or a home server. The process runs on your network, your electricity, your hardware.

The case for self-hosting

- Running costs are just electricity, roughly £2–5/month
- Maximum data privacy: everything stays on your network
- Local LLM support, given enough RAM (or a GPU)
- It puts hardware you already own to work

The downsides of self-hosting

- Uptime depends on your power, your internet, and your hardware
- Usually no static IP, so dynamic DNS is often needed
- Inbound webhooks frequently require a CGNAT workaround
- You own every problem, from failed disks to OS updates
- Scaling is constrained by the hardware you have

Watch out for CGNAT: Before committing to self-hosting with inbound webhooks, check whether your ISP puts you behind CGNAT. If they do, you'll need a Cloudflare Tunnel or similar workaround to receive incoming connections reliably.
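
A rough command-line test for CGNAT is sketched below. The IP addresses are examples, and the `curl` service and tunnel port are placeholders, not OpenClaw requirements; a WAN address in 100.64.0.0/10, or a mismatch between your router's WAN IP and your public IP, suggests CGNAT.

```shell
#!/bin/sh
# Sketch: compare the WAN IP your router reports with the IP the wider
# internet sees (e.g. from `curl -s https://ifconfig.me`). A mismatch
# suggests your ISP is using CGNAT.
is_cgnat() {
  router_wan_ip="$1"   # from your router's status page
  public_ip="$2"       # as seen from outside your network
  [ "$router_wan_ip" != "$public_ip" ]
}

if is_cgnat "100.64.1.23" "203.0.113.7"; then
  echo "likely CGNAT: inbound webhooks will need a tunnel"
  # One workaround (requires cloudflared; 8080 is a placeholder port):
  # cloudflared tunnel --url http://localhost:8080
fi
```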

VPS: what it means and when it works

A VPS (Virtual Private Server) is a rented Linux server in a data centre. You get root access, a static IP, and a guaranteed slice of CPU, RAM, and storage. The server runs 24/7 regardless of what's happening at your house.

Good VPS options for OpenClaw

- DigitalOcean and Hetzner: well-documented entry plans in the $5–10/month range
- Oracle Cloud's always-free tier: 4 ARM cores and 24 GB RAM at no cost (see the local LLM section below)

For OpenClaw with a cloud API model, you need at least 2 GB RAM. For multi-agent setups or heavier workloads, 4 GB is more comfortable.
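
As a sanity check after provisioning, the snippet below (a sketch, assuming a Linux server with `/proc/meminfo`) compares the machine's RAM against those thresholds:

```shell
#!/bin/sh
# Sketch: check a server against the RAM guidance above (2 GB minimum
# with a cloud API model, 4 GB for multi-agent setups).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_mb=$((mem_kb / 1024))

if [ "$mem_mb" -ge 4096 ]; then
  echo "ok: ${mem_mb} MB - headroom for multi-agent setups"
elif [ "$mem_mb" -ge 2048 ]; then
  echo "ok: ${mem_mb} MB - fine for a single instance on a cloud API model"
else
  echo "warning: ${mem_mb} MB - below the suggested 2 GB minimum"
fi
```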

The case for a VPS

- 99.9%+ uptime SLAs, independent of your home power and broadband
- A static IP, so inbound webhooks work out of the box
- The provider handles hardware failures and maintenance
- Resizing to a larger plan is typically one click

The downsides of a VPS

- An ongoing bill of $5–10/month for a basic plan
- Data privacy depends on how far you trust the provider
- Local LLM inference is only practical on expensive high-memory tiers

Side-by-side comparison

| Factor | Self-Host | VPS |
| --- | --- | --- |
| Monthly cost | Electricity only (~£2–5) | $5–10/month |
| Uptime reliability | Depends on your setup | 99.9%+ SLA |
| Static IP | Usually no (DDNS needed) | Yes, always |
| Inbound webhooks | CGNAT workaround often needed | Works out of the box |
| Data privacy | Maximum (stays at home) | Depends on your trust in the provider |
| Local LLM support | Yes (with enough RAM/GPU) | Only on expensive tiers |
| Setup complexity | Medium (hardware + OS) | Medium (Linux + SSH) |
| Maintenance | You own all the problems | Provider handles hardware |
| Scalability | Constrained by hardware | Resize with one click |

What about local LLMs?

This is the biggest technical fork in the road. If running a local model (Llama 3, Mistral, Phi-3) matters to you, whether for cost, privacy, or offline use, your hosting choice is essentially made for you: you need to self-host.

A useful local model needs at minimum 8 GB of RAM (for a small quantised model like Phi-3-mini), and realistically 16 GB for anything capable enough to handle complex tasks reliably. VPS plans at that memory tier cost $20–40/month, at which point you've lost the cost advantage entirely.
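
The arithmetic behind those figures can be sketched as follows. Quantised weights take roughly (parameters × bits-per-weight ÷ 8) bytes before KV cache and runtime overhead; the parameter counts below are approximate, and the overhead note is a rule of thumb rather than a measured value.

```shell
#!/bin/sh
# Back-of-the-envelope RAM estimate for quantised model weights:
# gigabytes ~= billions of parameters * bits per weight / 8.
estimate_gb() {
  awk -v params_b="$1" -v bits="$2" 'BEGIN { printf "%.1f", params_b * bits / 8 }'
}

echo "Phi-3-mini (3.8B) at 4-bit: ~$(estimate_gb 3.8 4) GB of weights"
echo "Llama 3 8B at 4-bit:        ~$(estimate_gb 8 4) GB of weights"
echo "Mistral 7B at 8-bit:        ~$(estimate_gb 7 8) GB of weights"
# Add the OS, OpenClaw itself, and inference overhead on top, and the
# practical floor lands near 8 GB for small models, 16 GB for capable ones.
```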

The exception is Oracle Cloud's always-free tier: 4 ARM cores and 24 GB RAM, completely free. It can run quantised models, though ARM inference is slower than x86. It's worth trying if you want a free cloud setup with local model support.

Hybrid approach: Some users run a cheap VPS ($5/month) for always-on availability and messaging integrations, and point OpenClaw's model config at a local Ollama instance running on their home machine for actual inference. You get the uptime of a VPS with the privacy and cost of a local model — at the cost of some added complexity.
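
One way to wire this up is an SSH reverse tunnel from the home machine to the VPS, so OpenClaw on the VPS can reach the local Ollama instance. This is a sketch: Ollama's default port is 11434, but the hostname, username, and the exact OpenClaw config key are placeholders you would adapt.

```shell
# Run on the home machine. -N: no remote shell; -R: make the VPS's local
# port 11434 forward back to Ollama at home. Hostname/user are placeholders.
ssh -N -R 11434:localhost:11434 openclaw@your-vps.example.com

# On the VPS, point OpenClaw's model endpoint at http://localhost:11434
# (check your OpenClaw config for the exact setting name).
```

A supervisor such as autossh, or a systemd unit with `Restart=always`, keeps the tunnel up if the link drops.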

Who should pick what

Pick a VPS if…

You want something that just works

If reliability matters more than cost savings, you're running cloud API models (Claude, GPT-4), and you want to set it up once and forget about it — a $6/month VPS is the right call. Start with DigitalOcean or Hetzner, run OpenClaw in a Docker container, and you'll have a production-ready setup in an afternoon.
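
A deployment along those lines might look like the sketch below. The image name and volume path are assumptions, not the official OpenClaw image; check the project's Docker guide for the real values.

```shell
# Hypothetical Docker deployment sketch. `--restart unless-stopped` makes
# Docker bring the agent back after crashes and server reboots.
docker run -d \
  --name openclaw \
  --restart unless-stopped \
  -v ~/.openclaw:/data \
  openclaw/openclaw:latest   # placeholder image name
```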

Self-host if…

Privacy or local LLMs are a priority

If you want all your data to stay on your network, you want to run local models for free, or you already have suitable hardware sitting idle — self-hosting makes total sense. Invest in a mini PC with 16 GB+ RAM (an Intel N100 system costs £150–200 and runs cool and silent) and you'll have a capable, permanent home for OpenClaw.

Just starting out?

Run it on your existing machine first

There's no reason to commit to either option before you've used OpenClaw for a few weeks. Run it locally, see what tasks you actually use it for, and let that experience guide your infrastructure decision. Most people only move to a dedicated setup once they've found workflows they actually rely on.

Frequently asked questions

Can I run OpenClaw on a Raspberry Pi?

Yes — a Raspberry Pi 4 or 5 with 4–8 GB of RAM handles OpenClaw comfortably when using a cloud API model. It's not powerful enough for meaningful local LLM inference, but for scheduling tasks, managing messages, and running automations via Claude or GPT-4, it works well and draws very little power.

Does OpenClaw need a static IP?

Not always. OpenClaw connects out to messaging platforms (Telegram's API, WhatsApp Business API, Slack webhooks) — it doesn't need inbound connections for basic use. A static IP only becomes necessary if you're running a custom webhook endpoint or exposing the OpenClaw API directly to the internet.

How do I move OpenClaw from my laptop to a VPS?

The cleanest way is to use the Docker setup guide — package your OpenClaw config, SOUL.md, AGENTS.md, and MEMORY.md into a container and deploy it to your VPS. If you're not using Docker, the simplest migration is to copy your ~/.openclaw directory to the VPS via scp, install OpenClaw fresh, and run it as a systemd service.
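
A minimal sketch of that non-Docker migration follows; the hostname is a placeholder, and the `ExecStart` path assumes OpenClaw installs to `/usr/local/bin/openclaw`, so adjust both to your setup.

```shell
# Copy config, SOUL.md, AGENTS.md, and MEMORY.md to the VPS:
scp -r ~/.openclaw user@your-vps:~/

# Then, on the VPS, a minimal systemd unit so the agent survives reboots:
sudo tee /etc/systemd/system/openclaw.service >/dev/null <<'EOF'
[Unit]
Description=OpenClaw agent
After=network-online.target

[Service]
ExecStart=/usr/local/bin/openclaw
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable --now openclaw
```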

What's the minimum VPS spec for OpenClaw?

With a cloud API model: 1 vCPU and 2 GB RAM is workable for a single user. For multi-agent setups or multiple users sharing one instance, 2 vCPU and 4 GB RAM gives you comfortable headroom. For local LLM inference on a VPS, don't go below 16 GB RAM.