OpenClaw is designed to run as a persistent process — always on, always listening, always ready to act. That raises an immediate question most new users hit within the first week: where should it actually live?
You have two main options: run it yourself on hardware you own (self-hosting), or rent a small cloud server (VPS). Both work. Neither is universally better. The right answer depends on what you want out of the setup, and this post will help you figure out which option fits.
Why the hosting decision matters for OpenClaw
OpenClaw isn't a desktop app you open when you need it. It's a server process that handles incoming messages from WhatsApp, Telegram, or Slack; fires scheduled HEARTBEAT.md tasks; and maintains a persistent connection to your AI model. If the process dies, your agent goes silent.
That's why most users eventually move beyond running OpenClaw in a terminal window on their laptop. The question is whether you move to your own always-on hardware or to a rented cloud server.
Self-hosting: what it means and when it works
Self-hosting means running OpenClaw on hardware you own and control — a spare laptop, a mini PC, a Raspberry Pi, an old desktop, or a home server. The process runs on your network, your electricity, your hardware.
The case for self-hosting
- Minimal ongoing cost — after the hardware, you only pay for electricity (a Raspberry Pi 5 costs about £2–3/month to run 24/7).
- Total privacy — your data, your files, your conversations never touch a third-party server. Everything stays on your home network.
- Local LLM support — if you have a machine with 16 GB+ RAM or a decent GPU, you can run Llama 3, Mistral, or Phi-3 locally at zero per-query cost. This is the only way to get genuinely free, fully private AI.
- Full hardware control — add an SSD, upgrade RAM, attach external drives for large memory stores.
The downsides of self-hosting
- Uptime depends on your setup — your home internet going down, a power cut, or a router reboot will take OpenClaw with it.
- Dynamic IP issues — most home internet connections use a changing IP address. If you're exposing webhooks (for some messaging integrations), you'll need a DDNS service to keep things working.
- CGNAT and ISP blocking — many ISPs put home users behind carrier-grade NAT, making it hard or impossible to accept inbound connections without a workaround like a reverse tunnel (Cloudflare Tunnel, ngrok).
- Hardware failures are your problem — no SLA, no support. If the drive dies at 2am, your agent is down until you fix it.
- Raspberry Pi limitations — a Pi 4 with 4 GB RAM can run OpenClaw with a cloud API model, but it will struggle with local LLMs and multi-agent setups.
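If you hit the CGNAT or dynamic-IP problems above, a reverse tunnel is the usual fix. Here's a minimal Cloudflare Tunnel config sketch — the hostname and the local port (3000) are assumptions; substitute whatever port your OpenClaw webhook listener actually uses, and note that `<TUNNEL-ID>` is the ID `cloudflared tunnel create` gives you:

```yaml
# ~/.cloudflared/config.yml — sketch only; hostname and port are placeholders
tunnel: <TUNNEL-ID>
credentials-file: /home/pi/.cloudflared/<TUNNEL-ID>.json

ingress:
  # Route your public hostname to the local OpenClaw webhook port (assumed 3000)
  - hostname: openclaw.example.com
    service: http://localhost:3000
  # Catch-all: reject anything else
  - service: http_status:404
```

With this in place, `cloudflared tunnel run` keeps an outbound connection open to Cloudflare, so inbound webhooks work even behind CGNAT — no port forwarding, no DDNS.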
VPS: what it means and when it works
A VPS (Virtual Private Server) is a rented Linux server in a data centre. You get root access, a static IP, and a guaranteed slice of CPU, RAM, and storage. The server runs 24/7 regardless of what's happening at your house.
Good VPS options for OpenClaw
For OpenClaw with a cloud API model, you need at least 2 GB RAM. For multi-agent setups or heavier workloads, 4 GB is more comfortable.
- Hetzner CX22 — 2 vCPU, 4 GB RAM, €4.35/month. Best value in Europe, excellent performance.
- Vultr Cloud Compute — 1 vCPU, 2 GB RAM, $6/month. Solid US option with 25+ data centre locations.
- DigitalOcean Basic Droplet — 1 vCPU, 2 GB RAM, $6/month. Great documentation and community for beginners.
- Oracle Cloud Free Tier — 4 ARM cores, 24 GB RAM, always free. Excellent for OpenClaw — the free tier is genuinely generous, though setup is more involved.
The case for a VPS
- Reliable uptime — data centres offer 99.9%+ uptime SLAs, UPS power, and redundant connections. Your agent won't go down because your cat unplugged the router.
- Static IP out of the box — every VPS comes with a static public IP, making webhooks and messaging integrations straightforward.
- No CGNAT headaches — full inbound and outbound connectivity, no tunnels needed.
- SSH from anywhere — manage, update, or debug your agent from any device, anywhere.
- Easy Docker support — running OpenClaw in a container is the cleanest deployment approach, and VPS providers make this trivially easy.
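As a sketch of what the container route looks like, here's a minimal Compose file. The image name and volume path are assumptions — check the OpenClaw docs for the real image and config location:

```yaml
# docker-compose.yml — illustrative sketch, not the official setup
services:
  openclaw:
    image: openclaw/openclaw:latest   # assumption: actual image name may differ
    restart: unless-stopped           # survives reboots and crashes
    env_file: .env                    # API keys for your model provider
    volumes:
      - ./data:/root/.openclaw        # persist config, SOUL.md, MEMORY.md across updates
```

The `restart: unless-stopped` line is what makes this a good fit for an always-on agent: the container comes back automatically after a VPS reboot.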
The downsides of a VPS
- Monthly cost — small but real. Budget $5–10/month for a usable setup.
- Your data is on someone else's hardware — you trust the provider. For most use cases this is fine; for highly sensitive data it's worth thinking about.
- Local LLMs are limited by RAM — cheap VPS tiers cap out at 2–4 GB RAM, which isn't enough for most local models. You'd need a $20–40/month tier to run even a small quantised model.
- Requires basic Linux knowledge — you need to be comfortable with SSH, systemd, and package management. Not a high bar, but it is a bar.
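To give a sense of what "comfortable with systemd" means in practice, here's a sketch of a unit file for running OpenClaw as a service. The user, binary path, and `serve` subcommand are assumptions — adjust to match how you actually installed and launch OpenClaw:

```ini
# /etc/systemd/system/openclaw.service — illustrative sketch
[Unit]
Description=OpenClaw agent
After=network-online.target
Wants=network-online.target

[Service]
User=openclaw
ExecStart=/usr/local/bin/openclaw serve   # assumption: your launch command may differ
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl enable --now openclaw` starts it and keeps it starting on every boot.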
Side-by-side comparison
| Factor | Self-Host | VPS |
|---|---|---|
| Monthly cost | Electricity only (~£2–5) | $5–10/month |
| Uptime reliability | Depends on your setup | 99.9%+ SLA |
| Static IP | Usually no (DDNS needed) | Yes, always |
| Inbound webhooks | CGNAT workaround often needed | Works out of the box |
| Data privacy | Maximum — stays at home | Depends on your trust in the provider |
| Local LLM support | Yes (with enough RAM/GPU) | Only on expensive tiers |
| Setup complexity | Medium (hardware + OS) | Medium (Linux + SSH) |
| Maintenance | You own all the problems | Provider handles hardware |
| Scalability | Constrained by hardware | Resize with one click |
What about local LLMs?
This is the biggest technical fork in the road. If running a local model (Llama 3, Mistral, Phi-3) matters to you — whether for cost, privacy, or offline use — your hosting choice is essentially made for you: you need to self-host.
A useful local model needs at least 8 GB of RAM (for a small quantised model like Phi-3-mini), and realistically 16 GB for anything capable enough to handle complex tasks reliably. VPS plans at that memory tier cost $20–40/month, at which point you've lost the cost advantage entirely.
The exception is Oracle Cloud's always-free tier: 4 ARM cores and 24 GB RAM, completely free. It can run quantised models, though ARM inference is slower than x86. It's worth trying if you want a free cloud setup with local model support.
Who should pick what
You want something that just works
If reliability matters more than cost savings, you're running cloud API models (Claude, GPT-4), and you want to set it up once and forget about it — a $6/month VPS is the right call. Start with DigitalOcean or Hetzner, run OpenClaw in a Docker container, and you'll have a production-ready setup in an afternoon.
Privacy or local LLMs are a priority
If you want all your data to stay on your network, you want to run local models for free, or you already have suitable hardware sitting idle — self-hosting makes total sense. Invest in a mini PC with 16 GB+ RAM (an Intel N100 system costs £150–200 and runs cool and silent) and you'll have a capable, permanent home for OpenClaw.
Run it on your existing machine first
There's no reason to commit to either option before you've used OpenClaw for a few weeks. Run it locally, see what tasks you actually use it for, and let that experience guide your infrastructure decision. Most people only move to a dedicated setup once they've found workflows they actually rely on.
Frequently asked questions
Can I run OpenClaw on a Raspberry Pi?
Yes — a Raspberry Pi 4 or 5 with 4–8 GB of RAM handles OpenClaw comfortably when using a cloud API model. It's not powerful enough for meaningful local LLM inference, but for scheduling tasks, managing messages, and running automations via Claude or GPT-4, it works well and draws very little power.
Does OpenClaw need a static IP?
Not always. OpenClaw connects out to messaging platforms (Telegram's API, WhatsApp Business API, Slack webhooks) — it doesn't need inbound connections for basic use. A static IP only becomes necessary if you're running a custom webhook endpoint or exposing the OpenClaw API directly to the internet.
How do I move OpenClaw from my laptop to a VPS?
The cleanest way is to use the Docker setup guide — package your OpenClaw config, SOUL.md, AGENTS.md, and MEMORY.md into a container and deploy it to your VPS. If you're not using Docker, the simplest migration is to copy your ~/.openclaw directory to the VPS via scp, install OpenClaw fresh, and run it as a systemd service.
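The non-Docker migration boils down to three commands. Here's a sketch — it bundles the state directory into a tarball, then copies it over. The demo below stages a throwaway directory so the archive step is self-contained; in practice you'd run `tar` against your real `~/.openclaw`, and the `scp`/remote steps (commented out) need your actual VPS hostname:

```shell
# Stage a demo config dir so this sketch runs anywhere;
# in practice, skip this and point tar at your real home directory.
DEMO_HOME=$(mktemp -d)
mkdir -p "$DEMO_HOME/.openclaw"
printf 'persona notes\n' > "$DEMO_HOME/.openclaw/SOUL.md"

# 1. On the laptop: bundle the whole state directory
tar -C "$DEMO_HOME" -czf /tmp/openclaw-backup.tar.gz .openclaw

# 2. Copy it to the VPS (needs your real host, so commented out here)
# scp /tmp/openclaw-backup.tar.gz you@your-vps:~/

# 3. On the VPS: unpack, install OpenClaw fresh, enable the systemd service
# tar -C ~ -xzf ~/openclaw-backup.tar.gz
# sudo systemctl enable --now openclaw

# Sanity-check the archive contents before you copy it anywhere
tar -tzf /tmp/openclaw-backup.tar.gz
```

Archiving with `tar` before `scp` keeps file permissions intact and makes the transfer a single file, which is easier to verify than a recursive directory copy.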
What's the minimum VPS spec for OpenClaw?
With a cloud API model: 1 vCPU and 2 GB RAM is workable for a single user. For multi-agent setups or multiple users sharing one instance, 2 vCPU and 4 GB RAM gives you comfortable headroom. For local LLM inference on a VPS, don't go below 16 GB RAM.