
OpenClaw on a VPS: Full Deployment & Cost Guide 2026
Table of Contents
- Introduction
- What is OpenClaw and how does it work?
- Why use a VPS for OpenClaw instead of a local machine?
- What are the minimum and recommended VPS specs for OpenClaw?
- Which VPS providers work best for OpenClaw in 2025/2026?
- How much does it cost to run OpenClaw on a VPS?
- How do I deploy OpenClaw on a VPS step by step?
- How do I configure models and API keys for OpenClaw?
- What security practices does OpenClaw recommend?
- How do I optimize LLM API costs when running OpenClaw?
- What about scaling, monitoring, and backups?
- Frequently Asked Questions (FAQs)
Introduction
If you want to run OpenClaw for real - not just on your laptop for a quick test - you need a server that is always on and reachable. A VPS is the right choice. This guide explains why, what specs to pick, which providers to consider, what you will actually pay (including LLM API costs), and how to get OpenClaw running on a fresh Ubuntu VPS with Docker and Node. All claims are tied to primary sources: the OpenClaw repo, official docs, and current API and pricing pages.
What is OpenClaw and how does it work?
OpenClaw is an open-source, local-first AI assistant framework that you run yourself. It acts as a gateway and agent runtime that connects messaging channels (WhatsApp, Telegram, Slack, Discord, iMessage, and more) to LLMs and local tools or skills. The project lives on GitHub, includes a CLI onboarding wizard, and strongly recommends Anthropic and OpenAI models while supporting multiple providers.
Why that matters for hosting: OpenClaw is built to run on a machine (laptop or server) and persist state. You need a host that can run long-lived processes, handle network traffic safely, and provide enough CPU and memory for agents and tool execution.
Why use a VPS for OpenClaw instead of a local machine?
A few practical reasons:
- Uptime and remote access - A VPS is available 24/7 without leaving a home machine on or exposing your home network.
- Network and bandwidth - Datacenter networks usually give you better bandwidth, a static IP, and more predictable latency.
- Isolation - Your OpenClaw instance runs in an isolated environment, separate from your local devices.
- Scaling and recovery - You can resize the VM or use snapshots to redeploy quickly.
- Cost and reliability - For production use, a VPS is often cheaper and safer long term than keeping a Mac Mini or laptop running at home.
Bottom line: A Mac Mini or home machine is fine for dev and testing. For reliable, remotely accessible, always-on OpenClaw, use a VPS.
What are the minimum and recommended VPS specs for OpenClaw?
These numbers line up with real usage, repo guidance, and typical agent overhead:
| Purpose | vCPU | RAM | Storage | Notes |
|---|---|---|---|---|
| Experimental, single user, light | 2 | 4 GB | 40 GB SSD | One user, low concurrency, external LLMs. |
| Recommended production (single org) | 4 | 8 GB | 80 GB | Multiple agents, file indexing, audio, caching. |
| High load, multi-agent, local models | 8+ | 16+ GB | 160+ GB | Vector DBs, local models, or heavy media. |
OpenClaw expects Node 22 or newer and runs a Gateway daemon plus per-agent runtimes and optional sandboxes. Add container overhead and occasional Python or voice tools, and memory and disk quickly become the constraint. The repo also ships a Docker Compose setup and a daemon mode for persistent runs.
Which VPS providers work best for OpenClaw in 2025/2026?
Providers differ on price, network, and I/O. From late 2025 and early 2026 benchmarks and roundups:
- Hetzner - Strong price/performance in Europe, good NVMe options, great for budget to midrange.
- DigitalOcean - Simple UI, predictable pricing, good for small and medium setups.
- Vultr / Linode - Similar to DigitalOcean with a decent global footprint.
- AWS Lightsail / EC2 - Reliable and global but costlier and more complex. Sensible if you already use AWS.
- OVH - Competitive in Europe.
- Specialist providers (Scaleway, UpCloud, etc.) - Worth checking for region and cost.
Hetzner and some smaller specialists often lead on value in price/performance tests. Choose by region, IOPS, and bandwidth. Real cost examples: Hetzner Cloud can start around 3.49 EUR per month for a small instance; Hostinger VPS offers entry tiers around 4.59-4.99 USD per month with backups and DDoS protection included. Pick based on whether you want the lowest base price (Hetzner-style) or bundled features (Hostinger-style).
How much does it cost to run OpenClaw on a VPS?
Infra-only examples per month (approximate):
- Tiny (2 vCPU, 4 GB RAM) - 6-12 USD (Hetzner, Azure, DigitalOcean often have offers).
- Standard (4 vCPU, 8 GB RAM) - 12-25 USD.
- Large (8 vCPU, 16 GB RAM) - 40-100 USD depending on provider and bandwidth.
LLM API costs can dominate. OpenAI and Anthropic publish per-million-token pricing; frontier models cost more. If OpenClaw uses Claude Opus/Sonnet or GPT-tier models for long context, expect higher rates than smaller models. Check OpenAI pricing and Anthropic pricing for current numbers.
What do real OpenClaw cost examples look like?
Assume one user, 100 agent prompts per day, and about 1k tokens per prompt-response pair. That is 100k tokens per day, or roughly 3M tokens per month. At 3-15 USD per million tokens (mid to high tier), the LLM alone runs about 9-45 USD per month; frontier models or heavier usage can push that into the hundreds. Add infra (e.g. 18 USD for a 4 vCPU / 8 GB VPS) and total monthly costs land anywhere from about 30 USD to several hundred, depending on model and usage. Always confirm on the providers' pricing pages before committing.
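The arithmetic above can be sketched in a few lines of shell. All inputs are the article's assumed volumes and rates, not provider quotes:

```shell
# Monthly LLM spend estimate. Every number here is an assumption from the text.
prompts_per_day=100        # agent prompts per day
tokens_per_prompt=1000     # tokens per prompt-response pair
days=30
tokens_per_month=$((prompts_per_day * tokens_per_prompt * days))  # 3,000,000

# Price per million tokens, in cents, to keep the math in integers.
low_cents=300    # 3 USD per million tokens
high_cents=1500  # 15 USD per million tokens

low_usd=$((tokens_per_month / 1000000 * low_cents / 100))
high_usd=$((tokens_per_month / 1000000 * high_cents / 100))
infra_usd=18     # example 4 vCPU / 8 GB VPS

echo "LLM: ${low_usd}-${high_usd} USD/month, total with infra: $((low_usd + infra_usd))-$((high_usd + infra_usd)) USD/month"
```

Swap in your own prompt volume and the current per-million-token price from the provider's pricing page to get a first-order estimate.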
How do I deploy OpenClaw on a VPS step by step?
This path gets OpenClaw running on a fresh Ubuntu 22.04 VPS with minimal friction. The repo provides a Docker setup and CLI wizard - use them.
Assumptions: Ubuntu 22.04 LTS, root or a sudo user. Ubuntu 22.04 receives standard security updates until April 2027.
Provision the VPS - Create a VM with at least 2 vCPU, 4 GB RAM, 40 GB SSD. Attach an SSH key.
Quick server hardening:
sudo apt update && sudo apt upgrade -y
sudo adduser openclawuser
sudo usermod -aG sudo openclawuser
sudo apt install -y ufw
sudo ufw allow OpenSSH
sudo ufw enable
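Once key-based login works for your new user, it is standard OpenSSH hardening (not OpenClaw-specific) to disable password authentication entirely:

```shell
# Refuse password logins; SSH keys only.
# Verify key login works in a second session before closing this one.
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh
```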
Install Docker - Follow the Docker Engine install guide for Ubuntu 22.04. Then allow your user to run Docker without sudo: sudo usermod -aG docker $USER.
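Docker's own convenience script is the quickest route on a fresh box; the apt-repository method from the official install guide works equally well:

```shell
# Official Docker installer script (review it before piping to sh if you prefer).
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# The docker group change from usermod takes effect on your next login;
# `newgrp docker` applies it to the current shell.
docker --version
```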
Install Node 22+ and the OpenClaw CLI - Use NodeSource, nvm, or your package manager. Example with NodeSource:
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt-get install -y nodejs
node -v
Then install OpenClaw and run onboarding:
npm install -g openclaw@latest
openclaw onboard --install-daemon
Docker Compose option - The repo includes docker-compose.yml and scripts. Clone the repo, then run docker compose up -d (or the provided setup script) and follow the README for workspace setup, models, and channel auth.
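A minimal sketch of the Compose path, assuming the README's layout. The clone URL is a placeholder, not the real repository address:

```shell
# Clone URL is illustrative - use the actual repository address from the docs.
git clone https://github.com/example/openclaw.git
cd openclaw
docker compose up -d      # start the gateway and agent containers in the background
docker compose logs -f    # follow logs during first-run onboarding
```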
How do I configure models and API keys for OpenClaw?
OpenClaw supports multiple providers (Anthropic, OpenAI, others). The README and onboarding flow guide you through adding API keys.
- Prefer account-level API keys with usage limits and billing alerts.
- Model choice matters: use cheaper models for high volume and reserve expensive models for critical reasoning. The repo recommends Anthropic Pro/Max and Opus 4.5 for long-context strength.
Example env (pseudo):
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
OPENCLAW_MODELS="anthropic:opus-4.5,openai:gpt-4o-mini"
Store secrets in a vault or your provider's secrets manager.
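If a full secrets manager is overkill for a single box, at minimum keep the env file owner-only (mode 600). A sketch with an illustrative path:

```shell
# Create an env file readable and writable only by the current user.
env_file="$HOME/.openclaw.env"   # path is illustrative - match your setup
install -m 600 /dev/null "$env_file"
printf 'ANTHROPIC_API_KEY=%s\n' 'sk-ant-...' >> "$env_file"
stat -c '%a' "$env_file"   # prints 600
```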
What security practices does OpenClaw recommend?
OpenClaw documents security defaults and the risks of inbound channels and untrusted DMs. Follow these:
- Treat inbound DMs as untrusted - Use pairing and approval; the default policy is described in the docs.
- Use sandbox mode for group sessions - If you open the bot to broader audiences, run non-main sessions in Docker sandboxes.
- Do not expose admin ports - Restrict the gateway with a firewall and reverse proxy.
- Secrets and API keys - Use secret stores or encrypted env files and rotate keys regularly.
- Logging and least privilege - Avoid logging full content unless needed; run agents with minimal permissions.
Read the repo's security guide and DM policies before exposing the bot to public channels.
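Translating the firewall advice into ufw rules (port 3000 below is a stand-in for whatever port your gateway actually listens on; do not open it publicly):

```shell
sudo ufw default deny incoming    # drop anything not explicitly allowed
sudo ufw default allow outgoing
sudo ufw allow OpenSSH
sudo ufw allow 443/tcp            # only if you terminate TLS on this box
# Deliberately no rule for the gateway port (e.g. 3000):
# reach it through the reverse proxy or an SSH tunnel instead.
```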
How do I optimize LLM API costs when running OpenClaw?
API spend can grow fast. Tactics that help:
- Cache and rate-limit - Cache repeated answers and avoid duplicate API calls.
- Model tiering - Use a cheaper model for routine tasks and expensive models only for key steps. OpenClaw supports model routing and failover.
- Token limits and truncation - Keep prompts and inputs concise.
- Batching - If the API supports it, batch small calls to reduce overhead.
- Local models for cheap tasks - Use local small models (e.g. llama.cpp) for low-cost tasks if you have the hardware and can manage the tradeoffs.
- Monitor and alert - Set billing alerts on provider consoles and watch token usage.
What about scaling, monitoring, and backups?
Scaling - The easiest path is vertical: resize to a bigger VM. For high concurrency, run multiple gateways behind a load balancer with a shared datastore (object store and vector DB).
Monitoring - Use Prometheus, Datadog, or provider metrics. Alert on high CPU, memory, and unusual outbound traffic (possible abuse).
Backups - Snapshot the VM and back up gateway config, skills, and important data. OpenClaw uses local skills registries and persistent config; losing them means re-onboarding. Schedule daily or weekly backups to object storage.
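A simple nightly backup sketch: archive the config directory and ship it off-box with rclone. The directory path and remote name are illustrative; set up the remote with `rclone config` first:

```shell
# Archive the gateway state (directory is an assumption - adjust to your install).
backup="/tmp/openclaw-$(date +%F).tar.gz"
tar czf "$backup" -C "$HOME" .openclaw
# "backups" is a placeholder rclone remote pointing at your object storage bucket.
rclone copy "$backup" backups:openclaw/
```

Put this in a script and schedule it with cron (e.g. `0 3 * * *`) for daily runs.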
