3/10/2026
A complete guide to deploying OpenClaw, the open-source AI Agent framework with 100K+ GitHub stars. Covers local setup with Ollama, cloud deployment on DigitalOcean/Railway, step-by-step installation, and a full comparison across security, cost, and performance.
OpenClaw Deployment Guide: Local vs Cloud - Which One Is Right for You?
OpenClaw has become the hottest open-source AI Agent framework of 2026, surpassing 100,000 GitHub stars in record time. Unlike traditional chatbots, OpenClaw isn't just about conversation: it can execute shell commands, read and write files, control browsers, and integrate with messaging platforms like Telegram, Discord, and Slack.
The first real decision you'll face: deploy locally or in the cloud?
This guide walks you through both paths with step-by-step instructions, then helps you pick the right one.
What Makes OpenClaw Different
Most AI tools stop at text generation. OpenClaw goes further. It's an autonomous execution framework with system-level permissions:
- File system access: Read, write, and manage files on your machine
- Shell execution: Run commands and scripts autonomously
- Browser control: Navigate the web, fill forms, extract data
- Memory: Maintains context across sessions, not just single conversations
- Skills ecosystem: Extensible plugins for code review, data analysis, IM bots, and more
- Multi-platform integration: Works with Telegram, Discord, Slack, WhatsApp out of the box
⚠️ Security note: Because OpenClaw has system-level access, security experts recommend not running it on your primary computer. Use a dedicated machine or a cloud server with proper isolation.
Prerequisites
Regardless of deployment method, you'll need:
- Node.js 22 or higher (check with node --version)
- At least 4GB RAM (16GB+ if running local LLMs)
- A stable internet connection (for cloud APIs) or a capable GPU (for local LLMs)
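As a quick preflight, you can parse the node --version string in plain shell to confirm the major version is at least 22. The node_major helper below is purely illustrative, not part of OpenClaw:

```shell
# Hypothetical preflight helper: extract the major version from a
# "node --version" string such as "v22.3.0".
node_major() {
  printf '%s\n' "$1" | sed 's/^v//; s/\..*//'
}

version="v22.3.0"   # in practice: version=$(node --version)
if [ "$(node_major "$version")" -ge 22 ]; then
  echo "Node.js OK"
else
  echo "Node.js 22+ required" >&2
fi
```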
Option 1: Local Deployment
Running OpenClaw on your own hardware (a Mac mini, home server, Raspberry Pi, or workstation) gives you complete control, full privacy, and zero recurring costs.
Step 1: Install OpenClaw
The fastest way is the official one-click installer:
macOS / Linux / WSL2:
curl -fsSL https://openclaw.ai/install.sh | bash
Windows (PowerShell):
iwr -useb https://openclaw.ai/install.ps1 | iex
The installer auto-detects your Node.js version, installs the CLI, and launches the onboarding wizard.
Or install manually with npm:
npm install -g openclaw@latest
openclaw onboard --install-daemon
Step 2: Connect a Language Model
OpenClaw needs a brain. You have two options:
Option A: Cloud API (Claude / GPT)
Edit ~/.openclaw/.env:
ANTHROPIC_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
Option B: Local LLM with Ollama (fully offline)
First, install Ollama and pull a tool-calling capable model:
# Install Ollama: https://ollama.ai
ollama pull llama3.3 # general reasoning
ollama pull qwen2.5-coder:32b # code tasks
Then link OpenClaw to Ollama:
# Set a placeholder API key (Ollama doesn't need a real one)
openclaw config set models.providers.ollama.apiKey "ollama-local"
Run the wizard to auto-detect your models:
openclaw onboard
Or manually configure ~/.openclaw/openclaw.json:
{
"agents": {
"defaults": {
"model": {
"primary": "ollama/llama3.3",
"fallbacks": ["ollama/qwen2.5-coder:32b"]
}
}
}
}
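The primary/fallbacks split means the primary model is tried first, with the fallback list presumably consulted in order when it fails. A minimal shell sketch of that selection logic (try_model is a stand-in, not a real OpenClaw command; here it pretends only llama3.3 is reachable):

```shell
# Illustrative primary/fallback loop, not OpenClaw's actual code.
try_model() {
  # Stand-in for a real model call; "succeeds" only for llama3.3.
  [ "$1" = "ollama/llama3.3" ]
}

selected=""
for model in "ollama/llama3.3" "ollama/qwen2.5-coder:32b"; do
  if try_model "$model"; then
    selected="$model"
    break
  fi
done
echo "using: $selected"
```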
Step 3: Start the Gateway
openclaw gateway
The gateway starts at ws://127.0.0.1:18789. Open http://127.0.0.1:18789 in your browser to access the control panel.
Step 4: Verify Everything Works
# List models OpenClaw has detected
openclaw models list
# Check installed Ollama models
ollama list
Option 2: Cloud Deployment
Cloud deployment trades customization for speed and security. Most platforms offer one-click templates that are live in under 5 minutes.
DigitalOcean (Recommended)
- Go to the DigitalOcean Marketplace and search for "OpenClaw"
- Choose a Droplet ($24/month or higher is recommended)
- Wait 2-3 minutes for initialization
- Visit http://your_server_IP:18789 to open the control panel
- Get your Gateway Token for authentication:
# Run this on your server
cat ~/.openclaw/gateway-token
Paste the token into the control panel Settings to complete setup.
The DigitalOcean image includes pre-configured security hardening:
- Token-based gateway authentication
- Firewall rules and port rate limiting
- Non-root execution
- Docker container isolation
Railway / Vultr
Both platforms offer one-click OpenClaw deployment templates. The flow is similar: pick a plan, deploy, configure your API key, done.
Docker (Any Cloud or VPS)
# One-line install
bash <(curl -fsSL https://raw.githubusercontent.com/phioranex/openclaw-docker/main/install.sh)
# Edit your API keys
nano ~/.openclaw/.env
# Start the service
cd ~/.openclaw
docker compose up -d openclaw-gateway
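For orientation, the compose file the installer drops into ~/.openclaw will look roughly like the sketch below. The image name and mount paths are assumptions; defer to the file the installer actually writes:

```yaml
# Illustrative sketch only; the installer generates the authoritative file.
services:
  openclaw-gateway:
    image: openclaw/openclaw:latest   # assumed image name
    restart: unless-stopped
    env_file: .env                    # the API keys edited above
    ports:
      - "18789:18789"                 # default gateway port
    volumes:
      - ./:/home/openclaw/.openclaw   # assumed mount point
```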
Local vs Cloud: Full Comparison
| Dimension | Cloud | Local |
|-----------|-------|-------|
| Startup time | ~5 minutes | 30+ minutes |
| Security hardening | Built-in | Manual config required |
| Data isolation | Natural (separate server) | Requires container setup |
| Customization | Platform-limited | Full control |
| Cost | $24+/month recurring | One-time hardware cost |
| Offline capability | None | Yes (with Ollama) |
| Maintenance | Managed by platform | Self-managed |
| Privacy | Data on cloud server | Data stays on your device |
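One way to read the cost row: against a $24/month droplet, the break-even point for one-time hardware is simple arithmetic. The $599 figure below is an illustrative Mac mini price, not a number from the docs:

```shell
# Break-even months for local hardware vs. a $24/month cloud droplet.
# hardware_cost is an assumed example price.
hardware_cost=599
cloud_monthly=24
breakeven=$(( (hardware_cost + cloud_monthly - 1) / cloud_monthly ))  # ceiling division
echo "break-even after ~$breakeven months"
```

Past that point, local hardware is effectively free to run (electricity aside), which is why the table scores local as "one-time cost".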
Choose Cloud If:
- You want to get started in minutes with zero configuration
- You're connecting it to personal data you don't want on your main machine
- You need team collaboration or always-on availability
- You don't have spare hardware or ops experience
Choose Local If:
- Privacy is a hard requirement β data must never leave your network
- You want to use it with local LLMs (Ollama + Qwen3.5, Llama 3, etc.) for zero API costs
- You're a developer who wants to inspect and modify OpenClaw's source code
- You already have spare hardware (Mac mini, NAS, old PC)
Real-World Use Cases
Automated Code Review
"Traverse all .tsx files in src/components, check for useEffect missing
dependencies, and save the risk summary to review_report.md."
OpenClaw reads files, calls the LLM for analysis, and writes the report: all locally, all private.
Remote Commander via Telegram
Configure the Telegram integration, then from your phone:
"Hey Claw, check the disk space on my home NAS. Alert me if below 10%."
OpenClaw runs df -h on your server, analyzes the results, and replies via Telegram.
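The threshold check itself is easy to sketch in shell. The check_disk helper and the 10% threshold are illustrative; the agent composes something equivalent from the df output:

```shell
# Illustrative threshold check; $1 = percent of disk free, $2 = alert threshold.
check_disk() {
  if [ "$1" -lt "$2" ]; then
    echo "ALERT: only ${1}% free"
  else
    echo "OK: ${1}% free"
  fi
}

# The free percentage would come from something like:
#   used=$(df -P / | awk 'NR==2 {gsub("%",""); print $5}')
#   free=$((100 - used))
check_disk 7 10
```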
Local AI Workstation (Mac Mini Setup)
Pair OpenClaw with Qwen3.5-9B running on a Mac mini: you get a capable AI work system for less than the cost of a junior employee's monthly salary, running completely offline.
Pairing with Qwen3.5 Small Models
If you're running OpenClaw locally, the newly released Qwen3.5 series is an excellent model choice:
| Model | Use case | VRAM required |
|-------|----------|---------------|
| Qwen3.5-9B | Server-side tasks, reasoning | ~8GB |
| Qwen3.5-4B | Lightweight agent core | ~4GB |
| Qwen3.5-2B | Mobile/edge devices | ~2GB |
# Pull Qwen3.5 via Ollama
ollama pull qwen3.5:9b
# Point OpenClaw to it
openclaw config set agents.defaults.model.primary "ollama/qwen3.5:9b"
Troubleshooting
Gateway won't start
# Check if port 18789 is in use
lsof -i :18789
# Kill if needed
kill -9 <PID>
Ollama models not detected
# Verify Ollama is running
curl http://localhost:11434/api/tags
# Start if needed
ollama serve
Node.js version too low
# Install Node.js 22 via nvm
nvm install 22
nvm use 22
Summary
OpenClaw represents a meaningful shift: AI that doesn't just respond, but acts. Whether you choose cloud for speed and security or local for privacy and cost, the setup process is straightforward.
The local + Ollama combination is particularly compelling in 2026: with models like Qwen3.5-9B matching 120B-scale performance in a package that runs on a Mac mini, the economics of a private AI workstation have never been better.
Resources:
- Official docs: docs.openclaw.ai
- Ollama: ollama.ai
- GitHub: github.com/openclaw/openclaw