# Deployment

You can run your stack in three ways: Docker (all services in containers), direct install (OpenClaw on host, companion services in Docker), or bare-metal (native + Docker hybrid). This page covers all three.
## Local / Docker Desktop

The simplest way to run your OpenClaw stack. Ideal for development, testing, and personal use. Requires Docker Desktop (macOS/Windows) or Docker Engine (Linux).
### Quick Start

```bash
# Generate your stack
npx create-better-openclaw my-stack --preset researcher --yes

# Start it up
cd my-stack
cp .env.example .env  # Add your API keys
docker compose up -d

# Check status
docker compose ps
docker compose logs -f openclaw-gateway
```

### Configuration

#### Environment Variables

The generated `.env.example` file contains all configuration variables. Copy it to `.env` and fill in your values:
```bash
# Required: At least one LLM API key
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Optional: Service-specific config
QDRANT_API_KEY=            # Auto-generated if --generateSecrets
REDIS_PASSWORD=            # Auto-generated if --generateSecrets
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=   # Auto-generated if --generateSecrets
```

#### Ports

Default port assignments for common services:
| Service | Port | URL |
|---|---|---|
| OpenClaw Gateway | 8080 | http://localhost:8080 |
| OpenClaw Web UI | 3000 | http://localhost:3000 |
| Qdrant | 6333 | http://localhost:6333/dashboard |
| Redis | 6379 | — |
| n8n | 5678 | http://localhost:5678 |
| Grafana | 3001 | http://localhost:3001 |
| SearXNG | 8888 | http://localhost:8888 |
| Ollama | 11434 | http://localhost:11434 |
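Before starting the stack, you can quickly verify that none of these default host ports are already taken. A minimal sketch using bash's `/dev/tcp` pseudo-device (bash-specific, not POSIX sh); the `check_ports` helper is illustrative, not part of the generated stack:

```bash
# check_ports PORT... : report whether each host port is already bound
check_ports() {
  local port
  for port in "$@"; do
    # bash's /dev/tcp opens a TCP connection; success means something is listening
    if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
      echo "port $port: in use"
    else
      echo "port $port: free"
    fi
  done
}

# Default host ports from the table above
check_ports 8080 3000 6333 6379 5678 3001 8888 11434
```

Any port reported as "in use" will need a different host-side mapping (see the Port Conflict section under Troubleshooting).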
### Common Operations

```bash
# Start all services
docker compose up -d

# Stop all services
docker compose down

# Stop and remove volumes (⚠️ deletes data)
docker compose down -v

# Restart a single service
docker compose restart qdrant

# View logs for a specific service
docker compose logs -f n8n

# Scale a service (if supported)
docker compose up -d --scale worker=3
```

### Helper Scripts

Generated stacks include convenience scripts in the `scripts/` directory:
| Script | Description |
|---|---|
| `scripts/start.sh` | Start all services with health check validation |
| `scripts/stop.sh` | Gracefully stop all services |
| `scripts/update.sh` | Pull latest images and restart |
| `scripts/backup.sh` | Back up all named volumes to tar archives |
| `scripts/status.sh` | Show service status, resource usage, and disk |
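The scripts' contents vary by generated stack, but the volume-backup pattern behind `scripts/backup.sh` can be sketched as below. The `alpine`-based `tar` approach and the `DRY_RUN` switch are illustrative assumptions, not the actual script:

```bash
# backup_volume VOLUME [DEST] : tar a named Docker volume into DEST
# Sketch only: mounts the volume read-only into a throwaway container.
# Set DRY_RUN=1 to print the command instead of running it (no Docker needed).
backup_volume() {
  local volume=$1 dest=${2:-./backups}
  local cmd=(docker run --rm
    -v "$volume:/data:ro"
    -v "$dest:/backup"
    alpine tar czf "/backup/$volume.tar.gz" -C /data .)
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "${cmd[*]}"
  else
    mkdir -p "$dest" && "${cmd[@]}"
  fi
}
```

Mounting the volume read-only (`:ro`) keeps a live service from being corrupted mid-backup, though stopping the service first is still safer for databases.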
### Troubleshooting

#### Port Conflict

If a port is already in use, edit `docker-compose.yml` and change the host port:
```yaml
services:
  qdrant:
    ports:
      - "6334:6333"  # Changed host port to 6334
```

#### Service Won't Start
```bash
# Check the service logs
docker compose logs qdrant

# Check resource usage
docker stats

# Restart with fresh state
docker compose down
docker compose up -d
```

#### Insufficient Memory
Docker Desktop has a default memory limit. Increase it in Docker Desktop → Settings → Resources → Memory. Recommended minimums:

- Minimal stack: 2 GB
- Standard stack: 4 GB
- Full stack or AI services: 8 GB+
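If you cannot give Docker more memory, you can instead cap individual services so the heaviest ones do not starve the rest. A hedged sketch using a `docker-compose.override.yml` file, which Compose merges automatically; service names come from the port table above and the limit values are only examples:

```yaml
# docker-compose.override.yml : per-service memory caps (values are examples)
services:
  qdrant:
    mem_limit: 1g
  ollama:
    mem_limit: 6g   # local LLMs need the most headroom
```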
## Direct Install (OpenClaw on Host)

With the direct install method, OpenClaw itself runs directly on your host (not in Docker), while companion services such as PostgreSQL, Redis, and Qdrant still run in Docker containers.
This approach is ideal when you:
- Need direct access to host GPU drivers for AI workloads
- Want the OpenClaw process to have full access to host filesystem
- Prefer a lighter Docker footprint
- Are running on a machine where Docker performance overhead matters
### Quick Start

```bash
# Generate a stack with direct install
npx create-better-openclaw my-stack --openclaw-install direct --yes
cd my-stack
cp .env.example .env  # Add your API keys

# Install OpenClaw on the host
./scripts/install-openclaw.sh

# Start companion services
docker compose up -d

# Run onboarding
openclaw onboard
```

The generated `docker-compose.yml` will not contain the `openclaw-gateway` or `openclaw-cli` services — only your selected companion services.
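Because OpenClaw now runs on the host, it reaches companion containers through their published `localhost` ports rather than compose service names. The variable names below are illustrative only; check your generated `.env.example` for the real ones:

```bash
# Illustrative only: actual variable names come from .env.example
REDIS_URL=redis://localhost:6379
QDRANT_URL=http://localhost:6333
```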
## Bare-metal (native + Docker hybrid)

With `--deployment-type bare-metal` (or by choosing "Bare-metal" in the stack builder), you get a native + Docker hybrid:

- Services that have a native recipe (e.g. Redis on Linux) run on the host via install/run scripts in `native/` (e.g. `native/install-linux.sh`).
- All other services (including the OpenClaw gateway) run in Docker. The generated `docker-compose.yml` only includes those; the gateway connects to native services via `host.docker.internal`.
- A top-level installer (`install.sh` or `install.ps1`) installs and starts native services first, then runs `docker compose up` for the rest.
Currently Redis supports a native Linux recipe (apt/dnf + systemd). More services may be added over time. Node/Python apps, La Suite Meet, Ollama, and similar remain Docker-only.
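On Linux, `host.docker.internal` is not defined by default, so the generated compose file presumably maps it via Docker's `host-gateway` alias (available since Docker 20.10). A hedged sketch of what that fragment might look like; the service and environment variable names are illustrative:

```yaml
services:
  openclaw-gateway:
    extra_hosts:
      - "host.docker.internal:host-gateway"  # needed on Linux; a no-op elsewhere
    environment:
      # Illustrative: point the gateway at the native Redis on the host
      REDIS_URL: redis://host.docker.internal:6379
```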
```bash
# Generate a bare-metal stack (e.g. Redis native, rest in Docker)
npx create-better-openclaw my-stack --preset minimal --deployment-type bare-metal --platform linux/amd64 --yes
cd my-stack
cp .env.example .env

./install.sh        # Linux/macOS: installs native services, then docker compose up
# or: .\install.ps1 # Windows
```

## Security Notes
- Local stacks bind to `localhost` by default and are not exposed to the network.
- Never commit `.env` files to version control.
- Use `--generateSecrets` even for local dev to practice good habits.
## Next Steps
- VPS Deployment — deploy to production
- Homelab Deployment — ARM64, GPU passthrough
- Service Catalog — explore all available services