# Homelab Deployment
Run your OpenClaw stack on your own hardware — bare metal servers, NAS devices, Raspberry Pis, or virtualized environments. This guide covers ARM64 support, GPU passthrough, and platform-specific tips.
## Supported Platforms
| Platform | Architecture | Notes |
|---|---|---|
| x86_64 bare metal / VM | AMD64 | Full service support |
| Apple Silicon (M1/M2/M3) | ARM64 | Runs via Docker Desktop or OrbStack |
| Raspberry Pi 4/5 | ARM64 | 64-bit OS required, limited by RAM |
| Unraid | AMD64 | Docker via Unraid template system |
| Proxmox LXC/VM | AMD64/ARM64 | LXC containers or full VMs |
| TrueNAS Scale | AMD64 | Docker via TrueNAS apps |
## ARM64 Setup
Most services in better-openclaw have ARM64 builds. Use the `--platform` flag to ensure compatibility:

```bash
npx create-better-openclaw my-stack \
  --preset minimal \
  --platform linux/arm64 \
  --yes
```

The CLI will warn you if any selected services don't have ARM64 images. You can check platform support for each service in the Service Catalog.
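If you need to pin the architecture for a single service by hand, Compose also accepts a per-service `platform` key. A minimal sketch, assuming a `redis` service like the ones the CLI generates (the image tag is illustrative):

```yaml
services:
  redis:
    image: redis:7-alpine
    # Force the ARM64 image variant on mixed-architecture hosts
    platform: linux/arm64
```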
### Raspberry Pi Specifics
- Use Raspberry Pi OS 64-bit (Lite is fine for headless)
- Pi 4 with 4 GB RAM works for minimal stacks; 8 GB recommended for more services
- Pi 5 is significantly faster for Docker workloads
- Use an SSD (via USB 3.0 or NVMe hat) instead of SD card for data volumes
- Avoid GPU-heavy services (Ollama, Whisper) — the Pi lacks both the memory and a supported GPU
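Before installing anything, it is worth confirming the OS is actually 64-bit; a 32-bit Raspberry Pi OS reports `armv7l` from `uname -m` and cannot run ARM64 images. A small sketch:

```bash
# is_arm64: succeeds when the reported machine architecture is 64-bit ARM
is_arm64() { [ "$1" = "aarch64" ] || [ "$1" = "arm64" ]; }

if is_arm64 "$(uname -m)"; then
  echo "64-bit ARM OS detected, OK to install Docker"
else
  echo "Not an ARM64 OS, reflash with Raspberry Pi OS 64-bit first"
fi
```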
```bash
# Install Docker on Raspberry Pi OS
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER

# Install Docker Compose plugin
sudo apt install -y docker-compose-plugin

# Generate a lightweight stack
npx create-better-openclaw pi-stack \
  --services redis,searxng \
  --skills researcher \
  --platform linux/arm64 \
  --yes

cd pi-stack
cp .env.example .env
docker compose up -d
```
## GPU Passthrough

For AI services like Ollama and Whisper, GPU acceleration dramatically improves performance. better-openclaw supports NVIDIA GPU passthrough.
### Prerequisites
- NVIDIA GPU (GTX 1060 or better recommended)
- NVIDIA drivers installed on the host
- `nvidia-container-toolkit` installed
### Install NVIDIA Container Toolkit
```bash
# Ubuntu/Debian
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt update
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Verify
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```

### Generate a GPU-Enabled Stack
```bash
npx create-better-openclaw ai-stack \
  --services ollama,whisper,redis,qdrant \
  --skills local-ai,memory,voice \
  --gpu \
  --yes
```

The generated `docker-compose.yml` includes the NVIDIA runtime configuration:
```yaml
services:
  ollama:
    image: ollama/ollama:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    volumes:
      - ollama_models:/root/.ollama
```
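On hosts with more than one GPU, you can dedicate a specific card to a service by swapping `count: all` for `device_ids`. A sketch (the GPU index is illustrative):

```yaml
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]  # pin to the first GPU; mutually exclusive with count
              capabilities: [gpu]
```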
### AMD ROCm GPUs

AMD GPU support is experimental. For Ollama with ROCm:

```yaml
# Use the ROCm image tag instead
image: ollama/ollama:rocm
```
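ROCm containers reach the GPU through kernel device nodes rather than the NVIDIA runtime, so the service typically also needs `/dev/kfd` and `/dev/dri` passed through. A hedged sketch of the compose changes:

```yaml
services:
  ollama:
    image: ollama/ollama:rocm
    devices:
      - /dev/kfd   # ROCm compute interface
      - /dev/dri   # GPU render nodes
    volumes:
      - ollama_models:/root/.ollama
```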
## Unraid

Unraid runs Docker natively through its web UI, but you can also use docker-compose via the command line:
```bash
# SSH into Unraid (or use the terminal in the web UI)
cd /mnt/user/appdata/

# Generate the stack
npx create-better-openclaw openclaw-stack \
  --preset researcher \
  --yes

cd openclaw-stack
cp .env.example .env
nano .env  # Configure your API keys

# Start with docker-compose
docker compose up -d
```

### Unraid Tips
- Store data volumes on the array or a cache pool via `/mnt/user/appdata/`
- Use the Compose Manager plugin for web UI management
- GPU passthrough works if the GPU is not assigned to a VM
- Set up a cron job under Settings → Scheduler for automated backups
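The Scheduler entry ultimately runs an ordinary cron line; assuming the stack lives under `/mnt/user/appdata/openclaw-stack` and uses the bundled `scripts/backup.sh`, a nightly job might look like:

```cron
# Nightly at 03:00: back up the stack (paths are illustrative)
0 3 * * * cd /mnt/user/appdata/openclaw-stack && ./scripts/backup.sh >> /var/log/openclaw-backup.log 2>&1
```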
## Proxmox
### Option A: LXC Container (Recommended)
LXC containers are lightweight and share the host kernel. Great for Docker workloads:
```bash
# Create a privileged LXC container with Docker support
# In the Proxmox web UI:
#   1. Create CT → Template: ubuntu-22.04
#   2. Set Resources: 4 GB RAM, 2 cores, 50 GB disk
#   3. Under Options → Features: enable "nesting" and "keyctl"

# Inside the container:
curl -fsSL https://get.docker.com | sh
apt install -y docker-compose-plugin

# Generate and start your stack
npx create-better-openclaw my-stack --preset researcher --yes
cd my-stack && docker compose up -d
```
### Option B: Full VM

Use a VM if you need GPU passthrough or full isolation:
- Create a VM with Ubuntu 22.04, allocate resources as needed
- For GPU passthrough: pass through the entire PCIe device in Proxmox → VM → Hardware → Add → PCI Device
- Follow the standard VPS deployment guide for the rest
### Proxmox Tips
- Use ZFS for automatic snapshots before updates
- Set up Proxmox Backup Server for automated CT/VM backups
- Use `pct push`/`pct pull` to transfer files to LXC containers
## Network Configuration
For accessing your stack from other devices on your LAN:
```caddyfile
# Option 1: Access via host IP (simplest)
# http://192.168.1.100:8080

# Option 2: Add a local DNS entry (Pi-hole, AdGuard Home)
# openclaw.local → 192.168.1.100

# Option 3: Reverse proxy with local HTTPS — use Caddy with a local CA:
openclaw.local {
    tls internal
    reverse_proxy openclaw-gateway:8080
}
```

## Persistent Storage Best Practices
- Always use named Docker volumes (the default) — never bind-mount to the SD card on a Pi
- Run backups regularly with `./scripts/backup.sh`
- Use an SSD or NVMe for database volumes (Qdrant, Postgres)
- Monitor disk usage with `docker system df` and prune unused images periodically
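To make the disk-usage check automatable, a small POSIX shell helper can flag when the filesystem holding your volumes is getting full (the threshold and mount point are illustrative):

```bash
# warn_if_full: prints WARN when disk usage (percent) meets or exceeds the limit
warn_if_full() {
  used="$1"; limit="${2:-80}"
  if [ "$used" -ge "$limit" ]; then
    echo "WARN: disk ${used}% used (limit ${limit}%)"
  else
    echo "OK: disk ${used}% used"
  fi
}

# Check the root filesystem (POSIX df; swap / for your Docker data mount)
usage="$(df -P / | awk 'NR==2 { gsub(/%/, ""); print $5 }')"
warn_if_full "$usage"
```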
## Power Management
For 24/7 homelab setups, ensure your stack starts on boot:
```bash
# Enable Docker to start on boot
sudo systemctl enable docker
```

Restart policies are already included by default in the generated `docker-compose.yml`:

```yaml
services:
  openclaw-gateway:
    restart: unless-stopped
```

## Next Steps
- Local Docker Guide — development setup basics
- VPS Deployment — production cloud deployment
- Service Catalog — check platform support for each service