
Host machine

| Resource | Minimum | Recommended |
|---|---|---|
| OS | Linux x86_64 | Linux x86_64 (Ubuntu 22.04+) |
| CPU | 4 cores | 8+ cores |
| RAM | 8 GB | 16+ GB |
| Disk | 30 GB free | 100+ GB SSD (the workspace image alone is ~10 GB) |
| Docker | 24.0+ with Compose v2 | Latest stable |
macOS and Windows work for local development via Docker Desktop, but production deployments should run on Linux: Docker Desktop imposes resource ceilings and has weaker networking for multi-container stacks.
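A quick way to check these minimums on a prospective host (a sketch using standard Linux tools, nothing project-specific; adjust the thresholds if your targets differ):

```shell
# Preflight sketch: report host specs against the minimums above.
echo "CPU cores: $(nproc)"                                         # want >= 4
echo "RAM (GB):  $(free -g | awk '/^Mem:/ {print $2}')"            # want >= 8
echo "Disk free: $(df -h --output=avail / | tail -1 | tr -d ' ')"  # want >= 30G
command -v docker >/dev/null && docker --version || echo "Docker not found"  # want 24.0+
```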

Per-chat sandbox resources

Each active chat gets its own container. Defaults (overridable via .env):
| Limit | Default | Env var |
|---|---|---|
| Memory | 2 GB | CONTAINER_MEM_LIMIT=2g |
| CPU | 1.0 core | CONTAINER_CPU_LIMIT=1.0 |
Budget for concurrent users: N active chats × 2 GB RAM, plus headroom for the host services themselves. Tune the limits in .env (see Configuration).
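For example, to give each sandbox more headroom, override the defaults in .env (variable names from the table above; the 4g/2.0 values are purely illustrative):

```shell
# Illustrative .env overrides. With these values, 10 concurrent chats
# budget 10 x 4 GB = 40 GB of RAM for sandboxes alone, plus host overhead.
CONTAINER_MEM_LIMIT=4g
CONTAINER_CPU_LIMIT=2.0
```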

External services you’ll need

  • An OpenAI-compatible LLM provider. OpenAI, Anthropic (via LiteLLM), OpenRouter, Azure, self-hosted vLLM — anything that speaks POST /chat/completions. Required for the main chat.
  • (Optional) Anthropic API for the sub_agent (Claude Code) tool. Set ANTHROPIC_AUTH_TOKEN to enable.
  • (Optional) A vision model for the describe-image skill. Set VISION_API_KEY, VISION_API_URL, VISION_MODEL.
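Before wiring a provider in, a raw smoke test can confirm it speaks the expected dialect. This is a sketch: OPENAI_API_BASE, OPENAI_API_KEY, and the model name are placeholders for whatever your provider (or LiteLLM proxy) expects.

```shell
# Smoke-test an OpenAI-compatible endpoint with a minimal chat/completions call.
payload='{"model":"gpt-4o-mini","messages":[{"role":"user","content":"ping"}]}'
curl -sS --max-time 10 "${OPENAI_API_BASE:-https://api.openai.com/v1}/chat/completions" \
     -H "Authorization: Bearer ${OPENAI_API_KEY}" \
     -H "Content-Type: application/json" \
     -d "$payload" || echo "request failed (check network, base URL, key)"
```

A JSON response with a choices array means the provider is compatible; an HTML error page usually means the base URL is wrong.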

Ports

| Service | Default port | Env var |
|---|---|---|
| Computer Use Server (MCP) | 8081 | MCP_PORT |
| Open WebUI | 3000 | OPENWEBUI_PORT |
Both are exposed on the host by the bundled Compose files. Put a reverse proxy in front for TLS.
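A minimal nginx sketch for that proxy, assuming certificates are already provisioned (the domain and certificate paths are placeholders; cu.your-domain.com matches the PUBLIC_BASE_URL example on this page):

```nginx
# Hypothetical server block: TLS termination in front of the Computer Use Server.
server {
    listen 443 ssl;
    server_name cu.your-domain.com;

    ssl_certificate     /etc/letsencrypt/live/cu.your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cu.your-domain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8081;        # MCP_PORT on the host
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;  # keep WebSocket/SSE upgrades working
        proxy_set_header Connection "upgrade";
    }
}
```

A matching block for Open WebUI on port 3000 follows the same shape.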

A note on PUBLIC_BASE_URL

The single most common mistake self-hosters make is leaving PUBLIC_BASE_URL at http://localhost:8081 when deploying behind a domain. It must be a URL your browser can reach:
# Local dev (Docker Desktop)
PUBLIC_BASE_URL=http://localhost:8081

# LAN / production
PUBLIC_BASE_URL=https://cu.your-domain.com
It’s baked into the system prompt and into every file link. See Configuration.
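A quick sanity check, run from the network where your browser lives (not from inside the containers), as a sketch:

```shell
# If no status line comes back, browsers can't reach the URL either.
status=$(curl -sSI --max-time 5 "${PUBLIC_BASE_URL:-http://localhost:8081}" 2>/dev/null | head -1)
echo "${status:-unreachable: check PUBLIC_BASE_URL, DNS, and the proxy}"
```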

Next

Install quickstart

docker compose up --build and you’re running.