## Host machine
| | Minimum | Recommended |
|---|---|---|
| OS | Linux x86_64 | Linux x86_64 (Ubuntu 22.04+) |
| CPU | 4 cores | 8+ cores |
| RAM | 8 GB | 16+ GB |
| Disk | 30 GB free | 100+ GB SSD (workspace image alone is ~10 GB) |
| Docker | 24.0+ with Compose v2 | Latest stable |
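To confirm the Docker prerequisite, these two commands report the installed versions (the comments describe what to look for):

```shell
docker --version          # needs 24.0 or newer
docker compose version    # needs Compose v2 (the plugin, not the legacy docker-compose binary)
```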
## Per-chat sandbox resources
Each active chat gets its own container. Defaults (overridable via `.env`):
| Limit | Default | Env var |
|---|---|---|
| Memory | 2 GB | CONTAINER_MEM_LIMIT=2g |
| CPU | 1.0 core | CONTAINER_CPU_LIMIT=1.0 |
Plan for N concurrent chats × 2 GB of RAM, plus host overhead. Tune the limits in `.env` (see Configuration).
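For example, a `.env` that halves the defaults to fit more concurrent chats on a smaller host might look like this (the values are illustrative, not recommendations):

```shell
# Per-chat container limits. Docker accepts b/k/m/g suffixes for memory
# and fractional values for CPUs.
CONTAINER_MEM_LIMIT=1g
CONTAINER_CPU_LIMIT=0.5
```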
## External services you’ll need
- An OpenAI-compatible LLM provider. OpenAI, Anthropic (via LiteLLM), OpenRouter, Azure, self-hosted vLLM: anything that speaks `POST /chat/completions`. Required for the main chat.
- (Optional) An Anthropic API key for the `sub_agent` (Claude Code) tool. Set `ANTHROPIC_AUTH_TOKEN` to enable it.
- (Optional) A vision model for the `describe-image` skill. Set `VISION_API_KEY`, `VISION_API_URL`, and `VISION_MODEL`.
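A quick way to check that a provider really speaks the Chat Completions shape is a one-off `curl`. This is a sketch: the base URL, key variable, and model name are placeholders you'd substitute for your provider's values.

```shell
# Smoke-test an OpenAI-compatible endpoint.
# LLM_BASE_URL, LLM_API_KEY, and the model name are placeholders.
curl -sS "$LLM_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $LLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'
```

A JSON response with a `choices` array means the provider is compatible; an HTML error page or a 404 usually means the base URL is missing or doubling its `/v1` path segment.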
## Ports
| Service | Default port | Notes |
|---|---|---|
| Computer Use Server (MCP) | 8081 | MCP_PORT |
| Open WebUI | 3000 | OPENWEBUI_PORT |
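If either default collides with something already running on the host, both ports can be remapped in `.env` (the values here are examples):

```shell
# Remap host ports to avoid collisions.
MCP_PORT=8082
OPENWEBUI_PORT=3001
```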
## A note on `PUBLIC_BASE_URL`
The single most common self-hosting mistake is leaving `PUBLIC_BASE_URL` at `http://localhost:8081` when deploying behind a domain. It must be a URL your browser can reach, not one that is only valid from inside the server.
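A minimal sketch of the fix, assuming the stack sits behind a reverse proxy at a hypothetical domain:

```shell
# Wrong behind a domain: "localhost" resolves on the visitor's machine,
# not the server, so the browser can never reach it.
# PUBLIC_BASE_URL=http://localhost:8081

# Right: the externally reachable URL (example domain).
PUBLIC_BASE_URL=https://agent.example.com
```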
## Next
Install quickstart: `docker compose up --build` and you’re running.