0. Prerequisites
Docker 24.0+ with Compose v2. See Requirements for hardware.

1. Clone and configure
Clone the repository, then create `.env` and set at minimum:
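A minimal `.env` sketch, using the variables mentioned elsewhere in this guide (`OPENAI_API_KEY`, `ADMIN_EMAIL`, `ADMIN_PASSWORD`); the values below are placeholders and the full variable list lives in the Configuration reference:

```shell
# .env — placeholder values, see Configuration for the full reference
OPENAI_API_KEY=sk-...
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD=change-me
```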
2. Start the Computer Use Server
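Assuming the `docker-compose.yml` in the repo root (the file that runs the server, per the table further down), a typical invocation is:

```shell
# Build and start the orchestrator in the foreground
docker compose up --build
```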
When the log shows `Uvicorn running on http://0.0.0.0:8081`, the orchestrator is live.
Smoke test in another terminal:
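A simple check is an HTTP request against the published port; note that `/health` here is an assumed path, not a documented route — substitute the server's actual endpoint:

```shell
# Hypothetical health-check path — adjust to the server's real route
curl -i http://localhost:8081/health
```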
3. Start Open WebUI
In a second terminal, start Open WebUI. The stack ships with `function_calling: native` + `stream_response: true` as the default model params.
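Using the second compose file named in the table below, the start command would be:

```shell
# Bring up Open WebUI via the dedicated compose file
docker compose -f docker-compose.webui.yml up
```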
Open http://localhost:3000 and sign in with `ADMIN_EMAIL` / `ADMIN_PASSWORD`.
4. Add a model
- Go to Admin → Settings → Connections and add your LLM provider (the same `OPENAI_API_KEY` you set in `.env`, or a separate dedicated key).
- Go to Workspace → Models and enable at least one model.
- Verify Function Calling = Native and Stream Chat Response = On — set globally in Admin → Settings → Models → Advanced Params if you want these to be the default for every model.
5. Chat
Start a new chat and ask for something tool-worthy:

> "Create a professional cover letter for a Senior Software Engineer applying to a startup in .docx, three paragraphs."

You should see:
- The model picks the `docx` skill.
- `bash_tool` and `create_file` calls execute inside a fresh sandbox container.
- A file link appears. A preview iframe auto-inserts below it.
Two compose files, on purpose
| File | What it runs | Talks to |
|---|---|---|
| `docker-compose.yml` | Computer Use Server (`computer-use-server:8081`) | Docker socket, per-chat sandboxes |
| `docker-compose.webui.yml` | Open WebUI | `computer-use-server:8081` over the shared Docker network |

Both files attach to a shared Docker network (`computer-use-net`) that the server creates on first `up`.
Next

- Configuration: full env var reference.
- Docker details: why the server needs the Docker socket, resource limits, image layout.
- Model settings: why Function Calling = Native and streaming matter.
- Open WebUI traps: four things that silently break Computer Use in custom deployments.
