There are three good starting points. Pick the one that matches your goal.

1. Just want to try it

Open chat.yambr.com

Hosted Open WebUI with Computer Use already installed and wired up. Sign in, pick a model, and ask the assistant to “create a PowerPoint about Q1 sales”; you’ll see skills, the live browser, and file previews on a real example.
Good for: one-off experiments, demos, seeing what the system can do before committing.

2. Drive it from your own app (Yambr as MCP provider)

Yambr publishes only the Computer Use MCP endpoint — tools, sandboxes, file hosting. You bring your own LLM provider and wire Yambr in as one more tool server.
There is no public chat/completions endpoint on api.yambr.com. LLM requests go to your provider; your Yambr key unlocks the MCP tools.
Step 1: Get an API key

Sign in at app.yambr.com with GitHub or Google. New accounts need approval (usually quick; ping us on Telegram if it stalls). The default budget is $10 per 30 days. Copy the key value when it’s created: it is shown only once. See API keys.
Step 2: Register the MCP endpoint in your client

The endpoint is https://api.yambr.com/mcp/computer_use; auth is Authorization: Bearer <yambr-key>. Point any MCP-capable client at it: Claude Desktop, OpenAI Agents SDK, LangChain, Cursor, n8n, self-hosted LiteLLM, Open WebUI.

Example with the OpenAI Agents SDK (your OpenAI key drives the model; Yambr provides the tools):
import asyncio
import os

from agents import Agent, Runner
from agents.mcp import MCPServerStreamableHttp

async def main() -> None:
    # The context manager connects to the MCP server and cleans up on exit.
    async with MCPServerStreamableHttp(
        params={
            "url": "https://api.yambr.com/mcp/computer_use",
            "headers": {"Authorization": f"Bearer {os.environ['YAMBR_API_KEY']}"},
        },
        name="yambr-computer-use",
    ) as yambr_mcp:
        agent = Agent(
            name="computer-user",
            model="gpt-4o",                # ← YOUR model, YOUR provider
            mcp_servers=[yambr_mcp],
        )
        result = await Runner.run(agent, "Build me a landing page for a coffee shop")
        print(result.final_output)

asyncio.run(main())
Full walkthrough: LiteLLM gateway.
Step 3: Preview artifacts on cu.yambr.com

Files the model creates (PDFs, screenshots, HTML) are served from https://cu.yambr.com/files/{chat_id}/... and can be embedded as iframes. Artifact URLs are public (the URL itself is the access token) and scoped to the chat id. See cu.yambr.com.
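Because artifact URLs are plain strings following the pattern above, embedding is just string work. A minimal sketch; the chat id "abc123", the filename, and the helper names are hypothetical examples, not values from the API:

```python
ARTIFACT_BASE = "https://cu.yambr.com/files"

def artifact_url(chat_id: str, path: str) -> str:
    """Public URL for a file the model created in a given chat."""
    return f"{ARTIFACT_BASE}/{chat_id}/{path}"

def iframe_embed(chat_id: str, path: str, height: int = 480) -> str:
    """HTML snippet for previewing an artifact inline."""
    return (
        f'<iframe src="{artifact_url(chat_id, path)}" '
        f'width="100%" height="{height}"></iframe>'
    )

print(artifact_url("abc123", "report.pdf"))
# https://cu.yambr.com/files/abc123/report.pdf
```

Remember that anyone holding such a URL can fetch the file, so treat artifact links like shared secrets.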

3. Self-host everything

Run the whole stack on your own machine. You’ll need Docker.
git clone https://github.com/Yambr/open-computer-use.git
cd open-computer-use
cp .env.example .env
# Edit .env — at minimum set OPENAI_API_KEY (any OpenAI-compatible provider)

# 1. Start the Computer Use Server (builds the workspace image on first run, ~15 min)
docker compose up --build

# 2. In another terminal, start Open WebUI
docker compose -f docker-compose.webui.yml up --build
Open http://localhost:3000. After adding a model in Open WebUI, set Function Calling = Native and Stream Chat Response = On; without those two settings, tools don’t fire. Details: Model settings. Full install guide: Self-hosting · Configuration · Docker details.
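The first `docker compose up` can take a while (the workspace image build alone is ~15 min), so a readiness poll saves guesswork. A sketch, not part of the repo; the `wait_for` helper is ours, and the port is the Open WebUI default from the steps above:

```python
import time
import urllib.error
import urllib.request

def wait_for(url: str, timeout_s: float = 120.0, interval_s: float = 3.0) -> bool:
    """Poll url until the server answers with any HTTP status, or give up."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=5)
            return True
        except urllib.error.HTTPError:
            return True  # server answered, even if with an error status
        except (urllib.error.URLError, OSError):
            time.sleep(interval_s)  # not up yet; retry
    return False

if wait_for("http://localhost:3000", timeout_s=2, interval_s=0.5):
    print("Open WebUI is up")
else:
    print("still starting; check `docker compose logs`")
```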

What next?

How it's built

Computer Use Server, sandbox container per chat, six redundant MCP-native channels for the system prompt.

Skills catalogue

docx, xlsx, pptx, pdf, playwright-cli, sub-agent, frontend-design, and more.

MCP API reference

Initialize, list tools, call tools, browse resources — plain JSON-RPC over Streamable HTTP.

Integrations

Open WebUI, Claude Desktop, LiteLLM, n8n, custom clients.