1. Just want to try it
Open chat.yambr.com
Hosted Open WebUI with Computer Use already installed and wired up. Sign in, pick a model, ask the assistant to “create a PowerPoint about Q1 sales” — you’ll see skills, the live browser, and file previews on a real example.
2. Drive it from your own app (Yambr as MCP provider)
Yambr publishes only the Computer Use MCP endpoint — tools, sandboxes, file hosting. You bring your own LLM provider and wire Yambr in as one more tool server.

Get an API key
Sign in at app.yambr.com with GitHub or Google. New accounts require approval (usually quick — ping us on Telegram if it stalls); the default budget is $10 per 30 days. Copy the key value when it’s created — it is shown only once. See API keys.
Register the MCP endpoint in your client
The endpoint is https://api.yambr.com/mcp/computer_use; auth is Authorization: Bearer <yambr-key>. Point any MCP-capable client at it — Claude Desktop, OpenAI Agents SDK, LangChain, Cursor, n8n, self-hosted LiteLLM, Open WebUI. Full walkthrough: LiteLLM gateway.

Example with the OpenAI Agents SDK (your OpenAI key drives the model; Yambr provides the tools):

Preview artifacts on cu.yambr.com
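Picking up the Agents SDK example above — a minimal sketch, assuming the `openai-agents` package is installed (`pip install openai-agents`), a `YAMBR_API_KEY` environment variable holds your key, and the SDK's `MCPServerStreamableHttp` / `Agent` / `Runner` APIs match its current docs; treat it as illustrative, not canonical:

```python
"""Illustrative sketch: OpenAI Agents SDK driving Yambr's Computer Use tools.

Assumptions: `pip install openai-agents`, plus OPENAI_API_KEY and
YAMBR_API_KEY in the environment. Class and parameter names follow the
openai-agents documentation; adjust if your SDK version differs.
"""
import asyncio
import os

YAMBR_MCP_URL = "https://api.yambr.com/mcp/computer_use"


async def main() -> None:
    # Imported lazily so this module still loads without the SDK installed.
    from agents import Agent, Runner
    from agents.mcp import MCPServerStreamableHttp

    async with MCPServerStreamableHttp(
        name="yambr-computer-use",
        params={
            "url": YAMBR_MCP_URL,
            "headers": {"Authorization": f"Bearer {os.environ['YAMBR_API_KEY']}"},
        },
    ) as yambr:
        agent = Agent(
            name="assistant",
            instructions="Use the Yambr computer-use tools for tasks that need a sandbox.",
            mcp_servers=[yambr],  # Yambr supplies the tools; OpenAI supplies the model
        )
        result = await Runner.run(agent, "Create a PowerPoint about Q1 sales.")
        print(result.final_output)


# To run (with the SDK installed and both keys set):
#   asyncio.run(main())
```

The split of responsibilities is the point: the model, billing for tokens, and tool-choice all live with your OpenAI key, while Yambr only executes the tool calls it receives.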
Files the model creates (PDFs, screenshots, HTML) are served from
https://cu.yambr.com/files/{chat_id}/... and can be embedded as iframes. Artifact URLs are public (the URL is the token) and scoped to the chat id. See cu.yambr.com.

3. Self-host everything
Run the whole stack on your own machine. You’ll need Docker.

What next?
How it's built
Computer Use Server, sandbox container per chat, six redundant MCP-native channels for the system prompt.
Skills catalogue
docx, xlsx, pptx, pdf, playwright-cli, sub-agent, frontend-design, and more.
MCP API reference
Initialize, list tools, call tools, browse resources — plain JSON-RPC over Streamable HTTP.
Integrations
Open WebUI, Claude Desktop, LiteLLM, n8n, custom clients.
