Install Open WebUI first
For a local Open WebUI test, install Docker Desktop, start the official Open WebUI container, open the UI on localhost, and then add CorvusLLM as an OpenAI-compatible connection.
| What to install | Official source | Quick check |
|---|---|---|
| Docker Desktop | Docker Desktop docs | docker --version |
| Open WebUI | Open WebUI quick start | http://localhost:3000 after the container starts. |
| CorvusLLM key | Buy or top up balance | Paste as the OpenAI-compatible API key. |
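The two quick checks in the table can be run together; a small sketch, where `check` is a hypothetical helper that reports a pass or fail for each command instead of aborting:

```shell
# Run the table's quick checks in one pass. `check` is a hypothetical
# helper: it reports ok/missing per command rather than exiting on
# failure, so it is safe to run before Docker is installed.
check() {
  if "$@" >/dev/null 2>&1; then
    echo "ok: $*"
  else
    echo "missing: $*"
  fi
}

check docker --version                    # Docker Desktop installed?
check curl -fsS http://localhost:3000     # Open WebUI answering yet?
```

A `missing:` line for the second check is expected until the container from the next step is running.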
```shell
docker --version
docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
```

Connection values
Open Open WebUI in your browser, then go to Admin Panel > Settings > Connections. Add or edit an OpenAI-compatible connection and paste these values.
| Open WebUI field | Where to change it | Value to paste |
|---|---|---|
| Connection type | Admin Panel > Settings > Connections > OpenAI-compatible connection | OpenAI-compatible or custom OpenAI |
| Base URL | The connection's API/Base URL input | https://base.corvusllm.com/v1 |
| API key | The same connection's API key input | Your delivered CorvusLLM key |
| First model | Model picker after the connection is saved | gpt-5.5 |
Look for the OpenAI-compatible provider or connection screen. The important parts are the /v1 base URL, your CorvusLLM key, and a supported model slug.
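Those three values can be verified outside the UI with a single request; a minimal sketch, assuming your key sits in a `CORVUS_API_KEY` environment variable (an illustrative name) and using the base URL and slug from the table above:

```shell
# Send one chat completion directly to the /v1 endpoint to confirm the
# base URL, key, and model slug work before configuring Open WebUI.
# CORVUS_API_KEY is an assumed environment variable holding your key.
corvus_check() {
  BASE_URL="https://base.corvusllm.com/v1"
  MODEL="gpt-5.5"
  if [ -z "${CORVUS_API_KEY:-}" ]; then
    echo "Set CORVUS_API_KEY before running the check"
    return 0
  fi
  curl -sS "${BASE_URL}/chat/completions" \
    -H "Authorization: Bearer ${CORVUS_API_KEY}" \
    -H "Content-Type: application/json" \
    -d "{\"model\": \"${MODEL}\", \"messages\": [{\"role\": \"user\", \"content\": \"ping\"}]}"
}

corvus_check
```

A JSON reply containing a `choices` array means all three values are correct; an authentication error points at the key, and a 404 points at the base URL or the model slug.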
Enable Settings > Connections > Cache Base Model List and keep only the model slugs you actually use in the Open WebUI model list.
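Before trimming the model list, you can see which slugs the endpoint currently exposes via the standard OpenAI-compatible `/v1/models` route; a sketch under the same `CORVUS_API_KEY` assumption as above:

```shell
# List the model slugs the endpoint advertises, so the Open WebUI model
# list can be trimmed to the ones you actually use.
# CORVUS_API_KEY is an assumed environment variable holding your key.
corvus_models() {
  BASE_URL="https://base.corvusllm.com/v1"
  if [ -z "${CORVUS_API_KEY:-}" ]; then
    echo "Set CORVUS_API_KEY before listing models"
    return 0
  fi
  curl -sS "${BASE_URL}/models" \
    -H "Authorization: Bearer ${CORVUS_API_KEY}"
}

corvus_models
```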
After it connects
After the first successful chat, you can switch to other supported slugs from Models & Slugs, including Claude and GLM families through the same OpenAI-compatible path. Treat the live catalog as the source of truth for any additional families.
Recommended live catalog picks
This list tracks the current public catalog, so newly added models and reordered families appear here automatically once they are published.