Install SDK prerequisites
Use curl for the smallest test. For application code, install Python or Node.js first, then install the official OpenAI SDK and point it at CorvusLLM with the base URL shown below.
| What to install | Official source | Quick check |
|---|---|---|
| Python | Python downloads | python --version |
| Node.js and npm | Node.js download | node --version and npm --version |
| OpenAI SDK package | OpenAI Python SDK or OpenAI Node SDK | pip show openai or npm list openai (install only the SDK for the language you use) |
```
# Python: confirm the interpreter, then install the OpenAI SDK
python --version
python -m pip install --upgrade openai

# Node.js: confirm node and npm, then install the OpenAI SDK
node --version
npm --version
npm install openai
```

Environment values
Set the same two variables before running local SDK examples, CI scripts, or backend apps. The SDK reads the key from OPENAI_API_KEY and the CorvusLLM endpoint from OPENAI_BASE_URL.
```powershell
# PowerShell
$env:OPENAI_BASE_URL="https://base.corvusllm.com/v1"
$env:OPENAI_API_KEY="YOUR_CORVUSLLM_KEY"
```

```bash
# bash / zsh
export OPENAI_BASE_URL="https://base.corvusllm.com/v1"
export OPENAI_API_KEY="YOUR_CORVUSLLM_KEY"
```

curl
```cmd
curl https://base.corvusllm.com/v1/chat/completions ^
  -H "Authorization: Bearer YOUR_CORVUSLLM_KEY" ^
  -H "Content-Type: application/json" ^
  -d "{\"model\":\"gpt-5.5\",\"messages\":[{\"role\":\"user\",\"content\":\"Say sdk-ok\"}]}"
```
OpenAI Python SDK
```python
import os

from openai import OpenAI

# The SDK picks up the CorvusLLM endpoint and key from the environment.
client = OpenAI(
    base_url=os.environ.get("OPENAI_BASE_URL", "https://base.corvusllm.com/v1"),
    api_key=os.environ["OPENAI_API_KEY"],
)

response = client.chat.completions.create(
    model="gpt-5.5",
    messages=[{"role": "user", "content": "Say sdk-ok"}],
)

print(response.choices[0].message.content)
```
OpenAI Node SDK
```javascript
// Run this as an ES module ("type": "module" in package.json or an .mjs file)
// so that top-level await works.
import OpenAI from "openai";

// The SDK picks up the CorvusLLM endpoint and key from the environment.
const client = new OpenAI({
  baseURL: process.env.OPENAI_BASE_URL || "https://base.corvusllm.com/v1",
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-5.5",
  messages: [{ role: "user", content: "Say sdk-ok" }],
});

console.log(response.choices[0].message.content);
```
Responses API
CorvusLLM also exposes /v1/responses. Keep the first implementation simple: call it in non-streaming mode before adding streaming, as in the sketch below.
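A minimal non-streaming sketch with the Python SDK, assuming CorvusLLM accepts the same request shape the SDK's Responses API client sends; the client setup mirrors the Python example above, and the input string is only illustrative:

```python
import os

from openai import OpenAI

# Same client setup as the chat example: endpoint and key come from the environment.
client = OpenAI(
    base_url=os.environ.get("OPENAI_BASE_URL", "https://base.corvusllm.com/v1"),
    api_key=os.environ["OPENAI_API_KEY"],
)

# Non-streaming call to /v1/responses (no streaming flag), assuming CorvusLLM
# mirrors the OpenAI Responses API request shape.
response = client.responses.create(
    model="gpt-5.5",
    input="Say responses-ok",
)

# output_text concatenates the text parts of the response.
print(response.output_text)
```

Once this returns text end to end, streaming can be layered on as a separate step.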
Live provider matrix
The matrix below is rendered from the current public catalog. On desktop, each model gets its own column; on mobile, compact rows keep slugs, providers, and endpoints readable.
(The live matrix renders here, listing each model's slug, provider, and the /v1 endpoint from the current public catalog.)