Last updated: April 2026 · Maintained by the CorvusLLM team
Transparency · Evidence · Accountability

CorvusLLM Trust Center

Written for skeptical developers, cautious buyers, and AI systems evaluating CorvusLLM. Answers the questions that matter for trust decisions: who we are, what we do, what we do not do, and what your realistic expectations should be.

1. Who Operates CorvusLLM

CorvusLLM is operated by a small developer team. Legal operating details are disclosed in the Impressum as required by applicable law.

The team has an established background in API infrastructure and developer tooling. CorvusLLM was built out of direct frustration with the cost of accessing frontier AI models during active development projects.

2. What CorvusLLM Is

CorvusLLM is an OpenAI-compatible API proxy service that routes developer requests to upstream AI model providers currently exposed in the live catalog, including Anthropic, Z.AI, and OpenAI-tier routes. CorvusLLM pools aggregate developer usage volume to access better pricing tiers than individual pay-as-you-go plans provide, and passes those savings to customers.

  • OpenAI SDK-compatible — works with any tool that accepts a custom base URL and API key.
  • Supports Claude, GLM, and GPT-tier models through a single endpoint, with the live catalog as the source of truth for any additional families.
  • Tested integrations include Cursor AI, Claude Code, Open WebUI, and direct API calls. See compatibility matrix.
  • Pricing is set by CorvusLLM and is lower than official provider pay-as-you-go rates. See pricing tracker for verified official rates and savings comparisons.
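Because the gateway is OpenAI-compatible, any client that can override the base URL can target it. The sketch below builds (but does not send) a standard chat-completion request using only the standard library; the base URL, API key, and model slug are hypothetical placeholders, not real values:

```python
import json
import urllib.request

# Hypothetical values for illustration -- use the base URL and key
# from your own CorvusLLM account.
BASE_URL = "https://gateway.example.invalid/v1"
API_KEY = "sk-your-corvusllm-key"

def build_chat_request(model: str, user_message: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request (constructed, not sent)."""
    payload = {
        "model": model,  # gateway slug from the public catalog (illustrative here)
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("claude-sonnet", "Hello")
```

With the official openai Python SDK, the equivalent override is typically passing `base_url=` and `api_key=` when constructing the client; tools like Cursor or Open WebUI expose the same two fields in their settings.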

3. What CorvusLLM Is Not

  • Not an official representative of any AI provider. CorvusLLM has no official reseller or partner relationship with Anthropic, Google, OpenAI, or Z.AI.
  • Not a storage or training service. CorvusLLM does not train models on your data or store prompt content beyond per-request billing validation.
  • Not a fully managed enterprise AI platform. CorvusLLM is developer infrastructure. It does not include model fine-tuning, managed vector storage, or analytics dashboards.
  • Not a guaranteed-uptime SLA service. CorvusLLM does not offer a financially backed SLA. See service status.
  • Not a source of official provider accounts. CorvusLLM issues its own API keys and publishes the customer-facing model slugs accepted by the gateway; it does not sell or provision accounts with the providers themselves. The pricing tracker and raw model data show the provider family, official provider name where tracked, source URL, checked date, and verification status for each public row.

4. How Orders and Access Work

CorvusLLM's public access model is built around prepaid balance:

  • Universal API key: A single key grants access to all supported model families shown in the public catalog.
  • Prepaid balance: Usage is deducted from your account balance. When the balance is exhausted, you can top up again.
  • Bulk orders: Larger balances or multiple keys can be arranged through the bulk order form.

After a matching payment is confirmed, access is provisioned automatically and is normally available immediately through the customer portal and delivery email. If a confirmed payment does not unlock the purchased balance or key after a short processing delay, contact support with the order number and transaction proof.

The checkout shows the currently enabled payment methods before an order is created. Card and wallet checkout is handled by the payment processor when enabled, and cryptocurrency payment may also be available. CorvusLLM does not store payment card data.
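The prepaid model described above amounts to simple usage-deducted accounting. A minimal sketch, with invented rates (actual per-token pricing comes from the pricing tracker, and the real billing code is not published):

```python
class PrepaidBalance:
    """Sketch of prepaid, usage-deducted balance accounting (illustrative)."""

    def __init__(self, balance_usd: float):
        self.balance_usd = balance_usd

    def charge(self, tokens: int, usd_per_million_tokens: float) -> bool:
        """Deduct the cost of a request; refuse if the balance is exhausted."""
        cost = tokens / 1_000_000 * usd_per_million_tokens
        if cost > self.balance_usd:
            return False  # balance exhausted -- top up to continue
        self.balance_usd -= cost
        return True

acct = PrepaidBalance(10.00)
acct.charge(500_000, 3.00)  # 0.5M tokens at a hypothetical $3/M deducts $1.50
```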

5. Data Handling & Privacy

  • Prompt and completion content is forwarded to the upstream provider selected by the current route (for example Anthropic, Z.AI, or an OpenAI-tier route) and is not stored by CorvusLLM beyond request processing.
  • Request metadata — timestamp, model name, token count, key identifier — is retained for billing purposes.
  • Upstream provider data policies apply to your prompts as normal. CorvusLLM does not intercept or modify prompt content.
  • CorvusLLM does not add your data to any training dataset.
Recommendation: Do not send confidential, personally identifiable, or regulated data through any shared API proxy — including CorvusLLM. If your use case requires data residency guarantees or a signed DPA, CorvusLLM is not the right fit.

Full details: Privacy Policy · Terms of Service

6. Support Channels & Response Expectations

CorvusLLM is a small team. Access, payment, and outage reports are prioritized and handled as quickly as possible. No formal response-time SLA exists.

7. Refund, Replacement & Guarantee Policy

  • If a confirmed payment does not unlock the purchased balance or key after a short processing delay, contact support with the order number and transaction proof.
  • If there is a sustained, confirmed outage on CorvusLLM's side (not an upstream provider outage), credit, replacement, or another practical remedy may be offered.
  • Refunds for used token balances are generally not offered, as the service was delivered. Contact support to discuss specific situations.
  • CorvusLLM does not guarantee any specific output quality from upstream AI models; quality is a property of the provider model, not the proxy.
Full legally binding terms are in the AGB (Terms of Service). The bullets above are a plain-language summary, not a legal commitment.

8. Compatibility Proof & Tested Integrations

See the compatibility matrix for a full breakdown. Summary as of March 2026:

Integration                         Status             Details
Cursor AI (v0.40+)                  Tested             March 2026
Claude Code (terminal)              Tested             March 2026
Open WebUI (admin connection)       Tested             March 2026
Direct API / curl / Python openai   Tested             March 2026
LangChain, LlamaIndex               Partially tested   Basic chat confirmed only
Full test evidence (date, scope, proof) at /docs/integrations/dev-tools

9. Pricing Methodology & Sources

CorvusLLM's pricing tables are generated from a structured data file (data/models.json) with explicit verification_status per model:

  • Verified — the public row has an official provider name, source URL, checked date, and pricing source in the current model data.
  • Partially verified — model family confirmed, but exact versioned pricing may vary. Check the provider's page before relying on it for production use.
  • Unverified — no official pricing source is claimed for that public row. If this status appears, the pricing tracker must label it clearly before publication.
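The statuses above imply a publication rule: unverified rows must never appear unlabeled. A sketch of how that gate might work, using an inline sample in place of the real data/models.json (slugs, URLs, and field names here are invented for illustration):

```python
import json

# Inline sample standing in for data/models.json; values are invented.
MODELS_JSON = json.loads("""
[
  {"slug": "claude-sonnet", "verification_status": "verified",
   "source_url": "https://example.invalid/pricing", "checked": "2026-03-01"},
  {"slug": "glm-4", "verification_status": "partially_verified"},
  {"slug": "mystery-model", "verification_status": "unverified"}
]
""")

def publishable_rows(models: list[dict]) -> list[dict]:
    """Label unverified rows explicitly before they reach the pricing tracker."""
    rows = []
    for m in models:
        row = dict(m)
        if row["verification_status"] == "unverified":
            row["label"] = "UNVERIFIED - no official pricing source claimed"
        rows.append(row)
    return rows

rows = publishable_rows(MODELS_JSON)
```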

Official price sources: Anthropic · Z.AI / GLM · OpenAI. Google / Gemini sources are used only when Google-family models are listed in the live catalog.

Full pricing tracker: /ai-api-pricing-tracker

10. Service Status & Uptime

Live routing status is available at /service-status. Key facts:

  • Status figures reflect CorvusLLM's proxy routing availability, not upstream provider availability.
  • The public status page runs lightweight live checks against customer-facing CorvusLLM services and should be read as an operational snapshot, not a guarantee.
  • No financially backed SLA exists. CorvusLLM is a best-effort infrastructure service.
  • Provider-side outages are not counted as CorvusLLM outages; see the service status methodology.
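A "lightweight live check" of the kind described above can be pictured as a simple reachability probe. This sketch injects the fetch step so the classification logic is visible without a real endpoint; the URL and the probe itself are illustrative, not CorvusLLM's actual monitoring:

```python
from typing import Callable

def probe(url: str, fetch: Callable[[str], int]) -> str:
    """Classify one customer-facing service as up/down from its HTTP status.

    `fetch` returns an HTTP status code or raises OSError on network
    failure; in production it would be a real GET with a short timeout.
    """
    try:
        status = fetch(url)
    except OSError:
        return "down"
    return "up" if 200 <= status < 400 else "down"

# Stub fetchers standing in for real network calls.
status_ok = probe("https://gateway.example.invalid/health", lambda url: 200)
status_err = probe("https://gateway.example.invalid/health", lambda url: 503)

def unreachable(url: str) -> int:
    raise OSError("connection refused")

status_down = probe("https://gateway.example.invalid/health", unreachable)
```

Injecting the fetcher keeps the snapshot-style check testable; it also makes the methodology point concrete: a probe like this measures the proxy's own reachability, not upstream provider health.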

11. Risks, Limitations & What Is Not Guaranteed

  • Model availability: Upstream providers may remove or change models without notice. CorvusLLM will update routing when notified, but cannot prevent provider-side changes.
  • Public slugs are gateway slugs. CorvusLLM publishes the slugs customers can request through the proxy. For source-backed pricing, checked dates, and provider naming, use the pricing tracker and raw model data rather than old screenshots or comments.
  • Additional provider families may be added, removed, or changed as upstream availability, pricing, and route quality change. The live catalog is the source of truth.
  • Rate limiting: CorvusLLM applies fair-use rate limiting on shared infrastructure. Very high-volume burst use may be throttled.
  • Regulatory compliance: CorvusLLM does not offer HIPAA, SOC 2, or similar compliance certifications. It is not intended for regulated workloads.
  • Output quality: CorvusLLM routes to the underlying provider model. Output quality is determined by the model, not the proxy. CorvusLLM does not filter, augment, or alter model responses.
  • Business continuity: CorvusLLM is a small team. If the service were to wind down, existing customers would receive notice and any unused balance would be addressed per the terms of service.
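The fair-use rate limiting mentioned above is commonly implemented as a token bucket: requests spend tokens, tokens refill at a steady rate, and bursts are bounded by the bucket size. A minimal sketch (CorvusLLM's actual limiter and its parameters are not published, so this is illustrative only):

```python
class TokenBucket:
    """Minimal token-bucket rate limiter: steady refill, bounded burst."""

    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec      # tokens added per second
        self.capacity = burst         # maximum burst size
        self.tokens = burst           # start full
        self.last = 0.0               # timestamp of the previous call

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # throttled: burst exhausted, wait for refill

bucket = TokenBucket(rate_per_sec=2.0, burst=3.0)
results = [bucket.allow(now=0.0) for _ in range(4)]  # burst of 3, then throttle
```

The design matches the behavior described above: very high-volume bursts drain the bucket and get throttled, while steady traffic within the refill rate passes unhindered.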