What counts as input?
User text, system messages, tool schemas, selected files, retrieved context, and prior chat messages can all increase input tokens.
Estimate CorvusLLM prepaid usage for supported Claude, GPT, and GLM rows before topping up or moving a workflow.
Independent service. Not affiliated with OpenAI, Anthropic, Google, or Z.AI.
Use rough monthly token totals, then compare the official reference cost with the current CorvusLLM prepaid rate.
Estimates depend on actual tokenization, selected model, cache behavior, and current pricing data. For Anthropic cache writes, generic cache-write tokens use the 5-minute cache-write reference unless duration buckets are available.
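As a minimal sketch of the arithmetic behind the estimate, the monthly reference cost is just tokens divided by one million, times the per-million-token rate, summed over directions. The rates below are placeholder values for illustration, not quoted pricing for any model.

```python
# Placeholder per-million-token reference rates (USD); substitute the
# current published pricing for the model you actually use.
RATE_PER_MTOK = {"input": 3.00, "output": 15.00}

def monthly_reference_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough monthly cost: tokens / 1e6 * rate, summed over directions."""
    return (input_tokens / 1_000_000 * RATE_PER_MTOK["input"]
            + output_tokens / 1_000_000 * RATE_PER_MTOK["output"])

# Example: roughly 40M input tokens and 5M output tokens in a month.
print(round(monthly_reference_cost(40_000_000, 5_000_000), 2))
```

Comparing this reference figure against the prepaid rate for the same token totals shows the difference before any top-up.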
Output tokens are everything the model generates: code, explanations, structured JSON, or streamed assistant text.
Cache reads and writes are usually visible in provider or proxy usage logs. Enter them separately when your workflow reuses large contexts.
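When the logs report cache reads and writes separately, they enter the estimate as extra terms with their own rates. The rates below, and the fallback of generic cache-write tokens to a 5-minute write bucket, are illustrative assumptions modeled on the behavior described above, not quoted pricing.

```python
# Placeholder per-million-token rates; replace with current published pricing.
RATES = {
    "input": 3.00,           # uncached input
    "output": 15.00,
    "cache_read": 0.30,      # assumed discounted read rate
    "cache_write_5m": 3.75,  # 5-minute cache-write reference (fallback bucket)
}

def cost_with_cache(tokens: dict) -> float:
    """Sum tokens/1e6 * rate over whichever buckets the logs report.

    Generic "cache_write" tokens fall back to the 5-minute write rate,
    mirroring the fallback behavior stated above.
    """
    total = 0.0
    for bucket, count in tokens.items():
        if bucket in RATES:
            rate = RATES[bucket]
        elif "cache_write" in bucket:
            rate = RATES["cache_write_5m"]  # duration bucket unknown: fall back
        else:
            rate = 0.0  # unrecognized bucket: ignore rather than guess
        total += count / 1_000_000 * rate
    return total

usage = {"input": 2_000_000, "output": 500_000,
         "cache_read": 10_000_000, "cache_write": 1_000_000}
print(round(cost_with_cache(usage), 2))
```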
Run one known workflow, compare the calculator against real usage logs, then scale only after the estimate matches your setup.
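The calibration step above amounts to a percent-difference check between the calculator's figure and the cost your usage logs actually show. The dollar amounts in the example are made up for illustration.

```python
def estimate_error_pct(estimated: float, actual: float) -> float:
    """Percent difference of the estimate relative to the logged actual cost."""
    return abs(estimated - actual) / actual * 100

# Example: calculator said $19.80; logs show $20.25 for the same workflow run.
err = estimate_error_pct(19.80, 20.25)
print(f"{err:.1f}% off")  # scale up only once this is acceptably small
```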