Providers
handover supports 8 LLM providers through a unified interface. The provider system is designed around named presets: each provider ships with a base URL, default model, API key env var, and concurrency settings. You only need to name the provider — everything else is pre-configured.
Under the hood, BaseProvider handles retry logic and rate-limiting. Concrete providers implement the completion call, using either the Anthropic SDK or the OpenAI-compatible SDK depending on the provider’s sdkType.
The authoritative preset registry is in src/providers/presets.ts. The schema’s valid provider values are defined in src/config/schema.ts.
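The preset/provider split described above can be sketched in a few lines. The class and field names below mirror the prose (`BaseProvider`, `sdkType`), but the exact shapes are assumptions for illustration, not the actual handover source:

```typescript
// Sketch of the preset/provider split; shapes are assumed, not handover's real code.
type SdkType = "anthropic" | "openai-compatible";

interface ProviderPreset {
  baseUrl?: string;      // API endpoint (custom/azure-openai must supply one)
  defaultModel?: string; // e.g. "gpt-4o"; ollama/custom set this via --model
  apiKeyEnv?: string;    // env var holding the API key; ollama needs none
  concurrency: number;   // max parallel requests
  sdkType: SdkType;      // which SDK the concrete provider uses
}

abstract class BaseProvider {
  constructor(protected readonly preset: ProviderPreset) {}

  // Concrete providers implement the completion call using the Anthropic SDK
  // or an OpenAI-compatible SDK, depending on preset.sdkType.
  protected abstract callCompletion(prompt: string): Promise<string>;

  // Shared retry loop with exponential backoff, standing in for the
  // retry/rate-limit handling the base class provides.
  async complete(prompt: string, maxRetries = 3, baseDelayMs = 100): Promise<string> {
    for (let attempt = 0; ; attempt++) {
      try {
        return await this.callCompletion(prompt);
      } catch (err) {
        if (attempt >= maxRetries) throw err;
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
}
```

Centralizing retries in the base class means each concrete provider only has to implement a single SDK call.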
Provider comparison
| Provider | Env var | Default model | Local? | Notes |
|---|---|---|---|---|
| anthropic | ANTHROPIC_API_KEY | claude-opus-4-6 | No | Default provider; uses Anthropic SDK |
| openai | OPENAI_API_KEY | gpt-4o | No | OpenAI-compatible SDK |
| ollama | (none required) | (set via --model) | Yes | Fully local; no data leaves your machine |
| groq | GROQ_API_KEY | llama-3.3-70b-versatile | No | Fast inference; OpenAI-compatible |
| together | TOGETHER_API_KEY | meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo | No | Open model hosting; OpenAI-compatible |
| deepseek | DEEPSEEK_API_KEY | deepseek-chat | No | Low cost; OpenAI-compatible |
| azure-openai | AZURE_OPENAI_API_KEY | gpt-4o | No | Requires baseUrl (your Azure endpoint) |
| custom | LLM_API_KEY | (set via --model) | Varies | Any OpenAI-compatible API; requires baseUrl |
Configuring a provider
Set the provider via .handover.yml, the HANDOVER_PROVIDER environment variable, or the --provider CLI flag.
Anthropic (default):

```sh
export ANTHROPIC_API_KEY=sk-ant-...
npx handover-cli generate
```

Or in .handover.yml:

```yaml
provider: anthropic
model: claude-sonnet-4-5
```

Authentication: Anthropic requires API key authentication. OAuth/subscription auth is not supported.
OpenAI:

```sh
export OPENAI_API_KEY=sk-...
npx handover-cli generate --provider openai
```

Ollama (fully local, free):

```sh
ollama pull llama3.1:8b
npx handover-cli generate --provider ollama --model llama3.1:8b
```

Ollama requires no API key and runs entirely on your machine. The default concurrency is set to 1 automatically to avoid overwhelming a local inference server.
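Concurrency 1 means every completion waits for the previous one to settle before starting. A toy serializer illustrates the effect; this is an illustration of the behavior, not handover's actual limiter:

```typescript
// Toy request serializer: with a single in-flight slot, a new task starts
// only after the previous one settles. Illustrative only; handover's real
// rate-limiting lives in its provider layer and may differ.
class SerialQueue {
  private tail: Promise<unknown> = Promise.resolve();

  run<T>(task: () => Promise<T>): Promise<T> {
    const next = this.tail.then(task);       // start after the previous task
    this.tail = next.catch(() => undefined); // a failure doesn't block the queue
    return next;
  }
}
```

Even a request that would finish quickly on its own is held back until the one before it completes, which is exactly the gentle pacing a local inference server wants.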
Custom provider
The custom provider is an escape hatch for any OpenAI-compatible API endpoint not listed above. It requires two settings:
- baseUrl — the API endpoint (required; no preset default)
- model — the model name to request (required; no preset default)
The API key is read from LLM_API_KEY by default. Override with apiKeyEnv to use a different env var name.
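That lookup amounts to a one-line fallback. The helper name below is hypothetical, for illustration only:

```typescript
// Hypothetical helper: read the key from the configured apiKeyEnv if set,
// otherwise fall back to the LLM_API_KEY default.
function resolveApiKey(apiKeyEnv?: string): string | undefined {
  return process.env[apiKeyEnv ?? "LLM_API_KEY"];
}
```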
Example for a self-hosted vLLM server:

```yaml
provider: custom
baseUrl: http://my-vllm-server:8000/v1
model: mistral-7b-instruct
apiKeyEnv: VLLM_API_KEY
```

```sh
export VLLM_API_KEY=...
npx handover-cli generate
```

The custom provider works with any service that implements the OpenAI /v1/chat/completions API, including vLLM, LM Studio, llama.cpp server, and hosted services with OpenAI-compatible APIs.