Providers

handover supports 8 LLM providers through a unified interface. The provider system is designed around named presets: each provider ships with a base URL, default model, API key env var, and concurrency settings. You only need to name the provider — everything else is pre-configured.

Under the hood, BaseProvider handles retry logic and rate-limiting. Concrete providers implement the completion call, using either the Anthropic SDK or the OpenAI-compatible SDK depending on the provider’s sdkType.
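Retry-with-backoff of the kind BaseProvider performs can be sketched as follows. This is an illustrative sketch only; the function name, option names, and defaults here are assumptions, not handover's actual API.

```typescript
// Illustrative sketch of retry with exponential backoff; handover's
// real BaseProvider implementation may differ.
async function withRetry<T>(
  call: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await call();
    } catch (err) {
      lastError = err;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

A concrete provider would wrap its SDK completion call in something like this, so transient rate-limit errors are retried instead of failing the whole run.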

The authoritative preset registry is in src/providers/presets.ts. The schema’s valid provider values are defined in src/config/schema.ts.

| Provider | Env var | Default model | Local? | Notes |
|---|---|---|---|---|
| anthropic | ANTHROPIC_API_KEY | claude-opus-4-6 | No | Default provider; uses Anthropic SDK |
| openai | OPENAI_API_KEY | gpt-4o | No | OpenAI-compatible SDK |
| ollama | (none required) | (set via --model) | Yes | Fully local; no data leaves your machine |
| groq | GROQ_API_KEY | llama-3.3-70b-versatile | No | Fast inference; OpenAI-compatible |
| together | TOGETHER_API_KEY | meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo | No | Open model hosting; OpenAI-compatible |
| deepseek | DEEPSEEK_API_KEY | deepseek-chat | No | Low cost; OpenAI-compatible |
| azure-openai | AZURE_OPENAI_API_KEY | gpt-4o | No | Requires baseUrl (your Azure endpoint) |
| custom | LLM_API_KEY | (set via --model) | Varies | Any OpenAI-compatible API; requires baseUrl |
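Each table row corresponds to a preset entry roughly shaped like the following. This is a guess at the shape of src/providers/presets.ts for illustration; the real field names, the groq base URL, and the concurrency value shown here are assumptions.

```typescript
// Hypothetical preset shape; see src/providers/presets.ts for the real one.
interface ProviderPreset {
  sdkType: "anthropic" | "openai-compatible";
  baseUrl?: string;      // absent for presets (azure-openai, custom) that require it from config
  defaultModel?: string; // absent where --model is mandatory (ollama, custom)
  apiKeyEnv?: string;    // absent for ollama, which needs no key
  concurrency: number;   // e.g. 1 for ollama (assumed value below)
}

const groq: ProviderPreset = {
  sdkType: "openai-compatible",
  baseUrl: "https://api.groq.com/openai/v1", // assumed endpoint for illustration
  defaultModel: "llama-3.3-70b-versatile",
  apiKeyEnv: "GROQ_API_KEY",
  concurrency: 4, // assumed
};
```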

Set the provider via .handover.yml, the HANDOVER_PROVIDER environment variable, or the --provider CLI flag.

Anthropic (default):

export ANTHROPIC_API_KEY=sk-ant-...
npx handover-cli generate

Or in .handover.yml:

provider: anthropic
model: claude-sonnet-4-5

Authentication: Anthropic requires an API key; OAuth/subscription authentication is not supported.

OpenAI:

export OPENAI_API_KEY=sk-...
npx handover-cli generate --provider openai

Ollama (fully local, free):

ollama pull llama3.1:8b
npx handover-cli generate --provider ollama --model llama3.1:8b

Ollama requires no API key and runs entirely on your machine. The default concurrency is set to 1 automatically to avoid overwhelming a local inference server.
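A concurrency cap like this is typically a small promise-based semaphore: at most N requests are in flight, and the rest queue. The sketch below is illustrative, not handover's internal code.

```typescript
// Minimal promise semaphore: at most `limit` tasks run concurrently.
class Semaphore {
  private available: number;
  private waiters: Array<() => void> = [];

  constructor(limit: number) {
    this.available = limit;
  }

  private async acquire(): Promise<void> {
    if (this.available > 0) {
      this.available--;
      return;
    }
    // No permit free: wait until a finishing task hands one over.
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }

  private release(): void {
    const next = this.waiters.shift();
    // Hand the permit directly to the next waiter, or return it to the pool.
    if (next) next();
    else this.available++;
  }

  async run<T>(task: () => Promise<T>): Promise<T> {
    await this.acquire();
    try {
      return await task();
    } finally {
      this.release();
    }
  }
}
```

With `new Semaphore(1)`, requests to a local Ollama server are serialized, which is what the concurrency-of-1 default achieves.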

The custom provider is an escape hatch for any OpenAI-compatible API endpoint not listed above. It requires two settings:

  1. baseUrl — the API endpoint (required; no preset default)
  2. model — the model name to request (required; no preset default)

The API key is read from LLM_API_KEY by default. Override with apiKeyEnv to use a different env var name.
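The lookup amounts to something like the sketch below; the function name is hypothetical and the real resolution logic lives in handover's config handling.

```typescript
// Illustrative: resolve the API key for the custom provider,
// falling back to LLM_API_KEY when apiKeyEnv is not set.
function resolveApiKey(apiKeyEnv?: string): string | undefined {
  return process.env[apiKeyEnv ?? "LLM_API_KEY"];
}
```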

Example for a self-hosted vLLM server:

provider: custom
baseUrl: http://my-vllm-server:8000/v1
model: mistral-7b-instruct
apiKeyEnv: VLLM_API_KEY

export VLLM_API_KEY=...
npx handover-cli generate

The custom provider works with any service that implements the OpenAI /v1/chat/completions API, including vLLM, LM Studio, llama.cpp server, and hosted services with OpenAI-compatible APIs.
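Concretely, "OpenAI-compatible" means an HTTP endpoint that accepts the standard chat-completions request shape. A minimal client sketch (not handover's actual code; handover uses the OpenAI-compatible SDK rather than raw fetch) looks like:

```typescript
// Minimal call against any server implementing /v1/chat/completions.
async function chat(
  baseUrl: string, // e.g. "http://my-vllm-server:8000/v1"
  model: string,
  apiKey: string,
  prompt: string,
): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  // Standard response shape: choices[0].message.content holds the reply.
  return data.choices[0].message.content as string;
}
```

Any server that answers this request shape (vLLM, LM Studio, llama.cpp server, and others) should work with the custom provider.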