| summary | read_when |
|---|---|
| Config file location, precedence, and schema. | |
summarize supports an optional JSON config file for defaults.

Default path: `~/.summarize/config.json`
For model:

- CLI flag `--model`
- Env `SUMMARIZE_MODEL`
- Config file `model`
- Built-in default (`auto`)
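As a sketch of the precedence above (illustrative only, not the tool's actual code), resolution is a first-match lookup:

```python
# Illustrative sketch of the model precedence; not the real implementation.
def resolve_model(cli_flag, env, config):
    if cli_flag:                        # 1. CLI flag --model
        return cli_flag
    if env.get("SUMMARIZE_MODEL"):      # 2. Env SUMMARIZE_MODEL
        return env["SUMMARIZE_MODEL"]
    if config.get("model"):             # 3. Config file "model"
        return config["model"]
    return "auto"                       # 4. Built-in default
```

The same first-match pattern applies to the language, prompt, and theme precedence lists below.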
For output language:

- CLI flag `--language`/`--lang`
- Config file `output.language` (preferred) or `language` (legacy)
- Built-in default (`auto` = match source content language)

See docs/language.md for supported values.
For prompt:

- CLI flag `--prompt`/`--prompt-file`
- Config file `prompt`
- Built-in default prompt
For environment variables:

- Process environment variables
- Config file `env`
- Legacy config file `apiKeys` (mapped to env names)
For UI theme:

- CLI flag `--theme`
- Env `SUMMARIZE_THEME`
- Config file `ui.theme`
- Built-in default (`aurora`)
`~/.summarize/config.json`:

```json
{
  "model": { "id": "google/gemini-3-flash-preview" },
  "env": { "OPENAI_API_KEY": "sk-..." },
  "output": { "language": "auto" },
  "prompt": "Explain like I am five.",
  "ui": { "theme": "ember" }
}
```

Shorthand (equivalent):
```json
{
  "model": "google/gemini-3-flash-preview"
}
```

`model` can also be `auto`:
```json
{
  "model": { "mode": "auto" }
}
```

Shorthand (equivalent):
```json
{
  "model": "auto"
}
```

`prompt` replaces the built-in summary instructions (same behavior as `--prompt`).
Example:

```json
{
  "prompt": "Explain for a kid. Short sentences. Simple words."
}
```

Set any env var in config (process env still wins):
```json
{
  "env": {
    "OPENAI_API_KEY": "sk-...",
    "OPENROUTER_API_KEY": "sk-or-...",
    "FIRECRAWL_API_KEY": "...",
    "CUSTOM_FLAG": "1"
  }
}
```

Legacy shortcut (still supported):
```json
{
  "apiKeys": {
    "openai": "sk-...",
    "anthropic": "sk-ant-...",
    "google": "...",
    "openrouter": "sk-or-...",
    "xai": "...",
    "zai": "...",
    "apify": "...",
    "firecrawl": "...",
    "fal": "..."
  }
}
```

Configure the on-disk SQLite cache (extracted content, transcripts, summaries):
```json
{
  "cache": {
    "enabled": true,
    "maxMb": 512,
    "ttlDays": 30,
    "path": "~/.summarize/cache.sqlite",
    "media": {
      "enabled": true,
      "maxMb": 2048,
      "ttlDays": 7,
      "path": "~/.summarize/cache/media",
      "verify": "size"
    }
  }
}
```

Notes:

- `cache.media` controls the media file cache (yt-dlp downloads).
- `--no-cache` bypasses summary caching only (LLM output); extract/transcript caches still apply. Use `--no-media-cache` for media.
- `verify`: `size` (default), `hash`, or `none`.
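The `ttlDays` fields imply an age check along these lines (a sketch of the assumed semantics, not the actual cache code):

```python
import time

# Sketch: a cache entry older than ttlDays is considered stale (assumed semantics).
def is_expired(created_at, ttl_days, now=None):
    now = time.time() if now is None else now
    return (now - created_at) > ttl_days * 86_400  # 86,400 seconds per day
```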
Set a default CLI theme:

```json
{
  "ui": { "theme": "moss" }
}
```

Enable slides by default and tune extraction parameters:
```json
{
  "slides": {
    "enabled": true,
    "ocr": false,
    "dir": "slides",
    "sceneThreshold": 0.3,
    "max": 20,
    "minDuration": 2
  }
}
```

Enable JSON log files for the daemon:
```json
{
  "logging": {
    "enabled": true,
    "level": "info",
    "format": "json",
    "file": "~/.summarize/logs/daemon.jsonl",
    "maxMb": 10,
    "maxFiles": 3
  }
}
```

Notes:

- Default: logging is off.
- `format`: `json` (default) or `pretty`.
- `maxMb` is per file; `maxFiles` controls rotation (ring).
- The extension's “Extended logging” option sends full input/output to daemon logs (large). Cache hits skip content logging.
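`maxMb`/`maxFiles` describe a ring of rotated files; a minimal sketch of that kind of rotation (the `file.1`, `file.2` naming is an assumption, not the daemon's documented scheme):

```python
import os

# Sketch: rotate file -> file.1 -> file.2 ..., keeping at most max_files copies.
def rotate(path, max_files):
    oldest = f"{path}.{max_files - 1}"
    if os.path.exists(oldest):
        os.remove(oldest)                      # drop the oldest file in the ring
    for i in range(max_files - 2, 0, -1):
        src = f"{path}.{i}"
        if os.path.exists(src):
            os.rename(src, f"{path}.{i + 1}")  # shift each file one slot back
    if os.path.exists(path):
        os.rename(path, f"{path}.1")           # current file becomes .1
```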
Define presets you can select via `--model <preset>`:

```json
{
  "models": {
    "fast": { "id": "openai/gpt-5-mini" },
    "or-free": {
      "rules": [
        {
          "candidates": [
            "openrouter/google/gemini-2.0-flash-exp:free",
            "openrouter/meta-llama/llama-3.3-70b-instruct:free"
          ]
        }
      ]
    }
  }
}
```

Notes:

- `auto` is reserved and can't be defined as a preset.
- `free` is built-in (OpenRouter `:free` candidates). Override it by defining `models.free` in your config, or regenerate it via `summarize refresh-free`.
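A hedged sketch of how `--model <name>` lookup could resolve against `models` (assumed semantics; `auto` stays reserved):

```python
# Sketch of preset resolution for --model <name>; not the real implementation.
def resolve_preset(name, models):
    if name == "auto":
        return {"mode": "auto"}   # reserved, never a user-defined preset
    if name in models:
        return models[name]       # e.g. "fast" -> {"id": "openai/gpt-5-mini"}
    return {"id": name}           # otherwise treat the value as a model id
```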
Use a preset as your default model:

```json
{
  "model": "fast"
}
```

Notes:

- For presets, `"mode": "auto"` is optional when `"rules"` is present.
For auto selection with rules:

```json
{
  "model": {
    "mode": "auto",
    "rules": [
      {
        "when": ["video"],
        "candidates": ["google/gemini-3-flash-preview"]
      },
      {
        "when": ["website", "youtube"],
        "bands": [
          {
            "token": { "max": 8000 },
            "candidates": ["openai/gpt-5-mini"]
          },
          {
            "candidates": ["xai/grok-4-fast-non-reasoning"]
          }
        ]
      },
      {
        "candidates": ["openai/gpt-5-mini", "openrouter/openai/gpt-5-mini"]
      }
    ]
  },
  "media": { "videoMode": "auto" }
}
```

Notes:

- Parsed leniently (JSON5), but comments are not allowed.
- Unknown keys are ignored.
- `model.rules` is optional. If omitted, built-in defaults apply.
- `model.rules[].when` (optional) must be an array (e.g. `["video", "youtube"]`).
- `model.rules[]` must use either `candidates` or `bands`.
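Reading the schema above, rule matching plausibly works as “first rule whose `when` matches, then first band whose token cap fits.” A sketch under that assumption (not the tool's actual selection logic):

```python
# Sketch of the assumed rule/band selection; not the tool's actual logic.
def pick_candidates(rules, kind, tokens):
    for rule in rules:
        when = rule.get("when")
        if when is not None and kind not in when:
            continue                    # rule restricted to other source kinds
        # A rule without "bands" acts like a single band with no token cap.
        for band in rule.get("bands", [rule]):
            cap = band.get("token", {}).get("max")
            if cap is None or tokens <= cap:
                return band["candidates"]
    return []                           # no rule matched; built-in defaults apply
```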
Set a default output language for summaries:

```json
{
  "output": { "language": "auto" }
}
```

Examples:

- `"auto"` (default): match the source language.
- `"en"`, `"de"`: common shorthands.
- `"english"`, `"german"`: common names.
- `"en-US"`, `"pt-BR"`: BCP-47-ish tags.
Configure CLI providers and auto fallback:

```json
{
  "cli": {
    "enabled": ["gemini", "agent"],
    "autoFallback": {
      "enabled": true,
      "onlyWhenNoApiKeys": true,
      "order": ["claude", "gemini", "codex", "agent"]
    },
    "codex": { "model": "gpt-5.2" },
    "claude": { "binary": "/usr/local/bin/claude", "extraArgs": ["--verbose"] },
    "agent": { "binary": "/usr/local/bin/agent", "model": "gpt-5.2" }
  }
}
```

Notes:

- `cli.enabled` is an allowlist (and order) for auto + explicit CLI model ids.
- `cli.autoFallback` controls implicit-auto CLI fallback when `cli.enabled` is not set.
- Default auto fallback order: `claude`, `gemini`, `codex`, `agent`.
- Auto fallback stores the last successful provider in `~/.summarize/cli-state.json` and prioritizes it on the next run.
- `cli.<provider>.binary` overrides CLI binary discovery.
- `cli.<provider>.extraArgs` appends extra CLI args.
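The “last successful provider first” behavior can be sketched as a reordering of the default list (assumed behavior, not the actual code):

```python
# Sketch: move the last successful provider to the front of the fallback order.
def fallback_order(default_order, last_successful=None):
    if last_successful in default_order:
        return [last_successful] + [p for p in default_order if p != last_successful]
    return list(default_order)
```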
Configure OpenAI-specific options:

```json
{
  "openai": {
    "baseUrl": "https://my-openai-proxy.example.com/v1",
    "useChatCompletions": true,
    "whisperUsdPerMinute": 0.006
  }
}
```

Notes:

- `openai.baseUrl` overrides the OpenAI-compatible API endpoint. Use this for proxies, gateways, or OpenAI-compatible APIs. Env `OPENAI_BASE_URL` takes precedence.
- `openai.whisperUsdPerMinute` is only used to estimate transcription cost in the finish-line metrics when Whisper transcription runs via OpenAI.
Override API endpoints for any provider to use proxies, gateways, or compatible APIs:

```json
{
  "openai": { "baseUrl": "https://my-openai-proxy.example.com/v1" },
  "nvidia": { "baseUrl": "https://integrate.api.nvidia.com/v1" },
  "anthropic": { "baseUrl": "https://my-anthropic-proxy.example.com" },
  "google": { "baseUrl": "https://my-google-proxy.example.com" },
  "xai": { "baseUrl": "https://my-xai-proxy.example.com" }
}
```

Or via environment variables (which take precedence over config):
| Provider | Environment Variable(s) |
|---|---|
| OpenAI | `OPENAI_BASE_URL` |
| NVIDIA | `NVIDIA_BASE_URL` |
| Anthropic | `ANTHROPIC_BASE_URL` |
| Google | `GOOGLE_BASE_URL` (alias: `GEMINI_BASE_URL`) |
| xAI | `XAI_BASE_URL` |
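The env-over-config precedence for base URLs reduces to a first-non-empty choice; a sketch (not the tool's code):

```python
import os

# Sketch: the first set env var wins; otherwise fall back to the config value.
def resolve_base_url(env_names, config_value=None):
    for name in env_names:  # e.g. ["GOOGLE_BASE_URL", "GEMINI_BASE_URL"]
        if os.environ.get(name):
            return os.environ[name]
    return config_value
```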