File-first Python CLI pipeline for crawling serialized web novels, translating them to Vietnamese, generating TTS audio, and rendering publishable video assets.
- Architecture: `docs/ARCHITECTURE.md`
- Agent notes (internal): `docs/agents/codex/AGENTS.md`
- Compact coding context map: `docs/agents/context-map.yaml`
- What this repo does
- Quick start
- Requirements
- Configuration
- Storage contract and invariants
- Recommended workflows
- Developer Context Workflow
- Command reference
- Troubleshooting
At a high level, novel-tts supports this pipeline:
- Crawl source chapters into `input/<novel_id>/origin/chuong_<start>-<end>/`
- Translate chapters into canonical per-chapter outputs under `input/<novel_id>/.parts/`
- Rebuild merged translated files under `input/<novel_id>/translated/*.txt`
- Generate TTS audio under `output/<novel_id>/audio/`
- Generate visual assets under `output/<novel_id>/visual/`
- Mux final MP4s under `output/<novel_id>/video/`
- Upload outputs to supported platforms
Important operational note:
- translation policy should normally be queue-first
- direct `translate novel` is mainly for debugging and small one-off runs
uv sync
uv run novel-tts --help

Tip:
- all commands support top-level `--log-file /path/to/file.log`
- narrow coding context with `uv run novel-tts-context <task>`
Use this when coding with Claude Code, Codex, or another repo agent.
- Start with the compact map instead of re-reading the whole repo:
uv run novel-tts-context --list
uv run novel-tts-context translate
uv run novel-tts-context queue
- Read only the `Read first` files for the task you are changing.
- Add `Read only if needed` files only after the bug points there.
- Do not scan large/generated directories: `input/`, `output/`, `image/`, `tmp/`, `.logs/`, `.secrets/`, `.venv/`, `tests/`.
Why this exists:
- keeps stable repo context in one small deterministic artifact
- gives a single obvious narrow-scope path for common tasks
- reduces repeated loading of large files like `novel_tts/cli/main.py`, `novel_tts/translate/novel.py`, and `novel_tts/queue/translation_queue.py`
Base requirements:
- Python 3.10
- `uv`
Optional / stage-specific requirements:
- crawl:
- Playwright + Chromium
- install with
uv run playwright install chromium
- queue translation:
- Redis
- default config in `configs/app.yaml`: host `127.0.0.1`, port `6379`, db `1`, prefix `novel_tts`
- media rendering:
- `ffmpeg` and `ffprobe`
- current TTS path:
- a reachable Gradio TTS server configured in app/novel config
- novel configs: `configs/novels/*.yaml`
- source configs: `configs/sources/*.json`
- per-novel glossaries: `configs/glossaries/*.json`
- app defaults: `configs/app.yaml`
- polish replacements: `configs/polish_replacement/common.json` and `configs/polish_replacement/<novel_id>.json`
- TTS provider catalogs: `configs/providers/tts_servers.yaml` and `configs/providers/tts_models.yaml`
Gemini:
- queue workers read API keys from `.secrets/gemini-keys.txt`, one key per line
- direct translation can still use `GEMINI_API_KEY`
- if `GEMINI_API_KEY` is unset, direct Gemini translation falls back to the first non-empty line in `.secrets/gemini-keys.txt`
OpenAI:
- set `OPENAI_API_KEY`
YouTube:
- OAuth secrets and tokens live under `.secrets/youtube/`
This repo is intentionally file-first: stages communicate through files under `input/<novel_id>/` and `output/<novel_id>/`.
- canonical per-chapter state:
  - `input/<novel_id>/.parts/<batch>/content/origin/*.txt`
  - `input/<novel_id>/.parts/<batch>/content/translated/*.txt`
  - `input/<novel_id>/.parts/<batch>/heading/origin/*.txt`
  - `input/<novel_id>/.parts/<batch>/heading/translated/*.txt`
- derived merged outputs: `input/<novel_id>/translated/*.txt`
`translated/*.txt` is rebuildable from `.parts` and should not be treated as the primary source of truth.
Current translation contract:
- `.parts/<batch>/content/origin/<chapter>.txt` stores the source body snapshot
- `.parts/<batch>/content/translated/<chapter>.txt` stores the translated body only
- `.parts/<batch>/heading/origin/<chapter>.txt` stores the source heading snapshot
- `.parts/<batch>/heading/translated/<chapter>.txt` stores the translated heading line
- `translated/*.txt` is rebuilt by combining heading + body back into inline chapter text
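The rebuild rule in the contract above can be sketched as a small helper. The exact joining format is an assumption here (heading line, blank line, body); the repo's real rebuild logic may differ:

```python
from pathlib import Path

def rebuild_chapter(parts_dir: str, batch: str, chapter: str) -> str:
    """Recombine the split heading/body parts of one chapter back into
    inline chapter text (translated heading first, then translated body).
    parts_dir would typically be input/<novel_id>/.parts."""
    base = Path(parts_dir) / batch
    heading = (base / "heading" / "translated" / f"{chapter}.txt").read_text(encoding="utf-8").strip()
    body = (base / "content" / "translated" / f"{chapter}.txt").read_text(encoding="utf-8").strip()
    return f"{heading}\n\n{body}\n"
```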
- crawled source batches: `input/<novel_id>/origin/chuong_<start>-<end>/`
- resumable state: `input/<novel_id>/.progress/*`
- audio: `output/<novel_id>/audio/<range>/*`
- visual: `output/<novel_id>/visual/*`
- final video: `output/<novel_id>/video/*`
Do not casually change heading formats:
- crawl/origin headings are typically ASCII: `Chuong <n> ...`
- translated/TTS headings are typically Vietnamese: `Chương <n> ...`
Changing headings affects:
- chapter splitting
- translated file rebuild
- TTS chapter detection
- subtitle/menu generation
- downstream media assets
Use this as the default production workflow.
# 1) Crawl
uv run novel-tts crawl run <novel_id> --range <start>-<end>
# 2) Verify crawled files (does not recrawl)
uv run novel-tts crawl verify <novel_id> --range <start>-<end>
# 3) Start the global quota supervisor
uv run novel-tts quota-supervisor
# or:
uv run novel-tts quota-supervisor -d
# 4) Launch the shared queue stack
uv run novel-tts queue launch
# 5) Enqueue chapters for translation
uv run novel-tts queue add <novel_id> --range <start>-<end>
# or:
uv run novel-tts queue add <novel_id> --all
# 6) Monitor progress
uv run novel-tts queue monitor
uv run novel-tts queue ps <novel_id>
uv run novel-tts queue ps-all
# 7) TTS + media
uv run novel-tts tts <novel_id> --range <start>-<end>
uv run novel-tts visual <novel_id> --range <start>-<end>
uv run novel-tts video <novel_id> --range <start>-<end>
# 8) Upload
uv run novel-tts upload <novel_id> --platform youtube --range <start>-<end>

Queue-first guidance:
- prefer queue translation for normal work
- avoid direct `translate novel` except for debugging or very small runs
- `quota-supervisor` should be running whenever queue workers need quota progress
`pipeline run` is useful for orchestration, but note:
- its translation step launches the queue stack, enqueues the requested chapter range, and waits for queue completion before downstream media
- it does not run caption translation
Examples:
# End-to-end range
uv run novel-tts pipeline run <novel_id> --range <start>-<end>
# Use existing translated chapters only
uv run novel-tts pipeline run <novel_id> --range <start>-<end> --skip-translate
# Downstream media stage-by-stage across a large range
uv run novel-tts pipeline run <novel_id> --range 1-2000 --mode per-stage
# Process each translated batch end-to-end
uv run novel-tts pipeline run <novel_id> --range 1-2000 --mode per-video
# Watch one or more novels for new remote chapters, then crawl -> queue translate ->
# repair -> polish -> TTS. Visual/video/upload run automatically once a full batch is ready.
uv run novel-tts pipeline watch <novel_id>
uv run novel-tts pipeline watch <novel_a> <novel_b> --interval-seconds 600
uv run novel-tts pipeline watch --all

Writes crawled batches to `input/<novel_id>/origin/chuong_<start>-<end>/` and resumable state under `input/<novel_id>/.progress/`.
Backward compatibility:
`novel-tts crawl <novel_id> ...` is treated as `novel-tts crawl run <novel_id> ...`
# Crawl everything from chapter 1 to the latest remote chapter
uv run novel-tts crawl run <novel_id> --all
# Crawl a chapter range
uv run novel-tts crawl run <novel_id> --range 1-10
# Same, but with explicit bounds
uv run novel-tts crawl run <novel_id> --from 1 --to 10
# Override directory URL if needed
uv run novel-tts crawl run <novel_id> --range 1-10 --dir-url 'https://...'
# Recrawl chapters that already exist locally
uv run novel-tts crawl run <novel_id> --range 1-10 --force

By default, `crawl run` skips chapters already present in local `input/<novel_id>/origin/` batch directories.
Use `--force` to recrawl those chapters.
Use `--all` to auto-resolve the latest remote chapter and crawl 1..latest.
Sanity-checks already-crawled origin files. It does not recrawl.
`--file` is interpreted as a batch id under `input/<novel_id>/origin/` and can be repeated.
uv run novel-tts crawl verify <novel_id> --range 1-10
uv run novel-tts crawl verify <novel_id> --file chuong_1-10

Repairs already-crawled origin batches on disk. It can generate `input/<novel_id>/repair_config.yaml` from crawl research logs, then apply the repair plan back onto the saved origin batch directories.
# Generate/update repair_config.yaml from crawl research files
uv run novel-tts crawl repair <novel_id> --generate-repair-config
# Run repair using input/<novel_id>/repair_config.yaml
uv run novel-tts crawl repair <novel_id> --run --range 1-10
# Repair only selected origin files
uv run novel-tts crawl repair <novel_id> --run --file chuong_1-10

Direct translation:
- reads `input/<novel_id>/origin/chuong_<start>-<end>/`
- writes canonical chapter outputs under `input/<novel_id>/.parts/`
- rebuilds `input/<novel_id>/translated/*.txt`
# Translate all discovered origin batches
uv run novel-tts translate novel <novel_id>
# Translate one origin batch file
uv run novel-tts translate novel <novel_id> --file chuong_1-10
# Re-translate even if parts already exist
uv run novel-tts translate novel <novel_id> --file chuong_1-10 --force

Used by queue workers, and also useful for debugging:
uv run novel-tts translate chapter <novel_id> --file chuong_1-10 --chapter 7
# Re-translate only the chapter heading
uv run novel-tts translate chapter <novel_id> --file chuong_1-10 --chapter 7 --heading-only
# Re-translate only the chapter content/body
uv run novel-tts translate content <novel_id> --file chuong_1-10 --chapter 7

Translates captions when they exist under `input/<novel_id>/captions/`:
uv run novel-tts translate captions <novel_id>

Runs a cleanup pass on translated outputs:
uv run novel-tts translate polish <novel_id> --range 101-500
uv run novel-tts translate polish <novel_id> --file chuong_1-10

`translate polish` loads exact-match replacements from:
- `configs/polish_replacement/common.json`
- `configs/polish_replacement/<novel_id>.json`
Novel-specific entries override common keys.
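A minimal sketch of that override behavior (the loader name is hypothetical): later files win on key conflicts because `dict.update` overwrites earlier entries.

```python
import json
from pathlib import Path

def load_polish_replacements(novel_id: str,
                             config_dir: str = "configs/polish_replacement") -> dict:
    """Merge exact-match replacement maps; common.json is loaded first,
    so novel-specific entries override common keys."""
    merged: dict = {}
    for name in ("common.json", f"{novel_id}.json"):
        path = Path(config_dir) / name
        if path.is_file():
            merged.update(json.loads(path.read_text(encoding="utf-8")))
    return merged
```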
Queue translation produces the same on-disk artifacts as direct translation:
- `.parts` chapter files
- rebuilt `translated` batch files
Under the split heading/body contract, `translate polish` polishes heading files and body files separately, then rebuilds `translated/*.txt`.
Migrates legacy inline-heading chapter parts into the split heading/body layout.
uv run novel-tts translate migrate-headings <novel_id>
uv run novel-tts translate migrate-headings <novel_id> --file chuong_1-10 --force

Migrates legacy `origin/*.txt` batches and legacy `.parts` chapter files into the folder-based storage layout.
uv run novel-tts translate migrate-storage-layout <novel_id>
uv run novel-tts translate migrate-storage-layout <novel_id> --batch chuong_1-10 --force
uv run novel-tts translate migrate-storage-layout --all-novels

The queue runtime is now shared across all novels.
`queue launch` reads `.secrets/gemini-keys.txt` and spawns:
- a supervisor
- a status monitor
- workers for configured models/keys
Workers pick jobs from one shared Redis queue and load the matching NovelConfig per job based on the job id prefix:
- chapter job: `{novel_id}::{file_name}::{chapter}`
- captions job: `{novel_id}::captions`
- glossary repair job: `{novel_id}::repair-glossary::{chunk}`
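A sketch of how a worker could split those job id shapes apart (the parser is illustrative; the real worker code may differ):

```python
def parse_job_id(job_id: str) -> dict:
    """Split a shared-queue job id into its parts; the novel id prefix
    tells workers which NovelConfig to load for the job."""
    parts = job_id.split("::")
    novel_id = parts[0]
    if len(parts) == 3 and parts[1] == "repair-glossary":
        return {"novel_id": novel_id, "kind": "repair-glossary", "chunk": parts[2]}
    if len(parts) == 2 and parts[1] == "captions":
        return {"novel_id": novel_id, "kind": "captions"}
    if len(parts) == 3:
        return {"novel_id": novel_id, "kind": "chapter",
                "file_name": parts[1], "chapter": parts[2]}
    raise ValueError(f"unrecognized job id: {job_id}")
```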
Queue workers use the centralized quota gate (central quota v2) to coordinate rate-limit and quota waits across processes.
`quota-supervisor` is global, not per-novel.
Run it in a separate terminal whenever you use queue mode. Without it, jobs can remain in `waiting-quota` and appear stalled.
# Foreground
uv run novel-tts quota-supervisor
# Background daemon
uv run novel-tts quota-supervisor -d
# Stop / restart the background daemon
uv run novel-tts quota-supervisor --stop
uv run novel-tts quota-supervisor --restart

Process-control commands manage running queue processes.
# Launch or restart the shared queue stack
uv run novel-tts queue launch
uv run novel-tts queue restart
# Launch and immediately enqueue one or more novels
uv run novel-tts queue launch --add-queue --novel <novel_id>
uv run novel-tts queue launch --add-queue --novel <novel_a> --novel <novel_b>
# Monitor/status
uv run novel-tts queue monitor
uv run novel-tts queue ps <novel_id>
uv run novel-tts queue ps <novel_id> --all
uv run novel-tts queue ps-all
uv run novel-tts queue ps-all --all -f
# Stop shared queue processes
uv run novel-tts queue stop
uv run novel-tts queue stop --role supervisor,worker
uv run novel-tts queue stop --pid 1234

Behavior notes:
- `queue launch`, `queue supervisor`, `queue monitor`, `queue worker`, `queue stop`, and `queue reset-key` are global
- `queue add`, `queue remove`, `queue repair`, `queue requeue-untranslated-exhausted`, `queue drain`, and `queue ps` still target one novel
- `queue ps-all` shows one consolidated table for the shared worker pool plus a per-novel summary
- shared queue logs live under `.logs/_shared/queue/`
Scheduling commands enqueue work into Redis. They do not start workers.
If the queue stack is not running, nothing will change on disk.
Enqueue chapters for translation. Pass exactly one of `--range`, `--chapters`, `--repair-report`, or `--all`.
Use `--force` to re-translate.
uv run novel-tts queue add <novel_id> --range 2001-2500
uv run novel-tts queue add <novel_id> --range 2004-2016 --force
uv run novel-tts queue add <novel_id> --chapters 1205,1214
uv run novel-tts queue add <novel_id> --repair-report .logs/<novel_id>/crawl/addition-replacement_chapter_list.txt
uv run novel-tts queue add <novel_id> --all

Reset per-key Redis state when a key gets stuck in cooldown/quota/throttle state.
uv run novel-tts queue reset-key --key k5
uv run novel-tts queue reset-key --key k5 --model gemini-3.1-flash-lite-preview
uv run novel-tts queue reset-key --key k5,k6 --model gemma-3-27b-it,gemma-3-12b-it
uv run novel-tts queue reset-key --all

Scans a chapter range and enqueues only broken chapters back into the queue with force re-translate behavior.
Typical reasons:
- placeholder tokens like `ZXQ1156QXZ` / `QZX...QXZ`
- residual Han text
- missing or empty parts
- stale parts where origin is newer
If you do not see changes on disk after running this, ensure the shared queue stack is running and watch progress via `queue monitor` or `queue ps <novel_id>`.
uv run novel-tts queue repair <novel_id> --range 1401-1410
uv run novel-tts queue repair <novel_id> --all

Re-enqueues jobs that exhausted retries but still do not have valid translated output on disk.
uv run novel-tts queue requeue-untranslated-exhausted <novel_id>

Removes pending jobs from Redis without touching inflight work or on-disk translations.
uv run novel-tts queue remove <novel_id> --range 1401-1410
uv run novel-tts queue remove <novel_id> --chapters 1205,1214
uv run novel-tts queue remove <novel_id> --all

Removes all pending jobs for one novel from the shared queue without touching inflight jobs or on-disk translations.
uv run novel-tts queue drain <novel_id>

Glossary repair is a queue-backed maintenance workflow for cleaning or rebuilding large glossary files in chunks.
Typical flow:
- enqueue chunk jobs with `glossary repair`
- inspect progress with `glossary repair-status`
- merge completed chunk outputs back with `glossary repair-merge`
By default it uses the novel glossary from `translation.glossary_file` or `configs/glossaries/<novel_id>/glossary.json`. Use `--glossary-file` to target another file such as an auto glossary.
uv run novel-tts glossary repair <novel_id>
uv run novel-tts glossary repair <novel_id> --chunk-size 200 --force
uv run novel-tts glossary repair-status <novel_id>
uv run novel-tts glossary repair-merge <novel_id>
uv run novel-tts glossary repair-merge <novel_id> --dry-run

Reads `.secrets/gemini-keys.txt` and inspects Redis metrics emitted by queue workers.
Raw keys are never printed.
uv run novel-tts ai-key ps
uv run novel-tts ai-key ps -f
uv run novel-tts ai-key ps --filter k1 --filter 1234
uv run novel-tts ai-key ps --filter-raw "$GEMINI_API_KEY"

Reads the translated chapter corpus under `input/<novel_id>/translated/*.txt`, assembles the requested media batch from overlapping translated files, and writes audio assets under `output/<novel_id>/audio/<range>/`.
Behavior notes:
- per-chapter WAVs are written under `output/<novel_id>/audio/<range>/.parts/`
- per-chapter text hashes are cached under `output/<novel_id>/audio/<range>/.parts/.cache/`
- merged MP3 cache metadata is stored at `output/<novel_id>/audio/<range>/.parts/.cache/merged.sha256`
- if all chapter WAVs are cache hits and `output/<novel_id>/audio/<range>/<range>.mp3` already exists, merge is skipped unless `--force` is used
- if translated text changes, the chapter will be re-synthesized even without `--force`
- media ranges are resolved from `media.media_batch`, while translated files remain rebuildable 10-chapter batches
uv run novel-tts tts <novel_id> --range 1-10
uv run novel-tts tts <novel_id> --range 1-10 --force
uv run novel-tts tts <novel_id> --range 701-800 --tts-server-name onPremise --tts-model-name cpu
uv run novel-tts tts <novel_id> --range 701-800 --re-generate-menu
uv run novel-tts tts <novel_id> --re-generate-menu --all

Optimizes `image/<novel_id>/background.mp4` in place for lighter production assets while keeping the approved background-friendly quality profile.
Behavior notes:
- rewrites `background.mp4` in place after encoding to a temporary file
- keeps only the main video stream
- keeps the original resolution and frame rate
- defaults to HEVC/H.265 with `CRF 24` and `preset slow`
uv run novel-tts background optimize <novel_id>
uv run novel-tts background optimize <novel_id> --crf 24 --preset slow

Builds or refreshes chapter menu files under `output/<novel_id>/subtitle/` from translated chapter headings without re-running TTS.
uv run novel-tts create-menu <novel_id> --range 1-10

Generates the visual layer under `output/<novel_id>/visual/`.
Behavior notes:
- final visual outputs are cached via `output/<novel_id>/visual/.cache/<range>.sha256`
- rerender is skipped when the cached inputs still match the existing visual MP4 + thumbnail PNG
- use `--force` to rerender even when the cache matches
uv run novel-tts visual <novel_id> --range 1-10
uv run novel-tts visual <novel_id> --range 1-10 --force
uv run novel-tts visual <novel_id> --chapter 1

Writes final MP4s under `output/<novel_id>/video/`.
Behavior notes:
- final muxed videos are cached via `output/<novel_id>/video/.cache/<range>.sha256`
- remux is skipped when the cached visual/audio inputs still match the existing final MP4
- use `--force` to remux even when the cache matches
uv run novel-tts video <novel_id> --range 1-10
uv run novel-tts video <novel_id> --range 1-10 --force

Uploads rendered videos by range.
Platforms:
- `youtube`: real upload via OAuth local token + YouTube Data API
- `tiktok`: dry-run payload/validation only
YouTube metadata convention:
- title: `output/<novel_id>/title.txt`
- description: `output/<novel_id>/description.txt` plus `output/<novel_id>/subtitle/chuong_<start>-<end>_menu.txt`
- thumbnail: `output/<novel_id>/visual/chuong_<start>-<end>.png`
- playlist: `output/<novel_id>/playlist.txt`
Default audience/visibility:
- not made for kids
- public
uv run novel-tts upload <novel_id> --platform youtube --range 1-10
uv run novel-tts upload <novel_id> --platform youtube --range 1-10 --dry-run
uv run novel-tts upload <novel_id> --platform tiktok --range 1-10
uv run novel-tts upload <novel_id> --platform youtube --remove-duplicated

YouTube upload pacing can be tuned with:
- `upload.youtube.upload_batch_size`
- `upload.youtube.upload_batch_sleep_seconds`
This helps smooth bursts even though Google mainly documents daily quota units rather than short-window limits.
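A sketch of batch pacing under those two settings (generic batching with a sleep between full batches; not the repo's actual uploader):

```python
import time
from typing import Iterable, Iterator

def paced_batches(items: Iterable, batch_size: int,
                  sleep_seconds: float, sleep=time.sleep) -> Iterator[list]:
    """Yield uploads in fixed-size batches, sleeping between batches to
    smooth request bursts; the final partial batch is yielded as-is."""
    batch: list = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
            sleep(sleep_seconds)
    if batch:
        yield batch
```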
Required files:
- one `client_secrets.json` per account
- one `token.json` per account
Example templates:
- `.secrets/youtube/client_secrets.example.json`
- `.secrets/youtube/token.example.json`
How to get `client_secrets.json`:
- Open Google Cloud Console.
- Create or select a project.
- Enable `YouTube Data API v3`.
- Configure the OAuth consent screen.
- Create OAuth client credentials as a Desktop app.
- Download the JSON and save it as `.secrets/youtube/client_secrets.json`.
How to get `token.json`:
- Ensure `upload.youtube.credentials_path` contains the OAuth client JSON path for each account.
- Ensure `upload.youtube.token_path` contains the matching token JSON path list in the same order.
- Run a first upload or dry-run command:
uv run novel-tts upload <novel_id> --platform youtube --range 1-10 --dry-run
- Complete the browser login/consent flow once.
- The CLI will create the matching `token.json` for the account being used.
Multi-account note:
- `upload video` now auto-selects across all configured YouTube project slots based on remaining daily quota
- before choosing a slot, the uploader estimates quota for duplicate-check + upload + thumbnail + playlist insert and prefers a configured project slot with enough remaining quota
- `upload.youtube.project` is still used by some admin/read-only YouTube commands, but upload selection itself is now quota-driven
- `upload.youtube.credentials_path` and `upload.youtube.token_path` are parallel arrays: entry `0` in `credentials_path` is paired with entry `0` in `token_path`, and so on
- when a YouTube request fails with `quotaExceeded`, upload rotates to the next configured account automatically
- if a saved quota session is missing or expired for a slot, upload automatically re-captures that slot's quota session via the attached debug browser before checking usage
- each slot's last known YouTube quota snapshot is also stored in Redis with `last_sync_time`
- if live quota sync fails, upload temporarily falls back to Redis + estimated spend since the last sync
- if the cached snapshot crosses YouTube's daily quota reset boundary while live sync is unavailable, the Redis snapshot resets itself to the new day before estimating further spend
Inspect accessible YouTube playlists and videos using configured OAuth credentials.
# Playlists
uv run novel-tts youtube playlist
uv run novel-tts youtube playlist --title-only
uv run novel-tts youtube playlist --id PLxxxxxxxx
uv run novel-tts youtube playlist --id 'https://www.youtube.com/playlist?list=PLxxxxxxxx'
# Playlist update
uv run novel-tts youtube playlist update --id PLxxxxxxxx --title 'New title'
uv run novel-tts youtube playlist update --id PLxxxxxxxx --description 'New description'
uv run novel-tts youtube playlist update --id PLxxxxxxxx --privacy-status private
uv run novel-tts youtube playlist update --id PLxxxxxxxx
# Videos
uv run novel-tts youtube video
uv run novel-tts youtube video --title-only
uv run novel-tts youtube video --id xxxxxxxx
# Video update
uv run novel-tts youtube video update --id xxxxxxxx --title 'New title'
uv run novel-tts youtube video update --id xxxxxxxx --description 'New description'
uv run novel-tts youtube video update --id xxxxxxxx --privacy_status private
uv run novel-tts youtube video update --id xxxxxxxx --made_for_kids true
uv run novel-tts youtube video update --id xxxxxxxx --playlist_position 7
uv run novel-tts youtube video update --id xxxxxxxx
# Quota usage
# 1) Capture browser-derived session secrets
uv run novel-tts youtube quota capture --session-slot 1
uv run novel-tts youtube quota capture --session-slot 2
uv run novel-tts youtube quota capture --session-slot 3
uv run novel-tts youtube quota capture --session-slot 1 --remote-debugging-url http://127.0.0.1:9222
uv run novel-tts youtube quota capture --project-id your-gcp-project-id --session-slot 1
# 2) Call HTTP directly using the saved session secret file
uv run novel-tts youtube quota --session-slot 1
uv run novel-tts youtube quota --session-slot 2
uv run novel-tts youtube quota --session-slot 3
uv run novel-tts youtube quota --session-file .secrets/youtube/quota_session-1.json
uv run novel-tts youtube quota --raw
# 3) Inspect shared Redis quota snapshots
uv run novel-tts youtube quota redis --session-slot 1
uv run novel-tts youtube quota redis --session-slot 2
uv run novel-tts youtube all --redis
# Rewrite playlist links in uploaded video descriptions
uv run novel-tts upload <novel_id> --platform youtube --update-playlist-index
uv run novel-tts upload <novel_id> --platform youtube --update-playlist-index --range 1-150
# Reorder videos inside the configured playlist by episode number in title
uv run novel-tts upload <novel_id> --platform youtube --update-playlist-position
# Remove duplicated videos in the configured playlist, keeping the latest uploaded copy per title
uv run novel-tts upload <novel_id> --platform youtube --remove-duplicated

youtube quota notes:
- `youtube quota capture` attaches to an existing Chrome/Chromium debug browser via CDP
- it captures the Cloud Console quota request secrets and stores them at `.secrets/youtube/quota_session-<slot>.json`
- `youtube quota` then calls HTTP directly using the saved secret file
- `youtube quota redis --session-slot N` reads the shared Redis snapshot for one slot
- `youtube all --redis` reads Redis snapshots for all configured slots
- it prints normalized summary fields including `current_usage`, `effective_limit`, and `remaining`
- if the saved session expires, rerun `youtube quota capture`
- if the attached browser is not logged into Google Cloud Console, sign in there first and retry
- YouTube daily quota reset is treated as midnight Pacific Time (PT), matching the YouTube docs
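Because the reset boundary is midnight PT, a cached snapshot and the current time share a quota window only when they fall on the same Pacific calendar date. A sketch using the standard library's `zoneinfo`:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

PT = ZoneInfo("America/Los_Angeles")

def same_quota_day(snapshot: datetime, now: datetime) -> bool:
    """YouTube's daily quota resets at midnight Pacific Time, so two
    aware timestamps share a quota window only if they land on the same
    PT calendar date."""
    return snapshot.astimezone(PT).date() == now.astimezone(PT).date()
```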
How to get the required secrets safely:
- Start Chrome or Chromium with remote debugging enabled and log into Google Cloud Console in that browser profile.
- Run `uv run novel-tts youtube quota capture --session-slot <1|2|3>`.
- If needed, override the inferred project id with `--project-id <your-project-id>`.
- The command saves the required request headers/body into `.secrets/youtube/quota_session-<slot>.json`.
- Future quota reads use `uv run novel-tts youtube quota --session-slot <1|2|3>` without needing browser interaction until the session expires.
Runs multiple stages in order for a given range, with optional `--skip-*` flags for iteration.
By default, `pipeline run` runs upload at the end using `upload.default_platform` (default `youtube`).
Use:
- `--skip-upload` to disable upload
- `--upload-platform` to override the platform
- `--from-stage` / `--to-stage` to run only a contiguous slice of stages
- `--force` to force supported stages to rerun existing work
uv run novel-tts pipeline run <novel_id> --range 1-10
uv run novel-tts pipeline run <novel_id> --range 1-10 --skip-translate
uv run novel-tts pipeline run <novel_id> --range 1-10 --skip-upload
uv run novel-tts pipeline run <novel_id> --range 1-10 --upload-platform tiktok
uv run novel-tts pipeline run <novel_id> --range 1-10 --skip-crawl --skip-translate --skip-tts
uv run novel-tts pipeline run <novel_id> --range 1-10 --from-stage tts
uv run novel-tts pipeline run <novel_id> --range 1-10 --from-stage translate --to-stage video
uv run novel-tts pipeline run <novel_id> --range 1-10 --force

`pipeline watch` is continuous orchestration for ongoing serialized novels:
- checks the remote source for newly published chapters
- crawls only chapters newer than the current highest local crawled chapter
- launches queue-first translation, waits for completion, then runs repair and polish
- reruns TTS on the affected batch range (for example `1251-1260` when chapter `1253` arrives)
- only runs `visual`, `video`, and `upload` once the audio batch has all chapter parts for that range
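With the default 10-chapter batches, mapping an arriving chapter to its batch range is simple arithmetic (the helper name is illustrative):

```python
def batch_range(chapter: int, batch_size: int = 10) -> tuple[int, int]:
    """Map a chapter number to its media batch range; for example,
    chapter 1253 falls in batch 1251-1260 with 10-chapter batches."""
    start = ((chapter - 1) // batch_size) * batch_size + 1
    return start, start + batch_size - 1
```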
Notes:
- for safety, a novel with no local crawled chapters is skipped unless you pass `--bootstrap-from`
- upload completion is remembered in `input/<novel_id>/.progress/watch_pipeline_state.json`
- default watch settings come from `configs/app.yaml > pipeline.watch` and can still be overridden by CLI flags
- `--all` uses `configs/app.yaml > pipeline.watch.novels` when that list is non-empty; otherwise it falls back to all files under `configs/novels/*.yaml`
- `--from-stage` / `--to-stage` let you run only a contiguous slice of watch stages
uv run novel-tts pipeline watch <novel_id>
uv run novel-tts pipeline watch <novel_id> --once
uv run novel-tts pipeline watch <novel_id> --interval-seconds 900
uv run novel-tts pipeline watch --all
uv run novel-tts pipeline watch <novel_id> --bootstrap-from 1201
uv run novel-tts pipeline watch <novel_id> --skip-upload
uv run novel-tts pipeline watch <novel_id> --skip-crawl --skip-translate --skip-repair --skip-polish
uv run novel-tts pipeline watch --all --to-stage polish
uv run novel-tts pipeline watch <novel_id> --from-stage tts --to-stage upload

- if a source blocks plain HTTP, install the Playwright runtime:
uv run playwright install chromium
- if `waiting-quota` never progresses, ensure the global quota supervisor is running: `uv run novel-tts quota-supervisor` or `uv run novel-tts quota-supervisor -d`
- if a specific key gets stuck in cooldown/quota state:
uv run novel-tts queue reset-key --key kN [--model ...]
- if translated outputs look poisoned:
uv run novel-tts queue repair <novel_id> --range <start>-<end>
When in doubt, rely on CLI help:
uv run novel-tts --help
uv run novel-tts queue --help
uv run novel-tts translate --help