# ═══════════════════════════════════════════════════════════════
# Tofu (豆腐) — Environment Configuration
# ═══════════════════════════════════════════════════════════════
#
# Copy this file to .env and fill in your values:
#
# cp .env.example .env
#
# Then start the server:
#
# python server.py
#
# Priority: Settings UI > environment variables > defaults
# (Any setting you configure in the web UI takes precedence)
#
# Lines starting with # are comments. Remove # to enable a variable.
# ═══════════════════════════════════════════════════════════════
# ── LLM Provider ──────────────────────────────────────────────
# The easiest way to configure a provider is the Settings UI (⚙️ → Providers).
# These env vars serve as fallback for headless / Docker deployments.
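# For Docker, the same variables can be passed at run time rather than
# baked into an image. A sketch, assuming a locally built image named
# "tofu" (adjust the image name and port mapping to your setup):
#   docker run --env-file .env -p 15000:15000 tofu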
# API key (required — comma-separated for multiple keys)
# LLM_API_KEYS=sk-key1,sk-key2,sk-key3
# Single key (legacy form, still supported)
# LLM_API_KEY=sk-your-key-here
# API endpoint (default: https://api.openai.com/v1)
# LLM_BASE_URL=https://api.openai.com/v1
# Default model (default: gpt-4o)
# LLM_MODEL=gpt-4o
# Fallback model — used when the primary model fails (default: disabled)
# FALLBACK_MODEL=gpt-4o-mini
# ── Server ────────────────────────────────────────────────────
# Server port (default: 15000, auto-increments if occupied)
# PORT=15000
# Bind address (default: 0.0.0.0)
# BIND_HOST=0.0.0.0
# Flask debug mode (default: 0)
# FLASK_DEBUG=0
# ── Tunnel Authentication ─────────────────────────────────────
# Set to enable token-based auth for public tunnel access (e.g.
# VS Code port forwarding, ngrok). Leave empty for LAN-only mode.
# TUNNEL_TOKEN=your-secret-token
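# One way to generate a strong random token (paste the output here):
#   openssl rand -hex 32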
# ── Proxy (if behind a corporate firewall) ────────────────────
# HTTP_PROXY=http://proxy.example.com:8080
# HTTPS_PROXY=http://proxy.example.com:8080
# PROXY_BYPASS_DOMAINS=.internal.example.com,.corp.example.com
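# These can also be set per-run in the shell instead of stored here:
#   HTTPS_PROXY=http://proxy.example.com:8080 python server.py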
# ── Feishu / Lark Bot (optional) ──────────────────────────────
# Create an app at https://open.feishu.cn/app with Bot capability.
# FEISHU_APP_ID=cli_xxx
# FEISHU_APP_SECRET=xxx
# ── Feature Flags ─────────────────────────────────────────────
# Enable trading advisor module (default: 0 = off)
# TRADING_ENABLED=0
# Enable debug mode (default: 0 = off)
# DEBUG_MODE=0
# ── Search & Fetch ────────────────────────────────────────────
# These can also be configured in Settings UI (⚙️ → Search & Fetch).
# Number of search results to auto-fetch (default: 6)
# FETCH_TOP_N=6
# Per-page fetch timeout in seconds (default: 15)
# FETCH_TIMEOUT=15
# Max characters per page for search results (default: 60000)
# FETCH_MAX_CHARS_SEARCH=60000
# Max characters per page for direct URL fetch (default: 200000)
# FETCH_MAX_CHARS_DIRECT=200000
# ── PDF Parsing ───────────────────────────────────────────────
# Default text-extract strategy for /api/pdf/parse:
#   rich       — pymupdf4llm (default; ships out of the box)
#   structured — IBM Docling (better tables + math on academic PDFs).
#                Requires `pip install docling` (~2 GB; pulls torch).
#                If docling is missing or fails, the server falls back
#                to rich automatically — uploads never break.
#   fast       — raw pymupdf get_text (no Markdown structure, ~50× faster)
# PDF_TEXT_MODE=rich
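# Like any variable here, the mode can be set inline for a single run,
# e.g. to try docling without editing .env:
#   PDF_TEXT_MODE=structured python server.py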
# VLM PDF parser tuning (applies when VLM mode is used).
# Pages per single VLM call (1–16). Default 4. Larger = fewer HTTP
# round-trips and less 429 thrash, but more output tokens per call.
# PDF_VLM_BATCH_PAGES=4
# Cap on concurrent VLM calls. Default = unlimited (one thread per
# batch). Lower this on shared keys to avoid 429 storms.
# PDF_VLM_MAX_WORKERS=8
# Output token cap per VLM call. Default scales with batch (4096/page).
# PDF_VLM_MAX_TOKENS=16384
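# Worked example of how these interact: with PDF_VLM_BATCH_PAGES=4 the
# default token cap is 4 × 4096 = 16384 (the value shown above). A
# 32-page PDF then takes 8 VLM calls; PDF_VLM_MAX_WORKERS=8 runs all 8
# at once, while a lower value queues the rest.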