Commit e1f1acc

Add unified configuration guide documentation
Introduces docs/UNIFIED_CONFIG_GUIDE.md, detailing the unified config system for Graphiti MCP Server. The guide covers setup, provider mixing, migration from provider-specific configs, troubleshooting, and best practices for configuration management.
1 parent 7d5cfa3 commit e1f1acc

File tree: 1 file changed, +397 -0 lines changed
docs/UNIFIED_CONFIG_GUIDE.md

Lines changed: 397 additions & 0 deletions
# Unified Configuration Guide

This guide explains the unified configuration system for the Graphiti MCP Server. The unified config simplifies setup by allowing you to configure both LLM and embeddings in a single file, with support for mixed providers.

## Overview

The unified configuration system provides a simpler, more flexible way to configure your Graphiti MCP server:

- **Single config file**: `config/config.local.yml` for all settings
- **Mixed providers**: Use different providers for LLM and embeddings
- **Auto-detection**: Provider type detected from `base_url`
- **Backward compatible**: Old provider-specific configs still work

## Quick Start

### 1. Create Your Config File

Copy the template:

```bash
cp config/config.local.yml.example config/config.local.yml
```

Or create your own `config/config.local.yml`:

```yaml
llm:
  model: "claude-sonnet-4-20250514"
  base_url: "https://your-gateway.com"
  temperature: 0.1
  max_tokens: 25000

embedder:
  model: "nomic-embed-text"
  base_url: "http://localhost:11434/v1"
  dimension: 768
```

### 2. Set Environment Variables

```bash
# For Bedrock/OpenAI LLM (required if not using Ollama)
export OPENAI_API_KEY="your-api-key-here"

# For Neo4j (always required)
export NEO4J_URI="bolt://localhost:7687"
export NEO4J_USER="neo4j"
export NEO4J_PASSWORD="your-password"
```

### 3. Start the Server

```bash
uv run src/graphiti_mcp_server.py --transport stdio
```

## Configuration Examples

### Example 1: Bedrock LLM + Local Ollama Embeddings (Recommended)

This is the optimal configuration for most use cases: an enterprise LLM paired with fast local embeddings.

```yaml
llm:
  model: "claude-sonnet-4-20250514"
  base_url: "https://eng-ai-model-gateway.sfproxy.devx-preprod.aws-esvc1-useast2.aws.sfdc.cl"
  temperature: 0.1
  max_tokens: 25000

  model_parameters:
    presence_penalty: 0.0
    frequency_penalty: 0.0
    top_p: 1.0

embedder:
  model: "nomic-embed-text"
  base_url: "http://localhost:11434/v1"
  dimension: 768

  model_parameters:
    num_ctx: 4096
```

**Required:**
- `OPENAI_API_KEY` for the Bedrock gateway
- Ollama running locally with the `nomic-embed-text` model

**Benefits:**
- Enterprise-grade LLM for complex reasoning
- Fast local embeddings (no API latency)
- Lower cost (embeddings are free)
- Works offline for embedding generation

### Example 2: All Ollama (Local Development)

Perfect for development, testing, or offline work.

```yaml
llm:
  model: "llama3.1:8b"
  base_url: "http://localhost:11434/v1"
  temperature: 0.1
  max_tokens: 10000

  model_parameters:
    num_ctx: 4096
    keep_alive: "5m"

embedder:
  model: "nomic-embed-text"
  base_url: "http://localhost:11434/v1"
  dimension: 768
```

**Required:**
- Ollama running locally
- Models pulled: `ollama pull llama3.1:8b` and `ollama pull nomic-embed-text`

**Benefits:**
- Completely free
- Works offline
- Fast iteration
- No API keys needed

### Example 3: All OpenAI/Bedrock (Enterprise)

Full enterprise gateway setup.

```yaml
llm:
  model: "gpt-4o"
  base_url: "https://your-enterprise-gateway.com"
  temperature: 0.1
  max_tokens: 8192

embedder:
  model: "text-embedding-3-small"
  base_url: "https://your-enterprise-gateway.com/bedrock/embeddings"
  dimension: 1536
```

**Required:**
- `OPENAI_API_KEY` for gateway access

**Benefits:**
- Centralized billing/monitoring
- Enterprise security compliance
- Consistent provider

### Example 4: Azure OpenAI

For Azure-hosted OpenAI services.

```yaml
llm:
  model: "gpt-4" # Your deployment model
  temperature: 0.1
  max_tokens: 8192

embedder:
  model: "text-embedding-3-small"
  dimension: 1536
```

**Required Environment Variables:**
```bash
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
export AZURE_OPENAI_API_VERSION="2024-02-01"
export AZURE_OPENAI_DEPLOYMENT_NAME="your-deployment"
export OPENAI_API_KEY="your-azure-key"

# For embeddings (if separate deployment)
export AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME="your-embedding-deployment"
```

## How Provider Detection Works

The system automatically detects which provider to use based on the `base_url`:

| URL Pattern | Detected Provider |
|-------------|-------------------|
| `localhost:11434` or `127.0.0.1:11434` | Ollama |
| `azure.com` in hostname | Azure OpenAI |
| Everything else | OpenAI-compatible |

**Note:** Detection is fully automatic based on the `base_url` in your configuration. No environment variables are needed for detection. Each component (LLM and embedder) is detected independently, allowing mixed provider configurations.
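
For illustration, here is a minimal sketch of that heuristic in Python. The `detect_provider` helper is hypothetical; the server's actual implementation may use different names and additional checks.

```python
# Hypothetical sketch of the detection heuristic described above; the real
# server code may differ in names and details.
from urllib.parse import urlparse


def detect_provider(base_url: str | None) -> str:
    """Guess the provider family from a configured base_url."""
    if not base_url:
        return "openai"  # no URL configured: treat as OpenAI-compatible
    parsed = urlparse(base_url)
    host, port = parsed.hostname or "", parsed.port
    if host in ("localhost", "127.0.0.1") and port == 11434:
        return "ollama"
    if host.endswith("azure.com"):
        return "azure_openai"
    return "openai"  # everything else is OpenAI-compatible


# The LLM and embedder URLs are checked independently, which is what makes
# mixed-provider setups possible:
print(detect_provider("http://localhost:11434/v1"))  # ollama
print(detect_provider("https://your-gateway.com"))   # openai
```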

## Configuration Hierarchy

Settings are loaded with this precedence, from highest to lowest (a resolution sketch follows the list):

1. **CLI arguments** (e.g., `--model`, `--temperature`)
2. **Environment variables** (e.g., `OPENAI_API_KEY`, `OLLAMA_BASE_URL`)
3. **Unified config** (`config/config.local.yml`)
4. **Provider-specific configs** (`config/providers/*.local.yml`, deprecated)
5. **Default values** (hardcoded fallbacks)
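
As a rough illustration of that order, the sketch below resolves a single setting. The function and the `LLM_TEMPERATURE` variable name are hypothetical, not part of the server's actual API.

```python
# Illustrative only: LLM_TEMPERATURE and this helper are assumptions,
# not the server's real resolution logic.
import os

import yaml  # PyYAML


def resolve_temperature(cli_value=None, config_path="config/config.local.yml"):
    """Resolve one setting following the precedence listed above."""
    if cli_value is not None:            # 1. CLI argument wins
        return cli_value
    env_value = os.getenv("LLM_TEMPERATURE")
    if env_value is not None:            # 2. Environment variable
        return float(env_value)
    if os.path.exists(config_path):      # 3. Unified config file
        with open(config_path) as f:
            data = yaml.safe_load(f) or {}
        value = (data.get("llm") or {}).get("temperature")
        if value is not None:
            return value
    # 4. Provider-specific configs (deprecated) would be consulted here.
    return 0.1                           # 5. Hardcoded default
```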

## Migrating from Provider-Specific Configs

If you have existing provider-specific configs:

### Old Setup (Multiple Files)
```
config/providers/openai.local.yml
config/providers/ollama.local.yml
```

### New Setup (Single File)
```
config/config.local.yml
```

**Migration Steps:**

1. Create `config/config.local.yml`
2. Copy your LLM settings from the old file
3. Copy your embedder settings (they can come from a different provider file; see the sketch after these steps)
4. Test with `uv run src/graphiti_mcp_server.py --transport stdio`
5. Remove the old provider-specific files (optional; they are still supported)
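
If you prefer to script the merge, a hypothetical one-off helper might look like this. It assumes the old provider files already use the same top-level `llm`/`embedder` keys as the unified format.

```python
# Hypothetical one-off migration helper (not shipped with the server).
import yaml

with open("config/providers/openai.local.yml") as f:
    openai_cfg = yaml.safe_load(f) or {}
with open("config/providers/ollama.local.yml") as f:
    ollama_cfg = yaml.safe_load(f) or {}

# Keep the enterprise LLM block, take embeddings from Ollama.
merged = {
    "llm": openai_cfg.get("llm", {}),
    "embedder": ollama_cfg.get("embedder", {}),
}

with open("config/config.local.yml", "w") as f:
    yaml.safe_dump(merged, f, sort_keys=False)
```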

## Advanced Configuration

### Model-Specific Parameters

Each provider supports specific parameters in `model_parameters`:

**OpenAI/Bedrock:**
```yaml
model_parameters:
  presence_penalty: 0.0
  frequency_penalty: 0.0
  top_p: 1.0
  n: 1
  stream: false
```

**Ollama:**
```yaml
model_parameters:
  num_ctx: 4096
  num_predict: -1
  repeat_penalty: 1.1
  top_k: 40
  top_p: 0.9
  keep_alive: "5m"
```

### Multiple Environments

Use different config files for different environments:

```bash
# Development
cp config/config.dev.yml config/config.local.yml

# Production
cp config/config.prod.yml config/config.local.yml
```

Or use environment-specific variable prefixes:

```bash
# Development
export DEV_OPENAI_API_KEY="dev-key"

# Production
export PROD_OPENAI_API_KEY="prod-key"
```
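
The server does not read prefixed variables on its own; a small wrapper along the following lines could map them onto the variable the server expects. The `APP_ENV` name and the `DEV_`/`PROD_` prefix scheme are assumptions for illustration.

```python
# Illustration only: APP_ENV and the DEV_/PROD_ prefix scheme are assumptions,
# not something the server understands by itself.
import os

env = os.getenv("APP_ENV", "DEV").upper()          # e.g. DEV or PROD
prefixed_key = os.getenv(f"{env}_OPENAI_API_KEY")  # DEV_OPENAI_API_KEY, ...
if prefixed_key:
    # Expose it under the name the server actually reads.
    os.environ["OPENAI_API_KEY"] = prefixed_key
```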

## Troubleshooting

### Config Not Loading

**Symptom:** Server uses default values instead of your config

**Solution:**
```bash
# Check if config.local.yml exists
ls -la config/config.local.yml

# Check YAML syntax
python3 -c "import yaml; print(yaml.safe_load(open('config/config.local.yml')))"

# Check logs for warnings
uv run src/graphiti_mcp_server.py --transport stdio 2>&1 | grep -i "config"
```

### Wrong Provider Detected

**Symptom:** Server uses Ollama when you want OpenAI, or vice versa

**Solution:**
```bash
# Check your base_url in config.local.yml
# Provider is auto-detected from the URL:
# localhost:11434 -> Ollama
# azure.com -> Azure OpenAI
# everything else -> OpenAI-compatible

# If you want to use Ollama, set base_url to:
# base_url: "http://localhost:11434/v1"

# If you want to use OpenAI/Bedrock, set base_url to your gateway:
# base_url: "https://your-gateway.com"
```

### Missing API Key

**Symptom:** `OPENAI_API_KEY must be set` error

**Solution:**
```bash
# Check if env var is set
echo $OPENAI_API_KEY

# Set it properly
export OPENAI_API_KEY="your-key-here"

# Or add to .env file
echo 'OPENAI_API_KEY="your-key-here"' >> .env
```

### Ollama Connection Failed

**Symptom:** `Connection refused` to `localhost:11434`

**Solution:**
```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama
ollama serve

# Check if model is available
ollama list | grep nomic-embed-text

# Pull model if missing
ollama pull nomic-embed-text
```

## Best Practices

### 1. Keep Secrets in Environment Variables

**DO:**
```yaml
# config.local.yml
llm:
  model: "gpt-4"
  base_url: "https://api.openai.com/v1"
```

```bash
# .env
OPENAI_API_KEY="sk-..."
```

**DON'T:**
```yaml
# config.local.yml
llm:
  api_key: "sk-..." # Never commit API keys!
```

### 2. Use .gitignore

Add to `.gitignore`:
```
config/config.local.yml
config/**/*.local.yml
.env
```

### 3. Document Your Setup

Create a `config/README.md` or `config/config.local.yml.example` with:
- Which providers you're using
- Required environment variables
- Setup instructions for new team members

### 4. Test Configuration Changes

```bash
# Always test after config changes
uv run src/graphiti_mcp_server.py --transport stdio

# Check the startup logs for:
# "Using [Provider] LLM: [model] at [url]"
# "Using [Provider] embedder: [model] at [url]"
```

## See Also

- [Mixed Provider Setup Guide](MIXED_PROVIDER_SETUP.md) - Detailed guide for mixing providers
- [Ollama Setup Guide](../README.md#ollama-setup) - Installing and configuring Ollama
- [Configuration Reference](../config/README.md) - All configuration options
