Configuration
This page explains how to configure LLMs, vector databases, and other settings in DHTI. Most configuration is done via bootstrap.py and environment variables in docker-compose.yml. The default working directory is ~/dhti (on Windows, use %USERPROFILE%\dhti).
NOTE: The template elixir is already configured to look for Google Gemini and OpenAI. See the template's default Bootstrap file
The main configuration file for all installed elixirs is ~/dhti/elixir/app/bootstrap.py. This file overrides any modular or default settings (as in the template elixir above). Use it to:
- Swap LLM providers (Ollama, OpenAI, Gemini, etc.)
- Change model names, hyperparameters, or prompt templates
- Configure tool integrations (vector DB, LangFuse, etc.)
Example: Switching to Google Gemini

```python
from dotenv import load_dotenv
from langchain.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI


def bootstrap():
    load_dotenv()
    llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
    di["main_prompt"] = PromptTemplate.from_template(
        "Summarize the following in 100 words: {input}"
    )
    # di is the dependency-injection container provided by the elixir framework
    di["template_main_llm"] = llm
```

Tip: After editing, apply the new config with:

```shell
npx dhti-cli docker bootstrap -f ~/dhti/elixir/app/bootstrap.py -c dhti-langserve-1
```
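Swapping providers follows the same pattern. Here is a minimal sketch for OpenAI; the model name is an assumption, and `di` is the dependency-injection container provided by the elixir framework:

```python
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI


def bootstrap():
    # Expects OPENAI_API_KEY in the environment or a local .env file.
    load_dotenv()
    # Model name is illustrative; use whichever OpenAI model you have access to.
    llm = ChatOpenAI(model="gpt-4o-mini")
    di["template_main_llm"] = llm
```

Apply it with the same `npx dhti-cli docker bootstrap` command shown above.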
If you use a new LLM or tool, add its dependency to ~/dhti/elixir/pyproject.toml:

```toml
dependencies = [
    "dhti-elixir-base>=1.2.0",
    "fhiry>=5.2.1",
    "langchain-google-genai",
    "langchain-openai",
]
```

Set API keys, service URLs, and other secrets in the `environment:` section for each service in docker-compose.yml. Example:
```yaml
langserve:
  image: beapen/genai-test:1.0
  ports:
    - '8001:8001'
  environment:
    - OLLAMA_SERVER_URL=http://ollama:11434
    - GOOGLE_API_KEY=YourAPIKey
    - LANGFUSE_HOST=http://langfuse:3000
    - LANGFUSE_PUBLIC_KEY=pk-lf-abcd
    - LANGFUSE_SECRET_KEY=sk-lf-abcd
```
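Inside the langserve container, the values from the `environment:` section are ordinary process environment variables, so bootstrap.py can read them directly. A small sketch (the fallback URLs simply repeat the compose defaults above):

```python
import os

# docker-compose injects these variables into the container; the second
# argument is only a fallback for local runs outside compose.
ollama_url = os.getenv("OLLAMA_SERVER_URL", "http://ollama:11434")
langfuse_host = os.getenv("LANGFUSE_HOST", "http://langfuse:3000")
```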
bootstrap.py

```python
from langchain_community.vectorstores import Redis
from langchain_openai import OpenAIEmbeddings  # embedding model is an example

# Redis.from_existing_index needs the embedding model that was used to build
# the index; depending on your langchain-community version, a `schema`
# argument may also be required.
vectorstore = Redis.from_existing_index(
    embedding=OpenAIEmbeddings(),
    index_name="my-index",
    redis_url="redis://redis:6379/0",
)
di["vectorstore"] = vectorstore
```

After changing configuration or environment variables, restart affected containers:
```shell
npx dhti-cli docker -d   # Stop and remove
npx dhti-cli docker -u   # Start again
```
- Linux/macOS: Use ~/dhti for all paths.
- Windows: Use %USERPROFILE%\dhti and adjust path separators as needed.
- Switching to Ollama LLM: Set OLLAMA_SERVER_URL in docker-compose.yml, and in bootstrap.py use the appropriate LangChain Ollama integration.
- Adding Neo4j: Add a neo4j service and configure the connection in bootstrap.py.
- Multiple Elixirs/Conchs: Each can have its own config and environment variables.
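The Ollama switch above can be sketched in bootstrap.py as follows. This assumes the langchain-ollama package has been added to pyproject.toml; the model name is illustrative, and `di` is the dependency-injection container provided by the elixir framework:

```python
import os

from langchain_ollama import ChatOllama


def bootstrap():
    # OLLAMA_SERVER_URL comes from docker-compose.yml (see above).
    llm = ChatOllama(
        model="llama3.1",  # illustrative; use a model pulled into your Ollama server
        base_url=os.getenv("OLLAMA_SERVER_URL", "http://ollama:11434"),
    )
    di["template_main_llm"] = llm
```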
For more, see the LangChain docs.