
Configuration

Bell Eapen edited this page Nov 19, 2025 · 3 revisions

Configuration: Swapping LLMs, Vector DBs, and More

This page explains how to configure LLMs, vector databases, and other settings in DHTI. Most configuration is done via bootstrap.py and environment variables in docker-compose.yml. The default working directory is ~/dhti (on Windows, use %USERPROFILE%\dhti).


1. Main Configuration File: bootstrap.py

NOTE: The template elixir is already configured to look for Google Gemini and OpenAI. See the template's default Bootstrap file

The main configuration file for all installed elixirs is ~/dhti/elixir/app/bootstrap.py. This file overrides any modular or default settings (as in the template elixir above). Use it to:

  • Swap LLM providers (Ollama, OpenAI, Gemini, etc.)
  • Change model names, hyperparameters, or prompt templates
  • Configure tool integrations (vector DB, LangFuse, etc.)

Example: Switching to Google Gemini

from kink import di  # dependency-injection container used by DHTI elixirs
from dotenv import load_dotenv
from langchain_core.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI

def bootstrap():
    load_dotenv()  # picks up GOOGLE_API_KEY etc. from the environment
    llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
    di["main_prompt"] = PromptTemplate.from_template(
        "Summarize the following in 100 words: {input}"
    )
    di["template_main_llm"] = llm
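Once registered, other parts of the elixir can resolve these objects from `di` and compose them with LangChain's LCEL pipe operator. A minimal sketch (it assumes `bootstrap()` has already run and that `di` is the kink container used by dhti-elixir-base):

```python
from kink import di

# Compose the registered prompt and LLM into a runnable chain
chain = di["main_prompt"] | di["template_main_llm"]
result = chain.invoke({"input": "A long clinical note ..."})
```

Because the prompt and LLM are looked up by key rather than imported directly, swapping the provider in bootstrap.py changes every chain that uses these keys.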

Tip: After editing, apply the new config with:

npx dhti-cli docker bootstrap -f ~/dhti/elixir/app/bootstrap.py -c dhti-langserve-1

2. Adding Dependencies

If you use a new LLM or tool, add its dependency to ~/dhti/elixir/pyproject.toml:

dependencies = [
  "dhti-elixir-base>=1.2.0",
  "fhiry>=5.2.1",
  "langchain-google-genai",
  "langchain-openai",
]
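After rebuilding, you can sanity-check that the new packages are importable inside the container. A small stdlib-only helper (the package names in the comment mirror the dependencies above):

```python
import importlib

def check_imports(packages):
    """Return {package: bool} indicating whether each module imports cleanly."""
    results = {}
    for pkg in packages:
        try:
            importlib.import_module(pkg)
            results[pkg] = True
        except ImportError:
            results[pkg] = False
    return results

# e.g. inside the langserve container:
# check_imports(["langchain_google_genai", "langchain_openai"])
```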

3. Setting Environment Variables in ~/dhti/docker-compose.yml

Set API keys, service URLs, and other secrets in the environment: section for each service. Example:

langserve:
  image: beapen/genai-test:1.0
  ports:
    - '8001:8001'
  environment:
    - OLLAMA_SERVER_URL=http://ollama:11434
    - GOOGLE_API_KEY=YourAPIKey
    - LANGFUSE_HOST=http://langfuse:3000
    - LANGFUSE_PUBLIC_KEY=pk-lf-abcd
    - LANGFUSE_SECRET_KEY=sk-lf-abcd
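Inside bootstrap.py these variables can be read with the standard library. A minimal sketch (the variable names match the compose file above; the helper function is illustrative, not part of DHTI):

```python
import os

def get_required_env(name: str) -> str:
    """Read a variable set in docker-compose.yml, failing fast if it is absent."""
    value = os.getenv(name)
    if value is None:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Optional variables can fall back to a sensible default instead:
ollama_url = os.getenv("OLLAMA_SERVER_URL", "http://ollama:11434")
```

Failing fast on missing secrets at bootstrap time gives a clearer error than a connection failure deep inside a chain.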

Note: On Windows, use %USERPROFILE%\dhti for paths.


4. Example: Configuring a Vector Database (Redis)

bootstrap.py

from kink import di  # dependency-injection container used by DHTI elixirs
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Redis

# from_existing_index also needs the embedding model and the schema
# saved when the index was created
embeddings = OllamaEmbeddings(base_url="http://ollama:11434", model="llama3")
vectorstore = Redis.from_existing_index(
    embedding=embeddings,
    index_name="my-index",
    schema="redis_schema.yaml",
    redis_url="redis://redis:6379/0",
)
di["vectorstore"] = vectorstore
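Other elixirs can then resolve the store from `di` and query it. A sketch using the standard LangChain vector-store API (requires a running Redis with a populated index; the query string is just an example):

```python
from kink import di

vectorstore = di["vectorstore"]
# Direct similarity search over the indexed documents
docs = vectorstore.similarity_search("diabetes management guidelines", k=3)
# Or wrap the store as a retriever for use in a RAG chain
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})
```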

5. Restarting Services

After changing configuration or environment variables, restart affected containers:

npx dhti-cli docker -d  # Stop and remove
npx dhti-cli docker -u  # Start again

6. Platform Notes

  • Linux/macOS: Use ~/dhti for all paths.
  • Windows: Use %USERPROFILE%\dhti and adjust path separators as needed.

7. More Examples

  • Switching to Ollama LLM:
    • Set OLLAMA_SERVER_URL in docker-compose.yml.
    • In bootstrap.py, use the appropriate LangChain Ollama integration.
  • Adding Neo4j:
    • Add a neo4j service and configure connection in bootstrap.py.
  • Multiple Elixirs/Conchs:
    • Each can have its own config and environment variables.
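For the Ollama case above, a minimal bootstrap.py sketch (it assumes the `langchain-ollama` package has been added to pyproject.toml, `OLLAMA_SERVER_URL` is set as in section 3, and the named model has been pulled into the Ollama container):

```python
import os
from kink import di  # dependency-injection container used by DHTI elixirs
from langchain_ollama import ChatOllama

def bootstrap():
    # Point at the ollama service defined in docker-compose.yml
    llm = ChatOllama(
        model="llama3",  # any model available in the Ollama container
        base_url=os.getenv("OLLAMA_SERVER_URL", "http://ollama:11434"),
    )
    di["template_main_llm"] = llm
```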

For more, see the LangChain docs.
