Local RAG-powered chatbot for Indian travel destinations using Ollama + ChromaDB.
- Python 3.10+
- Ollama installed
```bash
# Clone the repository
git clone https://github.com/your-repo/shivYatra.git
cd shivYatra

# Install Python dependencies
pip install -r app/requirements.txt

# Install Ollama (Linux)
curl -fsSL https://ollama.ai/install.sh | sh

# Pull the model
ollama pull qwen2.5:1.5b

# Start Ollama (keep it running)
ollama serve
```

Then, in a separate terminal, start the app:

```bash
python app/run.py
```

Open http://localhost:5000 in your browser.
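If the chatbot does not respond, you can confirm that Ollama is actually serving the model by calling its local HTTP API directly. This is a quick sanity check assuming Ollama's default port 11434; the prompt text is just an example.

```python
import json
import urllib.request

# Sanity check against Ollama's local HTTP API (default port 11434).
payload = {
    "model": "qwen2.5:1.5b",
    "prompt": "Name one popular travel destination in India.",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```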
```
shivYatra/
├── app/
│   ├── api/server.py         # Flask server
│   ├── core/rag_engine.py    # RAG pipeline
│   ├── config/               # Configuration
│   ├── web/templates/        # Chat UI
│   └── run.py                # Entry point
├── data/                     # Tourism data
├── database/                 # ChromaDB vectors
└── notebooks/                # Data processing
```
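Per the tree above, `app/run.py` is the entry point and `app/api/server.py` hosts the Flask server that serves the chat UI and calls the RAG pipeline. The sketch below is illustrative only; the route name, template name, and helper function are assumptions, not the project's actual code.

```python
from flask import Flask, jsonify, render_template, request

app = Flask(__name__, template_folder="../web/templates")

def answer_question(question: str) -> str:
    # Stub: in the real project this would call the RAG pipeline in
    # app/core/rag_engine.py (retrieve from ChromaDB, then ask Ollama).
    return f"(RAG answer for: {question})"

@app.route("/")
def index():
    # Serve the chat UI from app/web/templates/ (template name assumed).
    return render_template("index.html")

@app.route("/chat", methods=["POST"])
def chat():
    # Hypothetical JSON endpoint: {"message": "..."} -> {"answer": "..."}
    question = request.json.get("message", "")
    return jsonify({"answer": answer_question(question)})

if __name__ == "__main__":
    app.run(port=5000)
```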
Edit `app/config/rag_config.py`:

```python
OLLAMA_CONFIG = {
    "model": "qwen2.5:1.5b",   # change the LLM model here
    "temperature": 0.7,
    "max_tokens": 1000,
}
```
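How these values reach Ollama depends on the engine code, but conceptually they map onto a single generation call. Below is a hedged sketch using the official `ollama` Python client; the import path is assumed from the tree above, and the real project may call the HTTP API instead.

```python
import ollama

from app.config.rag_config import OLLAMA_CONFIG  # import path assumed

def generate(prompt: str) -> str:
    # Map the config onto an Ollama chat call; "num_predict" is Ollama's
    # option name for the maximum number of generated tokens.
    response = ollama.chat(
        model=OLLAMA_CONFIG["model"],
        messages=[{"role": "user", "content": prompt}],
        options={
            "temperature": OLLAMA_CONFIG["temperature"],
            "num_predict": OLLAMA_CONFIG["max_tokens"],
        },
    )
    return response["message"]["content"]
```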
| Issue | Fix |
|---|---|
| Ollama not running | Run `ollama serve` in a terminal |
| Model not found | Run `ollama pull qwen2.5:1.5b` |
| Port 5000 already in use | Kill the process: `lsof -ti:5000 \| xargs kill` |
- LLM: Ollama (qwen2.5:1.5b)
- Vector DB: ChromaDB
- Embeddings: all-MiniLM-L6-v2
- Backend: Flask
- Frontend: HTML/CSS/JS
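A minimal sketch of how these pieces fit together is shown below. The collection name and query are assumptions for illustration; the actual pipeline lives in `app/core/rag_engine.py` and the persisted vectors in `database/`. ChromaDB's default embedding function is Sentence Transformers all-MiniLM-L6-v2, matching the stack above.

```python
import chromadb
import ollama

# Open the persisted vector store (path and collection name assumed).
client = chromadb.PersistentClient(path="database")
collection = client.get_or_create_collection("destinations")

def ask(question: str) -> str:
    # 1. Retrieve the most relevant tourism snippets from ChromaDB.
    hits = collection.query(query_texts=[question], n_results=3)
    context = "\n".join(hits["documents"][0])

    # 2. Ask the local LLM to answer using only the retrieved context.
    prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
    reply = ollama.chat(
        model="qwen2.5:1.5b",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply["message"]["content"]

print(ask("What is the best time to visit Varanasi?"))
```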