LifeNode is a local-first, self-hosted knowledge workspace.
It runs as a single web app with:
- Rust backend (`axum` + `sqlx` + SQLite)
- React frontend (Vite + MUI)
- llama.cpp services for chat + embeddings
- Per-user local storage under `./user-files`
- Authenticated multi-user app with optional admin bootstrap
- Wiki tab: Kiwix Wikipedia download center + embedded Kiwix viewer.
- Maps tab: OpenStreetMap download center + embedded OSM visualizer.
- Download job system for maps/wiki payloads (queue, progress, cancel, delete logs, delete files).
- Ask tab with:
  - chat sidebar and persistent thread history
  - thread create/rename/delete
  - streaming assistant responses over SSE (`/api/ask/stream`)
  - optional model "thinking" mode
- Notes tab with folder tree, file management, markdown editing, and live preview.
- Drive tab with folder/file operations and rich file preview; the viewer supports Markdown, text, PDF, image, audio, and video formats.
- Calendar tab for personal events.
- Health indicator for backend + model status.
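For reference, the Ask tab's streaming endpoint delivers responses as Server-Sent Events, i.e. payloads on `data:` lines. A minimal sketch of extracting those payloads with standard tools — the canned input below stands in for the real stream, whose exact event format is an assumption:

```shell
# Extract SSE "data:" payloads from a stream.
# Real usage would pipe from the API instead of printf, e.g.:
#   curl -N http://<your-private-ip>:8000/api/ask/stream ... | sed -n 's/^data: //p'
printf 'data: Hello\n\ndata: world\n\n' | sed -n 's/^data: //p'
```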
```
Browser (React + MUI)
        |
        v
Rust API (Axum) ---> SQLite (user/files + metadata)
        |
        +---> llama-qwen  (chat, OpenAI-compatible endpoint)
        +---> llama-embed (embeddings, OpenAI-compatible endpoint)
        +---> Kiwix service (serves downloaded .zim files)
```
- Clone and enter the repo.
- Create env file and required directories.
- Place GGUF model files.
- Build and start all services.
```shell
cp .env.example .env
mkdir -p user-files models

# put your models in ./models
# - Qwen3.5-0.8B-UD-Q3_K_XL.gguf
# - embeddinggemma-300M-Q8_0.gguf

docker compose up --build -d
docker compose ps
```

Open:
- App: `http://<your-private-ip>:8000`
- Kiwix direct (optional): `http://<your-private-ip>:8081`
Find your Wi-Fi private IP (Linux):

```shell
ip -4 addr show wlan0 | awk '/inet / {print $2}' | cut -d/ -f1
```

If your Wi-Fi interface is not `wlan0`, replace it (for example `wlp2s0`).
`compose.yml` runs 4 services:

- `lifenode`: backend API + bundled frontend
- `llama-qwen`: chat inference
- `llama-embed`: embeddings
- `kiwix`: serves local `.zim` files on port `8081`
- Database: `./user-files/lifenode.db`
- Per-user maps/wiki files: `./user-files/<username>/maps/...`
- Per-user drive files: `./user-files/<username>/files/...`
- Other per-user assets are stored under `./user-files/<username>/...`
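The per-user layout above can be sketched as simple path construction; the username `alice` below is a hypothetical example, and `USER_FILES_DIR` mirrors the local default (in Docker it is `/data/user-files`):

```shell
# Sketch of LifeNode's on-disk layout under the user-files directory.
USER_FILES_DIR=./user-files                      # LIFENODE_USER_FILES_DIR
username=alice                                   # hypothetical user
db_path="$USER_FILES_DIR/lifenode.db"            # shared SQLite database
maps_dir="$USER_FILES_DIR/$username/maps"        # per-user maps/wiki payloads
drive_dir="$USER_FILES_DIR/$username/files"      # per-user drive files
echo "$db_path $maps_dir $drive_dir"
```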
Backend:

```shell
cd backend
cargo run
```

Frontend:

```shell
cd frontend
npm install
npm run dev -- --host
```

Optional frontend API override:

```shell
export VITE_API_BASE=http://127.0.0.1:8000/api
```

General:
- `LIFENODE_HOST` (default `0.0.0.0`)
- `LIFENODE_PORT` (default `8000`)
- `LIFENODE_CORS_ORIGINS` (default `*`)
- `LIFENODE_MAX_UPLOAD_MB` (default `100`)
Storage:
- `LIFENODE_DB_PATH` (default `/data/user-files/lifenode.db`)
- `LIFENODE_USER_FILES_DIR` (default `/data/user-files`)
- `LIFENODE_FRONTEND_DIST` (default `/app/frontend/dist`)
Admin bootstrap (optional):
- `LIFENODE_ADMIN_USERNAME`
- `LIFENODE_ADMIN_PASSWORD`
Kiwix:
- `LIFENODE_KIWIX_EMBED_URL` (default `http://localhost:8081`)
- `LIFENODE_KIWIX_PORT` (default `8081`)
llama.cpp endpoints:
- `LIFENODE_LLAMACPP_EMBED_URL`
- `LIFENODE_LLAMACPP_EMBED_MODEL`
- `LIFENODE_LLAMACPP_CHAT_URL`
- `LIFENODE_LLAMACPP_CHAT_MODEL`
- `LIFENODE_LLAMACPP_API_KEY`
- `LIFENODE_LLAMACPP_CHAT_MAX_TOKENS`
- `LIFENODE_LLAMACPP_CHAT_TIMEOUT_SECS`
- `LIFENODE_LLAMACPP_CHAT_THINKING_DEFAULT`
llama.cpp container runtime:
- `LLAMACPP_QWEN_MODEL_PATH`
- `LLAMACPP_EMBED_MODEL_PATH`
- `LLAMACPP_QWEN_CTX`
- `LLAMACPP_EMBED_CTX`
- `LLAMACPP_NGL`
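Putting the variables above together, a `.env` might look like the following sketch. Values are the documented defaults plus the model filenames from the quick start; the admin credentials and model path locations are illustrative assumptions — adjust them to your setup:

```shell
# General
LIFENODE_HOST=0.0.0.0
LIFENODE_PORT=8000
LIFENODE_MAX_UPLOAD_MB=100

# Storage (container paths)
LIFENODE_DB_PATH=/data/user-files/lifenode.db
LIFENODE_USER_FILES_DIR=/data/user-files

# Optional admin bootstrap (example credentials -- change them)
LIFENODE_ADMIN_USERNAME=admin
LIFENODE_ADMIN_PASSWORD=change-me

# llama.cpp container runtime (model files from the quick start;
# host-vs-container path is an assumption, check compose.yml)
LLAMACPP_QWEN_MODEL_PATH=./models/Qwen3.5-0.8B-UD-Q3_K_XL.gguf
LLAMACPP_EMBED_MODEL_PATH=./models/embeddinggemma-300M-Q8_0.gguf
```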
Main route groups:
- Auth: `/api/auth/*`
- Ask + chat threads: `/api/ask/*`
- Maps/wiki download center: `/api/maps/*`
- Legacy wiki indexing/search endpoints: `/api/wiki/*`, `/api/search`
- Notes: `/api/notes/*`
- Drive: `/api/drive/*`
- Calendar: `/api/calendar/*`
- Health: `/api/health`
For a concrete route map, see `backend/src/main.rs`.
Backend:

```shell
cd backend
cargo check
```

Frontend:

```shell
cd frontend
npm run build
```

Stack:

```shell
docker compose up --build -d
docker compose ps
```

- Download preset returns 404:
  - Use Custom Download URL in the UI with a known-good `.zim` or `.osm.pbf` URL.
  - The backend also attempts to auto-discover the latest Kiwix English filenames.
- Kiwix panel is blank:
  - Confirm at least one `.zim` file exists in your configured user-files mount.
  - Confirm the `kiwix` container is running and `LIFENODE_KIWIX_EMBED_URL` is reachable.
- Chat answers are fallback-style:
  - Check `GET /api/health`; if the llama services are unavailable, the backend falls back to deterministic embeddings and retrieval-only behavior.