Note: This repository is a showcase of the LUMINAL architecture and user experience. It is not a turnkey deployment and is not intended to be cloned or run as-is.
LUMINAL is a self-hosted AI automation platform built with Docker and Docker Compose that integrates workflow automation with state-of-the-art AI capabilities. The project demonstrates containerized AI services including LLM inference, visual workflow development, and intelligent automation tools.
| Category | Service | Purpose |
|---|---|---|
| AI Services | n8n | Workflow Automation Platform |
| | OpenWebUI | AI Chat Interface with RAG |
| | Ollama | Local LLM Inference Server |
| AI Infrastructure | Qdrant | Vector Database for Semantic Search |
| | Docker | Containerization Platform |
| Home Automation | Home Assistant | Home Automation Platform |
| Security | Cloudflare Access | Zero Trust SSO Authentication |
- Docker: Engine installed and running
- Docker Compose v2: `docker compose` CLI available
- NVIDIA GPU (recommended): For accelerated AI workloads
- Proxmox VE (optional): Recommended host environment
LUMINAL uses direnv for automatic environment variable loading:
- `.envrc`: Automatically loads `.env` (for Docker Compose) and `env.sh` (for scripts)
- direnv hook: Configured in the shell for seamless environment isolation
- Automatic loading: Environment variables load automatically when you `cd` into the project directory
- Script compatibility: Scripts still source `env.sh` explicitly for non-interactive execution (cron, etc.)
This ensures consistent environment variable access across interactive shells, scripts, and Docker Compose commands.
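The `.envrc` described above might look like this minimal sketch (direnv's stdlib provides the `dotenv` helper; adjust if your layout differs):

```bash
# .envrc (sketch) — evaluated by direnv whenever you cd into the project
dotenv            # load ./.env KEY=value pairs used by Docker Compose
source ./env.sh   # load the exported variables that scripts also source
```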
LUMINAL uses a centralized architecture for managing environment files and secrets, following Docker best practices:
- `/etc/LUMINAL/env.sh`: Centralized environment configuration file (actual file location)
  - `env.sh` in the project root is a symlink pointing to `/etc/LUMINAL/env.sh`
  - Excluded from git (listed in `.gitignore`)
  - Contains environment variable exports for local development
- `/etc/LUMINAL/secrets/`: Centralized secrets directory (actual directory location)
  - `secrets/` in the project root is a symlink pointing to `/etc/LUMINAL/secrets`
  - Excluded from git (listed in `.gitignore`)
  - Contains sensitive credentials and keys managed via Docker Secrets
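The two project-root symlinks can be created with `ln` (a sketch; run from the repository root once the centralized files exist):

```bash
# Create/refresh the project-root symlinks to the centralized locations.
# -s: symbolic, -f: replace if present, -n: don't follow an existing dir link
ln -sfn /etc/LUMINAL/env.sh env.sh
ln -sfn /etc/LUMINAL/secrets secrets
```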
Create the following files with secure permissions in /etc/LUMINAL/secrets/:
```bash
mkdir -p /etc/LUMINAL/secrets
printf "<YOUR_N8N_ENCRYPTION_KEY>\n" > /etc/LUMINAL/secrets/n8n_encryption_key.txt
printf "<YOUR_N8N_JWT_SECRET>\n" > /etc/LUMINAL/secrets/n8n_jwt_secret.txt
printf "<YOUR_OPENWEBUI_SECRET_KEY>\n" > /etc/LUMINAL/secrets/openwebui_secret_key.txt
chmod 700 /etc/LUMINAL/secrets
chmod 600 /etc/LUMINAL/secrets/*
```

The symlinks in the project root (`env.sh` and `secrets/`) will automatically point to these centralized locations.
- `n8n_encryption_key.txt`: Encryption key for n8n credential storage (32+ characters recommended)
- `n8n_jwt_secret.txt`: JWT secret for n8n user authentication (64 characters recommended)
- `openwebui_secret_key.txt`: Secret key for OpenWebUI session management (64 characters recommended)
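One way to produce keys of the recommended lengths is `openssl rand`. A sketch — it writes to a temporary directory so it is safe to try; in practice you would target `/etc/LUMINAL/secrets/` as root:

```bash
# Generate hex keys of the recommended lengths with openssl.
# secrets_dir stands in for /etc/LUMINAL/secrets in this sketch.
secrets_dir=$(mktemp -d)
openssl rand -hex 16 > "$secrets_dir/n8n_encryption_key.txt"   # 32 hex chars
openssl rand -hex 32 > "$secrets_dir/n8n_jwt_secret.txt"       # 64 hex chars
openssl rand -hex 32 > "$secrets_dir/openwebui_secret_key.txt" # 64 hex chars
chmod 700 "$secrets_dir"
chmod 600 "$secrets_dir"/*
```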
This centralized approach ensures consistent configuration management across the system and aligns with industry best practices for Docker-based deployments.
LUMINAL uses Docker Named Volumes for all persistent data storage, ensuring separation of data from the container lifecycle.
- `luminal_n8n_storage`: Stores n8n workflows, credentials, and execution data.
- `luminal_ollama_storage`: Caches downloaded LLM models (approx. 40GB+ for the full model set).
- `luminal_qdrant_storage`: Stores vector embeddings and database indices.
- `luminal_openwebui_storage`: Persists chat history, user settings, and the document knowledge base.
- `luminal_homeassistant_storage`: Stores Home Assistant configuration (`configuration.yaml`, database).
- PUID/PGID: All services are configured to run with specific user/group IDs (defined in `.env`) to ensure file permissions on mounted volumes match the host system user.
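The PUID/PGID values can be captured from the current host user like this (a sketch; the target is the project `.env`, represented here by a temporary file):

```bash
# Record the host user/group IDs so container-created files stay readable
# by the host user. env_file stands in for the project's .env.
env_file=$(mktemp)
printf 'PUID=%s\nPGID=%s\n' "$(id -u)" "$(id -g)" >> "$env_file"
cat "$env_file"
```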
```bash
# 1) Set up environment and secrets
# Create /etc/LUMINAL/env.sh and /etc/LUMINAL/secrets/ (see above)

# 2) Start all services
docker compose up -d

# 3) Stop services (optional)
docker compose down
```

Once your stack is running, you can access the following services:
- n8n Workflow Automation: http://localhost:5678
- OpenWebUI AI Interface: http://localhost:3000
- Home Assistant: http://localhost:8123
- Qdrant Dashboard: http://localhost:6333/dashboard
- Ollama API: http://localhost:11434
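Once up, a quick liveness sweep over these endpoints can be done with `curl` (a sketch; assumes the default ports above and that `curl` is installed):

```bash
# Report up/down for each local service endpoint.
check() { curl -fsS -o /dev/null --max-time 5 "$1" && echo up || echo down; }

for url in \
  http://localhost:5678 \
  http://localhost:3000 \
  http://localhost:8123 \
  http://localhost:6333/dashboard \
  http://localhost:11434; do
  printf '%-40s %s\n' "$url" "$(check "$url")"
done
```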
- Multi-Model Support: Seamlessly switch between llama3.1:8b, gemma3:12b, and gpt-oss:20b
- RAG Integration: Built-in retrieval-augmented generation using your Qdrant vector database
- Midnight Media Assistant: Natural language interface to query Plex library, actor/director searches, watch history, and request new content via Overseerr
- GPU Acceleration: Direct GPU passthrough for optimal performance
- Cloudflare Access SSO: Google OAuth authentication via Cloudflare Zero Trust with automatic user provisioning
- Natural Language Control: Chat with OpenWebUI → n8n processes request → Home Assistant executes device control
- Intelligent Automation: Motion sensor triggers → n8n workflow → AI analyzes with Ollama → Smart response via Home Assistant
- Predictive Automation: AI analyzes usage patterns → n8n workflows → Proactive home automation via Home Assistant
- Single project name: `luminal`
- Unified network architecture:
  - `luminal_default`: Main application network (all AI services)
- Secrets stored in `/etc/LUMINAL/secrets/` (centralized location, symlinked from the project root)
- Environment variables centralized at `/etc/LUMINAL/env.sh` (symlinked from the project root)
- NVIDIA GPU passthrough for accelerated AI workloads (Ollama, OpenWebUI)
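GPU passthrough in Compose is granted per service. A minimal sketch of the relevant stanza — not the project's actual file; the service name and count are illustrative, and the host needs the NVIDIA Container Toolkit installed:

```yaml
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all            # or an integer to pin specific GPUs
              capabilities: [gpu]
```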
OpenWebUI uses Cloudflare Access for Zero Trust authentication:
- Users visit the public OpenWebUI URL
- Cloudflare Access intercepts and redirects to Google sign-in
- After authentication, Cloudflare passes user email via trusted header
- OpenWebUI auto-creates/logs in the user based on email
OpenWebUI trusts Cloudflare's authentication headers:
- `WEBUI_AUTH_TRUSTED_EMAIL_HEADER=Cf-Access-Authenticated-User-Email`
- `WEBUI_AUTH_TRUSTED_NAME_HEADER=Cf-Access-Authenticated-User-Name`
- `ENABLE_OAUTH_SIGNUP=true`

Access policies are managed in the Cloudflare Zero Trust Dashboard:
- Access controls β Policies β Edit policy
- Add/remove email addresses or domains
- Supports: specific emails, email domains, or "Everyone"
LUMINAL follows Docker best practices for configuration directory ownership:
- Ownership: Service configuration directories follow container-specific ownership requirements
- Permissions: All config directories use appropriate permissions for container access
- Rationale: Ensures container-created files have consistent ownership
- Benefits: Prevents permission issues, aligns with Docker ecosystem standards
LUMINAL supports three LLM models via Ollama:
- llama3.1:8b (4.9GB) - Fast and capable general-purpose model
- gemma3:12b (8.1GB) - High-performance model for complex tasks
- gpt-oss:20b (~20GB) - Maximum capability for advanced reasoning
Models are automatically pulled on first startup and cached in the `luminal_ollama_storage` volume.
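If you need to trigger the pulls yourself (for example after wiping the volume), the Ollama HTTP API exposes a pull endpoint on the port listed above. The sketch below only prints the requests so you can review them before piping the output to `sh`; the `model` field name follows the current Ollama API:

```bash
# Print one pull request per model; pipe the output to sh to execute.
pull_cmd() {
  printf 'curl -s http://localhost:11434/api/pull -d '\''{"model":"%s"}'\''\n' "$1"
}

for m in llama3.1:8b gemma3:12b gpt-oss:20b; do
  pull_cmd "$m"
done
```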
Midnight is a custom AI assistant built on top of OpenWebUI that provides intelligent access to your entire media library. It demonstrates advanced prompt engineering, tool integration, and RAG (Retrieval-Augmented Generation) capabilities.
| Component | Description |
|---|---|
| Base Model | gemma3:12b via Ollama |
| Interface | OpenWebUI with custom system prompt |
| Tools | 7 Python-based function tools |
| Knowledge | RAG-enabled reference documentation |
| Tool | Purpose |
|---|---|
| `midnight_plex_tool` | Search library, get recently added, episode details, actor/director search |
| `midnight_radarr_tool` | Movie details, genres, synopses |
| `midnight_sonarr_tool` | TV show details, upcoming episodes |
| `midnight_tautulli_tool` | Watch history, current activity, most watched |
| `midnight_bazarr_tool` | Subtitle status and history |
| `midnight_sabnzbd_tool` | Download queue and history |
| `midnight_overseerr_tool` | Content requests and search |
- Real-time library queries - Never guesses, always calls tools for current data
- Anti-hallucination rules - Explicit prompt engineering to prevent made-up information
- Quote normalization - Handles curly quotes and special characters in searches
- Episode synopses - Full episode details including plot summaries from Plex
- Multi-service integration - Seamlessly queries across Plex, Radarr, Sonarr, and more
- Date accuracy - Returns actual "added on" dates from Plex, not download dates
"What movies do we have with Tom Hanks?"
"What's new in the library?"
"What's the Bob's Burgers episode 'It's a Stunterful Life' about?"
"Show me Christmas movies"
"What's currently downloading?"
"Who's watching right now?"
See midnight/README.md for full documentation and system prompt.
- Docker containerization with advanced configuration patterns
- NVIDIA GPU passthrough for accelerated AI workloads
- Container orchestration with Docker Compose
- Secure secrets management and environment configuration
- Service networking and inter-container communication
- Large Language Model (LLM) deployment and optimization
- Vector database setup for AI applications
- Workflow automation architecture
- Hardware acceleration integration for AI workloads
- Retrieval-augmented generation (RAG) implementation
- Implementation of Docker security best practices
- Secrets management without hardcoded credentials
- Proper network isolation between services
- Environment variable security patterns
- Persistent volumes configured for data security
- Zero Trust authentication via Cloudflare Access SSO
For a detailed log of the technical evolution of this project, including specific achievements and skills demonstrated, please see the CHANGELOG.md file.