pjmarz/LUMINAL

Note: This repository is a showcase of the LUMINAL architecture and user experience. It is not a turnkey deployment and is not intended to be cloned or run as-is.

🎯 Project Overview

LUMINAL is a self-hosted AI automation platform built with Docker and Docker Compose that integrates workflow automation with state-of-the-art AI capabilities. The project demonstrates containerized AI services including LLM inference, visual workflow development, and intelligent automation tools.

πŸ› οΈ System Components

  • 🤖 AI Services: n8n (Workflow Automation Platform), OpenWebUI (AI Chat Interface with RAG), Ollama (Local LLM Inference Server)
  • 🧠 AI Infrastructure: Qdrant (Vector Database for Semantic Search), Docker (Containerization Platform)
  • 🏠 Home Automation: Home Assistant (Home Automation Platform)
  • 🔐 Security: Cloudflare Access (Zero Trust SSO Authentication)

✅ Prerequisites (for reference only)

  • Docker: Engine installed and running
  • Docker Compose v2: docker compose CLI available
  • NVIDIA GPU (recommended): For accelerated AI workloads
  • Proxmox VE (optional): Recommended host environment

πŸ” Secrets & Environment Configuration

🔄 Environment Variable Management

LUMINAL uses direnv for automatic environment variable loading:

  • .envrc: Automatically loads .env (for Docker Compose) and env.sh (for scripts)
  • direnv hook: Configured in shell for seamless environment isolation
  • Automatic loading: Environment variables load automatically when you cd into the project directory
  • Script compatibility: Scripts still source env.sh explicitly for non-interactive execution (cron, etc.)

This ensures consistent environment variable access across interactive shells, scripts, and Docker Compose commands.
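A minimal .envrc for this layout might look like the following. This is a hypothetical sketch (the actual file is not shown here), using direnv's stdlib helpers `dotenv` and `source_env`:

```shell
# .envrc (hypothetical sketch; requires direnv's shell hook to be installed)
dotenv .env        # export KEY=VALUE pairs from .env for Docker Compose
source_env env.sh  # source the env.sh symlink for interactive shell use
```

Running `direnv allow` once in the project directory authorizes the file; after that, the variables load and unload automatically as you enter and leave the directory.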

LUMINAL uses a centralized architecture for managing environment files and secrets, following Docker best practices:

Centralized Architecture

  • /etc/LUMINAL/env.sh: Centralized environment configuration file (actual file location)

    • env.sh in project root is a symlink pointing to /etc/LUMINAL/env.sh
    • Excluded from git (tracked in .gitignore)
    • Contains environment variable exports for local development
  • /etc/LUMINAL/secrets/: Centralized secrets directory (actual directory location)

    • secrets/ in project root is a symlink pointing to /etc/LUMINAL/secrets
    • Excluded from git (tracked in .gitignore)
    • Contains sensitive credentials and keys managed via Docker Secrets

Setting Up Secrets

Create the following files with secure permissions in /etc/LUMINAL/secrets/:

mkdir -p /etc/LUMINAL/secrets
printf "<YOUR_N8N_ENCRYPTION_KEY>\n" > /etc/LUMINAL/secrets/n8n_encryption_key.txt
printf "<YOUR_N8N_JWT_SECRET>\n" > /etc/LUMINAL/secrets/n8n_jwt_secret.txt
printf "<YOUR_OPENWEBUI_SECRET_KEY>\n" > /etc/LUMINAL/secrets/openwebui_secret_key.txt
chmod 700 /etc/LUMINAL/secrets
chmod 600 /etc/LUMINAL/secrets/*

The symlinks in the project root (env.sh and secrets/) will automatically point to these centralized locations.

Required Secrets

  • n8n_encryption_key.txt: Encryption key for n8n credential storage (32+ characters recommended)
  • n8n_jwt_secret.txt: JWT secret for n8n user authentication (64 characters recommended)
  • openwebui_secret_key.txt: Secret key for OpenWebUI session management (64 characters recommended)

This centralized approach ensures consistent configuration management across the system and aligns with industry best practices for Docker-based deployments.

💾 Storage Configuration

LUMINAL uses Docker Named Volumes for all persistent data storage, ensuring separation of data from the container lifecycle.

Storage Architecture

  • luminal_n8n_storage: Stores n8n workflows, credentials, and execution data.
  • luminal_ollama_storage: Caches downloaded LLM models (approx. 40GB+ for the full model set).
  • luminal_qdrant_storage: Stores vector embeddings and database indices.
  • luminal_openwebui_storage: Persists chat history, user settings, and document knowledge base.
  • luminal_homeassistant_storage: Stores Home Assistant configuration (configuration.yaml, database).

File Ownership

  • PUID/PGID: All services are configured to run with specific user/group IDs (defined in .env) to ensure file permissions on mounted volumes match the host system user.
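For example, the host user's IDs can be captured into .env so that files created by containers remain readable on the host. The PUID/PGID variable names are assumed from the .env description above; this is a sketch, not a script from the repository:

```shell
# Append the current host user's IDs to .env (hypothetical helper).
printf 'PUID=%s\nPGID=%s\n' "$(id -u)" "$(id -g)" >> .env
grep -E '^P[UG]ID=' .env   # verify the values landed
```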

🚀 Quickstart (for reference only)

# 1) Set up environment and secrets
# Create /etc/LUMINAL/env.sh and /etc/LUMINAL/secrets/ (see above)

# 2) Start all services
docker compose up -d

# 3) Stop services (optional)
docker compose down

🎨 Accessing Your Services

Once the stack is running, the services are reachable at the URLs configured for your deployment.

Key OpenWebUI Features

  • Multi-Model Support: Seamlessly switch between llama3.1:8b, gemma3:12b, and gpt-oss:20b
  • RAG Integration: Built-in retrieval-augmented generation using your Qdrant vector database
  • Midnight Media Assistant: Natural language interface to query Plex library, actor/director searches, watch history, and request new content via Overseerr
  • GPU Acceleration: Direct GPU passthrough for optimal performance
  • Cloudflare Access SSO: Google OAuth authentication via Cloudflare Zero Trust with automatic user provisioning

AI-Powered Home Automation Examples

  • Natural Language Control: Chat with OpenWebUI → n8n processes request → Home Assistant executes device control
  • Intelligent Automation: Motion sensor triggers → n8n workflow → AI analyzes with Ollama → Smart response via Home Assistant
  • Predictive Automation: AI analyzes usage patterns → n8n workflows → Proactive home automation via Home Assistant
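The final hop in each of these chains is a call into Home Assistant's REST API. A hedged sketch of what n8n's HTTP node (or any client) would send, with HA_URL, HA_TOKEN, and the entity ID as placeholders for your instance:

```shell
# POST /api/services/<domain>/<service> runs a Home Assistant service call,
# authenticated with a long-lived access token.
curl -s -X POST \
  -H "Authorization: Bearer ${HA_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"entity_id": "light.living_room"}' \
  "${HA_URL}/api/services/light/turn_on"
```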

🧩 Architecture & Configuration

  • Single project name: luminal
  • Unified network architecture:
    • luminal_default: Main application network (all AI services)
  • Secrets stored in /etc/LUMINAL/secrets/ (centralized location, symlinked from project root)
  • Environment variables centralized at /etc/LUMINAL/env.sh (symlinked from project root)
  • NVIDIA GPU passthrough for accelerated AI workloads (Ollama, OpenWebUI)

πŸ” Authentication (SSO)

OpenWebUI uses Cloudflare Access for Zero Trust authentication:

How It Works

  1. Users visit the public OpenWebUI URL
  2. Cloudflare Access intercepts and redirects to Google sign-in
  3. After authentication, Cloudflare passes user email via trusted header
  4. OpenWebUI auto-creates/logs in the user based on email
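Steps 3–4 hinge on a plain HTTP header. A toy illustration of pulling the email out of a captured header line (the value is hypothetical), the way a trusted-header consumer would:

```shell
# Cloudflare Access injects this header after a successful Google sign-in.
header='Cf-Access-Authenticated-User-Email: alice@example.com'
email=${header#*: }   # strip the "Name: " prefix via shell parameter expansion
echo "$email"         # → alice@example.com
```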

Configuration

OpenWebUI trusts Cloudflare's authentication headers:

- WEBUI_AUTH_TRUSTED_EMAIL_HEADER=Cf-Access-Authenticated-User-Email
- WEBUI_AUTH_TRUSTED_NAME_HEADER=Cf-Access-Authenticated-User-Name
- ENABLE_OAUTH_SIGNUP=true

Managing Access

Access policies are managed in Cloudflare Zero Trust Dashboard:

  • Access controls → Policies → Edit policy
  • Add/remove email addresses or domains
  • Supports: specific emails, email domains, or "Everyone"

💡 Implementation Details

πŸ“ Configuration Directory Ownership

LUMINAL follows Docker best practices for configuration directory ownership:

  • Ownership: Service configuration directories follow container-specific ownership requirements
  • Permissions: All config directories use appropriate permissions for container access
  • Rationale: Ensures container-created files have consistent ownership
  • Benefits: Prevents permission issues, aligns with Docker ecosystem standards

🧠 AI Models

LUMINAL supports three LLM models via Ollama:

  • llama3.1:8b (4.9GB) - Fast and capable general-purpose model
  • gemma3:12b (8.1GB) - High-performance model for complex tasks
  • gpt-oss:20b (~20GB) - Maximum capability for advanced reasoning

Models are automatically pulled on first startup and cached in the luminal_ollama_storage volume.
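Models can also be pulled ahead of time into the shared volume. A sketch, assuming the Compose service is named ollama as in the component list above:

```shell
# Pre-pull and verify a model inside the running ollama container.
docker compose exec ollama ollama pull llama3.1:8b
docker compose exec ollama ollama list   # confirm the model is cached
```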

🌙 Midnight Media Assistant

Midnight is a custom AI assistant built on top of OpenWebUI that provides intelligent access to your entire media library. It demonstrates advanced prompt engineering, tool integration, and RAG (Retrieval-Augmented Generation) capabilities.

Architecture

  • Base Model: gemma3:12b via Ollama
  • Interface: OpenWebUI with custom system prompt
  • Tools: 7 Python-based function tools
  • Knowledge: RAG-enabled reference documentation

Custom Tools (midnight/)

  • midnight_plex_tool: Search library, get recently added, episode details, actor/director search
  • midnight_radarr_tool: Movie details, genres, synopses
  • midnight_sonarr_tool: TV show details, upcoming episodes
  • midnight_tautulli_tool: Watch history, current activity, most watched
  • midnight_bazarr_tool: Subtitle status and history
  • midnight_sabnzbd_tool: Download queue and history
  • midnight_overseerr_tool: Content requests and search

Key Features

  • Real-time library queries - Never guesses, always calls tools for current data
  • Anti-hallucination rules - Explicit prompt engineering to prevent made-up information
  • Quote normalization - Handles curly quotes and special characters in searches
  • Episode synopses - Full episode details including plot summaries from Plex
  • Multi-service integration - Seamlessly queries across Plex, Radarr, Sonarr, and more
  • Date accuracy - Returns actual "added on" dates from Plex, not download dates

Example Queries

"What movies do we have with Tom Hanks?"
"What's new in the library?"
"What's the Bob's Burgers episode 'It's a Stunterful Life' about?"
"Show me Christmas movies"
"What's currently downloading?"
"Who's watching right now?"

See midnight/README.md for full documentation and system prompt.

📚 Technical Skills Demonstrated

Infrastructure & DevOps

  • Docker containerization with advanced configuration patterns
  • NVIDIA GPU passthrough for accelerated AI workloads
  • Container orchestration with Docker Compose
  • Secure secrets management and environment configuration
  • Service networking and inter-container communication

AI & Data Engineering

  • Large Language Model (LLM) deployment and optimization
  • Vector database setup for AI applications
  • Workflow automation architecture
  • Hardware acceleration integration for AI workloads
  • Retrieval-augmented generation (RAG) implementation

Security Engineering

  • Implementation of Docker security best practices
  • Secrets management without hardcoded credentials
  • Proper network isolation between services
  • Environment variable security patterns
  • Persistent volumes configured for data security
  • Zero Trust authentication via Cloudflare Access SSO

Technical Evolution

For a detailed log of the technical evolution of this project, including specific achievements and skills demonstrated, please see the CHANGELOG.md file.
