AI-powered log triage tool backend built with FastAPI.
```bash
# Clone and navigate
cd backend

# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Configure environment (optional - works without LLM key)
cp .env.example .env
# Edit .env and add your OpenRouter API key if desired

# Run server
uvicorn app.main:app --reload --port 8000
```

The server will be available at http://localhost:8000
API documentation: http://localhost:8000/docs
| Variable | Description | Default |
|---|---|---|
| OPENAI_API_KEY | OpenRouter API key (optional) | None |
| OPENAI_BASE_URL | API endpoint | https://openrouter.ai/api/v1 |
| OPENAI_MODEL | Model to use | openai/gpt-4o-mini |
| DB_PATH | SQLite database path | ./data/signaltrace.db |
| CORS_ORIGINS | Allowed CORS origins | http://localhost:5173 |
| LOG_LEVEL | Logging level | INFO |
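Putting the variables above together, a local `.env` might look like the fragment below. The non-default values shown (the API key in particular) are placeholders, not real credentials:

```shell
# .env — illustrative values; the key is a placeholder
OPENAI_API_KEY=sk-or-v1-xxxxxxxxxxxx
OPENAI_BASE_URL=https://openrouter.ai/api/v1
OPENAI_MODEL=openai/gpt-4o-mini
DB_PATH=./data/signaltrace.db
CORS_ORIGINS=http://localhost:5173
LOG_LEVEL=INFO
```

Leaving `OPENAI_API_KEY` unset keeps the app in its offline fallback mode.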
Upload and analyze a log file:

```bash
curl -X POST http://localhost:8000/api/analyze \
  -F "file=@sample.log"
```

List recent analysis runs:

```bash
curl http://localhost:8000/api/runs
```

Get run details with incident summary:

```bash
curl http://localhost:8000/api/runs/{run_id}
```

Get full incident details with evidence and explanation:

```bash
curl http://localhost:8000/api/runs/{run_id}/incidents/{incident_id}
```

Health check endpoint:

```bash
curl http://localhost:8000/health
```

```
React Frontend → FastAPI Backend → Pipeline Orchestrator
                        ↓
            Parse → Group/Rank → Evidence
                        ↓
            LLM (OpenRouter) or Fallback
                        ↓
          Validation → SQLite → Response
```
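The pipeline stages above can be sketched as a small orchestrator. This is an illustrative outline only: the function names mirror the extension points named in this README, but the bodies here are toy stand-ins, not the real implementations in `app/services/`.

```python
# Minimal, illustrative sketch of the Parse → Group/Rank → Evidence →
# Fallback flow. Not the actual application code.

def parse_lines(raw_text: str) -> list[dict]:
    # Parse: split raw log text into structured records.
    records = []
    for line in raw_text.splitlines():
        level = "ERROR" if "ERROR" in line else "INFO"
        records.append({"level": level, "message": line})
    return records

def group_and_rank(records: list[dict]) -> list[dict]:
    # Group/Rank: bucket records by message prefix, rank by error count.
    groups: dict[str, dict] = {}
    for rec in records:
        key = rec["message"].split(":")[0]
        groups.setdefault(key, {"key": key, "records": []})["records"].append(rec)
    return sorted(
        groups.values(),
        key=lambda g: sum(r["level"] == "ERROR" for r in g["records"]),
        reverse=True,
    )

def build_evidence(incident: dict) -> list[str]:
    # Evidence: pick a few representative lines for the explanation step.
    return [r["message"] for r in incident["records"][:3]]

def fallback_explanation(incident: dict) -> str:
    # Fallback: deterministic summary used when no LLM key is configured.
    errors = sum(r["level"] == "ERROR" for r in incident["records"])
    return f"{incident['key']}: {errors} error line(s); review the evidence."

def run_pipeline(raw_text: str) -> list[dict]:
    incidents = group_and_rank(parse_lines(raw_text))
    out = []
    for inc in incidents:
        # In the real pipeline, the LLM branch calls explain_incident();
        # this sketch always takes the offline fallback path.
        out.append({
            "incident": inc["key"],
            "evidence": build_evidence(inc),
            "explanation": fallback_explanation(inc),
        })
    return out
```

The point of the sketch is the shape of the flow: each stage consumes the previous stage's output, so any stage can be swapped out as long as its inputs and outputs stay compatible.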
- Works offline: Fallback mode when no LLM key provided
- Validation: Strict schema validation with retry logic
- Error handling: Graceful degradation, no crashes
- Logging: Request IDs and timing for debugging
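The "validation with retry logic" behavior can be sketched roughly as below. This is illustrative only: the schema fields and helper names are hypothetical, not taken from the actual codebase.

```python
# Illustrative retry-validation loop; field and function names are
# hypothetical stand-ins, not the project's real schema.
REQUIRED_FIELDS = ("title", "severity", "explanation")

def validate(payload: dict) -> bool:
    # Strict check: every required field present and non-empty.
    return all(payload.get(field) for field in REQUIRED_FIELDS)

def explain_with_retry(ask_llm, max_attempts: int = 3) -> dict:
    # Re-ask the LLM until its output passes validation; after the
    # final attempt, degrade gracefully instead of crashing.
    for _ in range(max_attempts):
        candidate = ask_llm()
        if validate(candidate):
            return candidate
    return {
        "title": "Unknown incident",
        "severity": "unknown",
        "explanation": "LLM output failed validation; fallback used.",
    }
```

This pattern is what lets the service promise "no crashes": malformed LLM output is retried a bounded number of times and then replaced by a deterministic fallback.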
The pipeline is modular. To replace parsing/grouping logic:
- Edit `app/services/pipeline_interfaces.py`
- Modify `parse_lines()`, `group_and_rank()`, or `build_evidence()`
- Keep function signatures intact
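As a concrete example, a drop-in replacement for the parsing stage might look like the sketch below — here swapping the plain-text parser for a JSON-lines one. The signature is an assumption (raw text in, structured records out); check `pipeline_interfaces.py` for the real one:

```python
import json

# Hypothetical drop-in replacement that parses JSON-lines logs.
# The signature is assumed, not copied from the repository.
def parse_lines(raw_text: str) -> list[dict]:
    records = []
    for line in raw_text.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            # Keep unparseable lines instead of crashing the pipeline.
            records.append({"level": "UNKNOWN", "message": line})
    return records
```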
To customize LLM prompts:
- Edit `app/services/llm_client.py`
- Modify the prompt in `explain_incident()`
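For orientation, a prompt builder in that spirit might look like the following. This is a hypothetical template, not the actual prompt used in `llm_client.py`:

```python
# Hypothetical prompt template — not the real prompt in llm_client.py.
def build_prompt(incident_summary: str, evidence: list[str]) -> str:
    evidence_block = "\n".join(f"- {line}" for line in evidence)
    return (
        "You are a log-triage assistant.\n"
        f"Incident: {incident_summary}\n"
        "Evidence lines:\n"
        f"{evidence_block}\n"
        "Explain the likely root cause in 2-3 sentences."
    )
```

Keeping the evidence as an explicit bulleted block makes it easy to tune how much context the model sees without touching the surrounding instructions.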