2 changes: 2 additions & 0 deletions .env.example
@@ -75,6 +75,8 @@ MAX_TOKENS_PER_REQUEST=4000

# Monitoring and Analytics
SENTRY_DSN=your-sentry-dsn-for-error-tracking
SENTRY_TRACES_SAMPLE_RATE=0.1
VERSION=1.0.0
PROMETHEUS_ENABLED=true
GRAFANA_ADMIN_USER=admin
GRAFANA_ADMIN_PASSWORD=change-this-secure-grafana-password
69 changes: 69 additions & 0 deletions .github/CODEOWNERS
@@ -0,0 +1,69 @@
# Code Ownership Rules
#
# This CODEOWNERS file defines code ownership and review requirements for the ModPorter-AI project.
# Review is required from code owners before merging changes.
#
# For more information about CODEOWNERS, see:
# https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners

# ============================================================================
# Default Owners (catch-all for any file not matching specific rules)
# ============================================================================
* @alex

# ============================================================================
# Frontend Team - React/TypeScript UI Components
# ============================================================================
# All frontend-related files require review from frontend maintainers
/frontend/ @alex

# ============================================================================
# Backend Team - Python API and Server
# ============================================================================
# All backend-related files require review from backend maintainers
/backend/ @alex

# ============================================================================
# AI-Engine Team - ML/AI Components
# ============================================================================
# All AI engine-related files require review from AI engine maintainers
/ai-engine/ @alex

# ============================================================================
# Infrastructure & DevOps
# ============================================================================
# Docker and infrastructure configurations
/docker/ @alex
docker-compose*.yml @alex
Dockerfile* @alex

# ============================================================================
# Security & Compliance
# ============================================================================
# Security-related files require review from security team
/.github/security-check.sh @alex
/.github/security-config-guide.md @alex

# ============================================================================
# Documentation
# ============================================================================
# Documentation changes can be reviewed by any maintainer
/docs/ @alex
*.md @alex
!/.github/*.md

# ============================================================================
# Configuration Files
# ============================================================================
# Project-wide configuration files
/.github/ @alex
/database/ @alex
/monitoring/ @alex
/scripts/ @alex
/modporter/ @alex
/tests/ @alex

# ============================================================================
# CI/CD Workflows
# ============================================================================
/.github/workflows/ @alex
47 changes: 45 additions & 2 deletions README.md
@@ -178,8 +178,14 @@ All services include health checks for monitoring:
# Check frontend health
curl http://localhost:3000/health

# Check backend health
curl http://localhost:8080/api/v1/health
# Check backend health (basic liveness)
curl http://localhost:8080/health

# Check backend readiness (includes dependency checks)
curl http://localhost:8080/health/readiness

# Check backend liveness (process running)
curl http://localhost:8080/health/liveness

# Check AI engine health
curl http://localhost:8001/api/v1/health
@@ -188,6 +194,43 @@ curl http://localhost:8001/api/v1/health
docker compose ps
```

### Health Check Endpoints

The backend provides three health check endpoints for Kubernetes probes:

| Endpoint | Purpose | Dependencies Checked |
|----------|---------|---------------------|
| `/health` | Basic health check | None |
| `/health/liveness` | Process is running | None |
| `/health/readiness` | Can serve traffic | Database, Redis |
Comment on lines +201 to +205

**Copilot AI (Mar 7, 2026):** The health-check endpoints table is malformed Markdown because each row starts with `||` instead of `|`. This won't render as a table on GitHub; remove the extra leading pipe so it's a standard Markdown table.

Suggested change:

| Endpoint | Purpose | Dependencies Checked |
|--------------------|---------------------|----------------------|
| `/health` | Basic health check | None |
| `/health/liveness` | Process is running | None |
| `/health/readiness`| Can serve traffic | Database, Redis |

**Response Format:**
```json
{
  "status": "healthy",
  "timestamp": "2024-01-01T00:00:00",
  "checks": {
    "dependencies": {
      "database": {
        "status": "healthy",
        "latency_ms": 5.2,
        "message": "Database connection successful"
      },
      "redis": {
        "status": "healthy",
        "latency_ms": 1.8,
        "message": "Redis connection successful"
      }
    }
  }
}
```

**Status Values:**
- `healthy`: All checks passed
- `degraded`: Non-critical dependencies unavailable (e.g., Redis)
- `unhealthy`: Critical dependencies unavailable (e.g., Database)
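The status rules above can be sketched as a small aggregation function. This is a hypothetical helper for illustration only (the `CRITICAL_DEPS` name and the dict shape are assumptions, not the backend's actual code):

```python
# Sketch: aggregate per-dependency checks into an overall status,
# following the rules above. Hypothetical helper, not the actual
# backend implementation.
CRITICAL_DEPS = {"database"}  # assumed: only the database is critical

def overall_status(dependencies: dict) -> str:
    """dependencies maps name -> {"status": "healthy" | "unhealthy", ...}."""
    failed = {name for name, check in dependencies.items()
              if check.get("status") != "healthy"}
    if not failed:
        return "healthy"
    if failed & CRITICAL_DEPS:
        return "unhealthy"  # a critical dependency (the database) is down
    return "degraded"       # only non-critical dependencies (e.g. Redis) are down

print(overall_status({"database": {"status": "healthy"},
                      "redis": {"status": "unhealthy"}}))  # degraded
```

A readiness probe can then treat `unhealthy` as "do not route traffic" while letting `degraded` pods keep serving.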

### Troubleshooting

#### Common Issues
9 changes: 8 additions & 1 deletion ai-engine/main.py
@@ -16,13 +16,20 @@
import redis.asyncio as aioredis

# Configure logging using centralized configuration
from utils.logging_config import setup_logging, get_agent_logger
from utils.logging_config import setup_logging, get_agent_logger, configure_structlog

# Load environment variables
load_dotenv()

# Setup logging with environment-based configuration
debug_mode = os.getenv("DEBUG", "false").lower() == "true"

# Also configure structlog for structured JSON logging in production
configure_structlog(
    debug_mode=debug_mode,
    json_format=os.getenv("LOG_JSON_FORMAT", "false").lower() == "true"
Comment on lines +27 to +30

**Copilot AI (Mar 7, 2026):** Passing `json_format=os.getenv(...) == "true"` forces `json_format=False` by default, which prevents `configure_structlog()` from auto-enabling JSON when `ENVIRONMENT=production`. Consider leaving `json_format=None` (or explicitly OR-ing with the production env check) so production automatically switches to JSON as described.

Suggested change:

```python
# Determine JSON logging format. If LOG_JSON_FORMAT is unset, pass None so
# configure_structlog() can apply its own environment-based default (e.g. enable
# JSON automatically in production).
log_json_env = os.getenv("LOG_JSON_FORMAT")
json_format = log_json_env.lower() == "true" if log_json_env is not None else None
# Also configure structlog for structured JSON logging in production
configure_structlog(
    debug_mode=debug_mode,
    json_format=json_format,
```
)

setup_logging(
    debug_mode=debug_mode,
    enable_file_logging=os.getenv("ENABLE_FILE_LOGGING", "true").lower() == "true"
3 changes: 2 additions & 1 deletion ai-engine/requirements.txt
@@ -48,4 +48,5 @@ pydantic-settings

# Monitoring
prometheus-client
psutil
psutil
structlog>=24.0.0
164 changes: 156 additions & 8 deletions ai-engine/utils/logging_config.py
@@ -1,22 +1,23 @@
"""
Centralized logging configuration for ModPorter AI Engine
Provides structured logging for all agents and crew operations

Issue #549: Enhanced with comprehensive agent logging capabilities
- Structured logging for all agents
- Agent decisions and reasoning logging
- Tool usage and results logging
- Debug mode for verbose output
- Log analysis tools
Provides structured logging using structlog for all agents and crew operations

Issue #695: Add structured logging
- Uses structlog for structured JSON logging
- Supports both console and JSON formats
- Auto-detects production mode for JSON output
- Correlation ID support for request tracing
"""

import logging
import logging.handlers
import structlog
import os
import sys
import time
import threading
import traceback
import uuid
from datetime import datetime
from pathlib import Path
from typing import Optional, Dict, Any, List
@@ -25,6 +26,101 @@
from collections import defaultdict
import json

# Context variable for correlation ID
from contextvars import ContextVar  # needed for the ContextVar annotation below

correlation_id_var: ContextVar[Optional[str]] = ContextVar("correlation_id", default=None)


def configure_structlog(
    log_level: Optional[str] = None,
    log_file: Optional[str] = None,
    json_format: Optional[bool] = None,
    debug_mode: bool = False,
):
    """
    Configure structlog for the AI engine.

    Args:
        log_level: Logging level (DEBUG, INFO, WARNING, ERROR)
        log_file: Path to log file (optional)
        json_format: Use JSON format (auto-detected from environment if None)
        debug_mode: Enable debug mode for verbose output
    """
    if log_level is None:
        log_level = os.getenv("LOG_LEVEL", "INFO").upper()

    # Auto-detect JSON format in production
    if json_format is None:
        json_format = os.getenv("LOG_JSON_FORMAT", "false").lower() == "true"
        if os.getenv("ENVIRONMENT", "development") == "production":
            json_format = True

    # Get log directory
    log_dir = os.getenv("LOG_DIR", "/tmp/modporter-ai/logs")

    # Configure processors based on format
    processors = [
        structlog.contextvars.merge_contextvars,
        structlog.stdlib.add_logger_name,
        structlog.stdlib.add_log_level,
        structlog.stdlib.PositionalArgumentsFormatter(),
        structlog.processors.TimeStamper(fmt="iso"),
        # Exception info must be formatted before the final renderer runs
        structlog.processors.StackInfoRenderer(),
        structlog.processors.format_exc_info,
    ]

    # The renderer must be the last processor in the chain
    if debug_mode:
        processors.append(structlog.dev.ConsoleRenderer())
    elif json_format:
        processors.append(structlog.processors.JSONRenderer())
    else:
        processors.append(structlog.dev.ConsoleRenderer(colors=False))

    # Configure structlog
    structlog.configure(
        processors=processors,
        wrapper_class=structlog.stdlib.BoundLogger,
        context_class=dict,
        logger_factory=structlog.stdlib.LoggerFactory(),
        cache_logger_on_first_use=True,
    )

    # Also configure standard library logging
    root_logger = logging.getLogger()
    root_logger.setLevel(getattr(logging, log_level, logging.INFO))

    # Clear existing handlers
    root_logger.handlers.clear()

    # Console handler
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setLevel(getattr(logging, log_level, logging.INFO))
    console_handler.setFormatter(logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S"
    ))
    root_logger.addHandler(console_handler)

    # File handler for production
    if log_file is None:
        os.makedirs(log_dir, exist_ok=True)
        log_file = os.path.join(log_dir, "ai-engine.log")

    file_handler = logging.handlers.RotatingFileHandler(
        log_file,
        maxBytes=10 * 1024 * 1024,  # 10MB
        backupCount=5,
        encoding='utf-8'
    )
    file_handler.setLevel(logging.INFO)
    file_handler.setFormatter(logging.Formatter("%(message)s"))
    root_logger.addHandler(file_handler)

    return structlog.get_logger()


class AgentLogFormatter(logging.Formatter):
"""Custom formatter for agent logging with structured output"""
@@ -234,6 +330,58 @@ def get_agent_logger(agent_name: str) -> AgentLogger:
return AgentLogger(logger_name)


def get_structlog_logger(name: Optional[str] = None) -> structlog.BoundLogger:
    """
    Get a structlog logger instance.

    Args:
        name: Logger name (optional)

    Returns:
        Configured structlog logger
    """
    if name:
        return structlog.get_logger(name)
    return structlog.get_logger()


def set_correlation_id(correlation_id: Optional[str] = None) -> str:
    """
    Set the correlation ID for the current context.

    Args:
        correlation_id: Optional correlation ID to use

    Returns:
        The correlation ID (either provided or generated)
    """
    if correlation_id is None:
        correlation_id = str(uuid.uuid4())

    correlation_id_var.set(correlation_id)
    structlog.contextvars.clear_contextvars()
    structlog.contextvars.bind_contextvars(correlation_id=correlation_id)
    return correlation_id


def get_correlation_id() -> Optional[str]:
    """
    Get the current correlation ID from the context.

    Returns:
        Current correlation ID or None
    """
    return correlation_id_var.get()


def clear_correlation_id() -> None:
    """Clear the correlation ID from the current context."""
    correlation_id_var.set(None)
    structlog.contextvars.clear_contextvars()


def get_crew_logger() -> AgentLogger:
    """Get a configured logger for crew operations"""
    return AgentLogger("crew.conversion_crew")
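The correlation-ID helpers in this diff follow a standard `contextvars` pattern; a minimal stdlib-only sketch (without the structlog binding, and not the module's actual code) looks like this:

```python
# Minimal sketch of the correlation-ID pattern using only the standard
# library (no structlog). Illustrative, not the module's actual code.
import uuid
from contextvars import ContextVar
from typing import Optional

_corr_id: ContextVar[Optional[str]] = ContextVar("correlation_id", default=None)

def set_correlation_id(correlation_id: Optional[str] = None) -> str:
    # Generate an ID when none is supplied, then bind it to this context
    if correlation_id is None:
        correlation_id = str(uuid.uuid4())
    _corr_id.set(correlation_id)
    return correlation_id

def get_correlation_id() -> Optional[str]:
    return _corr_id.get()

cid = set_correlation_id("req-123")
assert get_correlation_id() == "req-123"
```

Because `ContextVar` values are task-local, each asyncio request handler sees its own correlation ID without passing it through every call.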
25 changes: 25 additions & 0 deletions backend/main.py
@@ -19,6 +19,31 @@
from db.init_db import init_db
from api.feedback import router as feedback_router

# Sentry error tracking initialization
import sentry_sdk
from sentry_sdk.integrations.fastapi import FastApiIntegration
from sentry_sdk.integrations.sqlalchemy import SqlalchemyIntegration

SENTRY_DSN = os.getenv("SENTRY_DSN")
if SENTRY_DSN:
    sentry_sdk.init(
        dsn=SENTRY_DSN,
        integrations=[
            FastApiIntegration(),
            SqlalchemyIntegration(),
        ],
        # Fraction of transactions sampled for performance tracing
        # (defaults to 0.1; set SENTRY_TRACES_SAMPLE_RATE=1.0 to capture all)
        traces_sample_rate=float(os.getenv("SENTRY_TRACES_SAMPLE_RATE", "0.1")),
        # Include environment and release info
        environment=os.getenv("ENVIRONMENT", "development"),
        release=os.getenv("VERSION", "1.0.0"),
        # Do not attach personally identifiable information
        send_default_pii=False,
        # Drop events whose hint carries an 'ignore' key
        before_send=lambda event, hint: None if 'ignore' in hint else event,
    )
    print(f"Sentry error tracking initialized for environment: {os.getenv('ENVIRONMENT', 'development')}")
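The `before_send` hook receives each event plus a hint dict and drops the event by returning `None`. A standalone sketch of the filter used above (the `'ignore'` hint key is this project's convention, not part of the Sentry API):

```python
# Sketch of a Sentry before_send filter: return None to drop an event,
# or return the event to send it. The 'ignore' hint key mirrors the
# lambda in the diff above.
def before_send(event: dict, hint: dict):
    if "ignore" in hint:
        return None
    return event

assert before_send({"message": "boom"}, {}) == {"message": "boom"}
assert before_send({"message": "boom"}, {"ignore": True}) is None
```

Defining this as a named function instead of a lambda makes it easier to unit-test the filtering rules separately from `sentry_sdk.init`.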


# AI Engine settings
