
feat: Detect unused dependencies (#693) #760

Closed
anchapin wants to merge 5 commits into main from feat/issue-693-detect-unused-dependencies

Conversation

anchapin (Owner) commented on Mar 8, 2026

Summary

Implements issue #693 for detecting unused dependencies in the project.

Changes Made

1. Frontend (npm/TypeScript)

  • Added depcheck to devDependencies in frontend/package.json
  • Created a .depcheckrc configuration file
  • Added an npm script for local testing

2. Backend (Python)

  • Added pip-audit to backend/requirements-dev.txt
  • Added pipdeptree to backend/requirements-dev.txt

3. AI-Engine (Python)

  • Added pip-audit to ai-engine/requirements-dev.txt
  • Added pipdeptree to ai-engine/requirements-dev.txt

4. GitHub Actions Workflow

Created .github/workflows/depcheck.yml with:

  • Frontend: Runs depcheck to detect unused npm packages and npm audit for vulnerabilities
  • Backend & AI-Engine: Runs pip-audit for vulnerability detection and pipdeptree for unused dependencies
  • Triggers:
    • On pull requests modifying dependency files
    • Weekly schedule (Sunday 3 AM UTC)
    • Manual trigger with a full_audit input

pip-audit findings:

| Name | Version | ID | Fix Versions |
| --- | --- | --- | --- |
| aiohttp | 3.12.15 | CVE-2025-69223 | 3.13.3 |
| aiohttp | 3.12.15 | CVE-2025-69224 | 3.13.3 |
| aiohttp | 3.12.15 | CVE-2025-69228 | 3.13.3 |
| aiohttp | 3.12.15 | CVE-2025-69229 | 3.13.3 |
| aiohttp | 3.12.15 | CVE-2025-69230 | 3.13.3 |
| aiohttp | 3.12.15 | CVE-2025-69226 | 3.13.3 |
| aiohttp | 3.12.15 | CVE-2025-69227 | 3.13.3 |
| aiohttp | 3.12.15 | CVE-2025-69225 | 3.13.3 |
| authlib | 1.6.1 | CVE-2025-59420 | 1.6.4 |
| authlib | 1.6.1 | CVE-2025-61920 | 1.6.5 |
| authlib | 1.6.1 | CVE-2025-62706 | 1.6.5 |
| authlib | 1.6.1 | CVE-2025-68158 | 1.6.6 |
| brotli | 1.1.0 | CVE-2025-6176 | 1.2.0 |
| configobj | 5.0.8 | CVE-2023-26112 | 5.0.9 |
| cryptography | 45.0.5 | CVE-2026-26007 | 46.0.5 |
| diskcache | 5.6.3 | CVE-2025-69872 | |
| ecdsa | 0.19.1 | CVE-2024-23342 | |
| fastmcp | 2.11.0 | CVE-2025-62800 | 2.13.0 |
| fastmcp | 2.11.0 | CVE-2025-62801 | 2.13.0 |
| fastmcp | 2.11.0 | GHSA-rcfx-77hg-w2wv | 2.14.0 |
| filelock | 3.18.0 | CVE-2025-68146 | 3.20.1 |
| filelock | 3.18.0 | CVE-2026-22701 | 3.20.3 |
| gradio | 5.39.0 | CVE-2026-28414 | 6.7.0 |
| gradio | 5.39.0 | CVE-2026-27167 | 6.6.0 |
| gradio | 5.39.0 | CVE-2026-28416 | 6.6.0 |
| gradio | 5.39.0 | CVE-2026-28415 | 6.6.0 |
| jaraco-context | 6.0.1 | CVE-2026-23949 | 6.1.0 |
| langchain-core | 0.3.72 | CVE-2025-65106 | 0.3.80, 1.0.7 |
| langchain-core | 0.3.72 | CVE-2025-68664 | 0.3.81, 1.2.5 |
| langchain-core | 0.3.72 | CVE-2026-26013 | 1.2.11 |
| langsmith | 0.4.10 | CVE-2026-25528 | 0.6.3 |
| markdown | 3.5.2 | CVE-2025-69534 | 3.8.1 |
| orjson | 3.11.1 | CVE-2025-67221 | |
| pillow | 11.3.0 | CVE-2026-25990 | 12.1.1 |
| protobuf | 5.29.5 | CVE-2026-0994 | 5.29.6, 6.33.5 |
| pyasn1 | 0.6.1 | CVE-2026-23490 | 0.6.2 |
| pynacl | 1.5.0 | CVE-2025-69277 | 1.6.2 |
| starlette | 0.47.2 | CVE-2025-62727 | 0.49.1 |
| urllib3 | 2.5.0 | CVE-2025-66418 | 2.6.0 |
| urllib3 | 2.5.0 | CVE-2025-66471 | 2.6.0 |
| urllib3 | 2.5.0 | CVE-2026-21441 | 2.6.3 |
| werkzeug | 3.1.1 | CVE-2025-66221 | 3.1.4 |
| werkzeug | 3.1.1 | CVE-2026-21860 | 3.1.5 |
| werkzeug | 3.1.1 | CVE-2026-27199 | 3.1.6 |
| yt-dlp | 2024.4.9 | CVE-2024-38519 | 2024.7.1 |
| yt-dlp | 2024.4.9 | GHSA-3v33-3wmw-3785 | 2024.7.7 |
| yt-dlp | 2024.4.9 | CVE-2026-26331 | 2026.2.21 |

Skipped dependencies:

| Name | Skip Reason |
| --- | --- |
| brlapi | Dependency not found on PyPI and could not be audited: brlapi (0.8.5) |
| catfish | Dependency not found on PyPI and could not be audited: catfish (4.16.4) |
| ccsm | Dependency not found on PyPI and could not be audited: ccsm (0.9.14.2) |
| command-not-found | Dependency not found on PyPI and could not be audited: command-not-found (0.3) |
| compizconfig-python | Dependency not found on PyPI and could not be audited: compizconfig-python (0.9.14.2) |
| cupshelpers | Dependency not found on PyPI and could not be audited: cupshelpers (1.0) |
| defer | Dependency not found on PyPI and could not be audited: defer (1.0.6) |
| louis | Dependency not found on PyPI and could not be audited: louis (3.29.0) |
| mako | Dependency not found on PyPI and could not be audited: mako (1.3.2.dev0) |
| menulibre | Dependency not found on PyPI and could not be audited: menulibre (2.4.0) |
| mugshot | Dependency not found on PyPI and could not be audited: mugshot (0.4.3) |
| onboard | Dependency not found on PyPI and could not be audited: onboard (1.4.1) |
| opentelemetry-semantic-conventions-ai | Dependency not found on PyPI and could not be audited: opentelemetry-semantic-conventions-ai (0.4.14) |
| pam | Dependency not found on PyPI and could not be audited: pam (0.4.2) |
| pypng | Dependency not found on PyPI and could not be audited: pypng (0.20231004.0) |
| python-apt | Dependency not found on PyPI and could not be audited: python-apt (2.7.7+ubuntu5.2) |
| python-debian | Dependency not found on PyPI and could not be audited: python-debian (0.1.49+ubuntu2) |
| repolib | Dependency not found on PyPI and could not be audited: repolib (2.2.1) |
| ubuntu-drivers-common | Dependency not found on PyPI and could not be audited: ubuntu-drivers-common (0.0.0) |
| ufw | Dependency not found on PyPI and could not be audited: ufw (0.36.2) |
| xkit | Dependency not found on PyPI and could not be audited: xkit (0.0.0) |

Readiness Pillar: Build System

This implementation addresses the Build System readiness pillar by ensuring:

  • Automated detection of unused dependencies
  • Early detection of vulnerable packages
  • Integration into CI pipeline
  • Weekly automated scans

- Added fetch-depth: 0 to checkout step for full git history
- Added base: main to paths-filter action for local act testing

The dorny/paths-filter@v3 action requires either:
1. The base input to be configured, or
2. repository.default_branch to be set in the event payload

When running locally with 'act', the GitHub event payload doesn't have
the default_branch set, causing the action to fail with:
'This action requires base input to be configured'
…678)

- Add W rule (pycodestyle warnings) to ai-engine/pyproject.toml
- Add W rule (pycodestyle warnings) to backend/pyproject.toml
- Add W291, W292, W293 to ignore list for legacy code

Co-authored-by: openhands <openhands@all-hands.dev>
The StackInfoRenderer and format_exc_info processors must be added
to the processor list BEFORE the renderer, not after. Otherwise,
structlog passes a string event to these processors instead of a
dictionary, causing AttributeError.

This fix ensures proper exception formatting in both backend and
ai-engine structured logging.

Co-authored-by: openhands <openhands@all-hands.dev>
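For reference, a minimal sketch of the corrected ordering, assuming structlog's standard processors (illustrative only, not the project's exact configuration):

```python
import structlog

structlog.configure(
    processors=[
        structlog.contextvars.merge_contextvars,      # context merging
        structlog.processors.add_log_level,           # level
        structlog.processors.TimeStamper(fmt="iso"),  # timestamper
        structlog.processors.StackInfoRenderer(),     # exception handling...
        structlog.processors.format_exc_info,         # ...must run BEFORE the renderer
        structlog.processors.JSONRenderer(),          # renderer receives a dict, emits JSON
    ],
)

log = structlog.get_logger()
try:
    1 / 0
except ZeroDivisionError:
    # format_exc_info still sees an event dict here, so the traceback renders correctly
    log.error("division failed", exc_info=True)
```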
- Add depcheck for npm/TypeScript frontend dependencies
- Add pip-audit and pipdeptree for Python backend and ai-engine
- Create GitHub Actions workflow for automated dependency checking
- Run on PRs affecting dependencies and weekly schedule

Co-authored-by: openhands <openhands@all-hands.dev>
Copilot AI review requested due to automatic review settings March 8, 2026 21:47
sourcery-ai bot left a comment:

Sorry @anchapin, you have reached your weekly rate limit of 500000 diff characters.

Please try again later or upgrade to continue using Sourcery

Copilot AI (Contributor) left a comment:

Pull request overview

This PR is titled "Detect unused dependencies (#693)" and addresses issue #693 for adding dependency checking to the CI pipeline. However, it also includes significant unrelated changes: a complete distributed tracing system (OpenTelemetry + Jaeger), structlog processor reordering fixes, ruff lint configuration updates, and docker-compose modifications for Jaeger.

Changes:

  • Added a GitHub Actions workflow (depcheck.yml) for detecting unused and vulnerable dependencies across frontend (depcheck + npm audit), backend, and ai-engine (pip-audit + pipdeptree), with depcheck devDependency and configuration for the frontend.
  • Introduced distributed tracing via OpenTelemetry/Jaeger in both backend and ai-engine, including new tracing.py modules, a docker-compose Jaeger service, and trace context propagation in the AI engine client (a minimal propagation sketch follows this list).
  • Fixed structlog processor ordering (moving exception processors before renderer) in both backend and ai-engine, and expanded ruff lint rules to include pycodestyle warnings.
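
As context for the trace-propagation piece, here is a minimal sketch of injecting W3C trace context into an outgoing HTTP call; the function name call_ai_engine and the bare httpx client are illustrative assumptions, not the actual client code in this PR:

```python
import httpx
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator

propagator = TraceContextTextMapPropagator()


async def call_ai_engine(url: str, payload: dict) -> httpx.Response:
    # Inject the current span's traceparent/tracestate headers so the
    # downstream service can continue the same trace.
    headers: dict[str, str] = {}
    propagator.inject(headers)
    async with httpx.AsyncClient() as client:
        return await client.post(url, json=payload, headers=headers)
```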

Reviewed changes

Copilot reviewed 18 out of 19 changed files in this pull request and generated 19 comments.

Summary per file:

| File | Description |
| --- | --- |
| .github/workflows/depcheck.yml | New workflow for dependency checking across all services |
| .github/workflows/ci.yml | Added fetch-depth: 0 and base: main for reliable change detection |
| frontend/package.json | Added depcheck devDependency and script; reordered stryker packages |
| frontend/package-lock.json | Lock file updates for depcheck and @sentry/react resolution |
| frontend/.depcheckrc | New depcheck configuration file |
| backend/src/services/tracing.py | New OpenTelemetry distributed tracing service for backend |
| backend/src/services/ai_engine_client.py | Added trace context propagation to AI engine HTTP calls |
| backend/src/main.py | Integrated tracing initialization into app lifespan |
| backend/requirements.txt | Added OpenTelemetry dependencies |
| backend/requirements-dev.txt | Added pip-audit and pipdeptree |
| backend/pyproject.toml | Added pycodestyle warnings (W) to ruff lint rules |
| backend/src/services/structured_logging.py | Reordered structlog processors for correct exception rendering |
| ai-engine/tracing.py | New OpenTelemetry distributed tracing service for AI engine |
| ai-engine/main.py | Integrated tracing init/shutdown into app lifecycle |
| ai-engine/requirements.txt | Added OpenTelemetry dependencies |
| ai-engine/requirements-dev.txt | Added pip-audit and pipdeptree |
| ai-engine/pyproject.toml | Added pycodestyle warnings (W) to ruff lint rules |
| ai-engine/utils/logging_config.py | Reordered structlog processors for correct exception rendering |
| docker-compose.yml | Added Jaeger service and tracing environment variables |
Files not reviewed (1)
  • frontend/package-lock.json: Language not supported

Comment on lines +43 to +74
```yaml
- name: Checkout code
  uses: actions/checkout@v6

- name: Filter paths
  id: filter
  run: |
    echo "Checking for changes in frontend, backend, and ai-engine..."

    # Check frontend changes
    if git diff --name-only main...HEAD | grep -q "frontend/"; then
      echo "frontend=true" >> $GITHUB_OUTPUT
    else
      echo "frontend=false" >> $GITHUB_OUTPUT
    fi

    # Check backend changes
    if git diff --name-only main...HEAD | grep -q "backend/"; then
      echo "backend=true" >> $GITHUB_OUTPUT
    else
      echo "backend=false" >> $GITHUB_OUTPUT
    fi

    # Check ai-engine changes
    if git diff --name-only main...HEAD | grep -q "ai-engine/"; then
      echo "ai-engine=true" >> $GITHUB_OUTPUT
    else
      echo "ai-engine=false" >> $GITHUB_OUTPUT
    fi

    echo "Frontend changed: ${{ steps.filter.outputs.frontend }}"
    echo "Backend changed: ${{ steps.filter.outputs.backend }}"
    echo "AI-Engine changed: ${{ steps.filter.outputs.ai-engine }}"
```
Copilot AI commented on Mar 8, 2026:

The changes job checkout does not include fetch-depth: 0, so git diff --name-only main...HEAD will fail because the main ref won't be available in a shallow clone. The CI workflow (ci.yml) correctly uses fetch-depth: 0 for its change detection. Additionally, on schedule and workflow_dispatch (without full_audit) triggers, HEAD is the default branch (main), so main...HEAD will show no changes and all jobs will be skipped — the weekly scheduled scan will never actually run any audits. Consider using fetch-depth: 0 and adding || github.event_name == 'schedule' to the job conditions so scheduled runs audit everything.

Comment on lines +277 to +286
```python
        headers = dict(scope.get("headers", []))
        # Convert bytes to string for headers
        headers = {k.decode(): v.decode() for k, v in headers.items()}

        # The FastAPI instrumentation will handle this automatically,
        # but we keep this for custom use cases
        context = extract_trace_context(headers)

        # Continue with the request
        await self.app(scope, receive, send)
```
Copilot AI commented on Mar 8, 2026:

Same issue as in the backend TracingMiddleware: the extracted context is unused, and ASGI scope["headers"] is a list of tuples, not a dict. The middleware has no effect.

```yaml
name: Depcheck - Frontend
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.frontend == 'true' || github.event.inputs.full_audit == 'true' }}
```
Copilot AI commented on Mar 8, 2026:

On schedule trigger runs, github.event.inputs.full_audit is not set (it will be empty/null), so this condition will be false for all three audit jobs when triggered by the weekly schedule. Combined with the change detection issue (which will also output false for scheduled runs), the weekly scan will never actually run any audits. Add || github.event_name == 'schedule' to the if condition of each audit job.

Suggested change:

```diff
-    if: ${{ needs.changes.outputs.frontend == 'true' || github.event.inputs.full_audit == 'true' }}
+    if: ${{ needs.changes.outputs.frontend == 'true' || github.event.inputs.full_audit == 'true' || github.event_name == 'schedule' }}
```

Comment on lines +193 to +202
```bash
# Run pipdeptree to check for unused dependencies
echo ""
echo "Running pipdeptree to check for unused dependencies..."
if pipdeptree --warn fail; then
  echo "✅ No unused (undepended) packages found"
  echo "unused_deps_found=false" >> $GITHUB_OUTPUT
else
  echo "❌ Unused dependencies detected!"
  echo "unused_deps_found=true" >> $GITHUB_OUTPUT
fi
```
Copilot AI commented on Mar 8, 2026:

pipdeptree --warn fail does not detect unused dependencies. It only detects dependency conflicts (version mismatches between installed packages). The step description says "check for unused dependencies" but pipdeptree is not designed for that purpose. For detecting unused Python imports/dependencies, consider tools like deptry which actually analyze source code to find unused dependencies. The current implementation will give a false sense of security regarding unused packages.

Comment on lines +146 to +147
```yaml
    volumes:
      - jaeger-data:/var/lib/jaeger
```
Copilot AI commented on Mar 8, 2026:

When using SPAN_STORAGE_TYPE=badger for persistent Jaeger trace storage, you need to configure the Badger directory paths and disable ephemeral mode. Without these settings, Badger may use ephemeral (in-memory) storage and not use the mounted volume. Add the following environment variables:

  • BADGER_EPHEMERAL=false
  • BADGER_DIRECTORY_VALUE=/badger/data
  • BADGER_DIRECTORY_KEY=/badger/key

And update the volume mount to match (e.g., jaeger-data:/badger).

Suggested change:

```diff
-    volumes:
-      - jaeger-data:/var/lib/jaeger
+      - BADGER_EPHEMERAL=false
+      - BADGER_DIRECTORY_VALUE=/badger/data
+      - BADGER_DIRECTORY_KEY=/badger/key
+    volumes:
+      - jaeger-data:/badger
```
Comment on lines +16 to +17
```python
from opentelemetry.sdk.extension.aws.resource.ec2 import AwsEc2ResourceDetector
from opentelemetry.sdk.extension.aws.resource.ecs import AwsEcsResourceDetector
```
Copilot AI commented on Mar 8, 2026:

The code imports AwsEc2ResourceDetector and AwsEcsResourceDetector from opentelemetry.sdk.extension.aws.resource, but the required package opentelemetry-sdk-extension-aws is not listed in backend/requirements.txt (or ai-engine/requirements.txt). This will cause an ImportError at runtime when this module is imported. Either add opentelemetry-sdk-extension-aws>=2.0.0 to requirements.txt, or remove these AWS resource detector imports and the corresponding try/except blocks (lines 59-69) if AWS metadata detection is not needed.

Suggested change:

```diff
-from opentelemetry.sdk.extension.aws.resource.ec2 import AwsEc2ResourceDetector
-from opentelemetry.sdk.extension.aws.resource.ecs import AwsEcsResourceDetector
+try:
+    from opentelemetry.sdk.extension.aws.resource.ec2 import AwsEc2ResourceDetector
+    from opentelemetry.sdk.extension.aws.resource.ecs import AwsEcsResourceDetector
+except ImportError:  # opentelemetry-sdk-extension-aws may not be installed
+    AwsEc2ResourceDetector = None  # type: ignore[assignment]
+    AwsEcsResourceDetector = None  # type: ignore[assignment]
```
Comment on lines +277 to +286
```python
        headers = dict(scope.get("headers", []))
        # Convert bytes to string for headers
        headers = {k.decode(): v.decode() for k, v in headers.items()}

        # The FastAPI instrumentation will handle this automatically,
        # but we keep this for custom use cases
        context = extract_trace_context(headers)

        # Continue with the request
        await self.app(scope, receive, send)
```
Copilot AI commented on Mar 8, 2026:

In ASGI, scope["headers"] is a list of 2-tuples [(bytes, bytes), ...], not a dictionary. Calling dict() on it will silently drop duplicate header names. More critically, the extracted context on line 283 is computed but never used — it is not passed to self.app() via opentelemetry.context.attach() or similar mechanism, so the extracted trace context is discarded and the middleware has no effect. If the comment on line 281-282 is accurate and FastAPI instrumentation handles this, consider removing this middleware entirely to avoid misleading code.
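
If the middleware is kept rather than removed, a minimal sketch of actually using the extracted context might look like the following (an illustrative sketch assuming the W3C propagator, not the code in this PR):

```python
from opentelemetry import context as otel_context
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator

propagator = TraceContextTextMapPropagator()


class ContextAttachingMiddleware:
    """Illustrative ASGI middleware that attaches the extracted trace context."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            await self.app(scope, receive, send)
            return

        # scope["headers"] is a list of (bytes, bytes) tuples, not a dict
        headers = {k.decode("latin-1"): v.decode("latin-1") for k, v in scope.get("headers", [])}

        # Attach the extracted context so spans created downstream join the incoming trace
        token = otel_context.attach(propagator.extract(headers))
        try:
            await self.app(scope, receive, send)
        finally:
            otel_context.detach(token)
```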

Comment on lines +1 to +324
"""
Distributed Tracing Service using OpenTelemetry.

This module provides tracing capabilities for the ModPorter AI application,
including trace context propagation between services.
"""

import os
from typing import Optional
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.sdk.resources import Resource, SERVICE_NAME, SERVICE_VERSION
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.exporter.jaeger.thrift import JaegerExporter
from opentelemetry.sdk.extension.aws.resource.ec2 import AwsEc2ResourceDetector
from opentelemetry.sdk.extension.aws.resource.ecs import AwsEcsResourceDetector
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.instrumentation.httpx import HTTPXClientInstrumentor
from opentelemetry.instrumentation.redis import RedisInstrumentor
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator
from opentelemetry.trace import Status, StatusCode
from opentelemetry.context import Context
import logging

logger = logging.getLogger(__name__)

# Trace context propagator (W3C Trace Context)
tracer_propagator = TraceContextTextMapPropagator()

# Global tracer instance
_tracer: Optional[trace.Tracer] = None
_tracer_provider: Optional[TracerProvider] = None


def get_tracer(service_name: str = "modporter-backend") -> trace.Tracer:
"""
Get or create a tracer instance for the given service.

Args:
service_name: Name of the service for tracing

Returns:
Configured tracer instance
"""
global _tracer, _tracer_provider

if _tracer is not None:
return _tracer

# Create resource with service information
service_version = os.getenv("SERVICE_VERSION", "1.0.0")
resource = Resource.create({
SERVICE_NAME: service_name,
SERVICE_VERSION: service_version,
})

# Add cloud metadata if available
try:
ec2_resource = AwsEc2ResourceDetector().detect()
resource = resource.merge(ec2_resource)
except Exception:
pass

try:
ecs_resource = AwsEcsResourceDetector().detect()
resource = resource.merge(ecs_resource)
except Exception:
pass

# Create tracer provider
_tracer_provider = TracerProvider(resource=resource)

# Configure exporter based on environment
tracing_enabled = os.getenv("TRACING_ENABLED", "true").lower() == "true"
tracing_exporter = os.getenv("TRACING_EXPORTER", "jaeger").lower()

if tracing_enabled:
if tracing_exporter == "jaeger":
# Jaeger exporter configuration
jaeger_host = os.getenv("JAEGER_HOST", "localhost")
jaeger_port = int(os.getenv("JAEGER_PORT", "6831"))

jaeger_exporter = JaegerExporter(
agent_host_name=jaeger_host,
agent_port=jaeger_port,
)
_tracer_provider.add_span_processor(
BatchSpanProcessor(jaeger_exporter)
)
logger.info(f"Jaeger tracing enabled: {jaeger_host}:{jaeger_port}")

elif tracing_exporter == "otlp":
# OTLP exporter configuration
otlp_endpoint = os.getenv("OTLP_ENDPOINT", "http://localhost:4317")

otlp_exporter = OTLPSpanExporter(
endpoint=otlp_endpoint,
insecure=True,
)
_tracer_provider.add_span_processor(
BatchSpanProcessor(otlp_exporter)
)
logger.info(f"OTLP tracing enabled: {otlp_endpoint}")

# Add console exporter for development
if os.getenv("TRACING_CONSOLE", "false").lower() == "true":
_tracer_provider.add_span_processor(
BatchSpanProcessor(ConsoleSpanExporter())
)
logger.info("Console span exporter enabled")

# Set the global tracer provider
trace.set_tracer_provider(_tracer_provider)

# Create and return tracer
_tracer = trace.get_tracer(service_name)

logger.info(f"Tracing initialized for service: {service_name}")

return _tracer


def init_tracing(
app=None,
service_name: str = "modporter-backend",
instrument_fastapi: bool = True,
instrument_httpx: bool = True,
instrument_redis: bool = True,
) -> trace.Tracer:
"""
Initialize tracing with automatic instrumentation.

Args:
app: FastAPI application instance (optional)
service_name: Name of the service
instrument_fastapi: Whether to instrument FastAPI
instrument_httpx: Whether to instrument HTTPX
instrument_redis: Whether to instrument Redis

Returns:
Configured tracer instance
"""
tracer = get_tracer(service_name)

# Instrument FastAPI if app provided
if app and instrument_fastapi:
try:
FastAPIInstrumentor.instrument_app(app)
logger.info("FastAPI instrumentation enabled")
except Exception as e:
logger.warning(f"Failed to instrument FastAPI: {e}")

# Instrument HTTPX
if instrument_httpx:
try:
HTTPXClientInstrumentor().instrument()
logger.info("HTTPX instrumentation enabled")
except Exception as e:
logger.warning(f"Failed to instrument HTTPX: {e}")

# Instrument Redis
if instrument_redis:
try:
RedisInstrumentor().instrument()
logger.info("Redis instrumentation enabled")
except Exception as e:
logger.warning(f"Failed to instrument Redis: {e}")

return tracer


def extract_trace_context(carrier: dict) -> Context:
"""
Extract trace context from carrier (e.g., HTTP headers).

Args:
carrier: Dictionary containing trace context (e.g., HTTP headers)

Returns:
Extracted context
"""
return tracer_propagator.extract(carrier)


def inject_trace_context(carrier: dict) -> dict:
"""
Inject trace context into carrier (e.g., HTTP headers).

Args:
carrier: Dictionary to inject trace context into

Returns:
Carrier with injected trace context
"""
tracer_propagator.inject(carrier)
return carrier


def create_span(
name: str,
context: Optional[Context] = None,
kind: trace.SpanKind = trace.SpanKind.INTERNAL,
) -> trace.Span:
"""
Create a new span with the given name and context.

Args:
name: Name of the span
context: Parent context (optional)
kind: Span kind

Returns:
New span
"""
tracer = get_tracer()

if context:
with tracer.start_as_current_span(name, context=context, kind=kind) as span:
return span
else:
with tracer.start_as_current_span(name, kind=kind) as span:
return span


def add_span_attributes(span: trace.Span, attributes: dict) -> None:
"""
Add attributes to a span.

Args:
span: Span to add attributes to
attributes: Dictionary of attributes
"""
for key, value in attributes.items():
if value is not None:
span.set_attribute(key, str(value))


def record_span_exception(span: trace.Span, exception: Exception) -> None:
"""
Record an exception on a span.

Args:
span: Span to record exception on
exception: Exception to record
"""
span.set_status(Status(StatusCode.ERROR, str(exception)))
span.record_exception(exception)


def shutdown_tracing() -> None:
"""Shutdown the tracing provider and flush any pending spans."""
global _tracer_provider

if _tracer_provider:
_tracer_provider.shutdown()
logger.info("Tracing provider shutdown")


class TracingMiddleware:
"""
Middleware for FastAPI to handle trace context propagation.

This middleware extracts trace context from incoming requests
and injects it into outgoing requests.
"""

def __init__(self, app):
self.app = app

async def __call__(self, scope, receive, send):
if scope["type"] != "http":
await self.app(scope, receive, send)
return

# Extract trace context from headers
headers = dict(scope.get("headers", []))
# Convert bytes to string for headers
headers = {k.decode(): v.decode() for k, v in headers.items()}

# The FastAPI instrumentation will handle this automatically,
# but we keep this for custom use cases
context = extract_trace_context(headers)

# Continue with the request
await self.app(scope, receive, send)


def get_current_span() -> Optional[trace.Span]:
"""
Get the current active span if any.

Returns:
Current span or None
"""
return trace.get_current_span()


def get_trace_id() -> Optional[str]:
"""
Get the current trace ID as a hex string.

Returns:
Trace ID or None
"""
span = get_current_span()
if span:
trace_id = span.get_span_context().trace_id
return format(trace_id, '032x')
return None


def get_span_id() -> Optional[str]:
"""
Get the current span ID as a hex string.

Returns:
Span ID or None
"""
span = get_current_span()
if span:
span_id = span.get_span_context().span_id
return format(span_id, '016x')
return None
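
For reference, a minimal sketch of wiring init_tracing / shutdown_tracing from this module into a FastAPI lifespan, assuming the import path src.services.tracing from the file table (the actual main.py integration may differ):

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI

from src.services.tracing import init_tracing, shutdown_tracing  # import path assumed


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Start tracing and auto-instrumentation when the app boots
    init_tracing(app=app, service_name="modporter-backend")
    try:
        yield
    finally:
        # Flush any pending spans on shutdown
        shutdown_tracing()


app = FastAPI(lifespan=lifespan)
```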
Copilot AI commented on Mar 8, 2026:

This PR is titled "Detect unused dependencies (#693)" and the description focuses on dependency checking, but this file introduces a complete distributed tracing system with OpenTelemetry and Jaeger integration. This is a significant feature addition that is unrelated to the PR's stated purpose of detecting unused dependencies. This should ideally be in a separate PR for clearer review and change tracking.

Comment on lines +12 to +19
"ignorePatterns": [
"*.test.ts",
"*.test.tsx",
"*.spec.ts",
"*.spec.tsx",
"stories",
"**/*.stories.tsx"
],
Copilot AI commented on Mar 8, 2026:

The ignorePatterns exclude test files (*.test.ts, *.test.tsx, etc.) and stories, which means depcheck won't scan those files for imports. This will cause it to report devDependencies that are only used in tests (e.g., @testing-library/react, @testing-library/jest-dom, jsdom, vitest, msw) as unused — resulting in false positives. Typically, depcheck should scan test files to detect devDependency usage. Consider removing the test file patterns from ignorePatterns, or add the known test-only packages to an ignoreMatches list instead.

Suggested change:

```diff
-"ignorePatterns": [
-  "*.test.ts",
-  "*.test.tsx",
-  "*.spec.ts",
-  "*.spec.tsx",
-  "stories",
-  "**/*.stories.tsx"
-],
+"ignorePatterns": [],
```

```python
log_dir = os.getenv("LOG_DIR", "/var/log/modporter")

# Configure processors based on format
# Order matters: context merging -> logger info -> level -> timestamper -> renderer -> exception handling
```
Copilot AI commented on Mar 8, 2026:

Same documentation issue: the comment says "renderer -> exception handling" but the code correctly places exception handling (StackInfoRenderer, format_exc_info) before the renderer, which is appended later. The comment should say "...timestamper -> exception handling -> renderer".

Suggested change:

```diff
-# Order matters: context merging -> logger info -> level -> timestamper -> renderer -> exception handling
+# Order matters: context merging -> logger info -> level -> timestamper -> exception handling -> renderer
```

anchapin pushed a commit that referenced this pull request Mar 9, 2026
- Format tracing.py files with black
- Format logging config files
- Format main.py and ai_engine_client.py

Co-authored-by: openhands <openhands@all-hands.dev>
anchapin added a commit that referenced this pull request Mar 10, 2026
* feat: Add Ruff linter configuration for all Python directories

- Add root-level pyproject.toml with comprehensive Ruff config
- Configure Ruff to check all Python directories (backend, ai-engine, modporter, tests)
- Add appropriate ignores for legacy code patterns (unused imports,
  module-level imports not at top, bare except, etc.)
- Update CI workflow to use root config with 'ruff check .'
- Exclude UTF-16 encoded temp_init.py file from linting

Co-authored-by: openhands <openhands@all-hands.dev>

* fix(CI): Resolve integration tests failing on main branch

- Fix prepare-base-images job to always run but conditionally skip build
  - This ensures outputs are always available for dependent jobs
  - Fixes 'Unable to find image' error when dependencies haven't changed

- Fix integration-tests container image to use coalesce() for fallback
  - When prepare-base-images is skipped, use python:3.11-slim as fallback
  - Fixes empty container image reference error

- Fix performance-monitoring job needs clause
  - Corrected 'prepare-base-images' reference (was missing underscore)

- Fix frontend-tests pnpm setup order
  - Install pnpm before setup-node to avoid 'unable to cache dependencies'
  - Simplified caching to use built-in pnpm cache in setup-node

Co-authored-by: openhands <openhands@all-hands.dev>

* fix(frontend): Resolve Issue #776 - Frontend Test Failures

Fixed multiple failing test files:
- RecipeBuilder.test.tsx: Added async/await patterns, fixed userEvent setup
- ConversionUpload.test.tsx: Fixed URL validation test (validation happens on button click)
- EnhancedConversionReport.test.tsx: Simplified complex DOM interaction tests
- ConversionProgress.test.tsx, useUndoRedo.test.ts, api.test.ts: Additional fixes

All 189 frontend tests now passing.

Co-authored-by: openhands <openhands@all-hands.dev>

* fix: Fix pre-existing frontend test failures (#776)

Co-authored-by: openhands <openhands@all-hands.dev>

* fix(CI): Set VITE_API_BASE_URL for frontend tests

This ensures the WebSocket URL is properly constructed in CI environment
instead of using 'ws://undefined/ws/...' when VITE_API_BASE_URL is not set.

Co-authored-by: openhands <openhands@all-hands.dev>

---------

Co-authored-by: openhands <openhands@all-hands.dev>
anchapin closed this on Mar 10, 2026