Integrate native Sentry logging for centralized error monitoring
Add SentryLoggerWrapper to dual-log to standard logging and Sentry
Implement Prometheus metrics and Grafana monitoring stack
Add request duration tracking per endpoint with middleware
Configure Sentry SDK with FastAPI and logging integrations
Add monitoring services (Prometheus, Grafana) to docker-compose
Diagram Walkthrough
```mermaid
flowchart LR
    A["Application Logs"] --> B["SentryLoggerWrapper"]
    B --> C["Standard Logger"]
    B --> D["Sentry SDK"]
    C --> E["Console/File Output"]
    D --> F["Sentry Cloud"]
    G["FastAPI App"] --> H["Metrics Middleware"]
    H --> I["Prometheus"]
    I --> J["Grafana Dashboard"]
```
File Walkthrough

Relevant files

Enhancement (2 files)
- logging_utils.py: Add SentryLoggerWrapper for dual logging to Sentry
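The dual-logging idea behind SentryLoggerWrapper can be sketched with stdlib logging only. In this hypothetical DualLoggerWrapper, a plain list stands in for Sentry's structured-log API (the class and sink names are assumptions, not the PR's actual code):

```python
import logging

class DualLoggerWrapper:
    """Hypothetical sketch: forward each message to standard logging
    and to a second sink standing in for sentry_sdk's structured logs."""

    def __init__(self, name: str, sink: list):
        self._std_logger = logging.getLogger(name)
        self._sink = sink

    def info(self, msg: str, *args) -> None:
        self._std_logger.info(msg, *args)
        self._sink.append(("info", msg % args if args else msg))

    def error(self, msg: str, *args) -> None:
        self._std_logger.error(msg, *args)
        self._sink.append(("error", msg % args if args else msg))
```

The real wrapper presumably forwards to `sentry_sdk.logger` instead of a list, but the fan-out shape is the same.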
Below is a summary of compliance checks for this PR:
Security Compliance
🔴
Hardcoded Sentry DSN
Description: A hardcoded Sentry DSN embeds a production-like external monitoring endpoint in source code, which can leak project identifiers and enable unintended data exfiltration to Sentry if deployed as-is. main.py [43-52]
Referred Code
```python
sentry_sdk.init(
    dsn="https://c217aed2d06bdc4504801adf99840b54@o4510432441860096.ingest.us.sentry.io/4510784901677056",
    integrations=[
        FastApiIntegration(),
        # Disable legacy breadcrumb/event behavior - we use native Sentry logs via sentry_sdk.logger
        LoggingIntegration(event_level=None, level=None),
    ],
    traces_sample_rate=1.0,
    enable_logs=True,  # Enable Sentry's native structured logs
)
```
Debug endpoint exposure
Description: The publicly accessible /sentry-debug endpoint intentionally raises an exception (division by zero), enabling trivial remote error generation and potential denial-of-service/log flooding in deployed environments. main.py [129-133]
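One common mitigation is to gate debug-only routes behind an environment check. A stdlib-only sketch, where the `APP_ENV` variable name and its values are assumptions:

```python
import os

def debug_endpoints_enabled() -> bool:
    # Hypothetical gate: default to "production" so crash/debug endpoints
    # like /sentry-debug stay off unless explicitly enabled.
    return os.environ.get("APP_ENV", "production") == "development"
```

The route registration (or the handler body) would then check this flag and return 404 outside development.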
Add monitoring services to docker-compose.yml under a separate profile so docker compose --profile monitoring up runs monitoring containers + the website, while docker compose up runs only the website.
🔴
Prefer a free/open-source, self-hosted stack; suggested: OpenTelemetry + Loki + Prometheus + Tempo + Grafana.
⚪
Add self-hosted monitoring so performance, logs, and tracebacks can be viewed on demand.
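The profile layout described above can be sketched in docker-compose.yml roughly as follows (service and image names are assumptions, not the PR's actual file):

```yaml
services:
  website:
    build: .
    ports: ["8000:8000"]

  prometheus:
    image: prom/prometheus
    profiles: ["monitoring"]   # started only with: docker compose --profile monitoring up

  grafana:
    image: grafana/grafana
    profiles: ["monitoring"]
```

Services without a `profiles` key always start, so plain `docker compose up` runs only the website.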
Codebase Duplication Compliance
⚪
Codebase context is not defined
Follow the guide to enable codebase context checks.
Custom Compliance
🟢
Generic: Meaningful Naming and Self-Documenting Code
Objective: Ensure all identifiers clearly express their purpose and intent, making code self-documenting
Generic: Robust Error Handling and Edge Case Management
Objective: Ensure comprehensive error handling that provides meaningful context and graceful degradation
Status: Invalid logging API: The new code calls non-existent logger/handler methods (set_level, add_handler) which will raise runtime errors and break logging initialization/notification paths.
Referred Code
```python
def set_level(self, level: int) -> None:
    self._std_logger.set_level(level)

def add_handler(self, handler: logging.Handler) -> None:
    self._std_logger.add_handler(handler)

def setup_logger(name: str) -> SentryLoggerWrapper:
    """Set up a logger with the specified name.

    Returns a wrapper that logs to both standard Python logging
    (console/file) and Sentry structured logs for centralized monitoring.

    Args:
        name (str): The name of the logger.

    Returns:
        SentryLoggerWrapper: A logger that outputs to console, file, and Sentry.
    """
    std_logger = logging.getLogger(name)
    std_logger.set_level(logging.INFO)
    ... (clipped 97 lines)
```
Objective: To prevent the leakage of sensitive system information through error messages while providing sufficient detail for internal debugging.
Status: Debug crash endpoint: The newly added /sentry-debug endpoint intentionally triggers a ZeroDivisionError, which risks exposing stack traces/internal details to end users if deployed without strict environment gating.
Objective: To ensure logs are useful for debugging and auditing without exposing sensitive information like PII, PHI, or cardholder data.
Status: Hardcoded Sentry DSN: The PR hardcodes the Sentry DSN in source code instead of sourcing it from secure configuration, increasing the risk of credential/tenant leakage via the repository and logs/config dumps.
Referred Code
```python
sentry_sdk.init(
    dsn="https://c217aed2d06bdc4504801adf99840b54@o4510432441860096.ingest.us.sentry.io/4510784901677056",
    integrations=[
        FastApiIntegration(),
        # Disable legacy breadcrumb/event behavior - we use native Sentry logs via sentry_sdk.logger
        LoggingIntegration(event_level=None, level=None),
    ],
    traces_sample_rate=1.0,
    enable_logs=True,  # Enable Sentry's native structured logs
)
```
Remove Git merge conflict markers from prometheus.yml
Why: The suggestion correctly identifies Git merge conflict markers in prometheus.yml which would cause the Prometheus service to fail on startup, making this a critical fix.
High
✅ Use correct logging API methods
Suggestion Impact: Updated std_logger calls from add_handler(...) to addHandler(...) for console, file, and email handlers (but did not change any set_level usages in this diff).
code diff:
```diff
-    std_logger.add_handler(console_handler)
+    std_logger.addHandler(console_handler)

     # File handler for persistent logging
     file_handler = _setup_file_handler()
     if file_handler:
-        std_logger.add_handler(file_handler)
+        std_logger.addHandler(file_handler)

     # Email handler for errors (if configured)
     email_handler = _setup_email_handler()
     if email_handler:
-        std_logger.add_handler(email_handler)
+        std_logger.addHandler(email_handler)
```
In src/core/logging_utils.py, replace the incorrect method calls set_level and add_handler with the correct standard logging methods setLevel and addHandler.
[To ensure code accuracy, apply this suggestion manually]
Suggestion importance[1-10]: 10
Why: The suggestion correctly identifies the use of incorrect method names (set_level, add_handler) which would cause a runtime AttributeError and prevent the logging system from being configured.
High
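The camelCase names are the actual stdlib API; a quick runnable check confirms the snake_case variants do not exist on `logging.Logger`:

```python
import logging

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)      # correct: setLevel, not set_level
handler = logging.StreamHandler()
logger.addHandler(handler)         # correct: addHandler, not addHandler's snake_case twin

# Calling set_level/add_handler would raise AttributeError:
assert not hasattr(logger, "set_level")
assert not hasattr(logger, "add_handler")
```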
Correct Prometheus scrape port for application
Correct the Prometheus scrape target for the 'app' job from port 9000 to 8000 to match the application's actual running port.
Why: The suggestion correctly identifies a critical port mismatch between the application's runtime configuration and the Prometheus scrape configuration, which would prevent application metric collection.
High
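The corrected scrape target would look roughly like this prometheus.yml fragment (the job and host names are assumptions):

```yaml
scrape_configs:
  - job_name: app
    static_configs:
      - targets: ["app:8000"]   # was 9000; must match the port the app listens on
```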
Capture exception stack traces in Sentry
Modify the SentryLoggerWrapper.exception method to use sentry_sdk.capture_exception() to ensure full exception details and stack traces are sent to Sentry.
```diff
 def exception(self, msg: str, *args, **kwargs) -> None:
     """Log exception with traceback to console/file and Sentry."""
     self._std_logger.exception(msg, *args, **kwargs)
     self._log_to_sentry("error", msg, *args, **kwargs)
+    try:
+        from sentry_sdk import capture_exception
+        # logger.exception is expected to be called in an except block,
+        # so capture_exception will find the exception info.
+        capture_exception()
+    except ImportError:
+        pass  # Sentry not installed or available
```
Suggestion importance[1-10]: 8
Why: The suggestion correctly identifies that the custom Sentry logging wrapper fails to capture stack traces for exceptions and provides the correct fix using sentry_sdk.capture_exception().
Medium
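Why `capture_exception()` works with no arguments: while an `except` block is executing, the in-flight exception is retrievable via `sys.exc_info()`, which the SDK consults. A stdlib-only sketch of that mechanism (the helper name is hypothetical):

```python
import sys

def active_exception():
    # Hypothetical helper mirroring how capture_exception() locates the
    # current exception: sys.exc_info() is populated inside except blocks
    # and cleared once the handler finishes.
    return sys.exc_info()[1]

try:
    1 / 0
except ZeroDivisionError:
    caught = active_exception()
```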
General
Validate presence of smtp_port
Add smtp_port to the all() check in _setup_email_handler to ensure its presence is validated before it is used, preventing a potential runtime error.
```diff
-if not all([smtp_server, smtp_username, smtp_password, from_email, to_emails]):
+if not all([smtp_server, smtp_port, smtp_username, smtp_password, from_email, to_emails]):
     return None
```
[To ensure code accuracy, apply this suggestion manually]
Suggestion importance[1-10]: 8
Why: The suggestion correctly identifies that smtp_port is missing from the validation check, which would lead to a TypeError when int() is called on None if the port is not configured.
Medium
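The failure mode can be shown in isolation. A hedged sketch of the validation pattern (function and key names are assumptions, not the PR's actual `_setup_email_handler`):

```python
def smtp_port_or_none(cfg: dict):
    # Hypothetical sketch: include smtp_port in the all() check; otherwise
    # int(None) raises TypeError when the port is absent from config.
    required = ("smtp_server", "smtp_port", "smtp_username",
                "smtp_password", "from_email", "to_emails")
    if not all(cfg.get(key) for key in required):
        return None
    return int(cfg["smtp_port"])
```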
Remove redundant custom metrics middleware
Remove the custom metrics_middleware as its functionality for tracking request latency is already provided by the prometheus-fastapi-instrumentator library, which is also configured.
```diff
-@app.middleware("http")
-async def metrics_middleware(request: Request, call_next):
-    start_time = datetime.now()
-    response = await call_next(request)
-    process_time = (datetime.now() - start_time).total_seconds()
+# This middleware should be removed.
+# The Instrumentator().instrument(app) call already provides request latency metrics.
-    endpoint = request.scope.get("path", "unknown")
-    if request.scope.get("route"):
-        endpoint = request.scope["route"].path
-
-    REQUEST_LATENCY.labels(
-        method=request.method, endpoint=endpoint, status_code=response.status_code
-    ).observe(process_time)
-    return response
```
Suggestion importance[1-10]: 7
Why: The suggestion correctly identifies that the custom metrics_middleware is redundant because prometheus-fastapi-instrumentator already provides the same functionality, improving code by removing unnecessary duplication.
User description
What is this issue for and how does it solve it
Integrate Sentry logging into our log handler. All logs should also be sent to the Sentry app; this can be configured further later.
Link to the Github Issue
Addresses #167
PR Type
Enhancement
File Walkthrough
2 files
- Add SentryLoggerWrapper for dual logging to Sentry
- Initialize Sentry SDK and add Prometheus metrics

3 files
- Clean up import statement formatting
- Reorder imports for consistency
- Reorder imports for consistency

3 files
- Add Prometheus and Grafana monitoring services
- Configure Prometheus scrape targets
- Configure Grafana Prometheus data source

1 files
- Add prometheus-fastapi-instrumentator dependency