Skip to content

feat: implement core bot infrastructure and monitoring system #957

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Draft
wants to merge 33 commits into
base: main
Choose a base branch
from

Conversation

kzndotsh
Copy link
Contributor

@kzndotsh kzndotsh commented Jul 20, 2025

  • Add comprehensive task management system
  • Implement distributed tracing and monitoring
  • Refactor bot core with new sentry integration
  • Update database controllers with tracing support
  • Rename emoji.py to emoji_manager.py

Description

Please include a summary of the changes and the related issue. Please also include relevant motivation and context. List any dependencies that are required for this change. If this change fixes any issues please put "Fixes #XX" in the description. Please also ensure to add the appropriate labels to the PR.

Guidelines

  • My code follows the style guidelines of this project (formatted with Ruff)

  • I have performed a self-review of my own code

  • I have commented my code, particularly in hard-to-understand areas

  • I have made corresponding changes to the documentation if needed

  • My changes generate no new warnings

  • I have tested this change

  • Any dependent changes have been merged and published in downstream modules

  • I have added all appropriate labels to this PR

  • I have followed all of these guidelines.

How Has This Been Tested? (if applicable)

Please describe how you tested your code. e.g describe what commands you ran, what arguments, and any config stuff (if applicable)

Screenshots (if applicable)

Please add screenshots to help explain your changes.

Additional Information

Please add any other information that is important to this PR.

Summary by Sourcery

Refactor the bot to introduce core infrastructure for distributed tracing and monitoring: centralize Sentry integration in a SentryManager, encapsulate task orchestration in a TaskManager, and provide tracing utilities; overhaul the cog loading system for priority-based loading and performance tracking; instrument commands, database controllers, and cogs end-to-end; add prefix caching, a convenient CLI ‘check-all’ command, and minor API refinements.

New Features:

  • Implement TaskManager for async task monitoring, health checks, and automated recovery
  • Add SentryManager for centralized error tracking, transaction and span management
  • Introduce tracing utilities (transaction/span decorators and context managers) and auto-instrument bot commands
  • Overhaul CogLoader with priority-based loading, load-time tracking and enhanced error reporting
  • Implement guild-specific prefix caching and add get_user_level_data helper in the Levels controller
  • Add “check-all” CLI command to run pre-commit and type checks together

Enhancements:

  • Refactor Tux bot core and TuxApp to use modular manager classes for setup, shutdown, and signal handling
  • Update database controllers to apply selective tracing on meaningful operations
  • Extend cogs with get_critical_tasks to integrate into the TaskManager
  • Rename emoji.py to emoji_manager.py and update related imports
  • Enhance startup banner to include date and simplified information

Copy link
Contributor

sourcery-ai bot commented Jul 20, 2025

Reviewer's Guide

This PR establishes a full-fledged core infrastructure and observability system for the Tux Discord bot by restructuring the main Tux class into a modular lifecycle orchestrator with dedicated managers, introducing a centralized SentryManager and a suite of tracing utilities for distributed performance monitoring, replacing the ad-hoc task loop with an advanced TaskManager that supports health checks and critical-task recovery, overhauling the CogLoader with priority-based loading and robust telemetry, applying selective instrumentation to database controllers, and streamlining error handling through the new Sentry integration. Additionally, it renames the emoji utility module to emoji_manager.py and updates all references accordingly.

Class diagram for new core bot infrastructure and managers

classDiagram
    class Tux {
        - BotState state
        - SentryManager sentry_manager
        - EmojiManager emoji_manager
        - TaskManager task_manager
        - Console console
        - setup_task
        - _startup_task
        + setup()
        + shutdown()
        + on_ready()
        + on_disconnect()
        + _setup_database()
        + _load_jishaku()
        + _load_cogs()
        + _setup_hot_reload()
        + _register_critical_tasks()
        + remove_cog()
        + invoke()
    }
    class BotState {
        is_shutting_down: bool
        setup_complete: bool
        start_time: float | None
        emoji_manager_initialized: bool
        hot_reload_loaded: bool
        banner_logged: bool
    }
    class SentryManager {
        + set_context()
        + capture_exception()
        + set_tag()
        + finish_transaction_on_error()
        + is_initialized: bool
    }
    class TaskManager {
        + start()
        + cancel_all_tasks()
        + setup_task_instrumentation()
        + discover_and_register_cog_tasks()
        + cleanup_cog_tasks()
        + critical_tasks
        + task_metrics
    }
    class EmojiManager {
        + init()
    }
    Tux --> BotState
    Tux --> SentryManager
    Tux --> EmojiManager
    Tux --> TaskManager
Loading

Class diagram for CogLoader and CogLoadResult refactor

classDiagram
    class CogLoader {
        - bot: commands.Bot
        - cog_ignore_list: set~str~
        - load_times: defaultdict~str, float~
        - load_priorities: dict~str, int~
        + load_cogs(path: Path): list~CogLoadResult~
        + load_cogs_from_folder(folder_name: str): list~CogLoadResult~
        + reload_cog(path: Path): bool
        + _load_single_cog(path: Path): CogLoadResult
        + _unload_single_cog(path: Path): bool
        + _discover_and_sort_cogs(path: Path): list~Path~
        + _get_cog_priority(path: Path): int
        + _is_eligible_cog_file(filepath: Path): bool
    }
    class CogLoadResult {
        + module: str
        + success: bool
        + load_time: float
        + error: Exception | None
        + load_time_ms: float
    }
    CogLoader --> CogLoadResult
Loading

Class diagram for SentryHandler and error handling integration

classDiagram
    class SentryHandler {
        + on_command(ctx)
        + on_command_error(ctx, error)
        + on_interaction(interaction)
        + on_app_command_error(interaction, error)
        - _set_command_context(ctx, command_name)
    }
    class ErrorHandlerConfig {
        message_format: str
        log_level: str
        sentry_status: str | None
        send_to_sentry: bool
        detail_extractor
    }
    SentryHandler ..> SentryManager : uses
    SentryHandler ..> ErrorHandlerConfig : uses
Loading

Class diagram for HotReload and CogWatcher refactor

classDiagram
    class HotReload {
        + __init__(bot)
        + cog_unload()
    }
    class CogWatcher {
        + __init__(bot, path, recursive, config)
        + start()
        + stop()
        + on_modified(event)
        + _handle_file_change_debounced(file_path)
        + _reload_extension(extension)
        + _build_extension_map()
        + _populate_all_file_hashes()
        + extension_map: dict~str, str~
        + debounce_timers: dict~str, TimerHandle~
    }
    HotReload --> CogWatcher
Loading

File-Level Changes

Change Details Files
Refactor Tux core into modular lifecycle orchestrator
  • Introduced BotState dataclass to centralize state flags
  • Composed SentryManager, EmojiManager, and TaskManager in Tux.init
  • Structured setup() and shutdown() into sequenced phases instrumented with spans
tux/bot.py
tux/app.py
Centralize Sentry integration and distributed tracing
  • Added SentryManager for SDK init, signal handling, context management, and message capture
  • Created tracing.py with @transaction/@span decorators and context managers
  • Integrated automatic command instrumentation via instrument_bot_commands and replaced inline sentry_sdk calls
tux/utils/sentry_manager.py
tux/utils/tracing.py
tux/utils/context_utils.py
Implement advanced TaskManager for async task lifecycle
  • Extracted task monitoring, categorization, and cancellation into a dedicated TaskManager
  • Supported critical-task discovery from cogs, health checks, and auto-restart
  • Wired TaskManager into Tux for start, instrumentation setup, and graceful shutdown
tux/utils/task_manager.py
tux/bot.py
tux/utils/protocols.py
Overhaul CogLoader with priority-based loading and telemetry
  • Rewrote load_cogs flow to discover, group, and load cogs by directory priorities
  • Introduced CogLoadResult and enhanced spans (capture_span_exception, enhanced_span) for load timing and error tracking
  • Added robust unload/reload paths with Sentry-integrated error capture
tux/cog_loader.py
Selective instrumentation in database controllers
  • Modified DatabaseController to wrap only meaningful public methods with @span and exclude noise
  • Removed generic span creation from _execute_query to avoid duplicate tracing
  • Applied decorator-based tracing to CRUD operations
tux/database/controllers/__init__.py
tux/database/controllers/base.py
Refactor error handling to use SentryManager
  • Removed inline transaction finalization logic in error handlers
  • Delegated exception capture and span status management to SentryManager methods
  • Simplified Sentry reporting in both prefix and slash command cogs
tux/handlers/error.py
tux/handlers/sentry.py
Rename emoji utility and update references
  • Renamed emoji.py to emoji_manager.py
  • Updated imports and initialization in Tux to use new EmojiManager
tux/utils/emoji_manager.py
tux/bot.py

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

Copy link
Contributor

github-actions bot commented Jul 20, 2025

Dependency Review

The following issues were found:
  • ✅ 0 vulnerable package(s)
  • ✅ 0 package(s) with incompatible licenses
  • ✅ 0 package(s) with invalid SPDX license definitions
  • ⚠️ 1 package(s) with unknown licenses.
See the Details below.

License Issues

poetry.lock

PackageVersionLicenseIssue Type
jaraco-functools4.2.1NullUnknown License

OpenSSF Scorecard

Scorecard details
PackageVersionScoreDetails
pip/aiohttp 3.12.14 🟢 6.9
Details
CheckScoreReason
Maintained🟢 1030 commit(s) and 25 issue activity found in the last 90 days -- score normalized to 10
Code-Review🟢 6Found 8/13 approved changesets -- score normalized to 6
Token-Permissions⚠️ 0detected GitHub workflow tokens with excessive permissions
Binary-Artifacts🟢 10no binaries found in the repo
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
CII-Best-Practices⚠️ 0no effort to earn an OpenSSF best practices badge detected
License🟢 9license file detected
Vulnerabilities🟢 100 existing vulnerabilities detected
Fuzzing🟢 10project is fuzzed
Branch-Protection⚠️ -1internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration
Security-Policy🟢 10security policy file detected
Pinned-Dependencies⚠️ 0dependency not pinned by hash detected -- score normalized to 0
Packaging🟢 10packaging workflow detected
SAST🟢 8SAST tool detected but not run on all commits
Signed-Releases⚠️ 0Project has not signed or included provenance with any releases.
pip/aiosignal 1.4.0 🟢 6
Details
CheckScoreReason
Code-Review⚠️ 0Found 1/13 approved changesets -- score normalized to 0
Binary-Artifacts🟢 10no binaries found in the repo
Maintained🟢 1030 commit(s) and 6 issue activity found in the last 90 days -- score normalized to 10
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
Pinned-Dependencies⚠️ 2dependency not pinned by hash detected -- score normalized to 2
CII-Best-Practices⚠️ 0no effort to earn an OpenSSF best practices badge detected
Token-Permissions⚠️ 0detected GitHub workflow tokens with excessive permissions
Fuzzing⚠️ 0project is not fuzzed
License🟢 10license file detected
Vulnerabilities🟢 100 existing vulnerabilities detected
Signed-Releases⚠️ 0Project has not signed or included provenance with any releases.
Branch-Protection⚠️ -1internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration
Security-Policy🟢 10security policy file detected
Packaging🟢 10packaging workflow detected
SAST🟢 10SAST tool is run on all commits
pip/backrefs 5.9 🟢 5.3
Details
CheckScoreReason
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
Binary-Artifacts🟢 10no binaries found in the repo
Maintained🟢 56 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 5
Security-Policy🟢 10security policy file detected
Code-Review⚠️ 0Found 0/29 approved changesets -- score normalized to 0
Token-Permissions⚠️ 0detected GitHub workflow tokens with excessive permissions
CII-Best-Practices⚠️ 0no effort to earn an OpenSSF best practices badge detected
Vulnerabilities🟢 100 existing vulnerabilities detected
Fuzzing⚠️ 0project is not fuzzed
Pinned-Dependencies⚠️ 0dependency not pinned by hash detected -- score normalized to 0
License🟢 10license file detected
Packaging🟢 10packaging workflow detected
Signed-Releases⚠️ -1no releases found
Branch-Protection⚠️ -1internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration
SAST⚠️ 0SAST tool is not run on all commits -- score normalized to 0
pip/certifi 2025.7.14 🟢 6.9
Details
CheckScoreReason
Maintained🟢 109 commit(s) and 4 issue activity found in the last 90 days -- score normalized to 10
Code-Review⚠️ 2Found 1/4 approved changesets -- score normalized to 2
Security-Policy🟢 10security policy file detected
Binary-Artifacts🟢 10no binaries found in the repo
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
Pinned-Dependencies🟢 5dependency not pinned by hash detected -- score normalized to 5
Token-Permissions🟢 10GitHub workflow tokens follow principle of least privilege
CII-Best-Practices⚠️ 0no effort to earn an OpenSSF best practices badge detected
Fuzzing⚠️ 0project is not fuzzed
Vulnerabilities🟢 100 existing vulnerabilities detected
License🟢 9license file detected
Signed-Releases⚠️ -1no releases found
Branch-Protection🟢 3branch protection is not maximal on development and all release branches
Packaging🟢 10packaging workflow detected
SAST⚠️ 0SAST tool is not run on all commits -- score normalized to 0
pip/coverage 7.9.2 🟢 8.5
Details
CheckScoreReason
Maintained🟢 1030 commit(s) and 19 issue activity found in the last 90 days -- score normalized to 10
Code-Review⚠️ 0Found 0/29 approved changesets -- score normalized to 0
Security-Policy🟢 10security policy file detected
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
Binary-Artifacts🟢 10no binaries found in the repo
Token-Permissions🟢 10GitHub workflow tokens follow principle of least privilege
Vulnerabilities🟢 100 existing vulnerabilities detected
CII-Best-Practices🟢 5badge detected: Passing
License🟢 10license file detected
Branch-Protection⚠️ -1internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration
Pinned-Dependencies🟢 5dependency not pinned by hash detected -- score normalized to 5
Signed-Releases⚠️ -1no releases found
Fuzzing🟢 10project is fuzzed
Packaging🟢 10packaging workflow detected
SAST🟢 10SAST tool is run on all commits
pip/cryptography 45.0.5 🟢 8.4
Details
CheckScoreReason
Code-Review🟢 10all changesets reviewed
Maintained🟢 1030 commit(s) and 12 issue activity found in the last 90 days -- score normalized to 10
CII-Best-Practices⚠️ 0no effort to earn an OpenSSF best practices badge detected
Security-Policy🟢 10security policy file detected
Token-Permissions🟢 9detected GitHub workflow tokens with excessive permissions
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
License🟢 9license file detected
Branch-Protection⚠️ -1internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration
Binary-Artifacts🟢 10no binaries found in the repo
Signed-Releases⚠️ -1no releases found
Fuzzing🟢 10project is fuzzed
Packaging🟢 10packaging workflow detected
Pinned-Dependencies🟢 5dependency not pinned by hash detected -- score normalized to 5
SAST⚠️ 0SAST tool is not run on all commits -- score normalized to 0
Vulnerabilities🟢 82 existing vulnerabilities detected
pip/distlib 0.4.0 UnknownUnknown
pip/hishel 0.1.3 UnknownUnknown
pip/jaraco-functools 4.2.1 🟢 5.6
Details
CheckScoreReason
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
Binary-Artifacts🟢 10no binaries found in the repo
Code-Review⚠️ 0Found 1/28 approved changesets -- score normalized to 0
Packaging⚠️ -1packaging workflow not detected
Security-Policy🟢 10security policy file detected
Maintained🟢 1013 commit(s) and 2 issue activity found in the last 90 days -- score normalized to 10
Pinned-Dependencies⚠️ 0dependency not pinned by hash detected -- score normalized to 0
Token-Permissions🟢 10GitHub workflow tokens follow principle of least privilege
CII-Best-Practices⚠️ 0no effort to earn an OpenSSF best practices badge detected
Fuzzing⚠️ 0project is not fuzzed
Vulnerabilities🟢 100 existing vulnerabilities detected
License⚠️ 0license file not detected
Signed-Releases⚠️ -1no releases found
Branch-Protection⚠️ 0branch protection not enabled on development/release branches
SAST⚠️ 0SAST tool is not run on all commits -- score normalized to 0
pip/multidict 6.6.3 🟢 6.2
Details
CheckScoreReason
Code-Review⚠️ 1Found 5/26 approved changesets -- score normalized to 1
Maintained🟢 1030 commit(s) and 13 issue activity found in the last 90 days -- score normalized to 10
Binary-Artifacts🟢 10no binaries found in the repo
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
Token-Permissions⚠️ 0detected GitHub workflow tokens with excessive permissions
Pinned-Dependencies⚠️ 0dependency not pinned by hash detected -- score normalized to 0
CII-Best-Practices⚠️ 0no effort to earn an OpenSSF best practices badge detected
Fuzzing🟢 10project is fuzzed
Vulnerabilities🟢 100 existing vulnerabilities detected
License🟢 9license file detected
Security-Policy🟢 10security policy file detected
Packaging🟢 10packaging workflow detected
SAST🟢 10SAST tool is run on all commits
Branch-Protection🟢 3branch protection is not maximal on development and all release branches
Signed-Releases⚠️ 0Project has not signed or included provenance with any releases.
pip/pbs-installer 2025.7.12 UnknownUnknown
pip/types-aiofiles 24.1.0.20250708 🟢 5.2
Details
CheckScoreReason
Maintained🟢 1030 commit(s) and 24 issue activity found in the last 90 days -- score normalized to 10
Code-Review🟢 9Found 25/26 approved changesets -- score normalized to 9
Packaging⚠️ -1packaging workflow not detected
CII-Best-Practices⚠️ 0no effort to earn an OpenSSF best practices badge detected
Token-Permissions⚠️ 0detected GitHub workflow tokens with excessive permissions
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
License🟢 9license file detected
Fuzzing⚠️ 0project is not fuzzed
Signed-Releases⚠️ -1no releases found
Security-Policy🟢 10security policy file detected
Binary-Artifacts🟢 10no binaries found in the repo
Branch-Protection🟢 3branch protection is not maximal on development and all release branches
Pinned-Dependencies⚠️ 0dependency not pinned by hash detected -- score normalized to 0
Vulnerabilities⚠️ 012 existing vulnerabilities detected
SAST⚠️ 0SAST tool is not run on all commits -- score normalized to 0
pip/types-dateparser 1.2.2.20250627 🟢 5.2
Details
CheckScoreReason
Maintained🟢 1030 commit(s) and 24 issue activity found in the last 90 days -- score normalized to 10
Code-Review🟢 9Found 25/26 approved changesets -- score normalized to 9
Packaging⚠️ -1packaging workflow not detected
CII-Best-Practices⚠️ 0no effort to earn an OpenSSF best practices badge detected
Token-Permissions⚠️ 0detected GitHub workflow tokens with excessive permissions
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
License🟢 9license file detected
Fuzzing⚠️ 0project is not fuzzed
Signed-Releases⚠️ -1no releases found
Security-Policy🟢 10security policy file detected
Binary-Artifacts🟢 10no binaries found in the repo
Branch-Protection🟢 3branch protection is not maximal on development and all release branches
Pinned-Dependencies⚠️ 0dependency not pinned by hash detected -- score normalized to 0
Vulnerabilities⚠️ 012 existing vulnerabilities detected
SAST⚠️ 0SAST tool is not run on all commits -- score normalized to 0
pip/types-python-dateutil 2.9.0.20250708 🟢 5.2
Details
CheckScoreReason
Maintained🟢 1030 commit(s) and 24 issue activity found in the last 90 days -- score normalized to 10
Code-Review🟢 9Found 25/26 approved changesets -- score normalized to 9
Packaging⚠️ -1packaging workflow not detected
CII-Best-Practices⚠️ 0no effort to earn an OpenSSF best practices badge detected
Token-Permissions⚠️ 0detected GitHub workflow tokens with excessive permissions
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
License🟢 9license file detected
Fuzzing⚠️ 0project is not fuzzed
Signed-Releases⚠️ -1no releases found
Security-Policy🟢 10security policy file detected
Binary-Artifacts🟢 10no binaries found in the repo
Branch-Protection🟢 3branch protection is not maximal on development and all release branches
Pinned-Dependencies⚠️ 0dependency not pinned by hash detected -- score normalized to 0
Vulnerabilities⚠️ 012 existing vulnerabilities detected
SAST⚠️ 0SAST tool is not run on all commits -- score normalized to 0
pip/typing-extensions 4.14.1 🟢 6.6
Details
CheckScoreReason
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
Maintained🟢 1018 commit(s) and 13 issue activity found in the last 90 days -- score normalized to 10
Packaging⚠️ -1packaging workflow not detected
Code-Review🟢 8Found 25/30 approved changesets -- score normalized to 8
CII-Best-Practices⚠️ 0no effort to earn an OpenSSF best practices badge detected
Token-Permissions🟢 10GitHub workflow tokens follow principle of least privilege
Pinned-Dependencies⚠️ 0dependency not pinned by hash detected -- score normalized to 0
Binary-Artifacts🟢 10no binaries found in the repo
License🟢 9license file detected
Fuzzing⚠️ 0project is not fuzzed
Signed-Releases⚠️ -1no releases found
Branch-Protection⚠️ -1internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration
Security-Policy🟢 10security policy file detected
Vulnerabilities🟢 37 existing vulnerabilities detected
SAST⚠️ 0SAST tool is not run on all commits -- score normalized to 0
pip/xattr 1.2.0 🟢 4.5
Details
CheckScoreReason
Maintained⚠️ 23 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 2
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
Code-Review⚠️ 1Found 2/12 approved changesets -- score normalized to 1
Binary-Artifacts🟢 10no binaries found in the repo
Token-Permissions⚠️ 0detected GitHub workflow tokens with excessive permissions
Pinned-Dependencies⚠️ 0dependency not pinned by hash detected -- score normalized to 0
CII-Best-Practices⚠️ 0no effort to earn an OpenSSF best practices badge detected
Security-Policy⚠️ 0security policy file not detected
Fuzzing⚠️ 0project is not fuzzed
License🟢 9license file detected
Vulnerabilities🟢 100 existing vulnerabilities detected
Signed-Releases⚠️ -1no releases found
Branch-Protection⚠️ -1internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration
Packaging🟢 10packaging workflow detected
SAST⚠️ 0SAST tool is not run on all commits -- score normalized to 0

Scanned Files

  • poetry.lock

@kzndotsh kzndotsh force-pushed the 01-core-foundation-split branch 2 times, most recently from 2976783 to 97c46e9 Compare July 20, 2025 04:50
Copy link

codecov bot commented Jul 20, 2025

Codecov Report

Attention: Patch coverage is 18.82530% with 1078 lines in your changes missing coverage. Please review.

Project coverage is 12.26%. Comparing base (3e4f69e) to head (7b06cc9).
Report is 2 commits behind head on main.

✅ All tests successful. No failed tests found.

Files with missing lines Patch % Lines
tux/utils/task_manager.py 25.60% 243 Missing and 1 partial ⚠️
tux/bot.py 17.15% 198 Missing ⚠️
tux/cog_loader.py 0.00% 161 Missing ⚠️
tux/utils/sentry_manager.py 28.35% 144 Missing ⚠️
tux/utils/tracing.py 24.09% 126 Missing ⚠️
tux/utils/hot_reload.py 0.00% 60 Missing ⚠️
tux/handlers/sentry.py 0.00% 33 Missing ⚠️
tux/app.py 22.58% 24 Missing ⚠️
tux/handlers/error.py 0.00% 23 Missing ⚠️
tux/utils/context_utils.py 34.78% 15 Missing ⚠️
... and 9 more
Additional details and impacted files
@@            Coverage Diff            @@
##            main     #957      +/-   ##
=========================================
+ Coverage   9.26%   12.26%   +2.99%     
=========================================
  Files        123      129       +6     
  Lines      10390    11140     +750     
  Branches    1276     1346      +70     
=========================================
+ Hits         963     1366     +403     
- Misses      9325     9691     +366     
+ Partials     102       83      -19     
Flag Coverage Δ *Carryforward flag
database 0.33% <0.00%> (+0.02%) ⬆️ Carriedforward from e46b4a3
integration 9.34% <18.82%> (+3.48%) ⬆️
unit 9.78% <18.82%> (+3.47%) ⬆️

*This pull request uses carry forward flags. Click here to find out more.

Components Coverage Δ
Core Bot Infrastructure 14.90% <11.13%> (-1.55%) ⬇️
Database Layer 35.71% <9.52%> (+35.71%) ⬆️
Bot Commands & Features 0.00% <0.00%> (ø)
Event & Error Handling ∅ <ø> (∅)
Utilities & Helpers ∅ <ø> (∅)
User Interface Components 0.00% <ø> (ø)
CLI Interface ∅ <ø> (∅)
External Service Wrappers ∅ <ø> (∅)

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@kzndotsh kzndotsh force-pushed the 01-core-foundation-split branch from 97c46e9 to 004c154 Compare July 20, 2025 04:54
Copy link

cloudflare-workers-and-pages bot commented Jul 20, 2025

Deploying tux with  Cloudflare Pages  Cloudflare Pages

Latest commit: 7b06cc9
Status: ✅  Deploy successful!
Preview URL: https://c74359ed.tux-afh.pages.dev
Branch Preview URL: https://01-core-foundation-split.tux-afh.pages.dev

View logs

Copy link
Contributor

@sourcery-ai sourcery-ai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Hey @kzndotsh - I've reviewed your changes and found some issues that need to be addressed.

Blocking issues:

  • Syntax error in function definition: square brackets used instead of parentheses. (link)

General comments:

  • In the base controller’s _execute_query, the op_name parameter is never used—either remove it or integrate it into your span metadata for clearer tracing.
  • The TaskManager class is over 600 lines and handles monitoring, health checks, metrics, recovery, and shutdown—consider splitting it into smaller, focused modules or classes to improve readability and maintainability.
  • In get_prefix, you call commands.when_mentioned_or(self, message) but the helper provides a bot argument—using the passed-in bot parameter instead of self could prevent subtle reference issues.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In the base controller’s `_execute_query`, the `op_name` parameter is never used—either remove it or integrate it into your span metadata for clearer tracing.
- The TaskManager class is over 600 lines and handles monitoring, health checks, metrics, recovery, and shutdown—consider splitting it into smaller, focused modules or classes to improve readability and maintainability.
- In `get_prefix`, you call `commands.when_mentioned_or(self, message)` but the helper provides a `bot` argument—using the passed-in `bot` parameter instead of `self` could prevent subtle reference issues.

## Individual Comments

### Comment 1
<location> `tux/bot.py:546` </location>
<code_context>
+        ctx : commands.Context[Any]
+            The context of the command invocation.
+        """
+        if not self.sentry_manager.is_initialized or not ctx.command:
+            await super().invoke(ctx)
+            return
+
</code_context>

<issue_to_address>
Possible missing Sentry transaction for commands with subcommands or errors.

Currently, Sentry transactions are only created when `ctx.command` is set. This may omit tracing for invocations with errors or unresolved commands. Consider tracing all invocations for better observability.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
        if not self.sentry_manager.is_initialized or not ctx.command:
            await super().invoke(ctx)
            return
=======
        if not self.sentry_manager.is_initialized:
            await super().invoke(ctx)
            return

        # Start a Sentry transaction for every invocation, even if ctx.command is None
        command_name = getattr(ctx.command, "qualified_name", None) or "unknown_command"
        with self.sentry_manager.start_transaction(
            name=f"discord.command.{command_name}",
            op="discord.command.invoke"
        ):
            await super().invoke(ctx)
            return
>>>>>>> REPLACE

</suggested_fix>

### Comment 2
<location> `tux/cog_loader.py:334` </location>
<code_context>
+        eligible_cogs = self._discover_and_sort_cogs(path)
</code_context>

<issue_to_address>
Loading cogs concurrently by priority group may mask dependency issues.

Concurrent loading within a priority group can cause race conditions or import errors if cogs depend on each other. Consider enforcing a stricter load order or documenting that intra-group dependencies are not allowed.
</issue_to_address>

### Comment 3
<location> `tux/cog_loader.py:388` </location>
<code_context>
-                    if sentry_sdk.is_initialized() and (current_span := sentry_sdk.get_current_span()):
-                        current_span.set_data("slow_cogs", slow_cogs)
-                    logger.warning(f"Slow loading cogs (>{slow_threshold * 1000:.0f}ms): {slow_cogs}")
+        if not path.exists():
+            logger.warning(f"Cog path not found: {path}")
+            return []

+        try:
</code_context>

<issue_to_address>
Returning an empty list for non-existent cog paths may hide configuration errors.

Returning an empty list may let misconfigured or missing cog folders go undetected. Consider raising an exception or escalating the warning to catch these issues early.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
        if not path.exists():
            logger.warning(f"Cog path not found: {path}")
            return []
=======
        if not path.exists():
            logger.error(f"Cog path not found: {path}")
            raise FileNotFoundError(f"Cog path not found: {path}")
>>>>>>> REPLACE

</suggested_fix>

### Comment 4
<location> `tux/utils/tracing.py:131` </location>
<code_context>
+    context_obj.set_data("duration_ms", (time.perf_counter() - start_time) * 1000)
+
+
+def create_instrumentation_wrapper[**P, R](
+    func: Callable[P, R],
+    context_factory: Callable[[], Any],
</code_context>

<issue_to_address>
Syntax error in function definition: square brackets used instead of parentheses.

Replace the square brackets with parentheses in the function definition to correct the syntax error.
</issue_to_address>

### Comment 5
<location> `tux/utils/task_manager.py:449` </location>
<code_context>
+            category = self._get_task_category(name)
+            tasks_by_type.setdefault(category, []).append(task)
+
+        if unknown_tasks := tasks_by_type.get(TaskCategory.UNKNOWN):
+            task_names = [self._get_task_name(t) for t in unknown_tasks]
+            logger.warning(f"Found {len(unknown_tasks)} uncategorized tasks: {', '.join(task_names)}")
</code_context>

<issue_to_address>
Logging unknown tasks every loop may cause log noise.

If unknown tasks persist, this warning could repeatedly flood the logs. Please consider rate-limiting or aggregating these log messages.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
        tasks_by_type: dict[TaskCategory, list[asyncio.Task[Any]]] = {category: [] for category in TaskCategory}

        for task in tasks:
            if task.done():
                continue

            name = self._get_task_name(task)
            category = self._get_task_category(name)
            tasks_by_type.setdefault(category, []).append(task)

        if unknown_tasks := tasks_by_type.get(TaskCategory.UNKNOWN):
            task_names = [self._get_task_name(t) for t in unknown_tasks]
            logger.warning(f"Found {len(unknown_tasks)} uncategorized tasks: {', '.join(task_names)}")

        return tasks_by_type
=======
        tasks_by_type: dict[TaskCategory, list[asyncio.Task[Any]]] = {category: [] for category in TaskCategory}

        for task in tasks:
            if task.done():
                continue

            name = self._get_task_name(task)
            category = self._get_task_category(name)
            tasks_by_type.setdefault(category, []).append(task)

        # Rate-limit unknown task warnings to avoid log flooding
        import time
        UNKNOWN_TASK_LOG_INTERVAL = 60  # seconds

        if not hasattr(self, "_last_unknown_task_log_time"):
            self._last_unknown_task_log_time = 0.0

        if unknown_tasks := tasks_by_type.get(TaskCategory.UNKNOWN):
            now = time.monotonic()
            if now - self._last_unknown_task_log_time > UNKNOWN_TASK_LOG_INTERVAL:
                task_names = [self._get_task_name(t) for t in unknown_tasks]
                logger.warning(f"Found {len(unknown_tasks)} uncategorized tasks: {', '.join(task_names)}")
                self._last_unknown_task_log_time = now

        return tasks_by_type
>>>>>>> REPLACE

</suggested_fix>

### Comment 6
<location> `tux/database/controllers/__init__.py:83` </location>
<code_context>
+        """Initializes the TuxApp, setting the bot instance to None initially."""
+        self.bot: Tux | None = None

-        if sentry_sdk.is_initialized():
-            with sentry_sdk.push_scope() as scope:
-                scope.set_tag("signal.number", signum)
</code_context>

<issue_to_address>
Consider keeping the new pattern-based wrapping approach, as it simplifies and improves maintainability over the previous method.

No actionable changes needed – the new pattern‐based wrapping is significantly simpler and more maintainable than the previous per‐method inspect/functools approach, and it keeps functionality intact.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment on lines +439 to +504
tasks_by_type: dict[TaskCategory, list[asyncio.Task[Any]]] = {category: [] for category in TaskCategory}

for task in tasks:
if task.done():
continue

name = self._get_task_name(task)
category = self._get_task_category(name)
tasks_by_type.setdefault(category, []).append(task)

if unknown_tasks := tasks_by_type.get(TaskCategory.UNKNOWN):
task_names = [self._get_task_name(t) for t in unknown_tasks]
logger.warning(f"Found {len(unknown_tasks)} uncategorized tasks: {', '.join(task_names)}")

return tasks_by_type
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

suggestion (performance): Logging unknown tasks every loop may cause log noise.

If unknown tasks persist, this warning could repeatedly flood the logs. Please consider rate-limiting or aggregating these log messages.

Suggested change
tasks_by_type: dict[TaskCategory, list[asyncio.Task[Any]]] = {category: [] for category in TaskCategory}
for task in tasks:
if task.done():
continue
name = self._get_task_name(task)
category = self._get_task_category(name)
tasks_by_type.setdefault(category, []).append(task)
if unknown_tasks := tasks_by_type.get(TaskCategory.UNKNOWN):
task_names = [self._get_task_name(t) for t in unknown_tasks]
logger.warning(f"Found {len(unknown_tasks)} uncategorized tasks: {', '.join(task_names)}")
return tasks_by_type
tasks_by_type: dict[TaskCategory, list[asyncio.Task[Any]]] = {category: [] for category in TaskCategory}
for task in tasks:
if task.done():
continue
name = self._get_task_name(task)
category = self._get_task_category(name)
tasks_by_type.setdefault(category, []).append(task)
# Rate-limit unknown task warnings to avoid log flooding
import time
UNKNOWN_TASK_LOG_INTERVAL = 60 # seconds
if not hasattr(self, "_last_unknown_task_log_time"):
self._last_unknown_task_log_time = 0.0
if unknown_tasks := tasks_by_type.get(TaskCategory.UNKNOWN):
now = time.monotonic()
if now - self._last_unknown_task_log_time > UNKNOWN_TASK_LOG_INTERVAL:
task_names = [self._get_task_name(t) for t in unknown_tasks]
logger.warning(f"Found {len(unknown_tasks)} uncategorized tasks: {', '.join(task_names)}")
self._last_unknown_task_log_time = now
return tasks_by_type

@kzndotsh kzndotsh self-assigned this Jul 20, 2025
Copy link
Contributor

@sourcery-ai sourcery-ai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Hey @kzndotsh - I've reviewed your changes and found some issues that need to be addressed.

Blocking issues:

  • Syntax error in function definition: invalid use of brackets. (link)

General comments:

  • This PR bundles major changes across Sentry integration, task management, and cog loading—consider splitting it into smaller, focused PRs (e.g. one for SentryManager, one for TaskManager, one for CogLoader) to simplify review and testing.
  • There are still a few direct sentry_sdk imports and calls sprinkled throughout; consolidate all error‐reporting logic behind the new SentryManager abstraction and remove leftover sentry_sdk usages for consistency.
  • The prefix_cache in Tux._get_prefix grows indefinitely and never invalidates—consider adding TTL or explicit invalidation hooks so runtime changes to guild prefixes take effect without a restart.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- This PR bundles major changes across Sentry integration, task management, and cog loading—consider splitting it into smaller, focused PRs (e.g. one for SentryManager, one for TaskManager, one for CogLoader) to simplify review and testing.
- There are still a few direct `sentry_sdk` imports and calls sprinkled throughout; consolidate all error‐reporting logic behind the new `SentryManager` abstraction and remove leftover `sentry_sdk` usages for consistency.
- The `prefix_cache` in `Tux._get_prefix` grows indefinitely and never invalidates—consider adding TTL or explicit invalidation hooks so runtime changes to guild prefixes take effect without a restart.

## Individual Comments

### Comment 1
<location> `tux/utils/tracing.py:131` </location>
<code_context>
+    context_obj.set_data("duration_ms", (time.perf_counter() - start_time) * 1000)
+
+
+def create_instrumentation_wrapper[**P, R](
+    func: Callable[P, R],
+    context_factory: Callable[[], Any],
</code_context>

<issue_to_address>
Syntax error in function definition: invalid use of brackets.

Replace the square brackets in the function definition with parentheses and use the correct type variable unpacking. The current syntax will cause a syntax error and prevent the module from loading.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

kzndotsh added 15 commits July 20, 2025 19:43
- Add comprehensive task management system
- Implement distributed tracing and monitoring
- Refactor bot core with new sentry integration
- Update database controllers with tracing support
- Rename emoji.py to emoji_manager.py
- Replace old sentry.py with new sentry_manager.py
- Update error handlers with improved context
- Streamline sentry error reporting
- Add get_prefix function to tux/app.py for dynamic guild prefix resolution
- Update bot initialization to use get_prefix as command_prefix
- Resolves remaining dependency issues from branch splitting
…tribute

- Add type: ignore comments for task_loop.coro access
- Resolves reportUnknownMemberType error in task_manager.py
- The .coro attribute exists at runtime but isn't in discord.py type stubs
- Refactor operation execution to directly return the result of the awaited operation
- Improve readability by reducing unnecessary variable assignment
…ring

- Introduced static methods for mapping database operations and transactions to standardized types.
- Implemented span filtering and grouping to improve trace readability and reduce noise.
- Updated transaction handling to streamline noise suppression and enhance reporting clarity.
- Simplified the conditional check for skipping tracing in production environments.
- Improved readability by restructuring the code for command instrumentation in the instrument_bot_commands function.
…_ call

The command_prefix argument is removed from the super().__init__ call
because it is not necessary to pass it explicitly. The command prefix
is managed internally, likely through other means, making the explicit
passing redundant and simplifying the initialization process.
…s to prevent dependency issues

- Updated cog loading logic to process cogs sequentially within each priority group, addressing potential race conditions.
- Implemented error handling to skip remaining cogs in a group if one fails to load, preventing cascading failures.
- Enhanced documentation to clarify the loading process and its implications on cog dependencies.
- Changed the warning log to an error log when a cog path is not found.
- Raised a FileNotFoundError with a descriptive message if the specified path does not exist.
- Added error handling to gracefully manage missing folders during the loading process while logging the error for visibility.
…gration

- Updated command transaction logic to create a transaction for every invocation, even when ctx.command is None.
- Improved command name retrieval by using a fallback to "unknown_command" if the command is not available.
- Streamlined the context setting for SentryManager during command invocations.
…xDB integration

- Added an info log statement to confirm successful initialization of the InfluxDB logger.
- Updated warning log to indicate task stopping when the InfluxDB writer is not initialized.
- Ensured the logger stops if InfluxDB is not configured, preventing unnecessary task execution.
… support

- Updated task registration to occur after cogs are loaded, ensuring tasks are only registered when their corresponding cogs exist.
- Added methods to unregister critical tasks and clean up tasks associated with unloaded cogs, improving resource management and logging for task cleanup.
…eanup

- Introduced a new method to register critical tasks after cogs are loaded, ensuring tasks are only registered for existing cogs.
- Enhanced the cog unloading process to clean up associated critical tasks, preventing orphaned task references.
- Updated the remove_cog method to ensure proper cleanup of tasks when a cog is removed.
- Improved logging for task registration and cleanup operations to enhance visibility and debugging.
kzndotsh added 12 commits July 20, 2025 19:43
…ction

- Add comprehensive SentryManager class with all SDK functionality
- Implement tracing and span management methods
- Add user and command context setting capabilities
- Use if expressions for cleaner code (Sourcery suggestions)
- Centralize all Sentry interactions for consistent error reporting
…gistration

- Remove hardcoded DEFAULT_CRITICAL_TASKS configuration
- Add discover_and_register_cog_tasks() method for dynamic task discovery
- Implement cog-driven task registration via get_critical_tasks() method
- Improve task registration documentation and error handling
- Make task management more flexible and maintainable
…thod

- Add @runtime_checkable decorator for runtime protocol checking
- Add add_cog method to BotProtocol for hot reload compatibility
- Make sentry_manager required (not optional) to match Tux implementation
- Improve type safety for dependency injection and Union types
… type safety

- Remove direct sentry_sdk usage, use SentryManager abstraction
- Add Union type support for bot parameter (commands.Bot | BotProtocol)
- Use runtime protocol checking with isinstance() for sentry_manager access
- Merge nested if statements for cleaner code (linter suggestions)
- Improve error handling and type safety throughout hot reload system
…rdcoded checks

- Replace hardcoded task registration with dynamic discovery
- Remove _is_cog_functional method in favor of cog-driven approach
- Use task_manager.discover_and_register_cog_tasks() for flexible task registration
- Improve separation of concerns between bot and cogs
- Make task management more maintainable and extensible
…ction

- Replace direct sentry_sdk calls with SentryManager methods
- Use set_tag and finish_transaction_on_error from SentryManager
- Improve consistency with centralized error reporting approach
- Maintain same functionality while using abstraction layer
… task registration

- Replace is_functional method with get_critical_tasks for new task manager
- Only register influx_db_logger task when InfluxDB is properly configured
- Add proper imports for CriticalTaskConfig and TaskPriority
- Improve task registration to be conditional on cog functionality
- Prevent task registration when InfluxDB environment variables are missing
…ask registration

- Add get_critical_tasks method to register old_gif_remover task
- Import CriticalTaskConfig and TaskPriority from task_manager
- Enable automatic task registration and monitoring for GIF cleanup
- Follow new cog-driven task management pattern
… registration

- Add get_critical_tasks method to register tempban_check task
- Import CriticalTaskConfig and TaskPriority from task_manager
- Enable automatic task registration and monitoring for tempban expiration
- Follow new cog-driven task management pattern for moderation tasks
…istration

- Add get_critical_tasks method to register handle_afk_expiration task
- Import CriticalTaskConfig and TaskPriority from task_manager
- Enable automatic task registration and monitoring for AFK expiration
- Follow new cog-driven task management pattern for utility tasks
…and type annotations

- Add CriticalTasksProvider Protocol with @runtime_checkable decorator
- Use isinstance() check with Protocol instead of getattr() for type safety
- Add proper type annotations for task discovery method
- Fix pyright type checking errors for task configuration handling
- Improve type safety for cog-driven task registration
…tion

- Add check-all command that runs both pre-commit and type checking
- Ensures local development catches the same issues as CI
- Provides single command for complete code validation
- Helps prevent type errors from reaching GitHub Actions
@kzndotsh kzndotsh force-pushed the 01-core-foundation-split branch from 30bfe73 to b216cee Compare July 20, 2025 23:43
Copy link
Contributor

@sourcery-ai sourcery-ai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Hey @kzndotsh - I've reviewed your changes and found some issues that need to be addressed.

Blocking issues:

  • Syntax error in function definition: invalid use of brackets. (link)

General comments:

  • The Tux class is still extremely large and mixes lifecycle, event handling, and orchestration—consider extracting setup/shutdown sequences into dedicated service classes or modules to improve readability and testability.
  • The revamped CogLoader has become quite monolithic; splitting its discovery, priority grouping, and loading/result‐aggregation logic into smaller components would make it easier to maintain and unit‐test.
  • In the hot‐reload CogWatcher, errors now bubble up and can stop the observer entirely—consider catching/logging exceptions within the file‐change handlers so a single syntax or reload failure doesn’t halt the whole watcher.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The `Tux` class is still extremely large and mixes lifecycle, event handling, and orchestration—consider extracting setup/shutdown sequences into dedicated service classes or modules to improve readability and testability.
- The revamped `CogLoader` has become quite monolithic; splitting its discovery, priority grouping, and loading/result‐aggregation logic into smaller components would make it easier to maintain and unit‐test.
- In the hot‐reload `CogWatcher`, errors now bubble up and can stop the observer entirely—consider catching/logging exceptions within the file‐change handlers so a single syntax or reload failure doesn’t halt the whole watcher.

## Individual Comments

### Comment 1
<location> `tux/utils/hot_reload.py:724` </location>
<code_context>
-        self._config = config or HotReloadConfig()
</code_context>

<issue_to_address>
Inconsistent attribute naming for config object.

Update all references to use `self.config` to avoid attribute errors and ensure consistency.
</issue_to_address>

### Comment 2
<location> `tux/utils/hot_reload.py:733` </location>
<code_context>
+        """
+        super().__init__()
         self.bot = bot
-        self.path = str(watch_path.resolve())
+        self.watch_path = Path(path)
         self.recursive = recursive
+        self.config = config or HotReloadConfig()
</code_context>

<issue_to_address>
Attribute name changed from 'path' to 'watch_path', but some methods still use 'self.path'.

Update all method references from `self.path` to `self.watch_path` to prevent runtime errors.
</issue_to_address>

### Comment 3
<location> `tux/utils/hot_reload.py:756` </location>
<code_context>
         self.help_file_path = self.base_dir / "help.py"

-        # Extension tracking
-        self.path_to_extension: dict[str, str] = {}
-        self.pending_tasks: list[asyncio.Task[None]] = []
-
</code_context>

<issue_to_address>
Renamed 'path_to_extension' to 'extension_map', but not all usages updated.

Some methods still use `self.path_to_extension` instead of `self.extension_map`. Please update all references to ensure consistency and prevent attribute errors.
</issue_to_address>

### Comment 4
<location> `tux/utils/hot_reload.py:763` </location>
<code_context>
-        self.dependency_graph = DependencyGraph(self._config)
-
-        # Debouncing configuration
-        self._debounce_timers: dict[str, asyncio.Handle] = {}
-
-        # Build initial extension map
</code_context>

<issue_to_address>
Debounce timer attribute renamed but not updated everywhere.

Some methods still use the old attribute name, which may break timer management. Please update all references to use the new name.
</issue_to_address>

### Comment 5
<location> `tux/utils/hot_reload.py:814` </location>
<code_context>
-        watch_root = Path(self.path)
+        watch_root = Path(self.watch_path)

-        for ext in self._config.hash_extensions:
+        for ext in self.config.hash_extensions:
             for file_path in watch_root.rglob(f"*{ext}"):
                 try:
</code_context>

<issue_to_address>
Inconsistent config attribute usage for hash_extensions.

The loop should use `self.config.hash_extensions` for consistency and to ensure the correct configuration is referenced.
</issue_to_address>

### Comment 6
<location> `tux/utils/hot_reload.py:908` </location>
<code_context>
+            del self.debounce_timers[file_key]

         # Validate syntax before attempting reload (if enabled)
-        if self._config.validate_syntax and file_path.suffix == ".py" and not validate_python_syntax(file_path):
+        if self.config.validate_syntax and file_path.suffix == ".py" and not validate_python_syntax(file_path):
             logger.debug(f"Skipping hot reload for {file_path.name} due to syntax errors")
             return
</code_context>

<issue_to_address>
Syntax validation config attribute is inconsistently referenced.

Use `self.config` instead of `self._config` to match the rest of the code.
</issue_to_address>

### Comment 7
<location> `tux/handlers/sentry.py:40` </location>
<code_context>
+        logger.info("Sentry handler initialized")

     @commands.Cog.listener()
     async def on_command_completion(self, ctx: commands.Context[Tux]) -> None:
         """
-        Finish the Sentry transaction for a completed prefix command.
</code_context>

<issue_to_address>
Setting Sentry span status to 'ok' may not always correspond to the correct transaction.

If multiple commands run at once or span context is mismanaged, the status may be set on the wrong span. Use a more reliable way to link spans to specific commands.
</issue_to_address>

### Comment 8
<location> `tux/utils/tracing.py:131` </location>
<code_context>
+    context_obj.set_data("duration_ms", (time.perf_counter() - start_time) * 1000)
+
+
+def create_instrumentation_wrapper[**P, R](
+    func: Callable[P, R],
+    context_factory: Callable[[], Any],
</code_context>

<issue_to_address>
Syntax error in function definition: invalid use of brackets.

Replace the square brackets in the function definition with parentheses and use the correct type variable syntax to avoid a syntax error.
</issue_to_address>

### Comment 9
<location> `tux/utils/tracing.py:212` </location>
<code_context>
-        set_name_func(name)
-
-
-def transaction(
-    op: str,
-    name: str | None = None,
</code_context>

<issue_to_address>
Decorator 'transaction' does not handle the case where Sentry is not initialized.

Add a check at the start of the decorator to return the original function if Sentry is not initialized, preventing unnecessary context manager usage and performance overhead.

Suggested implementation:

```python
def transaction(
    op: str,
    name: str | None = None,
    description: str | None = None,
) -> Callable[[Callable[P, R]], Callable[P, R]]:
    """
    Decorator to wrap a function with a Sentry transaction.

    This handles both synchronous and asynchronous functions automatically.
    It captures the function's execution time, sets the status to 'ok' on
    success or 'internal_error' on failure, and records exceptions.

    Parameters
    """
    def decorator(func: Callable[P, R]) -> Callable[P, R]:
        from tux.utils.tracing import is_sentry_initialized  # adjust import if needed
        if not is_sentry_initialized():
            return func
        # ... rest of the decorator logic goes here ...
        # (Move the existing decorator logic here)
    return decorator

```

1. You will need to move the existing decorator logic (the function that wraps `func`) inside the new `decorator` function.
2. Ensure that `is_sentry_initialized()` is implemented and correctly checks Sentry's initialization state. If it is not present, you will need to implement it or use the appropriate check from your Sentry integration.
3. Update all usages of the `transaction` decorator to use the new decorator structure.
</issue_to_address>
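
If no such helper exists yet, a minimal version (name taken from the suggestion above; its placement in `tux/utils/tracing.py` is an assumption) can simply defer to the SDK, which is the same check the old `tux/app.py` shutdown path performed:

```python
import sentry_sdk


def is_sentry_initialized() -> bool:
    # Delegates to the SDK: True once sentry_sdk.init() has bound a client.
    return sentry_sdk.is_initialized()
```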

### Comment 10
<location> `tux/utils/tracing.py:255` </location>
<code_context>
-    return decorator
-
-
-def span(op: str, description: str | None = None) -> Callable[[Callable[P, R]], Callable[P, R]]:
-    """
-    Decorator to wrap a function with a Sentry span.
</code_context>

<issue_to_address>
Decorator 'span' does not short-circuit if Sentry is not initialized.

Please add a check in the 'span' decorator to return the original function if Sentry is not initialized, mirroring the approach used in the 'transaction' decorator.

Suggested implementation:

```python
def span(op: str, description: str | None = None) -> Callable[[Callable[P, R]], Callable[P, R]]:
    """
    Decorator to wrap a function with a Sentry span.

    This should be used on functions called within an existing transaction.
    It automatically handles both sync and async functions, captures execution
    time, and records success or failure status.

    If Sentry is not initialized, this decorator is a no-op and returns the original function.

    Parameters
    ----------
    op : str
        The operation name for the span (e.g., 'db.query.fetch').
    description : Optional[str]
    """
    def decorator(func: Callable[P, R]) -> Callable[P, R]:
        from tux.utils.tracing import is_sentry_initialized  # adjust import if needed
        if not is_sentry_initialized():
            return func
        # ... rest of the decorator logic goes here ...

```

You will need to:
- Ensure that the `is_sentry_initialized` function is available and works as expected (mirroring the `transaction` decorator).
- Move the rest of the decorator logic (the original body of `decorator`) after the Sentry check.
- If the import path for `is_sentry_initialized` is different, adjust it accordingly.
</issue_to_address>

### Comment 11
<location> `tux/bot.py:112` </location>
<code_context>

+        # Bot lifecycle routines are defined as lists of (name, function) tuples.
+        # This makes the setup and shutdown sequences clear and easy to modify.
+        self.setup_steps = [
+            ("database", self._setup_database),
+            ("jishaku", self._load_jishaku),
</code_context>

<issue_to_address>
Consider inlining the setup and shutdown steps directly in their respective methods to make the startup and teardown sequences more explicit and easier to follow.

Consider inlining the `setup_steps`/`shutdown_steps` tuples so the startup and teardown remain explicit and linear. This removes one level of indirection without losing any tracing or functionality:

```python
async def setup(self) -> None:
    """Set up the bot: connect DB, load extensions/cogs, enable hot reload & tasks."""
    try:
        with start_span("bot.setup", "Bot setup") as span:
            span.set_tag("phase", "database")
            await self._setup_database()

            span.set_tag("phase", "jishaku")
            await self._load_jishaku()

            span.set_tag("phase", "cogs")
            await self._load_cogs()

            span.set_tag("phase", "hot_reload")
            await self._setup_hot_reload()

            span.set_tag("phase", "tasks")
            self.task_manager.start()
            self.task_manager.setup_task_instrumentation()
            instrument_bot_commands(self)
    except Exception as e:
        logger.critical(f"Critical error during setup: {e}")
        self.sentry_manager.capture_exception(e)
        await self.shutdown()
        raise
```

```python
async def shutdown(self) -> None:
    """Gracefully shut down the bot and clean up resources."""
    with start_transaction("bot.shutdown", "Bot shutdown") as tx:
        if self.state.is_shutting_down:
            tx.set_data("already_shutting_down", True)
            return

        self.state.is_shutting_down = True
        logger.info("Shutting down...")

        await self._handle_setup_task()
        await self.task_manager.cancel_all_tasks()
        await self._close_connections()

        logger.info("Bot shutdown complete.")
```

This keeps the same ordering and span/tags while removing the `setup_steps` and `shutdown_steps` lists and their loop overhead.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment on lines -724 to -733
        """
        Watches for file changes and automatically reloads affected cogs.

        This class extends watchdog's FileSystemEventHandler to monitor Python files
        in the bot's directory and trigger automatic reloading when changes are detected.
        """

        def __init__(
            self,
            bot: commands.Bot | BotProtocol,
            path: str,
            *,
            recursive: bool = True,
            config: HotReloadConfig | None = None,
        ):
            """
            Initialize the cog watcher.

            Parameters
            ----------
            bot : commands.Bot | BotProtocol
                The bot instance to reload extensions on.
            path : str
                The directory path to watch for changes.
            recursive : bool, optional
                Whether to watch subdirectories recursively, by default True
            config : HotReloadConfig | None, optional
                Configuration for the hot reload system, by default None
            """
            super().__init__()
            self.bot = bot

        Old __init__ body removed in this diff (lines -724 to -733):
            self._config = config or HotReloadConfig()
            validate_config(self._config)
            watch_path = Path(path)
            if not watch_path.exists():
                msg = f"Watch path does not exist: {path}"
                raise FileWatchError(msg)
            self.path = str(watch_path.resolve())

issue: Inconsistent attribute naming for config object.

Update all references to use self.config to avoid attribute errors and ensure consistency.

Comment on lines 733 to 736
self.path = str(watch_path.resolve())
self.watch_path = Path(path)
self.recursive = recursive
self.config = config or HotReloadConfig()
self.extension_map: dict[str, str] = {}
self.dependency_graph = DependencyGraph(self.config)
self.file_hash_tracker = FileHashTracker()
self.class_tracker = ClassDefinitionTracker()
self.debounce_timers: dict[str, asyncio.TimerHandle] = {}
self.observer = watchdog.observers.Observer()
self.observer.schedule(self, self.path, recursive=recursive)

issue (bug_risk): Attribute name changed from 'path' to 'watch_path', but some methods still use 'self.path'.

Update all method references from self.path to self.watch_path to prevent runtime errors.

self.observer.schedule(self, self.path, recursive=recursive)
self.base_dir = Path(__file__).parent.parent

# Store a relative path for logging

issue (bug_risk): Renamed 'path_to_extension' to 'extension_map', but not all usages updated.

Some methods still use self.path_to_extension instead of self.extension_map. Please update all references to ensure consistency and prevent attribute errors.

try:
self.display_path = str(Path(path).relative_to(self.base_dir.parent))
except ValueError:
self.display_path = path

issue (bug_risk): Debounce timer attribute renamed but not updated everywhere.

Some methods still use the old attribute name, which may break timer management. Please update all references to use the new name.

@@ -802,31 +796,33 @@
Pre-populate hash cache for all files in watched directories matching configured extensions.
This can be disabled via configuration to avoid startup overhead.
"""
if not self._config.prepopulate_hashes:
if not self.config.prepopulate_hashes:
logger.debug("Hash pre-population disabled in configuration")
return 0


issue (bug_risk): Inconsistent config attribute usage for hash_extensions.

The loop should use self.config.hash_extensions for consistency and to ensure the correct configuration is referenced.

@@ -901,11 +897,11 @@
file_key = str(file_path)

# Remove from debounce tracking
if file_key in self._debounce_timers:

issue: Syntax validation config attribute is inconsistently referenced.

Use self.config instead of self._config to match the rest of the code.

}

def __init__(self, bot: Tux) -> None:
"""Initialize the Sentry handler cog.

issue (bug_risk): Setting Sentry span status to 'ok' may not always correspond to the correct transaction.

If multiple commands run at once or span context is mismanaged, the status may be set on the wrong span. Use a more reliable way to link spans to specific commands.

# --- Decorators ---


def transaction(

suggestion: Decorator 'transaction' does not handle the case where Sentry is not initialized.

Add a check at the start of the decorator to return the original function if Sentry is not initialized, preventing unnecessary context manager usage and performance overhead.

Suggested implementation:

def transaction(
    op: str,
    name: str | None = None,
    description: str | None = None,
) -> Callable[[Callable[P, R]], Callable[P, R]]:
    """
    Decorator to wrap a function with a Sentry transaction.

    This handles both synchronous and asynchronous functions automatically.
    It captures the function's execution time, sets the status to 'ok' on
    success or 'internal_error' on failure, and records exceptions.

    Parameters
    """
    def decorator(func: Callable[P, R]) -> Callable[P, R]:
        from tux.utils.tracing import is_sentry_initialized  # adjust import if needed
        if not is_sentry_initialized():
            return func
        # ... rest of the decorator logic goes here ...
        # (Move the existing decorator logic here)
    return decorator
  1. You will need to move the existing decorator logic (the function that wraps func) inside the new decorator function.
  2. Ensure that is_sentry_initialized() is implemented and correctly checks Sentry's initialization state. If it is not present, you will need to implement it or use the appropriate check from your Sentry integration.
  3. Update all usages of the transaction decorator to use the new decorator structure.

return decorator


def span(op: str, description: str | None = None) -> Callable[[Callable[P, R]], Callable[P, R]]:

suggestion (bug_risk): Decorator 'span' does not short-circuit if Sentry is not initialized.

Please add a check in the 'span' decorator to return the original function if Sentry is not initialized, mirroring the approach used in the 'transaction' decorator.

Suggested implementation:

def span(op: str, description: str | None = None) -> Callable[[Callable[P, R]], Callable[P, R]]:
    """
    Decorator to wrap a function with a Sentry span.

    This should be used on functions called within an existing transaction.
    It automatically handles both sync and async functions, captures execution
    time, and records success or failure status.

    If Sentry is not initialized, this decorator is a no-op and returns the original function.

    Parameters
    ----------
    op : str
        The operation name for the span (e.g., 'db.query.fetch').
    description : Optional[str]
    """
    def decorator(func: Callable[P, R]) -> Callable[P, R]:
        from tux.utils.tracing import is_sentry_initialized  # adjust import if needed
        if not is_sentry_initialized():
            return func
        # ... rest of the decorator logic goes here ...

You will need to:

  • Ensure that the is_sentry_initialized function is available and works as expected (mirroring the transaction decorator).
  • Move the rest of the decorator logic (the original body of decorator) after the Sentry check.
  • If the import path for is_sentry_initialized is different, adjust it accordingly.

tux/bot.py Outdated
self.console = Console(stderr=True, force_terminal=True)

# Bot lifecycle routines are defined as lists of (name, function) tuples.
# This makes the setup and shutdown sequences clear and easy to modify.
self.setup_steps = [

issue (complexity): Consider inlining the setup and shutdown steps directly in their respective methods to make the startup and teardown sequences more explicit and easier to follow.

Consider inlining the setup_steps/shutdown_steps tuples so the startup and teardown remain explicit and linear. This removes one level of indirection without losing any tracing or functionality:

async def setup(self) -> None:
    """Set up the bot: connect DB, load extensions/cogs, enable hot reload & tasks."""
    try:
        with start_span("bot.setup", "Bot setup") as span:
            span.set_tag("phase", "database")
            await self._setup_database()

            span.set_tag("phase", "jishaku")
            await self._load_jishaku()

            span.set_tag("phase", "cogs")
            await self._load_cogs()

            span.set_tag("phase", "hot_reload")
            await self._setup_hot_reload()

            span.set_tag("phase", "tasks")
            self.task_manager.start()
            self.task_manager.setup_task_instrumentation()
            instrument_bot_commands(self)
    except Exception as e:
        logger.critical(f"Critical error during setup: {e}")
        self.sentry_manager.capture_exception(e)
        await self.shutdown()
        raise
async def shutdown(self) -> None:
    """Gracefully shut down the bot and clean up resources."""
    with start_transaction("bot.shutdown", "Bot shutdown") as tx:
        if self.state.is_shutting_down:
            tx.set_data("already_shutting_down", True)
            return

        self.state.is_shutting_down = True
        logger.info("Shutting down...")

        await self._handle_setup_task()
        await self.task_manager.cancel_all_tasks()
        await self._close_connections()

        logger.info("Bot shutdown complete.")

This keeps the same ordering and span/tags while removing the setup_steps and shutdown_steps lists and their loop overhead.

@allthingslinux allthingslinux deleted a comment from sourcery-ai bot Jul 21, 2025
…tion

- Add check-all command that runs both pre-commit and type checking
- Ensures local development catches the same issues as CI
- Provides single command for complete code validation
- Helps prevent type errors from reaching GitHub Actions
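
A rough sketch of how such a command might be wired (the project's actual CLI framework, command name, and tool choices are not shown in this diff, so everything below is an assumption):

```python
import subprocess
import sys


def check_all() -> int:
    """Run the same checks CI runs: pre-commit hooks, then the type checker."""
    # Assumed invocations; substitute the project's real hooks and checker.
    steps = [
        ["pre-commit", "run", "--all-files"],
        ["pyright"],
    ]
    exit_code = 0
    for cmd in steps:
        result = subprocess.run(cmd, check=False)
        exit_code = exit_code or result.returncode
    return exit_code


if __name__ == "__main__":
    sys.exit(check_all())
```
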
@kzndotsh

@sourcery-ai review

@sourcery-ai sourcery-ai bot left a comment

Hey @kzndotsh - I've reviewed your changes and found some issues that need to be addressed.

Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments

### Comment 1
<location> `tux/utils/hot_reload.py:734` </location>
<code_context>
+        config : HotReloadConfig | None, optional
+            Configuration for the hot reload system, by default None
+        """
+        super().__init__()
         self.bot = bot
-        self.path = str(watch_path.resolve())
</code_context>

<issue_to_address>
Missing call to validate_config in CogWatcher constructor.

Restoring the validate_config call (now `validate_config(self.config)`, since the attribute was renamed) will help catch invalid configurations early and prevent potential bugs.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
        self.config = config or HotReloadConfig()
        self.extension_map: dict[str, str] = {}
        self.dependency_graph = DependencyGraph(self.config)
=======
        self.config = config or HotReloadConfig()
        validate_config(self.config)
        self.extension_map: dict[str, str] = {}
        self.dependency_graph = DependencyGraph(self.config)
>>>>>>> REPLACE

</suggested_fix>

### Comment 2
<location> `tux/handlers/sentry.py:85` </location>
<code_context>
-        Optional[Any]
-            The created transaction or None if Sentry is not initialized
-        """
-        if not self._is_sentry_available():
-            return None
-
</code_context>

<issue_to_address>
Redundant Sentry availability check in _set_command_context.

The inner Sentry availability check can be removed to simplify the method.
</issue_to_address>

### Comment 3
<location> `tux/app.py:78` </location>
<code_context>
+        # Initialize Sentry
+        SentryManager.setup()

+        # Set up signal handlers
         self.setup_signals()

+        # Validate config
</code_context>

<issue_to_address>
Signal handler setup may not be cross-platform safe.

Consider using `loop.add_signal_handler` for better cross-platform compatibility, or document any platform-specific limitations if this isn't feasible.

Suggested implementation:

```python
        # Set up signal handlers using the event loop for cross-platform compatibility
        import asyncio
        loop = asyncio.get_event_loop()
        self.setup_signals(loop)

```

```python
    def setup_signals(self, loop):
        """
        Set up signal handlers for graceful shutdown using the event loop for better cross-platform compatibility.
        Note: loop.add_signal_handler may not be available on all platforms (e.g., Windows for some signals).
        """
        import signal
        import sys

        try:
            loop.add_signal_handler(signal.SIGINT, lambda: self.handle_signal(signal.SIGINT, None))
            loop.add_signal_handler(signal.SIGTERM, lambda: self.handle_signal(signal.SIGTERM, None))
        except NotImplementedError:
            # Fallback for platforms that do not support add_signal_handler (e.g., Windows)
            signal.signal(signal.SIGINT, self.handle_signal)
            signal.signal(signal.SIGTERM, self.handle_signal)
            if sys.platform.startswith("win"):
                # Document limitation
                print("Warning: Signal handling is limited on Windows. Some signals may not be handled as expected.")

```
</issue_to_address>

### Comment 4
<location> `tux/app.py:135` </location>
<code_context>
-        if sentry_sdk.is_initialized():
-            sentry_sdk.flush()
-            await asyncio.sleep(0.1)
+        SentryManager.flush()
+        await asyncio.sleep(0.1)  # Brief pause to allow buffers to flush

</code_context>

<issue_to_address>
Consider making Sentry flush asynchronous.

Using a synchronous flush may block the event loop if there are pending events. If possible, use an async flush to prevent blocking during shutdown.

Suggested implementation:

```python
        await SentryManager.flush_async()
        await asyncio.sleep(0.1)  # Brief pause to allow buffers to flush

```

If `SentryManager.flush_async()` does not exist, you will need to implement it in the `SentryManager` class. This could look like:

```python
import asyncio

class SentryManager:
    @staticmethod
    async def flush_async(timeout=None):
        loop = asyncio.get_running_loop()
        await loop.run_in_executor(None, SentryManager.flush, timeout)
```

Make sure to update all usages accordingly.
</issue_to_address>

### Comment 5
<location> `tux/utils/task_manager.py:144` </location>
<code_context>
+        # Note: Critical tasks are now registered after cogs are loaded
+        # to ensure cogs exist before registering their tasks
+
+    def setup_task_instrumentation(self) -> None:
+        """
+        Initializes instrumentation for all registered critical tasks.
</code_context>

<issue_to_address>
Directly replacing task loop coroutines may cause subtle issues.

This approach relies on internal implementation details of discord.py, which may change and cause breakage. Consider wrapping the coroutine at definition or using a supported instrumentation method if available.

Suggested implementation:

```python
    def setup_task_instrumentation(self) -> None:
        """
        Initializes instrumentation for all registered critical tasks.

        Note:
            To ensure compatibility with discord.py and avoid relying on internal
            implementation details, critical task coroutines should be wrapped with
            the @instrumented_task decorator at definition time. This ensures that
            instrumentation is applied in a supported and robust manner.

            Example usage:
                @instrumented_task
                async def my_critical_task(...):
                    ...

        This method can still be used for any additional setup or validation.
        """
        logger.info("Validating Sentry instrumentation for critical tasks...")

        for task_name, config in self.critical_tasks.items():
            if not (cog := self.bot.cogs.get(config.cog_name)):
                logger.warning(f"Cog {config.cog_name} not found for task {task_name}. Skipping instrumentation.")
                continue

```

```python
        # Note: Critical tasks are now registered after cogs are loaded
        # to ensure cogs exist before registering their tasks

def instrumented_task(coro):
    """
    Decorator to instrument a task coroutine for monitoring/metrics.
    Apply this decorator to critical task coroutines at definition time.
    """
    import functools

    @functools.wraps(coro)
    async def wrapper(*args, **kwargs):
        # Insert instrumentation logic here (e.g., Sentry, metrics, logging)
        try:
            # Start timing, add tracing, etc.
            return await coro(*args, **kwargs)
        except Exception as exc:
            # Report exception to monitoring system
            raise
        finally:
            # End timing, record metrics, etc.
            pass

    return wrapper

```
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment on lines 738 to 740
self.config = config or HotReloadConfig()
self.extension_map: dict[str, str] = {}
self.dependency_graph = DependencyGraph(self.config)

suggestion (bug_risk): Missing call to validate_config in CogWatcher constructor.

Restoring the validate_config call (now validate_config(self.config), since the attribute was renamed) will help catch invalid configurations early and prevent potential bugs.

Suggested change
-        self.config = config or HotReloadConfig()
-        self.extension_map: dict[str, str] = {}
-        self.dependency_graph = DependencyGraph(self.config)
+        self.config = config or HotReloadConfig()
+        validate_config(self.config)
+        self.extension_map: dict[str, str] = {}
+        self.dependency_graph = DependencyGraph(self.config)

"discord.channel.id": ctx.channel.id,
"discord.user.id": ctx.author.id,
"discord.message.id": ctx.message.id,
"discord.command.type": "prefix",

nitpick: Redundant Sentry availability check in _set_command_context.

The inner Sentry availability check can be removed to simplify the method.

tux/app.py Outdated
Comment on lines 78 to 79
# Set up signal handlers
self.setup_signals()

suggestion: Signal handler setup may not be cross-platform safe.

Consider using loop.add_signal_handler for better cross-platform compatibility, or document any platform-specific limitations if this isn't feasible.

Suggested implementation:

        # Set up signal handlers using the event loop for cross-platform compatibility
        import asyncio
        loop = asyncio.get_event_loop()
        self.setup_signals(loop)
    def setup_signals(self, loop):
        """
        Set up signal handlers for graceful shutdown using the event loop for better cross-platform compatibility.
        Note: loop.add_signal_handler may not be available on all platforms (e.g., Windows for some signals).
        """
        import signal
        import sys

        try:
            loop.add_signal_handler(signal.SIGINT, lambda: self.handle_signal(signal.SIGINT, None))
            loop.add_signal_handler(signal.SIGTERM, lambda: self.handle_signal(signal.SIGTERM, None))
        except NotImplementedError:
            # Fallback for platforms that do not support add_signal_handler (e.g., Windows)
            signal.signal(signal.SIGINT, self.handle_signal)
            signal.signal(signal.SIGTERM, self.handle_signal)
            if sys.platform.startswith("win"):
                # Document limitation
                print("Warning: Signal handling is limited on Windows. Some signals may not be handled as expected.")

tux/app.py Outdated
if sentry_sdk.is_initialized():
sentry_sdk.flush()
await asyncio.sleep(0.1)
SentryManager.flush()

suggestion (performance): Consider making Sentry flush asynchronous.

Using a synchronous flush may block the event loop if there are pending events. If possible, use an async flush to prevent blocking during shutdown.

Suggested implementation:

        await SentryManager.flush_async()
        await asyncio.sleep(0.1)  # Brief pause to allow buffers to flush

If SentryManager.flush_async() does not exist, you will need to implement it in the SentryManager class. This could look like:

import asyncio

class SentryManager:
    @staticmethod
    async def flush_async(timeout=None):
        loop = asyncio.get_running_loop()
        await loop.run_in_executor(None, SentryManager.flush, timeout)

Make sure to update all usages accordingly.

# Note: Critical tasks are now registered after cogs are loaded
# to ensure cogs exist before registering their tasks

def setup_task_instrumentation(self) -> None:

suggestion (bug_risk): Directly replacing task loop coroutines may cause subtle issues.

This approach relies on internal implementation details of discord.py, which may change and cause breakage. Consider wrapping the coroutine at definition or using a supported instrumentation method if available.

Suggested implementation:

    def setup_task_instrumentation(self) -> None:
        """
        Initializes instrumentation for all registered critical tasks.

        Note:
            To ensure compatibility with discord.py and avoid relying on internal
            implementation details, critical task coroutines should be wrapped with
            the @instrumented_task decorator at definition time. This ensures that
            instrumentation is applied in a supported and robust manner.

            Example usage:
                @instrumented_task
                async def my_critical_task(...):
                    ...

        This method can still be used for any additional setup or validation.
        """
        logger.info("Validating Sentry instrumentation for critical tasks...")

        for task_name, config in self.critical_tasks.items():
            if not (cog := self.bot.cogs.get(config.cog_name)):
                logger.warning(f"Cog {config.cog_name} not found for task {task_name}. Skipping instrumentation.")
                continue
        # Note: Critical tasks are now registered after cogs are loaded
        # to ensure cogs exist before registering their tasks

def instrumented_task(coro):
    """
    Decorator to instrument a task coroutine for monitoring/metrics.
    Apply this decorator to critical task coroutines at definition time.
    """
    import functools

    @functools.wraps(coro)
    async def wrapper(*args, **kwargs):
        # Insert instrumentation logic here (e.g., Sentry, metrics, logging)
        try:
            # Start timing, add tracing, etc.
            return await coro(*args, **kwargs)
        except Exception as exc:
            # Report exception to monitoring system
            raise
        finally:
            # End timing, record metrics, etc.
            pass

    return wrapper

kzndotsh added 5 commits July 21, 2025 01:02
…Handler

- Consolidate command-specific tag setting logic for better readability
- Maintain functionality for both commands.Context and discord.Interaction types
- Improve code clarity by reducing redundancy in attribute setting
…ility

- Add configuration validation in CogWatcher to ensure proper setup
- Enhance robustness by preventing potential misconfigurations
- Maintain existing functionality while improving error handling
- Introduce flush_async method to allow non-blocking flush of pending Sentry events
- Utilize asyncio to prevent blocking the event loop during shutdown
- Enhance performance and responsiveness in applications using SentryManager
- Introduce @instrumented_task decorator to instrument critical task coroutines for monitoring and metrics
- Update TaskManager documentation to reflect new usage pattern for wrapping critical tasks
- Enhance validation of Sentry instrumentation setup for better reliability
…oop support

- Update setup_signals method to accept an event loop for better cross-platform compatibility
- Implement signal handlers for SIGINT and SIGTERM using loop.add_signal_handler
- Add fallback for platforms that do not support add_signal_handler, with a warning for Windows limitations
- Change SentryManager.flush to SentryManager.flush_async for non-blocking shutdown

@amadaluzia amadaluzia left a comment

Looks okay, I do think some things here could be improved though.

Comment on lines +224 to +232
def get_critical_tasks(self) -> list[CriticalTaskConfig]:
"""Get critical tasks for this cog.

Returns
-------
list[CriticalTaskConfig]
List of critical task configurations
"""
return [CriticalTaskConfig("afk_expiration_handler", "Afk", "handle_afk_expiration", TaskPriority.NORMAL)]

@amadaluzia amadaluzia Jul 21, 2025

Does this need to exist? You would probably be better off returning the object itself rather than making it a singleton.

Contributor Author

Yeah, the task stuff I may just get rid of entirely or redo.

Labels
None yet
Projects
None yet
Development

Successfully merging this pull request may close these issues.

2 participants