
Development

Firstp1ck edited this page Nov 27, 2025 · 3 revisions

This page covers development tools, debugging, and code analysis for Pacsea contributors.

Development Scripts

Pacsea includes several development scripts in dev/scripts/ to help with code quality, analysis, and documentation:

Complexity Report

Generate a comprehensive complexity analysis report:

./dev/scripts/complexity_report.sh

This generates dev/scripts/complexity_report.txt with detailed complexity metrics.

Clippy Error Check

Check for Clippy errors and warnings:

./dev/scripts/clippy_errors.sh

Outputs to dev/scripts/clippy_errors.txt.

Clippy Cognitive Complexity

Check cognitive complexity specifically:

./dev/scripts/clippy_cognitive_complexity.sh

Outputs to dev/scripts/clippy_cognitive_complexity_report.txt.

Generate Documentation

Generate Rust documentation:

./dev/scripts/generate_docs.sh
cargo doc --open  # View the generated documentation

Generates HTML documentation in target/doc/ with private items included.

Module Structure Visualization

Generate module dependency graphs:

./dev/scripts/module_structure.sh

Requires cargo-modules and graphviz. Creates visual dependency graphs for each module in dev/scripts/Modules/.

Control Flow Diagram

Generate application control flow diagram:

./dev/scripts/generate_controlflow_diagram.sh
# With PNG export
./dev/scripts/generate_controlflow_diagram.sh --export-png --png-theme dark

Generates a Mermaid flowchart showing the application's control flow. See dev/ControlFlow_Diagram.md for the generated diagram.

Complexity Tests

Pacsea includes complexity analysis tests to monitor code quality and maintainability. To view the detailed output from these tests, use the --nocapture flag:

Cyclomatic Complexity

cargo test --test cyclomatic_complexity test_cyclomatic_complexity -- --nocapture

Data Flow Complexity

cargo test --test data_flow_complexity test_data_flow_complexity -- --nocapture

Run Both Complexity Tests

cargo test --test cyclomatic_complexity --test data_flow_complexity -- --nocapture

Or run all tests matching "complexity":

cargo test complexity -- --nocapture

Output Results to a File

To save the test output to a file, redirect stdout. To filter out test runner output (keep only the complexity reports), use grep:

# Cyclomatic complexity (filtered)
cargo test --test cyclomatic_complexity test_cyclomatic_complexity -- --nocapture 2>&1 | grep -vE "(^running|^test result:|^test tests::|Finished.*test.*profile|Running unittests|Running tests/)" | sed '/^$/N;/^\n$/d' > cyclomatic_complexity_report.txt

# Data flow complexity (filtered)
cargo test --test data_flow_complexity test_data_flow_complexity -- --nocapture 2>&1 | grep -vE "(^running|^test result:|^test tests::|Finished.*test.*profile|Running unittests|Running tests/)" | sed '/^$/N;/^\n$/d' > data_flow_complexity_report.txt

# Both tests (combined, filtered)
cargo test --test cyclomatic_complexity --test data_flow_complexity -- --nocapture 2>&1 | grep -vE "(^running|^test result:|^test tests::|Finished.*test.*profile|Running unittests|Running tests/)" | sed '/^$/N;/^\n$/d' > complexity_report.txt

# Append to existing file (use >> instead of >)
cargo test complexity -- --nocapture 2>&1 | grep -vE "(^running|^test result:|^test tests::|Finished.*test.*profile|Running unittests|Running tests/)" | sed '/^$/N;/^\n$/d' >> complexity_report.txt

Note: The 2>&1 redirects stderr to stdout so all output is captured, grep -vE filters out test runner output, and sed '/^$/N;/^\n$/d' collapses multiple consecutive empty lines into single empty lines.

The tests generate detailed reports including:

  • Summary statistics (total files, functions, complexity)
  • Top 10 most complex functions
  • Files ranked by total complexity
  • Complexity distribution across the codebase
  • Functions exceeding complexity thresholds

Complexity Thresholds:

  • Cyclomatic complexity: Should be < 25 for new functions (warning at 10, very high at 20)
  • Data flow complexity: Should be < 25 for new functions (warning at 10, very high at 50)
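As a rough illustration of how a cyclomatic score accumulates, each branch point (an `if`, a `match` arm, a loop, a `&&`/`||`) adds one to a function's count. The function below is a hypothetical example for illustration only, not Pacsea code, and the counting shown in the comments is the conventional approximation rather than the exact formula Pacsea's test uses:

```rust
/// Hypothetical example: cyclomatic complexity grows with each branch.
/// Base score 1, +1 for the `if`, +1 for the extra `match` arm,
/// giving roughly 3 -- well under the threshold of 25.
fn classify_package(name: &str, installed: bool) -> &'static str {
    if name.is_empty() {
        // +1: one conditional branch
        return "invalid";
    }
    match installed {
        // +1: one arm beyond the first
        true => "installed",
        false => "available",
    }
}

fn main() {
    assert_eq!(classify_package("", false), "invalid");
    assert_eq!(classify_package("pacsea", true), "installed");
    println!("{}", classify_package("ripgrep", false));
}
```

Functions that exceed the thresholds above are the ones the complexity tests flag for refactoring.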

Debugging

Logging Levels

Pacsea uses the tracing crate for structured logging. Logs are written to ~/.config/pacsea/logs/pacsea.log.

Enable debug logging:

# Method 1: Verbose flag
cargo run -- --dry-run --verbose

# Method 2: Log level flag
cargo run -- --dry-run --log-level debug

# Method 3: Environment variable
RUST_LOG=pacsea=debug cargo run -- --dry-run

Available log levels:

  • trace — Most detailed (very verbose)
  • debug — Debug information
  • info — General information (default)
  • warn — Warnings
  • error — Errors only

Preflight tracing: For detailed preflight operation timing:

PACSEA_PREFLIGHT_TRACE=1 cargo run -- --dry-run --log-level trace

Viewing Logs

Real-time log monitoring:

tail -f ~/.config/pacsea/logs/pacsea.log

Filter for errors/warnings:

grep -iE "(warn|error)" ~/.config/pacsea/logs/pacsea.log

View recent logs:

tail -100 ~/.config/pacsea/logs/pacsea.log

Debugging Tips

  1. Use dry-run mode: Always use --dry-run during development to avoid unintended changes
  2. Check logs first: Most issues are logged with helpful context
  3. Enable debug logging: Use --verbose or RUST_LOG=pacsea=debug for detailed information
  4. Test in isolation: Use a VM or container for testing install/update operations
  5. Check terminal compatibility: Test in different terminals (alacritty, kitty, xterm) if UI issues occur

Code Structure

Main Components

  • src/main.rs: Application entry point, argument parsing, logging setup
  • src/app/: Application runtime, event loop, state management
  • src/events/: Event handling (keyboard, mouse, modals, install operations)
  • src/ui/: UI rendering components (panes, modals, widgets)
  • src/logic/: Business logic (preflight, dependency resolution, package operations)
  • src/state/: Application state types and management
  • src/theme/: Theme and configuration management
  • src/i18n/: Internationalization and localization
  • src/index/: Package index management
  • src/install/: Package installation and removal logic
  • src/sources/: Package source management (AUR, official repos)

Architecture Overview

Pacsea uses an async event-driven architecture:

  1. Initialization: Loads config, caches, locale system
  2. Event Loop: tokio::select! handles multiple async channels concurrently
  3. Background Workers: Async workers handle search, analysis, and updates
  4. State Management: Centralized state with reactive updates
  5. UI Rendering: Ratatui-based TUI with immediate updates

See dev/ControlFlow_Diagram.md for a detailed control flow diagram.

Testing

Running Tests

Tests must be run single-threaded to avoid race conditions:

cargo test -- --test-threads=1
# or
RUST_TEST_THREADS=1 cargo test

Test Organization

  • Unit tests: In src/ files with #[cfg(test)] modules
  • Integration tests: In tests/ directory
  • Complexity tests: tests/cyclomatic_complexity.rs and tests/data_flow_complexity.rs
  • Preflight integration tests: tests/preflight_integration/ directory

Writing Tests

Guidelines:

  • Tests should be deterministic and not rely on external state
  • Use --dry-run in tests that would modify the system
  • For bug fixes: create failing tests first, then fix the issue
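A minimal unit test following these guidelines might look like the following. The helper function and module are illustrative, not from Pacsea's codebase; the point is that the test depends on nothing but its inputs:

```rust
/// Illustrative helper: normalize a package name for comparison.
fn normalize_pkg_name(name: &str) -> String {
    name.trim().to_lowercase()
}

fn main() {
    println!("{}", normalize_pkg_name("  PacSea "));
}

#[cfg(test)]
mod tests {
    use super::*;

    // Deterministic: no filesystem, network, or global state involved,
    // so it is safe even under --test-threads=1 with other tests.
    #[test]
    fn normalizes_case_and_whitespace() {
        assert_eq!(normalize_pkg_name("  PacSea "), "pacsea");
        assert_eq!(normalize_pkg_name("ripgrep"), "ripgrep");
    }
}
```

Tests that touch real package operations should pass `--dry-run` so nothing on the system changes.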

Building

Development Build

cargo build
# or
cargo run -- --dry-run

Release Build

cargo build --release
./target/release/pacsea --dry-run

Check Only (Fast)

cargo check

Documentation

Generate Rust Docs

cargo doc --no-deps --document-private-items
cargo doc --open  # View in browser

Or use the script:

./dev/scripts/generate_docs.sh

Documentation Standards

All public functions, methods, structs, and enums should have rustdoc comments following this format:

/// What: Brief description of what the function does.
///
/// Inputs:
/// - `param1`: Description of parameter 1
/// - `param2`: Description of parameter 2
///
/// Output:
/// - Description of return value or side effects
///
/// Details:
/// - Additional context, edge cases, or important notes
pub fn example_function(param1: Type1, param2: Type2) -> Result<Type3> {
    // implementation
}

Common Development Tasks

Adding a New Feature

  1. Create a feature branch: git checkout -b feat/my-feature
  2. Implement the feature with tests
  3. Add rustdoc comments to all new public items
  4. Run quality checks: cargo fmt, cargo clippy, cargo test
  5. Check complexity: cargo test complexity -- --nocapture
  6. Update documentation (README/wiki if needed)
  7. Commit with conventional commit message
  8. Open PR with detailed description

Fixing a Bug

  1. Create a failing test that reproduces the issue
  2. Fix the bug
  3. Verify the test passes
  4. Add additional tests for edge cases
  5. Run all quality checks
  6. Update documentation if behavior changed
  7. Commit and open PR

Refactoring

  1. Ensure existing tests pass before refactoring
  2. Refactor incrementally with tests passing at each step
  3. Check complexity doesn't increase
  4. Update documentation if structure changes
  5. Run all quality checks

Performance Profiling

Build with Debug Symbols

cargo build --profile dev

Using perf (Linux)

perf record --call-graph dwarf ./target/debug/pacsea --dry-run
perf report

Memory Profiling

Consider using tools like valgrind or heaptrack for memory analysis.

Troubleshooting Development Issues

Build Issues

  • Out of memory: Try cargo build -j1 to limit parallelism
  • Linker errors: Ensure all system dependencies are installed
  • Version conflicts: Run cargo update to update dependencies

Test Issues

  • Race conditions: Ensure tests run with --test-threads=1
  • Flaky tests: Check for timing dependencies or external state
  • Test failures: Run with --nocapture to see output: cargo test -- --nocapture --test-threads=1

Clippy Issues

  • Too many warnings: Fix warnings incrementally
  • Complexity warnings: Refactor to reduce complexity
  • Pedantic warnings: These are denied by default and must be fixed

For more troubleshooting help, see the Troubleshooting guide.
