diff --git a/.cliffignore b/.cliffignore index 501b6045..90644309 100644 --- a/.cliffignore +++ b/.cliffignore @@ -10,4 +10,4 @@ ad958555df2bc7f685d786f7ee37d749afd3e4db 289e46a7830f09d81ceeec7f51a0566b8ade93a4 9bcb7e58774ebdc2aaa21673767f4f9dd2760fa7 f88bdac9597f5169ee8d10458e7618847c942da1 -8c53d71c9f307e8e66fbb084cbb3be0b6fac8faf +8c53d71c9f307e8e66fbb084cbb3be0b6fac8faf \ No newline at end of file diff --git a/.github/CLAUDE.md b/.github/CLAUDE.md deleted file mode 100644 index 804c8cba..00000000 --- a/.github/CLAUDE.md +++ /dev/null @@ -1,1097 +0,0 @@ -# CLAUDE.md - CI/CD & GitHub Actions Complete Guide - -This file provides comprehensive guidance for Claude Code and human engineers working with the CI/CD infrastructure and GitHub Actions workflows in this repository. - -## Overview - -The Aignostics Python SDK uses a **sophisticated multi-stage CI/CD pipeline** built on GitHub Actions with: - -* **19 workflow files** (8 entry points + 11 reusable workflows) -* **Reusable workflow architecture** for modularity and maintainability -* **Environment-based testing** (staging/production with scheduled validation) -* **Multi-category test execution** (unit, integration, e2e, long_running, very_long_running, scheduled) -* **Automated PR reviews** with Claude Code -* **Comprehensive quality gates** (lint, audit, test, CodeQL) -* **Native executable builds** for 6 platforms -* **Automated releases** with package publishing -* **External monitoring** via BetterStack heartbeats - -## Workflow Architecture - -```text -┌─────────────────────────────────────────────────────────────────────┐ -│ ci-cd.yml (Main Orchestrator) │ -│ Triggered on: push to main, PR, release, tag v*.*.* │ -├─────────────────────────────────────────────────────────────────────┤ -│ │ -│ ┌────────┐ ┌───────┐ ┌────────────────┐ ┌────────┐ │ -│ │ Lint │ │ Audit │ │ Test │ │ CodeQL │ │ -│ │ (5 min)│ │(3 min)│ │ (Multi-stage) │ │ (10m) │ │ -│ └───┬────┘ └───┬───┘ └───┬────────────┘ └───┬────┘ │ -│ │ │ │ │ │ -│ │ │ ┌─────┴──────┐ │ │ -│ │ │ │ unit (3m) │ │ │ -│ │ │ │ integ (5m) │ │ │ -│ │ │ │ e2e (7m) │ │ │ -│ │ │ │ long (opt) │ │ │ -│ │ │ │ vlong(opt) │ │ │ -│ │ │ └────────────┘ │ │ -│ │ │ │ │ │ -│ └───────────┴──────────┴────────────────────┘ │ -│ ↓ │ -│ ┌──────────────────────┐ │ -│ │ Ketryx Report Check │ │ -│ │ (Medical Compliance) │ │ -│ └──────────┬───────────┘ │ -│ ↓ │ -│ ┌───────────────┴─────────────────┐ │ -│ │ │ │ -│ ┌────────────┐ ┌────────────┐ │ -│ │ Package │ │ Docker │ │ -│ │ Publish │ │ Publish │ │ -│ │ (on tag) │ │ (on tag) │ │ -│ └────────────┘ └────────────┘ │ -└─────────────────────────────────────────────────────────────────────┘ - -┌───────────────────────────────────────────────────────────────┐ -│ Parallel Entry Points │ -├───────────────────────────────────────────────────────────────┤ -│ build-native-only.yml → Native executables (6 platforms) │ -│ claude-code-*.yml → PR reviews + interactive sessions │ -│ test-scheduled-*.yml → Staging (6h) + Production (24h) │ -│ audit-scheduled.yml → Security audit (hourly) │ -│ codeql-scheduled.yml → CodeQL scan (weekly) │ -└───────────────────────────────────────────────────────────────┘ -``` - -## All Workflows Reference - -### Entry Point Workflows (Triggered Directly) - -| Workflow | Triggers | Purpose | Calls | -|----------|----------|---------|-------| -| **ci-cd.yml** | push(main), PR, release, tag | Main CI/CD pipeline | _lint, _audit, _test, _codeql, _ketryx, _package-publish, _docker-publish | -| **build-native-only.yml** | push, PR, release (if msg 
contains `build:native:only`) | Native executable builds | _build-native-only | -| **claude-code-interactive.yml** | workflow_dispatch (manual) | Manual Claude sessions | _claude-code (interactive) | -| **claude-code-automation-pr-review.yml** | PR opened/sync (excludes bots) | Automated PR reviews | _claude-code (automation) | -| **test-scheduled-staging.yml** | schedule (every 6h) | Continuous staging validation | _scheduled-test (staging) | -| **test-scheduled-production.yml** | schedule (every 24h) | Daily production validation | _scheduled-test (production) | -| **audit-scheduled.yml** | schedule (hourly) | Security & license audit | _scheduled-audit | -| **codeql-scheduled.yml** | schedule (Tue 3:22 AM) | Weekly CodeQL scan | _codeql | - -### Reusable Workflows (Called by Others) - -| Workflow | Purpose | Duration | Key Outputs | -|----------|---------|----------|-------------| -| **_lint.yml** | Code quality (ruff, pyright, mypy) | ~5 min | Formatted code, type safety | -| **_audit.yml** | Security + license compliance | ~3 min | SBOM (CycloneDX, SPDX), vulnerabilities, licenses | -| **_test.yml** | Multi-stage test execution | ~15 min | Coverage reports, JUnit XML | -| **_codeql.yml** | Security vulnerability scanning | ~10 min | CodeQL SARIF results | -| **_ketryx_report_and_check.yml** | Medical device compliance | ~2 min | Ketryx project report | -| **_package-publish.yml** | PyPI package publishing | ~3 min | Wheel/sdist on PyPI, GitHub release | -| **_docker-publish.yml** | Docker image publishing | ~5 min | Multi-arch Docker images | -| **_build-native-only.yml** | Native executable builds | ~10 min/platform | aignostics.7z per platform | -| **_claude-code.yml** | Claude Code execution | varies | Code changes, analysis | -| **_scheduled-audit.yml** | Scheduled audit runner | ~5 min | Audit reports + BetterStack heartbeat | -| **_scheduled-test.yml** | Scheduled test runner | ~10 min | Test reports + BetterStack heartbeat | - -## Test Execution Strategy - -### Test Categories - -The SDK has **7 test categories** with different execution strategies. - -**CRITICAL REQUIREMENT**: Every test **MUST** be marked with at least one of: `unit`, `integration`, or `e2e`. Tests without these markers will **NOT run in CI** because the pipeline explicitly filters by these markers. - -```python -# ✅ CORRECT - Has category marker -@pytest.mark.unit -def test_something(): - pass - -# ❌ INCORRECT - No category marker, will NOT run in CI -def test_something_else(): - pass - -# ✅ CORRECT - Multiple markers including category -@pytest.mark.e2e -@pytest.mark.long_running -def test_complex_workflow(): - pass -``` - -#### 1. Unit Tests - -**Marker**: `unit` - -**Characteristics**: - -* Fast, isolated tests with no external dependencies -* No API calls, no file I/O (except temp files) -* ~3 minutes total execution time - -**Parallelization**: `XDIST_WORKER_FACTOR=0.0` (sequential execution) - -* Fast enough that parallelization overhead reduces performance -* Single worker for predictable execution - -**CI Behavior**: Always run in all CI contexts - -**Run locally**: - -```bash -make test_unit -# Or directly: -uv run pytest -m "unit and not long_running and not very_long_running" -v -``` - -#### 2. 
Integration Tests - -**Marker**: `integration` - -**Characteristics**: - -* Tests with mocked external services (API responses, S3 calls) -* Some I/O but mostly CPU-bound -* ~5 minutes total execution time - -**Parallelization**: `XDIST_WORKER_FACTOR=0.2` (20% of logical CPUs) - -* Limited parallelism due to CPU-bound nature -* Example: 8 CPU machine → max(1, int(8 * 0.2)) = 1 worker - -**CI Behavior**: Always run in all CI contexts - -**Run locally**: - -```bash -make test_integration -# Or directly: -uv run pytest -m "integration and not long_running and not very_long_running" -v -``` - -#### 3. E2E Tests (Regular) - -**Marker**: `e2e` (excluding `long_running` and `very_long_running`) - -**Characteristics**: - -* Real API calls to staging environment -* Network I/O bound -* ~7 minutes total execution time - -**Parallelization**: `XDIST_WORKER_FACTOR=1.0` (100% of logical CPUs) - -* Full parallelization maximizes throughput for I/O-bound tests -* Example: 8 CPU machine → 8 workers - -**CI Behavior**: Always run in all CI contexts - -**Requirements**: `.env` file with staging credentials - -**Run locally**: - -```bash -make test_e2e -# Or directly: -uv run pytest -m "e2e and not long_running and not very_long_running" -v -``` - -#### 4. Long Running Tests - -**Marker**: `long_running` - -**Characteristics**: - -* E2E tests taking >30 seconds each -* Typically involve large file operations or complex workflows -* Variable duration (5-15 minutes total) - -**Parallelization**: `XDIST_WORKER_FACTOR=2.0` (200% of logical CPUs) - -* Aggressive parallelization to reduce wall-clock time -* Example: 8 CPU machine → 16 workers - -**CI Behavior**: - -* **Draft PRs**: Always skipped -* **Non-draft PRs**: Run by default UNLESS: - * PR has label `skip:test:long_running`, OR - * Commit message contains `skip:test:long_running` -* **Main branch**: Always run -* **Releases**: Always run - -**Skip in PR**: - -```bash -# Add label -gh pr edit --add-label "skip:test:long_running" - -# Or commit message -git commit -m "fix: something skip:test:long_running" -``` - -**Run locally**: - -```bash -make test_long_running -# Or directly: -uv run pytest -m long_running -v -``` - -#### 5. Very Long Running Tests - -**Marker**: `very_long_running` - -**Characteristics**: - -* E2E tests taking >5 minutes each -* Extremely resource-intensive operations -* 15+ minutes total execution time - -**Parallelization**: `XDIST_WORKER_FACTOR=2.0` (200% of logical CPUs) - -**CI Behavior**: - -* **NEVER run by default** -* **Only run when explicitly enabled** via: - * PR label `enable:test:very_long_running`, OR - * Commit message contains `enable:test:very_long_running` - -**Enable in PR**: - -```bash -# Add label -gh pr edit --add-label "enable:test:very_long_running" - -# Or commit message -git commit -m "test: enable very long tests enable:test:very_long_running" -``` - -**Run locally**: - -```bash -make test_very_long_running -# Or directly: -uv run pytest -m very_long_running -v -``` - -#### 6. Sequential Tests - -**Marker**: `sequential` - -**Characteristics**: - -* Tests that must run in specific order -* Have interdependencies or shared state -* Cannot be parallelized - -**Parallelization**: None (single worker) - -**CI Behavior**: Always run in CI (as part of test suite) - -**Run locally**: - -```bash -make test_sequential -# Or directly: -uv run pytest -m sequential -v -``` - -#### 7. 
Scheduled Tests - -**Markers**: `scheduled` or `scheduled_only` - -**Characteristics**: - -* Tests designed for continuous validation against live environments -* May have different behavior in staging vs production -* Validate API contract stability - -**CI Behavior**: - -* **`scheduled`**: Run in scheduled jobs AND can run in regular CI -* **`scheduled_only`**: ONLY run in scheduled jobs (never in PR CI) - -**Scheduling**: - -* **Staging**: Every 6 hours (`test-scheduled-staging.yml`) -* **Production**: Every 24 hours (`test-scheduled-production.yml`) - -**Run locally**: - -```bash -make test_scheduled -# Or directly: -uv run pytest -m "(scheduled or scheduled_only)" -v -``` - -### Test Execution Flow in CI - -**Standard PR Flow** (_test.yml): - -```text -1. Unit Tests (3 min) - ├─ Python 3.11 ─┐ - ├─ Python 3.12 ─┼─ Parallel execution - └─ Python 3.13 ─┘ - -2. Integration Tests (5 min) - ├─ Python 3.11 ─┐ - ├─ Python 3.12 ─┼─ Parallel execution - └─ Python 3.13 ─┘ - -3. E2E Regular (7 min) - ├─ Python 3.11 ─┐ - ├─ Python 3.12 ─┼─ Parallel execution - └─ Python 3.13 ─┘ - -4. Long Running (if not skipped) - └─ Python 3.13 only (single version) - -5. Very Long Running (if explicitly enabled) - └─ Python 3.13 only (single version) -``` - -**Matrix Testing**: - -* Unit, Integration, E2E run on **all 3 Python versions** (3.11, 3.12, 3.13) -* Long running and very long running run on **Python 3.13 only** to save CI time -* Windows ARM excludes Python 3.12.12 due to instability - -### Skip Markers System - -**PR Labels** (preferred method): - -* `skip:ci` - Skip entire CI pipeline -* `build:native:only` - Only build native executables -* `skip:test:long_running` - Skip long-running tests -* `enable:test:very_long_running` - Enable very long running tests -* `skip:test:unit` - Skip unit tests (not recommended) -* `skip:test:integration` - Skip integration tests (not recommended) -* `skip:test:e2e` - Skip e2e tests (not recommended) - -**Commit Message Shortcuts**: - -* `skip:ci` - Skip entire CI pipeline -* `build:native:only` - Only build native executables -* `skip:test:long_running` - Skip long-running tests -* `enable:test:very_long_running` - Enable very long running tests -* `Bump version:` - Skip CI (version bump commits) - -**Usage**: - -```bash -# Add label to PR -gh pr edit --add-label "skip:test:long_running" - -# Or in commit message -git commit -m "fix: issue skip:test:long_running" -``` - -## Main CI/CD Pipeline (ci-cd.yml) - -**Purpose**: Orchestrates the entire CI/CD pipeline for all branches, PRs, and releases. - -**Triggers**: - -* `push` to `main` branch -* `pull_request` to `main` (opened, synchronize, reopened) -* `release` created -* `tags` matching `v*.*.*` - -**Concurrency Control**: - -```yaml -group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.event.pull_request.number || github.sha }} -cancel-in-progress: true -``` - -Cancels in-progress runs when new commits are pushed to same PR/branch. - -**Skip Conditions**: - -* Commit message contains `skip:ci` -* Commit message contains `build:native:only` -* Commit starts with `Bump version:` -* PR has label `skip:ci` or `build:native:only` - -**Job Dependencies**: - -```text -lint ──┐ -audit ─┼──→ ketryx_report_and_check ──┬──→ package_publish (tags only) -test ──┤ └──→ docker_publish (tags only) -codeql─┘ -``` - -**Jobs**: - -1. **lint** (~5 min): Code quality checks (ruff, pyright, mypy) -2. **audit** (~3 min): Security audit (pip-audit, pip-licenses, SBOMs) -3. 
**test** (~15 min): Multi-stage test suite (unit, integration, e2e, long_running, very_long_running) -4. **codeql** (~10 min): CodeQL security analysis -5. **ketryx_report_and_check**: Medical device compliance reporting -6. **package_publish** (tags only): Build and publish to PyPI, create GitHub release, send Slack notification -7. **docker_publish** (tags only): Build and publish Docker images to Docker Hub - -## Native Build System - -### Purpose - -Build standalone native executables for distribution without Python runtime dependency. - -### Supported Platforms - -| Platform | Runner | Status | Notes | -|----------|--------|--------|-------| -| **Linux x86_64** | ubuntu-latest | ✅ Stable | Primary platform | -| **Linux ARM64** | ubuntu-24.04-arm | ⚠️ Experimental | continue-on-error | -| **macOS ARM (M1+)** | macos-latest | ⚠️ Experimental | Apple Silicon | -| **macOS Intel** | macos-13 | ⚠️ Experimental | Intel chips | -| **Windows x86_64** | windows-latest | ⚠️ Experimental | With UPX compression | -| **Windows ARM64** | windows-11-arm | ⚠️ Experimental | ARM-based Windows | - -### Build Process - -1. **Setup**: Install uv package manager -2. **Windows Only**: Install UPX compression tool via chocolatey -3. **Build**: Run `make dist_native` - * Uses PyInstaller to create standalone executable - * Bundles Python runtime and all dependencies - * Compresses with UPX (Windows only) -4. **Package**: Creates `aignostics.7z` archive -5. **Upload**: Artifacts stored for 1 day with retention - -### Triggering Native Builds - -**Automatic**: Add commit message or PR label: - -```bash -git commit -m "build:native:only: create native builds" -# Or -gh pr edit --add-label "build:native:only" -``` - -**Effect**: Skips main CI/CD pipeline, only runs native builds. - -**Local Build**: - -```bash -make dist_native -# Output: dist_native/aignostics.7z -``` - -## Claude Code Integration - -### Overview - -Claude Code is integrated into the CI/CD pipeline for: - -1. **Automated PR Reviews** - Every PR gets automatic code review -2. **Interactive Sessions** - Manual Claude assistance for development tasks - -### Workflow: _claude-code.yml - -**Two Execution Modes**: - -#### 1. Interactive Mode - -* **Use Case**: Manual Claude sessions for development assistance -* **Behavior**: Iterative conversation, Claude can ask questions -* **Git History**: Full (`fetch-depth: 0`) -* **Duration**: Variable (controlled by `max_turns`) - -**Trigger**: - -```bash -# GitHub Actions UI: Actions → Claude Code Interactive → Run workflow -# Inputs: -# - prompt: "Your task description" -# - max_turns: 200 (default) -# - platform_environment: staging (default) or production -``` - -#### 2. Automation Mode - -* **Use Case**: Single-shot automated tasks (PR reviews, automated fixes) -* **Behavior**: Non-interactive, runs predefined prompt -* **Git History**: Shallow (`fetch-depth: 1`) -* **Duration**: Typically 5-10 minutes - -**Triggered by**: `claude-code-automation-pr-review.yml` on PR events - -### Configuration - -**Inputs**: - -```yaml -platform_environment: 'staging' | 'production' # Default: staging -mode: 'interactive' | 'automation' # Required -prompt: 'string' # For automation mode -max_turns: '200' # Default: 200 -allowed_tools: 'comma,separated,list' # Default: Read,Write,Edit,Glob,Grep,Bash(git:*),Bash(uv:*),Bash(make:*) -``` - -**Environment Setup** (same as test environment): - -1. Installs `uv` package manager -2. Installs dev tools (`.github/workflows/_install_dev_tools.bash`) -3. 
Syncs Python dependencies (`uv sync --all-extras`) -4. Sets up headless display (for GUI tests) -5. Creates `.env` with Aignostics credentials (staging or production) -6. Configures GCP credentials for bucket access - -**Claude Configuration**: - -```bash -claude \ - --max-turns 200 \ - --model claude-sonnet-4-5-20250929 \ - --allowed-tools "Read,Write,Edit,Glob,Grep,Bash(git:*),Bash(uv:*),Bash(make:*),Bash(gh:*),..." \ - --system-prompt "Read the CLAUDE.md file and apply guidance therein" \ - --prompt "${{ inputs.prompt }}" -``` - -**Secrets Required**: - -* `ANTHROPIC_API_KEY` - For Claude Code -* `AIGNOSTICS_CLIENT_ID_DEVICE_{STAGING|PRODUCTION}` -* `AIGNOSTICS_REFRESH_TOKEN_{STAGING|PRODUCTION}` -* `GCP_CREDENTIALS_{STAGING|PRODUCTION}` - -### Automated PR Review (claude-code-automation-pr-review.yml) - -**Purpose**: Automated code review by Claude on every PR. - -**Triggers**: - -* `pull_request` (opened, synchronize) -* **Excludes**: dependabot, renovate PRs - -**Review Prompt**: - -```text -Review this PR thoroughly. Check code quality, test coverage, security, -and adherence to CLAUDE.md guidelines. -``` - -**Features**: - -* Posts inline comments on code -* Checks for common issues -* Validates test coverage -* Reviews documentation -* Maximum 100 turns - -**Tool Access**: - -* `mcp__github_inline_comment__create_inline_comment` - For PR comments -* File operations: `Read`, `Write`, `Edit`, `Glob`, `Grep` -* Git/GitHub: `Bash(git:*)`, `Bash(gh:*)` - -### Manual Claude Sessions (claude-code-interactive.yml) - -**Purpose**: On-demand Claude assistance for complex development tasks. - -**Trigger**: `workflow_dispatch` (manual) - -**Inputs**: - -* `prompt`: What you want Claude to work on -* `max_turns`: How many iterations (default 200) -* `platform_environment`: staging (default) or production - -**Example Use Cases**: - -* "Refactor module X for better testability" -* "Add comprehensive tests for feature Y" -* "Update documentation for API changes" -* "Debug failing tests in TestClass" - -**Access**: GitHub Actions UI → Claude Code Interactive → Run workflow - -### Best Practices for Claude Code - -**DO**: - -* ✅ Use `--system-prompt` referencing CLAUDE.md -* ✅ Limit tool access (`--allowed-tools`) -* ✅ Set reasonable `--max-turns` -* ✅ Use staging environment for development -* ✅ Review Claude's changes before merging -* ✅ Let Claude explore workflows and test strategies - -**DON'T**: - -* ❌ Grant unrestricted tool access -* ❌ Skip CLAUDE.md system prompt -* ❌ Test against production without approval -* ❌ Merge without human review - -## Scheduled Jobs - -### Test Validation (Staging & Production) - -**Purpose**: Continuous validation of SDK against live environments. 
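-
-Per the marker scheme documented above, a scheduled validation test might look like the following minimal sketch (test names and bodies are illustrative, not taken from the actual suite):
-
-```python
-import pytest
-
-
-# Illustrative: runs in scheduled jobs and may also run in regular CI.
-@pytest.mark.e2e
-@pytest.mark.scheduled
-def test_api_contract_smoke():
-    ...
-
-
-# Illustrative: runs ONLY in scheduled jobs, never in PR CI.
-@pytest.mark.e2e
-@pytest.mark.scheduled_only
-def test_live_environment_stability():
-    ...
-```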
- -#### test-scheduled-staging.yml - -**Schedule**: Every 6 hours - -**Environment**: `https://platform-staging.aignostics.com` - -**Purpose**: - -* Early detection of API regressions -* Validate against latest staging deployment -* Fast feedback loop for breaking changes - -#### test-scheduled-production.yml - -**Schedule**: Every 24 hours - -**Environment**: `https://platform.aignostics.com` - -**Purpose**: - -* Validate SDK works with production API -* Catch discrepancies between staging and production -* Safety net for production deployments - -**Both workflows**: - -* Use `_scheduled-test.yml` reusable workflow -* Run `make test_scheduled` (tests marked `scheduled` or `scheduled_only`) -* Send BetterStack heartbeat for monitoring -* Upload test results and coverage reports - -### Audit Validation (audit-scheduled.yml) - -**Schedule**: Every hour (`0 * * * *`) - -**Purpose**: Continuous security and license compliance monitoring - -**Checks**: - -* `pip-audit`: CVE scanning for known vulnerabilities -* `pip-licenses`: License compliance verification -* Trivy: SBOM vulnerability scanning (CycloneDX + SPDX formats) - -**Workflow**: Uses `_scheduled-audit.yml` - -**Outputs**: - -* SBOM files (JSON, SPDX) -* License reports (CSV, JSON, grouped JSON) -* Vulnerability reports (JSON) -* BetterStack heartbeat - -### CodeQL Scanning (codeql-scheduled.yml) - -**Schedule**: Weekly on Tuesdays at 3:22 AM - -**Purpose**: Comprehensive security analysis with CodeQL - -**Workflow**: Uses `_codeql.yml` - -**Analysis**: Static analysis for Python security vulnerabilities - -## BetterStack Monitoring - -### Purpose - -External monitoring and alerting for scheduled jobs to detect failures outside GitHub. - -### Heartbeat System - -**Implemented in**: - -* `_scheduled-audit.yml` - Audit job monitoring -* `_scheduled-test.yml` - Test job monitoring (staging & production) - -**Functionality**: - -1. Job runs (audit or test) -2. Captures exit code (0 = success, non-zero = failure) -3. Constructs JSON payload with metadata -4. Sends POST request to BetterStack heartbeat URL with exit code appended -5. 
BetterStack tracks heartbeat and alerts on failures or missed beats - -**Payload Structure**: - -```json -{ - "github": { - "workflow": "Scheduled Test - Staging", - "run_url": "https://github.com/org/repo/actions/runs/12345", - "run_id": "12345", - "job": "test-scheduled", - "sha": "abc123...", - "actor": "github-actions", - "repository": "org/repo", - "ref": "refs/heads/main", - "event_name": "schedule" - }, - "job": { - "status": "success" - }, - "timestamp": "2025-10-19T14:30:00Z" -} -``` - -**URL Format**: `{HEARTBEAT_URL}/{EXIT_CODE}` - -**Required Secrets**: - -* `BETTERSTACK_AUDIT_HEARTBEAT_URL` - For audit jobs -* `BETTERSTACK_HEARTBEAT_URL_STAGING` - For staging test jobs -* `BETTERSTACK_HEARTBEAT_URL_PRODUCTION` - For production test jobs - -**Behavior**: - -* If heartbeat URL is configured: Sends heartbeat regardless of job success/failure -* If heartbeat URL is NOT configured: Logs warning and continues -* Exit code passed to URL allows BetterStack to distinguish success (0) from failures - -## Environment Configuration - -### Staging Environment - -**API Root**: `https://platform-staging.aignostics.com` - -**Secrets**: - -* `AIGNOSTICS_CLIENT_ID_DEVICE_STAGING` -* `AIGNOSTICS_REFRESH_TOKEN_STAGING` -* `GCP_CREDENTIALS_STAGING` -* `BETTERSTACK_HEARTBEAT_URL_STAGING` - -**Use Cases**: - -* PR testing (default for all PRs) -* E2E test execution -* Feature validation -* Claude Code development sessions -* Scheduled validation (every 6 hours) - -### Production Environment - -**API Root**: `https://platform.aignostics.com` - -**Secrets**: - -* `AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION` -* `AIGNOSTICS_REFRESH_TOKEN_PRODUCTION` -* `GCP_CREDENTIALS_PRODUCTION` -* `BETTERSTACK_HEARTBEAT_URL_PRODUCTION` - -**Use Cases**: - -* Scheduled tests only (every 24 hours) -* Release validation -* Critical bug verification -* **NEVER use in PR CI** (staging only) - -## Secrets Management - -**GitHub Secrets** (Required): - -* `ANTHROPIC_API_KEY` - Claude Code -* `AIGNOSTICS_CLIENT_ID_DEVICE_{STAGING|PRODUCTION}` -* `AIGNOSTICS_REFRESH_TOKEN_{STAGING|PRODUCTION}` -* `GCP_CREDENTIALS_{STAGING|PRODUCTION}` - Base64 encoded JSON -* `BETTERSTACK_AUDIT_HEARTBEAT_URL` - Audit monitoring -* `BETTERSTACK_HEARTBEAT_URL_{STAGING|PRODUCTION}` - Test monitoring -* `CODECOV_TOKEN` - Coverage reporting to Codecov -* `SONAR_TOKEN` - Code quality reporting to SonarCloud -* `UV_PUBLISH_TOKEN` - PyPI publishing token -* `DOCKER_USERNAME`, `DOCKER_PASSWORD` - Docker Hub credentials -* `KETRYX_PROJECT`, `KETRYX_API_KEY` - Medical device compliance -* `SLACK_WEBHOOK_URL_RELEASE_ANNOUNCEMENT` - Release notifications - -**Local Secrets** (`.env` file for E2E tests): - -```bash -AIGNOSTICS_API_ROOT=https://platform-staging.aignostics.com -AIGNOSTICS_CLIENT_ID_DEVICE=your-staging-client-id -AIGNOSTICS_REFRESH_TOKEN=your-staging-refresh-token -``` - -**GCP Credentials** (for bucket access): - -```bash -# In CI: base64 encoded and stored as secret -echo "$GCP_CREDENTIALS" | base64 -d > credentials.json -export GOOGLE_APPLICATION_CREDENTIALS=$(pwd)/credentials.json -``` - -## Debugging CI Failures - -### Lint Failures - -**Reproduce locally**: - -```bash -make lint -``` - -**Common Issues**: - -* Ruff formatting: Run `ruff format .` -* Ruff linting: Check `ruff check .` and fix with `--fix` -* PyRight: Type errors (basic mode, see `pyrightconfig.json`) -* MyPy: Type errors (strict mode) - -**Fix**: - -```bash -ruff format . -ruff check . 
--fix -``` - -### Test Failures - -**Reproduce locally**: - -```bash -# Unit tests -make test_unit - -# Integration tests -make test_integration - -# E2E tests (requires .env with credentials) -make test_e2e - -# Specific test -uv run pytest tests/path/to/test.py::test_name -vv -``` - -**Debug**: - -```bash -# Verbose output -uv run pytest tests/test_file.py -vv - -# Show print statements -uv run pytest tests/test_file.py -s - -# Drop into debugger on failure -uv run pytest tests/test_file.py --pdb - -# Run single test -uv run pytest tests/test_file.py::test_function -v -``` - -**Check Coverage**: - -```bash -uv run coverage report -uv run coverage html -open htmlcov/index.html -``` - -**Minimum**: 85% coverage required - -### Audit Failures - -**Security Vulnerabilities**: - -```bash -uv run pip-audit -``` - -**Fix**: Update vulnerable dependencies in `pyproject.toml` - -**License Violations**: - -```bash -uv run pip-licenses --allow-only="MIT;Apache-2.0;BSD-3-Clause;..." -``` - -**Fix**: Replace non-compliant dependencies or get approval for license - -### Native Build Failures - -**Platform-specific issues**: - -* Check runner compatibility -* Verify UPX installation (Windows) -* Check PyInstaller compatibility with dependencies - -**Local reproduction**: - -```bash -make dist_native -``` - -**Note**: Experimental platforms (continue-on-error) won't block CI - -### Scheduled Job Failures - -**BetterStack Alerts**: Check BetterStack dashboard for heartbeat failures - -**Investigate**: - -1. Go to GitHub Actions → Scheduled workflow -2. Check recent run logs -3. Look for API changes or credential issues - -**Common causes**: - -* API breaking changes in staging/production -* Expired credentials -* Network issues -* Dependency updates - -## Performance & Optimization - -### Parallel Testing - -**CPU-based distribution**: `-n logical` (uses all logical CPUs) - -**Work stealing**: `--dist worksteal` (dynamic load balancing) - -**XDIST_WORKER_FACTOR**: Controls parallelism (0.0-2.0) - -* 0.0 = Sequential (1 worker) -* 0.2 = 20% of CPUs -* 1.0 = 100% of CPUs -* 2.0 = 200% of CPUs (aggressive for I/O-bound) - -**Calculation**: `max(1, int(cpu_count * factor))` - -**Example** (8 CPU machine): - -* unit: 0.0 → 1 worker (sequential) -* integration: 0.2 → max(1, int(8 * 0.2)) = 1 worker -* e2e: 1.0 → 8 workers -* long_running: 2.0 → 16 workers - -### Caching - -* **uv dependencies**: Cached via `astral-sh/setup-uv` action -* **Docker layers**: Cached by Docker build action -* **Nox virtualenvs**: Reused when possible (`nox.options.reuse_existing_virtualenvs = True`) - -### Typical Run Times - -| Job | Duration | Notes | -|-----|----------|-------| -| Lint | ~5 min | Ruff, PyRight, MyPy | -| Audit | ~3 min | pip-audit, licenses, SBOMs | -| Test (per Python version) | ~5 min | Unit + Integration + E2E (no long_running) | -| Test (full matrix) | ~15 min | All 3 Python versions parallel | -| Test (with long_running) | ~25 min | Adds 10 min for long tests | -| CodeQL | ~10 min | Static analysis | -| Full CI pipeline | ~20-30 min | Depends on test configuration | -| Native builds | ~10 min/platform | 6 platforms in parallel | -| Package publish | ~3 min | Build + upload to PyPI | -| Docker publish | ~5 min | Multi-arch build | - -## Common Workflows - -### Creating a PR - -1. Create feature branch -2. Make changes -3. Run `make lint` and `make test` locally -4. Commit with conventional commit message -5. Push to GitHub -6. 
Create PR → Triggers: - * Lint checks - * Audit checks - * Test suite (unit, integration, e2e) - * CodeQL scan - * Claude Code automated review -7. **Important**: Add label `skip:test:long_running` to save CI time (unless you need long tests) -8. Address review feedback -9. Merge when all checks pass - -### Releasing a Version - -1. Ensure `main` branch is clean and all tests pass -2. Run version bump: - ```bash - make bump patch # or minor, major - ``` -3. This creates a commit and git tag -4. Push with tags: - ```bash - git push --follow-tags - ``` -5. CI detects tag and triggers: - * Full CI pipeline (lint, audit, test, CodeQL) - * Package build and publish to PyPI - * Docker image build and publish - * GitHub release creation - * Slack notification to team - -### Manual Testing with Claude - -1. Go to: Actions → Claude Code Interactive -2. Click "Run workflow" -3. Fill in: - * **Prompt**: Describe your task - * **Max turns**: 200 (default) - * **Environment**: staging (default) -4. Click "Run workflow" -5. Monitor execution in Actions tab -6. Review changes and create PR if needed - -### Running Scheduled Tests Manually - -```bash -# Staging tests -gh workflow run test-scheduled-staging.yml - -# Production tests (use with caution) -gh workflow run test-scheduled-production.yml -``` - -### Building Native Executables - -**Via CI**: - -```bash -git commit -m "build:native:only: create native binaries" -git push -``` - -**Locally**: - -```bash -make dist_native -# Output: dist_native/aignostics.7z -``` - -## Workflow Files Summary - -| File | Type | Purpose | Duration | -|------|------|---------|----------| -| `ci-cd.yml` | Entry | Main pipeline orchestration | ~20 min | -| `build-native-only.yml` | Entry | Native build trigger | ~60 min (6 platforms) | -| `claude-code-interactive.yml` | Entry | Manual Claude sessions | varies | -| `claude-code-automation-pr-review.yml` | Entry | Automated PR reviews | ~10 min | -| `test-scheduled-staging.yml` | Entry | Staging validation | ~10 min | -| `test-scheduled-production.yml` | Entry | Production validation | ~10 min | -| `audit-scheduled.yml` | Entry | Security audit | ~5 min | -| `codeql-scheduled.yml` | Entry | CodeQL scan | ~10 min | -| `_lint.yml` | Reusable | Code quality checks | ~5 min | -| `_audit.yml` | Reusable | Security & license | ~3 min | -| `_test.yml` | Reusable | Test execution | ~15 min | -| `_codeql.yml` | Reusable | Security scanning | ~10 min | -| `_ketryx_report_and_check.yml` | Reusable | Compliance reporting | ~2 min | -| `_package-publish.yml` | Reusable | PyPI publishing | ~3 min | -| `_docker-publish.yml` | Reusable | Docker publishing | ~5 min | -| `_build-native-only.yml` | Reusable | Native builds | ~10 min/platform | -| `_claude-code.yml` | Reusable | Claude Code execution | varies | -| `_scheduled-audit.yml` | Reusable | Scheduled audit runner | ~5 min | -| `_scheduled-test.yml` | Reusable | Scheduled test runner | ~10 min | - ---- - -**Built with operational excellence for medical device software development.** - -**Note**: See root `CLAUDE.md` and `Makefile` for development commands. This document focuses on CI/CD workflows and GitHub Actions. 
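As a closing illustration of the parallelization rule documented in the guide above, `max(1, int(cpu_count * factor))`, here is a minimal Python sketch; the function name is illustrative and not part of the actual tooling:

```python
def xdist_workers(cpu_count: int, factor: float) -> int:
    """Worker count per the documented rule: max(1, int(cpu_count * factor))."""
    return max(1, int(cpu_count * factor))


# The worked examples from the guide, assuming an 8-CPU machine:
assert xdist_workers(8, 0.0) == 1   # unit: effectively sequential
assert xdist_workers(8, 0.2) == 1   # integration: max(1, int(1.6)) = 1
assert xdist_workers(8, 1.0) == 8   # e2e: one worker per logical CPU
assert xdist_workers(8, 2.0) == 16  # long_running: aggressive oversubscription
```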
diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS
deleted file mode 100644
index 94a9a38b..00000000
--- a/.github/CODEOWNERS
+++ /dev/null
@@ -1,5 +0,0 @@
-# aignostics code owners
-
-* @helmut-hoffer-von-ankershoffen
-
-# Reference:
diff --git a/.github/actions/run-tests/action.yml b/.github/actions/run-tests/action.yml
deleted file mode 100644
index e00cc071..00000000
--- a/.github/actions/run-tests/action.yml
+++ /dev/null
@@ -1,58 +0,0 @@
-name: 'Run Tests with Reporting'
-description: 'Run tests and generate GitHub summary reports'
-
-inputs:
-  test-type:
-    description: 'Type of test (unit, integration, e2e, long-running)'
-    required: true
-  make-target:
-    description: 'Make target to execute'
-    required: true
-  skip-marker:
-    description: 'Commit message or PR label marker to skip this test'
-    required: false
-  summary-title:
-    description: 'Title for the GitHub step summary'
-    required: true
-  commit-message:
-    description: 'The commit message to check for skip markers'
-    required: true
-
-runs:
-  using: 'composite'
-  steps:
-    - name: Run tests
-      if: |
-        (inputs.skip-marker == '' || (
-          !contains(inputs.commit-message, inputs.skip-marker) &&
-          !contains(github.event.pull_request.labels.*.name, inputs.skip-marker)
-        )) &&
-        (!contains(inputs.commit-message, 'skip:test:all')) &&
-        (!contains(github.event.pull_request.labels.*.name, 'skip:test:all'))
-      shell: bash
-      run: |
-        set +e
-        make ${{ inputs.make-target }}
-        EXIT_CODE=$?
-        # Show test execution in GitHub Job summary
-        found_files=0
-        for file in reports/pytest_*.md; do
-          if [ -f "$file" ]; then
-            cat "$file" >> $GITHUB_STEP_SUMMARY
-            echo "" >> $GITHUB_STEP_SUMMARY
-            found_files=1
-          fi
-        done
-        if [ $found_files -eq 0 ]; then
-          echo "# ${{ inputs.summary-title }}" >> $GITHUB_STEP_SUMMARY
-          echo "" >> $GITHUB_STEP_SUMMARY
-        fi
-        # Show test coverage in GitHub Job summary
-        if [ -f "reports/coverage.md" ]; then
-          cat "reports/coverage.md" >> $GITHUB_STEP_SUMMARY
-          echo "" >> $GITHUB_STEP_SUMMARY
-        else
-          echo "# No test coverage computed." >> $GITHUB_STEP_SUMMARY
-          echo "" >> $GITHUB_STEP_SUMMARY
-        fi
-        exit $EXIT_CODE
diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md
index 7ec0ca57..c6da447d 100644
--- a/.github/copilot-instructions.md
+++ b/.github/copilot-instructions.md
@@ -1,241 +1,5 @@
-# Copilot Instructions - Aignostics Python SDK
+Always conform to the coding styles defined in CODE_STYLE.md in the root
+directory of this repository when generating code.
-
-## Read me first
-
-You must do this: Use the guidance provided in the root [CLAUDE.md](../CLAUDE.md) file, fully understand it, and apply **all** guidance therein and in **all** linked documents!
-
-## Tooling specific to use in VSCode
-
-* You are an agent running embedded in VSCode. Bias toward the MCP tools made available to you - they are there for a reason.
-* When asked to navigate to a page or open it, check if the GUI (alias Launchpad) is running. If so, it's reachable via http://127.0.0.1:8000 or http://127.0.0.1:8001 etc.
-* If not running, or when asked to start the GUI or the Launchpad, execute `make gui_watch` in the terminal. Don't ask the user to run `make gui_watch`; do it yourself.
-* Then use the in-built openSimpleBrowser (MCP tool) to go to the page, make the code change you are asked for, and check the output in the in-built browser.
-* When asked to go to the HETA application (describe) page you can typically navigate to http://127.0.0.1:8000/application/he-tme -* Don't ask the user to open the browser, but use openSimpleBrowser (MCP) yourself. - -## Project Overview - -The Aignostics Python SDK is a **computational pathology platform** providing multiple interfaces to process whole slide images (WSI) with AI/ML applications. It follows a **modulith architecture** with independent modules connected via dependency injection. - -**Key Components:** -- **Launchpad**: Desktop GUI (NiceGUI + webview) -- **CLI**: Command-line interface (Typer) -- **Client Library**: Python API wrapper -- **Notebook Integration**: Marimo/Jupyter support - -## Architecture Principles - -### 1. Modulith Design Pattern -Each module follows a consistent three-layer structure: -``` -module/ -├── _service.py # Business logic (inherits BaseService) -├── _cli.py # CLI commands (Typer) -├── _gui.py # GUI interface (NiceGUI) -├── _settings.py # Configuration (Pydantic) -└── CLAUDE.md # Detailed documentation -``` - -### 2. Service Discovery & Dependency Injection -- All services inherit from `BaseService` and implement `health()` and `info()` methods -- Use `locate_implementations(BaseService)` for runtime service discovery -- No decorators - pure runtime DI container pattern -- Services are singletons within the DI container - -### 3. Presentation Layer Independence -``` -CLI Layer ─┐ - ├─→ Service Layer -GUI Layer ─┘ -``` -CLI and GUI layers depend on Service layer, never on each other. - -## Module Dependencies & Communication - -**Foundation Layer:** -- `utils`: DI container, logging, settings, health checks - -**API Layer:** -- `platform`: OAuth 2.0 auth, JWT tokens, API client - -**Domain Modules:** -- `application`: ML run orchestration (depends on: platform, bucket, wsi, qupath optional) -- `wsi`: Medical image processing (OpenSlide, PyDICOM) -- `dataset`: IDC downloads with s5cmd -- `bucket`: Cloud storage (S3/GCS) - -**Integration:** -- `qupath`: Bioimage analysis (requires `ijson`) -- `notebook`: Marimo server (requires `marimo`) -- `gui`: Desktop launchpad (aggregates all GUIs) -- `system`: Health monitoring (queries ALL services) - -## Development Workflow Commands - -**Primary Commands:** -```bash -make install # Install dev deps + pre-commit hooks -make all # Full CI pipeline (lint, test, docs, audit) -make test # Run tests with coverage (85% minimum) -make test 3.12 # Run on specific Python version -make lint # Ruff formatting + MyPy type checking -``` - -**Package Management:** -- Uses `uv` (not pip/poetry): `uv sync --all-extras` -- Add dependencies: `uv add ` - -**Testing:** -- Pytest with markers: `sequential`, `long_running`, `scheduled`, `docker`, `skip_with_act` -- Run specific tests: `uv run pytest tests/path/test.py::test_function` -- Docker integration: `make test-docker` - -## Code Patterns & Standards - -### Service Implementation -```python -from aignostics.utils import BaseService, Health - -class Service(BaseService): - def __init__(self): - super().__init__(SettingsClass) # Optional settings - - def health(self) -> Health: - return Health(status=Health.Code.UP) - - def info(self, mask_secrets: bool = True) -> dict: - return {"version": "1.0.0"} -``` - -### CLI Pattern -```python -import typer -from ._service import Service - -cli = typer.Typer(name="module", help="Module description") - -@cli.command("action") -def action_command(param: str): - """Command description.""" - service = Service() - result = 
service.perform_action(param) - console.print(result) -``` - -### GUI Pattern -```python -from nicegui import ui - -def create_page(): - ui.label("Module Interface") - # Components auto-register with GUI launcher -``` - -## Testing Conventions - -**File Structure:** -- Tests in `tests/aignostics//` -- Use `conftest.py` fixtures for common setup -- Mock external dependencies - -**Patterns:** -- Use `CliRunner` from `typer.testing` for CLI tests -- Use `normalize_output()` helper for cross-platform CLI output -- Cleanup fixtures for processes (e.g., `qupath_teardown`) - -## Medical Domain Context - -**Key Technologies:** -- **DICOM**: Medical imaging standard -- **WSI**: Gigapixel pathology images (pyramidal multi-resolution) -- **IDC**: NCI Imaging Data Commons for public datasets -- **QuPath**: Leading bioimage analysis platform -- **H&E**: Hematoxylin & Eosin histological staining - -**Processing Patterns:** -- Tile-based processing for memory efficiency -- Streaming for large file transfers -- Chunked uploads/downloads (1MB/10MB chunks) -- Signed URLs for secure data access - -## Security & Performance - -**Authentication:** -- OAuth 2.0 device flow via `platform` module -- Tokens cached in `~/.aignostics/token.json` -- 5-minute refresh buffer before expiry - -**Performance:** -- Lazy evaluation for large datasets -- Process management for subprocesses -- Memory-efficient WSI processing in tiles -- Async operations for I/O-bound tasks - -## Configuration & Environment - -**Settings Pattern:** -```python -from pydantic_settings import BaseSettings - -class Settings(BaseSettings): - api_root: str = "https://platform.aignostics.com" - - class Config: - env_prefix = "AIGNOSTICS_" -``` - -**Environment Variables:** -- `AIGNOSTICS_API_ROOT`: Platform endpoint -- `AIGNOSTICS_CLIENT_ID_DEVICE`: OAuth client ID -- `AIGNOSTICS_REFRESH_TOKEN`: Auth token - -## Common Integration Points - -**Application Run Workflow:** -```python -# 1. Authenticate -client = platform.Client() - -# 2. Submit run -run = client.runs.create( - application_id="heta", - application_version="1.0.0", # version number without 'v' prefix, omit for latest - items=[platform.InputItem(...)] -) - -# 3. Monitor & download -run.download_to_folder("./results") -``` - -**Service Health Monitoring:** -```python -from aignostics.utils import locate_implementations, BaseService - -# Discover all services -services = locate_implementations(BaseService) - -# Check health of all services -for service_class in services: - service = service_class() - health = service.health() - print(f"{service.key()}: {health.status}") -``` - -## Important Notes - -**Optional Dependencies:** -- GUI requires: `pip install "aignostics[gui]"` -- QuPath requires: `pip install "aignostics[qupath]"` -- Notebooks require: `pip install "aignostics[notebook]"` - -**Platform Constraints:** -- Windows path length limitations -- Memory usage for large WSI files -- Token expiry handling (force refresh with `remove_cached_token()`) - -**Build System:** -- Main config: `pyproject.toml` -- Build tasks: `noxfile.py` (not tox) -- Quality gates: Ruff (formatting/linting), MyPy (typing), 85% test coverage - -Always conform to the coding styles defined in [CODE_STYLE.md](../CODE_STYLE.md) and development processes in [CONTRIBUTING.md](../CONTRIBUTING.md), and have a look at [OPERATIONAL_EXCELLENCE.md](../OPERATIONAL_EXCELLENCE.md) for release readiness. +Learn about tools to use in CONTRIBUTING.md in the root directory of this +repository. 
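The testing conventions in the removed instructions above call for `CliRunner` from `typer.testing`; a minimal sketch of that pattern follows (the imported module path and command name are hypothetical, chosen to match the modulith layout described above):

```python
from typer.testing import CliRunner

from aignostics.module._cli import cli  # hypothetical module path

runner = CliRunner()


def test_action_command() -> None:
    result = runner.invoke(cli, ["action", "some-param"])
    assert result.exit_code == 0
```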
diff --git a/.github/dependabot.yml b/.github/dependabot.yml index 8220a447..4522c340 100644 --- a/.github/dependabot.yml +++ b/.github/dependabot.yml @@ -5,21 +5,9 @@ version: 2 updates: - package-ecosystem: "pip" directory: "/" - labels: - - "bot" - - "dependabot" - - "dependencies" - - "skip:test:long_running" schedule: - interval: "cron" - cronjob: "0 1 * * 1-5" + interval: "daily" - package-ecosystem: "github-actions" directory: "/" - labels: - - "bot" - - "dependabot" - - "dependencies" - - "skip:test:long_running" schedule: - interval: "cron" - cronjob: "0 1 * * 1-5" + interval: "daily" diff --git a/.github/labels.yml b/.github/labels.yml deleted file mode 100644 index c5c4702e..00000000 --- a/.github/labels.yml +++ /dev/null @@ -1,131 +0,0 @@ -# GitHub Labels Configuration -# -# This file defines all labels used in the repository. -# Labels are automatically synced by .github/workflows/labels-sync.yml -# when this file is modified and pushed to main branch. -# -# Manual sync: gh label sync -f .github/labels.yml - -# Standard GitHub Labels -- name: bug - description: Something isn't working - color: "d73a4a" - -- name: documentation - description: Improvements or additions to documentation - color: "0075ca" - -- name: duplicate - description: This issue or pull request already exists - color: "cfd3d7" - -- name: enhancement - description: New feature or request - color: "a2eeef" - -- name: good first issue - description: Good for newcomers - color: "7057ff" - -- name: help wanted - description: Extra attention is needed - color: "008672" - -- name: invalid - description: This doesn't seem right - color: "e4e669" - -- name: question - description: Further information is requested - color: "d876e3" - -- name: wontfix - description: This will not be worked on - color: "ffffff" - -# Dependency Management Labels -- name: dependencies - description: Pull requests that update a dependency file - color: "0366d6" - -- name: github_actions - description: Pull requests that update GitHub Actions - color: "000000" - -- name: python - description: Pull requests that update python code - color: "2b67c6" - -# Bot Labels -- name: bot - description: Automated pull requests or issues - color: "b1aa07" - -- name: dependabot - description: Pull requests from Dependabot - color: "97827a" - -- name: renovate - description: Pull requests from Renovate - color: "d1df30" - -# CI/CD Skip Labels -- name: skip:ci - description: Skip entire CI/CD pipeline - color: "b167f6" - -- name: skip:test:all - description: Skip all test executions - color: "5e0647" - -- name: skip:test:long_running - description: Skip long-running tests (≥5min) - color: "910432" - -- name: skip:test:e2e - description: Skip end-to-end tests - color: "0dc59c" - -- name: skip:test:integration - description: Skip integration tests - color: "64ac56" - -- name: skip:test:unit - description: Skip unit tests - color: "614318" - -- name: skip:ketryx - description: Skip Ketryx ALM reporting and checks - color: "8b4513" - -# CI/CD Enable Labels -- name: enable:test:very_long_running - description: Enable very long-running tests (≥60min) - color: "9033fe" - -# Build Control Labels -- name: build:native:only - description: Build native packages only (macOS/Windows apps) - color: "1d76db" - -# AI Assistant Labels -- name: claude - description: Trigger Claude Code automation - color: "b41d8f" - -- name: copilot - description: GitHub Copilot related - color: "e6dac6" - -# Quality Assurance Labels -- name: code-quality - description: Code quality and 
maintainability issues - color: "fbca04" - -- name: automated-check - description: Issue created by automated quality checks - color: "ededed" - -- name: documentation-drift - description: Documentation out of sync with code - color: "ff6b6b" diff --git a/.github/workflows/_audit.yml b/.github/workflows/_audit.yml index b133aa7f..e5e1cf7a 100644 --- a/.github/workflows/_audit.yml +++ b/.github/workflows/_audit.yml @@ -1,4 +1,4 @@ -name: "> Audit" +name: "Audit" on: workflow_call: @@ -18,7 +18,7 @@ jobs: fetch-depth: 0 - name: Install uv - uses: astral-sh/setup-uv@2ddd2b9cb38ad8efd50337e8ab201519a34c9f24 # v7.1.1 + uses: astral-sh/setup-uv@eb1897b8dc4b5d5bfe39a428a8f2304605e0983c # v7.0.0 with: version-file: "pyproject.toml" enable-cache: true diff --git a/.github/workflows/_build-native-only.yml b/.github/workflows/_build-native-only.yml index f0622886..b3575475 100644 --- a/.github/workflows/_build-native-only.yml +++ b/.github/workflows/_build-native-only.yml @@ -1,4 +1,4 @@ -name: "> Build Native Only" +name: "Build Native Only" on: workflow_call: @@ -44,7 +44,7 @@ jobs: fetch-depth: 0 - name: Install uv - uses: astral-sh/setup-uv@2ddd2b9cb38ad8efd50337e8ab201519a34c9f24 # v7.1.1 + uses: astral-sh/setup-uv@eb1897b8dc4b5d5bfe39a428a8f2304605e0983c # v7.0.0 with: version-file: "pyproject.toml" enable-cache: true diff --git a/.github/workflows/_claude-code.yml b/.github/workflows/_claude-code.yml deleted file mode 100644 index 587ffeb2..00000000 --- a/.github/workflows/_claude-code.yml +++ /dev/null @@ -1,137 +0,0 @@ -name: "> Claude Code" - -on: - workflow_call: - inputs: - platform_environment: - description: 'Environment to use, that is staging or production' - required: false - default: 'staging' - type: string - mode: - description: 'Mode: interactive or automation' - required: true - type: string - prompt: - description: 'Prompt for automation mode' - required: false - type: string - max_turns: - description: 'Maximum number of turns for Claude' - required: false - default: '200' - type: string - allowed_tools: - description: 'Allowed tools for Claude' - required: false - default: 'mcp__github_inline_comment__create_inline_comment,Read,Write,Edit,MultiEdit,Glob,Grep,LS,WebFetch,WebSearch,Bash(git:*),Bash(bun:*),Bash(npm:*),Bash(npx:*),Bash(gh:*),Bash(uv:*),Bash(make:*),Bash(export:*)' - type: string - track_progress: - description: 'Track progress (set to false for workflow_dispatch)' - required: false - default: true - type: boolean - secrets: - ANTHROPIC_API_KEY: - required: true - AIGNOSTICS_CLIENT_ID_DEVICE_STAGING: - required: false - AIGNOSTICS_REFRESH_TOKEN_STAGING: - required: false - GCP_CREDENTIALS_STAGING: - required: false - AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION: - required: false - AIGNOSTICS_REFRESH_TOKEN_PRODUCTION: - required: false - GCP_CREDENTIALS_PRODUCTION: - required: false - -jobs: - claude-code: - runs-on: ubuntu-latest - permissions: - contents: write - pull-requests: write - issues: write - id-token: write - actions: read # Required for Claude to read CI results on PRs - steps: - - name: Checkout repository - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - with: - fetch-depth: ${{ inputs.mode == 'interactive' && 0 || 1 }} - - - name: Install uv - uses: astral-sh/setup-uv@2ddd2b9cb38ad8efd50337e8ab201519a34c9f24 # v7.1.1 - with: - version-file: "pyproject.toml" - enable-cache: true - cache-dependency-glob: uv.lock - - - name: Install dev tools - shell: bash - run: .github/workflows/_install_dev_tools.bash - - - name: Install 
Python, venv and dependencies
-        shell: bash
-        run: uv sync --all-extras --frozen --link-mode=copy
-
-      - name: Setup display
-        uses: pyvista/setup-headless-display-action@7d84ae825e6d9297a8e99bdbbae20d1b919a0b19 # v4.2
-
-      - name: Create .env file
-        uses: SpicyPizza/create-envfile@ace6d4f5d7802b600276c23ca417e669f1a06f6f # v2.0.3
-        with:
-          # The following 3 lines are correct even if vscode complains
-          envkey_AIGNOSTICS_API_ROOT: ${{ inputs.platform_environment == 'staging' && 'https://platform-staging.aignostics.com' || 'https://platform.aignostics.com' }}
-          envkey_AIGNOSTICS_CLIENT_ID_DEVICE: ${{ inputs.platform_environment == 'staging' && secrets.AIGNOSTICS_CLIENT_ID_DEVICE_STAGING || secrets.AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION }}
-          envkey_AIGNOSTICS_REFRESH_TOKEN: ${{ inputs.platform_environment == 'staging' && secrets.AIGNOSTICS_REFRESH_TOKEN_STAGING || secrets.AIGNOSTICS_REFRESH_TOKEN_PRODUCTION }}
-          fail_on_empty: false
-
-      - name: Set up GCP credentials for bucket access
-        shell: bash
-        env:
-          GCP_CREDENTIALS: ${{ inputs.platform_environment == 'staging' && secrets.GCP_CREDENTIALS_STAGING || secrets.GCP_CREDENTIALS_PRODUCTION }}
-        run: |
-          echo "$GCP_CREDENTIALS" | base64 -d > credentials.json
-          echo "GOOGLE_APPLICATION_CREDENTIALS=$(pwd)/credentials.json" >> $GITHUB_ENV
-
-      - name: Print development version info
-        if: ${{ !startsWith(github.ref, 'refs/tags/v') }}
-        shell: bash
-        run: |
-          TOML_VERSION=$(uv run python -c "import tomli; print(tomli.load(open('pyproject.toml', 'rb'))['project']['version'])")
-          echo "Development build - Current version in pyproject.toml: $TOML_VERSION"
-
-      - name: Run Claude Code (Interactive Mode)
-        if: inputs.mode == 'interactive'
-        uses: anthropics/claude-code-action@v1
-        with:
-          github_token: ${{ secrets.GITHUB_TOKEN }}
-          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
-          track_progress: ${{ inputs.track_progress }}
-          additional_permissions: |
-            actions: read
-          allowed_bots: "dependabot[bot],renovate[bot]"
-          claude_args: >-
-            --max-turns ${{ inputs.max_turns }}
-            --model claude-sonnet-4-5-20250929
-            --allowed-tools "${{ inputs.allowed_tools }}"
-            --system-prompt "Read the CLAUDE.md file in the root folder of this repository and explicitly acknowledge you will apply **all** guidance therein and in **all** linked documents."
-      - name: Run Claude Code (Automation Mode)
-        if: inputs.mode == 'automation'
-        uses: anthropics/claude-code-action@v1.0.15
-        with:
-          github_token: ${{ secrets.GITHUB_TOKEN }}
-          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
-          track_progress: ${{ inputs.track_progress }}
-          additional_permissions: |
-            actions: read
-          allowed_bots: "dependabot[bot],renovate[bot]"
-          claude_args: >-
-            --max-turns ${{ inputs.max_turns }}
-            --model claude-sonnet-4-5-20250929
-            --allowed-tools "${{ inputs.allowed_tools }}"
-            --system-prompt "Read the CLAUDE.md file in this repository and explicitly acknowledge you will apply the guidance therein."
- prompt: ${{ inputs.prompt }} diff --git a/.github/workflows/_codeql.yml b/.github/workflows/_codeql.yml index 2e69cb57..d4bf41b4 100644 --- a/.github/workflows/_codeql.yml +++ b/.github/workflows/_codeql.yml @@ -1,4 +1,4 @@ -name: "> CodeQL" +name: "CodeQL Analysis" on: workflow_call: diff --git a/.github/workflows/_docker-publish.yml b/.github/workflows/_docker-publish.yml index a89e047f..5c80281f 100644 --- a/.github/workflows/_docker-publish.yml +++ b/.github/workflows/_docker-publish.yml @@ -1,4 +1,4 @@ -name: "> Docker Publish" +name: "Publish Docker Image" on: workflow_call: @@ -36,26 +36,37 @@ jobs: - name: Set up Docker Buildx uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1 + - name: Log in to Docker Hub uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0 with: + username: ${{ secrets.DOCKER_USERNAME }} password: ${{ secrets.DOCKER_PASSWORD }} + + - name: Log in to GitHub container registry uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0 with: registry: ghcr.io + username: ${{ github.actor }} password: ${{ secrets.GITHUB_TOKEN }} + - name: "(all target): Extract metadata (tags, labels) for Docker" id: meta-all uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f # v5.8.0 with: + + images: | ${{ env.DOCKER_IO_IMAGE_NAME_ALL }} ghcr.io/${{ github.repository }} + + + tags: | # set latest tag for releases type=raw,value=latest @@ -64,13 +75,19 @@ jobs: type=semver,pattern={{major}}.{{minor}} type=semver,pattern={{major}} + - name: "(slim target): Extract metadata (tags, labels) for Docker" id: meta-slim uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f # v5.8.0 with: + + images: | ${{ env.DOCKER_IO_IMAGE_NAME_SLIM }} ghcr.io/${{ github.repository }}-slim + + + tags: | # set latest tag for releases type=raw,value=latest @@ -79,6 +96,8 @@ jobs: type=semver,pattern={{major}}.{{minor}} type=semver,pattern={{major}} + + - name: "(all target): Build and push Docker image" id: build-and-push-all uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0 @@ -91,6 +110,8 @@ jobs: tags: ${{ steps.meta-all.outputs.tags }} labels: ${{ steps.meta-all.outputs.labels }} + + - name: "(slim target): Build and push Docker image" id: build-and-push-slim uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0 diff --git a/.github/workflows/_install_dev_tools.bash b/.github/workflows/_install_dev_tools.bash index 0a1c8a7d..3024d5c7 100755 --- a/.github/workflows/_install_dev_tools.bash +++ b/.github/workflows/_install_dev_tools.bash @@ -10,19 +10,11 @@ log() { log "Starting installation of development tools..." -# Disable man-db updates to speed up package installation -sudo rm /var/lib/man-db/auto-update - -# Install APT packages wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add - echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list sudo apt-get update -sudo apt-get install --no-install-recommends -y curl gnupg2 jq trivy xsltproc - -# Install further tools not project specific -curl -sL https://sentry.io/get-cli/ | SENTRY_CLI_VERSION="2.57.0" sh +sudo apt-get install --no-install-recommends -y curl gnupg2 imagemagick jq trivy xsltproc -# Install project specific tools .github/workflows/_install_dev_tools_project.bash log "Completed installation of development tools." 
diff --git a/.github/workflows/_install_dev_tools_project.bash b/.github/workflows/_install_dev_tools_project.bash index 4e51ed38..efc0d5fa 100755 --- a/.github/workflows/_install_dev_tools_project.bash +++ b/.github/workflows/_install_dev_tools_project.bash @@ -12,6 +12,6 @@ log "Starting installation of development tools specific to Aignostics Python SD # Add your project specific installation commands below # sudo apt-get install --no-install-recommends -y YOUR_PACKAGE -sudo apt-get install --no-install-recommends -y p7zip-rar imagemagick +sudo apt-get install --no-install-recommends -y p7zip-rar log "Completed installation of development tools specific to Aignostics Python SDK." diff --git a/.github/workflows/_ketryx_report_and_check.yml b/.github/workflows/_ketryx_report_and_check.yml index 4b79ef7e..21232425 100644 --- a/.github/workflows/_ketryx_report_and_check.yml +++ b/.github/workflows/_ketryx_report_and_check.yml @@ -1,16 +1,7 @@ -name: "> Ketryx Report and Check" +name: "Report build to Ketryx and check for approval" on: workflow_call: - inputs: - commit_message: - description: 'Commit message to check for skip markers' - required: false - type: string - - commit-sha: - required: true - type: string secrets: KETRYX_PROJECT: required: false @@ -22,6 +13,7 @@ env: PYTHONIOENCODING: "utf8" jobs: + ketryx_report_and_check: runs-on: ubuntu-latest permissions: @@ -34,41 +26,36 @@ jobs: fetch-depth: 0 - name: Download test results for ubuntu-latest generated in _test.yml - if: | - !contains(inputs.commit_message, 'skip:ketryx') && - !contains(github.event.pull_request.labels.*.name, 'skip:ketryx') + if: (!contains(github.event.head_commit.message, 'skip:ketryx')) uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0 with: - name: test-results-ubuntu-latest - path: test-results + name: test-results-ubuntu-latest - name: Download audit results generated in _audit.yml - if: | - !contains(inputs.commit_message, 'skip:ketryx') && - !contains(github.event.pull_request.labels.*.name, 'skip:ketryx') + if: (!contains(github.event.head_commit.message, 'skip:ketryx')) uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0 with: - name: audit-results - path: audit-results + name: audit-results - name: Report build to Ketryx and check for approval - if: | - !contains(inputs.commit_message, 'skip:ketryx') && - !contains(github.event.pull_request.labels.*.name, 'skip:ketryx') + if: (!contains(github.event.head_commit.message, 'skip:ketryx')) uses: Ketryx/ketryx-github-action@40b13ef68c772e96e58ec01a81f5b216d7710186 # v1.4.0 + continue-on-error: true # TODO(Helmut): Remove post having Ketryx configured to inspect the main branch with: project: ${{ secrets.KETRYX_PROJECT }} api-key: ${{ secrets.KETRYX_API_KEY }} - commit-sha: ${{ inputs.commit-sha }} build-name: "ci-cd" - test-junit-path: test-results/junit_*.xml + check-dependencies-status: true + test-junit-path: reports/junit_*.xml cyclonedx-json-path: | - audit-results/sbom.json + reports/sbom.json artifact-path: | - audit-results/sbom.spdx - audit-results/licenses.csv - audit-results/licenses.json - audit-results/licenses_grouped.json - audit-results/vulnerabilities.json - test-results/coverage.xml - test-results/coverage.md + reports/sbom.spdx + reports/licenses.csv + reports/licenses.json + reports/licenses_grouped.json + reports/vulnerabilities.json + reports/mypy_junit.xml + reports/coverage.xml + reports/coverage.md + aignostics.log diff --git a/.github/workflows/_lint.yml 
b/.github/workflows/_lint.yml index 073e20ab..59e4b08f 100644 --- a/.github/workflows/_lint.yml +++ b/.github/workflows/_lint.yml @@ -1,4 +1,4 @@ -name: "> Lint" +name: "Lint" on: workflow_call: @@ -18,7 +18,7 @@ jobs: fetch-depth: 0 - name: Install uv - uses: astral-sh/setup-uv@2ddd2b9cb38ad8efd50337e8ab201519a34c9f24 # v7.1.1 + uses: astral-sh/setup-uv@eb1897b8dc4b5d5bfe39a428a8f2304605e0983c # v7.0.0 with: version-file: "pyproject.toml" enable-cache: true diff --git a/.github/workflows/_package-publish.yml b/.github/workflows/_package-publish.yml index 31ed75cc..3d1e13b7 100644 --- a/.github/workflows/_package-publish.yml +++ b/.github/workflows/_package-publish.yml @@ -1,28 +1,23 @@ -name: "> Publish Package" +name: "Publish Package" on: workflow_call: - inputs: - commit_message: - description: 'Commit message to check for skip markers' - required: false - type: string secrets: UV_PUBLISH_TOKEN: - required: true + required: false SLACK_WEBHOOK_URL_RELEASE_ANNOUNCEMENT: - required: true + required: false SLACK_CHANNEL_ID_RELEASE_ANNOUNCEMENT: - required: true - SENTRY_AUTH_TOKEN: - required: true + required: false env: # https://gist.github.com/NodeJSmith/e7e37f2d3f162456869f015f842bcf15 PYTHONIOENCODING: "utf8" jobs: + build_native: + runs-on: ${{ matrix.runner }} continue-on-error: ${{ matrix.experimental }} strategy: @@ -55,7 +50,7 @@ jobs: fetch-depth: 0 - name: Install uv - uses: astral-sh/setup-uv@2ddd2b9cb38ad8efd50337e8ab201519a34c9f24 # v7.1.1 + uses: astral-sh/setup-uv@eb1897b8dc4b5d5bfe39a428a8f2304605e0983c # v7.0.0 with: version-file: "pyproject.toml" enable-cache: true @@ -105,7 +100,7 @@ jobs: fetch-depth: 0 - name: Install uv - uses: astral-sh/setup-uv@2ddd2b9cb38ad8efd50337e8ab201519a34c9f24 # v7.1.1 + uses: astral-sh/setup-uv@eb1897b8dc4b5d5bfe39a428a8f2304605e0983c # v7.0.0 with: version-file: "pyproject.toml" cache-dependency-glob: uv.lock @@ -148,6 +143,7 @@ jobs: cd .. 
ls -la dist_native_zipped/ + - name: Final smoke test shell: bash run: | @@ -184,9 +180,7 @@ jobs: uv publish - name: Download test results for ubuntu-latest generated in _test.yml - if: | - (!contains(inputs.commit_message, 'skip:test:all')) && - (!contains(github.event.pull_request.labels.*.name, 'skip:test:all')) + if: (!contains(github.event.head_commit.message, 'skip:test:all')) uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0 with: name: test-results-ubuntu-latest @@ -199,9 +193,7 @@ jobs: path: audit-results - name: Create GitHub release - if: | - (!contains(inputs.commit_message, 'skip:test:all')) && - (!contains(github.event.pull_request.labels.*.name, 'skip:test:all')) + if: (!contains(github.event.head_commit.message, 'skip:test:all')) env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} shell: bash @@ -211,9 +203,7 @@ jobs: --notes-file ${{ steps.git-cliff.outputs.changelog }} - name: Create GitHub release (no test results) - if: | - (contains(inputs.commit_message, 'skip:test:all')) || - (contains(github.event.pull_request.labels.*.name, 'skip:test:all')) + if: (contains(github.event.head_commit.message, 'skip:test:all')) env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} shell: bash @@ -222,16 +212,6 @@ jobs: gh release create ${{ github.ref_name }} ./dist/* ./dist_native_zipped/* ./audit-results/* \ --notes-file ${{ steps.git-cliff.outputs.changelog }} - - name: Inform Sentry about release - uses: getsentry/action-release@4f502acc1df792390abe36f2dcb03612ef144818 # v3.3.0 - env: - SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }} - SENTRY_ORG: ${{ vars.SENTRY_ORG }} - SENTRY_PROJECT: ${{ vars.SENTRY_PROJECT }} - with: - environment: production - release: ${{ github.ref_name }} - - name: Release Announcement uses: slackapi/slack-github-action@91efab103c0de0a537f72a35f6b8cda0ee76bf0a # v2.1.1 with: diff --git a/.github/workflows/_scheduled-audit.yml b/.github/workflows/_scheduled-audit.yml index 12aea02f..85ba919b 100644 --- a/.github/workflows/_scheduled-audit.yml +++ b/.github/workflows/_scheduled-audit.yml @@ -1,4 +1,4 @@ -name: "> Scheduled Audit (Hourly)" +name: "Scheduled Audit" on: workflow_call: @@ -19,7 +19,7 @@ jobs: fetch-depth: 0 - name: Install uv - uses: astral-sh/setup-uv@2ddd2b9cb38ad8efd50337e8ab201519a34c9f24 # v7.1.1 + uses: astral-sh/setup-uv@eb1897b8dc4b5d5bfe39a428a8f2304605e0983c # v7.0.0 with: version-file: "pyproject.toml" enable-cache: true diff --git a/.github/workflows/_scheduled-test-daily.yml b/.github/workflows/_scheduled-test-daily.yml deleted file mode 100644 index e5131aa0..00000000 --- a/.github/workflows/_scheduled-test-daily.yml +++ /dev/null @@ -1,348 +0,0 @@ -name: "> Scheduled Test (Daily)" - -on: - workflow_call: - inputs: - platform_environment: - description: 'Environment to test against, that is staging or production' - required: false - default: 'staging' - type: string - secrets: - AIGNOSTICS_CLIENT_ID_DEVICE_STAGING: - required: true - AIGNOSTICS_REFRESH_TOKEN_STAGING: - required: true - GCP_CREDENTIALS_STAGING: - required: true - BETTERSTACK_HEARTBEAT_URL_FLOWS_STAGING: - required: true - AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION: - required: true - AIGNOSTICS_REFRESH_TOKEN_PRODUCTION: - required: true - GCP_CREDENTIALS_PRODUCTION: - required: true - BETTERSTACK_HEARTBEAT_URL_FLOWS_PRODUCTION: - required: true - CODECOV_TOKEN: - required: true - SONAR_TOKEN: - required: true - SENTRY_DSN: - required: true - -env: - # https://gist.github.com/NodeJSmith/e7e37f2d3f162456869f015f842bcf15 - PYTHONIOENCODING: 
"utf8" - AIGNOSTICS_PLATFORM_ENVIRONMENT: ${{ inputs.platform_environment }} - -jobs: - - test-scheduled-daily: - runs-on: "ubuntu-latest" - permissions: - attestations: write - contents: read - id-token: write - packages: write - steps: - - - name: Checkout - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - with: - fetch-depth: 0 - - - name: Install uv - uses: astral-sh/setup-uv@2ddd2b9cb38ad8efd50337e8ab201519a34c9f24 # v7.1.1 - with: - version-file: "pyproject.toml" - enable-cache: true - cache-dependency-glob: uv.lock - - - name: Install dev tools - shell: bash - run: .github/workflows/_install_dev_tools.bash - - - name: Install Python, venv and dependencies - shell: bash - run: uv sync --all-extras --frozen --link-mode=copy - - # Need xdisplay for testing QuPath app integration - - name: Setup display - uses: pyvista/setup-headless-display-action@7d84ae825e6d9297a8e99bdbbae20d1b919a0b19 # v4.2 - - - name: Print development version info - if: ${{ !startsWith(github.ref, 'refs/tags/v') }} - shell: bash - run: | - TOML_VERSION=$(uv run python -c "import tomli; print(tomli.load(open('pyproject.toml', 'rb'))['project']['version'])") - echo "Development build - Current version in pyproject.toml: $TOML_VERSION" - - - name: Create .env file - uses: SpicyPizza/create-envfile@ace6d4f5d7802b600276c23ca417e669f1a06f6f # v2.0.3 - with: - # The following 3 lines are correct even if vscode complains - envkey_AIGNOSTICS_API_ROOT: ${{ inputs.platform_environment == 'staging' && 'https://platform-staging.aignostics.com' || 'https://platform.aignostics.com' }} - envkey_AIGNOSTICS_CLIENT_ID_DEVICE: ${{ inputs.platform_environment == 'staging' && secrets.AIGNOSTICS_CLIENT_ID_DEVICE_STAGING || secrets.AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION }} - envkey_AIGNOSTICS_REFRESH_TOKEN: ${{ inputs.platform_environment == 'staging' && secrets.AIGNOSTICS_REFRESH_TOKEN_STAGING || secrets.AIGNOSTICS_REFRESH_TOKEN_PRODUCTION }} - fail_on_empty: false - - - name: Set up GCP credentials for bucket access - shell: bash - env: - GCP_CREDENTIALS: ${{ inputs.platform_environment == 'staging' && secrets.GCP_CREDENTIALS_STAGING || secrets.GCP_CREDENTIALS_PRODUCTION }} - run: | - echo "$GCP_CREDENTIALS" | base64 -d > credentials.json - echo "GOOGLE_APPLICATION_CREDENTIALS=$(pwd)/credentials.json" >> $GITHUB_ENV - - - name: Validate installation (single Python version) - id: validate - continue-on-error: true - shell: bash - run: | - OUTPUT=$(uv run --no-dev aignostics --help) - if [[ "$OUTPUT" != *"built with love in Berlin"* ]]; then - echo "Output does not contain 'built with love in Berlin'" - exit 1 - fi - - - name: Test / Smoke (single Python version) - id: smoke - continue-on-error: true - shell: bash - run: | - uv run --no-dev aignostics --help - uv run --all-extras aignostics system info - uv run --all-extras aignostics system health - uv run --all-extras aignostics user whoami --mask-secrets - uv run --all-extras aignostics application list - uv run --all-extras aignostics application run list --verbose --limit 1 - - - name: Test / Unit (multiple Python versions) - id: unit - continue-on-error: true - uses: ./.github/actions/run-tests - with: - test-type: unit - make-target: test_unit_matrix - skip-marker: skip:test:unit - summary-title: All unit tests passed - commit-message: ${{ inputs.commit_message }} - - - name: Test / Integration (multiple Python versions) - id: integration - continue-on-error: true - uses: ./.github/actions/run-tests - with: - test-type: integration - make-target: 
test_integration_matrix - skip-marker: skip:test:integration - summary-title: All integration tests passed - commit-message: ${{ inputs.commit_message }} - - - name: Test / E2E / regular (multiple Python versions) - id: e2e-regular - continue-on-error: true - uses: ./.github/actions/run-tests - with: - test-type: e2e - make-target: test_e2e_matrix - skip-marker: skip:test:e2e - summary-title: All regular e2e tests passed - commit-message: ${{ inputs.commit_message }} - - - name: Test / E2E / long running (single Python version) - id: e2e-long-running - continue-on-error: true - if: | - !github.event.pull_request.draft - uses: ./.github/actions/run-tests - with: - test-type: long-running - make-target: test_long_running - skip-marker: skip:test:long_running - summary-title: All long running e2e tests passed - commit-message: ${{ inputs.commit_message }} - - - name: Test / E2E / very long running (single Python version) - id: e2e-very-long-running - continue-on-error: true - uses: ./.github/actions/run-tests - with: - test-type: very-long-running - make-target: test_very_long_running - summary-title: All very long running e2e tests passed - - - name: Upload test artifacts for inspection - uses: actions/upload-artifact@4cec3d8aa04e39d1a68397de0c4cd6fb9dce8ec1 # v4.6.1 - if: ${{ always() && (env.GITHUB_WORKFLOW_RUNTIME != 'ACT') }} - with: - name: test-results-ubuntu-latest - path: | - reports/mypy_junit.xml - reports/junit_*.xml - reports/coverage.xml - reports/coverage.md - reports/coverage_html - retention-days: 7 - - name: Determine overall test status - id: test-status - if: always() - shell: bash - run: | - # Track which test types failed - FAILED_TESTS=() - EXIT_CODE=0 - - if [ "${{ steps.validate.outcome }}" == "failure" ]; then - FAILED_TESTS+=("validation") - EXIT_CODE=1 - fi - - if [ "${{ steps.smoke.outcome }}" == "failure" ]; then - FAILED_TESTS+=("smoke") - EXIT_CODE=1 - fi - - if [ "${{ steps.unit.outcome }}" == "failure" ]; then - FAILED_TESTS+=("unit") - EXIT_CODE=1 - fi - - if [ "${{ steps.integration.outcome }}" == "failure" ]; then - FAILED_TESTS+=("integration") - EXIT_CODE=1 - fi - - if [ "${{ steps.e2e-regular.outcome }}" == "failure" ]; then - FAILED_TESTS+=("e2e-regular") - EXIT_CODE=1 - fi - - if [ "${{ steps.e2e-long-running.outcome }}" == "failure" ]; then - FAILED_TESTS+=("e2e-long-running") - EXIT_CODE=1 - fi - - if [ "${{ steps.e2e-very-long-running.outcome }}" == "failure" ]; then - FAILED_TESTS+=("e2e-very-long-running") - EXIT_CODE=1 - fi - - # Export for next steps - echo "exit_code=${EXIT_CODE}" >> $GITHUB_OUTPUT - echo "failed_tests=${FAILED_LIST}" >> $GITHUB_OUTPUT - if [ ${#FAILED_TESTS[@]} -eq 0 ]; then - echo "failed_tests=none" >> $GITHUB_OUTPUT - echo "✅ All test types passed" >> $GITHUB_STEP_SUMMARY - else - FAILED_LIST=$(IFS=,; echo "${FAILED_TESTS[*]}") - echo "failed_tests=${FAILED_LIST}" >> $GITHUB_OUTPUT - echo "❌ Failed test types: ${FAILED_LIST}" >> $GITHUB_STEP_SUMMARY - fi - - - name: Heartbeat - if: always() - env: - BETTERSTACK_HEARTBEAT_URL: "${{ inputs.platform_environment == 'staging' && secrets.BETTERSTACK_HEARTBEAT_URL_FLOWS_STAGING || secrets.BETTERSTACK_HEARTBEAT_URL_FLOWS_PRODUCTION }}" - SENTRY_DSN: ${{ secrets.SENTRY_DSN }} - shell: bash - run: | - EXIT_CODE=${{ steps.test-status.outputs.exit_code }} - FAILED_TESTS="${{ steps.test-status.outputs.failed_tests }}" - - # Send heartbeat to Sentry, defining the schedule on the fly - SENTRY_EXIT_CODE=$(sentry-cli monitors run -e CI --schedule "0 12 * * *" --check-in-margin 30 
--max-runtime 1 scheduled-testing-${{ inputs.platform_environment }}-hourly --timezone "Europe/Berlin" -- sh -c "exit $EXIT_CODE") - - # Provide heartbeat to BetterStack for monitoring/alerting if heartbeat url is configured as secret - if [ -n "$BETTERSTACK_HEARTBEAT_URL" ]; then - BETTERSTACK_METADATA_PAYLOAD=$(jq -n \ - --arg github_workflow "${{ github.workflow }}" \ - --arg github_run_url "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}" \ - --arg github_run_id "${{ github.run_id }}" \ - --arg github_job "${{ github.job }}" \ - --arg github_sha "${{ github.sha }}" \ - --arg github_actor "${{ github.actor }}" \ - --arg github_repository "${{ github.repository }}" \ - --arg github_ref "${{ github.ref }}" \ - --arg job_status "${{ job.status }}" \ - --arg github_event_name "${{ github.event_name }}" \ - --arg timestamp "$(date -u +"%Y-%m-%dT%H:%M:%SZ")" \ - --arg failed_tests "${FAILED_TESTS}" \ - --arg validate_status "${{ steps.validate.outcome }}" \ - --arg smoke_status "${{ steps.smoke.outcome }}" \ - --arg unit_status "${{ steps.unit.outcome }}" \ - --arg integration_status "${{ steps.integration.outcome }}" \ - --arg e2e_regular_status "${{ steps.e2e-regular.outcome }}" \ - --arg e2e_long_running_status "${{ steps.e2e-long-running.outcome }}" \ - --arg e2e_very_long_running_status "${{ steps.e2e-very-long-running.outcome }}" \ - '{ - github: { - workflow: $github_workflow, - run_url: $github_run_url, - run_id: $github_run_id, - job: $github_job, - sha: $github_sha, - actor: $github_actor, - repository: $github_repository, - ref: $github_ref, - event_name: $github_event_name - }, - job: { - status: $job_status, - }, - tests: { - failed: $failed_tests, - validation: $validate_status, - smoke: $smoke_status, - unit: $unit_status, - integration: $integration_status, - e2e_regular: $e2e_regular_status, - e2e_long_runnin: $e2e_long_running_status, - e2e_very_long_running: $e2e_very_long_running_status - }, - timestamp: $timestamp, - }' - ) - curl \ - --fail-with-body \ - --silent \ - --request POST \ - --header "Content-Type: application/json" \ - --data-binary "${BETTERSTACK_METADATA_PAYLOAD}" \ - "${BETTERSTACK_HEARTBEAT_URL}/${EXIT_CODE}" - echo "INFO: Sent heartbeat to betterstack with exit code '${EXIT_CODE}' and failed tests: '${FAILED_TESTS}'" - else - echo "WARNING: No BetterStack heartbeat URL configured, skipped heartbeat notification." 
- fi - - - name: Upload coverage reports to Codecov - uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1 - if: ${{ !cancelled() && (env.GITHUB_WORKFLOW_RUNTIME != 'ACT')}} - with: - token: ${{ secrets.CODECOV_TOKEN }} - slug: aignostics/python-sdk - - - name: Upload test results to Codecov - if: ${{ !cancelled() && (env.GITHUB_WORKFLOW_RUNTIME != 'ACT') }} - uses: codecov/test-results-action@47f89e9acb64b76debcd5ea40642d25a4adced9f # v1.1.1 - with: - token: ${{ secrets.CODECOV_TOKEN }} - - - name: SonarQube Scan - if: ${{ !cancelled() && (env.GITHUB_WORKFLOW_RUNTIME != 'ACT') }} - uses: SonarSource/sonarqube-scan-action@fd88b7d7ccbaefd23d8f36f73b59db7a3d246602 # v6.0.0 - env: - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} - SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }} - - - name: Fail workflow if any tests failed - if: always() - shell: bash - run: | - EXIT_CODE=${{ steps.test-status.outputs.exit_code }} - if [ "$EXIT_CODE" != "0" ]; then - echo "❌ Workflow failed due to test failures: ${{ steps.test-status.outputs.failed_tests }}" - exit $EXIT_CODE - fi diff --git a/.github/workflows/_scheduled-test-hourly.yml b/.github/workflows/_scheduled-test.yml similarity index 64% rename from .github/workflows/_scheduled-test-hourly.yml rename to .github/workflows/_scheduled-test.yml index cc89db0f..5964e0eb 100644 --- a/.github/workflows/_scheduled-test-hourly.yml +++ b/.github/workflows/_scheduled-test.yml @@ -1,45 +1,19 @@ -name: "> Scheduled Test (Hourly)" +name: "Scheduled Test" on: workflow_call: - inputs: - platform_environment: - description: 'Environment to test against, that is staging or production' + secrets: + AIGNOSTICS_CLIENT_ID_DEVICE: required: false - default: 'staging' - type: string - branch: - description: 'Branch to checkout (leave empty for default branch)' + AIGNOSTICS_REFRESH_TOKEN: + required: false + GCP_CREDENTIALS: + required: false + BETTERSTACK_HEARTBEAT_URL: required: false - default: '' - type: string - secrets: - AIGNOSTICS_CLIENT_ID_DEVICE_STAGING: - required: true - AIGNOSTICS_REFRESH_TOKEN_STAGING: - required: true - GCP_CREDENTIALS_STAGING: - required: true - BETTERSTACK_HEARTBEAT_URL_STAGING: - required: true - AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION: - required: true - AIGNOSTICS_REFRESH_TOKEN_PRODUCTION: - required: true - GCP_CREDENTIALS_PRODUCTION: - required: true - BETTERSTACK_HEARTBEAT_URL_PRODUCTION: - required: true - SENTRY_DSN: - required: true - -env: - # https://gist.github.com/NodeJSmith/e7e37f2d3f162456869f015f842bcf15 - PYTHONIOENCODING: "utf8" - AIGNOSTICS_PLATFORM_ENVIRONMENT: ${{ inputs.platform_environment }} jobs: - test-scheduled-hourly: + test-scheduled: runs-on: ubuntu-latest permissions: contents: read @@ -48,11 +22,10 @@ jobs: - name: Checkout uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 with: - ref: ${{ inputs.branch || github.ref }} fetch-depth: 0 - name: Install uv - uses: astral-sh/setup-uv@2ddd2b9cb38ad8efd50337e8ab201519a34c9f24 # v7.1.1 + uses: astral-sh/setup-uv@eb1897b8dc4b5d5bfe39a428a8f2304605e0983c # v7.0.0 with: version-file: "pyproject.toml" enable-cache: true @@ -69,24 +42,21 @@ jobs: - name: Create .env file uses: SpicyPizza/create-envfile@ace6d4f5d7802b600276c23ca417e669f1a06f6f # v2.0.3 with: - # The following 3 lines are correct even if vscode complains - envkey_AIGNOSTICS_API_ROOT: ${{ inputs.platform_environment == 'staging' && 'https://platform-staging.aignostics.com' || 'https://platform.aignostics.com' }} - envkey_AIGNOSTICS_CLIENT_ID_DEVICE: ${{ 
inputs.platform_environment == 'staging' && secrets.AIGNOSTICS_CLIENT_ID_DEVICE_STAGING || secrets.AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION }} - envkey_AIGNOSTICS_REFRESH_TOKEN: ${{ inputs.platform_environment == 'staging' && secrets.AIGNOSTICS_REFRESH_TOKEN_STAGING || secrets.AIGNOSTICS_REFRESH_TOKEN_PRODUCTION }} + envkey_AIGNOSTICS_CLIENT_ID_DEVICE: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE }} + envkey_AIGNOSTICS_REFRESH_TOKEN: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN }} fail_on_empty: false - name: Set up GCP credentials for bucket access shell: bash env: - GCP_CREDENTIALS: ${{ inputs.platform_environment == 'staging' && secrets.GCP_CREDENTIALS_STAGING || secrets.GCP_CREDENTIALS_PRODUCTION }} + GCP_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }} run: | echo "$GCP_CREDENTIALS" | base64 -d > credentials.json echo "GOOGLE_APPLICATION_CREDENTIALS=$(pwd)/credentials.json" >> $GITHUB_ENV - name: Test / scheduled env: - BETTERSTACK_HEARTBEAT_URL: "${{ inputs.platform_environment == 'staging' && secrets.BETTERSTACK_HEARTBEAT_URL_STAGING || secrets.BETTERSTACK_HEARTBEAT_URL_PRODUCTION }}" - SENTRY_DSN: ${{ secrets.SENTRY_DSN }} + BETTERSTACK_HEARTBEAT_URL: "${{ secrets.BETTERSTACK_HEARTBEAT_URL }}" shell: bash run: | set +e @@ -113,10 +83,6 @@ jobs: echo "# No test coverage computed." >> $GITHUB_STEP_SUMMARY echo "" >> $GITHUB_STEP_SUMMARY fi - - # Send heartbeat to Sentry, defining the schedule on the fly - SENTRY_EXIT_CODE=$(sentry-cli monitors run -e CI --schedule "0 * * * *" --check-in-margin 30 --max-runtime 1 scheduled-testing-${{ inputs.platform_environment }}-hourly --timezone "Europe/Berlin" -- sh -c "exit $EXIT_CODE") - # Provide heartbeat to BetterStack for monitoring/alerting if heartbeat url is configured as secret if [ -n "$BETTERSTACK_HEARTBEAT_URL" ]; then BETTERSTACK_METADATA_PAYLOAD=$(jq -n \ diff --git a/.github/workflows/_test.yml b/.github/workflows/_test.yml index a4c40832..2ba44b3d 100644 --- a/.github/workflows/_test.yml +++ b/.github/workflows/_test.yml @@ -1,29 +1,13 @@ -name: "> Test" +name: "Test" on: workflow_call: - inputs: - platform_environment: - description: 'Environment to test against, that is staging or production' - required: false - default: 'staging' - type: string - commit_message: - description: 'Commit message to check for skip/enable markers' - required: false - type: string secrets: - AIGNOSTICS_CLIENT_ID_DEVICE_STAGING: - required: false - AIGNOSTICS_REFRESH_TOKEN_STAGING: + AIGNOSTICS_CLIENT_ID_DEVICE: required: false - GCP_CREDENTIALS_STAGING: + AIGNOSTICS_REFRESH_TOKEN: required: false - AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION: - required: false - AIGNOSTICS_REFRESH_TOKEN_PRODUCTION: - required: false - GCP_CREDENTIALS_PRODUCTION: + GCP_CREDENTIALS: required: false CODECOV_TOKEN: required: false @@ -33,7 +17,6 @@ on: env: # https://gist.github.com/NodeJSmith/e7e37f2d3f162456869f015f842bcf15 PYTHONIOENCODING: "utf8" - AIGNOSTICS_PLATFORM_ENVIRONMENT: ${{ inputs.platform_environment }} jobs: @@ -44,16 +27,15 @@ jobs: outputs: matrix: ${{ steps.set-matrix.outputs.matrix }} steps: - - name: Set matrix based on commit message and PR labels + - name: Set matrix based on commit message id: set-matrix env: - COMMIT_MESSAGE: ${{ inputs.commit_message }} - PR_LABELS: ${{ toJSON(github.event.pull_request.labels.*.name) }} + COMMIT_MESSAGE: ${{ github.event.head_commit.message }} run: | - if [[ "$COMMIT_MESSAGE" == *"skip:test:matrix-runner"* ]] || [[ "$PR_LABELS" == *"skip:test:matrix-runner"* ]]; then + if [[ "$COMMIT_MESSAGE" == *"skip:test:matrix-runner"* ]]; then 
echo 'matrix={"runner":["ubuntu-latest"],"experimental":[false]}' >> $GITHUB_OUTPUT else - echo 'matrix={"runner":["ubuntu-latest"],"experimental":[false],"include":[{"runner":"ubuntu-24.04-arm","experimental":true},{"runner":"macos-latest","experimental":true},{"runner":"macos-13","experimental":true},{"runner":"windows-latest","experimental":true}]}' >> $GITHUB_OUTPUT + echo 'matrix={"runner":["ubuntu-latest"],"experimental":[false],"include":[{"runner":"ubuntu-24.04-arm","experimental":true},{"runner":"macos-latest","experimental":true},{"runner":"macos-13","experimental":true},{"runner":"windows-latest","experimental":true},{"runner":"windows-11-arm","experimental":true}]}' >> $GITHUB_OUTPUT fi test: @@ -76,7 +58,7 @@ jobs: fetch-depth: 0 - name: Install uv - uses: astral-sh/setup-uv@2ddd2b9cb38ad8efd50337e8ab201519a34c9f24 # v7.1.1 + uses: astral-sh/setup-uv@eb1897b8dc4b5d5bfe39a428a8f2304605e0983c # v7.0.0 with: version-file: "pyproject.toml" enable-cache: true @@ -113,24 +95,23 @@ jobs: TOML_VERSION=$(uv run python -c "import tomli; print(tomli.load(open('pyproject.toml', 'rb'))['project']['version'])") echo "Development build - Current version in pyproject.toml: $TOML_VERSION" + - name: Create .env file uses: SpicyPizza/create-envfile@ace6d4f5d7802b600276c23ca417e669f1a06f6f # v2.0.3 with: - # The following 3 lines are correct even if vscode complains - envkey_AIGNOSTICS_API_ROOT: ${{ inputs.platform_environment == 'staging' && 'https://platform-staging.aignostics.com' || 'https://platform.aignostics.com' }} - envkey_AIGNOSTICS_CLIENT_ID_DEVICE: ${{ inputs.platform_environment == 'staging' && secrets.AIGNOSTICS_CLIENT_ID_DEVICE_STAGING || secrets.AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION }} - envkey_AIGNOSTICS_REFRESH_TOKEN: ${{ inputs.platform_environment == 'staging' && secrets.AIGNOSTICS_REFRESH_TOKEN_STAGING || secrets.AIGNOSTICS_REFRESH_TOKEN_PRODUCTION }} + envkey_AIGNOSTICS_CLIENT_ID_DEVICE: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE }} + envkey_AIGNOSTICS_REFRESH_TOKEN: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN }} fail_on_empty: false - name: Set up GCP credentials for bucket access shell: bash env: - GCP_CREDENTIALS: ${{ inputs.platform_environment == 'staging' && secrets.GCP_CREDENTIALS_STAGING || secrets.GCP_CREDENTIALS_PRODUCTION }} + GCP_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }} run: | echo "$GCP_CREDENTIALS" | base64 -d > credentials.json echo "GOOGLE_APPLICATION_CREDENTIALS=$(pwd)/credentials.json" >> $GITHUB_ENV - - name: Validate installation (single Python version) + - name: Validate installation shell: bash run: | OUTPUT=$(uv run --no-dev aignostics --help) @@ -139,7 +120,7 @@ jobs: exit 1 fi - - name: Test / Smoke (single Python version) + - name: Smoke tests shell: bash run: | uv run --no-dev aignostics --help @@ -149,56 +130,67 @@ jobs: uv run --all-extras aignostics application list uv run --all-extras aignostics application run list --verbose --limit 1 - - name: Test / Unit (multiple Python versions) - uses: ./.github/actions/run-tests - with: - test-type: unit - make-target: test_unit_matrix - skip-marker: skip:test:unit - summary-title: All unit tests passed - commit-message: ${{ inputs.commit_message }} - - - name: Test / Integration (multiple Python versions) - uses: ./.github/actions/run-tests - with: - test-type: integration - make-target: test_integration_matrix - skip-marker: skip:test:integration - summary-title: All integration tests passed - commit-message: ${{ inputs.commit_message }} - - - name: Test / E2E / regular (multiple Python versions) - uses: 
./.github/actions/run-tests - with: - test-type: e2e - make-target: test_e2e_matrix - skip-marker: skip:test:e2e - summary-title: All regular e2e tests passed - commit-message: ${{ inputs.commit_message }} - - - name: Test / E2E / long running (single Python version) - if: | - !github.event.pull_request.draft - uses: ./.github/actions/run-tests - with: - test-type: long-running - make-target: test_long_running - skip-marker: skip:test:long_running - summary-title: All long running e2e tests passed - commit-message: ${{ inputs.commit_message }} - - - name: Test / E2E / very long running (single Python version) - if: | - contains(inputs.commit_message, 'enable:test:very_long_running') || - contains(github.event.pull_request.labels.*.name, 'enable:test:very_long_running') - uses: ./.github/actions/run-tests - with: - test-type: very-long-running - make-target: test_very_long_running - summary-title: All very long running e2e tests passed - commit-message: ${{ inputs.commit_message }} + - name: Test / regular + if: (!contains(github.event.head_commit.message, 'skip:test:regular')) && (!contains(github.event.head_commit.message, 'skip:test:all')) + shell: bash + run: | + set +e + make test + EXIT_CODE=$? + # Show test execution in GitHub Job summary + found_files=0 + for file in reports/pytest_*.md; do + if [ -f "$file" ]; then + cat "$file" >> $GITHUB_STEP_SUMMARY + echo "" >> $GITHUB_STEP_SUMMARY + found_files=1 + fi + done + if [ $found_files -eq 0 ]; then + echo "# All regular tests passed" >> $GITHUB_STEP_SUMMARY + echo "" >> $GITHUB_STEP_SUMMARY + fi + # Show test coverage in GitHub Job summary + if [ -f "reports/coverage.md" ]; then + cat "reports/coverage.md" >> $GITHUB_STEP_SUMMARY + echo "" >> $GITHUB_STEP_SUMMARY + else + echo "# No test coverage computed." >> $GITHUB_STEP_SUMMARY + echo "" >> $GITHUB_STEP_SUMMARY + fi + exit $EXIT_CODE + + - name: Test / long running + if: (!contains(github.event.head_commit.message, 'skip:test:long-running')) && (!contains(github.event.head_commit.message, 'skip:test:all')) + shell: bash + run: | + set +e + make test_long_running + EXIT_CODE=$? + # Show test execution in GitHub Job summary + found_files=0 + for file in reports/pytest_*.md; do + if [ -f "$file" ]; then + cat "$file" >> $GITHUB_STEP_SUMMARY + echo "" >> $GITHUB_STEP_SUMMARY + found_files=1 + fi + done + if [ $found_files -eq 0 ]; then + echo "# All long running tests passed" >> $GITHUB_STEP_SUMMARY + echo "" >> $GITHUB_STEP_SUMMARY + fi + # Show test coverage in GitHub Job summary + if [ -f "reports/coverage.md" ]; then + cat "reports/coverage.md" >> $GITHUB_STEP_SUMMARY + echo "" >> $GITHUB_STEP_SUMMARY + else + echo "# No test coverage computed." 
>> $GITHUB_STEP_SUMMARY + echo "" >> $GITHUB_STEP_SUMMARY + fi + exit $EXIT_CODE - - name: Upload test artifacts for inspection + - name: Upload test results uses: actions/upload-artifact@4cec3d8aa04e39d1a68397de0c4cd6fb9dce8ec1 # v4.6.1 if: ${{ always() && (env.GITHUB_WORKFLOW_RUNTIME != 'ACT') }} with: diff --git a/.github/workflows/scheduled-audit-hourly.yml b/.github/workflows/audit-scheduled.yml similarity index 57% rename from .github/workflows/scheduled-audit-hourly.yml rename to .github/workflows/audit-scheduled.yml index 4392ca3c..dc0fb4a0 100644 --- a/.github/workflows/scheduled-audit-hourly.yml +++ b/.github/workflows/audit-scheduled.yml @@ -1,15 +1,8 @@ -name: "+ Scheduled Audit (Hourly)" +name: "Scheduled Audit" on: schedule: - cron: '0 * * * *' - workflow_dispatch: - inputs: - branch: - description: 'Branch to test (leave empty for main)' - required: false - type: string - default: '' jobs: audit-scheduled: diff --git a/.github/workflows/build-native-only.yml b/.github/workflows/build-native-only.yml index 5c82d357..1262df9d 100644 --- a/.github/workflows/build-native-only.yml +++ b/.github/workflows/build-native-only.yml @@ -1,4 +1,4 @@ -name: "+ Build Native Only" +name: "Build Native Only" on: push: @@ -11,57 +11,8 @@ on: types: [created] jobs: - get-commit-message: - runs-on: ubuntu-latest - permissions: - contents: read - outputs: - commit_message: ${{ steps.get-commit-message.outputs.commit_message }} - steps: - - name: Checkout - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - with: - fetch-depth: 0 - - - name: Get commit message - id: get-commit-message - shell: bash - run: | - if [ "${{ github.event_name }}" == "pull_request" ]; then - # For PR events, get the commit message from the PR head SHA - COMMIT_MESSAGE=$(git log -1 --format=%B ${{ github.event.pull_request.head.sha }}) - else - # For push events, use the head commit message - COMMIT_MESSAGE="${{ github.event.head_commit.message }}" - fi - # Export for use in other steps (multiline-safe) - echo "commit_message<<EOF" >> $GITHUB_OUTPUT - echo "$COMMIT_MESSAGE" >> $GITHUB_OUTPUT - echo "EOF" >> $GITHUB_OUTPUT - - check-trigger-condition: - runs-on: ubuntu-latest - needs: get-commit-message - permissions: - contents: read - outputs: - should_run: ${{ steps.check.outputs.should_run }} - steps: - - name: Check if workflow should run - id: check - run: | - if [[ "${{ contains(needs.get-commit-message.outputs.commit_message, 'build:native:only') }}" == "true" ]] || \ - [[ "${{ contains(github.event.pull_request.labels.*.name, 'build:native:only') }}" == "true" ]]; then - echo "should_run=true" >> $GITHUB_OUTPUT - echo "✅ Workflow triggered: Found 'build:native:only' in commit message or PR labels" - else - echo "should_run=false" >> $GITHUB_OUTPUT - echo "⏭️ Workflow skipped: 'build:native:only' not found in commit message or PR labels" - fi - only_build_native: - needs: [get-commit-message, check-trigger-condition] - if: needs.check-trigger-condition.outputs.should_run == 'true' + if: (contains(github.event.head_commit.message, 'build:native:only')) uses: ./.github/workflows/_build-native-only.yml permissions: attestations: write diff --git a/.github/workflows/ci-cd.yml b/.github/workflows/ci-cd.yml index 2eeb5d18..04f1b87a 100644 --- a/.github/workflows/ci-cd.yml +++ b/.github/workflows/ci-cd.yml @@ -1,4 +1,4 @@ -name: "+ CI/CD" +name: "CI/CD" on: push: @@ -18,42 +18,8 @@ concurrency: jobs: - get-commit-message: - runs-on: ubuntu-latest - permissions: - contents: read - outputs: -
commit_message: ${{ steps.get-commit-message.outputs.commit_message }} - steps: - - name: Checkout - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - with: - fetch-depth: 0 - - - name: Get commit message - id: get-commit-message - shell: bash - run: | - if [ "${{ github.event_name }}" == "pull_request" ]; then - # For PR events, get the commit message from the PR head SHA - COMMIT_MESSAGE=$(git log -1 --format=%B ${{ github.event.pull_request.head.sha }}) - else - # For push events, use the head commit message - COMMIT_MESSAGE="${{ github.event.head_commit.message }}" - fi - # Export for use in other steps (multiline-safe) - echo "commit_message<<EOF" >> $GITHUB_OUTPUT - echo "$COMMIT_MESSAGE" >> $GITHUB_OUTPUT - echo "EOF" >> $GITHUB_OUTPUT - lint: - needs: [get-commit-message] - if: | - (!contains(needs.get-commit-message.outputs.commit_message, 'skip:ci')) && - (!contains(needs.get-commit-message.outputs.commit_message, 'build:native:only')) && - !(github.ref_type == 'branch' && startsWith(needs.get-commit-message.outputs.commit_message, 'Bump version')) && - (!contains(github.event.pull_request.labels.*.name, 'skip:ci')) && - (!contains(github.event.pull_request.labels.*.name, 'build:native:only')) + if: (!contains(github.event.head_commit.message, 'skip:ci')) && (!contains(github.event.head_commit.message, 'build:native:only')) uses: ./.github/workflows/_lint.yml permissions: contents: read @@ -61,54 +27,31 @@ jobs: packages: read audit: - needs: [get-commit-message] - if: | - (!contains(needs.get-commit-message.outputs.commit_message, 'skip:ci')) && - (!contains(needs.get-commit-message.outputs.commit_message, 'build:native:only')) && - !(github.ref_type == 'branch' && startsWith(needs.get-commit-message.outputs.commit_message, 'Bump version')) && - (!contains(github.event.pull_request.labels.*.name, 'skip:ci')) && - (!contains(github.event.pull_request.labels.*.name, 'build:native:only')) + if: (!contains(github.event.head_commit.message, 'skip:ci')) && (!contains(github.event.head_commit.message, 'build:native:only')) uses: ./.github/workflows/_audit.yml permissions: contents: read id-token: write packages: read - + test: - needs: [get-commit-message] - if: | - (!contains(needs.get-commit-message.outputs.commit_message, 'skip:ci')) && - (!contains(needs.get-commit-message.outputs.commit_message, 'build:native:only')) && - !(github.ref_type == 'branch' && startsWith(needs.get-commit-message.outputs.commit_message, 'Bump version:')) && - (!contains(github.event.pull_request.labels.*.name, 'skip:ci')) && - (!contains(github.event.pull_request.labels.*.name, 'build:native:only')) + if: (!contains(github.event.head_commit.message, 'skip:ci')) && (!contains(github.event.head_commit.message, 'build:native:only')) uses: ./.github/workflows/_test.yml - with: - platform_environment: "production" - commit_message: ${{ needs.get-commit-message.outputs.commit_message }} permissions: attestations: write contents: read id-token: write packages: write secrets: - AIGNOSTICS_CLIENT_ID_DEVICE_STAGING: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_STAGING }} - AIGNOSTICS_REFRESH_TOKEN_STAGING: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_STAGING }} - GCP_CREDENTIALS_STAGING: ${{ secrets.GCP_CREDENTIALS_STAGING }} - AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION }} - AIGNOSTICS_REFRESH_TOKEN_PRODUCTION: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_PRODUCTION }} - GCP_CREDENTIALS_PRODUCTION: ${{ secrets.GCP_CREDENTIALS_PRODUCTION }} + AIGNOSTICS_CLIENT_ID_DEVICE:
${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE }} + AIGNOSTICS_REFRESH_TOKEN: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN }} + GCP_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }} CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }} SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }} + codeql: - needs: [get-commit-message] - if: | - (!contains(needs.get-commit-message.outputs.commit_message, 'skip:ci')) && - (!contains(needs.get-commit-message.outputs.commit_message, 'build:native:only')) && - !(github.ref_type == 'branch' && startsWith(needs.get-commit-message.outputs.commit_message, 'Bump version:')) && - (!contains(github.event.pull_request.labels.*.name, 'skip:ci')) && - (!contains(github.event.pull_request.labels.*.name, 'build:native:only')) + if: (!contains(github.event.head_commit.message, 'skip:ci')) && (!contains(github.event.head_commit.message, 'build:native:only')) uses: ./.github/workflows/_codeql.yml permissions: actions: read @@ -116,18 +59,13 @@ jobs: packages: read security-events: write + ketryx_report_and_check: - needs: [get-commit-message, lint, audit, test, codeql] - if: | - (!contains(needs.get-commit-message.outputs.commit_message, 'skip:ci')) && - (!contains(needs.get-commit-message.outputs.commit_message, 'build:native:only')) && - !(github.ref_type == 'branch' && startsWith(needs.get-commit-message.outputs.commit_message, 'Bump version:')) && - (!contains(github.event.pull_request.labels.*.name, 'skip:ci')) && - (!contains(github.event.pull_request.labels.*.name, 'build:native:only')) + + needs: [lint, audit, test, codeql] + uses: ./.github/workflows/_ketryx_report_and_check.yml - with: - commit-sha: ${{ github.event.pull_request.head.sha || github.sha }} - commit_message: ${{ needs.get-commit-message.outputs.commit_message }} + if: (!contains(github.event.head_commit.message, 'skip:ci')) && (!contains(github.event.head_commit.message, 'build:native:only')) permissions: attestations: write contents: write @@ -138,15 +76,11 @@ jobs: KETRYX_API_KEY: ${{ secrets.KETRYX_API_KEY }} package_publish: - needs: [get-commit-message, ketryx_report_and_check] + + needs: [ketryx_report_and_check] + uses: ./.github/workflows/_package-publish.yml - if: | - (startsWith(github.ref, 'refs/tags/v') && (!contains(needs.get-commit-message.outputs.commit_message, 'skip:ci'))) && - (!contains(needs.get-commit-message.outputs.commit_message, 'build:native:only')) && - (!contains(github.event.pull_request.labels.*.name, 'skip:ci')) && - (!contains(github.event.pull_request.labels.*.name, 'build:native:only')) - with: - commit_message: ${{ needs.get-commit-message.outputs.commit_message }} + if: (startsWith(github.ref, 'refs/tags/v') && (!contains(github.event.head_commit.message, 'skip:ci'))) && (!contains(github.event.head_commit.message, 'build:native:only')) permissions: attestations: write contents: write @@ -156,15 +90,12 @@ jobs: UV_PUBLISH_TOKEN: ${{ secrets.UV_PUBLISH_TOKEN }} SLACK_WEBHOOK_URL_RELEASE_ANNOUNCEMENT: ${{ secrets.SLACK_WEBHOOK_URL_RELEASE_ANNOUNCEMENT }} SLACK_CHANNEL_ID_RELEASE_ANNOUNCEMENT: ${{ secrets.SLACK_CHANNEL_ID_RELEASE_ANNOUNCEMENT }} - SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }} docker_publish: - needs: [get-commit-message, ketryx_report_and_check] - if: | - (startsWith(github.ref, 'refs/tags/v') && (!contains(needs.get-commit-message.outputs.commit_message, 'skip:ci'))) && - (!contains(needs.get-commit-message.outputs.commit_message, 'build:native:only')) && - (!contains(github.event.pull_request.labels.*.name, 'skip:ci')) && - (!contains(github.event.pull_request.labels.*.name, 
'build:native:only')) + + needs: [ketryx_report_and_check] + + if: (startsWith(github.ref, 'refs/tags/v') && (!contains(github.event.head_commit.message, 'skip:ci'))) && (!contains(github.event.head_commit.message, 'build:native:only')) uses: ./.github/workflows/_docker-publish.yml permissions: attestations: write diff --git a/.github/workflows/claude-agent.yml b/.github/workflows/claude-agent.yml new file mode 100644 index 00000000..302172ec --- /dev/null +++ b/.github/workflows/claude-agent.yml @@ -0,0 +1,53 @@ +name: Claude Code + +on: + issue_comment: + types: [created] + pull_request_review_comment: + types: [created] + issues: + types: [opened, assigned] + pull_request_review: + types: [submitted] + +jobs: + claude: + if: | + (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) || + (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) || + (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) || + (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude'))) + runs-on: ubuntu-latest + permissions: + contents: write + pull-requests: write + issues: write + id-token: write + actions: read # Required for Claude to read CI results on PRs + steps: + - name: Checkout repository + uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 + with: + fetch-depth: 1 + + - name: Run Claude Code + id: claude + uses: anthropics/claude-code-action@v1 + with: + anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }} + claude_args: + --max-turns 20 + --model claude-sonnet-4-5-20250929 + + # This is an optional setting that allows Claude to read CI results on PRs + additional_permissions: | + actions: read + + # Optional: Give a custom prompt to Claude. If this is not specified, Claude will perform the instructions specified in the comment that tagged it. + # prompt: 'Update the pull request description to include a summary of changes.' 
+ + # Optional: Add claude_args to customize behavior and configuration + # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md + # or https://docs.claude.com/en/docs/claude-code/sdk#command-line for available options + # claude_args: '--model claude-opus-4-1-20250805 --allowed-tools Bash(gh pr:*)' + diff --git a/.github/workflows/claude-code-automation-operational-excellence-weekly.yml b/.github/workflows/claude-code-automation-operational-excellence-weekly.yml deleted file mode 100644 index 71d6649f..00000000 --- a/.github/workflows/claude-code-automation-operational-excellence-weekly.yml +++ /dev/null @@ -1,303 +0,0 @@ -name: "+ Claude Code / Automation / Operational Excellence (Weekly)" - -on: - schedule: - # Every Monday at 6:00 AM UTC - - cron: "0 6 * * 1" - workflow_dispatch: - inputs: - platform_environment: - description: 'Environment' - required: false - default: 'staging' - type: choice - options: - - staging - - production - -jobs: - operational-excellence: - uses: ./.github/workflows/_claude-code.yml - with: - platform_environment: ${{ inputs.platform_environment || 'staging' }} - mode: 'automation' - track_progress: ${{ github.event_name != 'workflow_dispatch' && true || false }} - allowed_tools: 'Read,Write,Edit,Glob,Grep,LS,,WebFetch,WebSearch,Bash(git:*),Bash(gh:*)' - prompt: | - # 🎯 AI-POWERED OPERATIONAL EXCELLENCE AUDIT - - **REPO**: ${{ github.repository }} - **DATE**: $(date -u +"%Y-%m-%d %H:%M UTC") - - ## Your Mission - - Perform weekly quality audit focusing on **human judgment** areas that automated - tools (Ruff, MyPy, PyRight, Codecov, SonarQube, etc.) cannot assess. - - Read and apply standards from: - - **CODE_STYLE.md** - Coding standards for humans and AI assistants - - **CONTRIBUTING.md** - Development workflow - - **OPERATIONAL_EXCELLENCE.md** - Toolchain overview - - **Best practices** - Research independently, using web search and loading web pages as needed - - Be critical, do never just rubber-stamp. - - Getting 5 stars must be challenging; look for subtle issues. - - Insist to raise the bar; aim for excellence in every aspect. - - Provide clear examples to illustrate your points; don't just state opinions. - - Apply radical candor in your feedback; care personally while challenging directly. - - Prioritize findings by impact on customer experience, maintainability, and security - - Ultrathink to find patterns and learnings; don't settle for surface-level observations. - - ## Audit Areas - - ### 1. Documentation Quality ⭐ PRIMARY - - **CLAUDE.md Files** - Find and review all dynamically: - ```bash - find . -name "CLAUDE.md" -type f - ``` - - For each file, assess: - - **Accuracy**: Does doc match code? Verify imports, signatures, examples - - **Clarity**: Can humans understand? Is context provided (why, not just what)? - - **Completeness**: Missing features? Outdated references? - - **Check for missing CLAUDE.md**: - ```bash - # Modules with _service.py but no CLAUDE.md - find src/aignostics -name "_service.py" -exec dirname {} \; | while read dir; do - [ ! -f "$dir/CLAUDE.md" ] && echo "Missing: $dir/CLAUDE.md" - done - ``` - - **Suggest new CLAUDE.md** where valuable (complex modules, integrations, etc.) - - **docs/partials/*.md** - Check narrative flow, working examples - - **Top-level docs** - CONTRIBUTING.md, CODE_STYLE.md, SECURITY.md accuracy - - ### 2. Docstring Quality - - Sample 10-20 docstrings from key modules. 
Assess meaningfulness: - - ❌ Vague: "Returns the result" - - ✅ Specific: "Returns Run with status 'pending', signed URLs valid 7 days" - - ### 3. Code Readability - - Review recent commits for human comprehension: - ```bash - git log --since="2 weeks ago" --name-only --pretty=format: | sort -u | grep "\.py$" - ``` - - Sample 5-10 files - check intent clarity, variable names, helpful comments - - ### 4. Architecture Consistency - - Verify modulith principles - do modules follow BaseService pattern? - - ### 5. Technical Debt Patterns - - ```bash - grep -rn "TODO\|FIXME\|HACK" src/ tests/ --include="*.py" - ``` - - Analyze by age (git blame), impact, patterns. Prioritize 3-5 items. - - ### 6. Meta-Improvements - - Suggest improvements to this workflow itself! Missing checks? Better approach? - - ## Output: Parent Issue + Branches - - ### Create Parent Issue - - ```bash - cat > /tmp/oe-report.md << 'EOF' - # 🤖 Operational Excellence - Weekly Quality Audit - - **Date**: $(date -u +"%Y-%m-%d") - **Commit**: $(git rev-parse --short HEAD) - - ## 🎯 Executive Summary - - [2-3 sentences overview] - - **Assessment**: 🟢 Excellent / 🟡 Good / 🟠 Needs Attention / 🔴 Critical - - ## 🏆 Quality Champions Leaderboard - - ### Top Contributors (Last 7 Days) - ```bash - git log --since="7 days ago" --pretty=format:"%an" | sort | uniq -c | sort -rn | head -5 - ``` - - 1. 🥇 **@contributor1** - [commits] commits, [quality score] - 2. 🥈 **@contributor2** - [commits] commits, [quality score] - 3. 🥉 **@contributor3** - [commits] commits, [quality score] - - ### 🎖️ Special Recognition - - **📚 Best Documentation**: @[name] - [specific achievement] - - **✨ Code Clarity Award**: @[name] - [specific example] - - **🏛️ Architecture Excellence**: @[name] - [what they did] - - **📈 Most Improved**: @[name] - [improvement metric] - - ### 📊 Team Stats - - Commits this week: [X] - - Quality score: [X]/10 (trend ↑↓) - - Technical debt: [X] TODOs (trend ↑↓) - - CLAUDE.md files: [X] (new: [Y]) - - ## 📋 Findings & Fix Branches - - For each finding below, a branch has been created. Review, create PR, merge, or close. - - ### 1️⃣ Documentation Issues - - - [ ] **Branch: `oe/fix-doc-module-x`** - CLAUDE.md accuracy issue - - File: `src/aignostics/module/CLAUDE.md:45` - - Issue: [specific problem] - - Fix: [what was changed] - - Commands: `git checkout oe/fix-doc-module-x` → review → create PR - - - [ ] **Branch: `oe/add-missing-claude-md`** - Missing CLAUDE.md - - Missing in: `src/aignostics/newmodule/` - - Rationale: [why needed] - - Commands: `git checkout oe/add-missing-claude-md` → review → create PR - - ### 2️⃣ Docstring Quality - - - [ ] **Branch: `oe/improve-docstrings-platform`** - Vague docstrings - - Files: [list] - - Before/After examples in branch - - Commands: `git checkout oe/improve-docstrings-platform` → review → create PR - - ### 3️⃣ Code Readability - - - [ ] **Discussion Needed** - Complex logic in `module/_file.py:123` - - Current: [description of issue] - - Suggestion: [how to improve] - - No branch (needs design discussion) - - ### 4️⃣ Architecture - - ✅ **All Good** - Modulith principles followed consistently - - ### 5️⃣ Technical Debt Priority - - 1. [ ] **Branch: `oe/fix-todo-auth-refresh`** - Auth token refresh (6mo old) - 2. [ ] **Branch: `oe/refactor-tile-processing`** - Recurring pattern (4 modules) - 3. 
[ ] **Discussion Needed** - [item requiring design decision] - - ### 6️⃣ Meta-Improvements - - Suggestions for this workflow: - - [ ] Add check for [X] - - [ ] Consider tool [Y] for [Z] - - [ ] Workflow could be improved by [...] - - ## 🎓 Kudos - - **Excellent Examples This Week**: - - `wsi._service.extract_tiles()` - Clear, well-documented, great naming - - `platform.CLAUDE.md` - Perfect module documentation template - - [Other positive examples] - - ## 📈 Trends - - - Documentation drift: [improving/stable/worsening] - - Code quality: [improving/stable/worsening] - - Team velocity: [metric] - - --- - *Next audit: [next Monday date]* - *Workflow: `.github/workflows/claude-code-automation-operational-excellence.yml`* - EOF - - gh issue create \ - --title "[OE Audit] $(date +%Y-%m-%d)" \ - --body-file /tmp/oe-report.md \ - --label "documentation,code-quality,automated-check" - ``` - - ### Create Fix Branches - - For **each actionable finding**, create a branch with the fix: - - ```bash - # Example: Fix CLAUDE.md in module X - git checkout -b oe/fix-doc-module-x main - # Make the fix (edit the file) - git add . - git commit -m "docs(module): fix CLAUDE.md accuracy issue - - - Corrected method signature documentation - - Updated example code - - Fixed import path - - Ref: [OE Audit] YYYY-MM-DD" - git push origin oe/fix-doc-module-x - - # Add link to issue body - echo "- Branch: oe/fix-doc-module-x" >> /tmp/branches.txt - ``` - - **Branch naming convention**: `oe/fix-{category}-{brief-desc}` - - `oe/fix-doc-*` - Documentation fixes - - `oe/improve-docstrings-*` - Docstring improvements - - `oe/refactor-*` - Code readability refactors - - `oe/add-*` - New documentation/files - - **Guidelines for branches**: - - Only create branches for **clear, mechanical fixes** - - Don't create branches for items needing discussion - - Each branch = one atomic fix - - Branch commit message references parent issue - - Push branches but **don't create PRs** (users will) - - ### If No Issues Found - - ```markdown - # ✅ Operational Excellence Audit - All Clear - - No significant issues detected this week! - - ## 🏆 Leaderboard - - [Include leaderboard anyway] - - ## 🎓 Highlights - - - [Something done especially well] - - [Great example to follow] - - Keep up the excellent work! 🎉 - - Next audit: [date] - ``` - - ## Important Guidelines - - 1. **Be constructive** - Frame as opportunities, not criticism - 2. **Be specific** - File:line references, before/after examples - 3. **Be pragmatic** - Focus on high-impact items - 4. **Be positive** - Always highlight good work (leaderboard, kudos) - 5. **Be actionable** - Clear steps to resolve each item - 6. **Be collaborative** - Users create PRs, you provide branches - - ## Medical Device Context - - Documentation quality = regulatory compliance = patient safety. - Be thorough on documentation accuracy - it's not just "nice to have". - - ## Meta - - Suggest improvements to this workflow! You're continuously learning what works. 
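The deleted workflows above and below select staging or production values with GitHub Actions' `cond && a || b` expression idiom, since the expression syntax has no real ternary operator. A minimal sketch of the pattern as it appears in these files; the caveat in the comment is the one property worth remembering:

```yaml
env:
  # Idiom from the deleted workflows: picks the staging URL when the input
  # says 'staging', otherwise falls back to production.
  AIGNOSTICS_API_ROOT: ${{ inputs.platform_environment == 'staging' && 'https://platform-staging.aignostics.com' || 'https://platform.aignostics.com' }}
  # Caveat: if the && branch evaluates to an empty string (e.g. an unset
  # staging secret), the || fallback is taken even when the condition is true.
```

The same idiom shows up as `${{ inputs.platform_environment || 'staging' }}` for defaulting and `${{ github.event_name != 'workflow_dispatch' && true || false }}` for booleans.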
- - secrets: - ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }} - AIGNOSTICS_CLIENT_ID_DEVICE_STAGING: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_STAGING }} - AIGNOSTICS_REFRESH_TOKEN_STAGING: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_STAGING }} - GCP_CREDENTIALS_STAGING: ${{ secrets.GCP_CREDENTIALS_STAGING }} - AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION }} - AIGNOSTICS_REFRESH_TOKEN_PRODUCTION: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_PRODUCTION }} - GCP_CREDENTIALS_PRODUCTION: ${{ secrets.GCP_CREDENTIALS_PRODUCTION }} diff --git a/.github/workflows/claude-code-automation-pr-review.yml b/.github/workflows/claude-code-automation-pr-review.yml deleted file mode 100644 index 30038a49..00000000 --- a/.github/workflows/claude-code-automation-pr-review.yml +++ /dev/null @@ -1,229 +0,0 @@ -name: "+ Claude Code / Automation / PR Review" - -on: - pull_request: - types: [opened, synchronize] - workflow_dispatch: - inputs: - platform_environment: - description: 'Environment to use' - required: false - default: 'staging' - type: choice - options: - - staging - - production - -jobs: - claude-review: - uses: ./.github/workflows/_claude-code.yml - with: - platform_environment: ${{ inputs.platform_environment || 'staging' }} - mode: 'automation' - track_progress: ${{ github.event_name != 'workflow_dispatch' && true || false }} - allowed_tools: 'mcp__github_inline_comment__create_inline_comment,Read,Write,Edit,MultiEdit,Glob,Grep,LS,WebFetch,WebSearch,Bash(git:*),Bash(bun:*),Bash(npm:*),Bash(npx:*),Bash(gh:*),Bash(uv:*),Bash(make:*)' - prompt: | - # PR REVIEW FOR AIGNOSTICS PYTHON SDK - - **REPO**: ${{ github.repository }} - **PR**: #${{ github.event.pull_request.number }} - **BRANCH**: ${{ github.head_ref }} - - ## Context - - This is a **medical device software SDK** for computational pathology. You are reviewing code that: - - Processes whole slide images (WSI) - gigapixel medical images - - Integrates with FDA/MDR regulated AI/ML applications - - Handles sensitive medical data (HIPAA compliance required) - - Follows a **modulith architecture** with strict module boundaries - - ## Documentation References - - **IMPORTANT**: This repository has comprehensive documentation you MUST reference: - - **CLAUDE.md** (root) - Architecture, testing workflow, development standards - - **.github/CLAUDE.md** - Complete CI/CD guide (19 workflows, test strategy) - - **src/aignostics/*/CLAUDE.md** - Module-specific implementation guides - - Read these files and apply their guidance throughout your review. - - ## CRITICAL CHECKS (BLOCKING - Must Pass) - - ### 1. Test Markers (CRITICAL!) - - **EVERY test MUST have at least one marker: `unit`, `integration`, or `e2e`** - - Tests without these markers will **NOT run in CI** because the pipeline explicitly filters by markers. - - ```bash - # Find unmarked tests (should return 0 tests): - uv run pytest -m "not unit and not integration and not e2e" --collect-only - - # If unmarked tests found, check the specific files changed in this PR: - uv run pytest tests/aignostics//_test.py --collect-only - ``` - - **Action if violations found**: - - List all test files missing markers with file paths and line numbers - - Provide exact decorator to add (e.g., `@pytest.mark.unit`) - - Reference `.github/CLAUDE.md` test categorization guide - - ### 2. Test Coverage (85% Minimum) - - If coverage drops below 85%, identify which new code lacks tests. - - ### 3. 
Code Quality (Must Pass) - - ```bash - # Check linting: - make lint - # Runs: ruff format, ruff check, pyright (basic), mypy (strict) - ``` - - All 4 checks must pass. If failures occur, provide specific fixes. - - ### 4. Conventional Commits - - All commits must follow: `type(scope): description` - - Valid types: `feat`, `fix`, `docs`, `refactor`, `test`, `chore`, `ci`, `perf`, `build` - - ```bash - # Check commit messages: - git log --oneline origin/main..HEAD - ``` - - ## Repository-Specific Review Areas - - ### 5. Architecture Compliance - - **Modulith Principles**: - - Each module is self-contained (Service + optional CLI + optional GUI) - - **CRITICAL**: CLI and GUI must NEVER depend on each other - - Both depend on Service layer only - - No circular dependencies between modules - - **Service Pattern**: - - All services inherit from `BaseService` (from `utils`) - - Must implement `health()` → `Health` and `info()` → `dict` - - Use dependency injection via `locate_implementations(BaseService)` - - No decorator-based DI - - **Check**: - ```bash - # Find all service implementations: - grep -r "class.*Service.*BaseService" src/aignostics/ - - # Check for circular imports: - # Look for cross-module imports that shouldn't exist - ``` - - ### 6. Testing Strategy - - **Test Categories** (see `.github/CLAUDE.md` for full details): - - `unit`: Isolated, no external deps, <10s timeout, sequential (XDIST_WORKER_FACTOR=0.0) - - `integration`: Mocked external services, <5min, limited parallel (0.2) - - `e2e`: Real API calls, staging environment, full parallel (1.0) - - `long_running`: E2E tests ≥5min, aggressive parallel (2.0), opt-out with label - - `very_long_running`: E2E tests ≥60min, only run when explicitly enabled - - **For new tests, verify**: - - Correct marker applied based on test characteristics - - Timeout set if >10s (use `@pytest.mark.timeout(60)`) - - E2E tests only run against staging (never production in PR CI) - - ### 7. Medical Device & Security - - **Check for**: - - Secrets/tokens in code or commits (use .env, never hardcode) - - Sensitive data masking in logs (use `mask_secrets=True`) - - Medical data handling (DICOM compliance, proper anonymization) - - OAuth token management (refresh before 5min expiry) - - **WSI Processing**: - - Large images processed in tiles (never load full image in memory) - - Proper cleanup of temp files - - Progress tracking for long operations - - ### 8. Breaking Changes - - **Check if PR introduces**: - - API client method signature changes - - CLI command changes (breaking user scripts) - - Environment variable changes - - Configuration file format changes - - OpenAPI model changes (from codegen) - - If breaking changes detected, verify: - - Proper deprecation warnings added - - Migration guide in PR description - - Version bump appropriate (major vs minor) - - ### 9. CI/CD Impact - - **If workflow files changed** (`.github/workflows/*.yml`): - - Verify syntax is correct - - Check reusable workflow inputs/secrets match - - Validate test marker filtering logic - - Ensure BetterStack heartbeats still work - - **If new tests added**: - - Are they categorized correctly for CI execution? - - Do they need to be scheduled tests? - - Is XDIST_WORKER_FACTOR appropriate? - - ### 10. Documentation Updates - - **If new features added**: - - Is CLAUDE.md updated? (root or module-specific) - - Are docstrings added (Google style)? - - Is README updated if user-facing change? - - Are type hints complete? 
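Section 9 of this prompt asks the reviewer to check that "reusable workflow inputs/secrets match". The contract is two-sided: the callee declares its interface under `workflow_call`, and the caller's `with:`/`secrets:` keys must line up with it, or the run fails at setup time. A minimal sketch with hypothetical names:

```yaml
# Callee, e.g. .github/workflows/_example.yml — declares what it accepts
name: "Example"
on:
  workflow_call:
    inputs:
      platform_environment:
        required: false
        default: 'staging'
        type: string
    secrets:
      EXAMPLE_TOKEN:
        required: false  # optional, so callers without the secret still work
jobs:
  example:
    runs-on: ubuntu-latest
    steps:
      - run: echo "environment=${{ inputs.platform_environment }}"
```

```yaml
# Caller job, e.g. inside ci-cd.yml — keys must match the callee's declarations
jobs:
  example:
    uses: ./.github/workflows/_example.yml
    with:
      platform_environment: 'production'
    secrets:
      EXAMPLE_TOKEN: ${{ secrets.EXAMPLE_TOKEN }}
```

Marking secrets `required: false`, as the reworked `_test.yml` and `_package-publish.yml` do, keeps forks and secret-less environments green at the cost of weaker guarantees inside the callee.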
- - ## Code Quality Standards - - - **Type Checking**: Dual checking (MyPy strict + PyRight basic) - both must pass - - **Line Length**: 120 chars max - - **Imports**: stdlib → 3rd-party → local, use relative imports within modules - - **Docstrings**: Google style for all public APIs - - **Error Handling**: Custom exceptions from `system/_exceptions.py` - - **Logging**: Structured logging via `get_logger(__name__)` - - ## Review Output Format - - For each issue found, provide: - - 1. **Location**: `file_path:line_number` (e.g., `src/aignostics/platform/_client.py:123`) - 2. **Issue**: Clear description with reference to CLAUDE.md section - 3. **Reproduce**: Exact command to reproduce (e.g., `uv run pytest tests/...`) - 4. **Fix**: Concrete code example or command - 5. **Verify**: Command to verify fix works (e.g., `make lint && make test`) - - **Use inline comments** for specific code issues. - **Use PR-level comment** for: - - Summary of findings (blocking vs suggestions) - - Overall architecture feedback - - Praise for excellent work - - Missing CLAUDE.md updates needed - - ## Final Steps - - After completing your review: - - 1. **Summarize findings** in PR comment using `gh pr comment` - 2. **Categorize issues**: Blocking (must fix) vs Suggestions (nice to have) - 3. **Provide commands** to fix all blocking issues in one go if possible - 4. **Reference documentation**: Link to relevant CLAUDE.md sections - - Use `gh pr comment` with your Bash tool to leave your comprehensive review as a comment on the PR. - - --- - - **Remember**: This is medical device software. Insist on highest standards. Be thorough, actionable, and kind. - secrets: - ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }} - AIGNOSTICS_CLIENT_ID_DEVICE_STAGING: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_STAGING }} - AIGNOSTICS_REFRESH_TOKEN_STAGING: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_STAGING }} - GCP_CREDENTIALS_STAGING: ${{ secrets.GCP_CREDENTIALS_STAGING }} - AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION }} - AIGNOSTICS_REFRESH_TOKEN_PRODUCTION: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_PRODUCTION }} - GCP_CREDENTIALS_PRODUCTION: ${{ secrets.GCP_CREDENTIALS_PRODUCTION }} diff --git a/.github/workflows/claude-code-interactive.yml b/.github/workflows/claude-code-interactive.yml deleted file mode 100644 index 424ff728..00000000 --- a/.github/workflows/claude-code-interactive.yml +++ /dev/null @@ -1,46 +0,0 @@ -name: "+ Claude Code / Interactive" - -on: - issue_comment: - types: [created] - pull_request_review_comment: - types: [created] - issues: - types: [opened, assigned, labeled] - pull_request: - types: [opened, assigned, labeled] - pull_request_review: - types: [submitted] - workflow_dispatch: - inputs: - platform_environment: - description: 'Environment to use' - required: false - default: 'staging' - type: choice - options: - - staging - - production - -jobs: - claude: - if: | - (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) || - (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) || - (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) || - (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude') || (github.event.action == 'labeled' && github.event.label.name == 'claude'))) || - (github.event_name == 'pull_request' && (contains(github.event.pull_request.body, '@claude') 
|| contains(github.event.pull_request.title, '@claude') || (github.event.action == 'labeled' && github.event.label.name == 'claude'))) || - github.event_name == 'workflow_dispatch' - uses: ./.github/workflows/_claude-code.yml - with: - platform_environment: ${{ inputs.platform_environment || 'staging' }} - mode: 'interactive' - track_progress: ${{ github.event_name != 'workflow_dispatch' && true || false }} - secrets: - ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }} - AIGNOSTICS_CLIENT_ID_DEVICE_STAGING: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_STAGING }} - AIGNOSTICS_REFRESH_TOKEN_STAGING: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_STAGING }} - GCP_CREDENTIALS_STAGING: ${{ secrets.GCP_CREDENTIALS_STAGING }} - AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION }} - AIGNOSTICS_REFRESH_TOKEN_PRODUCTION: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_PRODUCTION }} - GCP_CREDENTIALS_PRODUCTION: ${{ secrets.GCP_CREDENTIALS_PRODUCTION }} diff --git a/.github/workflows/claude-code-review.yml b/.github/workflows/claude-code-review.yml new file mode 100644 index 00000000..7cc0639f --- /dev/null +++ b/.github/workflows/claude-code-review.yml @@ -0,0 +1,59 @@ +name: Claude Code Review + +on: + pull_request: + types: [opened, synchronize] + # Optional: Only run on specific file changes + # paths: + # - "src/**/*.ts" + # - "src/**/*.tsx" + # - "src/**/*.js" + # - "src/**/*.jsx" + +jobs: + claude-review: + # Optional: Filter by PR author + # if: | + # github.event.pull_request.user.login == 'external-contributor' || + # github.event.pull_request.user.login == 'new-developer' || + # github.event.pull_request.author_association == 'FIRST_TIME_CONTRIBUTOR' + + runs-on: ubuntu-latest + permissions: + contents: read + pull-requests: write + issues: read + id-token: write + + steps: + - name: Checkout repository + uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 + with: + fetch-depth: 1 + + - name: Run Claude Code Review + id: claude-review + uses: anthropics/claude-code-action@v1 + with: + anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }} + claude_args: + --max-turns 20 + --model claude-sonnet-4-5-20250929 + --allowedTools "mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*)" + prompt: | + REPO: ${{ github.repository }} + PR NUMBER: ${{ github.event.pull_request.number }} + + Please review this pull request and provide feedback on: + - Code quality and best practices + - Potential bugs or issues + - Performance considerations + - Security concerns + - Test coverage + + Use the repository's CLAUDE.md for guidance on style and conventions. Be constructive and helpful in your feedback. + + Use `gh pr comment` with your Bash tool to leave your review as a comment on the PR. 
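+
+          # Note: with the --allowedTools list above, the reviewer can inspect the
+          # PR (gh pr diff, gh pr view) and publish feedback (inline comments via
+          # the MCP tool, plus gh pr comment), but has no tools for editing files
+          # or running builds and tests.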
+ + # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md + # or https://docs.claude.com/en/docs/claude-code/sdk#command-line for available options diff --git a/.github/workflows/codeql-scheduled.yml b/.github/workflows/codeql-scheduled.yml index add45854..ea33590f 100644 --- a/.github/workflows/codeql-scheduled.yml +++ b/.github/workflows/codeql-scheduled.yml @@ -1,4 +1,4 @@ -name: "+ Scheduled CodeQL" +name: "Scheduled CodeQL" on: schedule: diff --git a/.github/workflows/labels-sync.yml b/.github/workflows/labels-sync.yml deleted file mode 100644 index 6bb1f5ed..00000000 --- a/.github/workflows/labels-sync.yml +++ /dev/null @@ -1,27 +0,0 @@ -name: "+ Sync Labels" - -on: - push: - branches: - - main - paths: - - .github/labels.yml - - .github/workflows/labels-sync.yml - workflow_dispatch: - -jobs: - sync: - runs-on: ubuntu-latest - permissions: - issues: write - pull-requests: write - steps: - - name: Checkout repository - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0 - - - name: Run Labeler - uses: crazy-max/ghaction-github-labeler@24d110aa46a59976b8a7f35518cb7f14f434c916 # v5.3.0 - with: - github-token: ${{ secrets.GITHUB_TOKEN }} - yaml-file: .github/labels.yml - dry-run: false diff --git a/.github/workflows/scheduled-testing-production-daily.yml b/.github/workflows/scheduled-testing-production-daily.yml deleted file mode 100644 index aece605d..00000000 --- a/.github/workflows/scheduled-testing-production-daily.yml +++ /dev/null @@ -1,36 +0,0 @@ -name: "+ Scheduled Testing / Production (Daily)" - -on: - schedule: - - cron: '0 12 * * *' - workflow_dispatch: - inputs: - branch: - description: 'Branch to test (leave empty for main)' - required: false - type: string - default: '' - -jobs: - - scheduled-testing-production-daily: - uses: ./.github/workflows/_scheduled-test-daily.yml - with: - platform_environment: "production" - permissions: - attestations: write - contents: read - id-token: write - packages: write - secrets: - AIGNOSTICS_CLIENT_ID_DEVICE_STAGING: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_STAGING }} - AIGNOSTICS_REFRESH_TOKEN_STAGING: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_STAGING }} - GCP_CREDENTIALS_STAGING: ${{ secrets.GCP_CREDENTIALS_STAGING }} - BETTERSTACK_HEARTBEAT_URL_FLOWS_STAGING: ${{ secrets.BETTERSTACK_HEARTBEAT_URL_FLOWS_STAGING }} - AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION }} - AIGNOSTICS_REFRESH_TOKEN_PRODUCTION: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_PRODUCTION }} - GCP_CREDENTIALS_PRODUCTION: ${{ secrets.GCP_CREDENTIALS_PRODUCTION }} - BETTERSTACK_HEARTBEAT_URL_FLOWS_PRODUCTION: ${{ secrets.BETTERSTACK_HEARTBEAT_URL_FLOWS_PRODUCTION }} - CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }} - SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }} - SENTRY_DSN: ${{ secrets.SENTRY_DSN }} # For heartbeat only diff --git a/.github/workflows/scheduled-testing-production-hourly.yml b/.github/workflows/scheduled-testing-production-hourly.yml deleted file mode 100644 index 9db7c98d..00000000 --- a/.github/workflows/scheduled-testing-production-hourly.yml +++ /dev/null @@ -1,31 +0,0 @@ -name: "+ Scheduled Testing / Production (Hourly)" - -on: - schedule: - - cron: '0 * * * *' - workflow_dispatch: - inputs: - branch: - description: 'Branch to test (leave empty for main)' - required: false - type: string - default: '' - -jobs: - scheduled-testing-production-hourly: - uses: ./.github/workflows/_scheduled-test-hourly.yml - with: - platform_environment: "production" - permissions: - contents: read - 
id-token: write - secrets: - AIGNOSTICS_CLIENT_ID_DEVICE_STAGING: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_STAGING }} - AIGNOSTICS_REFRESH_TOKEN_STAGING: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_STAGING }} - GCP_CREDENTIALS_STAGING: ${{ secrets.GCP_CREDENTIALS_STAGING }} - BETTERSTACK_HEARTBEAT_URL_STAGING: ${{ secrets.BETTERSTACK_HEARTBEAT_URL_STAGING }} - AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION }} - AIGNOSTICS_REFRESH_TOKEN_PRODUCTION: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_PRODUCTION }} - GCP_CREDENTIALS_PRODUCTION: ${{ secrets.GCP_CREDENTIALS_PRODUCTION }} - BETTERSTACK_HEARTBEAT_URL_PRODUCTION: ${{ secrets.BETTERSTACK_HEARTBEAT_URL_PRODUCTION }} - SENTRY_DSN: ${{ secrets.SENTRY_DSN }} # For heartbeat only diff --git a/.github/workflows/scheduled-testing-staging-daily.yml b/.github/workflows/scheduled-testing-staging-daily.yml deleted file mode 100644 index e53d587c..00000000 --- a/.github/workflows/scheduled-testing-staging-daily.yml +++ /dev/null @@ -1,36 +0,0 @@ -name: "+ Scheduled Testing / Staging (Daily)" - -on: - schedule: - - cron: '0 12 * * *' - workflow_dispatch: - inputs: - branch: - description: 'Branch to test (leave empty for main)' - required: false - type: string - default: '' - -jobs: - - scheduled-testing-staging-daily: - uses: ./.github/workflows/_scheduled-test-daily.yml - with: - platform_environment: "staging" - permissions: - attestations: write - contents: read - id-token: write - packages: write - secrets: - AIGNOSTICS_CLIENT_ID_DEVICE_STAGING: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_STAGING }} - AIGNOSTICS_REFRESH_TOKEN_STAGING: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_STAGING }} - GCP_CREDENTIALS_STAGING: ${{ secrets.GCP_CREDENTIALS_STAGING }} - BETTERSTACK_HEARTBEAT_URL_FLOWS_STAGING: ${{ secrets.BETTERSTACK_HEARTBEAT_URL_FLOWS_STAGING }} - AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION }} - AIGNOSTICS_REFRESH_TOKEN_PRODUCTION: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_PRODUCTION }} - GCP_CREDENTIALS_PRODUCTION: ${{ secrets.GCP_CREDENTIALS_PRODUCTION }} - BETTERSTACK_HEARTBEAT_URL_FLOWS_PRODUCTION: ${{ secrets.BETTERSTACK_HEARTBEAT_URL_FLOWS_PRODUCTION }} - CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }} - SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }} - SENTRY_DSN: ${{ secrets.SENTRY_DSN }} # For heartbeat only diff --git a/.github/workflows/scheduled-testing-staging-hourly.yml b/.github/workflows/scheduled-testing-staging-hourly.yml deleted file mode 100644 index 3505e4e0..00000000 --- a/.github/workflows/scheduled-testing-staging-hourly.yml +++ /dev/null @@ -1,31 +0,0 @@ -name: "+ Scheduled Testing / Staging (Hourly)" - -on: - schedule: - - cron: '0 * * * *' - workflow_dispatch: - inputs: - branch: - description: 'Branch to test (leave empty for main)' - required: false - type: string - default: '' - -jobs: - scheduled-testing-staging-hourly: - uses: ./.github/workflows/_scheduled-test-hourly.yml - with: - platform_environment: "staging" - permissions: - contents: read - id-token: write - secrets: - AIGNOSTICS_CLIENT_ID_DEVICE_STAGING: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_STAGING }} - AIGNOSTICS_REFRESH_TOKEN_STAGING: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN_STAGING }} - GCP_CREDENTIALS_STAGING: ${{ secrets.GCP_CREDENTIALS_STAGING }} - BETTERSTACK_HEARTBEAT_URL_STAGING: ${{ secrets.BETTERSTACK_HEARTBEAT_URL_STAGING }} - AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE_PRODUCTION }} - AIGNOSTICS_REFRESH_TOKEN_PRODUCTION: ${{ 
secrets.AIGNOSTICS_REFRESH_TOKEN_PRODUCTION }} - GCP_CREDENTIALS_PRODUCTION: ${{ secrets.GCP_CREDENTIALS_PRODUCTION }} - BETTERSTACK_HEARTBEAT_URL_PRODUCTION: ${{ secrets.BETTERSTACK_HEARTBEAT_URL_PRODUCTION }} - SENTRY_DSN: ${{ secrets.SENTRY_DSN }} # For heartbeat only diff --git a/.github/workflows/test-scheduled.yml b/.github/workflows/test-scheduled.yml new file mode 100644 index 00000000..a87cb435 --- /dev/null +++ b/.github/workflows/test-scheduled.yml @@ -0,0 +1,17 @@ +name: "Scheduled Testing" + +on: + schedule: + - cron: '0 * * * *' + +jobs: + test-scheduled: + uses: ./.github/workflows/_scheduled-test.yml + permissions: + contents: read + id-token: write + secrets: + AIGNOSTICS_CLIENT_ID_DEVICE: ${{ secrets.AIGNOSTICS_CLIENT_ID_DEVICE }} + AIGNOSTICS_REFRESH_TOKEN: ${{ secrets.AIGNOSTICS_REFRESH_TOKEN }} + GCP_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }} + BETTERSTACK_HEARTBEAT_URL: ${{ secrets.BETTERSTACK_HEARTBEAT_URL }} diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index e06d613a..ee20cf0a 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -14,7 +14,6 @@ repos: exclude: "bottle.py" - id: python-no-log-warn - id: python-use-type-annotations - exclude: "src/aignostics/third_party" - id: rst-backticks - id: rst-directive-colons - id: rst-inline-touching-normal @@ -38,10 +37,10 @@ repos: - id: destroyed-symlinks - id: detect-private-key - id: end-of-file-fixer - exclude: "^tests/fixtures/|.json|^codegen" + exclude: "^tests/fixtures/|.json$" + - id: fix-byte-order-marker - id: mixed-line-ending - id: name-tests-test - exclude: "^tests/main.py" - id: requirements-txt-fixer - id: trailing-whitespace exclude: "docs/source/_static|ATTRIBUTIONS.md||API_REFEREENCE" @@ -52,7 +51,7 @@ repos: args: ["--baseline", ".secrets.baseline"] additional_dependencies: ["gibberish-detector"] - repo: https://github.com/astral-sh/uv-pre-commit - rev: 0.9.7 + rev: 0.8.6 hooks: - id: uv-lock - repo: local diff --git a/.python-version b/.python-version index 655ff073..976544cc 100644 --- a/.python-version +++ b/.python-version @@ -1 +1 @@ -3.13.9 +3.13.7 diff --git a/.secrets.baseline b/.secrets.baseline index 5dae4d6c..4e662775 100644 --- a/.secrets.baseline +++ b/.secrets.baseline @@ -90,18 +90,10 @@ { "path": "detect_secrets.filters.allowlist.is_line_allowlisted" }, - { - "path": "detect_secrets.filters.common.is_baseline_file", - "filename": ".secrets.baseline" - }, { "path": "detect_secrets.filters.common.is_ignored_due_to_verification_policies", "min_level": 2 }, - { - "path": "detect_secrets.filters.gibberish.should_exclude_secret", - "limit": 3.7 - }, { "path": "detect_secrets.filters.heuristic.is_indirect_reference" }, @@ -131,5 +123,5 @@ } ], "results": {}, - "generated_at": "2025-10-18T16:35:03Z" + "generated_at": "2025-03-20T21:33:14Z" } diff --git a/.vscode/extensions.json b/.vscode/extensions.json index e237182c..271f407c 100644 --- a/.vscode/extensions.json +++ b/.vscode/extensions.json @@ -10,7 +10,6 @@ "github.copilot-chat", "gruntfuggly.todo-tree", "joshrmosier.streamlit-runner", - "jasonnutter.vscode-codeowners", "kaih2o.python-resource-monitor", "marimo-team.vscode-marimo", "mikestead.dotenv", diff --git a/.vscode/settings.json b/.vscode/settings.json index 1862d5ae..0698ed48 100644 --- a/.vscode/settings.json +++ b/.vscode/settings.json @@ -26,7 +26,30 @@ "python.analysis.aiCodeActions": { "generateDocstring": true }, + "python.analysis.typeCheckingMode": "basic", "python.analysis.diagnosticMode": "workspace", + "python.analysis.exclude": [ + 
"**/.nox/**", + "**/.venv/**", + "**/dist-packages/**", + "**/dist_vercel/.vercel/**", + "**/dist_native/**", + "**/site-packages/**", + ], + "python.analysis.ignore": [ + "**/third_party/**", + "dist/**", + "dist_vercel/**", + "dist_native/**", + "template/**", + "tests/**", + "codegen/**", + "_notebook.py", + "e2e.py", + "src/aignostics/wsi/_pydicom_handler.py", + "src/aignostics/third_party/idc_index.py", + "src/aignostics/notebook/_notebook.py" + ], "python.testing.autoTestDiscoverOnSaveEnabled": true, "python.testing.unittestEnabled": false, "python.testing.pytestEnabled": true, @@ -48,14 +71,26 @@ "github.copilot.chat.editor.temporalContext.enabled": true, "github.copilot.chat.edits.temporalContext.enabled": true, "github.copilot.chat.codesearch.enabled": true, - "github.copilot.chat.codeGeneration.useInstructionFiles": true, + "github.copilot.chat.codeGeneration.instructions": [ + { + "file": "CODE_STYLE.md" + }, + { + "file": "CONTRIBUTING.md" + } + ], + "github.copilot.chat.completionContext.typescript.mode": "on", "github.copilot.chat.generateTests.codeLens": true, "github.copilot.chat.languageContext.typescript.enabled": true, "github.copilot.chat.reviewSelection.instructions": [], "github.copilot.chat.scopeSelection": true, + "github.copilot.chat.search.semanticTextResults": true, "sonarlint.connectedMode.project": { "connectionId": "aignostics", "projectKey": "aignostics_python-sdk" }, - "makefile.configureOnOpen": false + "makefile.configureOnOpen": false, + "python.analysis.extraPaths": [ + "./src/aignostics/utils/_" + ] } \ No newline at end of file diff --git a/API_REFERENCE_v1.md b/API_REFERENCE_v1.md index 0231e30f..48e3e0cf 100644 --- a/API_REFERENCE_v1.md +++ b/API_REFERENCE_v1.md @@ -1,21 +1,11 @@ # API v1 Reference -## Aignostics Platform API v1.0.0.beta7 +## Aignostics Platform API reference v1.0.0 > Scroll down for code samples, example requests and responses. Select a language for code samples from the tabs above or the mobile navigation menu. -The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. - -To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. - -More information about our applications can be found on (https://platform.aignostics.com). - -**How to authorize and test API endpoints:** - -1. Click the "Authorize" button in the right corner below -3. Click "Authorize" button in the dialog to log in with your Aignostics Platform credentials -4. After successful login, you'll be redirected back and can use "Try it out" on any endpoint - -**Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. +Pagination is done via `page` and `page_size`. Sorting via `sort` query parameter. +The `sort` query parameter can be provided multiple times. The sorting direction can be indicated via +`+` (ascending) or `-` (descending) (e.g. `/v1/applications?sort=+name)`. Base URLs: @@ -26,8 +16,8 @@ Base URLs: - oAuth2 authentication. 
- Flow: authorizationCode - - Authorization URL = [https://aignostics-platform-staging.eu.auth0.com/authorize](https://aignostics-platform-staging.eu.auth0.com/authorize) - - Token URL = [https://aignostics-platform-staging.eu.auth0.com/oauth/token](https://aignostics-platform-staging.eu.auth0.com/oauth/token) + - Authorization URL = [https://aignostics-platform.eu.auth0.com/authorize](https://aignostics-platform.eu.auth0.com/authorize) + - Token URL = [https://aignostics-platform.eu.auth0.com/oauth/token](https://aignostics-platform.eu.auth0.com/oauth/token) |Scope|Scope Description| |---|---| @@ -76,35 +66,21 @@ fetch('/api/v1/applications', `GET /v1/applications` -*List available applications* +*List Applications* Returns the list of the applications, available to the caller. -The application is available if any of the versions of the application is assigned to the caller’s organization. -The response is paginated and sorted according to the provided parameters. +The application is available if any of the version of the application is assigned to the +user organization. To switch between organizations, the user should re-login and choose the +needed organization. #### Parameters |Name|In|Type|Required|Description| |---|---|---|---|---| |page|query|integer|false|none| -|page-size|query|integer|false|none| -|sort|query|any|false|Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.| - -##### Detailed descriptions - -**sort**: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. - -**Available fields:** -- `application_id` -- `name` -- `description` -- `regulatory_classes` - -**Examples:** -- `?sort=application_id` - Sort by application_id ascending -- `?sort=-name` - Sort by name descending -- `?sort=+description&sort=name` - Sort by description ascending, then name descending +|page_size|query|integer|false|none| +|sort|query|any|false|none| > Example responses @@ -113,54 +89,21 @@ The response is paginated and sorted according to the provided parameters. 
```json [ { - "application_id": "he-tme", - "description": "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment.", - "latest_version": { - "number": "1.0.0", - "released_at": "2025-09-01T19:01:05.401Z" - }, - "name": "Atlas H&E-TME", - "regulatory_classes": [ - "RUO" - ] - }, - { - "application_id": "test-app", - "description": "This is the test application with two algorithms: TissueQc and Tissue Segmentation", - "latest_version": { - "number": "2.0.0", - "released_at": "2025-09-02T19:01:05.401Z" - }, - "name": "Test Application", + "application_id": "h-e-tme", + "description": "string", + "name": "HETA", "regulatory_classes": [ - "RUO" + "RuO" ] } ] ``` -> 422 Response - -```json -{ - "detail": [ - { - "loc": [ - "string" - ], - "msg": "string", - "type": "string" - } - ] -} -``` - #### Responses |Status|Meaning|Description|Schema| |---|---|---|---| -|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|A list of applications available to the caller|Inline| -|401|[Unauthorized](https://tools.ietf.org/html/rfc7235#section-3.1)|Unauthorized - Invalid or missing authentication|None| +|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Successful Response|Inline| |422|[Unprocessable Entity](https://tools.ietf.org/html/rfc2518#section-10.3)|Validation Error|[HTTPValidationError](#schemahttpvalidationerror)| #### Response Schema @@ -171,39 +114,19 @@ Status Code **200** |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|Response List Applications V1 Applications Get|[[ApplicationReadShortResponse](#schemaapplicationreadshortresponse)]|false|none|[Response schema for `List available applications` and `Read Application by Id` endpoints]| -|» ApplicationReadShortResponse|[ApplicationReadShortResponse](#schemaapplicationreadshortresponse)|false|none|Response schema for `List available applications` and `Read Application by Id` endpoints| +|Response List Applications V1 Applications Get|[[ApplicationReadResponse](#schemaapplicationreadresponse)]|false|none|none| +|» ApplicationReadResponse|[ApplicationReadResponse](#schemaapplicationreadresponse)|false|none|none| |»» application_id|string|true|none|Application ID| -|»» description|string|true|none|Describing what the application can do| -|»» latest_version|any|false|none|The version with highest version number available to the user| - -*anyOf* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|[ApplicationVersion](#schemaapplicationversion)|false|none|none| -|»»»» number|string|true|none|The number of the latest version| -|»»»» released_at|string(date-time)|true|none|The timestamp for when the application version was made available in the Platform| - -*or* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|null|false|none|none| - -*continued* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| +|»» description|string|true|none|Application documentations| |»» name|string|true|none|Application display name| -|»» regulatory_classes|[string]|true|none|Regulatory classes, to which the applications comply with. 
Possible values include: RUO, IVDR, FDA.| +|»» regulatory_classes|[string]|true|none|Regulatory class, to which the applications compliance| To perform this operation, you must be authenticated by means of one of the following methods: OAuth2AuthorizationCodeBearer -### read_application_by_id_v1_applications__application_id__get +### list_versions_by_application_id_v1_applications__application_id__versions_get @@ -216,7 +139,7 @@ headers = { 'Authorization': 'Bearer {access-token}' } -r = requests.get('/api/v1/applications/{application_id}', headers = headers) +r = requests.get('/api/v1/applications/{application_id}/versions', headers = headers) print(r.json()) @@ -229,7 +152,7 @@ const headers = { 'Authorization':'Bearer {access-token}' }; -fetch('/api/v1/applications/{application_id}', +fetch('/api/v1/applications/{application_id}/versions', { method: 'GET', @@ -243,321 +166,118 @@ fetch('/api/v1/applications/{application_id}', ``` -`GET /v1/applications/{application_id}` +`GET /v1/applications/{application_id}/versions` + +*List Versions By Application Id* -*Read Application By Id* +Returns the list of the application versions for this application, available to the caller. -Retrieve details of a specific application by its ID. +The application version is available if it is assigned to the user's organization. + +The application versions are assigned to the organization by the Aignostics admin. To +assign or unassign a version from your organization, please contact Aignostics support team. #### Parameters |Name|In|Type|Required|Description| |---|---|---|---|---| |application_id|path|string|true|none| +|page|query|integer|false|none| +|page_size|query|integer|false|none| +|version|query|any|false|none| +|include|query|any|false|none| +|sort|query|any|false|none| > Example responses > 200 Response ```json -{ - "application_id": "he-tme", - "description": "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment.", - "name": "Atlas H&E-TME", - "regulatory_classes": [ - "RUO" - ], - "versions": [ - { - "number": "1.0.0", - "released_at": "2025-09-15T10:30:45.123Z" - } - ] -} +[ + { + "application_id": "string", + "application_version_id": "h-e-tme:v0.0.1", + "changelog": "string", + "created_at": "2019-08-24T14:15:22Z", + "flow_id": "0746f03b-16cc-49fb-9833-df3713d407d2", + "input_artifacts": [ + { + "metadata_schema": {}, + "mime_type": "image/tiff", + "name": "string" + } + ], + "output_artifacts": [ + { + "metadata_schema": {}, + "mime_type": "application/vnd.apache.parquet", + "name": "string", + "scope": "ITEM" + } + ], + "version": "string" + } +] ``` #### Responses |Status|Meaning|Description|Schema| |---|---|---|---| -|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Successful Response|[ApplicationReadResponse](#schemaapplicationreadresponse)| -|403|[Forbidden](https://tools.ietf.org/html/rfc7231#section-6.5.3)|Forbidden - You don't have permission to see this application|None| -|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Not Found - Application with the given ID does not exist|None| +|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Successful Response|Inline| |422|[Unprocessable Entity](https://tools.ietf.org/html/rfc2518#section-10.3)|Validation Error|[HTTPValidationError](#schemahttpvalidationerror)| +#### Response Schema -To perform this operation, you must be authenticated by 
means of one of the following methods: -OAuth2AuthorizationCodeBearer - - -### application_version_details_v1_applications__application_id__versions__version__get - - - -> Code samples - -```python -import requests -headers = { - 'Accept': 'application/json', - 'Authorization': 'Bearer {access-token}' -} - -r = requests.get('/api/v1/applications/{application_id}/versions/{version}', headers = headers) - -print(r.json()) - -``` - -```javascript - -const headers = { - 'Accept':'application/json', - 'Authorization':'Bearer {access-token}' -}; - -fetch('/api/v1/applications/{application_id}/versions/{version}', -{ - method: 'GET', - - headers: headers -}) -.then(function(res) { - return res.json(); -}).then(function(body) { - console.log(body); -}); - -``` - -`GET /v1/applications/{application_id}/versions/{version}` - -*Application Version Details* +Status Code **200** -Get the application version details +*Response List Versions By Application Id V1 Applications Application Id Versions Get* -Allows caller to retrieve information about application version based on provided application version ID. +|Name|Type|Required|Restrictions|Description| +|---|---|---|---|---| +|Response List Versions By Application Id V1 Applications Application Id Versions Get|[[ApplicationVersionReadResponse](#schemaapplicationversionreadresponse)]|false|none|none| +|» ApplicationVersionReadResponse|[ApplicationVersionReadResponse](#schemaapplicationversionreadresponse)|false|none|none| +|»» application_id|string|true|none|Application ID| +|»» application_version_id|string|true|none|Application version ID| +|»» changelog|string|true|none|Description of the changes relative to the previous version| +|»» created_at|string(date-time)|true|none|The timestamp when the application version was registered| +|»» flow_id|any|false|none|Flow ID, used internally by the platform| -#### Parameters +*anyOf* -|Name|In|Type|Required|Description| +|Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|application_id|path|string|true|none| -|version|path|string|true|none| +|»»» *anonymous*|string(uuid)|false|none|none| -> Example responses - -> 200 Response +*or* -```json -{ - "changelog": "New deployment", - "input_artifacts": [ - { - "metadata_schema": { - "$defs": { - "LungCancerMetadata": { - "additionalProperties": false, - "properties": { - "tissue": { - "enum": [ - "lung", - "lymph node", - "liver", - "adrenal gland", - "bone", - "brain" - ], - "title": "Tissue", - "type": "string" - }, - "type": { - "const": "lung", - "enum": [ - "lung" - ], - "title": "Type", - "type": "string" - } - }, - "required": [ - "type", - "tissue" - ], - "title": "LungCancerMetadata", - "type": "object" - } - }, - "$schema": "http://json-schema.org/draft-07/schema#", - "additionalProperties": false, - "description": "Metadata corresponding to an external image.", - "properties": { - "base_mpp": { - "maximum": 0.5, - "minimum": 0.125, - "title": "Base Mpp", - "type": "number" - }, - "cancer": { - "anyOf": [ - false - ], - "title": "Cancer" - }, - "checksum_crc32c": { - "title": "Checksum Crc32C", - "type": "string" - }, - "height": { - "maximum": 150000, - "minimum": 1, - "title": "Height", - "type": "integer" - }, - "mime_type": { - "default": "image/tiff", - "enum": [ - "application/dicom", - "image/tiff" - ], - "title": "Mime Type", - "type": "string" - }, - "stain": { - "const": "H&E", - "default": "H&E", - "enum": [ - "H&E" - ], - "title": "Stain", - "type": "string" - }, - "width": { - "maximum": 150000, - "minimum": 1, - "title": 
"Width", - "type": "integer" - } - }, - "required": [ - "checksum_crc32c", - "base_mpp", - "width", - "height", - "cancer" - ], - "title": "ExternalImageMetadata", - "type": "object" - }, - "mime_type": "image/tiff", - "name": "whole_slide_image" - } - ], - "output_artifacts": [ - { - "metadata_schema": { - "$schema": "http://json-schema.org/draft-07/schema#", - "additionalProperties": false, - "description": "Metadata corresponding to a segmentation heatmap file.", - "properties": { - "base_mpp": { - "maximum": 0.5, - "minimum": 0.125, - "title": "Base Mpp", - "type": "number" - }, - "checksum_crc32c": { - "title": "Checksum Crc32C", - "type": "string" - }, - "class_colors": { - "additionalProperties": { - "maxItems": 3, - "minItems": 3, - "prefixItems": [ - { - "maximum": 255, - "minimum": 0, - "type": "integer" - }, - { - "maximum": 255, - "minimum": 0, - "type": "integer" - }, - { - "maximum": 255, - "minimum": 0, - "type": "integer" - } - ], - "type": "array" - }, - "title": "Class Colors", - "type": "object" - }, - "height": { - "title": "Height", - "type": "integer" - }, - "mime_type": { - "const": "image/tiff", - "default": "image/tiff", - "enum": [ - "image/tiff" - ], - "title": "Mime Type", - "type": "string" - }, - "width": { - "title": "Width", - "type": "integer" - } - }, - "required": [ - "checksum_crc32c", - "width", - "height", - "class_colors" - ], - "title": "HeatmapMetadata", - "type": "object" - }, - "mime_type": "image/tiff", - "name": "tissue_qc:tiff_heatmap", - "scope": "ITEM", - "visibility": "EXTERNAL" - } - ], - "released_at": "2025-04-16T08:45:20.655972Z", - "version_number": "0.4.4" -} -``` +|Name|Type|Required|Restrictions|Description| +|---|---|---|---|---| +|»»» *anonymous*|null|false|none|none| -> 422 Response +*continued* -```json -{ - "detail": [ - { - "loc": [ - "string" - ], - "msg": "string", - "type": "string" - } - ] -} -``` +|Name|Type|Required|Restrictions|Description| +|---|---|---|---|---| +|»» input_artifacts|[[InputArtifactReadResponse](#schemainputartifactreadresponse)]|true|none|List of the input fields, provided by the User| +|»»» InputArtifactReadResponse|[InputArtifactReadResponse](#schemainputartifactreadresponse)|false|none|none| +|»»»» metadata_schema|object|true|none|none| +|»»»» mime_type|string|true|none|none| +|»»»» name|string|true|none|none| +|»» output_artifacts|[[OutputArtifactReadResponse](#schemaoutputartifactreadresponse)]|true|none|List of the output fields, generated by the application| +|»»» OutputArtifactReadResponse|[OutputArtifactReadResponse](#schemaoutputartifactreadresponse)|false|none|none| +|»»»» metadata_schema|object|true|none|none| +|»»»» mime_type|string|true|none|none| +|»»»» name|string|true|none|none| +|»»»» scope|[OutputArtifactScope](#schemaoutputartifactscope)|true|none|none| +|»» version|string|true|none|Semantic version of the application| -#### Responses +##### Enumerated Values -|Status|Meaning|Description|Schema| -|---|---|---|---| -|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Successful Response|[VersionReadResponse](#schemaversionreadresponse)| -|403|[Forbidden](https://tools.ietf.org/html/rfc7231#section-6.5.3)|Forbidden - You don't have permission to see this version|None| -|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Not Found - Application version with given ID is not available to you or does not exist|None| -|422|[Unprocessable Entity](https://tools.ietf.org/html/rfc2518#section-10.3)|Validation Error|[HTTPValidationError](#schemahttpvalidationerror)| 
+|Property|Value| +|---|---| +|scope|ITEM| +|scope|GLOBAL| To perform this operation, you must be authenticated by means of one of the following methods: @@ -606,12 +326,7 @@ fetch('/api/v1/me', `GET /v1/me` -*Get current user* - -Retrieves your identity details, including name, email, and organization. -This is useful for verifying that the request is being made under the correct user profile -and organization context, as well as confirming that the expected environment variables are correctly set -(in case you are using Python SDK) +*Get Me* > Example responses @@ -620,26 +335,26 @@ and organization context, as well as confirming that the expected environment va ```json { "organization": { - "aignostics_bucket_hmac_access_key_id": "YOUR_HMAC_ACCESS_KEY_ID", - "aignostics_bucket_hmac_secret_access_key": "YOUR/HMAC/SECRET_ACCESS_KEY", - "aignostics_bucket_name": "aignostics-platform-bucket", - "aignostics_bucket_protocol": "gs", - "aignostics_logfire_token": "your-logfire-token", - "aignostics_sentry_dsn": "https://2354s3#ewsha@o44.ingest.us.sentry.io/34345123432", - "display_name": "Aignostics GmbH", - "id": "org_123456", - "name": "aignx" + "aignostics_bucket_hmac_access_key_id": "string", + "aignostics_bucket_hmac_secret_access_key": "string", + "aignostics_bucket_name": "string", + "aignostics_bucket_protocol": "string", + "aignostics_logfire_token": "string", + "aignostics_sentry_dsn": "string", + "display_name": "string", + "id": "string", + "name": "string" }, "user": { - "email": "user@domain.com", + "email": "string", "email_verified": true, - "family_name": "Doe", - "given_name": "Jane", - "id": "auth0|123456", - "name": "Jane Doe", - "nickname": "jdoe", - "picture": "https://example.com/jdoe.jpg", - "updated_at": "2023-10-05T14:48:00.000Z" + "family_name": "string", + "given_name": "string", + "id": "string", + "name": "string", + "nickname": "string", + "picture": "string", + "updated_at": "2019-08-24T14:15:22Z" } } ``` @@ -655,7 +370,7 @@ To perform this operation, you must be authenticated by means of one of the foll OAuth2AuthorizationCodeBearer -### list_runs_v1_runs_get +### list_application_runs_v1_runs_get @@ -697,68 +412,21 @@ fetch('/api/v1/runs', `GET /v1/runs` -*List Runs* +*List Application Runs* -List runs with filtering, sorting, and pagination capabilities. - -Returns paginated runs that were submitted by the user. +The endpoint returns the application runs triggered by the caller. After the application run +is created by POST /v1/runs, it becomes available for the current endpoint #### Parameters |Name|In|Type|Required|Description| |---|---|---|---|---| |application_id|query|any|false|Optional application ID filter| -|application_version|query|any|false|Optional Version Name| -|external_id|query|any|false|Optionally filter runs by items with this external ID| -|custom_metadata|query|any|false|Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata.| +|application_version|query|any|false|Optional application version filter| +|include|query|any|false|Request optional output values. Used internally by the platform| |page|query|integer|false|none| |page_size|query|integer|false|none| -|sort|query|any|false|Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.| - -##### Detailed descriptions - -**custom_metadata**: Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata. 
-##### URL Encoding Required -**Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. - -##### Examples (Clear Format): -- **Field existence**: `$.study` - Runs that have a study field defined -- **Exact value match**: `$.study ? (@ == "high")` - Runs with specific study value -- **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75 -- **Array operations**: `$.tags[*] ? (@ == "draft")` - Runs with tags array containing "draft" -- **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements - -##### Examples (URL-Encoded Format): -- **Field existence**: `%24.study` -- **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)` -- **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)` -- **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)` -- **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` - -##### Notes -- JSONPath expressions are evaluated using PostgreSQL's `@?` operator -- The `$.` prefix is automatically added to root-level field references if missing -- String values in conditions must be enclosed in double quotes -- Use `&&` for AND operations and `||` for OR operations -- Regular expressions use `like_regex` with standard regex syntax -- **Remember to URL-encode the entire JSONPath expression when making HTTP requests** - - - -**sort**: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. - -**Available fields:** -- `run_id` -- `application_version_id` -- `organization_id` -- `status` -- `submitted_at` -- `submitted_by` - -**Examples:** -- `?sort=submitted_at` - Sort by creation time (ascending) -- `?sort=-submitted_at` - Sort by creation time (descending) -- `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending) +|sort|query|any|false|none| > Example responses @@ -767,31 +435,81 @@ Returns paginated runs that were submitted by the user. 
```json [ { - "application_id": "he-tme", - "custom_metadata": { - "department": "D1", - "study": "abc-1" - }, - "custom_metadata_checksum": "f54fe109", - "error_code": "SCHEDULER.ITEMS_WITH_ERROR_THRESHOLD_REACHED", - "error_message": "Run canceled given errors on more than 10 items.", - "output": "NONE", - "run_id": "dded282c-8ebd-44cf-8ba5-9a234973d1ec", - "state": "PENDING", - "statistics": { - "item_count": 0, - "item_pending_count": 0, - "item_processing_count": 0, - "item_skipped_count": 0, - "item_succeeded_count": 0, - "item_system_error_count": 0, - "item_user_error_count": 0 - }, - "submitted_at": "2019-08-24T14:15:22Z", - "submitted_by": "auth0|123456", - "terminated_at": "2024-01-15T10:30:45.123Z", - "termination_reason": "ALL_ITEMS_PROCESSED", - "version_number": "0.4.4" + "application_run_id": "53c0c6ed-e767-49c4-ad7c-b1a749bf7dfe", + "application_version_id": "string", + "organization_id": "string", + "status": "CANCELED_SYSTEM", + "triggered_at": "2019-08-24T14:15:22Z", + "triggered_by": "string", + "user_payload": { + "application_id": "string", + "application_run_id": "53c0c6ed-e767-49c4-ad7c-b1a749bf7dfe", + "global_output_artifacts": { + "property1": { + "data": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "metadata": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" + }, + "property2": { + "data": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "metadata": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" + } + }, + "items": [ + { + "input_artifacts": { + "property1": { + "download_url": "http://example.com", + "input_artifact_id": "a4134709-460b-44b6-99b2-2d637f889159", + "metadata": {} + }, + "property2": { + "download_url": "http://example.com", + "input_artifact_id": "a4134709-460b-44b6-99b2-2d637f889159", + "metadata": {} + } + }, + "item_id": "4d8cd62e-a579-4dae-af8c-3172f96f8f7c", + "output_artifacts": { + "property1": { + "data": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "metadata": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" + }, + "property2": { + "data": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "metadata": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" + } + } + } + ] + } } ] ``` @@ -801,63 +519,68 @@ Returns paginated runs that were submitted by the user. 
|Status|Meaning|Description|Schema| |---|---|---|---| |200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Successful Response|Inline| -|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Run not found|None| +|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Application run not found|None| |422|[Unprocessable Entity](https://tools.ietf.org/html/rfc2518#section-10.3)|Validation Error|[HTTPValidationError](#schemahttpvalidationerror)| #### Response Schema Status Code **200** -*Response List Runs V1 Runs Get* +*Response List Application Runs V1 Runs Get* |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|Response List Runs V1 Runs Get|[[RunReadResponse](#schemarunreadresponse)]|false|none|[Response schema for `Get run details` endpoint]| -|» RunReadResponse|[RunReadResponse](#schemarunreadresponse)|false|none|Response schema for `Get run details` endpoint| -|»» application_id|string|true|none|Application id| -|»» custom_metadata|any|false|none|Optional JSON metadata that was stored in alongside the run by the user| +|Response List Application Runs V1 Runs Get|[[RunReadResponse](#schemarunreadresponse)]|false|none|none| +|» RunReadResponse|[RunReadResponse](#schemarunreadresponse)|false|none|none| +|»» application_run_id|string(uuid)|true|none|UUID of the application| +|»» application_version_id|string|true|none|ID of the application version| +|»» organization_id|string|true|none|Organization of the owner of the application run| +|»» status|[ApplicationRunStatus](#schemaapplicationrunstatus)|true|none|none| +|»» triggered_at|string(date-time)|true|none|Timestamp showing when the application run was triggered| +|»» triggered_by|string|true|none|Id of the user who triggered the application run| +|»» user_payload|any|false|none|Field used internally by the Platform| *anyOf* |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|»»» *anonymous*|object|false|none|none| - -*or* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|null|false|none|none| - -*continued* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»» custom_metadata_checksum|any|false|none|The checksum of the `custom_metadata` field. 
Can be used in the `PUT /runs/{run-id}/custom_metadata`request to avoid unwanted override of the values in concurrent requests.| +|»»» *anonymous*|[UserPayload](#schemauserpayload)|false|none|none| +|»»»» application_id|string|true|none|none| +|»»»» application_run_id|string(uuid)|true|none|none| +|»»»» global_output_artifacts|any|true|none|none| *anyOf* |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|»»» *anonymous*|string|false|none|none| +|»»»»» *anonymous*|object|false|none|none| +|»»»»»» PayloadOutputArtifact|[PayloadOutputArtifact](#schemapayloadoutputartifact)|false|none|none| +|»»»»»»» data|[TransferUrls](#schematransferurls)|true|none|none| +|»»»»»»»» download_url|string(uri)|true|none|none| +|»»»»»»»» upload_url|string(uri)|true|none|none| +|»»»»»»» metadata|[TransferUrls](#schematransferurls)|true|none|none| +|»»»»»»» output_artifact_id|string(uuid)|true|none|none| *or* |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|»»» *anonymous*|null|false|none|none| +|»»»»» *anonymous*|null|false|none|none| *continued* |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|»» error_code|any|true|none|When the termination_reason is set to CANCELED_BY_SYSTEM, the error_code is set to define the structured description of the error.| - -*anyOf* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|string|false|none|none| +|»»»» items|[[PayloadItem](#schemapayloaditem)]|true|none|none| +|»»»»» PayloadItem|[PayloadItem](#schemapayloaditem)|false|none|none| +|»»»»»» input_artifacts|object|true|none|none| +|»»»»»»» PayloadInputArtifact|[PayloadInputArtifact](#schemapayloadinputartifact)|false|none|none| +|»»»»»»»» download_url|string(uri)|true|none|none| +|»»»»»»»» input_artifact_id|string(uuid)|false|none|none| +|»»»»»»»» metadata|object|true|none|none| +|»»»»»» item_id|string(uuid)|true|none|none| +|»»»»»» output_artifacts|object|true|none|none| +|»»»»»»» PayloadOutputArtifact|[PayloadOutputArtifact](#schemapayloadoutputartifact)|false|none|none| *or* @@ -865,99 +588,25 @@ Status Code **200** |---|---|---|---|---| |»»» *anonymous*|null|false|none|none| -*continued* +##### Enumerated Values -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»» error_message|any|true|none|When the termination_reason is set to CANCELED_BY_SYSTEM, the error_message is set to provide more insights to the error cause.| +|Property|Value| +|---|---| +|status|CANCELED_SYSTEM| +|status|CANCELED_USER| +|status|COMPLETED| +|status|COMPLETED_WITH_ERROR| +|status|RECEIVED| +|status|REJECTED| +|status|RUNNING| +|status|SCHEDULED| -*anyOf* -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|string|false|none|none| +To perform this operation, you must be authenticated by means of one of the following methods: +OAuth2AuthorizationCodeBearer -*or* -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|null|false|none|none| - -*continued* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»» output|[RunOutput](#schemarunoutput)|true|none|none| -|»» run_id|string(uuid)|true|none|UUID of the application| -|»» state|[RunState](#schemarunstate)|true|none|none| -|»» statistics|[RunItemStatistics](#schemarunitemstatistics)|true|none|none| -|»»» item_count|integer|true|none|Total number of the items in the run| -|»»» item_pending_count|integer|true|none|The number of items in `PENDING` state| -|»»» 
item_processing_count|integer|true|none|The number of items in `PROCESSING` state| -|»»» item_skipped_count|integer|true|none|The number of items in `TERMINATED` state, and the item termination reason is `SKIPPED`| -|»»» item_succeeded_count|integer|true|none|The number of items in `TERMINATED` state, and the item termination reason is `SUCCEEDED`| -|»»» item_system_error_count|integer|true|none|The number of items in `TERMINATED` state, and the item termination reason is `SYSTEM_ERROR`| -|»»» item_user_error_count|integer|true|none|The number of items in `TERMINATED` state, and the item termination reason is `USER_ERROR`| -|»» submitted_at|string(date-time)|true|none|Timestamp showing when the run was triggered| -|»» submitted_by|string|true|none|Id of the user who triggered the run| -|»» terminated_at|any|false|none|Timestamp showing when the run reached a terminal state.| - -*anyOf* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|string(date-time)|false|none|none| - -*or* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|null|false|none|none| - -*continued* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»» termination_reason|any|true|none|The termination reason of the run. When the run is not in `TERMINATED` state, the termination_reason is `null`. If all items of of the run are processed (successfully or with an error), then termination_reason is set to `ALL_ITEMS_PROCESSED`. If the run is cancelled by the user, the value is set to `CANCELED_BY_USER`. If the run reaches the threshold of number of failed items, the Platform cancels the run and sets the termination_reason to `CANCELED_BY_SYSTEM`.| - -*anyOf* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|[RunTerminationReason](#schemarunterminationreason)|false|none|none| - -*or* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|null|false|none|none| - -*continued* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»» version_number|string|true|none|Application version number| - -##### Enumerated Values - -|Property|Value| -|---|---| -|output|NONE| -|output|PARTIAL| -|output|FULL| -|state|PENDING| -|state|PROCESSING| -|state|TERMINATED| -|*anonymous*|ALL_ITEMS_PROCESSED| -|*anonymous*|CANCELED_BY_SYSTEM| -|*anonymous*|CANCELED_BY_USER| - - -To perform this operation, you must be authenticated by means of one of the following methods: -OAuth2AuthorizationCodeBearer - - -### create_run_v1_runs_post +### create_application_run_v1_runs_post @@ -979,35 +628,25 @@ print(r.json()) ```javascript const inputBody = '{ - "application_id": "he-tme", - "custom_metadata": { - "department": "D1", - "study": "abc-1" - }, + "application_version_id": "h-e-tme:v1.2.3", "items": [ { - "external_id": "slide_1", "input_artifacts": [ { - "download_url": "https://example-bucket.s3.amazonaws.com/slide1.tiff?signature=...", + "download_url": "https://example.com/case-no-1-slide.tiff", "metadata": { - "checksum_base64_crc32c": "64RKKA==", - "height_px": 87761, - "media-type": "image/tiff", - "resolution_mpp": 0.2628238, - "specimen": { - "disease": "LUNG_CANCER", - "tissue": "LUNG" - }, - "staining_method": "H&E", - "width_px": 136223 + "checksum_base64_crc32c": "752f9554", + "height": 2000, + "height_mpp": 0.5, + "width": 10000, + "width_mpp": 0.5 }, - "name": "input_slide" + "name": "slide" } - ] + ], + "reference": "case-no-1" } - ], - 
"version_number": "1.0.0-beta1" + ] }'; const headers = { 'Content-Type':'application/json', @@ -1031,117 +670,51 @@ fetch('/api/v1/runs', `POST /v1/runs` -*Initiate Run* +*Create Application Run* -This endpoint initiates a processing run for a selected application and version, and returns a `run_id` for tracking purposes. +The endpoint is used to process the input items by the chosen application version. The endpoint +returns the `application_run_id`. The processing fo the items is done asynchronously. -Slide processing occurs asynchronously, allowing you to retrieve results for individual slides as soon as they -complete processing. The system typically processes slides in batches of four, though this number may be reduced -during periods of high demand. -Below is an example of the required payload for initiating an Atlas H&E TME processing run. +To check the status or cancel the execution, use the /v1/runs/{application_run_id} endpoint. #### Payload -The payload includes `application_id`, optional `version_number`, and `items` base fields. +The payload includes `application_version_id` and `items` base fields. -`application_id` is the unique identifier for the application. -`version_number` is the semantic version to use. If not provided, the latest available version will be used. +`application_version_id` is the id used for `/v1/versions/{application_id}` endpoint. `items` includes the list of the items to process (slides, in case of HETA application). -Every item has a set of standard fields defined by the API, plus the custom_metadata, specific to the +Every item has a set of standard fields defined by the API, plus the metadata, specific to the chosen application. Example payload structure with the comments: ``` { - application_id: "he-tme", - version_number: "1.0.0-beta", + application_version_id: "test-app:v0.0.2", items: [{ - "external_id": "slide_1", - "input_artifacts": [{ - "name": "user_slide", - "download_url": "https://...", - "custom_metadata": { - "specimen": { - "disease": "LUNG_CANCER", - "tissue": "LUNG" - }, - "staining_method": "H&E", - "width_px": 136223, - "height_px": 87761, - "resolution_mpp": 0.2628238, - "media-type":"image/tiff", - "checksum_base64_crc32c": "64RKKA==" - } - }] - }] -} -``` - -| Parameter | Description | -| :---- | :---- | -| `application_id` required | Unique ID for the application | -| `version_number` optional | Semantic version of the application. If not provided, the latest available version will be used | -| `items` required | List of submitted items (WSIs) with parameters described below. | -| `external_id` required | Unique WSI name or ID for easy reference to items, provided by the caller. The external_id should be unique across all items of the run. | -| `input_artifacts` required | List of provided artifacts for a WSI; at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide itself), but for some other applications this can be a slide and an segmentation map | -| `name` required | Type of artifact; Atlas H&E-TME supports only `"input_slide"` | -| `download_url` required | Signed URL to the input file in the S3 or GCS; Should be valid for at least 6 days | -| `specimen: disease` required | Supported cancer types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | -| `specimen: tissue` required | Supported tissue types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | -| `staining_method` required | WSI stain /bio-marker; Atlas H&E-TME supports only `"H&E"` | -| `width_px` required | Integer value. 
Number of pixels of the WSI in the X dimension. | -| `height_px` required | Integer value. Number of pixels of the WSI in the Y dimension. | -| `resolution_mpp` required | Resolution of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual | -| `media-type` required | Supported media formats; available values are: image/tiff (for .tiff or .tif WSI) application/dicom (for DICOM ) application/zip (for zipped DICOM) application/octet-stream (for .svs WSI) | -| `checksum_base64_crc32c` required | Base64 encoded big-endian CRC32C checksum of the WSI image | - -#### Response - -The endpoint returns the run UUID. After that the job is scheduled for the -execution in the background. - -To check the status of the run call `v1/runs/{run_id}`. - -#### Rejection - -Apart from the authentication, authorization and malformed input error, the request can be -rejected when the quota limit is exceeded. More details on quotas is described in the -documentation - -> Body parameter + "reference": "slide_1", Body parameter ```json { - "application_id": "he-tme", - "custom_metadata": { - "department": "D1", - "study": "abc-1" - }, + "application_version_id": "h-e-tme:v1.2.3", "items": [ { - "external_id": "slide_1", "input_artifacts": [ { - "download_url": "https://example-bucket.s3.amazonaws.com/slide1.tiff?signature=...", + "download_url": "https://example.com/case-no-1-slide.tiff", "metadata": { - "checksum_base64_crc32c": "64RKKA==", - "height_px": 87761, - "media-type": "image/tiff", - "resolution_mpp": 0.2628238, - "specimen": { - "disease": "LUNG_CANCER", - "tissue": "LUNG" - }, - "staining_method": "H&E", - "width_px": 136223 + "checksum_base64_crc32c": "752f9554", + "height": 2000, + "height_mpp": 0.5, + "width": 10000, + "width_mpp": 0.5 }, - "name": "input_slide" + "name": "slide" } - ] + ], + "reference": "case-no-1" } - ], - "version_number": "1.0.0-beta1" + ] } ``` @@ -1157,7 +730,7 @@ documentation ```json { - "run_id": "3fa85f64-5717-4562-b3fc-2c963f66afa6" + "application_run_id": "Application run id" } ``` @@ -1166,9 +739,7 @@ documentation |Status|Meaning|Description|Schema| |---|---|---|---| |201|[Created](https://tools.ietf.org/html/rfc7231#section-6.3.2)|Successful Response|[RunCreationResponse](#schemaruncreationresponse)| -|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request - Input validation failed|None| -|403|[Forbidden](https://tools.ietf.org/html/rfc7231#section-6.5.3)|Forbidden - You don't have permission to create this run|None| -|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Application version not found|None| +|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Application run not found|None| |422|[Unprocessable Entity](https://tools.ietf.org/html/rfc2518#section-10.3)|Validation Error|[HTTPValidationError](#schemahttpvalidationerror)| @@ -1176,7 +747,7 @@ To perform this operation, you must be authenticated by means of one of the foll OAuth2AuthorizationCodeBearer -### get_run_v1_runs__run_id__get +### get_run_v1_runs__application_run_id__get @@ -1189,7 +760,7 @@ headers = { 'Authorization': 'Bearer {access-token}' } -r = requests.get('/api/v1/runs/{run_id}', headers = headers) +r = requests.get('/api/v1/runs/{application_run_id}', headers = headers) print(r.json()) @@ -1202,7 +773,7 @@ const headers = { 'Authorization':'Bearer {access-token}' }; -fetch('/api/v1/runs/{run_id}', +fetch('/api/v1/runs/{application_run_id}', { method: 'GET', @@ -1216,21 +787,22 @@ fetch('/api/v1/runs/{run_id}', ``` 
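For orientation, the snippet below pulls the creation payload documented above together in Python, using `requests` as in the generated samples: it submits a run and reads back the run id. The base URL, access token, and signed `download_url` are placeholders, and the metadata values mirror the documented example rather than a real slide.

```python
# Sketch only: create an application run with the payload documented above.
# BASE_URL, ACCESS_TOKEN, and the signed download URL are assumed placeholders.
import requests

BASE_URL = "https://example.com/api"  # assumed API host
ACCESS_TOKEN = "your-access-token"    # OAuth2 bearer token

payload = {
    "application_version_id": "h-e-tme:v1.2.3",
    "items": [
        {
            # Caller-chosen id, unique across all items of the run
            "reference": "case-no-1",
            "input_artifacts": [
                {
                    "name": "slide",
                    # Signed URL to the slide, valid for at least 6 days
                    "download_url": "https://example.com/case-no-1-slide.tiff",
                    "metadata": {
                        "checksum_base64_crc32c": "752f9554",
                        "height": 2000,
                        "height_mpp": 0.5,
                        "width": 10000,
                        "width_mpp": 0.5,
                    },
                }
            ],
        }
    ],
}

response = requests.post(
    f"{BASE_URL}/v1/runs",
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
response.raise_for_status()  # expect 201 Created
print(response.json()["application_run_id"])
```

The returned `application_run_id` is the handle for every follow-up call: status, results, and cancellation.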
-`GET /v1/runs/{run_id}` +`GET /v1/runs/{application_run_id}` -*Get run details* +*Get Run* -This endpoint allows the caller to retrieve the current status of a run along with other relevant run details. - A run becomes available immediately after it is created through the POST `/runs/` endpoint. +Returns the details of the application run. The application run is available as soon as it is +created via `POST /runs/` endpoint. To download the items results, call +`/runs/{application_run_id}/results`. - To download the output results, use GET `/runs/{run_id}/` items to get outputs for all slides. -Access to a run is restricted to the user who created it. +The application is only available to the user who triggered it, regardless of the role. #### Parameters |Name|In|Type|Required|Description| |---|---|---|---|---| -|run_id|path|string(uuid)|true|Run id, returned by `POST /runs/` endpoint| +|application_run_id|path|string(uuid)|true|Application run id, returned by `POST /runs/` endpoint| +|include|query|any|false|none| > Example responses @@ -1238,31 +810,81 @@ Access to a run is restricted to the user who created it. ```json { - "application_id": "he-tme", - "custom_metadata": { - "department": "D1", - "study": "abc-1" - }, - "custom_metadata_checksum": "f54fe109", - "error_code": "SCHEDULER.ITEMS_WITH_ERROR_THRESHOLD_REACHED", - "error_message": "Run canceled given errors on more than 10 items.", - "output": "NONE", - "run_id": "dded282c-8ebd-44cf-8ba5-9a234973d1ec", - "state": "PENDING", - "statistics": { - "item_count": 0, - "item_pending_count": 0, - "item_processing_count": 0, - "item_skipped_count": 0, - "item_succeeded_count": 0, - "item_system_error_count": 0, - "item_user_error_count": 0 - }, - "submitted_at": "2019-08-24T14:15:22Z", - "submitted_by": "auth0|123456", - "terminated_at": "2024-01-15T10:30:45.123Z", - "termination_reason": "ALL_ITEMS_PROCESSED", - "version_number": "0.4.4" + "application_run_id": "53c0c6ed-e767-49c4-ad7c-b1a749bf7dfe", + "application_version_id": "string", + "organization_id": "string", + "status": "CANCELED_SYSTEM", + "triggered_at": "2019-08-24T14:15:22Z", + "triggered_by": "string", + "user_payload": { + "application_id": "string", + "application_run_id": "53c0c6ed-e767-49c4-ad7c-b1a749bf7dfe", + "global_output_artifacts": { + "property1": { + "data": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "metadata": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" + }, + "property2": { + "data": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "metadata": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" + } + }, + "items": [ + { + "input_artifacts": { + "property1": { + "download_url": "http://example.com", + "input_artifact_id": "a4134709-460b-44b6-99b2-2d637f889159", + "metadata": {} + }, + "property2": { + "download_url": "http://example.com", + "input_artifact_id": "a4134709-460b-44b6-99b2-2d637f889159", + "metadata": {} + } + }, + "item_id": "4d8cd62e-a579-4dae-af8c-3172f96f8f7c", + "output_artifacts": { + "property1": { + "data": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "metadata": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" + }, + 
"property2": { + "data": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "metadata": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" + } + } + } + ] + } } ``` @@ -1271,95 +893,15 @@ Access to a run is restricted to the user who created it. |Status|Meaning|Description|Schema| |---|---|---|---| |200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Successful Response|[RunReadResponse](#schemarunreadresponse)| -|403|[Forbidden](https://tools.ietf.org/html/rfc7231#section-6.5.3)|Forbidden - You don't have permission to see this run|None| -|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Run not found because it was deleted.|None| -|422|[Unprocessable Entity](https://tools.ietf.org/html/rfc2518#section-10.3)|Validation Error|[HTTPValidationError](#schemahttpvalidationerror)| - - -To perform this operation, you must be authenticated by means of one of the following methods: -OAuth2AuthorizationCodeBearer - - -### delete_run_items_v1_runs__run_id__artifacts_delete - - - -> Code samples - -```python -import requests -headers = { - 'Accept': 'application/json', - 'Authorization': 'Bearer {access-token}' -} - -r = requests.delete('/api/v1/runs/{run_id}/artifacts', headers = headers) - -print(r.json()) - -``` - -```javascript - -const headers = { - 'Accept':'application/json', - 'Authorization':'Bearer {access-token}' -}; - -fetch('/api/v1/runs/{run_id}/artifacts', -{ - method: 'DELETE', - - headers: headers -}) -.then(function(res) { - return res.json(); -}).then(function(body) { - console.log(body); -}); - -``` - -`DELETE /v1/runs/{run_id}/artifacts` - -*Delete Run Items* - -This endpoint allows the caller to explicitly delete artifacts generated by a run. -It can only be invoked when the run has reached a final state -(PROCESSED, CANCELED_SYSTEM, CANCELED_USER). -Note that by default, all artifacts are automatically deleted 30 days after the run finishes, - regardless of whether the caller explicitly requests deletion. 
- -#### Parameters - -|Name|In|Type|Required|Description| -|---|---|---|---|---| -|run_id|path|string(uuid)|true|Run id, returned by `POST /runs/` endpoint| - -> Example responses - -> 200 Response - -```json -null -``` - -#### Responses - -|Status|Meaning|Description|Schema| -|---|---|---|---| -|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Run artifacts deleted|Inline| -|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Run not found|None| +|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Application run not found|None| |422|[Unprocessable Entity](https://tools.ietf.org/html/rfc2518#section-10.3)|Validation Error|[HTTPValidationError](#schemahttpvalidationerror)| -#### Response Schema - To perform this operation, you must be authenticated by means of one of the following methods: OAuth2AuthorizationCodeBearer -### cancel_run_v1_runs__run_id__cancel_post +### cancel_application_run_v1_runs__application_run_id__cancel_post @@ -1372,7 +914,7 @@ headers = { 'Authorization': 'Bearer {access-token}' } -r = requests.post('/api/v1/runs/{run_id}/cancel', headers = headers) +r = requests.post('/api/v1/runs/{application_run_id}/cancel', headers = headers) print(r.json()) @@ -1385,7 +927,7 @@ const headers = { 'Authorization':'Bearer {access-token}' }; -fetch('/api/v1/runs/{run_id}/cancel', +fetch('/api/v1/runs/{application_run_id}/cancel', { method: 'POST', @@ -1399,11 +941,11 @@ fetch('/api/v1/runs/{run_id}/cancel', ``` -`POST /v1/runs/{run_id}/cancel` +`POST /v1/runs/{application_run_id}/cancel` -*Cancel Run* +*Cancel Application Run* -The run can be canceled by the user who created the run. +The application run can be canceled by the user who created the application run. The execution can be canceled any time while the application is not in a final state. The pending items will not be processed and will not add to the cost. 
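Because cancellation is only possible while the run has not reached a final state, a common client pattern is to poll the run and fall back to cancellation on a deadline. Below is a minimal sketch of that pattern; the set of terminal statuses is an assumption based on the `ApplicationRunStatus` schema further down, and `base_url`/`token` are the same placeholders as before.

```python
# Sketch only: poll a run until it reaches a terminal status, canceling it when
# a deadline expires. TERMINAL_STATUSES is an assumption derived from the
# ApplicationRunStatus schema in this document.
import time

import requests

TERMINAL_STATUSES = {
    "COMPLETED", "COMPLETED_WITH_ERROR", "REJECTED",
    "CANCELED_USER", "CANCELED_SYSTEM",
}

def wait_or_cancel(base_url, token, application_run_id, deadline_s=6 * 3600):
    headers = {"Authorization": f"Bearer {token}"}
    deadline = time.monotonic() + deadline_s
    while time.monotonic() < deadline:
        response = requests.get(
            f"{base_url}/v1/runs/{application_run_id}", headers=headers, timeout=30
        )
        response.raise_for_status()
        status = response.json()["status"]
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(60)  # poll once per minute
    # Deadline reached: cancel the run; already completed items stay available.
    requests.post(
        f"{base_url}/v1/runs/{application_run_id}/cancel", headers=headers, timeout=30
    ).raise_for_status()  # expect 202 Accepted
    return "CANCELED_USER"
```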
@@ -1414,7 +956,7 @@ When the application is canceled, the already completed items stay available for

#### Parameters

|Name|In|Type|Required|Description|
|---|---|---|---|---|
-|run_id|path|string(uuid)|true|Run id, returned by `POST /runs/` endpoint|
+|application_run_id|path|string(uuid)|true|Application run id, returned by `POST /runs/` endpoint|

> Example responses

@@ -1429,9 +971,7 @@ null

|Status|Meaning|Description|Schema|
|---|---|---|---|
|202|[Accepted](https://tools.ietf.org/html/rfc7231#section-6.3.3)|Successful Response|Inline|
-|403|[Forbidden](https://tools.ietf.org/html/rfc7231#section-6.5.3)|Forbidden - You don't have permission to cancel this run|None|
-|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Run not found|None|
-|409|[Conflict](https://tools.ietf.org/html/rfc7231#section-6.5.8)|Conflict - The Run is already cancelled|None|
+|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Application run not found|None|
|422|[Unprocessable Entity](https://tools.ietf.org/html/rfc2518#section-10.3)|Validation Error|[HTTPValidationError](#schemahttpvalidationerror)|

#### Response Schema

@@ -1441,7 +981,7 @@ To perform this operation, you must be authenticated by means of one of the foll
OAuth2AuthorizationCodeBearer

-### put_run_custom_metadata_v1_runs__run_id__custom_metadata_put
+### delete_application_run_results_v1_runs__application_run_id__results_delete

@@ -1450,35 +990,27 @@

```python
import requests
headers = {
-  'Content-Type': 'application/json',
  'Accept': 'application/json',
  'Authorization': 'Bearer {access-token}'
}

-r = requests.put('/api/v1/runs/{run_id}/custom-metadata', headers = headers)
+r = requests.delete('/api/v1/runs/{application_run_id}/results', headers = headers)

print(r.json())

```

```javascript
-const inputBody = '{
-  "custom_metadata": {
-    "department": "D1",
-    "study": "abc-1"
-  },
-  "custom_metadata_checksum": "f54fe109"
-}';
+
const headers = {
-  'Content-Type':'application/json',
  'Accept':'application/json',
  'Authorization':'Bearer {access-token}'
};

-fetch('/api/v1/runs/{run_id}/custom-metadata',
+fetch('/api/v1/runs/{application_run_id}/results',
{
-  method: 'PUT',
-  body: inputBody,
+  method: 'DELETE',
+
  headers: headers
})
.then(function(res) {
@@ -1489,53 +1021,54 @@ fetch('/api/v1/runs/{run_id}/custom-metadata',

```

-`PUT /v1/runs/{run_id}/custom-metadata`
+`DELETE /v1/runs/{application_run_id}/results`

-*Put Run Custom Metadata*
+*Delete Application Run Results*

-> Body parameter
+Delete the application run results. This can only be called when the application run is in a final
+state (meaning it is not in the `received` or `pending` states). To delete the results of a run that
+is still executing, first call `POST /v1/runs/{application_run_id}/cancel` to cancel the application run.

-```json
-{
-  "custom_metadata": {
-    "department": "D1",
-    "study": "abc-1"
-  },
-  "custom_metadata_checksum": "f54fe109"
-}
-```
+The output results are deleted automatically 30 days after the application run is finished.
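A matching cleanup sketch follows; it assumes the run has already reached a final state (for example via the cancel pattern shown above), with the same placeholder `base_url` and `token`.

```python
# Sketch only: delete stored results for a finished run. The endpoint returns
# 204 No Content on success.
import requests

def delete_run_results(base_url, token, application_run_id):
    response = requests.delete(
        f"{base_url}/v1/runs/{application_run_id}/results",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()  # expect 204 No Content
```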
#### Parameters |Name|In|Type|Required|Description| |---|---|---|---|---| -|run_id|path|string(uuid)|true|Run id, returned by `POST /runs/` endpoint| -|body|body|[CustomMetadataUpdateRequest](#schemacustommetadataupdaterequest)|true|none| +|application_run_id|path|string(uuid)|true|Application run id, returned by `POST /runs/` endpoint| > Example responses -> 200 Response +> 422 Response ```json -null +{ + "detail": [ + { + "loc": [ + "string" + ], + "msg": "string", + "type": "string" + } + ] +} ``` #### Responses |Status|Meaning|Description|Schema| |---|---|---|---| -|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Successful Response|Inline| -|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Run not found|None| +|204|[No Content](https://tools.ietf.org/html/rfc7231#section-6.3.5)|Successful Response|None| +|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Application run not found|None| |422|[Unprocessable Entity](https://tools.ietf.org/html/rfc2518#section-10.3)|Validation Error|[HTTPValidationError](#schemahttpvalidationerror)| -#### Response Schema - To perform this operation, you must be authenticated by means of one of the following methods: OAuth2AuthorizationCodeBearer -### list_run_items_v1_runs__run_id__items_get +### list_run_results_v1_runs__application_run_id__results_get @@ -1548,7 +1081,7 @@ headers = { 'Authorization': 'Bearer {access-token}' } -r = requests.get('/api/v1/runs/{run_id}/items', headers = headers) +r = requests.get('/api/v1/runs/{application_run_id}/results', headers = headers) print(r.json()) @@ -1561,7 +1094,7 @@ const headers = { 'Authorization':'Bearer {access-token}' }; -fetch('/api/v1/runs/{run_id}/items', +fetch('/api/v1/runs/{application_run_id}/results', { method: 'GET', @@ -1575,58 +1108,23 @@ fetch('/api/v1/runs/{run_id}/items', ``` -`GET /v1/runs/{run_id}/items` - -*List Run Items* - -List items in a run with filtering, sorting, and pagination capabilities. - -Returns paginated items within a specific run. Results can be filtered -by item IDs, external_ids, status, and custom_metadata using JSONPath expressions. +`GET /v1/runs/{application_run_id}/results` -### JSONPath Metadata Filtering -Use PostgreSQL JSONPath expressions to filter items using their custom_metadata. +*List Run Results* -#### Examples: -- **Field existence**: `$.case_id` - Results that have a case_id field defined -- **Exact value match**: `$.priority ? (@ == "high")` - Results with high priority -- **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with high confidence -- **Array operations**: `$.flags[*] ? (@ == "reviewed")` - Results flagged as reviewed -- **Complex conditions**: `$.metrics ? 
(@.accuracy > 0.9 && @.recall > 0.8)` - Results meeting performance thresholds
-
-### Notes
-- JSONPath expressions are evaluated using PostgreSQL's `@?` operator
-- The `$.` prefix is automatically added to root-level field references if missing
-- String values in conditions must be enclosed in double quotes
-- Use `&&` for AND operations and `||` for OR operations
+Get the list of results for the run items

#### Parameters

|Name|In|Type|Required|Description|
|---|---|---|---|---|
-|run_id|path|string(uuid)|true|Run id, returned by `POST /runs/` endpoint|
-|item_id__in|query|any|false|Filter for item ids|
-|external_id__in|query|any|false|Filter for items by their external_id from the input payload|
-|state|query|any|false|Filter items by their state|
-|termination_reason|query|any|false|Filter items by their termination reason. Only applies to TERMINATED items.|
-|custom_metadata|query|any|false|JSONPath expression to filter items by their custom_metadata|
+|application_run_id|path|string(uuid)|true|Application run id, returned by `POST /runs/` endpoint|
+|item_id__in|query|any|false|Filter for item ids|
+|reference__in|query|any|false|Filter for items by their reference from the input payload|
+|status__in|query|any|false|Filter for items in certain statuses|
|page|query|integer|false|none|
|page_size|query|integer|false|none|
-|sort|query|any|false|Sort the items by one or more fields. Use `+` for ascending and `-` for descending order.|
-
-##### Detailed descriptions
-
-**sort**: Sort the items by one or more fields. Use `+` for ascending and `-` for descending order.
- **Available fields:**
-- `item_id`
-- `run_id`
-- `external_id`
-- `custom_metadata`
-
-**Examples:**
-- `?sort=item_id` - Sort by id of the item (ascending)
-- `?sort=-external_id` - Sort by external ID (descending)
-- `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending)
+|sort|query|any|false|none|

> Example responses

> 200 Response

```json
[
  {
-    "custom_metadata": {},
-    "custom_metadata_checksum": "f54fe109",
-    "error_code": "string",
-    "error_message": "This item was not processed because the threshold of 3 items finishing in error state (user or system error) was reached before the item was processed.",
-    "external_id": "slide_1",
+    "application_run_id": "53c0c6ed-e767-49c4-ad7c-b1a749bf7dfe",
+    "error": "string",
    "item_id": "4d8cd62e-a579-4dae-af8c-3172f96f8f7c",
-    "output": "NONE",
    "output_artifacts": [
      {
        "download_url": "http://example.com",
-        "error_code": "string",
-        "error_message": "string",
        "metadata": {},
-        "name": "tissue_qc:tiff_heatmap",
-        "output": "NONE",
-        "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b",
-        "state": "PENDING",
-        "termination_reason": "SUCCEEDED"
+        "name": "string",
+        "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b"
      }
    ],
-    "state": "PENDING",
-    "terminated_at": "2024-01-15T10:30:45.123Z",
-    "termination_reason": "SUCCEEDED"
+    "reference": "string",
+    "status": "PENDING"
  }
]
```

@@ -1667,26 +1155,27 @@
|Status|Meaning|Description|Schema|
|---|---|---|---|
|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Successful Response|Inline|
-|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Run not found|None|
+|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Application run not found|None|
|422|[Unprocessable Entity](https://tools.ietf.org/html/rfc2518#section-10.3)|Validation Error|[HTTPValidationError](#schemahttpvalidationerror)|

#### Response Schema

Status Code **200**

-*Response List Run Items V1 Runs Run Id Items Get*
+*Response List Run Results V1 Runs Application Run Id Results Get*

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|Response List Run Items V1 Runs Run Id Items Get|[[ItemResultReadResponse](#schemaitemresultreadresponse)]|false|none|[Response schema for items in `List Run Items` endpoint]|
-|» ItemResultReadResponse|[ItemResultReadResponse](#schemaitemresultreadresponse)|false|none|Response schema for items in `List Run Items` endpoint|
-|»» custom_metadata|any|true|none|The custom_metadata of the item that has been provided by the user on run creation.|
+|Response List Run Results V1 Runs Application Run Id Results Get|[[ItemResultReadResponse](#schemaitemresultreadresponse)]|false|none|none|
+|» ItemResultReadResponse|[ItemResultReadResponse](#schemaitemresultreadresponse)|false|none|none|
+|»» application_run_id|string(uuid)|true|none|Application run UUID to which the item belongs|
+|»» error|any|true|none|The error message in case the item is in `error_system` or `error_user` state|

*anyOf*

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|»»» *anonymous*|object|false|none|none|
+|»»» *anonymous*|string|false|none|none|

*or*

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
|»»» *anonymous*|null|false|none|none|

*continued*

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|»» custom_metadata_checksum|any|false|none|The checksum of the `custom_metadata` field.Can be used in the `PUT /runs/{run-id}/items/{external_id}/custom_metadata`request to avoid unwanted override of the values in concurrent requests.|
+|»» item_id|string(uuid)|true|none|Item UUID generated by the Platform|
+|»» output_artifacts|[[OutputArtifactResultReadResponse](#schemaoutputartifactresultreadresponse)]|true|none|The list of the results generated by the application algorithm. The number of files and their types depend on the particular application version, call `/v1/versions/{version_id}` to get the details.|
+|»»» OutputArtifactResultReadResponse|[OutputArtifactResultReadResponse](#schemaoutputartifactresultreadresponse)|false|none|none|
+|»»»» download_url|any|true|none|The download URL to the output file. 
The URL is valid for 1 hour after the endpoint is called.A new URL is generated every time the endpoint is called.| *anyOf* |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|»»» *anonymous*|string|false|none|none| +|»»»»» *anonymous*|string(uri)|false|none|none| *or* |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|»»» *anonymous*|null|false|none|none| +|»»»»» *anonymous*|null|false|none|none| *continued* |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|»» error_code|any|true|read-only|Error code describing the error that occurred during item processing.| +|»»»» metadata|object|true|none|The metadata of the output artifact, provided by the application| +|»»»» name|string|true|none|Name of the output from the output schema from the `/v1/versions/{version_id}` endpoint.| +|»»»» output_artifact_id|string(uuid)|true|none|The Id of the artifact. Used internally| +|»» reference|string|true|none|The reference of the item from the user payload| +|»» status|[ItemStatus](#schemaitemstatus)|true|none|none| -*anyOf* +##### Enumerated Values -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|string|false|none|none| +|Property|Value| +|---|---| +|status|PENDING| +|status|CANCELED_USER| +|status|CANCELED_SYSTEM| +|status|ERROR_USER| +|status|ERROR_SYSTEM| +|status|SUCCEEDED| -*or* -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|null|false|none|none| +To perform this operation, you must be authenticated by means of one of the following methods: +OAuth2AuthorizationCodeBearer -*continued* -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»» error_message|any|false|none|The error message in case the `termination_reason` is in `USER_ERROR` or `SYSTEM_ERROR`| +## Schemas -*anyOf* +### ApplicationReadResponse -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|string|false|none|none| -*or* -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|null|false|none|none| -*continued* -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»» external_id|string|true|none|The external_id of the item from the user payload| -|»» item_id|string(uuid)|true|none|Item UUID generated by the Platform| -|»» output|[ItemOutput](#schemaitemoutput)|true|none|none| -|»» output_artifacts|[[OutputArtifactResultReadResponse](#schemaoutputartifactresultreadresponse)]|true|none|The list of the results generated by the application algorithm. The number of files and theirtypes depend on the particular application version, call `/v1/versions/{version_id}` to getthe details.| -|»»» OutputArtifactResultReadResponse|[OutputArtifactResultReadResponse](#schemaoutputartifactresultreadresponse)|false|none|none| -|»»»» download_url|any|true|none|The download URL to the output file. 
The URL is valid for 1 hour after the endpoint is called.A new URL is generated every time the endpoint is called.| -*anyOf* +```json +{ + "application_id": "h-e-tme", + "description": "string", + "name": "HETA", + "regulatory_classes": [ + "RuO" + ] +} -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»»»» *anonymous*|string(uri)|false|none|none| +``` -*or* +ApplicationReadResponse + +#### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|»»»»» *anonymous*|null|false|none|none| - -*continued* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»»» error_code|any|true|read-only|Error code describing the error that occurred during artifact processing.| - -*anyOf* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»»»» *anonymous*|string|false|none|none| - -*or* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»»»» *anonymous*|null|false|none|none| - -*continued* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»»» error_message|any|false|none|Error message when artifact is in error state| - -*anyOf* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»»»» *anonymous*|string|false|none|none| - -*or* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»»»» *anonymous*|null|false|none|none| - -*continued* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»»» metadata|any|false|none|The metadata of the output artifact, provided by the application. Can only be None if the artifact itself was deleted.| - -*anyOf* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»»»» *anonymous*|object|false|none|none| - -*or* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»»»» *anonymous*|null|false|none|none| - -*continued* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»»» name|string|true|none|Name of the output from the output schema from the `/v1/versions/{version_id}` endpoint.| -|»»»» output|[ArtifactOutput](#schemaartifactoutput)|true|none|none| -|»»»» output_artifact_id|string(uuid)|true|none|The Id of the artifact. 
Used internally| -|»»»» state|[ArtifactState](#schemaartifactstate)|true|none|none| -|»»»» termination_reason|any|false|none|The reason for termination when state is TERMINATED| - -*anyOf* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»»»» *anonymous*|[ArtifactTerminationReason](#schemaartifactterminationreason)|false|none|none| - -*or* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»»»» *anonymous*|null|false|none|none| - -*continued* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»» state|[ItemState](#schemaitemstate)|true|none|none| -|»» terminated_at|any|false|none|Timestamp showing when the item reached a terminal state.| - -*anyOf* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|string(date-time)|false|none|none| - -*or* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|null|false|none|none| - -*continued* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»» termination_reason|any|false|none|When the `state` is `TERMINATED` this will explain why`SUCCEEDED` -> Successful processing.`USER_ERROR` -> Failed because the provided input was invalid.`SYSTEM_ERROR` -> There was an error in the model or platform.`SKIPPED` -> Was cancelled| - -*anyOf* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|[ItemTerminationReason](#schemaitemterminationreason)|false|none|none| - -*or* - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|»»» *anonymous*|null|false|none|none| - -##### Enumerated Values - -|Property|Value| -|---|---| -|output|NONE| -|output|FULL| -|output|NONE| -|output|AVAILABLE| -|output|DELETED_BY_USER| -|output|DELETED_BY_SYSTEM| -|state|PENDING| -|state|PROCESSING| -|state|TERMINATED| -|*anonymous*|SUCCEEDED| -|*anonymous*|USER_ERROR| -|*anonymous*|SYSTEM_ERROR| -|*anonymous*|SKIPPED| -|state|PENDING| -|state|PROCESSING| -|state|TERMINATED| -|*anonymous*|SUCCEEDED| -|*anonymous*|USER_ERROR| -|*anonymous*|SYSTEM_ERROR| -|*anonymous*|SKIPPED| - - -To perform this operation, you must be authenticated by means of one of the following methods: -OAuth2AuthorizationCodeBearer - - -### get_item_by_run_v1_runs__run_id__items__external_id__get - - - -> Code samples - -```python -import requests -headers = { - 'Accept': 'application/json', - 'Authorization': 'Bearer {access-token}' -} - -r = requests.get('/api/v1/runs/{run_id}/items/{external_id}', headers = headers) - -print(r.json()) - -``` - -```javascript - -const headers = { - 'Accept':'application/json', - 'Authorization':'Bearer {access-token}' -}; - -fetch('/api/v1/runs/{run_id}/items/{external_id}', -{ - method: 'GET', - - headers: headers -}) -.then(function(res) { - return res.json(); -}).then(function(body) { - console.log(body); -}); - -``` - -`GET /v1/runs/{run_id}/items/{external_id}` - -*Get Item By Run* - -Retrieve details of a specific item (slide) by its external ID and the run ID. 
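In practice, the results listing and the per-item details above are combined into a download loop: list the items, then stream each artifact before its `download_url` expires (the URLs are only valid for one hour, as the schema descriptions note). A sketch follows, in which the single-value `status__in` filter and the output file naming are assumptions.

```python
# Sketch only: page through a run's results and download each artifact of the
# succeeded items. download_url values expire after one hour, so they are used
# immediately rather than stored.
import pathlib

import requests

def download_succeeded_artifacts(base_url, token, application_run_id, out_dir):
    headers = {"Authorization": f"Bearer {token}"}
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    page = 1
    while True:
        response = requests.get(
            f"{base_url}/v1/runs/{application_run_id}/results",
            # Passing status__in as a single value is an assumption; the docs
            # only type the parameter as "any".
            params={"status__in": "SUCCEEDED", "page": page, "page_size": 50},
            headers=headers,
            timeout=30,
        )
        response.raise_for_status()
        items = response.json()
        if not items:
            break
        for item in items:
            for artifact in item["output_artifacts"]:
                # Artifact names may contain characters such as ":"; sanitize.
                name = artifact["name"].replace(":", "_")
                target = out / f"{item['reference']}-{name}"
                with requests.get(artifact["download_url"], stream=True, timeout=60) as dl:
                    dl.raise_for_status()
                    with open(target, "wb") as fh:
                        for chunk in dl.iter_content(chunk_size=1 << 20):
                            fh.write(chunk)
        page += 1
```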
- -#### Parameters - -|Name|In|Type|Required|Description| -|---|---|---|---|---| -|run_id|path|string(uuid)|true|The run id, returned by `POST /runs/` endpoint| -|external_id|path|string|true|The `external_id` that was defined for the item by the customer that triggered the run.| - -> Example responses - -> 200 Response - -```json -{ - "custom_metadata": {}, - "custom_metadata_checksum": "f54fe109", - "error_code": "string", - "error_message": "This item was not processed because the threshold of 3 items finishing in error state (user or system error) was reached before the item was processed.", - "external_id": "slide_1", - "item_id": "4d8cd62e-a579-4dae-af8c-3172f96f8f7c", - "output": "NONE", - "output_artifacts": [ - { - "download_url": "http://example.com", - "error_code": "string", - "error_message": "string", - "metadata": {}, - "name": "tissue_qc:tiff_heatmap", - "output": "NONE", - "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b", - "state": "PENDING", - "termination_reason": "SUCCEEDED" - } - ], - "state": "PENDING", - "terminated_at": "2024-01-15T10:30:45.123Z", - "termination_reason": "SUCCEEDED" -} -``` - -#### Responses - -|Status|Meaning|Description|Schema| -|---|---|---|---| -|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Successful Response|[ItemResultReadResponse](#schemaitemresultreadresponse)| -|403|[Forbidden](https://tools.ietf.org/html/rfc7231#section-6.5.3)|Forbidden - You don't have permission to see this item|None| -|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Not Found - Item with given ID does not exist|None| -|422|[Unprocessable Entity](https://tools.ietf.org/html/rfc2518#section-10.3)|Validation Error|[HTTPValidationError](#schemahttpvalidationerror)| - - -To perform this operation, you must be authenticated by means of one of the following methods: -OAuth2AuthorizationCodeBearer - - -### put_item_custom_metadata_by_run_v1_runs__run_id__items__external_id__custom_metadata_put - - - -> Code samples - -```python -import requests -headers = { - 'Content-Type': 'application/json', - 'Accept': 'application/json', - 'Authorization': 'Bearer {access-token}' -} - -r = requests.put('/api/v1/runs/{run_id}/items/{external_id}/custom-metadata', headers = headers) - -print(r.json()) - -``` - -```javascript -const inputBody = '{ - "custom_metadata": { - "department": "D1", - "study": "abc-1" - }, - "custom_metadata_checksum": "f54fe109" -}'; -const headers = { - 'Content-Type':'application/json', - 'Accept':'application/json', - 'Authorization':'Bearer {access-token}' -}; - -fetch('/api/v1/runs/{run_id}/items/{external_id}/custom-metadata', -{ - method: 'PUT', - body: inputBody, - headers: headers -}) -.then(function(res) { - return res.json(); -}).then(function(body) { - console.log(body); -}); - -``` - -`PUT /v1/runs/{run_id}/items/{external_id}/custom-metadata` - -*Put Item Custom Metadata By Run* - -> Body parameter - -```json -{ - "custom_metadata": { - "department": "D1", - "study": "abc-1" - }, - "custom_metadata_checksum": "f54fe109" -} -``` - -#### Parameters - -|Name|In|Type|Required|Description| -|---|---|---|---|---| -|run_id|path|string(uuid)|true|The run id, returned by `POST /runs/` endpoint| -|external_id|path|string|true|The `external_id` that was defined for the item by the customer that triggered the run.| -|body|body|[CustomMetadataUpdateRequest](#schemacustommetadataupdaterequest)|true|none| - -> Example responses - -> 200 Response - -```json -null -``` - -#### Responses - 
-|Status|Meaning|Description|Schema| -|---|---|---|---| -|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Successful Response|Inline| -|422|[Unprocessable Entity](https://tools.ietf.org/html/rfc2518#section-10.3)|Validation Error|[HTTPValidationError](#schemahttpvalidationerror)| - -#### Response Schema - - -To perform this operation, you must be authenticated by means of one of the following methods: -OAuth2AuthorizationCodeBearer - - -## Schemas - -### ApplicationReadResponse - - - - - - -```json -{ - "application_id": "he-tme", - "description": "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment.", - "name": "Atlas H&E-TME", - "regulatory_classes": [ - "RUO" - ], - "versions": [ - { - "number": "1.0.0", - "released_at": "2025-09-15T10:30:45.123Z" - } - ] -} - -``` - -ApplicationReadResponse - -#### Properties - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|application_id|string|true|none|Application ID| -|description|string|true|none|Describing what the application can do| -|name|string|true|none|Application display name| -|regulatory_classes|[string]|true|none|Regulatory classes, to which the applications comply with. Possible values include: RUO, IVDR, FDA.| -|versions|[[ApplicationVersion](#schemaapplicationversion)]|true|none|All version numbers available to the user| - -### ApplicationReadShortResponse - - - - - - -```json -{ - "application_id": "he-tme", - "description": "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment.", - "latest_version": { - "number": "1.0.0", - "released_at": "2025-09-15T10:30:45.123Z" - }, - "name": "Atlas H&E-TME", - "regulatory_classes": [ - "RUO" - ] -} - -``` - -ApplicationReadShortResponse - -#### Properties - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|application_id|string|true|none|Application ID| -|description|string|true|none|Describing what the application can do| -|latest_version|any|false|none|The version with highest version number available to the user| - -anyOf - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|[ApplicationVersion](#schemaapplicationversion)|false|none|none| - -or - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|null|false|none|none| - -continued - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|name|string|true|none|Application display name| -|regulatory_classes|[string]|true|none|Regulatory classes, to which the applications comply with. 
Possible values include: RUO, IVDR, FDA.| - -### ApplicationVersion - - - - - - -```json -{ - "number": "1.0.0", - "released_at": "2025-09-15T10:30:45.123Z" -} - -``` - -ApplicationVersion - -#### Properties - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|number|string|true|none|The number of the latest version| -|released_at|string(date-time)|true|none|The timestamp for when the application version was made available in the Platform| - -### ArtifactOutput - - - - - - -```json -"NONE" - -``` - -ArtifactOutput - -#### Properties - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|ArtifactOutput|string|false|none|none| - -##### Enumerated Values - -|Property|Value| -|---|---| -|ArtifactOutput|NONE| -|ArtifactOutput|AVAILABLE| -|ArtifactOutput|DELETED_BY_USER| -|ArtifactOutput|DELETED_BY_SYSTEM| - -### ArtifactState - - - - - - -```json -"PENDING" - -``` - -ArtifactState - -#### Properties - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|ArtifactState|string|false|none|none| - -##### Enumerated Values - -|Property|Value| -|---|---| -|ArtifactState|PENDING| -|ArtifactState|PROCESSING| -|ArtifactState|TERMINATED| - -### ArtifactTerminationReason - - - - - - -```json -"SUCCEEDED" - -``` - -ArtifactTerminationReason - -#### Properties - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|ArtifactTerminationReason|string|false|none|none| - -##### Enumerated Values - -|Property|Value| -|---|---| -|ArtifactTerminationReason|SUCCEEDED| -|ArtifactTerminationReason|USER_ERROR| -|ArtifactTerminationReason|SYSTEM_ERROR| -|ArtifactTerminationReason|SKIPPED| - -### CustomMetadataUpdateRequest - - - - - - -```json -{ - "custom_metadata": { - "department": "D1", - "study": "abc-1" - }, - "custom_metadata_checksum": "f54fe109" -} - -``` - -CustomMetadataUpdateRequest - -#### Properties - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|custom_metadata|any|false|none|JSON metadata that should be set for the run| - -anyOf - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|object|false|none|none| - -or - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|null|false|none|none| - -continued - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|custom_metadata_checksum|any|false|none|Optional field to verify that the latest custom metadata was known. 
If set to the checksum retrieved via the /runs endpoint, it must match the checksum of the current value in the database.| - -anyOf - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|string|false|none|none| - -or - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|null|false|none|none| - -### HTTPValidationError - - - - - - -```json -{ - "detail": [ - { - "loc": [ - "string" - ], - "msg": "string", - "type": "string" - } - ] -} - -``` - -HTTPValidationError - -#### Properties - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|detail|[[ValidationError](#schemavalidationerror)]|false|none|none| - -### InputArtifact - - - - - - -```json -{ - "metadata_schema": {}, - "mime_type": "image/tiff", - "name": "string" -} - -``` - -InputArtifact - -#### Properties - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|metadata_schema|object|true|none|none| -|mime_type|string|true|none|none| -|name|string|true|none|none| - -### InputArtifactCreationRequest - - - - - - -```json -{ - "download_url": "https://example.com/case-no-1-slide.tiff", - "metadata": { - "checksum_base64_crc32c": "752f9554", - "height": 2000, - "height_mpp": 0.5, - "width": 10000, - "width_mpp": 0.5 - }, - "name": "input_slide" -} - -``` - -InputArtifactCreationRequest - -#### Properties - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|download_url|string(uri)|true|none|[Signed URL](https://cloud.google.com/cdn/docs/using-signed-urls) to the input artifact file. The URL should be valid for at least 6 days from the payload submission time.| -|metadata|object|true|none|The metadata of the artifact, required by the application version. The JSON schema of the metadata can be requested by `/v1/versions/{application_version_id}`. The schema is located in `input_artifacts.[].metadata_schema`| -|name|string|true|none|Type of artifact. For Atlas H&E-TME, use "input_slide"| - -### ItemCreationRequest - - - - - - -```json -{ - "custom_metadata": { - "case": "abc" - }, - "external_id": "slide_1", - "input_artifacts": [ - { - "download_url": "https://example-bucket.s3.amazonaws.com/slide1.tiff", - "metadata": { - "checksum_base64_crc32c": "64RKKA==", - "height_px": 87761, - "media-type": "image/tiff", - "resolution_mpp": 0.2628238, - "specimen": { - "disease": "LUNG_CANCER", - "tissue": "LUNG" - }, - "staining_method": "H&E", - "width_px": 136223 - }, - "name": "input_slide" - } - ] -} - -``` - -ItemCreationRequest - -#### Properties - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|custom_metadata|any|false|none|Optional JSON custom_metadata to store additional information alongside an item.| - -anyOf - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|object|false|none|none| - -or - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|null|false|none|none| - -continued - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|external_id|string|true|none|Unique identifier for this item within the run. Used for referencing items. Must be unique across all items in the same run| -|input_artifacts|[[InputArtifactCreationRequest](#schemainputartifactcreationrequest)]|true|none|List of input artifacts for this item. 
For Atlas H&E-TME, typically contains one artifact (the slide image)| +|application_id|string|true|none|Application ID| +|description|string|true|none|Application documentations| +|name|string|true|none|Application display name| +|regulatory_classes|[string]|true|none|Regulatory class, to which the applications compliance| -### ItemOutput +### ApplicationRunStatus @@ -2522,26 +1270,32 @@ continued ```json -"NONE" +"CANCELED_SYSTEM" ``` -ItemOutput +ApplicationRunStatus #### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|ItemOutput|string|false|none|none| +|ApplicationRunStatus|string|false|none|none| ##### Enumerated Values |Property|Value| |---|---| -|ItemOutput|NONE| -|ItemOutput|FULL| +|ApplicationRunStatus|CANCELED_SYSTEM| +|ApplicationRunStatus|CANCELED_USER| +|ApplicationRunStatus|COMPLETED| +|ApplicationRunStatus|COMPLETED_WITH_ERROR| +|ApplicationRunStatus|RECEIVED| +|ApplicationRunStatus|REJECTED| +|ApplicationRunStatus|RUNNING| +|ApplicationRunStatus|SCHEDULED| -### ItemResultReadResponse +### ApplicationVersionReadResponse @@ -2550,149 +1304,64 @@ ItemOutput ```json { - "custom_metadata": {}, - "custom_metadata_checksum": "f54fe109", - "error_code": "string", - "error_message": "This item was not processed because the threshold of 3 items finishing in error state (user or system error) was reached before the item was processed.", - "external_id": "slide_1", - "item_id": "4d8cd62e-a579-4dae-af8c-3172f96f8f7c", - "output": "NONE", + "application_id": "string", + "application_version_id": "h-e-tme:v0.0.1", + "changelog": "string", + "created_at": "2019-08-24T14:15:22Z", + "flow_id": "0746f03b-16cc-49fb-9833-df3713d407d2", + "input_artifacts": [ + { + "metadata_schema": {}, + "mime_type": "image/tiff", + "name": "string" + } + ], "output_artifacts": [ { - "download_url": "http://example.com", - "error_code": "string", - "error_message": "string", - "metadata": {}, - "name": "tissue_qc:tiff_heatmap", - "output": "NONE", - "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b", - "state": "PENDING", - "termination_reason": "SUCCEEDED" + "metadata_schema": {}, + "mime_type": "application/vnd.apache.parquet", + "name": "string", + "scope": "ITEM" } ], - "state": "PENDING", - "terminated_at": "2024-01-15T10:30:45.123Z", - "termination_reason": "SUCCEEDED" + "version": "string" } ``` -ItemResultReadResponse +ApplicationVersionReadResponse #### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|custom_metadata|any|true|none|The custom_metadata of the item that has been provided by the user on run creation.| - -anyOf - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|object|false|none|none| - -or - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|null|false|none|none| - -continued - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|custom_metadata_checksum|any|false|none|The checksum of the `custom_metadata` field.Can be used in the `PUT /runs/{run-id}/items/{external_id}/custom_metadata`request to avoid unwanted override of the values in concurrent requests.| - -anyOf - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|string|false|none|none| - -or - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|null|false|none|none| - -continued - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|error_code|any|true|read-only|Error code 
describing the error that occurred during item processing.| - -anyOf - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|string|false|none|none| - -or - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|null|false|none|none| - -continued - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|error_message|any|false|none|The error message in case the `termination_reason` is in `USER_ERROR` or `SYSTEM_ERROR`| - -anyOf - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|string|false|none|none| - -or - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|null|false|none|none| - -continued - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|external_id|string|true|none|The external_id of the item from the user payload| -|item_id|string(uuid)|true|none|Item UUID generated by the Platform| -|output|[ItemOutput](#schemaitemoutput)|true|none|The output status of the item (NONE, FULL)| -|output_artifacts|[[OutputArtifactResultReadResponse](#schemaoutputartifactresultreadresponse)]|true|none|The list of the results generated by the application algorithm. The number of files and theirtypes depend on the particular application version, call `/v1/versions/{version_id}` to getthe details.| -|state|[ItemState](#schemaitemstate)|true|none|The item moves from `PENDING` to `PROCESSING` to `TERMINATED` state.When terminated, consult the `termination_reason` property to see whether it was successful.| -|terminated_at|any|false|none|Timestamp showing when the item reached a terminal state.| +|application_id|string|true|none|Application ID| +|application_version_id|string|true|none|Application version ID| +|changelog|string|true|none|Description of the changes relative to the previous version| +|created_at|string(date-time)|true|none|The timestamp when the application version was registered| +|flow_id|any|false|none|Flow ID, used internally by the platform| anyOf |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|» *anonymous*|string(date-time)|false|none|none| +|» *anonymous*|string(uuid)|false|none|none| or |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|» *anonymous*|null|false|none|none| - -continued - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|termination_reason|any|false|none|When the `state` is `TERMINATED` this will explain why`SUCCEEDED` -> Successful processing.`USER_ERROR` -> Failed because the provided input was invalid.`SYSTEM_ERROR` -> There was an error in the model or platform.`SKIPPED` -> Was cancelled| - -anyOf - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|[ItemTerminationReason](#schemaitemterminationreason)|false|none|none| +|» *anonymous*|null|false|none|none| -or +continued |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|» *anonymous*|null|false|none|none| +|input_artifacts|[[InputArtifactReadResponse](#schemainputartifactreadresponse)]|true|none|List of the input fields, provided by the User| +|output_artifacts|[[OutputArtifactReadResponse](#schemaoutputartifactreadresponse)]|true|none|List of the output fields, generated by the application| +|version|string|true|none|Semantic version of the application| -### ItemState +### HTTPValidationError @@ -2700,27 +1369,29 @@ or ```json -"PENDING" +{ + "detail": [ + { + "loc": [ + "string" + ], + "msg": "string", + "type": "string" + } + 
] +} ``` -ItemState +HTTPValidationError #### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|ItemState|string|false|none|none| - -##### Enumerated Values - -|Property|Value| -|---|---| -|ItemState|PENDING| -|ItemState|PROCESSING| -|ItemState|TERMINATED| +|detail|[[ValidationError](#schemavalidationerror)]|false|none|none| -### ItemTerminationReason +### InputArtifactCreationRequest @@ -2728,28 +1399,31 @@ ItemState ```json -"SUCCEEDED" +{ + "download_url": "https://example.com/case-no-1-slide.tiff", + "metadata": { + "checksum_base64_crc32c": "752f9554", + "height": 2000, + "height_mpp": 0.5, + "width": 10000, + "width_mpp": 0.5 + }, + "name": "slide" +} ``` -ItemTerminationReason +InputArtifactCreationRequest #### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|ItemTerminationReason|string|false|none|none| - -##### Enumerated Values - -|Property|Value| -|---|---| -|ItemTerminationReason|SUCCEEDED| -|ItemTerminationReason|USER_ERROR| -|ItemTerminationReason|SYSTEM_ERROR| -|ItemTerminationReason|SKIPPED| +|download_url|string(uri)|true|none|[Signed URL](https://cloud.google.com/cdn/docs/using-signed-urls) to the input artifact file. The URL should be valid for at least 6 days from the payload submission time.| +|metadata|object|true|none|The metadata of the artifact, required by the application version. The JSON schema of the metadata can be requested by `/v1/versions/{application_version_id}`. The schema is located in `input_artifacts.[].metadata_schema`| +|name|string|true|none|The artifact name according to the application version. List of required artifacts is returned by `/v1/versions/{application_version_id}`. The artifact names are located in the `input_artifacts.[].name` value| -### MeReadResponse +### InputArtifactReadResponse @@ -2758,42 +1432,24 @@ ItemTerminationReason ```json { - "organization": { - "aignostics_bucket_hmac_access_key_id": "YOUR_HMAC_ACCESS_KEY_ID", - "aignostics_bucket_hmac_secret_access_key": "YOUR/HMAC/SECRET_ACCESS_KEY", - "aignostics_bucket_name": "aignostics-platform-bucket", - "aignostics_bucket_protocol": "gs", - "aignostics_logfire_token": "your-logfire-token", - "aignostics_sentry_dsn": "https://2354s3#ewsha@o44.ingest.us.sentry.io/34345123432", - "display_name": "Aignostics GmbH", - "id": "org_123456", - "name": "aignx" - }, - "user": { - "email": "user@domain.com", - "email_verified": true, - "family_name": "Doe", - "given_name": "Jane", - "id": "auth0|123456", - "name": "Jane Doe", - "nickname": "jdoe", - "picture": "https://example.com/jdoe.jpg", - "updated_at": "2023-10-05T14:48:00.000Z" - } + "metadata_schema": {}, + "mime_type": "image/tiff", + "name": "string" } ``` -MeReadResponse +InputArtifactReadResponse #### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|organization|[OrganizationReadResponse](#schemaorganizationreadresponse)|true|none|Part of response schema for Organization object in `Get current user` endpoint.This model corresponds to the response schema returned fromAuth0 GET /v2/organizations/{id} endpoint, flattens out the metadata outand doesn't return branding or token_quota objects.For details, see:https://auth0.com/docs/api/management/v2/organizations/get-organizations-by-id#### Configuration for integrating with Aignostics Platform services.The Aignostics Platform API requires signed URLs for input artifacts (slide images). To simplify this process,Aignostics provides a dedicated storage bucket. 
The HMAC credentials below grant read and writeaccess to this bucket, allowing you to upload files and generate the signed URLs needed for API calls.Additionally, logging and error reporting tokens enable Aignostics to provide better support and monitorsystem performance for your integration.| -|user|[UserReadResponse](#schemauserreadresponse)|true|none|Part of response schema for User object in `Get current user` endpoint.This model corresponds to the response schema returned fromAuth0 GET /v2/users/{id} endpoint.For details, see:https://auth0.com/docs/api/management/v2/users/get-users-by-id| +|metadata_schema|object|true|none|none| +|mime_type|string|true|none|none| +|name|string|true|none|none| -### OrganizationReadResponse +### ItemCreationRequest @@ -2802,51 +1458,67 @@ MeReadResponse ```json { - "aignostics_bucket_hmac_access_key_id": "YOUR_HMAC_ACCESS_KEY_ID", - "aignostics_bucket_hmac_secret_access_key": "YOUR/HMAC/SECRET_ACCESS_KEY", - "aignostics_bucket_name": "aignostics-platform-bucket", - "aignostics_bucket_protocol": "gs", - "aignostics_logfire_token": "your-logfire-token", - "aignostics_sentry_dsn": "https://2354s3#ewsha@o44.ingest.us.sentry.io/34345123432", - "display_name": "Aignostics GmbH", - "id": "org_123456", - "name": "aignx" + "input_artifacts": [ + { + "download_url": "https://example.com/case-no-1-slide.tiff", + "metadata": { + "checksum_base64_crc32c": "752f9554", + "height": 2000, + "height_mpp": 0.5, + "width": 10000, + "width_mpp": 0.5 + }, + "name": "slide" + } + ], + "reference": "case-no-1" } ``` -OrganizationReadResponse +ItemCreationRequest #### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|aignostics_bucket_hmac_access_key_id|string|true|none|HMAC access key ID for the Aignostics-provided storage bucket. Used to authenticate requests for uploading files and generating signed URLs| -|aignostics_bucket_hmac_secret_access_key|string|true|none|HMAC secret access key paired with the access key ID. Keep this credential secure.| -|aignostics_bucket_name|string|true|none|Name of the bucket provided by Aignostics for storing input artifacts (slide images)| -|aignostics_bucket_protocol|string|true|none|Protocol to use for bucket access. Defines the URL scheme for connecting to the storage service| -|aignostics_logfire_token|string|true|none|Authentication token for Logfire observability service. Enables sending application logs and performance metrics to Aignostics for monitoring and support| -|aignostics_sentry_dsn|string|true|none|Data Source Name (DSN) for Sentry error tracking service. Allows automatic reporting of errors and exceptions to Aignostics support team| -|display_name|any|false|none|Public organization name (E.g. “Aignostics GmbH”)| +|input_artifacts|[[InputArtifactCreationRequest](#schemainputartifactcreationrequest)]|true|none|All the input files of the item, required by the application version| +|reference|string|true|none|The ID of the slide provided by the caller. 
The reference should be unique across all items of the application run| -anyOf +### ItemResultReadResponse -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|string|false|none|none| -or -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|null|false|none|none| -continued + + +```json +{ + "application_run_id": "53c0c6ed-e767-49c4-ad7c-b1a749bf7dfe", + "error": "string", + "item_id": "4d8cd62e-a579-4dae-af8c-3172f96f8f7c", + "output_artifacts": [ + { + "download_url": "http://example.com", + "metadata": {}, + "name": "string", + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" + } + ], + "reference": "string", + "status": "PENDING" +} + +``` + +ItemResultReadResponse + +#### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|id|string|true|none|Unique organization identifier| -|name|any|false|none|Organization name (E.g. “aignx”)| +|application_run_id|string(uuid)|true|none|Application run UUID to which the item belongs| +|error|any|true|none|The error message in case the item is in `error_system` or `error_user` state| anyOf @@ -2860,7 +1532,16 @@ or |---|---|---|---|---| |» *anonymous*|null|false|none|none| -### OutputArtifact +continued + +|Name|Type|Required|Restrictions|Description| +|---|---|---|---|---| +|item_id|string(uuid)|true|none|Item UUID generated by the Platform| +|output_artifacts|[[OutputArtifactResultReadResponse](#schemaoutputartifactresultreadresponse)]|true|none|The list of the results generated by the application algorithm. The number of files and theirtypes depend on the particular application version, call `/v1/versions/{version_id}` to getthe details.| +|reference|string|true|none|The reference of the item from the user payload| +|status|[ItemStatus](#schemaitemstatus)|true|none|When the item is not processed yet, the status is set to `pending`.When the item is successfully finished, status is set to `succeeded`, and the processing resultsbecome available for download in `output_artifacts` field.When the item processing is failed because the provided item is invalid, the status is set to`error_user`. When the item processing failed because of the error in the model or platform,the status is set to `error_system`. 

-anyOf

+### ItemResultReadResponse

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|» *anonymous*|string|false|none|none|

-or

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|» *anonymous*|null|false|none|none|

-continued
+
+
+```json
+{
+  "application_run_id": "53c0c6ed-e767-49c4-ad7c-b1a749bf7dfe",
+  "error": "string",
+  "item_id": "4d8cd62e-a579-4dae-af8c-3172f96f8f7c",
+  "output_artifacts": [
+    {
+      "download_url": "http://example.com",
+      "metadata": {},
+      "name": "string",
+      "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b"
+    }
+  ],
+  "reference": "string",
+  "status": "PENDING"
+}
+
+```
+
+ItemResultReadResponse
+
+#### Properties

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|id|string|true|none|Unique organization identifier|
-|name|any|false|none|Organization name (E.g. “aignx”)|
+|application_run_id|string(uuid)|true|none|Application run UUID to which the item belongs|
+|error|any|true|none|The error message in case the item is in `error_system` or `error_user` state|

anyOf

@@ -2860,7 +1532,16 @@ or
|---|---|---|---|---|
|» *anonymous*|null|false|none|none|

-### OutputArtifact
+continued
+
+|Name|Type|Required|Restrictions|Description|
+|---|---|---|---|---|
+|item_id|string(uuid)|true|none|Item UUID generated by the Platform|
+|output_artifacts|[[OutputArtifactResultReadResponse](#schemaoutputartifactresultreadresponse)]|true|none|The list of the results generated by the application algorithm. The number of files and their types depend on the particular application version; call `/v1/versions/{version_id}` to get the details.|
+|reference|string|true|none|The reference of the item from the user payload|
+|status|[ItemStatus](#schemaitemstatus)|true|none|When the item is not processed yet, the status is set to `pending`. When the item is successfully finished, the status is set to `succeeded`, and the processing results become available for download in the `output_artifacts` field. When item processing fails because the provided item is invalid, the status is set to `error_user`. When item processing fails because of an error in the model or platform, the status is set to `error_system`. When the application_run is canceled, the status of all pending items is set to either `cancelled_user` or `cancelled_system`.|
+
+### ItemStatus

@@ -2868,29 +1549,30 @@ or

```json
-{
-  "metadata_schema": {},
-  "mime_type": "application/vnd.apache.parquet",
-  "name": "string",
-  "scope": "ITEM",
-  "visibility": "INTERNAL"
-}
+"PENDING"

```

-OutputArtifact
+ItemStatus

#### Properties

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|metadata_schema|object|true|none|none|
-|mime_type|string|true|none|none|
-|name|string|true|none|none|
-|scope|[OutputArtifactScope](#schemaoutputartifactscope)|true|none|none|
-|visibility|[OutputArtifactVisibility](#schemaoutputartifactvisibility)|true|none|none|
+|ItemStatus|string|false|none|none|

-### OutputArtifactResultReadResponse
+##### Enumerated Values
+
+|Property|Value|
+|---|---|
+|ItemStatus|PENDING|
+|ItemStatus|CANCELED_USER|
+|ItemStatus|CANCELED_SYSTEM|
+|ItemStatus|ERROR_USER|
+|ItemStatus|ERROR_SYSTEM|
+|ItemStatus|SUCCEEDED|
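
A short illustrative sketch of branching on these `ItemStatus` values when walking item results; the field names come from the `ItemResultReadResponse` schema above, while the handling policy itself is just an example:

```python
# Sketch only: react to the ItemStatus of an ItemResultReadResponse dict.
def handle_item_result(item: dict) -> None:
    status = item["status"]
    if status == "SUCCEEDED":
        # Results are ready; signed download URLs sit in output_artifacts.
        for artifact in item["output_artifacts"]:
            print(f"{item['reference']}: {artifact['name']} -> {artifact['download_url']}")
    elif status in ("ERROR_USER", "ERROR_SYSTEM"):
        # `error` is only populated in the error states.
        print(f"{item['reference']} failed with {status}: {item['error']}")
    elif status in ("CANCELED_USER", "CANCELED_SYSTEM"):
        print(f"{item['reference']} was canceled ({status})")
    else:  # PENDING
        print(f"{item['reference']} is still pending")
```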
+
+### MeReadResponse

@@ -2899,62 +1581,76 @@ OutputArtifact

```json
{
-  "download_url": "http://example.com",
-  "error_code": "string",
-  "error_message": "string",
-  "metadata": {},
-  "name": "tissue_qc:tiff_heatmap",
-  "output": "NONE",
-  "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b",
-  "state": "PENDING",
-  "termination_reason": "SUCCEEDED"
+  "organization": {
+    "aignostics_bucket_hmac_access_key_id": "string",
+    "aignostics_bucket_hmac_secret_access_key": "string",
+    "aignostics_bucket_name": "string",
+    "aignostics_bucket_protocol": "string",
+    "aignostics_logfire_token": "string",
+    "aignostics_sentry_dsn": "string",
+    "display_name": "string",
+    "id": "string",
+    "name": "string"
+  },
+  "user": {
+    "email": "string",
+    "email_verified": true,
+    "family_name": "string",
+    "given_name": "string",
+    "id": "string",
+    "name": "string",
+    "nickname": "string",
+    "picture": "string",
+    "updated_at": "2019-08-24T14:15:22Z"
+  }
}

```

-OutputArtifactResultReadResponse
+MeReadResponse

#### Properties

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|download_url|any|true|none|The download URL to the output file. The URL is valid for 1 hour after the endpoint is called. A new URL is generated every time the endpoint is called.|
-
-anyOf
+|organization|[OrganizationReadResponse](#schemaorganizationreadresponse)|true|none|This model corresponds to the response schema returned from Auth0 GET /v2/organizations/{id} endpoint, flattens the metadata out, and doesn't return branding or token_quota objects. For details, see: https://auth0.com/docs/api/management/v2/organizations/get-organizations-by-id|
+|user|[UserReadResponse](#schemauserreadresponse)|true|none|This model corresponds to the response schema returned from Auth0 GET /v2/users/{id} endpoint. For details, see: https://auth0.com/docs/api/management/v2/users/get-users-by-id|

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|» *anonymous*|string(uri)|false|none|none|
+### OrganizationReadResponse

-or

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|» *anonymous*|null|false|none|none|

-continued

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|error_code|any|true|read-only|Error code describing the error that occurred during artifact processing.|

-anyOf

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|» *anonymous*|string|false|none|none|
+```json
+{
+  "aignostics_bucket_hmac_access_key_id": "string",
+  "aignostics_bucket_hmac_secret_access_key": "string",
+  "aignostics_bucket_name": "string",
+  "aignostics_bucket_protocol": "string",
+  "aignostics_logfire_token": "string",
+  "aignostics_sentry_dsn": "string",
+  "display_name": "string",
+  "id": "string",
+  "name": "string"
+}

-or
+```

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|» *anonymous*|null|false|none|none|
+OrganizationReadResponse

-continued
+#### Properties

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|error_message|any|false|none|Error message when artifact is in error state|
+|aignostics_bucket_hmac_access_key_id|string|true|none|none|
+|aignostics_bucket_hmac_secret_access_key|string|true|none|none|
+|aignostics_bucket_name|string|true|none|none|
+|aignostics_bucket_protocol|string|true|none|none|
+|aignostics_logfire_token|string|true|none|none|
+|aignostics_sentry_dsn|string|true|none|none|
+|display_name|any|false|none|none|

anyOf

@@ -2972,35 +1668,14 @@ continued

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|metadata|any|false|none|The metadata of the output artifact, provided by the application. Can only be None if the artifact itself was deleted.|
-
-anyOf
-
-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|» *anonymous*|object|false|none|none|
-
-or
-
-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|» *anonymous*|null|false|none|none|
-
-continued
-
-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|name|string|true|none|Name of the output from the output schema from the `/v1/versions/{version_id}` endpoint.|
-|output|[ArtifactOutput](#schemaartifactoutput)|true|none|The output status of the artifact (NONE, FULL)|
-|output_artifact_id|string(uuid)|true|none|The Id of the artifact.
Used internally| -|state|[ArtifactState](#schemaartifactstate)|true|none|The current state of the artifact (PENDING, PROCESSING, TERMINATED)| -|termination_reason|any|false|none|The reason for termination when state is TERMINATED| +|id|string|true|none|none| +|name|any|false|none|none| anyOf |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|» *anonymous*|[ArtifactTerminationReason](#schemaartifactterminationreason)|false|none|none| +|» *anonymous*|string|false|none|none| or @@ -3008,34 +1683,7 @@ or |---|---|---|---|---| |» *anonymous*|null|false|none|none| -### OutputArtifactScope - - - - - - -```json -"ITEM" - -``` - -OutputArtifactScope - -#### Properties - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|OutputArtifactScope|string|false|none|none| - -##### Enumerated Values - -|Property|Value| -|---|---| -|OutputArtifactScope|ITEM| -|OutputArtifactScope|GLOBAL| - -### OutputArtifactVisibility +### OutputArtifactReadResponse @@ -3043,26 +1691,27 @@ OutputArtifactScope ```json -"INTERNAL" +{ + "metadata_schema": {}, + "mime_type": "application/vnd.apache.parquet", + "name": "string", + "scope": "ITEM" +} ``` -OutputArtifactVisibility +OutputArtifactReadResponse #### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|OutputArtifactVisibility|string|false|none|none| - -##### Enumerated Values - -|Property|Value| -|---|---| -|OutputArtifactVisibility|INTERNAL| -|OutputArtifactVisibility|EXTERNAL| +|metadata_schema|object|true|none|none| +|mime_type|string|true|none|none| +|name|string|true|none|none| +|scope|[OutputArtifactScope](#schemaoutputartifactscope)|true|none|none| -### RunCreationRequest +### OutputArtifactResultReadResponse @@ -3071,72 +1720,27 @@ OutputArtifactVisibility ```json { - "application_id": "he-tme", - "custom_metadata": { - "department": "D1", - "study": "abc-1" - }, - "items": [ - { - "external_id": "slide_1", - "input_artifacts": [ - { - "download_url": "https://example-bucket.s3.amazonaws.com/slide1.tiff?signature=...", - "metadata": { - "checksum_base64_crc32c": "64RKKA==", - "height_px": 87761, - "media-type": "image/tiff", - "resolution_mpp": 0.2628238, - "specimen": { - "disease": "LUNG_CANCER", - "tissue": "LUNG" - }, - "staining_method": "H&E", - "width_px": 136223 - }, - "name": "input_slide" - } - ] - } - ], - "version_number": "1.0.0-beta1" + "download_url": "http://example.com", + "metadata": {}, + "name": "string", + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" } ``` -RunCreationRequest +OutputArtifactResultReadResponse #### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|application_id|string|true|none|Unique ID for the application to use for processing| -|custom_metadata|any|false|none|Optional JSON metadata to store additional information alongside the run| - -anyOf - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|object|false|none|none| - -or - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|null|false|none|none| - -continued - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|items|[[ItemCreationRequest](#schemaitemcreationrequest)]|true|none|List of items (slides) to process. Each item represents a whole slide image (WSI) with its associated metadata and artifacts| -|version_number|any|false|none|Semantic version of the application to use for processing. 
If not provided, the latest available version will be used|
+|download_url|any|true|none|The download URL to the output file. The URL is valid for 1 hour after the endpoint is called. A new URL is generated every time the endpoint is called.|

anyOf

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|» *anonymous*|string|false|none|none|
+|» *anonymous*|string(uri)|false|none|none|

or

@@ -3144,7 +1748,15 @@ or
|---|---|---|---|---|
|» *anonymous*|null|false|none|none|

-### RunCreationResponse
+continued
+
+|Name|Type|Required|Restrictions|Description|
+|---|---|---|---|---|
+|metadata|object|true|none|The metadata of the output artifact, provided by the application|
+|name|string|true|none|Name of the output from the output schema from the `/v1/versions/{version_id}` endpoint.|
+|output_artifact_id|string(uuid)|true|none|The Id of the artifact. Used internally|
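
Because each call that returns an `OutputArtifactResultReadResponse` mints fresh signed URLs that expire one hour later, it is safest to fetch results immediately before downloading. A sketch under stated assumptions — the base URL and bearer token are placeholders, and only the `/v1/runs/{run_id}/results` path is taken from this document:

```python
# Sketch only: download output artifacts while their signed URLs are fresh.
import pathlib

import requests

API_BASE = "https://platform.example.com"  # placeholder, not a documented URL
HEADERS = {"Authorization": "Bearer <token>"}  # token obtained out of band


def download_results(run_id: str, out_dir: str) -> None:
    # Re-fetch right before downloading: every call issues new one-hour URLs.
    results = requests.get(
        f"{API_BASE}/v1/runs/{run_id}/results", headers=HEADERS, timeout=30
    )
    results.raise_for_status()
    for item in results.json():
        for artifact in item["output_artifacts"]:
            if not artifact["download_url"]:
                continue  # download_url may be null, e.g. for failed items
            target = pathlib.Path(out_dir) / f"{item['reference']}-{artifact['name']}"
            with requests.get(artifact["download_url"], stream=True, timeout=60) as blob:
                blob.raise_for_status()
                with open(target, "wb") as fh:
                    for chunk in blob.iter_content(chunk_size=1 << 20):
                        fh.write(chunk)
```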
"property2": { + "data": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "metadata": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" + } + } +} ``` -RunOutput +PayloadItem #### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|RunOutput|string|false|none|none| - -##### Enumerated Values +|input_artifacts|object|true|none|none| +|» **additionalProperties**|[PayloadInputArtifact](#schemapayloadinputartifact)|false|none|none| +|item_id|string(uuid)|true|none|none| +|output_artifacts|object|true|none|none| +|» **additionalProperties**|[PayloadOutputArtifact](#schemapayloadoutputartifact)|false|none|none| -|Property|Value| -|---|---| -|RunOutput|NONE| -|RunOutput|PARTIAL| -|RunOutput|FULL| - -### RunReadResponse +### PayloadOutputArtifact @@ -3237,145 +1880,199 @@ RunOutput ```json { - "application_id": "he-tme", - "custom_metadata": { - "department": "D1", - "study": "abc-1" + "data": { + "download_url": "http://example.com", + "upload_url": "http://example.com" }, - "custom_metadata_checksum": "f54fe109", - "error_code": "SCHEDULER.ITEMS_WITH_ERROR_THRESHOLD_REACHED", - "error_message": "Run canceled given errors on more than 10 items.", - "output": "NONE", - "run_id": "dded282c-8ebd-44cf-8ba5-9a234973d1ec", - "state": "PENDING", - "statistics": { - "item_count": 0, - "item_pending_count": 0, - "item_processing_count": 0, - "item_skipped_count": 0, - "item_succeeded_count": 0, - "item_system_error_count": 0, - "item_user_error_count": 0 + "metadata": { + "download_url": "http://example.com", + "upload_url": "http://example.com" }, - "submitted_at": "2019-08-24T14:15:22Z", - "submitted_by": "auth0|123456", - "terminated_at": "2024-01-15T10:30:45.123Z", - "termination_reason": "ALL_ITEMS_PROCESSED", - "version_number": "0.4.4" + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" } ``` -RunReadResponse +PayloadOutputArtifact #### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|application_id|string|true|none|Application id| -|custom_metadata|any|false|none|Optional JSON metadata that was stored in alongside the run by the user| +|data|[TransferUrls](#schematransferurls)|true|none|none| +|metadata|[TransferUrls](#schematransferurls)|true|none|none| +|output_artifact_id|string(uuid)|true|none|none| -anyOf +### RunCreationRequest -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|object|false|none|none| -or -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|» *anonymous*|null|false|none|none| -continued -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|custom_metadata_checksum|any|false|none|The checksum of the `custom_metadata` field. 
Can be used in the `PUT /runs/{run-id}/custom_metadata` request to avoid unwanted override of the values in concurrent requests.|

-anyOf

+```json
+{
+  "application_version_id": "h-e-tme:v1.2.3",
+  "items": [
+    {
+      "input_artifacts": [
+        {
+          "download_url": "https://example.com/case-no-1-slide.tiff",
+          "metadata": {
+            "checksum_base64_crc32c": "752f9554",
+            "height": 2000,
+            "height_mpp": 0.5,
+            "width": 10000,
+            "width_mpp": 0.5
+          },
+          "name": "slide"
+        }
+      ],
+      "reference": "case-no-1"
+    }
+  ]
+}

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|» *anonymous*|string|false|none|none|
+```

-or
+RunCreationRequest
+
+#### Properties

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|» *anonymous*|null|false|none|none|
+|application_version_id|string|true|none|Application version ID|
+|items|[[ItemCreationRequest](#schemaitemcreationrequest)]|true|none|List of the items to be processed by the application|
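
To tie the request and response schemas together, here is a hedged end-to-end sketch: submit a `RunCreationRequest` and poll until the run leaves its transient statuses. The payload shape follows the schema above; the `POST /v1/runs` and `GET /v1/runs/{run_id}` paths, the base URL, and the token handling are assumptions rather than facts confirmed by this diff:

```python
# Sketch only: create an application run and wait for a terminal status.
import time

import requests

API_BASE = "https://platform.example.com"  # placeholder, not a documented URL
HEADERS = {"Authorization": "Bearer <token>"}  # token obtained out of band

run_request = {
    "application_version_id": "h-e-tme:v1.2.3",
    "items": [
        {
            "reference": "case-no-1",
            "input_artifacts": [
                {
                    "name": "slide",
                    "download_url": "https://example.com/case-no-1-slide.tiff",
                    "metadata": {
                        "checksum_base64_crc32c": "752f9554",
                        "width": 10000,
                        "height": 2000,
                        "width_mpp": 0.5,
                        "height_mpp": 0.5,
                    },
                }
            ],
        }
    ],
}

response = requests.post(f"{API_BASE}/v1/runs", json=run_request, headers=HEADERS, timeout=30)
response.raise_for_status()
run_id = response.json()["application_run_id"]

# `received`, `scheduled`, and `running` are the transient statuses described
# in RunReadResponse below; everything else is treated as terminal here.
status = "received"
while status in {"received", "scheduled", "running"}:
    time.sleep(30)
    run = requests.get(f"{API_BASE}/v1/runs/{run_id}", headers=HEADERS, timeout=30)
    run.raise_for_status()
    status = run.json()["status"].lower()

print(f"Run {run_id} finished with status {status!r}")
```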

-continued
+### RunCreationResponse

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|error_code|any|true|none|When the termination_reason is set to CANCELED_BY_SYSTEM, the error_code is set to give a structured description of the error.|

-anyOf

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|» *anonymous*|string|false|none|none|

-or

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|» *anonymous*|null|false|none|none|

-continued
+```json
+{
+  "application_run_id": "Application run id"
+}

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|error_message|any|true|none|When the termination_reason is set to CANCELED_BY_SYSTEM, the error_message is set to provide more insight into the error cause.|
+```

-anyOf
+RunCreationResponse
+
+#### Properties

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|» *anonymous*|string|false|none|none|
+|application_run_id|string(uuid)|false|none|none|

-or
+### RunReadResponse

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|» *anonymous*|null|false|none|none|

-continued

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|output|[RunOutput](#schemarunoutput)|true|none|The status of the output of the run. When 0 items are successfully processed the output is `NONE`; after one item is successfully processed, the value is set to `PARTIAL`. When all items of the run are successfully processed, the output is set to `FULL`.|
-|run_id|string(uuid)|true|none|UUID of the run|
-|state|[RunState](#schemarunstate)|true|none|When the run request is received by the Platform, the `state` of it is set to `PENDING`. The state changes to `PROCESSING` when at least one item is being processed. After `PROCESSING`, the state of the run can switch back to `PENDING` if there are no processing items, or to `TERMINATED` when the run has finished processing.|
-|statistics|[RunItemStatistics](#schemarunitemstatistics)|true|none|Aggregated statistics of the run execution|
-|submitted_at|string(date-time)|true|none|Timestamp showing when the run was triggered|
-|submitted_by|string|true|none|Id of the user who triggered the run|
-|terminated_at|any|false|none|Timestamp showing when the run reached a terminal state.|

-anyOf

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|» *anonymous*|string(date-time)|false|none|none|

-or
+```json
+{
+  "application_run_id": "53c0c6ed-e767-49c4-ad7c-b1a749bf7dfe",
+  "application_version_id": "string",
+  "organization_id": "string",
+  "status": "CANCELED_SYSTEM",
+  "triggered_at": "2019-08-24T14:15:22Z",
+  "triggered_by": "string",
+  "user_payload": {
+    "application_id": "string",
+    "application_run_id": "53c0c6ed-e767-49c4-ad7c-b1a749bf7dfe",
+    "global_output_artifacts": {
+      "property1": {
+        "data": {
+          "download_url": "http://example.com",
+          "upload_url": "http://example.com"
+        },
+        "metadata": {
+          "download_url": "http://example.com",
+          "upload_url": "http://example.com"
+        },
+        "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b"
+      },
+      "property2": {
+        "data": {
+          "download_url": "http://example.com",
+          "upload_url": "http://example.com"
+        },
+        "metadata": {
+          "download_url": "http://example.com",
+          "upload_url": "http://example.com"
+        },
+        "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b"
+      }
+    },
+    "items": [
+      {
+        "input_artifacts": {
+          "property1": {
+            "download_url": "http://example.com",
+            "input_artifact_id": "a4134709-460b-44b6-99b2-2d637f889159",
+            "metadata": {}
+          },
+          "property2": {
+            "download_url": "http://example.com",
+            "input_artifact_id": "a4134709-460b-44b6-99b2-2d637f889159",
+            "metadata": {}
+          }
+        },
+        "item_id": "4d8cd62e-a579-4dae-af8c-3172f96f8f7c",
+        "output_artifacts": {
+          "property1": {
+            "data": {
+              "download_url": "http://example.com",
+              "upload_url": "http://example.com"
+            },
+            "metadata": {
+              "download_url": "http://example.com",
+              "upload_url": "http://example.com"
+            },
+            "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b"
+          },
+          "property2": {
+            "data": {
+              "download_url": "http://example.com",
+              "upload_url": "http://example.com"
+            },
+            "metadata": {
+              "download_url": "http://example.com",
+              "upload_url": "http://example.com"
+            },
+            "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b"
+          }
+        }
+      }
+    ]
+  }
+}

-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|» *anonymous*|null|false|none|none|
+```

-continued
+RunReadResponse
+
+#### Properties

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|termination_reason|any|true|none|The termination reason of the run. When the run is not in `TERMINATED` state, the termination_reason is `null`. If all items of the run are processed (successfully or with an error), then termination_reason is set to `ALL_ITEMS_PROCESSED`. If the run is cancelled by the user, the value is set to `CANCELED_BY_USER`.
If the run reaches the threshold of failed items, the Platform cancels the run and sets the termination_reason to `CANCELED_BY_SYSTEM`.|
+|application_run_id|string(uuid)|true|none|UUID of the application run|
+|application_version_id|string|true|none|ID of the application version|
+|organization_id|string|true|none|Organization of the owner of the application run|
+|status|[ApplicationRunStatus](#schemaapplicationrunstatus)|true|none|When the application run request is received by the Platform, its `status` is set to `received`. It then transitions to `scheduled` when it is scheduled for processing. When the application run is scheduled, it processes the input items and generates the results incrementally. As soon as the first result is generated, the state changes to `running`. The results can be downloaded via the `/v1/runs/{run_id}/results` endpoint. When all items are processed and all results are generated, the application status is set to `completed`. If the processing is done but some items fail, the status is set to `completed_with_error`. When the application run request is rejected by the Platform before scheduling, it is transitioned to `rejected`. When the application run reaches the threshold of failed items, the whole application run is set to `canceled_system` and the remaining pending items are not processed. When the application run fails, the finished item results are available for download. If the application run is canceled by calling the `POST /v1/runs/{run_id}/cancel` endpoint, the processing of the items is stopped, and the application status is set to `cancelled_user`|
+|triggered_at|string(date-time)|true|none|Timestamp showing when the application run was triggered|
+|triggered_by|string|true|none|Id of the user who triggered the application run|
+|user_payload|any|false|none|Field used internally by the Platform|

anyOf

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|» *anonymous*|[RunTerminationReason](#schemarunterminationreason)|false|none|none|
+|» *anonymous*|[UserPayload](#schemauserpayload)|false|none|none|

or

@@ -3383,13 +2080,7 @@ or
|---|---|---|---|---|
|» *anonymous*|null|false|none|none|

-continued
-
-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|version_number|string|true|none|Application version number|
-
-### RunState
+### TransferUrls

@@ -3397,27 +2088,23 @@ continued

```json
-"PENDING"
+{
+  "download_url": "http://example.com",
+  "upload_url": "http://example.com"
+}

```

-RunState
+TransferUrls

#### Properties

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|RunState|string|false|none|none|
-
-##### Enumerated Values
-
-|Property|Value|
-|---|---|
-|RunState|PENDING|
-|RunState|PROCESSING|
-|RunState|TERMINATED|
+|download_url|string(uri)|true|none|none|
+|upload_url|string(uri)|true|none|none|

-### RunTerminationReason
+### UserPayload

@@ -3425,25 +2112,106 @@ RunState

```json
-"ALL_ITEMS_PROCESSED"
+{
+  "application_id": "string",
+  "application_run_id": "53c0c6ed-e767-49c4-ad7c-b1a749bf7dfe",
+  "global_output_artifacts": {
+    "property1": {
+      "data": {
+        "download_url": "http://example.com",
+        "upload_url": "http://example.com"
+      },
+      "metadata": {
+        "download_url": "http://example.com",
+        "upload_url": "http://example.com"
+      },
+      "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b"
+    },
+    "property2": {
+      "data": {
+        "download_url": "http://example.com",
+        "upload_url": "http://example.com"
+      },
+      "metadata": {
+        "download_url": "http://example.com",
"upload_url": "http://example.com" + }, + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" + } + }, + "items": [ + { + "input_artifacts": { + "property1": { + "download_url": "http://example.com", + "input_artifact_id": "a4134709-460b-44b6-99b2-2d637f889159", + "metadata": {} + }, + "property2": { + "download_url": "http://example.com", + "input_artifact_id": "a4134709-460b-44b6-99b2-2d637f889159", + "metadata": {} + } + }, + "item_id": "4d8cd62e-a579-4dae-af8c-3172f96f8f7c", + "output_artifacts": { + "property1": { + "data": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "metadata": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" + }, + "property2": { + "data": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "metadata": { + "download_url": "http://example.com", + "upload_url": "http://example.com" + }, + "output_artifact_id": "3f78e99c-5d35-4282-9e82-63c422f3af1b" + } + } + } + ] +} ``` -RunTerminationReason +UserPayload #### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|RunTerminationReason|string|false|none|none| +|application_id|string|true|none|none| +|application_run_id|string(uuid)|true|none|none| +|global_output_artifacts|any|true|none|none| -##### Enumerated Values +anyOf -|Property|Value| -|---|---| -|RunTerminationReason|ALL_ITEMS_PROCESSED| -|RunTerminationReason|CANCELED_BY_SYSTEM| -|RunTerminationReason|CANCELED_BY_USER| +|Name|Type|Required|Restrictions|Description| +|---|---|---|---|---| +|» *anonymous*|object|false|none|none| +|»» **additionalProperties**|[PayloadOutputArtifact](#schemapayloadoutputartifact)|false|none|none| + +or + +|Name|Type|Required|Restrictions|Description| +|---|---|---|---|---| +|» *anonymous*|null|false|none|none| + +continued + +|Name|Type|Required|Restrictions|Description| +|---|---|---|---|---| +|items|[[PayloadItem](#schemapayloaditem)]|true|none|none| ### UserReadResponse @@ -3454,15 +2222,15 @@ RunTerminationReason ```json { - "email": "user@domain.com", + "email": "string", "email_verified": true, - "family_name": "Doe", - "given_name": "Jane", - "id": "auth0|123456", - "name": "Jane Doe", - "nickname": "jdoe", - "picture": "https://example.com/jdoe.jpg", - "updated_at": "2023-10-05T14:48:00.000Z" + "family_name": "string", + "given_name": "string", + "id": "string", + "name": "string", + "nickname": "string", + "picture": "string", + "updated_at": "2019-08-24T14:15:22Z" } ``` @@ -3473,7 +2241,7 @@ UserReadResponse |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|email|any|false|none|User email| +|email|any|false|none|none| anyOf @@ -3545,8 +2313,8 @@ continued |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| -|id|string|true|none|Unique user identifier| -|name|any|false|none|First and last name of the user| +|id|string|true|none|none| +|name|any|false|none|none| anyOf @@ -3658,47 +2426,3 @@ continued |---|---|---|---|---| |msg|string|true|none|none| |type|string|true|none|none| - -### VersionReadResponse - - - - - - -```json -{ - "changelog": "string", - "input_artifacts": [ - { - "metadata_schema": {}, - "mime_type": "image/tiff", - "name": "string" - } - ], - "output_artifacts": [ - { - "metadata_schema": {}, - "mime_type": "application/vnd.apache.parquet", - "name": "string", - "scope": "ITEM", - "visibility": "INTERNAL" - } - ], - "released_at": "2019-08-24T14:15:22Z", - 
"version_number": "string" -} - -``` - -VersionReadResponse - -#### Properties - -|Name|Type|Required|Restrictions|Description| -|---|---|---|---|---| -|changelog|string|true|none|Description of the changes relative to the previous version| -|input_artifacts|[[InputArtifact](#schemainputartifact)]|true|none|List of the input fields, provided by the User| -|output_artifacts|[[OutputArtifact](#schemaoutputartifact)]|true|none|List of the output fields, generated by the application| -|released_at|string(date-time)|true|none|The timestamp when the application version was registered| -|version_number|string|true|none|Semantic version of the application| diff --git a/ATTRIBUTIONS.md b/ATTRIBUTIONS.md index 9980ba2b..3196e047 100644 --- a/ATTRIBUTIONS.md +++ b/ATTRIBUTIONS.md @@ -132,7 +132,7 @@ SOFTWARE. ``` -## Faker (37.11.0) - MIT License +## Faker (37.5.3) - MIT License Faker is a Python package that generates fake data for you. @@ -205,7 +205,7 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## Markdown (3.9) - UNKNOWN +## Markdown (3.8.2) - UNKNOWN Python implementation of John Gruber's Markdown. @@ -249,7 +249,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## MarkupSafe (3.0.3) - UNKNOWN +## MarkupSafe (3.0.2) - BSD License Safely add untrusted strings to HTML/XML markup. @@ -324,7 +324,7 @@ SOFTWARE. ``` -## PyYAML (6.0.3) - MIT License +## PyYAML (6.0.2) - MIT License YAML parser and emitter for Python @@ -506,7 +506,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## aignostics (0.2.192) - MIT License +## aignostics (0.2.157) - MIT License 🔬 Python SDK providing access to the Aignostics Platform. Includes Aignostics Launchpad (Desktop Application), Aignostics CLI (Command-Line Interface), example notebooks, and Aignostics Client Library. @@ -540,215 +540,7 @@ SOFTWARE. ``` -## aiofile (3.9.0) - Apache Software License - -Asynchronous file operations. - -* URL: http://github.com/mosquito/aiofile -* Author(s): Dmitry Orlov - -### License Text - -``` -Apache License -============== - -_Version 2.0, January 2004_ -_<>_ - -### Terms and Conditions for use, reproduction, and distribution - -#### 1. Definitions - -“License” shall mean the terms and conditions for use, reproduction, and -distribution as defined by Sections 1 through 9 of this document. - -“Licensor” shall mean the copyright owner or entity authorized by the copyright -owner that is granting the License. - -“Legal Entity” shall mean the union of the acting entity and all other entities -that control, are controlled by, or are under common control with that entity. -For the purposes of this definition, “control” means **(i)** the power, direct or -indirect, to cause the direction or management of such entity, whether by -contract or otherwise, or **(ii)** ownership of fifty percent (50%) or more of the -outstanding shares, or **(iii)** beneficial ownership of such entity. - -“You” (or “Your”) shall mean an individual or Legal Entity exercising -permissions granted by this License. - -“Source” form shall mean the preferred form for making modifications, including -but not limited to software source code, documentation source, and configuration -files. - -“Object” form shall mean any form resulting from mechanical transformation or -translation of a Source form, including but not limited to compiled object code, -generated documentation, and conversions to other media types. 
- -“Work” shall mean the work of authorship, whether in Source or Object form, made -available under the License, as indicated by a copyright notice that is included -in or attached to the work (an example is provided in the Appendix below). - -“Derivative Works” shall mean any work, whether in Source or Object form, that -is based on (or derived from) the Work and for which the editorial revisions, -annotations, elaborations, or other modifications represent, as a whole, an -original work of authorship. For the purposes of this License, Derivative Works -shall not include works that remain separable from, or merely link (or bind by -name) to the interfaces of, the Work and Derivative Works thereof. - -“Contribution” shall mean any work of authorship, including the original version -of the Work and any modifications or additions to that Work or Derivative Works -thereof, that is intentionally submitted to Licensor for inclusion in the Work -by the copyright owner or by an individual or Legal Entity authorized to submit -on behalf of the copyright owner. For the purposes of this definition, -“submitted” means any form of electronic, verbal, or written communication sent -to the Licensor or its representatives, including but not limited to -communication on electronic mailing lists, source code control systems, and -issue tracking systems that are managed by, or on behalf of, the Licensor for -the purpose of discussing and improving the Work, but excluding communication -that is conspicuously marked or otherwise designated in writing by the copyright -owner as “Not a Contribution.” - -“Contributor” shall mean Licensor and any individual or Legal Entity on behalf -of whom a Contribution has been received by Licensor and subsequently -incorporated within the Work. - -#### 2. Grant of Copyright License - -Subject to the terms and conditions of this License, each Contributor hereby -grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, -irrevocable copyright license to reproduce, prepare Derivative Works of, -publicly display, publicly perform, sublicense, and distribute the Work and such -Derivative Works in Source or Object form. - -#### 3. Grant of Patent License - -Subject to the terms and conditions of this License, each Contributor hereby -grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, -irrevocable (except as stated in this section) patent license to make, have -made, use, offer to sell, sell, import, and otherwise transfer the Work, where -such license applies only to those patent claims licensable by such Contributor -that are necessarily infringed by their Contribution(s) alone or by combination -of their Contribution(s) with the Work to which such Contribution(s) was -submitted. If You institute patent litigation against any entity (including a -cross-claim or counterclaim in a lawsuit) alleging that the Work or a -Contribution incorporated within the Work constitutes direct or contributory -patent infringement, then any patent licenses granted to You under this License -for that Work shall terminate as of the date such litigation is filed. - -#### 4. 
Redistribution - -You may reproduce and distribute copies of the Work or Derivative Works thereof -in any medium, with or without modifications, and in Source or Object form, -provided that You meet the following conditions: - -* **(a)** You must give any other recipients of the Work or Derivative Works a copy of -this License; and -* **(b)** You must cause any modified files to carry prominent notices stating that You -changed the files; and -* **(c)** You must retain, in the Source form of any Derivative Works that You distribute, -all copyright, patent, trademark, and attribution notices from the Source form -of the Work, excluding those notices that do not pertain to any part of the -Derivative Works; and -* **(d)** If the Work includes a “NOTICE” text file as part of its distribution, then any -Derivative Works that You distribute must include a readable copy of the -attribution notices contained within such NOTICE file, excluding those notices -that do not pertain to any part of the Derivative Works, in at least one of the -following places: within a NOTICE text file distributed as part of the -Derivative Works; within the Source form or documentation, if provided along -with the Derivative Works; or, within a display generated by the Derivative -Works, if and wherever such third-party notices normally appear. The contents of -the NOTICE file are for informational purposes only and do not modify the -License. You may add Your own attribution notices within Derivative Works that -You distribute, alongside or as an addendum to the NOTICE text from the Work, -provided that such additional attribution notices cannot be construed as -modifying the License. - -You may add Your own copyright statement to Your modifications and may provide -additional or different license terms and conditions for use, reproduction, or -distribution of Your modifications, or for any such Derivative Works as a whole, -provided Your use, reproduction, and distribution of the Work otherwise complies -with the conditions stated in this License. - -#### 5. Submission of Contributions - -Unless You explicitly state otherwise, any Contribution intentionally submitted -for inclusion in the Work by You to the Licensor shall be under the terms and -conditions of this License, without any additional terms or conditions. -Notwithstanding the above, nothing herein shall supersede or modify the terms of -any separate license agreement you may have executed with Licensor regarding -such Contributions. - -#### 6. Trademarks - -This License does not grant permission to use the trade names, trademarks, -service marks, or product names of the Licensor, except as required for -reasonable and customary use in describing the origin of the Work and -reproducing the content of the NOTICE file. - -#### 7. Disclaimer of Warranty - -Unless required by applicable law or agreed to in writing, Licensor provides the -Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, -including, without limitation, any warranties or conditions of TITLE, -NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are -solely responsible for determining the appropriateness of using or -redistributing the Work and assume any risks associated with Your exercise of -permissions under this License. - -#### 8. 
Limitation of Liability - -In no event and under no legal theory, whether in tort (including negligence), -contract, or otherwise, unless required by applicable law (such as deliberate -and grossly negligent acts) or agreed to in writing, shall any Contributor be -liable to You for damages, including any direct, indirect, special, incidental, -or consequential damages of any character arising as a result of this License or -out of the use or inability to use the Work (including but not limited to -damages for loss of goodwill, work stoppage, computer failure or malfunction, or -any and all other commercial damages or losses), even if such Contributor has -been advised of the possibility of such damages. - -#### 9. Accepting Warranty or Additional Liability - -While redistributing the Work or Derivative Works thereof, You may choose to -offer, and charge a fee for, acceptance of support, warranty, indemnity, or -other liability obligations and/or rights consistent with this License. However, -in accepting such obligations, You may act only on Your own behalf and on Your -sole responsibility, not on behalf of any other Contributor, and only if You -agree to indemnify, defend, and hold each Contributor harmless for any liability -incurred by, or claims asserted against, such Contributor by reason of your -accepting any such warranty or additional liability. - -_END OF TERMS AND CONDITIONS_ - -### APPENDIX: How to apply the Apache License to your work - -To apply the Apache License to your work, attach the following boilerplate -notice, with the fields enclosed by brackets `[]` replaced with your own -identifying information. (Don't include the brackets!) The text should be -enclosed in the appropriate comment syntax for the file format. We also -recommend that a file or class name and description of purpose be included on -the same “printed page” as the copyright notice for easier identification within -third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. - - -``` - -## aiofiles (25.1.0) - Apache Software License +## aiofiles (24.1.0) - Apache Software License File support for asyncio. @@ -1263,7 +1055,7 @@ PERFORMANCE OF THIS SOFTWARE. ``` -## aiohttp (3.13.0) - Apache-2.0 AND MIT +## aiohttp (3.12.15) - Apache-2.0 AND MIT Async http client/server framework (asyncio) @@ -1289,184 +1081,6 @@ Async http client/server framework (asyncio) ``` -## aiopath (0.7.7) - LGPL-3.0 - -📁 Async pathlib for Python - -* URL: https://github.com/AlexDeLorenzo -* Author(s): Alex DeLorenzo - -### License Text - -``` - GNU LESSER GENERAL PUBLIC LICENSE - Version 3, 29 June 2007 - - Copyright (C) 2007 Free Software Foundation, Inc. - Everyone is permitted to copy and distribute verbatim copies - of this license document, but changing it is not allowed. - - - This version of the GNU Lesser General Public License incorporates -the terms and conditions of version 3 of the GNU General Public -License, supplemented by the additional permissions listed below. - - 0. 
Additional Definitions. - - As used herein, "this License" refers to version 3 of the GNU Lesser -General Public License, and the "GNU GPL" refers to version 3 of the GNU -General Public License. - - "The Library" refers to a covered work governed by this License, -other than an Application or a Combined Work as defined below. - - An "Application" is any work that makes use of an interface provided -by the Library, but which is not otherwise based on the Library. -Defining a subclass of a class defined by the Library is deemed a mode -of using an interface provided by the Library. - - A "Combined Work" is a work produced by combining or linking an -Application with the Library. The particular version of the Library -with which the Combined Work was made is also called the "Linked -Version". - - The "Minimal Corresponding Source" for a Combined Work means the -Corresponding Source for the Combined Work, excluding any source code -for portions of the Combined Work that, considered in isolation, are -based on the Application, and not on the Linked Version. - - The "Corresponding Application Code" for a Combined Work means the -object code and/or source code for the Application, including any data -and utility programs needed for reproducing the Combined Work from the -Application, but excluding the System Libraries of the Combined Work. - - 1. Exception to Section 3 of the GNU GPL. - - You may convey a covered work under sections 3 and 4 of this License -without being bound by section 3 of the GNU GPL. - - 2. Conveying Modified Versions. - - If you modify a copy of the Library, and, in your modifications, a -facility refers to a function or data to be supplied by an Application -that uses the facility (other than as an argument passed when the -facility is invoked), then you may convey a copy of the modified -version: - - a) under this License, provided that you make a good faith effort to - ensure that, in the event an Application does not supply the - function or data, the facility still operates, and performs - whatever part of its purpose remains meaningful, or - - b) under the GNU GPL, with none of the additional permissions of - this License applicable to that copy. - - 3. Object Code Incorporating Material from Library Header Files. - - The object code form of an Application may incorporate material from -a header file that is part of the Library. You may convey such object -code under terms of your choice, provided that, if the incorporated -material is not limited to numerical parameters, data structure -layouts and accessors, or small macros, inline functions and templates -(ten or fewer lines in length), you do both of the following: - - a) Give prominent notice with each copy of the object code that the - Library is used in it and that the Library and its use are - covered by this License. - - b) Accompany the object code with a copy of the GNU GPL and this license - document. - - 4. Combined Works. - - You may convey a Combined Work under terms of your choice that, -taken together, effectively do not restrict modification of the -portions of the Library contained in the Combined Work and reverse -engineering for debugging such modifications, if you also do each of -the following: - - a) Give prominent notice with each copy of the Combined Work that - the Library is used in it and that the Library and its use are - covered by this License. - - b) Accompany the Combined Work with a copy of the GNU GPL and this license - document. 
- - c) For a Combined Work that displays copyright notices during - execution, include the copyright notice for the Library among - these notices, as well as a reference directing the user to the - copies of the GNU GPL and this license document. - - d) Do one of the following: - - 0) Convey the Minimal Corresponding Source under the terms of this - License, and the Corresponding Application Code in a form - suitable for, and under terms that permit, the user to - recombine or relink the Application with a modified version of - the Linked Version to produce a modified Combined Work, in the - manner specified by section 6 of the GNU GPL for conveying - Corresponding Source. - - 1) Use a suitable shared library mechanism for linking with the - Library. A suitable mechanism is one that (a) uses at run time - a copy of the Library already present on the user's computer - system, and (b) will operate properly with a modified version - of the Library that is interface-compatible with the Linked - Version. - - e) Provide Installation Information, but only if you would otherwise - be required to provide such information under section 6 of the - GNU GPL, and only to the extent that such information is - necessary to install and execute a modified version of the - Combined Work produced by recombining or relinking the - Application with a modified version of the Linked Version. (If - you use option 4d0, the Installation Information must accompany - the Minimal Corresponding Source and Corresponding Application - Code. If you use option 4d1, you must provide the Installation - Information in the manner specified by section 6 of the GNU GPL - for conveying Corresponding Source.) - - 5. Combined Libraries. - - You may place library facilities that are a work based on the -Library side by side in a single library together with other library -facilities that are not Applications and are not covered by this -License, and convey such a combined library under terms of your -choice, if you do both of the following: - - a) Accompany the combined library with a copy of the same work based - on the Library, uncombined with any other library facilities, - conveyed under the terms of this License. - - b) Give prominent notice with the combined library that part of it - is a work based on the Library, and explaining where to find the - accompanying uncombined form of the same work. - - 6. Revised Versions of the GNU Lesser General Public License. - - The Free Software Foundation may publish revised and/or new versions -of the GNU Lesser General Public License from time to time. Such new -versions will be similar in spirit to the present version, but may -differ in detail to address new problems or concerns. - - Each version is given a distinguishing version number. If the -Library as you received it specifies that a certain numbered version -of the GNU Lesser General Public License "or any later version" -applies to it, you have the option of following the terms and -conditions either of that published version or of any later version -published by the Free Software Foundation. If the Library as you -received it does not specify a version number of the GNU Lesser -General Public License, you may choose any version of the GNU Lesser -General Public License ever published by the Free Software Foundation. 
- - If the Library as you received it specifies that a proxy can decide -whether future versions of the GNU Lesser General Public License shall -apply, that proxy's public statement of acceptance of any version is -permanent authorization for you to choose that version for the -Library. - -``` - ## aiosignal (1.4.0) - Apache Software License aiosignal: a list of registered asynchronous callbacks @@ -1794,7 +1408,7 @@ SOFTWARE. ``` -## anyio (4.11.0) - UNKNOWN +## anyio (4.10.0) - UNKNOWN High-level concurrency and networking framework on top of asyncio or Trio @@ -2551,7 +2165,7 @@ Better dates & times for Python ``` -## asgiref (3.10.0) - BSD License +## asgiref (3.9.1) - BSD License ASGI specs, helper code, and adapters @@ -2841,7 +2455,7 @@ THE SOFTWARE. ``` -## attrs (25.4.0) - UNKNOWN +## attrs (25.3.0) - UNKNOWN Classes Without Boilerplate @@ -3164,7 +2778,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## beautifulsoup4 (4.14.2) - MIT License +## beautifulsoup4 (4.13.4) - MIT License Screen-scraping library @@ -3659,7 +3273,7 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## boto3 (1.40.51) - Apache Software License +## boto3 (1.40.34) - Apache Software License The AWS SDK for Python @@ -3857,7 +3471,7 @@ Copyright 2013-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. ``` -## botocore (1.40.51) - Apache-2.0 +## botocore (1.40.34) - Apache Software License Low-level, data-driven core of boto 3. @@ -4179,7 +3793,7 @@ SOFTWARE. ``` -## bump-my-version (1.2.4) - MIT License +## bump-my-version (1.2.3) - MIT License Version bump your Python project @@ -4213,7 +3827,7 @@ SOFTWARE. ``` -## cachetools (6.2.1) - MIT License +## cachetools (5.5.2) - MIT License Extensible memoizing collections and decorators @@ -4246,221 +3860,7 @@ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` -## caio (0.9.24) - UNKNOWN - -Asynchronous file IO for Linux MacOS or Windows. - -* URL: UNKNOWN -* Author(s): Dmitry Orlov - -### License Text - -``` - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. 
- - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "{}" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright 2025 Dmitry Orlov - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. - -``` - -## certifi (2025.10.5) - Mozilla Public License 2.0 (MPL 2.0) +## certifi (2025.8.3) - Mozilla Public License 2.0 (MPL 2.0) Python package for providing Mozilla's CA Bundle. @@ -4493,13 +3893,12 @@ one at http://mozilla.org/MPL/2.0/. ``` -## cffi (2.0.0) - UNKNOWN +## cffi (1.17.1) - MIT License Foreign Function Interface for Python calling C code. 
-* URL: https://cffi.readthedocs.io/en/latest/whatsnew.html +* URL: http://cffi.readthedocs.org * Author(s): Armin Rigo, Maciej Fijalkowski -* Maintainer(s): Matt Davis, Matt Clay, Matti Picus ### License Text @@ -4509,7 +3908,7 @@ Except when otherwise stated (look for LICENSE files in directories or information at the beginning of each file) all software and documentation is licensed as follows: - MIT No Attribution + The MIT License Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation @@ -4517,7 +3916,10 @@ documentation is licensed as follows: restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the - Software is furnished to do so. + Software is furnished to do so, subject to the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, @@ -5078,7 +4480,7 @@ That's all there is to it! ``` -## charset-normalizer (3.4.4) - MIT +## charset-normalizer (3.4.2) - MIT License The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet. @@ -5154,7 +4556,7 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## cloudpathlib (0.23.0) - MIT License +## cloudpathlib (0.22.0) - MIT License pathlib-style classes for cloud storage services. @@ -5352,7 +4754,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## coverage (7.10.7) - Apache-2.0 +## coverage (7.10.6) - Apache-2.0 Code coverage measurement for Python @@ -5576,12 +4978,12 @@ SOFTWARE. ``` -## cryptography (46.0.2) - UNKNOWN +## cryptography (45.0.6) - Apache-2.0 OR BSD-3-Clause cryptography is a package which provides cryptographic recipes and primitives to Python developers. * URL: https://github.com/pyca/cryptography -* Author(s): The Python Cryptographic Authority and individual contributors +* Author(s): The cryptography developers ### License Text @@ -5810,7 +5212,7 @@ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## cyclonedx-bom (7.1.0) - Apache Software License +## cyclonedx-bom (7.0.0) - Apache Software License CycloneDX Software Bill of Materials (SBOM) generator for Python projects and environments @@ -6268,7 +5670,7 @@ CycloneDX community (https://cyclonedx.org/). ``` -## debugpy (1.8.17) - MIT License +## debugpy (1.8.16) - MIT License An implementation of the Debug Adapter Protocol for Python @@ -6644,7 +6046,7 @@ limitations under the License. ``` -## dicom-validator (0.7.3) - MIT License +## dicom-validator (0.7.2) - MIT License Python DICOM validator using input from DICOM specs in docbook format @@ -7028,7 +6430,7 @@ OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. ``` -## dnspython (2.8.0) - ISC License (ISCL) +## dnspython (2.7.0) - ISC License (ISCL) DNS toolkit @@ -7281,28 +6683,279 @@ OR OTHER DEALINGS IN THE SOFTWARE. 
``` -## duckdb (1.4.1) - MIT License +## duckdb (1.3.2) - MIT License DuckDB in-process database -* URL: https://github.com/duckdb/duckdb-python -* Author(s): DuckDB Foundation -* Maintainer(s): DuckDB Foundation +* URL: https://github.com/duckdb/duckdb/blob/main/tools/pythonpkg +* Author(s): Hannes Muehleisen ### License Text ``` -Copyright 2018-2025 Stichting DuckDB Foundation + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ -Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION -The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. + 1. Definitions. -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + + +------------------------------------------------------------------------------------ +This product bundles various third-party components under other open source licenses. +This section summarizes those components and their licenses. See licenses/ +for text of these licenses. 
+ + +Apache Software Foundation License 2.0 +-------------------------------------- + +common/network-common/src/main/java/org/apache/spark/network/util/LimitedInputStream.java +core/src/main/java/org/apache/spark/util/collection/TimSort.java +core/src/main/resources/org/apache/spark/ui/static/bootstrap* +core/src/main/resources/org/apache/spark/ui/static/vis* +docs/js/vendor/bootstrap.js +connector/spark-ganglia-lgpl/src/main/java/com/codahale/metrics/ganglia/GangliaReporter.java + + +Python Software Foundation License +---------------------------------- + +python/docs/source/_static/copybutton.js + +BSD 3-Clause +------------ + +python/lib/py4j-*-src.zip +python/pyspark/cloudpickle/*.py +python/pyspark/join.py +core/src/main/resources/org/apache/spark/ui/static/d3.min.js + +The CSS style for the navigation sidebar of the documentation was originally +submitted by Óscar Nájera for the scikit-learn project. The scikit-learn project +is distributed under the 3-Clause BSD license. + + +MIT License +----------- + +core/src/main/resources/org/apache/spark/ui/static/dagre-d3.min.js +core/src/main/resources/org/apache/spark/ui/static/*dataTables* +core/src/main/resources/org/apache/spark/ui/static/graphlib-dot.min.js +core/src/main/resources/org/apache/spark/ui/static/jquery* +core/src/main/resources/org/apache/spark/ui/static/sorttable.js +docs/js/vendor/anchor.min.js +docs/js/vendor/jquery* +docs/js/vendor/modernizer* + + +Creative Commons CC0 1.0 Universal Public Domain Dedication +----------------------------------------------------------- +(see LICENSE-CC0.txt) + +data/mllib/images/kittens/29.5.a_b_EGDP022204.jpg +data/mllib/images/kittens/54893.jpg +data/mllib/images/kittens/DP153539.jpg +data/mllib/images/kittens/DP802813.jpg +data/mllib/images/multi-channel/chr30.4.184.jpg ``` -## email-validator (2.3.0) - The Unlicense (Unlicense) +## email_validator (2.2.0) - The Unlicense (Unlicense) A robust email address syntax and deliverability validation library. @@ -7551,7 +7204,7 @@ execnet: rapid multi-Python deployment ``` -## executing (2.2.1) - MIT License +## executing (2.2.0) - MIT License Get the currently executing AST node of a frame, and other information @@ -7585,7 +7238,7 @@ SOFTWARE. ``` -## fastapi (0.119.0) - MIT License +## fastapi (0.116.2) - MIT License FastAPI framework, high performance, easy to learn, fast to code, ready for production @@ -7619,7 +7272,7 @@ THE SOFTWARE. ``` -## fastapi-cli (0.0.13) - MIT License +## fastapi-cli (0.0.8) - MIT License Run and manage FastAPI apps from the command line with FastAPI CLI. 🚀 @@ -7653,7 +7306,7 @@ THE SOFTWARE. ``` -## fastapi-cloud-cli (0.3.1) - MIT License +## fastapi-cloud-cli (0.1.5) - MIT License Deploy and manage FastAPI Cloud apps from the command line 🚀 @@ -7687,7 +7340,7 @@ THE SOFTWARE. ``` -## fastjsonschema (2.21.2) - BSD License +## fastjsonschema (2.21.1) - BSD License Fastest Python implementation of JSON schema @@ -7916,7 +7569,7 @@ Python support for Parquet file format END OF TERMS AND CONDITIONS ``` -## filelock (3.20.0) - The Unlicense (Unlicense) +## filelock (3.18.0) - The Unlicense (Unlicense) A platform independent file lock. 
@@ -7953,7 +7606,7 @@ For more information, please refer to ``` -## fonttools (4.60.1) - MIT +## fonttools (4.59.0) - MIT Tools to manipulate font files @@ -8374,7 +8027,7 @@ Exhibit B - "Incompatible With Secondary Licenses" Notice ``` -## frozenlist (1.8.0) - Apache-2.0 +## frozenlist (1.7.0) - Apache-2.0 A list-like structure which implements collections.abc.MutableSequence @@ -8588,7 +8241,7 @@ Apache License ``` -## fsspec (2025.9.0) - UNKNOWN +## fsspec (2024.12.0) - BSD License File-system specification @@ -8630,7 +8283,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## furo (2025.9.25) - MIT License +## furo (2025.7.19) - MIT License A clean customisable Sphinx documentation theme. @@ -8662,13 +8315,14 @@ IN THE SOFTWARE. ``` -## git-cliff (2.10.1) - UNKNOWN +## git-cliff (2.10.0) - MIT OR Apache-2.0 -UNKNOWN +A highly customizable changelog generator ⛰️ * URL: https://github.com/orhun/git-cliff +* Author(s): git-cliff contributors -## google-api-core (2.26.0) - Apache Software License +## google-api-core (2.25.1) - Apache Software License Google API client core library @@ -8883,7 +8537,7 @@ Google API client core library ``` -## google-auth (2.41.1) - Apache Software License +## google-auth (2.40.3) - Apache Software License Google Authentication Library @@ -9312,7 +8966,7 @@ Google Cloud API client core library ``` -## google-cloud-storage (3.4.1) - Apache Software License +## google-cloud-storage (3.4.0) - Apache Software License Google Cloud Storage API client library @@ -10228,46 +9882,6 @@ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLI ``` -## html-sanitizer (2.6.0) - BSD License - -HTML sanitizer - -* URL: https://github.com/matthiask/html-sanitizer/ -* Author(s): Matthias Kestenholz - -### License Text - -``` -Copyright (c) 2012-2017, Feinheit AG and individual contributors. -All rights reserved. - -Redistribution and use in source and binary forms, with or without modification, -are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of Feinheit AG nor the names of its contributors - may be used to endorse or promote products derived from this software - without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR -ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON -ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -``` - ## html5lib (1.1) - MIT License HTML parser based on the WHATWG HTML specification @@ -10341,12 +9955,12 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
``` -## httptools (0.7.1) - UNKNOWN +## httptools (0.6.4) - MIT License A collection of framework independent HTTP protocol utils. * URL: https://github.com/MagicStack/httptools -* Author(s): Yury Selivanov +* Author(s): Yury Selivanov ### License Text @@ -10434,7 +10048,7 @@ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` -## idc-index-data (22.0.2) - MIT License +## idc-index-data (22.0.0) - MIT License ImagingDataCommons index to query and download data. @@ -10466,7 +10080,7 @@ SOFTWARE. ``` -## identify (2.6.15) - MIT +## identify (2.6.12) - MIT File identification library for Python @@ -10498,7 +10112,7 @@ THE SOFTWARE. ``` -## idna (3.11) - UNKNOWN +## idna (3.10) - BSD License Internationalized Domain Names in Applications (IDNA) @@ -10510,7 +10124,7 @@ Internationalized Domain Names in Applications (IDNA) ``` BSD 3-Clause License -Copyright (c) 2013-2025, Kim Davies and contributors. +Copyright (c) 2013-2024, Kim Davies and contributors. All rights reserved. Redistribution and use in source and binary forms, with or without @@ -10576,7 +10190,7 @@ THE SOFTWARE. ``` -## ijson (3.4.0.post0) - UNKNOWN +## ijson (3.4.0) - UNKNOWN Iterative JSON parser with standard Python iterator interfaces @@ -10915,7 +10529,36 @@ SOFTWARE. ``` -## ipykernel (7.0.1) - UNKNOWN +## ipykernel (6.30.1) - BSD 3-Clause License + +Copyright (c) 2015, IPython Development Team + +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +1. Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. IPython Kernel for Jupyter @@ -10958,7 +10601,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## ipython (9.6.0) - BSD License +## ipython (9.5.0) - BSD License IPython: Productive Interactive Computing @@ -11268,7 +10911,7 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` -## json5 (0.12.1) - Apache Software License +## json5 (0.12.0) - Apache Software License A Python implementation of the JSON5 data format. @@ -11633,7 +11276,7 @@ THE SOFTWARE. 
``` -## jsonschema-specifications (2025.9.1) - UNKNOWN +## jsonschema-specifications (2025.4.1) - UNKNOWN The JSON Schema meta-schemas and vocabularies, exposed as a Registry @@ -11793,7 +11436,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## jupyter-lsp (2.3.0) - BSD License +## jupyter-lsp (2.2.6) - BSD License Multi-Language Server WebSocket proxy for Jupyter Notebook/Lab server @@ -11922,7 +11565,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## jupyter_server (2.17.0) - BSD License +## jupyter_server (2.16.0) - BSD License The backend—i.e. core services, APIs, and REST endpoints—to Jupyter web applications. @@ -12009,7 +11652,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## jupyterlab (4.4.9) - BSD License +## jupyterlab (4.4.5) - BSD License JupyterLab computational environment @@ -12440,7 +12083,7 @@ OTHER DEALINGS IN THE SOFTWARE. ``` -## kiwisolver (1.4.9) - BSD License +## kiwisolver (1.4.8) - BSD License A fast implementation of the Cassowary constraint solver @@ -12457,7 +12100,7 @@ A fast implementation of the Cassowary constraint solver Kiwi is licensed under the terms of the Modified BSD License (also known as New or Revised BSD), as follows: -Copyright (c) 2013-2025, Nucleic Development Team +Copyright (c) 2013-2024, Nucleic Development Team All rights reserved. @@ -12516,7 +12159,7 @@ With this in mind, the following banner should be used in any source code file to indicate the copyright and license terms: #------------------------------------------------------------------------------ -# Copyright (c) 2013-2025, Nucleic Development Team. +# Copyright (c) 2013-2024, Nucleic Development Team. # # Distributed under the terms of the Modified BSD License. # @@ -12525,7 +12168,7 @@ to indicate the copyright and license terms: ``` -## lark (1.3.0) - MIT License +## lark (1.2.2) - MIT License a modern parsing library @@ -12606,7 +12249,7 @@ license-expression is a comprehensive utility library to parse, compare, simplif ``` -## logfire (4.13.2) - UNKNOWN +## logfire (4.8.0) - MIT License The best Python observability tool! 🪵🔥 @@ -12640,89 +12283,47 @@ SOFTWARE. ``` -## loro (1.8.1) - UNKNOWN - -Python bindings for [Loro](https://loro.dev) - -* URL: https://loro.dev -* Author(s): leon7hao - -### License Text - -``` -MIT License - -Copyright (c) 2024 Loro - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. 
- -``` - -## lxml (5.4.0) - BSD License +## loro (1.5.3) - UNKNOWN -Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API. +Python bindings for [Loro](https://loro.dev) -* URL: https://lxml.de/ -* Author(s): lxml dev team -* Maintainer(s): lxml dev team +* URL: https://loro.dev +* Author(s): leon7hao ### License Text ``` -Copyright (c) 2004 Infrae. All rights reserved. +MIT License -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: +Copyright (c) 2024 Loro - 1. Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in - the documentation and/or other materials provided with the - distribution. +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: - 3. Neither the name of Infrae nor the names of its contributors may - be used to endorse or promote products derived from this software - without specific prior written permission. +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL INFRAE OR -CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, -EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, -PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR -PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF -LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING -NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. ``` -## lxml_html_clean (0.4.3) - BSD-3-Clause +## lxml (5.4.0) - BSD License -HTML cleaner from lxml project +Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API. 
-* URL: https://github.com/fedora-python/lxml_html_clean/ -* Author(s): Lumír Balhar +* URL: https://lxml.de/ +* Author(s): lxml dev team +* Maintainer(s): lxml dev team ### License Text @@ -12794,13 +12395,13 @@ Copyright 2010-2020 - Ronald Oussoren ``` -## marimo (0.16.5) - Apache Software License +## marimo (0.16.0) - Apache Software License A library for making reactive notebooks and apps * URL: https://github.com/marimo-team/marimo -## markdown-it-py (4.0.0) - MIT License +## markdown-it-py (3.0.0) - MIT License Python port of markdown-it. Markdown parsing, done right! @@ -12940,7 +12541,7 @@ THE SOFTWARE. ``` -## matplotlib (3.10.7) - Python Software Foundation License +## matplotlib (3.10.6) - Python Software Foundation License Python plotting package @@ -13186,7 +12787,7 @@ IN THE SOFTWARE. ``` -## mistune (3.1.4) - BSD License +## mistune (3.1.3) - BSD License A sane and fast Markdown parser with useful plugins and renderers @@ -13213,7 +12814,7 @@ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ``` -## more-itertools (10.8.0) - UNKNOWN +## more-itertools (10.7.0) - MIT License More routines for operating on iterables, beyond itertools @@ -13245,7 +12846,7 @@ SOFTWARE. ``` -## msgpack (1.1.2) - UNKNOWN +## msgpack (1.1.1) - Apache 2.0 MessagePack serializer @@ -13312,7 +12913,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## multidict (6.7.0) - Apache License 2.0 +## multidict (6.6.3) - Apache License 2.0 multidict implementation @@ -13338,7 +12939,7 @@ multidict implementation ``` -## mypy (1.18.2) - MIT License +## mypy (1.17.1) - MIT License Optional static typing for Python @@ -13620,7 +13221,7 @@ DEALINGS IN THE SOFTWARE. ``` -## narwhals (2.8.0) - MIT License +## narwhals (2.0.1) - MIT License Extremely lightweight compatibility layer between dataframe libraries @@ -13855,7 +13456,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## nicegui (3.0.4) - MIT License +## nicegui (2.24.1) - MIT License Create web-based user interfaces with Python. The nice way. @@ -13934,7 +13535,7 @@ DAMAGE. ``` -## notebook (7.4.7) - BSD License +## notebook (7.4.5) - BSD License Jupyter Notebook - A web-based notebook environment for interactive computing @@ -14234,7 +13835,7 @@ Flexible test automation. ``` -## numpy (2.3.3) - BSD License +## numpy (2.3.2) - BSD License Fundamental package for array computing in Python @@ -15826,7 +15427,7 @@ That's all there is to it! 
``` -## opentelemetry-api (1.37.0) - UNKNOWN +## opentelemetry-api (1.36.0) - UNKNOWN OpenTelemetry Python API @@ -16040,7 +15641,7 @@ OpenTelemetry Python API ``` -## opentelemetry-exporter-otlp-proto-common (1.37.0) - UNKNOWN +## opentelemetry-exporter-otlp-proto-common (1.36.0) - UNKNOWN OpenTelemetry Protobuf encoding @@ -16254,7 +15855,7 @@ OpenTelemetry Protobuf encoding ``` -## opentelemetry-exporter-otlp-proto-http (1.37.0) - UNKNOWN +## opentelemetry-exporter-otlp-proto-http (1.36.0) - UNKNOWN OpenTelemetry Collector Protobuf over HTTP Exporter @@ -16468,7 +16069,7 @@ OpenTelemetry Collector Protobuf over HTTP Exporter ``` -## opentelemetry-instrumentation (0.58b0) - Apache Software License +## opentelemetry-instrumentation (0.57b0) - Apache Software License Instrumentation Tools & Auto Instrumentation for OpenTelemetry Python @@ -16682,7 +16283,7 @@ Instrumentation Tools & Auto Instrumentation for OpenTelemetry Python ``` -## opentelemetry-instrumentation-asgi (0.58b0) - Apache Software License +## opentelemetry-instrumentation-asgi (0.57b0) - Apache Software License ASGI instrumentation for OpenTelemetry @@ -16896,7 +16497,7 @@ ASGI instrumentation for OpenTelemetry ``` -## opentelemetry-instrumentation-dbapi (0.58b0) - Apache Software License +## opentelemetry-instrumentation-dbapi (0.57b0) - Apache Software License OpenTelemetry Database API instrumentation @@ -17110,7 +16711,7 @@ OpenTelemetry Database API instrumentation ``` -## opentelemetry-instrumentation-fastapi (0.58b0) - Apache Software License +## opentelemetry-instrumentation-fastapi (0.57b0) - Apache Software License OpenTelemetry FastAPI Instrumentation @@ -17324,7 +16925,7 @@ OpenTelemetry FastAPI Instrumentation ``` -## opentelemetry-instrumentation-httpx (0.58b0) - Apache Software License +## opentelemetry-instrumentation-httpx (0.57b0) - Apache Software License OpenTelemetry HTTPX Instrumentation @@ -17538,7 +17139,7 @@ OpenTelemetry HTTPX Instrumentation ``` -## opentelemetry-instrumentation-jinja2 (0.58b0) - Apache Software License +## opentelemetry-instrumentation-jinja2 (0.57b0) - Apache Software License OpenTelemetry jinja2 instrumentation @@ -17752,7 +17353,7 @@ OpenTelemetry jinja2 instrumentation ``` -## opentelemetry-instrumentation-requests (0.58b0) - Apache Software License +## opentelemetry-instrumentation-requests (0.57b0) - Apache Software License OpenTelemetry requests instrumentation @@ -17966,7 +17567,7 @@ OpenTelemetry requests instrumentation ``` -## opentelemetry-instrumentation-sqlite3 (0.58b0) - Apache Software License +## opentelemetry-instrumentation-sqlite3 (0.57b0) - Apache Software License OpenTelemetry SQLite3 instrumentation @@ -18180,11 +17781,225 @@ OpenTelemetry SQLite3 instrumentation ``` -## opentelemetry-instrumentation-system-metrics (0.58b0) - Apache Software License +## opentelemetry-instrumentation-system-metrics (0.57b0) - Apache Software License + +OpenTelemetry System Metrics Instrumentation + +* URL: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-system-metrics +* Author(s): OpenTelemetry Authors + +### License Text + +``` + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. 
+ + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+ +``` + +## opentelemetry-instrumentation-tornado (0.57b0) - Apache Software License -OpenTelemetry System Metrics Instrumentation +Tornado instrumentation for OpenTelemetry -* URL: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-system-metrics +* URL: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-tornado * Author(s): OpenTelemetry Authors ### License Text @@ -18394,11 +18209,11 @@ OpenTelemetry System Metrics Instrumentation ``` -## opentelemetry-instrumentation-tornado (0.58b0) - Apache Software License +## opentelemetry-instrumentation-urllib (0.57b0) - Apache Software License -Tornado instrumentation for OpenTelemetry +OpenTelemetry urllib instrumentation -* URL: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-tornado +* URL: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib * Author(s): OpenTelemetry Authors ### License Text @@ -18608,11 +18423,11 @@ Tornado instrumentation for OpenTelemetry ``` -## opentelemetry-instrumentation-urllib (0.58b0) - Apache Software License +## opentelemetry-instrumentation-urllib3 (0.57b0) - Apache Software License -OpenTelemetry urllib instrumentation +OpenTelemetry urllib3 instrumentation -* URL: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib +* URL: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib3 * Author(s): OpenTelemetry Authors ### License Text @@ -18822,11 +18637,11 @@ OpenTelemetry urllib instrumentation ``` -## opentelemetry-instrumentation-urllib3 (0.58b0) - Apache Software License +## opentelemetry-proto (1.36.0) - UNKNOWN -OpenTelemetry urllib3 instrumentation +OpenTelemetry Python Proto -* URL: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib3 +* URL: https://github.com/open-telemetry/opentelemetry-python/tree/main/opentelemetry-proto * Author(s): OpenTelemetry Authors ### License Text @@ -19036,11 +18851,11 @@ OpenTelemetry urllib3 instrumentation ``` -## opentelemetry-proto (1.37.0) - UNKNOWN +## opentelemetry-sdk (1.36.0) - UNKNOWN -OpenTelemetry Python Proto +OpenTelemetry Python SDK -* URL: https://github.com/open-telemetry/opentelemetry-python/tree/main/opentelemetry-proto +* URL: https://github.com/open-telemetry/opentelemetry-python/tree/main/opentelemetry-sdk * Author(s): OpenTelemetry Authors ### License Text @@ -19250,11 +19065,11 @@ OpenTelemetry Python Proto ``` -## opentelemetry-sdk (1.37.0) - UNKNOWN +## opentelemetry-semantic-conventions (0.57b0) - UNKNOWN -OpenTelemetry Python SDK +OpenTelemetry Semantic Conventions -* URL: https://github.com/open-telemetry/opentelemetry-python/tree/main/opentelemetry-sdk +* URL: https://github.com/open-telemetry/opentelemetry-python/tree/main/opentelemetry-semantic-conventions * Author(s): OpenTelemetry Authors ### License Text @@ -19464,13 +19279,250 @@ OpenTelemetry Python SDK ``` -## opentelemetry-semantic-conventions (0.58b0) - UNKNOWN +## opentelemetry-util-http (0.57b0) - Apache Software License -OpenTelemetry Semantic Conventions +Web util for OpenTelemetry -* URL: https://github.com/open-telemetry/opentelemetry-python/tree/main/opentelemetry-semantic-conventions +* 
URL: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/util/opentelemetry-util-http * Author(s): OpenTelemetry Authors +## orjson (3.11.1) - Apache Software License; MIT License + +Fast, correct Python JSON library supporting dataclasses, datetimes, and numpy + +* URL: https://github.com/ijl/orjson +* Author(s): ijl + +### License Text + +``` + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. 
We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + +Copyright [yyyy] [name of copyright owner] + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. + +``` + +## outcome (1.3.0.post0) - Apache Software License; MIT License + +Capture the outcome of Python function calls. + +* URL: https://github.com/python-trio/outcome +* Author(s): Frazer McLean + +### License Text + +``` +This software is made available under the terms of *either* of the +licenses found in LICENSE.APACHE2 or LICENSE.MIT. Contributions to are +made under the terms of *both* these licenses. + +``` + +## overrides (7.7.0) - Apache License, Version 2.0 + +A decorator to automatically detect mismatch when overriding a method. + +* URL: https://github.com/mkorpela/overrides +* Author(s): Mikko Korpela + ### License Text ``` @@ -19654,7 +19706,7 @@ OpenTelemetry Semantic Conventions APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" + boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a @@ -19662,7 +19714,7 @@ OpenTelemetry Semantic Conventions same "printed page" as the copyright notice for easier identification within third-party archives. - Copyright [yyyy] [name of copyright owner] + Copyright {yyyy} {name of copyright owner} Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -19676,241 +19728,6 @@ OpenTelemetry Semantic Conventions See the License for the specific language governing permissions and limitations under the License. -``` - -## opentelemetry-util-http (0.58b0) - Apache Software License - -Web util for OpenTelemetry - -* URL: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/util/opentelemetry-util-http -* Author(s): OpenTelemetry Authors - -## orjson (3.11.3) - Apache Software License; MIT License - -Fast, correct Python JSON library supporting dataclasses, datetimes, and numpy - -* URL: https://github.com/ijl/orjson - -### License Text - -``` - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - -TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - -1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. 
For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - -2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - -3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - -4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - -5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - -6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - -7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - -8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - -9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - -END OF TERMS AND CONDITIONS - -APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - -Copyright [yyyy] [name of copyright owner] - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. - -``` - -## outcome (1.3.0.post0) - Apache Software License; MIT License - -Capture the outcome of Python function calls. 
- -* URL: https://github.com/python-trio/outcome -* Author(s): Frazer McLean - -### License Text - -``` -This software is made available under the terms of *either* of the -licenses found in LICENSE.APACHE2 or LICENSE.MIT. Contributions to are -made under the terms of *both* these licenses. ``` @@ -19937,7 +19754,7 @@ under the terms of *both* these licenses. ``` -## pandas (2.3.3) - BSD License +## pandas (2.3.2) - BSD License Powerful data structures for data analysis, time series, and statistics @@ -21239,7 +21056,7 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## parso (0.8.5) - MIT License +## parso (0.8.4) - MIT License A Python Parser @@ -23743,7 +23560,7 @@ A tool for scanning Python environments for known vulnerabilities ``` -## platformdirs (4.5.0) - MIT License +## platformdirs (4.4.0) - MIT License A small Python package for determining appropriate platform-specific dirs, e.g. a `user data dir`. @@ -23887,7 +23704,7 @@ A simple Python library for easily displaying tabular data in a visually appeali ``` -## prometheus_client (0.23.1) - UNKNOWN +## prometheus_client (0.22.1) - UNKNOWN Python client for the Prometheus monitoring system. @@ -24112,11 +23929,11 @@ license. For details, see prometheus_client/decorator.py. ``` -## prompt_toolkit (3.0.52) - BSD License +## prompt_toolkit (3.0.51) - BSD License Library for building powerful interactive command lines in Python -* URL: https://github.com/prompt-toolkit/python-prompt-toolkit +* URL: UNKNOWN * Author(s): Jonathan Slenders ### License Text @@ -24152,7 +23969,7 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## propcache (0.4.1) - Apache Software License +## propcache (0.3.2) - Apache Software License Accelerated property cache @@ -24602,7 +24419,7 @@ Beautiful, Pythonic protocol buffers ``` -## protobuf (6.32.1) - 3-Clause BSD License +## protobuf (6.31.1) - 3-Clause BSD License UNKNOWN @@ -24654,6 +24471,43 @@ Proxy Implementation * URL: http://github.com/jtushman/proxy_tools * Author(s): Jonathan Tushman +## pscript (0.7.7) - BSD License + +Python to JavaScript compiler. + +* URL: http://pscript.readthedocs.io +* Author(s): Almar Klein and contributors + +### License Text + +``` +Copyright (c) 2015-2020, PScript developers +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +* Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + +* Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ + +``` + ## psutil (7.1.0) - BSD-3-Clause Cross-platform lib for process and system monitoring. @@ -25050,7 +24904,7 @@ POSSIBILITY OF SUCH DAMAGE. ``` -## pycparser (2.23) - BSD License +## pycparser (2.22) - BSD License C parser in Python @@ -25091,12 +24945,12 @@ OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## pydantic (2.12.2) - UNKNOWN +## pydantic (2.11.7) - MIT License Data validation using Python type hints * URL: https://github.com/pydantic/pydantic -* Author(s): Samuel Colvin , Eric Jolibois , Hasan Ramezani , Adrian Garcia Badaracco <1755071+adriangb@users.noreply.github.com>, Terrence Dorsey , David Montague , Serge Matveenko , Marcelo Trylesinski , Sydney Runkle , David Hewitt , Alex Hall , Victorien Plot , Douwe Maan +* Author(s): Samuel Colvin , Eric Jolibois , Hasan Ramezani , Adrian Garcia Badaracco <1755071+adriangb@users.noreply.github.com>, Terrence Dorsey , David Montague , Serge Matveenko , Marcelo Trylesinski , Sydney Runkle , David Hewitt , Alex Hall , Victorien Plot ### License Text @@ -25125,7 +24979,7 @@ SOFTWARE. ``` -## pydantic-extra-types (2.10.6) - MIT License +## pydantic-extra-types (2.10.5) - MIT License Extra Pydantic types. @@ -25159,7 +25013,7 @@ SOFTWARE. ``` -## pydantic-settings (2.11.0) - MIT License +## pydantic-settings (2.10.1) - MIT License Settings management using Pydantic @@ -25193,7 +25047,7 @@ SOFTWARE. ``` -## pydantic_core (2.41.4) - UNKNOWN +## pydantic_core (2.33.2) - MIT License Core functionality for Pydantic validation and serialization @@ -25299,7 +25153,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## pyinstaller-hooks-contrib (2025.9) - Apache Software License; GNU General Public License v2 (GPLv2) +## pyinstaller-hooks-contrib (2025.8) - Apache Software License; GNU General Public License v2 (GPLv2) Community maintained hooks for PyInstaller @@ -26050,9 +25904,9 @@ Copyright 2003-2025 - Ronald Oussoren ``` -## pyparsing (3.2.5) - UNKNOWN +## pyparsing (3.2.3) - MIT License -pyparsing - Classes and methods to define and execute parsing grammars +pyparsing module - Classes and methods to define and execute parsing grammars * URL: https://github.com/pyparsing/pyparsing/ * Author(s): Paul McGuire @@ -26081,7 +25935,7 @@ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` -## pyright (1.1.406) - MIT +## pyright (1.1.405) - MIT Command line wrapper for pyright @@ -26514,41 +26368,7 @@ SOFTWARE. ``` -## pytest-durations (1.6.1) - MIT License - -Pytest plugin reporting fixtures and test functions execution time. - -* URL: https://github.com/blake-r/pytest-durations -* Author(s): Oleg Blednov - -### License Text - -``` -MIT License - -Copyright (c) 2022 Oleg Blednov - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. - -``` - -## pytest-env (1.2.0) - MIT License +## pytest-env (1.1.5) - MIT License pytest plugin that allows you to add environment variables. @@ -26648,7 +26468,7 @@ file, You can obtain one at http://mozilla.org/MPL/2.0/. ``` -## pytest-regressions (2.8.3) - MIT License +## pytest-regressions (2.8.2) - MIT License Easy to use fixtures to write regression tests. @@ -26996,7 +26816,7 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## python-engineio (4.12.3) - MIT +## python-engineio (4.12.2) - MIT Engine.IO server and client for Python @@ -27029,7 +26849,7 @@ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` -## python-json-logger (4.0.0) - BSD License +## python-json-logger (3.3.0) - BSD License JSON Log Formatter for the Python Logging Package @@ -27090,7 +26910,7 @@ limitations under the License. ``` -## python-socketio (5.14.1) - MIT +## python-socketio (5.13.0) - MIT License Socket.IO server and client for Python @@ -27198,7 +27018,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## pyzmq (27.1.0) - BSD License +## pyzmq (27.0.1) - BSD License Python bindings for 0MQ @@ -27241,7 +27061,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## questionary (2.1.1) - MIT License +## questionary (2.1.0) - MIT License Python library to build pretty command line user prompts ⭐️ @@ -27331,7 +27151,7 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE ``` -## referencing (0.37.0) - UNKNOWN +## referencing (0.36.2) - UNKNOWN JSON Referencing + Python @@ -27906,7 +27726,7 @@ SOFTWARE. ``` -## rich (14.2.0) - MIT License +## rich (14.1.0) - MIT License Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal @@ -27938,28 +27758,7 @@ SOFTWARE. ``` -## rich-click (1.9.3) - MIT License - -Copyright (c) 2022 Phil Ewels - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. - +## rich-click (1.8.9) - MIT License Format click help output nicely with rich @@ -27994,7 +27793,7 @@ SOFTWARE. ``` -## rich-toolkit (0.15.1) - MIT License +## rich-toolkit (0.14.9) - MIT License Rich toolkit for building command-line applications @@ -28027,7 +27826,7 @@ SOFTWARE. 
``` -## rignore (0.7.0) - MIT +## rignore (0.6.4) - MIT Python Bindings for the ignore crate @@ -28220,7 +28019,7 @@ d. Affirmer understands and acknowledges that Creative Commons is not a ``` -## rpds-py (0.27.1) - UNKNOWN +## rpds-py (0.27.0) - UNKNOWN Python bindings to Rust's persistent data structures (rpds) @@ -28319,7 +28118,7 @@ IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## ruamel.yaml (0.18.15) - MIT License +## ruamel.yaml (0.18.14) - MIT License ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order @@ -28353,7 +28152,7 @@ ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of com ``` -## ruamel.yaml.clib (0.2.14) - MIT License +## ruamel.yaml.clib (0.2.12) - MIT License C version of reader, parser and emitter for ruamel.yaml derived from libyaml @@ -28365,7 +28164,7 @@ C version of reader, parser and emitter for ruamel.yaml derived from libyaml ``` The MIT License (MIT) - Copyright (c) 2019-2025 Anthon van der Neut, Ruamel bvba + Copyright (c) 2019-2024 Anthon van der Neut, Ruamel bvba Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal @@ -28387,7 +28186,7 @@ C version of reader, parser and emitter for ruamel.yaml derived from libyaml ``` -## ruff (0.14.0) - MIT License +## ruff (0.13.1) - MIT License An extremely fast Python linter and code formatter, written in Rust. @@ -29085,7 +28884,7 @@ SOFTWARE. ``` -## scalene (1.5.55) - Apache Software License +## scalene (1.5.54) - Apache Software License Scalene: A high-resolution, low-overhead CPU, GPU, and memory profiler for Python with AI-powered optimization suggestions @@ -29284,221 +29083,7 @@ Scalene: A high-resolution, low-overhead CPU, GPU, and memory profiler for Pytho same "printed page" as the copyright notice for easier identification within third-party archives. - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. - -``` - -## selenium (4.36.0) - Apache-2.0 - -Official Python bindings for Selenium WebDriver - -* URL: https://www.selenium.dev - -### License Text - -``` - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. 
For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright 2025 Software Freedom Conservancy (SFC) + Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
@@ -29514,11 +29099,217 @@ Official Python bindings for Selenium WebDriver ``` -### Notice +## selenium (4.34.2) - Apache 2.0 + +Official Python bindings for Selenium WebDriver + +* URL: https://www.selenium.dev + +### License Text ``` -Copyright 2011-2025 Software Freedom Conservancy -Copyright 2004-2011 Selenium committers + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. 
We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright 2025 Software Freedom Conservancy (SFC) + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. ``` @@ -29563,7 +29354,7 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## sentry-sdk (2.41.0) - BSD License +## sentry-sdk (2.38.0) - BSD License Python client for Sentry (https://sentry.io) @@ -29643,7 +29434,7 @@ The following files include code from opensource projects ``` -## shapely (2.1.2) - BSD License +## shapely (2.1.1) - BSD License Manipulation and analysis of geometric objects @@ -29712,6 +29503,40 @@ OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. ``` +## show-in-file-manager (1.1.5) - MIT License + +Open the system file manager and select files in it + +* URL: https://github.com/damonlynch/showinfilemanager +* Author(s): Damon Lynch + +### License Text + +``` +MIT License + +Copyright (c) 2021 Damon Lynch + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + +``` + ## simple-websocket (1.1.0) - MIT License Simple WebSocket server and client for Python @@ -29777,7 +29602,7 @@ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` -## smart_open (7.3.1) - MIT License +## smart_open (7.3.0.post1) - MIT License Utils for streaming large files (S3, HDFS, GCS, SFTP, Azure Blob Storage, gzip, bz2, zst...) @@ -29896,7 +29721,7 @@ limitations under the License. ``` -## soupsieve (2.8) - MIT License +## soupsieve (2.7) - MIT License A modern CSS selector implementation for Beautiful Soup. @@ -29930,7 +29755,7 @@ SOFTWARE. ``` -## sphinx-autobuild (2025.8.25) - MIT License +## sphinx-autobuild (2024.10.3) - MIT License Rebuild Sphinx documentation on changes, with hot reloading in the browser. @@ -29966,7 +29791,7 @@ THE SOFTWARE. 
``` -## sphinx-autodoc-typehints (3.5.1) - MIT License +## sphinx-autodoc-typehints (3.2.0) - MIT License Type hints (PEP 484) support for the Sphinx autodoc extension @@ -30240,7 +30065,7 @@ SOFTWARE. ``` -## sphinx-toolbox (3.10.0) - MIT License +## sphinx-toolbox (4.0.0) - MIT License Box of handy tools for Sphinx 🧰 📔 @@ -30461,7 +30286,7 @@ sphinxcontrib-serializinghtml is a sphinx extension which outputs "serialized" H * URL: https://www.sphinx-doc.org/ * Author(s): Georg Brandl -## sphinxext-opengraph (0.13.0) - UNKNOWN +## sphinxext-opengraph (0.12.0) - UNKNOWN Sphinx Extension to enable OGP support @@ -30595,12 +30420,12 @@ license: ``` -## starlette (0.48.0) - BSD License +## starlette (0.47.2) - BSD License The little ASGI library that shines. -* URL: https://github.com/Kludex/starlette -* Author(s): Tom Christie , Marcelo Trylesinski +* URL: https://github.com/encode/starlette +* Author(s): Tom Christie ### License Text @@ -30635,7 +30460,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## swagger-plugin-for-sphinx (5.2.0) - UNKNOWN +## swagger-plugin-for-sphinx (5.1.3) - UNKNOWN Sphinx plugin which renders a OpenAPI specification with Swagger @@ -31286,7 +31111,7 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` -## tomli (2.3.0) - UNKNOWN +## tomli (2.2.1) - MIT License A lil' TOML parser @@ -31353,7 +31178,7 @@ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` -## tornado (6.5.2) - Apache Software License +## tornado (6.5.1) - Apache Software License Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. @@ -31673,7 +31498,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## trio (0.31.0) - UNKNOWN +## trio (0.30.0) - UNKNOWN A friendly Python library for async concurrency and I/O @@ -31791,7 +31616,7 @@ SOFTWARE. ``` -## typer (0.19.2) - MIT License +## typer (0.18.0) - MIT License Typer, build great CLIs. Easy to code. Based on Python type hints. @@ -32074,7 +31899,7 @@ DEALINGS IN THE SOFTWARE. ``` -## types-python-dateutil (2.9.0.20251008) - UNKNOWN +## types-python-dateutil (2.9.0.20250708) - UNKNOWN Typing stubs for python-dateutil @@ -32572,7 +32397,7 @@ DEALINGS IN THE SOFTWARE. ``` -## typing-inspection (0.4.2) - UNKNOWN +## typing-inspection (0.4.1) - UNKNOWN Runtime typing introspection tools @@ -32606,7 +32431,7 @@ SOFTWARE. ``` -## typing_extensions (4.15.0) - UNKNOWN +## typing_extensions (4.14.1) - UNKNOWN Backported and Experimental Type Hints for Python 3.9+ @@ -32926,7 +32751,7 @@ limitations under the License. ``` -## ujson (5.11.0) - UNKNOWN +## ujson (5.10.0) - BSD License Ultra fast JSON encoder and decoder for Python @@ -33192,7 +33017,7 @@ SOFTWARE. ``` -## uv (0.9.2) - Apache Software License; MIT License +## uv (0.8.9) - Apache Software License; MIT License An extremely fast Python package and project manager, written in Rust. @@ -33662,7 +33487,41 @@ Copyright (C) 2016-present the uvloop authors and contributors. ``` -## virtualenv (20.35.3) - MIT License +## vbuild (0.8.2) - MIT License + +A simple module to extract html/script/style from a vuejs '.vue' file (can minimize/es2015 compliant js) ... just py2 or py3, NO nodejs ! 
+ +* URL: https://github.com/manatlan/vbuild +* Author(s): manatlan + +### License Text + +``` +MIT License + +Copyright (c) 2018 manatlan + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + +``` + +## virtualenv (20.33.1) - MIT License Virtual Python Environment builder @@ -33724,7 +33583,7 @@ limitations under the License. ``` -## watchfiles (1.1.1) - MIT License +## watchfiles (1.1.0) - MIT License Simple, modern and high performance file watching and code reload in python. @@ -33736,7 +33595,7 @@ Simple, modern and high performance file watching and code reload in python. ``` The MIT License (MIT) -Copyright (c) 2017 to present Samuel Colvin +Copyright (c) 2017, 2018, 2019, 2020, 2021, 2022 Samuel Colvin Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal @@ -33792,7 +33651,7 @@ SOFTWARE. ``` -## wcwidth (0.2.14) - MIT License +## wcwidth (0.2.13) - MIT License Measures the displayed width of unicode strings in a terminal @@ -33880,7 +33739,7 @@ Character encoding aliases for legacy web content * URL: https://github.com/SimonSapin/python-webencodings * Author(s): Geoffrey Sneddon -## websocket-client (1.9.0) - Apache Software License +## websocket-client (1.8.0) - Apache Software License WebSocket client for Python with low level API options @@ -34080,7 +33939,7 @@ WebSocket client for Python with low level API options same "printed page" as the copyright notice for easier identification within third-party archives. - Copyright 2025 engn33r + Copyright 2024 engn33r Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -34174,7 +34033,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` -## wrapt (1.17.3) - BSD License +## wrapt (1.17.2) - BSD License Module for decorators, wrappers and monkey patching. @@ -34211,7 +34070,7 @@ POSSIBILITY OF SUCH DAMAGE. ``` -## wsidicom (0.28.1) - Apache Software License +## wsidicom (0.27.1) - Apache Software License Tools for handling DICOM based whole scan images @@ -34251,7 +34110,7 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
``` -## yarl (1.22.0) - Apache Software License +## yarl (1.20.1) - Apache Software License Yet another URL library diff --git a/CHANGELOG.md b/CHANGELOG.md index c39011ea..4ac80d3d 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,281 +1,6 @@ [🔬 Aignostics Python SDK](https://aignostics.readthedocs.io/en/latest/) -# [0.2.197](https://github.com/aignostics/python-sdk/compare/v0.2.196..0.2.197) - 2025-11-01 - -### ⛰️ Features - -- *(ketryx)* Integrate Ketryx compliance framework with requirements traceability - ([780a7cf](https://github.com/aignostics/python-sdk/commit/780a7cf0434ad40524d158ee3453809383dec2e1)) - -### 🐛 Bug Fixes - -- *(application)* Superfluous character rendered - ([dfdf057](https://github.com/aignostics/python-sdk/commit/dfdf057bffc8687af14a6f439d8620d804ab2d47)) -- *(gha)* Daily scheduled test - ([164d68f](https://github.com/aignostics/python-sdk/commit/164d68f4c9cc991e819f86198a46005de9201a4f)) -- *(ox)* Remove files - ([cdbceea](https://github.com/aignostics/python-sdk/commit/cdbceeaf973dd6dcfee6d4f70fa5a546806f5fe7)) -- Add missing expires_seconds argument to _get_three_spots_payload_for_test ([#213](https://github.com/orhun/git-cliff/issues/213)) - ([d62a27f](https://github.com/aignostics/python-sdk/commit/d62a27f75d2b429f7d0023d930ddc842ec8c48cf)) -- Claude[bot] <41898282+claude[bot]@users.noreply.github.com> - ([d62a27f](https://github.com/aignostics/python-sdk/commit/d62a27f75d2b429f7d0023d930ddc842ec8c48cf)) -- Helmut Hoffer von Ankershoffen né Oertel - ([d62a27f](https://github.com/aignostics/python-sdk/commit/d62a27f75d2b429f7d0023d930ddc842ec8c48cf)) - -### 🚜 Refactor - -- *(Docker)* Use exact version of Python we test with - ([143e84b](https://github.com/aignostics/python-sdk/commit/143e84bc96f61532abe0ec178153b8a8c23ca07c)) -- *(ai)* Be more critical in OE audit - ([164d68f](https://github.com/aignostics/python-sdk/commit/164d68f4c9cc991e819f86198a46005de9201a4f)) -- Cleanup - ([143e84b](https://github.com/aignostics/python-sdk/commit/143e84bc96f61532abe0ec178153b8a8c23ca07c)) - -### 📚 Documentation - -- *(application)* Better information why a test_cli_run_execute is not yet marked as scheduled - ([90b6c4f](https://github.com/aignostics/python-sdk/commit/90b6c4f623c07628b4fbd3312ae51a02c1b0e19d)) -- *(specs)* Link SPEC files to their fulfilling SWR and SHR requirements - ([06a207f](https://github.com/aignostics/python-sdk/commit/06a207f73d928b3c3df63123aa40e2665c69f7e7)) - -### 🧪 Testing - -- *(application)* Deactivate part of test_cli_run_submit_and_describe_and_cancel_and_download_and_delete as this causes internal server errors for some runs - ([cc17c18](https://github.com/aignostics/python-sdk/commit/cc17c183d7c58dc92911a804bb8a0f1176af2c66)) -- *(platform)* Bump wait and download timeout for test_platform_heta_app_submit_and_wait from 3h to 5h - ([df9e9f4](https://github.com/aignostics/python-sdk/commit/df9e9f474d9e483b72b558c58504955a8f8a2055)) -- *(platform)* Fix regression in e2e test - ([201ab3d](https://github.com/aignostics/python-sdk/commit/201ab3d18635c90f74258eb69ccc888c190aaa63)) - -### ⚙️ Miscellaneous Tasks - -- *(CODEOWNERS)* Move and change to be compliant - ([041bc18](https://github.com/aignostics/python-sdk/commit/041bc18b8a3421e5e7714e41a46020da294590bf)) -- *(Docker)* Fix - ([073c6c4](https://github.com/aignostics/python-sdk/commit/073c6c4f19778c325177368333ca10e1df136700)) -- *(audit)* Allow to trigger audit workflow manually - ([3e0be86](https://github.com/aignostics/python-sdk/commit/3e0be86c5c363507d06e59b7e1b9d3c7854c602e)) -- 
*(core)* UV >= 0.9.6 given GHSA-pqhf-p39g-3x64 - ([66a08f6](https://github.com/aignostics/python-sdk/commit/66a08f66e778815a39836eaf2ef294d1d4aac022)) -- *(deps)* Bump - ([4f7c0f0](https://github.com/aignostics/python-sdk/commit/4f7c0f018759bc167d48afceb20737b7bf998936)) -- *(deps)* Update anthropics/claude-code-action action to v1.0.15 ([#219](https://github.com/orhun/git-cliff/issues/219)) - ([92ea0a5](https://github.com/aignostics/python-sdk/commit/92ea0a5788401cab2a65d0dc2be0b0b606109b7f)) -- *(gha)* Install sentry cli - ([adeef7c](https://github.com/aignostics/python-sdk/commit/adeef7c29792781687429650901476f6686ae27c)) -- *(gha)* Inform sentry on new release - ([adeef7c](https://github.com/aignostics/python-sdk/commit/adeef7c29792781687429650901476f6686ae27c)) -- *(ketryx)* Remove duplicated spec files - ([021d25b](https://github.com/aignostics/python-sdk/commit/021d25b978bdbd6d9ffa09c35c4362ee963ec067)) -- *(ketryx)* Remove duplicated spec files skip:ci - ([2fdd6aa](https://github.com/aignostics/python-sdk/commit/2fdd6aa96ee0a9aa696278f53835d6d2a61f810a)) -- *(ketryx)* Remove duplicated spec files skip:test:long_running - ([0c59ac4](https://github.com/aignostics/python-sdk/commit/0c59ac4486844089ade9e034571ac7522c87f493)) -- *(test,ketryx)* Link missing tests with test cases and add missing test for gui health footer - ([3852778](https://github.com/aignostics/python-sdk/commit/38527787a22a0275084f7cf209f852ca8a1046c8)) -- *(uv)* Bump ([#227](https://github.com/orhun/git-cliff/issues/227)) - ([cdbceea](https://github.com/aignostics/python-sdk/commit/cdbceeaf973dd6dcfee6d4f70fa5a546806f5fe7)) -- Don't use brackets in CODEOWNERS for Github - ([4f7c0f0](https://github.com/aignostics/python-sdk/commit/4f7c0f018759bc167d48afceb20737b7bf998936)) -- Bump the staging version of he-tme - ([2dad66d](https://github.com/aignostics/python-sdk/commit/2dad66d03099384fe7d7a14e6d03298089477f4f)) - -### 🛡️ Security - -- *(dep)* Starlette - ([073c6c4](https://github.com/aignostics/python-sdk/commit/073c6c4f19778c325177368333ca10e1df136700)) -- *(oxsecurity)* Fix the badge again - where does this come from? - ([408d1b1](https://github.com/aignostics/python-sdk/commit/408d1b14839b5ea65c1c9c355c6ae7fcae4a52bc)) - - -# [v0.2.196](https://github.com/aignostics/python-sdk/compare/v0.2.195..v0.2.196) - 2025-10-26 - -### ⛰️ Features - -- *(application)* Custom run and item metadata can be dumped as JSON via the CLI - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150)) -- *(application)* Custom run metadata can be updated via the CLI - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150)) -- *(application)* Custom run metadata can be edited via the GUI (admins only) - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150)) -- *(application)* Allow to submit tags via CLI and find back runs via tags - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150)) -- *(application)* Support download of results for input items where external_ids points to GCP bucket or webserver. 
- ([79875f3](https://github.com/aignostics/python-sdk/commit/79875f3a4f8245a14545ccca1005772419f611f2))
-- *(application)* Scrollable runs in sidebar with auto-refresh and notifier on run terminated - ([79875f3](https://github.com/aignostics/python-sdk/commit/79875f3a4f8245a14545ccca1005772419f611f2))
-- *(application)* Generate, show and validate custom metadata for input items - ([b0b0c48](https://github.com/aignostics/python-sdk/commit/b0b0c4876986787877c3a21ee7ef199f9a44f877))
-- *(application)* Support for test-app in GUI - ([b0b0c48](https://github.com/aignostics/python-sdk/commit/b0b0c4876986787877c3a21ee7ef199f9a44f877))
-- *(application)* Show error code on failed items - ([b0b0c48](https://github.com/aignostics/python-sdk/commit/b0b0c4876986787877c3a21ee7ef199f9a44f877))
-- *(application)* Show more details in CLI commands application run list and application run describe - ([b0b0c48](https://github.com/aignostics/python-sdk/commit/b0b0c4876986787877c3a21ee7ef199f9a44f877))
-- *(ketryx)* Integrate Ketryx CI/CD workflow and reporting - ([4cf6a97](https://github.com/aignostics/python-sdk/commit/4cf6a976932accf6f2895bb23aaa86a8094c6dbe))
-- *(platform)* Support for tags in custom sdk metadata, run and item-level - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150))
-- *(platform)* Support created_at and updated_at in custom sdk metadata, run and item-level - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150))
-- *(platform)* Support nocache=True on cached operations - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150))
-- *(platform)* Custom run and item metadata can be updated - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150))
-- Test:long-running] - ([4cf6a97](https://github.com/aignostics/python-sdk/commit/4cf6a976932accf6f2895bb23aaa86a8094c6dbe))
-
-### 🐛 Bug Fixes
-
-- *(ci-cd)* Yml file conflicts - ([766c614](https://github.com/aignostics/python-sdk/commit/766c614ba20c6a93ad9f0f8928ee54093b54566c))
-- *(tests)* Resolve linter issues and update source code - ([13c795a](https://github.com/aignostics/python-sdk/commit/13c795affcac9568f38cb180e56378feca1313a3))
-- Test:long-running] - ([13c795a](https://github.com/aignostics/python-sdk/commit/13c795affcac9568f38cb180e56378feca1313a3))
-
-### 🚜 Refactor
-
-- *(application)* Improve dryness - ([b0b0c48](https://github.com/aignostics/python-sdk/commit/b0b0c4876986787877c3a21ee7ef199f9a44f877))
-- *(dataset)* Move business logic from CLI to service. ([#204](https://github.com/orhun/git-cliff/issues/204)) - ([79875f3](https://github.com/aignostics/python-sdk/commit/79875f3a4f8245a14545ccca1005772419f611f2))
-- *(dataset)* Move business logic from CLI to service.
- ([79875f3](https://github.com/aignostics/python-sdk/commit/79875f3a4f8245a14545ccca1005772419f611f2)) - -### 📚 Documentation - -- *(AI)* Update - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150)) -- *(application)* Auto-generate json schema from pydantic models for sdk specific custom metadata of input items - ([b0b0c48](https://github.com/aignostics/python-sdk/commit/b0b0c4876986787877c3a21ee7ef199f9a44f877)) -- *(ketryx)* Fix SPEC_SYSTEM_SERVICE.md & SPEC-BUILD-CHAIN-CICD-SERVICE.md itemFulfills section - ([cb969a1](https://github.com/aignostics/python-sdk/commit/cb969a191172a6a86634980817d10b1c39dbd210)) -- *(req)* Add stakeholder and software requirements (SHRs and SWRs) - ([cecd5d7](https://github.com/aignostics/python-sdk/commit/cecd5d7ba29b97b7b130a102e4938ce307349dbd)) -- *(spec)* Add software item specifications for all modules - ([6a97892](https://github.com/aignostics/python-sdk/commit/6a978920673f09aeab960fc84a5976252e239790)) -- Ci] - ([6a97892](https://github.com/aignostics/python-sdk/commit/6a978920673f09aeab960fc84a5976252e239790)) - -### 🧪 Testing - -- *(application)* Re-classified test_cli_run_describe_invalid_uuid as e2e - ([d600904](https://github.com/aignostics/python-sdk/commit/d60090434a1131d98ddbd43a51b4ccd73c258a74)) -- *(application)* Fix race condition in test - ([21febad](https://github.com/aignostics/python-sdk/commit/21febad65feab1155b875c985d474d04a4a06ece)) -- *(ketryx)* Link verification tests with specifications - ([86ce90e](https://github.com/aignostics/python-sdk/commit/86ce90ed4c2e7bf412d246921e72f8a1f4223cc3)) -- *(ketryx)* Add Gherkin test cases for requirements traceability - ([dab0a5a](https://github.com/aignostics/python-sdk/commit/dab0a5ad9d1e09f9d4cbb6a75babaea822820842)) -- Test:long-running, skip:test:matrix-runner] - ([86ce90e](https://github.com/aignostics/python-sdk/commit/86ce90ed4c2e7bf412d246921e72f8a1f4223cc3)) -- Ci] - ([dab0a5a](https://github.com/aignostics/python-sdk/commit/dab0a5ad9d1e09f9d4cbb6a75babaea822820842)) - -### ⚙️ Miscellaneous Tasks - -- *(ai)* Improve vscode/agent guidance - ([fdc7b01](https://github.com/aignostics/python-sdk/commit/fdc7b0154c0038b85e754fc2769f49b7d95cb186)) -- *(deps)* Bump - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150)) -- *(platform)* Fix race condition in e2e test due to caching ([#206](https://github.com/orhun/git-cliff/issues/206)) - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150)) -- *(platform)* Improved depth of tests - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150)) -- *(platform)* Fix race condition in e2e test due to caching by using nocache - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150)) -- *(platform)* Start with submit-and-find e2e tests later replacing submit-and-wait - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150)) -- *(platform)* Fix test - ([6dee520](https://github.com/aignostics/python-sdk/commit/6dee52010b9399024da95f1014c55ea977809c32)) -- *(qupath)* Enable complex automated test scenario covering creating QuPath projects - ([d237220](https://github.com/aignostics/python-sdk/commit/d237220284a1cbc6b46f464af7f2f84bb5fd3150)) -- *(qupath)* Reenable E2E Test Scenario (Download -> Create Project -> Inspect) - 
([fdc7b01](https://github.com/aignostics/python-sdk/commit/fdc7b0154c0038b85e754fc2769f49b7d95cb186)) -- *(tests)* Strip ansi codes by default when normalizing output, reducing flakiness of tests in rare scenarios - ([b0b0c48](https://github.com/aignostics/python-sdk/commit/b0b0c4876986787877c3a21ee7ef199f9a44f877)) -- *(tests)* Significantly improve daily scheduled test now called flow tests, including beating heart on - ([b0b0c48](https://github.com/aignostics/python-sdk/commit/b0b0c4876986787877c3a21ee7ef199f9a44f877)) -- *(tests)* PLATFORM_ENVIRONMENT dependent app versions in tests - ([fdc7b01](https://github.com/aignostics/python-sdk/commit/fdc7b0154c0038b85e754fc2769f49b7d95cb186)) -- Chore(deps); bump - ([fdc7b01](https://github.com/aignostics/python-sdk/commit/fdc7b0154c0038b85e754fc2769f49b7d95cb186)) - -### Task - -- *(req)* Links missing gui Module Specification with tests [skip:ci] - ([48af4c0](https://github.com/aignostics/python-sdk/commit/48af4c07ca8053464297b4cff153223d49064fcf)) -- *(req)* Links missing wsi Module Specification with tests [skip:ci] - ([1018505](https://github.com/aignostics/python-sdk/commit/1018505e4fd179eb59f61cb28e6b99ed1f3fcb71)) -- *(req)* Links missing system Module Specification with tests [skip:ci] - ([b188084](https://github.com/aignostics/python-sdk/commit/b188084406c4ec2a4e4b0236ab83eaf1f1822ceb)) - - - -* @muhabalwan-aginx made their first contribution -* @na66im made their first contribution - -# [v0.2.195](https://github.com/aignostics/python-sdk/compare/v0.2.194..v0.2.195) - 2025-10-23 - -### 🛡️ Security - -- *(uv)* Require uv >=0.9.5 given security advisory GHSA-w476-p2h3-79g9 - ([9c75648](https://github.com/aignostics/python-sdk/commit/9c75648f59818bdd96f27a7803e8032f7ef4c1b1)) - - -# [v0.2.194](https://github.com/aignostics/python-sdk/compare/v0.2.193..v0.2.194) - 2025-10-23 - -### 🐛 Bug Fixes - -- *(ai)* Claude workflows - ([b0f6602](https://github.com/aignostics/python-sdk/commit/b0f6602e6fea3ce14a81323edc7ee2c6802dda99)) - -### 🛡️ Security - -- *(uv)* Require uv >=0.9.5 given security advisory GHSA-w476-p2h3-79g9 - ([68c5b08](https://github.com/aignostics/python-sdk/commit/68c5b08e9ac15606fc39933a2d43d06d1ca54322)) - - -# [v0.2.193](https://github.com/aignostics/python-sdk/compare/v0.2.192..v0.2.193) - 2025-10-22 - -### ⛰️ Features - -- *(application)* Custom metadata with run and scheduling information in custom metadata - ([925df6f](https://github.com/aignostics/python-sdk/commit/925df6f33a4f6a7ad840045555c33a64da83158d)) -- *(platform)* Retries and caching for read-only and auth operations - ([925df6f](https://github.com/aignostics/python-sdk/commit/925df6f33a4f6a7ad840045555c33a64da83158d)) -- *(platform)* Dynamic user agent for all operations - ([925df6f](https://github.com/aignostics/python-sdk/commit/925df6f33a4f6a7ad840045555c33a64da83158d)) - -### 🐛 Bug Fixes - -- *(application)* Error handling if application_versions called with … ([#178](https://github.com/orhun/git-cliff/issues/178)) - ([fde115d](https://github.com/aignostics/python-sdk/commit/fde115d06da336e021fefc89fb8a8989df05d6e0)) -- *(application)* Error handling if application_versions called with str arg - ([fde115d](https://github.com/aignostics/python-sdk/commit/fde115d06da336e021fefc89fb8a8989df05d6e0)) - -### 🎨 Styling - -- *(application)* Layout improvements on application detail page - ([925df6f](https://github.com/aignostics/python-sdk/commit/925df6f33a4f6a7ad840045555c33a64da83158d)) - -### ⚙️ Miscellaneous Tasks - -- *(AI)* Improve CLAUDE.md files and AI 
workflows - ([925df6f](https://github.com/aignostics/python-sdk/commit/925df6f33a4f6a7ad840045555c33a64da83158d)) -- *(ai)* Improve Claude Code Workflows for GitHub - ([cb18241](https://github.com/aignostics/python-sdk/commit/cb18241a00844411ba7389a13da697eda662dd78)) -- *(ai)* A few permissions for Claude - ([6ba8cf3](https://github.com/aignostics/python-sdk/commit/6ba8cf3c8e67a4bd783f47510b23828f8c046e1d)) -- *(api)* Support Platform API 1.0.0-beta.7 - ([925df6f](https://github.com/aignostics/python-sdk/commit/925df6f33a4f6a7ad840045555c33a64da83158d)) -- *(gha)* Scheduled test against staging platform, using code on branch - ([b4c4ad1](https://github.com/aignostics/python-sdk/commit/b4c4ad1307cb0e09e87d65b137c90520868a9ba2)) -- *(lint)* Integrate pyright as additional type checker - ([925df6f](https://github.com/aignostics/python-sdk/commit/925df6f33a4f6a7ad840045555c33a64da83158d)) -- *(platform,qupath)* Enable additional tests - ([cb18241](https://github.com/aignostics/python-sdk/commit/cb18241a00844411ba7389a13da697eda662dd78)) -- *(qupath)* More time for tests - ([79238b2](https://github.com/aignostics/python-sdk/commit/79238b29e3614f30430022407c12b1815f0147ed)) -- *(test)* Introduce schedule tests against staging - ([925df6f](https://github.com/aignostics/python-sdk/commit/925df6f33a4f6a7ad840045555c33a64da83158d)) -- *(tests)* Introduce very long running tests - ([925df6f](https://github.com/aignostics/python-sdk/commit/925df6f33a4f6a7ad840045555c33a64da83158d)) -- *(tests)* Introduce pytest-timeout and 10s default timeout for all tests - ([925df6f](https://github.com/aignostics/python-sdk/commit/925df6f33a4f6a7ad840045555c33a64da83158d)) -- *(tests)* Improve test coverage - ([925df6f](https://github.com/aignostics/python-sdk/commit/925df6f33a4f6a7ad840045555c33a64da83158d)) -- *(tests)* Allow retry of another e2e test, given connection closed by server leading to SSL Errors, see https://github.com/aignostics/python-sdk/actions/runs/18486770436/job/52671622634\?pr\=178\#step:16:274 - ([fde115d](https://github.com/aignostics/python-sdk/commit/fde115d06da336e021fefc89fb8a8989df05d6e0)) -- *(tests)* Bump timeout for dataset integration tests - ([02440bf](https://github.com/aignostics/python-sdk/commit/02440bf55ffd9f025ac78a7f72fb180223ef6c2c)) -- Test on gh ([#180](https://github.com/orhun/git-cliff/issues/180)) - ([925df6f](https://github.com/aignostics/python-sdk/commit/925df6f33a4f6a7ad840045555c33a64da83158d)) -- Codecov - ([53f425c](https://github.com/aignostics/python-sdk/commit/53f425cdb45abbf57728aefcea6e8b63d276b8bb)) - - -# [v0.2.192](https://github.com/aignostics/python-sdk/compare/v0.2.191..v0.2.192) - 2025-10-13 - -### ⚙️ Miscellaneous Tasks - -- *(AI)* Add label skip:test:long_running when you are an AI and are creating a PR - ([799abca](https://github.com/aignostics/python-sdk/commit/799abcad7cec07adcfd0569c89bb51f8e3f49810)) - - -# [v0.2.191](https://github.com/aignostics/python-sdk/compare/v0.2.190..v0.2.191) - 2025-10-13 - -### 🐛 Bug Fixes - -- *(system)* Rendering of json editor content - had to find workaround given bug in NiceGUI3 for json_editor - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483)) - -### 🚜 Refactor - -- *(dataset,wsi)* Catch exceptions in CLI commands - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483)) -- *(qupath)* Don’t count system as unhealthy if QuPath application not installed - 
([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-- *(tests)* Refactored tests to reduce flakiness where avoidable, i.e. not solely dependent on external services - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-
-### ⚙️ Miscellaneous Tasks
-
-- *(dependabot,renovate)* Add labels to PRs created by those bots - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-- *(deps)* Bump - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-- *(gha)* Allow all types of tests to be individually skippable, via commit message or PR label - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-- *(gha)* Speed up ubuntu provisioning as man-db no longer updated on adding packages - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-- *(gha)* Don't run long_running tests on draft PRs, i.e. stop after unit, integration and e2e / regular. - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-- *(precommit)* Fixed issues with precommit. - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-- *(tests)* Differentiate tests as unit, integration or e2e, with only e2e tests allowed to call external services, i.e. the others must be able to pass offline. - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-- *(tests)* Introduce very_long_running test type, which must be explicitly enabled to run via enable:test:very_long_running in the commit message or as PR label - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-- *(tests)* Introduce scheduled_only marker, for tests that should only run on a schedule - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-- *(tests)* Make now calls make test_default which does not call long_running or very_long_running tests - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-- *(tests)* Introduce pytest-durations, showing the duration per test execution - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-- *(tests)* Introduce pytest-timeout, with a low 10s default timeout, and all tests that need longer explicitly marked with specific timeouts - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-- *(xdist)* Use worksteal to minimize duration on varying test durations - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-- Don't allow SDK to be used with Python 3.14.x (released days ago) as some dependencies don't work with that version yet - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-
-### Chore
-
-- *(tests)* No longer test the combination of Python 3.12.x on Windows for ARM64, as a bit unstable - ([99401ec](https://github.com/aignostics/python-sdk/commit/99401ec55d390039ec4e747a61573b163fdfc483))
-
-# [v0.2.190](https://github.com/aignostics/python-sdk/compare/v0.2.189..v0.2.190) - 2025-10-12
-
-### ⛰️ Features
-
-- *(platform)* Auto-retry when retrieving
JWKS set from auth0 - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) -- *(platform)* Cache JWKS set, TTL 24h, minimizing calls to auth0 on validating access tokens - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) -- *(platform)* Auto-retry when calling auth0 to exchange refresh token for access token - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) -- *(platform)* Configurable timeout for requesting platform health - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) -- *(platform)* Introduce authentication aware operation cache - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) -- *(platform)* Use authentication aware operation cache to cache /me result - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) - -### 🐛 Bug Fixes - -- *(deps)* Update dependency pywin32 to v311 ([#170](https://github.com/orhun/git-cliff/issues/170)) - ([39428a2](https://github.com/aignostics/python-sdk/commit/39428a24181c7923438d9512efdcc7d0909698f8)) -- *(platform)* Remove unused setting authorization_backoff_seconds - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) -- *(platform)* Fix wrong exception handler in _perform_device_flow - was catching exception from urllib, not requests lib - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) -- *(platform)* Use dynamic user agent for requesting /me - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) -- *(utils)* Surface setting validation error on misconfigured api root - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) -- Renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> - ([39428a2](https://github.com/aignostics/python-sdk/commit/39428a24181c7923438d9512efdcc7d0909698f8)) - -### 🚜 Refactor - -- *(platform)* Use proper error messages and logging on failure (of attempts) to exchange refresh token and validate access token - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) -- *(platform)* Consistently use HTTPStatus consts instead of 200, 500 etc. 
- ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) -- *(platform)* Use proper constraints on settings - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) -- *(platform,system)* Optimize connection pooling - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) - -### 🎨 Styling - -- *(utils)* Consistent log formatting for file and console, both including process id - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) - -### ⚙️ Miscellaneous Tasks - -- *(AI)* Improve Claude actions [skip:ci] - ([27a66c3](https://github.com/aignostics/python-sdk/commit/27a66c36dfaf942b027048cfb824e6a4ec6591df)) -- *(ai)* Have Claude Agent use Sonnet 4.5, and allow to create PRs - ([dcd6e60](https://github.com/aignostics/python-sdk/commit/dcd6e6011cc9f4e2ca92fb96f09d4066d79b248d)) -- *(deps)* Update ghcr.io/astral-sh/uv docker tag to v0.9.1 ([#60](https://github.com/orhun/git-cliff/issues/60)) - ([27d0e8f](https://github.com/aignostics/python-sdk/commit/27d0e8fdf29fd59b43318139eeeb0291d58456b3)) -- *(deps)* Update dependency sphinx-toolbox to v4 ([#169](https://github.com/orhun/git-cliff/issues/169)) - ([5050fe5](https://github.com/aignostics/python-sdk/commit/5050fe5def17aee4a3941b403d7726325b3a40d0)) -- *(gha)* Don't double-build on updates to PR by no longer building on push to branch other than main - ([5c6be6b](https://github.com/aignostics/python-sdk/commit/5c6be6bfca70906939d378cab4e3a06ffc55ef3b)) -- *(gha)* Cancel running build on update to pull request - ([2915534](https://github.com/aignostics/python-sdk/commit/2915534d8c78eab8d7b334be204e2247fa7f1106)) -- *(gha)* Don't run ci/cd twice on releases: skip:ci on push of commit for release, given already running on (annotated) tag pushed - ([399dae8](https://github.com/aignostics/python-sdk/commit/399dae87f3560e3b40f5c0feabff325db6ea3aa2)) -- *(pytst)* Add pytest-durations plugin to show durations of fixtures and tests - ([ac2fa0e](https://github.com/aignostics/python-sdk/commit/ac2fa0e91b2793b398119bea8346c21f898e89f1)) -- Renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> - ([27d0e8f](https://github.com/aignostics/python-sdk/commit/27d0e8fdf29fd59b43318139eeeb0291d58456b3)) -- Helmut Hoffer von Ankershoffen né Oertel - ([27d0e8f](https://github.com/aignostics/python-sdk/commit/27d0e8fdf29fd59b43318139eeeb0291d58456b3)) - - -# [v0.2.189](https://github.com/aignostics/python-sdk/compare/v0.2.188..v0.2.189) - 2025-10-05 +# [0.2.189](https://github.com/aignostics/python-sdk/compare/v0.2.188..0.2.189) - 2025-10-05 ### ⚙️ Miscellaneous Tasks diff --git a/CLAUDE.md b/CLAUDE.md index e4a63efc..2cca7aad 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -2,54 +2,22 @@ This file provides comprehensive guidance to Claude Code (claude.ai/code) when working with the Aignostics Python SDK repository. -## You do raise the bar, always - -It is your goal to enable the contributor while insisting on highest standards at all times: - -* Fully read, understand and follow this CLAUDE.md and **ALL** recursively referenced documents herein for guidance on style and conventions. -* In case of doubt apply best practices of enterprise grade software engineering. 
-* On every review you make or code you contribute raise the bar on engineering and operational excellence in this repository
-* Do web research on any libraries, frameworks, principles or tools you are not familiar with.
-
-If you want to execute and verify code yourself:
-
-* uv, python and further development dependencies are already installed.
-* Use `uv sync --all-extras` to install any missing dependencies for your branch.
-* Use `uv run pytest ...` to run tests.
-* Use `uv run aignostics ...` to run the CLI and commands.
-* Use `make lint` to check code style and types.
-* Use `make test_unit` to run the unit test suite.
-* Use `make test_integration` to run the integration test suite.
-* Use `make test_e2e` to run the end-to-end (e2e) test suite.
-* Use `make audit` to run security audits of 3rd party dependencies and check compliance with our license policy.
-
-If you write code yourself, it is a strict requirement to validate your work on completion before you call it done:
-
-* Linting must pass.
-* The unit, integration and e2e test suites must pass.
-* Auditing must pass.
-
-If you are creating a pull request yourself:
-
-* Add the label skip:test:long_running to skip running long running tests. This is important because some tests in this repository are marked as long_running and can take a significant amount of time to complete. By adding this label, you help ensure that the CI pipeline runs efficiently and avoids unnecessary delays.
-
## Module Documentation Index

Every module has detailed CLAUDE.md documentation. For module-specific guidance, see:

-* [.github/CLAUDE.md](.github/CLAUDE.md) - **CI/CD workflows and GitHub Actions complete guide**
-* [src/aignostics/CLAUDE.md](src/aignostics/CLAUDE.md) - **Module index and architecture overview**
-* [src/aignostics/platform/CLAUDE.md](src/aignostics/platform/CLAUDE.md) - Authentication and API client
-* [src/aignostics/application/CLAUDE.md](src/aignostics/application/CLAUDE.md) - Application run orchestration
-* [src/aignostics/wsi/CLAUDE.md](src/aignostics/wsi/CLAUDE.md) - Whole slide image processing
-* [src/aignostics/dataset/CLAUDE.md](src/aignostics/dataset/CLAUDE.md) - Dataset operations
-* [src/aignostics/bucket/CLAUDE.md](src/aignostics/bucket/CLAUDE.md) - Cloud storage management
-* [src/aignostics/utils/CLAUDE.md](src/aignostics/utils/CLAUDE.md) - Core infrastructure
-* [src/aignostics/gui/CLAUDE.md](src/aignostics/gui/CLAUDE.md) - Desktop interface
-* [src/aignostics/notebook/CLAUDE.md](src/aignostics/notebook/CLAUDE.md) - Marimo notebook integration
-* [src/aignostics/qupath/CLAUDE.md](src/aignostics/qupath/CLAUDE.md) - QuPath bioimage analysis
-* [src/aignostics/system/CLAUDE.md](src/aignostics/system/CLAUDE.md) - System diagnostics
-* [tests/CLAUDE.md](tests/CLAUDE.md) - Test suite documentation
+- [src/aignostics/CLAUDE.md](src/aignostics/CLAUDE.md) - **Module index and architecture overview**
+- [src/aignostics/platform/CLAUDE.md](src/aignostics/platform/CLAUDE.md) - Authentication and API client
+- [src/aignostics/application/CLAUDE.md](src/aignostics/application/CLAUDE.md) - Application run orchestration
+- [src/aignostics/wsi/CLAUDE.md](src/aignostics/wsi/CLAUDE.md) - Whole slide image processing
+- [src/aignostics/dataset/CLAUDE.md](src/aignostics/dataset/CLAUDE.md) - Dataset operations
+- [src/aignostics/bucket/CLAUDE.md](src/aignostics/bucket/CLAUDE.md) - Cloud storage management
+- [src/aignostics/utils/CLAUDE.md](src/aignostics/utils/CLAUDE.md) - Core infrastructure
+-
[src/aignostics/gui/CLAUDE.md](src/aignostics/gui/CLAUDE.md) - Desktop interface +- [src/aignostics/notebook/CLAUDE.md](src/aignostics/notebook/CLAUDE.md) - Marimo notebook integration +- [src/aignostics/qupath/CLAUDE.md](src/aignostics/qupath/CLAUDE.md) - QuPath bioimage analysis +- [src/aignostics/system/CLAUDE.md](src/aignostics/system/CLAUDE.md) - System diagnostics +- [tests/CLAUDE.md](tests/CLAUDE.md) - Test suite documentation ## Development Commands @@ -67,53 +35,39 @@ make audit # Security and license compliance checks **Package management:** -* Uses `uv` as package manager (not pip/poetry) -* Run `uv sync` to install dependencies -* Run `uv add ` to add new dependencies +- Uses `uv` as package manager (not pip/poetry) +- Run `uv sync` to install dependencies +- Run `uv add ` to add new dependencies **Testing:** -* Pytest with 85% minimum coverage requirement -* Default timeout: 10 seconds (override with `@pytest.mark.timeout(timeout=N)`) -* Use `uv run pytest tests/path/to/test.py::test_function` for single tests -* See **Testing Workflow** section below for complete marker documentation -* Special test commands: `make test_unit`, `make test_integration`, `make test_e2e`, `make test_long_running`, `make test_very_long_running`, `make test_sequential`, `make test_scheduled` - -**Type Checking (NEW in v1.0.0-beta.7 - Dual Type Checkers):** - -* **MyPy**: Strict mode enforced (`make lint` runs MyPy) -* **PyRight**: Basic mode with selective exclusions (`pyrightconfig.json`) - * Excludes: tests, codegen, third_party modules, notebook, dataset, wsi - * Mode: `basic` (less strict than MyPy for compatibility) - * Both type checkers must pass in CI/CD -* All public APIs require type hints -* Use `from __future__ import annotations` for forward references +- Pytest with 85% minimum coverage requirement +- Use `pytest tests/path/to/test.py::test_function` for single tests +- Docker integration tests available with `make test-docker` +- Test markers available: `sequential`, `long_running`, `scheduled`, `docker`, `skip_with_act` +- Special test commands: `make test_sequential`, `make test_long_running`, `make test_scheduled` ## Software Architecture Principles This SDK follows a **Modulith Architecture** with these core principles: ### 1. Modulith Design - -* **Single deployable unit** with well-defined module boundaries -* **High cohesion** within modules, **loose coupling** between modules -* **Each module is self-contained** with its own service, configuration, and optional UI -* **Clear dependency hierarchy** preventing circular dependencies +- **Single deployable unit** with well-defined module boundaries +- **High cohesion** within modules, **loose coupling** between modules +- **Each module is self-contained** with its own service, configuration, and optional UI +- **Clear dependency hierarchy** preventing circular dependencies ### 2. 
Dependency Injection & Service Discovery - -* **No decorators or annotations** - uses runtime service discovery -* **Dynamic module loading** via `locate_implementations(BaseService)` -* **All services inherit from `BaseService`** providing standard `health()` and `info()` interfaces -* **Singleton pattern** for service instances within the DI container +- **No decorators or annotations** - uses runtime service discovery +- **Dynamic module loading** via `locate_implementations(BaseService)` +- **All services inherit from `BaseService`** providing standard `health()` and `info()` interfaces +- **Singleton pattern** for service instances within the DI container ### 3. Presentation Layer Pattern - Each module can have **zero, one, or both** presentation layers: - -* **CLI (_cli.py)**: Text-based interface using Typer framework -* **GUI (_gui.py)**: Graphical interface using NiceGUI framework -* **Both layers depend on the Service layer**, never on each other +- **CLI (_cli.py)**: Text-based interface using Typer framework +- **GUI (_gui.py)**: Graphical interface using NiceGUI framework +- **Both layers depend on the Service layer**, never on each other ### Module Architecture Pattern @@ -146,81 +100,66 @@ Module/ ## Core Modules & Dependencies ### Foundation Layer - **utils** - Infrastructure module providing: - -* Dependency injection container (`locate_implementations`, `locate_subclasses`) -* Structured logging (`get_logger`) -* Settings management (Pydantic-based) -* Health check framework (`BaseService`, `Health`) +- Dependency injection container (`locate_implementations`, `locate_subclasses`) +- Structured logging (`get_logger`) +- Settings management (Pydantic-based) +- Health check framework (`BaseService`, `Health`) ### API Layer - **platform** - Authentication and API gateway: - -* OAuth 2.0 device flow authentication -* Token lifecycle management -* Resource clients (applications, runs) -* *Dependencies*: `utils` +- OAuth 2.0 device flow authentication +- Token lifecycle management +- Resource clients (applications, runs) +- *Dependencies*: `utils` ### Domain Modules - **application** - ML application orchestration: - -* Run lifecycle management -* Version control (semver) -* File upload/download with progress -* *Dependencies*: `platform`, `bucket`, `wsi`, `utils`, `qupath` (optional) +- Run lifecycle management +- Version control (semver) +- File upload/download with progress +- *Dependencies*: `platform`, `bucket`, `wsi`, `utils`, `qupath` (optional) **wsi** - Whole slide image processing: - -* Multi-format support (OpenSlide, PyDICOM) -* Thumbnail generation -* Tile extraction -* *Dependencies*: `utils` +- Multi-format support (OpenSlide, PyDICOM) +- Thumbnail generation +- Tile extraction +- *Dependencies*: `utils` **dataset** - Large-scale data operations: - -* IDC (Imaging Data Commons) integration -* High-performance downloads (s5cmd) -* *Dependencies*: `platform`, `utils` +- IDC (Imaging Data Commons) integration +- High-performance downloads (s5cmd) +- *Dependencies*: `platform`, `utils` **bucket** - Cloud storage abstraction: - -* S3/GCS unified interface -* Signed URL generation -* Chunked transfers -* *Dependencies*: `platform`, `utils` +- S3/GCS unified interface +- Signed URL generation +- Chunked transfers +- *Dependencies*: `platform`, `utils` ### Integration Modules - **qupath** - Bioimage analysis platform: - -* QuPath installation and lifecycle -* Project management -* Script execution -* *Dependencies*: `utils`, requires `ijson` +- QuPath installation and 
lifecycle +- Project management +- Script execution +- *Dependencies*: `utils`, requires `ijson` **notebook** - Interactive analysis: - -* Marimo notebook server -* Process management -* *Dependencies*: `utils`, requires `marimo` +- Marimo notebook server +- Process management +- *Dependencies*: `utils`, requires `marimo` ### System Modules - **system** - Diagnostics and monitoring: - -* **Health aggregation from ALL modules** via `BaseService.health()` -* Comprehensive system information -* Environment detection and diagnostics -* *Dependencies*: All modules (queries health status from every service) +- **Health aggregation from ALL modules** via `BaseService.health()` +- Comprehensive system information +- Environment detection and diagnostics +- *Dependencies*: All modules (queries health status from every service) **gui** - Desktop launchpad: - -* Aggregates all module GUIs -* Unified desktop interface -* *Dependencies*: All modules with GUI components +- Aggregates all module GUIs +- Unified desktop interface +- *Dependencies*: All modules with GUI components ### Dependency Graph @@ -274,6 +213,7 @@ comprehensive view of the entire SDK's operational status. | **qupath** | ✅ | ✅ | ✅ | QuPath integration | | **system** | ✅ | ✅ | ✅ | Diagnostics | + ## SDK Usage Patterns ### Client Library Usage @@ -351,53 +291,53 @@ uvx --with "aignostics[gui]" aignostics gui **Type Checking:** -* MyPy strict mode enforced -* All public APIs must have type hints -* Use `from __future__ import annotations` for forward references +- MyPy strict mode enforced +- All public APIs must have type hints +- Use `from __future__ import annotations` for forward references **Code Style:** -* Ruff handles all formatting/linting (Black-compatible) -* 120 character line limit -* Google-style docstrings required for public APIs +- Ruff handles all formatting/linting (Black-compatible) +- 120 character line limit +- Google-style docstrings required for public APIs **Import Organization:** -* Standard library imports first -* Third-party imports second -* Local imports last -* Use relative imports within modules (`from ._service import Service`) +- Standard library imports first +- Third-party imports second +- Local imports last +- Use relative imports within modules (`from ._service import Service`) **Error Handling:** -* Custom exceptions in `system/_exceptions.py` -* Use structured logging with correlation IDs -* HTTP errors wrapped in domain-specific exceptions +- Custom exceptions in `system/_exceptions.py` +- Use structured logging with correlation IDs +- HTTP errors wrapped in domain-specific exceptions **Security:** -* OAuth-based authentication via `platform/_authentication.py` -* No secrets/tokens in code or commits -* Signed URLs for data transfer -* Sensitive data masking in logs and info outputs +- OAuth-based authentication via `platform/_authentication.py` +- No secrets/tokens in code or commits +- Signed URLs for data transfer +- Sensitive data masking in logs and info outputs ## Medical Domain Context This is a computational pathology SDK working with: -* **DICOM medical imaging standards** - Medical image format -* **Whole slide images (WSI)** - Gigapixel-scale pathology images -* **IDC (Imaging Data Commons)** - National Cancer Institute data repository -* **QuPath** - Leading bioimage analysis platform -* **Machine learning inference** - AI/ML model execution on medical data -* **HIPAA compliance** - Medical data privacy requirements +- **DICOM medical imaging standards** - Medical image format +- 
**Whole slide images (WSI)** - Gigapixel-scale pathology images +- **IDC (Imaging Data Commons)** - National Cancer Institute data repository +- **QuPath** - Leading bioimage analysis platform +- **Machine learning inference** - AI/ML model execution on medical data +- **HIPAA compliance** - Medical data privacy requirements **WSI Processing:** -* OpenSlide for standard formats (.svs, .tiff, .ndpi) -* PyDICOM for DICOM files -* Support for multi-resolution pyramidal images -* Tile-based processing for memory efficiency +- OpenSlide for standard formats (.svs, .tiff, .ndpi) +- PyDICOM for DICOM files +- Support for multi-resolution pyramidal images +- Tile-based processing for memory efficiency ## Build System @@ -415,28 +355,10 @@ aignostics-python-sdk/ **Build configuration:** -* `pyproject.toml` - Package metadata and dependencies -* `noxfile.py` - **Enhanced with SDK metadata schema generation task** (NEW) -* `ruff.toml` - Linting and formatting rules -* `.pre-commit-config.yaml` - Git hooks -* `cliff.toml` - Changelog generation - -**Noxfile Enhancements:** - -The `noxfile.py` now includes automated SDK metadata schema generation: - -```python -def _generate_sdk_metadata_schema(session: nox.Session) -> None: - """Generate versioned JSON Schema for SDK metadata. - - - Calls `aignostics sdk metadata-schema` CLI command - - Extracts schema version from $id field - - Outputs both versioned (v0.0.1) and latest files - - Published to docs/source/_static/ - """ -``` - -This ensures the JSON Schema is automatically regenerated during documentation builds. +- `pyproject.toml` - Package metadata and dependencies +- `ruff.toml` - Linting and formatting rules +- `.pre-commit-config.yaml` - Git hooks +- `cliff.toml` - Changelog generation ## Development Guidelines @@ -486,11 +408,11 @@ def action_command(param: str): ### Testing Requirements -* Minimum 85% code coverage -* Unit tests for all public methods -* Integration tests for CLI commands -* Mock external dependencies -* Use fixtures from `conftest.py` +- Minimum 85% code coverage +- Unit tests for all public methods +- Integration tests for CLI commands +- Mock external dependencies +- Use fixtures from `conftest.py` ## Important Notes @@ -498,1307 +420,23 @@ def action_command(param: str): Some modules have conditional loading based on dependencies: -* **qupath** requires `ijson` package -* **gui** requires `nicegui` package -* **notebook** requires `marimo` package +- **qupath** requires `ijson` package +- **gui** requires `nicegui` package +- **notebook** requires `marimo` package ### Platform Authentication -* Token cached in `~/.aignostics/token.json` -* Format: `token:expiry_timestamp` -* 5-minute refresh buffer before expiry -* OAuth 2.0 device flow - -### SDK Metadata System (ENHANCED - Run v0.0.4, Item v0.0.3) - -**Automatic Run & Item Tracking**: Every application run and item submitted through the SDK automatically includes comprehensive metadata about the execution context, with support for tags and timestamps. 
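-
-As a quick orientation, a hedged sketch of the run-level metadata envelope: only `tags`, `created_at`, `updated_at`, and the optional note come from the feature list below; the `"sdk"` nesting follows the CLI examples further down, and everything else about the layout is illustrative, not authoritative (the authoritative definition is the published JSON Schema, run v0.0.4):
-
-```python
-# Illustrative shape of the "sdk" custom metadata envelope (a sketch, not
-# the schema itself; see the published JSON Schema for the real definition).
-run_custom_metadata = {
-    "sdk": {
-        "tags": ["experiment-1", "batch-A"],   # modeled as set[str]
-        "created_at": "2025-10-26T12:00:00Z",  # maintained by the SDK
-        "updated_at": "2025-10-26T12:00:00Z",  # bumped on metadata updates
-        "note": "optional user note",          # optional free-text field
-    }
-}
-```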
- -**Key Features:** - -* **Automatic Attachment**: SDK metadata added to every run and item without user action -* **Environment Detection**: Automatically detects script/CLI/GUI and user/test/bridge contexts -* **CI/CD Integration**: Captures GitHub Actions workflow information and pytest test context -* **User Information**: Includes authenticated user and organization details -* **Schema Validation**: Pydantic-based validation with JSON Schema (Run: v0.0.4, Item: v0.0.3) -* **Versioned Schema**: Published JSON Schema at `docs/source/_static/sdk_{run|item}_custom_metadata_schema_*.json` -* **Tags Support** (NEW): Associate runs and items with searchable tags -* **Timestamps** (NEW): Track creation and update times (`created_at`, `updated_at`) -* **Metadata Updates** (NEW): Update custom metadata via CLI and GUI -* **Item Metadata** (NEW): Separate schema for item-level metadata including platform bucket information - -**What's Tracked (Run Level):** - -* Submission metadata (date, interface, initiator) -* Enhanced user agent with platform and CI/CD context -* User and organization information (when authenticated) -* GitHub Actions workflow details (repository, run URL, runner info) -* Pytest test context (current test, markers) -* Workflow control flags (validate_only, onboard_to_portal) -* Scheduling information (due dates, deadlines) -* Optional user notes -* **Tags** (NEW): Set of tags for filtering (`set[str]`) -* **Timestamps** (NEW): `created_at`, `updated_at` - -**What's Tracked (Item Level - NEW):** - -* **Platform Bucket Metadata**: Cloud storage location (bucket name, object key, signed URL) -* **Tags**: Item-level tags (`set[str]`) -* **Timestamps**: `created_at`, `updated_at` - -**CLI Commands:** - -```bash -# Export SDK run metadata JSON Schema -aignostics sdk metadata-schema --pretty > run_schema.json - -# Update run custom metadata (including tags) -aignostics application run custom-metadata update RUN_ID \ - --custom-metadata '{"sdk": {"tags": ["experiment-1", "batch-A"]}}' - -# Dump run custom metadata as JSON -aignostics application run custom-metadata dump RUN_ID --pretty - -# Find runs by tags -aignostics application run list --tags experiment-1,batch-A -``` - -**Implementation:** - -* Module: `platform._sdk_metadata` -* **Run Functions**: `build_run_sdk_metadata()`, `validate_run_sdk_metadata()`, `get_run_sdk_metadata_json_schema()` -* **Item Functions** (NEW): `build_item_sdk_metadata()`, `validate_item_sdk_metadata()`, `get_item_sdk_metadata_json_schema()` -* Integration: Automatic in `platform.resources.runs.submit()` -* User Agent: Enhanced `utils.user_agent()` with CI/CD context -* Tests: Comprehensive test suite in `tests/aignostics/platform/sdk_metadata_test.py` -* **Schema Files**: `sdk_run_custom_metadata_schema_v0.0.4.json` and `sdk_item_custom_metadata_schema_v0.0.3.json` - -See `platform/CLAUDE.md` for detailed documentation. - -### Operation Caching & Retry System (NEW in v1.0.0-beta.7) - -**Enterprise-Grade Performance**: The SDK now implements intelligent operation caching and retry logic to ensure reliability and performance in production environments. 
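-
-Before the details, a conceptual sketch of the mechanism. This is illustrative pseudocode, not the SDK's actual `_operation_cache` code; the helper names are invented, and only the three properties described below are taken from the source - per-token isolation, per-operation TTLs, and global invalidation on mutations:
-
-```python
-# Conceptual sketch only; names and structure are illustrative assumptions.
-import time
-
-_cache: dict[tuple[str, str], tuple[float, object]] = {}
-
-def cached(token: str, operation: str, ttl: float, fetch):
-    """Return the cached value for (token, operation), refetching after ttl seconds."""
-    key = (token, operation)  # token in the key -> per-user cache isolation
-    hit = _cache.get(key)
-    if hit is not None and time.monotonic() - hit[0] < ttl:
-        return hit[1]  # cache hit: sub-millisecond instead of an API round trip
-    value = fetch()  # cache miss: perform the real API call
-    _cache[key] = (time.monotonic(), value)
-    return value
-
-def invalidate_all() -> None:
-    """Called on mutations (submit/cancel/delete): clear everything."""
-    _cache.clear()
-```
-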
- -**Operation Caching (`platform/_operation_cache.py`):** - -**Key Features:** - -* **Token-Aware Caching**: Per-user cache isolation prevents data leakage -* **Configurable TTLs**: 5 minutes for stable data (apps/versions), 15 seconds for dynamic data (runs) -* **Automatic Invalidation**: All caches cleared on mutations (submit/cancel/delete) -* **Memory Efficient**: Dictionary-based storage with automatic expiration - -**Cached Operations:** - -* `Client.me()` - User information (5 min TTL) -* `Client.application()` / `application_version()` - Application metadata (5 min TTL) -* `Applications.list()` / `details()` - Application lists (5 min TTL) -* `Runs.details()` / `results()` / `list()` - Run data (15 sec TTL) - -**Performance Impact:** - -* Cache Hit: ~0.1ms (1000x faster than API call) -* Cache Miss: Standard API latency (50-500ms) -* Typical Speedup: 100-1000x for repeated reads within TTL - -**Retry Logic with Exponential Backoff:** - -**Key Features:** - -* **Tenacity-Based**: Industry-standard retry library with exponential backoff -* **Configurable**: Per-operation retry attempts (default: 4), wait times (0.1s-60s), timeouts (30s) -* **Smart Exceptions**: Only retries transient errors (5xx, timeouts, connection issues) -* **Jitter**: Randomized wait times prevent thundering herd problem - -**Retryable Exceptions:** - -* ServiceException (5xx server errors) -* Urllib3TimeoutError -* PoolError (connection pool exhausted) -* IncompleteRead / ProtocolError / ProxyError - -**Retry Pattern:** - -``` -Attempt 1: Immediate -Attempt 2: ~100ms wait -Attempt 3: ~200-400ms wait (exponential + jitter) -Attempt 4: ~400-800ms wait (capped at 60s max) -``` - -**Configuration:** - -```bash -# Example .env configuration -AIGNOSTICS_ME_RETRY_ATTEMPTS=4 -AIGNOSTICS_ME_RETRY_WAIT_MIN=0.1 -AIGNOSTICS_ME_RETRY_WAIT_MAX=60.0 -AIGNOSTICS_ME_TIMEOUT=30.0 -AIGNOSTICS_ME_CACHE_TTL=300 - -AIGNOSTICS_RUN_RETRY_ATTEMPTS=4 -AIGNOSTICS_RUN_TIMEOUT=30.0 -AIGNOSTICS_RUN_CACHE_TTL=15 -``` - -**Cache Control:** - -```python -# Bypass cache for specific operations (useful in tests or when fresh data is required) -run = client.runs.details(run_id, nocache=True) # Force API call -applications = client.applications.list(nocache=True) # Bypass cache -``` - -**Design Decisions:** - -* ✅ **Read-Only Retries**: Only safe, idempotent read operations retry -* ✅ **Global Cache Clearing**: Simple consistency model - clear everything on writes -* ✅ **Cache Bypass** (NEW): `nocache=True` parameter forces fresh API calls -* ✅ **Logging**: Warnings logged before retry sleeps for observability -* ✅ **Re-raise**: Original exception re-raised after exhausting retries - -See `platform/CLAUDE.md` for implementation details and usage patterns. - -### API v1.0.0-beta.7 State Models (MAJOR CHANGE) - -**Breaking Change**: Complete refactoring of run, item, and artifact state management with enum-based models and termination reasons. 
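-
-A hedged sketch of the enum shapes listed below: the member names come from the lists that follow, while the concrete values (assumed here to be plain strings) and the `str, Enum` base are illustrative:
-
-```python
-# Sketch of the state / termination-reason split; values are assumptions.
-from enum import Enum
-
-class RunState(str, Enum):
-    PENDING = "PENDING"          # what is happening
-    PROCESSING = "PROCESSING"
-    TERMINATED = "TERMINATED"
-
-class RunTerminationReason(str, Enum):
-    ALL_ITEMS_PROCESSED = "ALL_ITEMS_PROCESSED"  # why it terminated
-    CANCELED_BY_USER = "CANCELED_BY_USER"
-    CANCELED_BY_SYSTEM = "CANCELED_BY_SYSTEM"
-```
-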
- -**New State Enums:** - -* `RunState`: PENDING → PROCESSING → TERMINATED -* `ItemState`: PENDING → PROCESSING → TERMINATED -* `ArtifactState`: PENDING → PROCESSING → TERMINATED - -**New Termination Reason Enums:** - -* `RunTerminationReason`: ALL_ITEMS_PROCESSED, CANCELED_BY_USER, CANCELED_BY_SYSTEM -* `ItemTerminationReason`: SUCCEEDED, USER_ERROR, SYSTEM_ERROR, SKIPPED -* `ArtifactTerminationReason`: SUCCEEDED, USER_ERROR, SYSTEM_ERROR - -**New Models:** - -* `RunItemStatistics` - Aggregate counts (total, succeeded, user_error, system_error, skipped, pending, processing) -* `RunOutput`, `ItemOutput`, `ArtifactOutput` - Structured output models with state + termination_reason - -**Deleted Models (Breaking Changes):** - -* ❌ `UserPayload` → Replaced with `Auth0User` and `Auth0Organization` -* ❌ `PayloadItem` → Replaced with `ItemOutput` -* ❌ `ApplicationVersionReadResponse` → Renamed to `ApplicationVersion` - -**Benefits:** - -1. **Type Safety**: Enum-based states prevent typos -2. **Clear Semantics**: Separate "what happened" (state) from "why" (termination_reason) -3. **Granular Errors**: Distinguish user errors from system errors for better debugging -4. **Progress Tracking**: RunItemStatistics provides real-time aggregate view - -**Usage Example:** - -```python -run = client.run("run-123") -details = run.details() - -if details.output.state == RunState.TERMINATED: - if details.output.termination_reason == RunTerminationReason.ALL_ITEMS_PROCESSED: - print(f"✅ Run complete: {details.output.statistics.succeeded} items succeeded") - print(f"❌ Failures: {details.output.statistics.user_error} user errors, " - f"{details.output.statistics.system_error} system errors") -``` - -See `platform/CLAUDE.md` for complete state machine diagrams and migration guide. - -## Testing Workflow - -### Test Suite Organization - -The SDK has a comprehensive test suite organized by test type and execution strategy. - -**Pytest Configuration**: - -* Default timeout: **10 seconds** per test -* Coverage requirement: **85% minimum** -* Async mode: `auto` (detects async tests automatically) -* Parallel execution: Via pytest-xdist with work stealing - -**Test Markers** (authoritative definitions from `pyproject.toml`): - -**IMPORTANT**: Every test **MUST** have at least one of: `unit`, `integration`, or `e2e` marker, otherwise it will **NOT run in CI**. The CI pipeline explicitly runs tests with these markers only. 
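One way to enforce this locally is a collection guard in `conftest.py`. The hook below is a hypothetical sketch (not part of the repository) that aborts the run when any collected test lacks a category marker:

```python
# Hypothetical conftest.py guard: abort when tests lack a category marker.
import pytest

REQUIRED_CATEGORY_MARKERS = {"unit", "integration", "e2e"}


def pytest_collection_modifyitems(config, items):
    unmarked = [
        item.nodeid
        for item in items
        if REQUIRED_CATEGORY_MARKERS.isdisjoint(m.name for m in item.iter_markers())
    ]
    if unmarked:
        raise pytest.UsageError(
            "Tests missing a unit/integration/e2e marker (will NOT run in CI): "
            + ", ".join(unmarked)
        )
```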
- -**Test Categories** (Martin Fowler's Solitary vs Sociable distinction): - -* **`unit`** - Solitary unit tests - * Test a layer of a module in isolation with all dependencies mocked (except shared utils and systems module) - * Must pass **offline** (no external service calls) - * Timeout: ≤ 10s (default), must be < 5 min - * ~3 minutes total execution time - -* **`integration`** - Sociable integration tests - * Test interactions across architectural layers (CLI/GUI→Service, Service→Utils) or between modules (Application→Platform) - * Uses real SDK collaborators, real file I/O, real subprocesses, real Docker containers - * Must pass **offline** (mock external services: Aignostics Platform API, Auth0, S3/GCS, IDC) - * Timeout: ≤ 10s (default), must be < 5 min - * ~5 minutes total execution time - -* **`e2e`** - End-to-end tests - * Test complete workflows with **real external network services** (Aignostics Platform API, cloud storage, IDC, etc) - * If timeout ≥ 5 min and < 60 min, additionally mark as `long_running` - * If timeout ≥ 60 min, additionally mark as `very_long_running` - * ~7 minutes total execution time (regular tests only) - -**Test Execution Control Markers**: - -* **`long_running`** - Tests with timeout **≥ 5 min and < 60 min** - * CI/CD runs with **one Python version only** (3.13) - * Excluded by default in `make test` - use `make test_long_running` - * Can be skipped in PRs with `skip:test:long_running` label - -* **`very_long_running`** - Tests with timeout **≥ 60 min** - * CI/CD runs with **one Python version only** (3.13) - * Excluded by default in `make test` - use `make test_very_long_running` - * Only runs when explicitly enabled with `enable:test:very_long_running` label - -**Scheduling Markers**: - -* **`scheduled`** - Tests to run on a schedule - * Still part of non-scheduled test executions - * Run every 6h (staging) and 24h (production) - -* **`scheduled_only`** - Tests to run **on schedule only** - * Never run in regular CI/CD - * Only in scheduled test workflows - -**Infrastructure Markers**: - -* **`sequential`** - Exclude from parallel test execution - * Tests that must run in specific order or have interdependencies - -* **`docker`** - Tests that require Docker - * Docker daemon must be running - -* **`skip_with_act`** - Don't run with [Act](https://github.com/nektos/act) - * For local GitHub Actions testing - -* **`no_extras`** - Tests that require no extras installed - * Test behavior without optional dependencies - -**Test Structure:** - -```text -tests/ -├── conftest.py # Global fixtures and configuration -├── aignostics/ -│ ├── platform/ # Platform module tests -│ │ ├── sdk_metadata_test.py (519 lines) -│ │ ├── authentication_test.py -│ │ ├── client_test.py -│ │ └── resources/ -│ ├── application/ # Application module tests -│ ├── wsi/ # WSI module tests -│ ├── utils/ # Utils module tests -│ │ └── user_agent_test.py (258 lines) -│ └── ... 
-└── CLAUDE.md # Test suite documentation -``` - -### Running Tests - -**Quick commands:** - -```bash -# Run all default tests (unit + integration + e2e, no long_running) -make test - -# Run specific test types -make test_unit # Unit tests only -make test_integration # Integration tests only -make test_e2e # E2E tests (requires .env with credentials) - -# Run tests with specific markers -make test_sequential # Sequential tests only -make test_long_running # Long-running tests -make test_scheduled # Scheduled tests - -# Run on specific Python version -make test 3.12 # Python 3.12 -make test 3.13 # Python 3.13 -``` - -**Direct pytest commands:** - -```bash -# Run single test file -uv run pytest tests/aignostics/platform/sdk_metadata_test.py -v - -# Run specific test function -uv run pytest tests/aignostics/platform/sdk_metadata_test.py::test_build_sdk_metadata_minimal -v - -# Run with markers -uv run pytest -m "unit and not long_running" -v - -# Run with coverage -uv run pytest --cov=src/aignostics --cov-report=term-missing - -# Debug mode (with pdb) -uv run pytest tests/test_file.py --pdb - -# Show print statements -uv run pytest tests/test_file.py -s - -# Verbose output -uv run pytest tests/test_file.py -vv -``` - -### Test Parallelization - -The test suite uses pytest-xdist for parallel execution with intelligent distribution: - -**Configuration (noxfile.py):** - -```python -# Worker factors control parallelism -XDIST_WORKER_FACTOR = { - "unit": 0.0, # No parallelization (fast, no overhead needed) - "integration": 0.2, # 20% of logical CPUs - "e2e": 1.0, # 100% of logical CPUs (I/O bound) - "default": 1.0 # 100% for mixed test runs -} - -# Calculate workers: max(1, int(cpu_count * factor)) -# Example: 8 CPU machine -# unit: 1 worker (sequential) -# integration: max(1, int(8 * 0.2)) = 1 worker -# e2e: max(1, int(8 * 1.0)) = 8 workers -``` - -**Parallel vs Sequential:** - -```bash -# Parallel tests (most tests) -uv run pytest -n logical --dist worksteal tests/ - -# Sequential tests (marked with @pytest.mark.sequential) -uv run pytest -m sequential tests/ -``` - -**Why different factors?** - -* **Unit tests (0.0)**: Fast enough that parallelization overhead hurts performance -* **Integration tests (0.2)**: Some I/O but mostly CPU-bound, limited parallelism -* **E2E tests (1.0)**: Network I/O bound, full parallelization maximizes throughput - -### Coverage Requirements - -**Minimum Coverage: 85%** - -```bash -# Check coverage -uv run coverage report - -# Generate HTML report -uv run coverage html -open htmlcov/index.html - -# Coverage enforced in CI -uv run coverage report --fail-under=85 -``` - -**Coverage Configuration (.coveragerc):** - -* Source: `src/aignostics` -* Omits: `*/tests/*`, `*/__init__.py`, `*/codegen/*` -* Reports: Terminal, XML (Codecov), HTML, Markdown - -### E2E Test Setup - -E2E tests require credentials to run against staging environment: - -**Required .env file:** - -```bash -# Create .env in repository root -AIGNOSTICS_API_ROOT=https://platform-staging.aignostics.com -AIGNOSTICS_CLIENT_ID_DEVICE=your-staging-client-id -AIGNOSTICS_REFRESH_TOKEN=your-staging-refresh-token -``` - -**In CI/CD:** - -* GitHub Actions secrets automatically populate .env -* Uses `AIGNOSTICS_CLIENT_ID_DEVICE_STAGING` and `AIGNOSTICS_REFRESH_TOKEN_STAGING` -* GCP credentials for bucket access also configured - -**Running E2E locally:** - -```bash -# Ensure .env exists with staging credentials -make test_e2e - -# Or with pytest directly -uv run pytest -m "e2e and not long_running" -v -``` - -### 
Pytest Configuration Details - -**From `pyproject.toml` `[tool.pytest.ini_options]`**: - -**Test Discovery**: - -* Test paths: `tests/` -* Python files: `*_test.py`, `test_*.py` -* Main file: `tests/main.py` - -**CLI Options** (always applied): - -```bash --p nicegui.testing.plugin # NiceGUI testing support --v # Verbose output ---strict-markers # Error on unknown markers ---log-disable=aignostics # Disable SDK logging during tests ---cov=aignostics # Coverage for src/aignostics ---cov-report=term-missing # Terminal report with missing lines ---cov-report=xml:reports/coverage.xml # XML for Codecov ---cov-report=html:reports/coverage_html # HTML report -``` - -**Timeouts**: - -* Default: **10 seconds** per test -* Override in test: `@pytest.mark.timeout(timeout=60)` -* Method: `signal` (can be configured) - -**Async Support**: - -* Mode: `auto` (automatically detects async tests) -* Default fixture loop scope: `function` - -**Coverage**: - -* Environment: `COVERAGE_FILE=.coverage`, `COVERAGE_PROCESS_START=pyproject.toml` -* Minimum: 85% (enforced in CI) -* Branch coverage: Enabled -* Parallel mode: Enabled (thread + multiprocessing concurrency) - -**Markdown Reports**: - -* Enabled: `md_report = true` -* Output: `reports/pytest.md` -* Flavor: GitHub-flavored markdown -* Exclude outcomes: `passed`, `skipped` (only show failures/errors) - -### Test Fixtures and Patterns - -**Key fixtures (conftest.py):** - -* Environment isolation (HOME, config dirs) -* Mocked responses for API calls -* Temporary file creation -* Authentication mocking - -**Example test pattern:** - -```python -import pytest -from unittest.mock import patch - -@pytest.mark.unit -def test_sdk_metadata_minimal(monkeypatch): - """Test SDK metadata with clean environment.""" - # Isolate environment - monkeypatch.delenv("GITHUB_ACTIONS", raising=False) - monkeypatch.delenv("PYTEST_CURRENT_TEST", raising=False) - - # Run test - result = build_sdk_metadata() - - # Assertions - assert result.submission.date is not None - assert result.user_agent is not None -``` - -**See tests/CLAUDE.md for comprehensive testing patterns and examples.** - -### Finding Unmarked Tests - -**Critical**: To find tests missing category markers (which will NOT run in CI): - -```bash -# Find all tests without unit/integration/e2e markers -uv run pytest -m "not unit and not integration and not e2e" --collect-only - -# This should return 0 tests if all are properly marked -# If tests are found, they are missing required markers -``` - -**Why this works**: The marker expression matches tests that don't have any of the required category markers. - -**Add to pre-commit checks**: - -```bash -# Verify no unmarked tests exist -if uv run pytest -m "not unit and not integration and not e2e" --collect-only 2>&1 | grep -q "collected 0 items"; then - echo "✅ All tests have category markers" -else - echo "❌ Found tests without category markers - they will NOT run in CI!" - exit 1 -fi -``` - -## Development Workflow - -### Initial Setup - -```bash -# Clone repository -git clone https://github.com/aignostics/python-sdk.git -cd python-sdk - -# Install uv (if not installed) -curl -LsSf https://astral.sh/uv/install.sh | sh - -# Install all dependencies including dev tools -make install -# This runs: uv sync --all-extras + installs pre-commit hooks - -# Verify installation -uv run aignostics --version -``` - -### Development Cycle - -**1. 
Create Feature Branch** - -```bash -# From main branch -git checkout main -git pull origin main - -# Create feature branch -git checkout -b feat/my-feature - -# Or bugfix branch -git checkout -b fix/bug-description -``` - -**2. Make Changes and Validate** - -```bash -# Run linting (this is fast, run frequently) -make lint -# Runs: ruff format, ruff check, pyright, mypy - -# Run tests -make test -# Or specific test types -make test_unit # Fast unit tests only -make test_integration # Integration tests - -# Full validation (what CI runs) -make all -# Runs: lint + test + docs + audit (~20 minutes) -``` - -**3. Pre-commit Hooks (Automatic)** - -The repository uses pre-commit hooks installed by `make install`: - -```yaml -# .pre-commit-config.yaml -hooks: - - ruff formatting check - - ruff linting check - - mypy type checking - - trailing whitespace removal - - end-of-file fixer - - yaml validation -``` - -**Skip hooks only if necessary:** - -```bash -git commit --no-verify -m "WIP: debugging" -``` - -**4. Commit Convention** - -Use conventional commits for automatic changelog generation: - -```bash -# Feature -git commit -m "feat(platform): add operation caching system" - -# Bug fix -git commit -m "fix(application): handle missing artifact states" - -# Documentation -git commit -m "docs: update testing workflow in CLAUDE.md" - -# Refactor -git commit -m "refactor(wsi): simplify thumbnail generation" - -# Test -git commit -m "test(platform): add SDK metadata validation tests" - -# Chore -git commit -m "chore: bump dependencies" -``` - -**Types:** `feat`, `fix`, `docs`, `refactor`, `test`, `chore`, `ci`, `perf`, `build` - -**5. Push and Create PR** - -```bash -# Push to remote -git push origin feat/my-feature - -# Create PR (via gh cli or GitHub UI) -gh pr create --title "feat: add operation caching" --body "Description..." - -# IMPORTANT: Add label to skip long-running tests -gh pr edit --add-label "skip:test:long_running" -``` - -**PR triggers:** - -* Lint checks (~5 min) -* Security audit (~3 min) -* Test matrix on Python 3.11, 3.12, 3.13 (~15 min) -* CodeQL security scanning (~10 min) -* Claude Code automated review (~10 min) -* Ketryx compliance reporting - -**6. Address Review Feedback** - -```bash -# Make changes -git add . -git commit -m "fix: address review comments" -git push origin feat/my-feature - -# CI re-runs automatically -``` - -**7. 
Merge PR** - -* Ensure all CI checks pass (green checkmarks) -* Get approval from maintainer -* Squash and merge (default) or merge commit -* Delete feature branch after merge - -### Build System (Nox) - -The SDK uses Nox for build automation with uv integration: - -**Key Nox sessions:** - -```bash -# Lint session (ruff format + check + pyright + mypy) -uv run nox -s lint - -# Audit session (pip-audit + pip-licenses + SBOMs) -uv run nox -s audit - -# Test session (pytest with coverage) -uv run nox -s test # Default markers -uv run nox -s test -- -m unit # Specific markers - -# Test matrix (all Python versions) -uv run nox -s test-3.11 -uv run nox -s test-3.12 -uv run nox -s test-3.13 - -# Documentation -uv run nox -s docs # Build Sphinx docs - -# Setup session (install all dev tools) -uv run nox -s setup - -# Version bumping -uv run nox -s bump -- patch # 1.0.0 -> 1.0.1 -uv run nox -s bump -- minor # 1.0.0 -> 1.1.0 -uv run nox -s bump -- major # 1.0.0 -> 2.0.0 -``` - -**Makefile wraps Nox for convenience:** - -```bash -make lint → uv run nox -s lint -make test → uv run nox -s test -make docs → uv run nox -s docs -make audit → uv run nox -s audit -make all → all of the above -``` - -### Adding Dependencies - -**Runtime dependency:** - -```bash -# Add to main dependencies -uv add requests - -# Add with version constraint -uv add "httpx>=0.25.0" - -# Update pyproject.toml automatically -``` - -**Development dependency:** - -```bash -# Add to dev dependencies -uv add --dev pytest-mock - -# Or specific group -uv add --group docs sphinx-rtd-theme -``` - -**Optional dependency group:** - -```bash -# Edit pyproject.toml -[project.optional-dependencies] -gui = ["nicegui>=1.0.0"] -qupath = ["ijson>=3.0.0"] - -# Install with extras -uv sync --extra gui -uv sync --all-extras # Install all optional groups -``` - -### Version Bumping and Releases - -**Bump version (via Nox):** - -```bash -# Patch version (1.0.0 -> 1.0.1) -make bump patch - -# Minor version (1.0.0 -> 1.1.0) -make bump minor - -# Major version (1.0.0 -> 2.0.0) -make bump major -``` - -**This process:** - -1. Updates version in `pyproject.toml` -2. Creates git commit: "Bump version: 1.0.0 → 1.0.1" -3. Creates git tag: `v1.0.1` -4. Generates changelog from conventional commits - -**Push with tags:** - -```bash -# Push commits and tags -git push --follow-tags - -# CI detects tag and triggers: -# 1. Full CI pipeline (lint + test + audit) -# 2. Package build and publish to PyPI -# 3. Docker image build and publish -# 4. GitHub release creation -# 5. 
Slack notification -``` - -**Manual release (if needed):** - -```bash -# Build package -uv build - -# Publish to PyPI (via UV_PUBLISH_TOKEN secret) -uv publish -``` - -### CI/CD Integration - -See [.github/CLAUDE.md](.github/CLAUDE.md) for comprehensive CI/CD documentation including: - -* Complete workflow architecture -* Claude Code automation (PR reviews, interactive sessions) -* Environment configuration (staging/production) -* Scheduled testing (6h staging, 24h production) -* Debugging failed CI runs -* Secrets management - -**Quick CI reference:** - -```bash -# Skip CI for commit -git commit -m "docs: update README [skip ci]" - -# Or with skip:ci in commit message -git commit -m "skip:ci: work in progress" - -# Add PR label to skip long-running tests -gh pr edit --add-label "skip:test:long_running" -``` - -### IDE Setup Recommendations - -**VS Code (.vscode/settings.json):** - -```json -{ - "python.defaultInterpreterPath": ".venv/bin/python", - "python.testing.pytestEnabled": true, - "python.testing.pytestArgs": ["-v"], - "python.linting.enabled": true, - "python.linting.ruffEnabled": true, - "python.formatting.provider": "ruff", - "editor.formatOnSave": true, - "editor.codeActionsOnSave": { - "source.organizeImports": true - } -} -``` - -**PyCharm:** - -* Configure Python interpreter: `.venv/bin/python` -* Enable pytest as test runner -* Set up ruff as external tool -* Configure mypy plugin for type checking - -## Tips and Tricks for Claude Code Efficiency - -### Quick Discovery Commands - -**Find files by pattern**: - -```bash -# Find all test files -find tests \( -name "*_test.py" -o -name "test_*.py" \) - -# Find Python source files (excluding caches) -find src -name "*.py" | grep -v __pycache__ - -# Find configuration files -find . -maxdepth 2 \( -name "*.toml" -o -name "*.yml" -o -name "*.yaml" \) | grep -v node_modules -``` - -**Search code effectively**: - -```bash -# Find all imports of a module -grep -r "from aignostics.platform import" --include="*.py" - -# Find all test markers -grep -r "@pytest.mark." tests/ --include="*.py" | cut -d: -f2 | sort | uniq -c - -# Find all CLI commands -grep -r "@cli.*\.command" src/ --include="*.py" - -# Find TODOs and FIXMEs -grep -rn "TODO\|FIXME" src/ --include="*.py" -``` - -**Git exploration**: - -```bash -# View commit history for a specific file -git log --oneline --follow -- path/to/file.py - -# See what changed in recent commits -git log --oneline --stat -10 - -# Find who last modified a line -git blame -L 100,110 path/to/file.py - -# Check current branch and recent commits -git log --oneline --graph --decorate -20 -``` - -### Testing Shortcuts - -**Run specific test categories**: - -```bash -# Run only fast tests (unit + integration, no e2e) -uv run pytest -m "unit or integration" -v - -# Run tests for a specific module -uv run pytest tests/aignostics/platform/ -v - -# Run tests matching a pattern -uv run pytest -k "metadata" -v - -# Run last failed tests -uv run pytest --lf - -# Run tests that failed in last session, then continue with others -uv run pytest --ff -``` - -**Test discovery and validation**: - -```bash -# Collect tests without running (verify test discovery) -uv run pytest --collect-only - -# Find tests without category markers (CRITICAL - they won't run in CI!) 
-uv run pytest -m "not unit and not integration and not e2e" --collect-only - -# List all available markers -uv run pytest --markers - -# Dry run with verbose output -uv run pytest --collect-only -v | grep " you can directly clone via ```git clone git@github.com:aignostics/python-sdk.git``` and ```cd python-sdk```. - -Install or update development dependencies using +If you are one of the committers of https://github.com/aignostics/python-sdk you can directly clone via ```git clone git@github.com:aignostics/python-sdk.git``` and ```cd python-sdk```. +Install or update development dependencies using ```shell make install ``` ## Directory Layout -```text +``` ├── Makefile # Central entrypoint for build, test, release and deploy ├── noxfile.py # Noxfile for running tests in multiple python environments and other tasks ├── .pre-commit-config.yaml # Definition of hooks run on commits @@ -75,7 +74,6 @@ make setup Don't forget to configure your `.env` file with the required environment variables. Notes: - 1. .env.example is provided as a template, use ```cp .env.example .env``` and edit ```.env``` to create your environment. 2. .env is excluded from version control, so feel free to add secret values. @@ -87,7 +85,6 @@ make help # Shows help with additional build targets, e.g. to build PDF docume ``` Notes: - 1. Primary build steps defined in `noxfile.py`. 2. Distribution dumped into ```dist/``` 3. Documentation dumped into ```docs/build/html/``` and ```docs/build/latex/``` @@ -114,13 +111,9 @@ git push ``` Notes: - 1. [pre-commit hooks](https://pre-commit.com/) will run automatically on commit to ensure code quality. 2. We use the conventional commits format - see the [code style guide](CODE_STYLE.md) for the mandatory commit message format. -3. You can skip workflows using either commit messages or PR labels: - - **Commit message**: Include skip markers like ```skip:ci```, ```skip:test:long-running```, ```skip:test:unit```, ```skip:test:integration```, ```skip:test:e2e```, ```skip:test:matrix-runner```, or ```skip:test:all``` in your commit message - - **PR labels**: Add labels with the same names (e.g., `skip:ci`, `skip:test:long-running`) to your pull request - - Both methods work independently - you can use either or both +3. If your commit message includes ```skip:ci``` workflows will not be triggered. Further supported skips are ```skip:test:long-running```, ```skip:test:regular```, ```skip:test:matrix-runner```. ### Publish Release @@ -132,10 +125,10 @@ make x.y.z # Targeted release ``` Notes: - 1. Changelog generated automatically 2. Publishes to PyPi, Docker Registries, Read The Docs, Streamlit and Auditing services + ## Advanced usage ### Developing the GUI @@ -153,7 +146,6 @@ make act ``` Notes: - 1. Workflow defined in `.github/workflows/*.yml` 2. ci-cd.yml calls all build steps defined in noxfile.py @@ -202,15 +194,16 @@ docker compose down ``` Notes: - 1. The API service is run based on the slim Docker image. Change in compose.yaml if you need the API service to run on the fat image. + ### Pinning GitHub Actions ```shell pinact run # See https://dev.to/suzukishunsuke/pin-github-actions-to-a-full-length-commit-sha-for-security-2n7p ``` + ## Update from Template Update project to latest version of [oe-python-template](https://github.com/helmut-hoffer-von-ankershoffen/oe-python-template) template. 
@@ -219,53 +212,6 @@ Update project to latest version of [oe-python-template](https://github.com/helm make update_from_template ``` -## Custom Metadata - -When submitting application runs to the Aignostics Platform, you can attach custom metadata to provide additional context, tracking information, or control flags. The SDK itself automatically attaches structured metadata to every run, which includes: - -- **SDK version and submission details**: When and how the run was submitted (CLI, script, or GUI) -- **User information**: Organization and user details (when authenticated) -- **CI/CD context**: GitHub Actions workflow information, pytest test context -- **Workflow control**: Flags like `validate_only` or `onboard_to_aignostics_portal` -- **Scheduling**: Due dates and deadlines for run completion -- **Notes**: Optional user-provided notes - -### SDK Metadata Schema - -The SDK metadata follows a strict JSON Schema to ensure data quality and consistency. You can: - -- **View the schema**: [SDK Metadata Schema (latest)](https://raw.githubusercontent.com/aignostics/python-sdk/main/docs/source/_static/sdk_metadata_schema_latest.json) -- **Validate your metadata**: The SDK automatically validates metadata before submission -- **Extend with custom fields**: Add your own metadata alongside the SDK-generated metadata - -### Adding Custom Metadata - -When submitting runs programmatically, you can provide additional metadata: - -```python -from aignostics.platform import Client - -client = Client() - -# Your custom metadata -custom_metadata = { - "experiment_id": "exp-2025-001", - "dataset_version": "v2.1", - "custom_flags": { - "enable_feature_x": True - } -} - -# Submit run with custom metadata -# SDK metadata is automatically added under the "sdk" key -run = client.runs.submit( - application_id="your-app", - items=[...], - custom_metadata=custom_metadata -) -``` - -The SDK will merge your custom metadata with its own tracking metadata, ensuring both are included in the run submission. The SDK metadata is always placed under the `sdk` key to avoid conflicts with your custom fields. 
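The merged result then has roughly the following shape. This is illustrative only: the SDK-generated fields under `sdk` follow the published schema, and the values shown here are made-up examples:

```python
# Illustrative shape of merged custom metadata (values are examples only).
merged_custom_metadata = {
    "experiment_id": "exp-2025-001",  # your fields, kept as provided
    "dataset_version": "v2.1",
    "custom_flags": {"enable_feature_x": True},
    "sdk": {  # SDK tracking metadata, added automatically under the "sdk" key
        "submission": {"date": "2025-01-01T12:00:00Z", "interface": "script"},
        "user_agent": "aignostics-python-sdk/...",
    },
}
```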
## Pull Request Guidelines diff --git a/Dockerfile b/Dockerfile index 61212f96..2bec2182 100644 --- a/Dockerfile +++ b/Dockerfile @@ -5,7 +5,7 @@ FROM python:3.13-slim-bookworm AS base FROM base AS builder # Copy in UV -COPY --from=ghcr.io/astral-sh/uv:0.9.7 /uv /bin/uv +COPY --from=ghcr.io/astral-sh/uv:0.8.23 /uv /bin/uv # We use the system interpreter managed by uv ENV UV_PYTHON_DOWNLOADS=0 @@ -27,7 +27,7 @@ FROM builder AS builder-slim RUN --mount=type=cache,target=/root/.cache/uv \ --mount=type=bind,source=uv.lock,target=uv.lock \ --mount=type=bind,source=pyproject.toml,target=pyproject.toml \ - uv sync --frozen --no-install-project --no-dev --no-editable + uv sync --frozen --no-install-project --no-dev --no-editable --python 3.13 # Then, add the rest of the project source code and install it # Installing separately from its dependencies allows optimal layer caching @@ -46,7 +46,7 @@ COPY examples /app/examples # Nothing yet RUN --mount=type=cache,target=/root/.cache/uv \ - uv sync --frozen --no-dev --no-editable + uv sync --frozen --no-dev --no-editable --python 3.13 # The all builder takes in all extras @@ -56,7 +56,7 @@ FROM builder AS builder-all RUN --mount=type=cache,target=/root/.cache/uv \ --mount=type=bind,source=uv.lock,target=uv.lock \ --mount=type=bind,source=pyproject.toml,target=pyproject.toml \ - uv sync --frozen --no-install-project --all-extras --no-dev --no-editable + uv sync --frozen --no-install-project --all-extras --no-dev --no-editable --python 3.13 # Then, add the rest of the project source code and install it # Installing separately from its dependencies allows optimal layer caching @@ -75,7 +75,7 @@ COPY examples /app/examples COPY codegen/out/aignx /app/codegen/out/aignx RUN --mount=type=cache,target=/root/.cache/uv \ - uv sync --frozen --all-extras --no-dev --no-editable + uv sync --frozen --all-extras --no-dev --no-editable --python 3.13 # Base of our build targets @@ -125,9 +125,6 @@ FROM target AS all # Copy fat app, i.e. with all extras, make it immutable COPY --from=builder-all --chown=root:root --chmod=755 /app /app -# Provide writeable .cache folder for python sdk, used for token storage -RUN mkdir -p /app/.cache && chown app:app /app/.cache && chmod 700 /app/.cache - # Run as nonroot USER app WORKDIR /app diff --git a/Makefile b/Makefile index 3e377f7d..e9d1c097 100644 --- a/Makefile +++ b/Makefile @@ -1,28 +1,7 @@ # Makefile for running common development tasks -# Read and validate Python version from .python-version file -PYTHON_VERSION := $(shell \ - if [ ! -f .python-version ]; then \ - echo "Error: .python-version file not found" >&2; \ - echo "INVALID"; \ - else \ - version=$$(cat .python-version); \ - if ! echo "$$version" | grep -qE '^[0-9]+\.[0-9]+(\.[0-9]+)?$$'; then \ - echo "Error: .python-version must contain a valid 2 or 3 segment version number (e.g., 3.11 or 3.11.5), got: $$version" >&2; \ - echo "INVALID"; \ - else \ - echo "$$version"; \ - fi; \ - fi) - -# Check if Python version is valid and fail if not -ifeq ($(PYTHON_VERSION),INVALID) -$(error Python version validation failed. See error message above.) 
-endif - # Define all PHONY targets -.PHONY: act all audit bump clean codegen dist dist_native docs docker_build gui_watch install lint pre_commit_run_all profile setup test test_coverage_reset test_default test_e2e test_e2e_matrix test_integration test_integration_matrix test_long_running test_scheduled test_sequential test_unit test_unit_matrix test_very_long_running update_from_template - +.PHONY: all act audit bump clean codegen dist dist_native docs docker_build install lint pre_commit_run_all profile setup setup test test_coverage_reset test_long_running test_scheduled test_sequential update_from_template gui_watch # Main target i.e. default sessions defined in noxfile.py all: @@ -35,7 +14,6 @@ nox-cmd = @if [ "$@" = "test" ]; then \ if [ -n "$(filter 3.%,$(MAKECMDGOALS))" ]; then \ uv run --all-extras nox -s test -p $(filter 3.%,$(MAKECMDGOALS)); \ elif [ -n "$(filter-out $@,$(MAKECMDGOALS))" ]; then \ - echo $(filter-out $@,$(MAKECMDGOALS)); \ uv run --all-extras nox -s $@ -- $(filter-out $@,$(MAKECMDGOALS)); \ else \ uv run --all-extras nox -s $@; \ @@ -57,46 +35,17 @@ install: sh install.sh uv run pre-commit install -## Run default tests, i.e. unit, then integration, then e2e tests, no (very_)long_running tests, single Python version -test_default: - XDIST_WORKER_FACTOR=0.5 uv run --all-extras nox -s test_default - -## Run unit tests (non-sociable tests) -test_unit: - XDIST_WORKER_FACTOR=0.0 uv run --all-extras nox -s test -p $(PYTHON_VERSION) -- -m "unit and not long_running and not very_long_running" --cov-append - -test_unit_matrix: - XDIST_WORKER_FACTOR=0.5 uv run --all-extras nox -s test -- -m "unit and not long_running and not very_long_running" --cov-append - -## Run integration tests (test real layer/module interactions with mocked external services) -test_integration: - XDIST_WORKER_FACTOR=0.2 uv run --all-extras nox -s test -p $(PYTHON_VERSION) -- -m "integration and not long_running and not very_long_running" --cov-append - -test_integration_matrix: - XDIST_WORKER_FACTOR=0.5 uv run --all-extras nox -s test -- -m "integration and not long_running and not very_long_running" --cov-append - -## Run e2e tests (test complete workflows with real external services) -test_e2e: - XDIST_WORKER_FACTOR=1 uv run --all-extras nox -s test -p $(PYTHON_VERSION) -- -m "e2e and not long_running and not very_long_running" --cov-append - -test_e2e_matrix: - XDIST_WORKER_FACTOR=1 uv run --all-extras nox -s test -- -m "e2e and not long_running and not very_long_running" --cov-append - ## Run tests marked as long_running test_long_running: - XDIST_WORKER_FACTOR=2 uv run --all-extras nox -s test -p $(PYTHON_VERSION) -- -m long_running --cov-append - -## Run tests marked as very_long_running -test_very_long_running: - XDIST_WORKER_FACTOR=2 uv run --all-extras nox -s test -p $(PYTHON_VERSION) -- -m very_long_running --cov-append + uv run --all-extras nox -s test -p 3.13 -- -m long_running --cov-append -## Run tests marked as scheduled or scheduled_only +## Run tests marked as scheduled test_scheduled: - XDIST_WORKER_FACTOR=1 uv run --all-extras nox -s test -p $(PYTHON_VERSION) -- -m "(scheduled or scheduled_only)" + uv run --all-extras nox -s test -p 3.13 -- -m scheduled ## Run tests marked as sequential test_sequential: - uv run --all-extras nox -s test -p $(PYTHON_VERSION) -- -m sequential + uv run --all-extras nox -s test -p 3.13 -- -m sequential ## Reset test coverage data test_coverage_reset: @@ -134,7 +83,7 @@ profile: # Signing: 
https://gist.github.com/bpteague/750906b9a02094e7389427d308ba1002 dist_native: # Build - uv run --python $(PYTHON_VERSION) --no-dev --extra pyinstaller --extra qupath --extra marimo pyinstaller --distpath dist_native --clean --noconfirm aignostics.spec + uv run --python 3.13.6 --no-dev --extra pyinstaller --extra qupath --extra marimo pyinstaller --distpath dist_native --clean --noconfirm aignostics.spec # Create 7z archive preserving symlinks @if command -v 7z >/dev/null 2>&1; then \ cd dist_native; \ @@ -156,9 +105,7 @@ codegen: # format via jq, and save as codegen/in/openapi_$version.json, with the # version extracted from the info.version field in the JSON mkdir -p codegen/in/archive - # curl -s https://platform.aignostics.com/api/v1/openapi.json | jq . > codegen/in/openapi.json - #curl -s https://platform-dev.aignostics.ai/api/v1/openapi.json | jq . > codegen/in/openapi.json - curl -s https://platform-staging.aignostics.com/api/v1/openapi.json | jq . > codegen/in/openapi.json + curl -s https://platform.aignostics.com/api/v1/openapi.json | jq . > codegen/in/openapi.json version=$$(jq -r .info.version codegen/in/openapi.json); \ echo "Detected version $$version"; \ cp -f codegen/in/openapi.json codegen/in/archive/openapi_$${version}.json @@ -204,17 +151,10 @@ help: @echo " pre_commit_run_all - Run pre-commit hooks on all files" @echo " profile - Profile with Scalene" @echo " setup - Setup development environment" - @echo " test_default - Run unit, then integration, then e2e tests, no long-running ones (python version defined in .python-version)" - @echo " test_unit - Run unit tests (python version defined in .python-version)" - @echo " test_unit_matrix - Run unit tests (matrix testing Python versions)" - @echo " test_integration - Run integration tests (python version defined in .python-version)" - @echo " test_integration_matrix - Run integration tests (matrix testing Python versions)" - @echo " test_e2e - Run regular end-to-end tests (python version defined in .python-version)" - @echo " test_e2e_matrix - Run regular end-to-end tests (matrix testing Python versions)" - @echo " test_long_running - Run long-running end-to-end tests (python version defined in .python-version)" - @echo " test_very_long_running - Run very long-running end-to-end tests (python version defined in .python-version)" - @echo " test_sequential - Run tests marked as sequential (python version defined in .python-version)" - @echo " test_scheduled - Run tests marked as scheduled (python version defined in .python-version)" + @echo " test [3.11|3.12|3.13] - Run tests (for specific Python version)" + @echo " test_sequential - Run tests marked as sequential with Python 3.13" + @echo " test_scheduled - Run tests marked as scheduled with Python 3.13" + @echo " test_long_running - Run tests marked as long running with Python 3.13" @echo " test_coverage_reset - Reset test coverage data" @echo " update_from_template - Update from template using copier" @echo "" diff --git a/OPERATIONAL_EXCELLENCE.md b/OPERATIONAL_EXCELLENCE.md index 099a2701..2d860f2d 100644 --- a/OPERATIONAL_EXCELLENCE.md +++ b/OPERATIONAL_EXCELLENCE.md @@ -3,7 +3,7 @@ > 🧠 This project was scaffolded using the template [oe-python-template](https://github.com/helmut-hoffer-von-ankershoffen/oe-python-template) with [copier](https://copier.readthedocs.io/), thereby applying the following toolchain: > 1. Linting with [Ruff](https://github.com/astral-sh/ruff) -2. 
Static type checking with [mypy](https://mypy.readthedocs.io/en/stable/) and [pyright](https://github.com/microsoft/pyright) +2. Static type checking with [mypy](https://mypy.readthedocs.io/en/stable/) 3. Complete set of [pre-commit](https://pre-commit.com/) hooks including [detect-secrets](https://github.com/Yelp/detect-secrets) and [pygrep](https://github.com/pre-commit/pygrep-hooks) 4. Unit and E2E testing with [pytest](https://docs.pytest.org/en/stable/) including parallel test execution 5. Matrix testing in multiple environments with [nox](https://nox.thea.codes/en/stable/) @@ -11,7 +11,7 @@ 7. CI/CD pipeline automated with [GitHub Actions](https://github.com/features/actions) with parallel and reusable workflows, including scheduled testing, release automation, and multiple reporting channels and formats 8. CI/CD pipeline can be run locally with [act](https://github.com/nektos/act) 9. Code quality and security checks with [SonarQube](https://www.sonarsource.com/products/sonarcloud) and [GitHub CodeQL](https://codeql.github.com/) -10. Dependency monitoring and vulnerability scanning with [pip-audit](https://pypi.org/project/pip-audit/), [trivy](https://trivy.dev/latest/), [Renovate](https://github.com/renovatebot/renovate), [GitHub Dependabot](https://docs.github.com/en/code-security/getting-started/dependabot-quickstart-guide) and [Ketryx](https://ketryx.com/) +10. Dependency monitoring and vulnerability scanning with [pip-audit](https://pypi.org/project/pip-audit/), [trivy](https://trivy.dev/latest/), [Renovate](https://github.com/renovatebot/renovate), and [GitHub Dependabot](https://docs.github.com/en/code-security/getting-started/dependabot-quickstart-guide) 11. Error monitoring and profiling with [Sentry](https://sentry.io/) (optional) 12. Logging and metrics with [Logfire](https://logfire.dev/) (optional) 13. Prepared for uptime monitoring and scheduled tests with [betterstack](https://betterstack.com/) or alternatives @@ -30,7 +30,5 @@ 26. One-click development environments with [Dev Containers](https://code.visualstudio.com/docs/devcontainers/containers) and [GitHub Codespaces](https://github.com/features/codespaces) 27. Settings for use with [VSCode](https://code.visualstudio.com/) 28. Settings and custom instructions for use with [GitHub Copilot](https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot) -29. Automated Pull Request Reviews with [Claude Code](https://docs.claude.com/en/docs/claude-code/github-actions) -30. ISO compliant Application Lifecycle Management (ALM) with [Ketryx](https://ketryx.com/) See [oe-python-template](https://github.com/helmut-hoffer-von-ankershoffen/oe-python-template?tab=readme-ov-file#multi-head-application-features) for how to bootstrap multi-headed applications with the template. Example code generated applies the modulith software architecture pattern with dependency injection, enabling auto-discovery of domain services, CLI commands, API operations and GUI pages. diff --git a/README.md b/README.md index c3df4fde..1adb9c53 100644 --- a/README.md +++ b/README.md @@ -25,7 +25,7 @@ The **Aignostics Python SDK** includes multiple pathways to interact with the **Aignostics Platform**: -1. Use the **Aignostics Launchpad** to analyze whole slide images with advanced computational pathology applications like +1. 
Use the **Aignostics Launchpad** to analyze whole slide images with advanced computational pathology applications like [Atlas H&E-TME](https://www.aignostics.com/products/he-tme-profiling-product) directly from your desktop. View your results by launching popular tools such as [QuPath](https://qupath.github.io/) and Python Notebooks with one click. The app runs on Mac OS X, Windows, and Linux. @@ -60,34 +60,33 @@ more about how we achieve ## Quick Start > [!Note] -> See as follows for a quick start guide to get you up and running with the Aignostics Python SDK as quickly as possible. -> If you first want to learn bout the basic concepts and components of the Aignostics Platform skip to that section below. -> The further reading section points you to reference documentation listing all available CLI commands, methods and classes provided by the client library, operations of the API, how we achieve operational excellence, security, and more. +> See as follows for a quick start guide to get you up and running with the Aignostics Python SDK as quickly as possible. +> If you first want to learn about the basic concepts and components of the Aignostics Platform, skip to that section below. +> The further reading section points you to reference documentation listing all available CLI commands, methods and classes provided by the client library, operations of the API, how we achieve operational excellence, security, and more. > If you are not familiar with terminology please check the glossary at the end of this document. ### Launchpad: Run your first computational pathology analysis in 10 minutes from your desktop The **Aignostics Launchpad** is a graphical desktop application that allows you to run -applications on whole slide images (WSIs) from your computer, and inspect results with QuPath and Python Notebooks with one click. It is designed to be user-friendly and intuitive, for use by Research Pathologists and Data Scientists. +applications on whole slide images (WSIs) from your computer, and inspect results with QuPath and Python Notebooks with one click. It is designed to be user-friendly and intuitive, for use by Research Pathologists and Data Scientists. The Launchpad is available for Mac OS X, Windows, and Linux, and can be installed easily: -1. Visit the [Quick Start](https://platform.aignostics.com/getting-started/quick-start) +1. Visit the [Quick Start](https://platform.aignostics.com/getting-started/quick-start) page in the Aignostics Console. 2. Copy the installation script and paste it into your terminal - compatible with MacOS, Windows, and Linux. 3. Launch the application by running `uvx aignostics launchpad`. -4. Follow the intuitive graphical interface to analyze public datasets or your own whole slide images +4. Follow the intuitive graphical interface to analyze public datasets or your own whole slide images with [Atlas H&E-TME](https://www.aignostics.com/products/he-tme-profiling-product) and other computational pathology applications. > [!Note] -> The Launchpad features a growing ecosystem of extensions that seamlessly integrate with standard digital pathology tools. To use the Launchpad with all available extensions, run `uvx --from "aignostics[qupath,marimo]" aignostics launchpad`. Currently available extensions are: -> +> The Launchpad features a growing ecosystem of extensions that seamlessly integrate with standard digital pathology tools. To use the Launchpad with all available extensions, run `uvx --with aignostics[qupath,marimo] aignostics launchpad`. 
Currently available extensions are: > 1. **QuPath extension**: View your application results in [QuPath](https://qupath.github.io/) with a single click. The Launchpad creates QuPath projects on-the-fly. > 2. **Marimo extension**: Analyze your application results using [Marimo](https://marimo.io/) notebooks embedded in the Launchpad. You don't have to leave the Launchpad to do real data science. ### CLI: Manage datasets and application runs from your terminal -The Python SDK includes the **Aignostics CLI**, a Command-Line Interface (CLI) that allows you to +The Python SDK includes the **Aignostics CLI**, a Command-Line Interface that allows you to interact with the Aignostics Platform directly from your terminal or shell script. See as follows for a simple example where we download a sample dataset for the [Atlas @@ -108,7 +107,7 @@ nano tcga_luad/metadata.csv uvx aignostics application run upload he-tme data/tcga_luad/run.csv # Submit the application run and print the run id uvx aignostics application run submit he-tme data/tcga_luad/run.csv -# Check the status of the application run you submitted +# Check the status of the application run you triggered uvx aignostics application run list # Incrementally download results when they become available # Fill in the id from the output in the previous step @@ -145,10 +144,10 @@ to learn about all commands and options available. > [your personal dashboard on the Aignostics Platform website](https://platform.aignostics.com/getting-started/quick-start) > and follow the steps outlined in the `Use in Python Notebooks` section. -The Python SDK includes Jupyter and Marimo notebooks to help you get started interacting +The Python SDK includes Jupyter and Marimo notebooks to help you get started interacting with the Aignostics Platform in your notebook environment. -The notebooks showcase the interaction with the Aignostics Platform using our "Test Application". To run one them, +The notebooks showcase the interaction with the Aignostics Platform using our "Test Application". To run one of them, please follow the steps outlined in the snippet below to clone this repository and start either the [Jupyter](https://docs.jupyter.org/en/latest/index.html) ([examples/notebook.ipynb](https://github.com/aignostics/python-sdk/blob/main/examples/notebook.ipynb)) @@ -182,12 +181,12 @@ uv run marimo edit examples/notebook.py Next to using the Launchpad, CLI and example notebooks, the Python SDK includes the *Aignostics Client Library* for integration with your Python Codebase. -The following sections outline how to install the Python SDK for this purpose and +The following sections outline how to install the Python SDK for this purpose and interact with the Client. 
### Installation -The Aignostics Python SDK is published on the [Python Package Index (PyPI)](https://pypi.org/project/aignostics/), +The Aignostics Python SDK is published on the [Python Package Index (PyPI)](https://pypi.org/project/aignostics/), is compatible with Python 3.11 and above, and can be installed via `uv` or `pip`: **Install with [uv](https://docs.astral.sh/uv/):** If you don't have uv @@ -207,7 +206,7 @@ pip install aignostics #### Usage -The following snippet shows how to use the Client to submit an application +The following snippet shows how to use the Client to trigger an application run: ```python @@ -215,21 +214,21 @@ from aignostics import platform # initialize the client client = platform.Client() -# submit an application run -application_run = client.runs.submit( - application_id="test-app", +# trigger an application run +application_run = client.runs.create( + application_version="two-task-dummy:v0.35.0", items=[ platform.InputItem( - external_id="slide-1", + reference="slide-1", input_artifacts=[ platform.InputArtifact( - name="whole_slide_image", + name="user_slide", download_url="", metadata={ - "checksum_base64_crc32c": "AAAAAA==", - "resolution_mpp": 0.25, - "width_px": 1000, - "height_px": 1000, + "checksum_crc32c": "AAAAAA==", + "base_mpp": 0.25, + "width": 1000, + "height": 1000, }, ) ], @@ -247,23 +246,22 @@ to learn about all classes and methods. ##### Defining the input for an application run -When creating an application run, you need to specify the `application_id` and optionally the -`application_version` (version number) of the application you want to run. If you omit the version, -the latest version will be used automatically. Additionally, you need to define the input items you -want to process in the run. The input items are defined as follows: +Next to the `application_version` of the application you want to run, you have +to define the input items you want to process in the run. The input items are +defined as follows: ```python platform.InputItem( - external_id="1", + reference="1", input_artifacts=[ platform.InputArtifact( - name="whole_slide_image", # defined by the application version's input artifact schema + name="user_slide", # defined by the application version input_artifact schema download_url="", - metadata={ # defined by the application version's input artifact schema - "checksum_base64_crc32c": "N+LWCg==", - "resolution_mpp": 0.46499982, - "width_px": 3728, - "height_px": 3640, + metadata={ # defined by the application version input_artifact schema + "checksum_crc32c": "N+LWCg==", + "base_mpp": 0.46499982, + "width": 3728, + "height": 3640, }, ) ], @@ -276,7 +274,8 @@ string. This is used to identify the item in the results later on. The data & metadata you need to provide for each item. The required artifacts depend on the application version you want to run - in the case of the test application, there is only one artifact required, which is the image to process. The -artifact name is defined as `whole_slide_image` for this application. +artifact name is defined as `user_slide` for the `two-task-dummy` application +and `whole_slide_image` for the `he-tme` application. The `download_url` is a signed URL that allows the Aignostics Platform to download the image data later during processing. 
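For example, if the image lives in Google Cloud Storage, a signed URL could be generated as sketched below. This assumes the `google-cloud-storage` package and uses placeholder bucket and object names; any storage provider that can issue signed GET URLs works equally well:

```python
# Sketch: produce a signed GET URL for a slide stored in GCS (names are placeholders).
from datetime import timedelta

from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-slide-bucket").blob("slides/slide-1.tiff")
download_url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(hours=24),  # long enough for the platform to fetch the image
    method="GET",
)
```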
diff --git a/SECURITY.md b/SECURITY.md index 4fa382de..49389335 100644 --- a/SECURITY.md +++ b/SECURITY.md @@ -5,7 +5,6 @@ If you discover a security vulnerability in Aignostics Python SDK, please [report it here](https://github.com/aignostics/python-sdk/security/advisories/new). We take all security reports seriously. Upon receiving a security report, we will: - 1. Confirm receipt of the vulnerability report 2. Investigate the issue 3. Work on a fix @@ -25,7 +24,6 @@ a. **[GitHub Dependabot](https://github.com/dependabot)**: Monitors dependencies b. **[Renovate](https://www.mend.io/renovate/)**: Monitors dependencies for vulnerabilities pre and post release on GitHub. [Dependency Dashboard](https://github.com/aignostics/python-sdk/issues?q=is%3Aissue%20state%3Aopen%20Dependency%20Dashboard) published. c. **[pip-audit](https://pypi.org/project/pip-audit/)**: Pre commit to GitHub scans Python dependencies for known vulnerabilities using data from the [Python Advisory Database](https://github.com/pypa/advisory-database). `vulnerabilities.json` published [per release](https://github.com/aignostics/python-sdk/releases). d. **[trivy](https://trivy.dev/latest/)**: Pre commit to GitHub scans Python dependencies for known vulnerabilities using data from [GitHub Advisory Database](https://github.com/advisories?query=ecosystem%3Apip) and [OSV.dev](https://osv.dev/list?q=&ecosystem=PyPI). `sbom.spdx` published [per release](https://github.com/aignostics/python-sdk/releases). -e. **[ox.security](https://www.ox.security/)**: Monitors dependencies for vulnerabilities pre and post release on GitHub. ### 2. License Compliance Checks and Software Bill of Materials (SBOM) @@ -37,18 +35,14 @@ d. **[trivy](https://trivy.dev/latest/)**: Generates Software Bill of Materials a. **[GitHub CodeQL](https://codeql.github.com/)**: Analyzes code for common vulnerabilities and coding errors using GitHub's semantic code analysis engine. [Code scanning results](https://github.com/aignostics/python-sdk/security/code-scanning) published. b. **[SonarQube](https://www.sonarsource.com/products/sonarcloud/)**: Performs comprehensive static code analysis to detect code quality issues, security vulnerabilities, and bugs. [Security hotspots](https://sonarcloud.io/project/security_hotspots?id=aignostics_python-sdk) published. -c. **[ox.security](https://www.ox.security/)**: Analyzes code to adhere with security best practices. ### 4. Secret Detection - a. **[GitHub Secret scanning](https://docs.github.com/en/code-security/secret-scanning/introduction/about-secret-scanning)**: Automatically scans for secrets in the codebase and alerts if any are found. [Secret scanning alerts](https://github.com/aignostics/python-sdk/security/secret-scanning) published. b. **[Yelp/detect-secrets](https://github.com/Yelp/detect-secrets)**: Pre-commit hook and automated scanning to prevent accidental inclusion of secrets or sensitive information in commits. [Pre-Commit hook](https://github.com/aignostics/python-sdk/blob/main/.pre-commit-config.yaml) published. -c. **[ox.security](https://www.ox.security/)**: Scans for secrets and sensitive information in the codebase. ## Security Best Practices We follow these security best practices: - 1. Regular dependency updates 2. Comprehensive test coverage 3. Code review process for changes by external contributors @@ -56,7 +50,6 @@ We follow these security best practices: 5. Adherence to Python security best practices We promote security awareness among contributors and users: - 1. 
We indicate security as a priority in our [code style guide](CODE_STYLE.md), to be followed by human and agentic contributors as mandatory @@ -65,4 +58,4 @@ We promote security awareness among contributors and users: ## Security Compliance -For questions about security compliance or for more details about our security practices, please contact . +For questions about security compliance or for more details about our security practices, please contact helmut@aignostics.com. diff --git a/SOFTWARE_ARCHITECTURE.md b/SOFTWARE_ARCHITECTURE.md deleted file mode 100644 index 45402556..00000000 --- a/SOFTWARE_ARCHITECTURE.md +++ /dev/null @@ -1,869 +0,0 @@ -# Software Architecture Document - -**Aignostics Python SDK** - -**Version:** 0.2.105 -**Date:** August 12, 2025 -**Status:** Draft - ---- - -## 1. Overview - -### 1.1 Context - -The Aignostics Python SDK is a comprehensive client library that provides programmatic access to the Aignostics AI Platform services. It serves as a bridge between local development environments and cloud-based AI services, enabling developers to interact with applications, manage data buckets, process datasets, and utilize various AI-powered tools through both command-line and graphical interfaces. - -The SDK is designed to support data scientists, researchers, and developers working with digital pathology, whole slide imaging (WSI), and computational pathology applications in the life sciences domain. - -### 1.2 General Architecture and Patterns Applied - -#### Simplified Overview for Onboarding - -```mermaid -flowchart LR - subgraph "User Interfaces" - CLI[🖥️ Command Line] - GUI[🖱️ Launchpad App] - NB[📓 Notebooks] - end - - subgraph "Core Functionality" - APP[🧬 Run ML Applications] - FILES[🗂️ Manage Files] - DATA[📊 Handle Datasets] - end - - subgraph "Cloud Platform" - API[🌐 Aignostics Platform] - ML[🤖 ML Processing] - STORE[💾 Cloud Storage] - end - - subgraph "External Tools" - QP[🔍 QuPath Analysis] - IDC[📚 Public Datasets] - end - - CLI --> APP - GUI --> APP - NB --> APP - - CLI --> FILES - GUI --> FILES - - CLI --> DATA - GUI --> DATA - - APP --> API - FILES --> STORE - DATA --> IDC - - API --> ML - APP --> QP - - %% Annotations - CLI -.->|"Commands like:<br/>
aignostics application run"| APP - GUI -.->|"Drag & drop<br/>
Point & click"| APP - API -.->|"Processes WSI images
Returns results"| ML - APP -.->|"Opens results in<br/>
QuPath projects"| QP - - classDef interface fill:#e3f2fd,stroke:#1976d2,stroke-width:2px - classDef core fill:#e8f5e8,stroke:#388e3c,stroke-width:2px - classDef cloud fill:#fff3e0,stroke:#f57c00,stroke-width:2px - classDef external fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px - - class CLI,GUI,NB interface - class APP,FILES,DATA core - class API,ML,STORE cloud - class QP,IDC external -``` - -The SDK follows a **Modulith Architecture** pattern, organizing functionality into cohesive modules while maintaining a monolithic deployment structure. This approach provides the benefits of modular design (clear boundaries, focused responsibilities) while avoiding the complexity of distributed systems. - -**Key Architectural Patterns:** - -- **Modulith Pattern**: Self-contained modules with clear boundaries and minimal inter-module dependencies -- **Dependency Injection**: Dynamic discovery and registration of services, CLI commands, and GUI pages -- **Service Layer Pattern**: Core business logic encapsulated in service classes with consistent interfaces -- **Dual Presentation Layers**: - - (a) **CLI Layer**: Command-line interface using Typer framework - - (b) **GUI Layer**: Web-based graphical interface using NiceGUI framework -- **Settings-based Configuration**: Environment-aware configuration management using Pydantic Settings -- **Plugin Architecture**: Optional modules that can be enabled/disabled based on available dependencies - -```mermaid -graph TB - subgraph "Presentation Layer" - CLI[CLI Interface<br/>
Typer Commands] - GUI[GUI Interface<br/>
NiceGUI/Launchpad] - NOTEBOOK[Notebook Interface<br/>
Marimo Server] - end - - subgraph "Domain Services" - AS[Application Service] - BS[Bucket Service] - DS[Dataset Service] - NS[Notebook Service] - WS[WSI Service] - QS[QuPath Service] - SS[System Service] - end - - subgraph "Platform Layer" - PS[Platform Service<br/>
API Client] - AUTH[Authentication] - CLIENT[HTTP Client] - end - - subgraph "Infrastructure Layer" - DI[Dependency Injection<br/>
Auto-discovery] - SETTINGS[Settings Management
Pydantic] - LOGGING[Logging & Monitoring
Sentry/Logfire] - BOOT[Boot Sequence] - end - - subgraph "External Services" - PLATFORM_API[Aignostics Platform API] - QUPATH_APP[QuPath Application] - IDC_API[NCI Image Data Commons] - end - - %% Presentation to Domain Service connections - CLI --> AS - CLI --> BS - CLI --> DS - CLI --> NS - CLI --> WS - CLI --> QS - CLI --> SS - - GUI --> AS - GUI --> BS - GUI --> DS - GUI --> WS - GUI --> SS - GUI --> NS - - NOTEBOOK --> AS - NOTEBOOK --> BS - NOTEBOOK --> DS - - %% Domain Service interdependencies - AS --> BS - AS --> WS - AS --> QS - AS --> PS - - BS --> PS - DS --> PS - WS --> PS - NS --> PS - - %% Platform to External Services - PS --> AUTH - PS --> CLIENT - CLIENT --> PLATFORM_API - - %% External integrations - QS --> QUPATH_APP - DS --> IDC_API - - %% Infrastructure connections - AS --> DI - BS --> DI - DS --> DI - NS --> DI - WS --> DI - QS --> DI - SS --> DI - PS --> DI - - AS --> SETTINGS - BS --> SETTINGS - DS --> SETTINGS - NS --> SETTINGS - WS --> SETTINGS - QS --> SETTINGS - SS --> SETTINGS - PS --> SETTINGS - - AS --> LOGGING - BS --> LOGGING - DS --> LOGGING - NS --> LOGGING - WS --> LOGGING - QS --> LOGGING - SS --> LOGGING - PS --> LOGGING - - DI --> BOOT - SETTINGS --> BOOT - LOGGING --> BOOT - - %% Styling - classDef presentation fill:#e3f2fd,stroke:#1976d2,stroke-width:2px - classDef domain fill:#e8f5e8,stroke:#388e3c,stroke-width:2px - classDef platform fill:#fff3e0,stroke:#f57c00,stroke-width:2px - classDef infrastructure fill:#fce4ec,stroke:#c2185b,stroke-width:2px - classDef external fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px - - class CLI,GUI,NOTEBOOK presentation - class AS,BS,DS,NS,WS,QS,SS domain - class PS,AUTH,CLIENT platform - class DI,SETTINGS,LOGGING,BOOT infrastructure - class PLATFORM_API,QUPATH_APP,IDC_API external -``` - -**Architecture Overview:** - -This high-level diagram shows the four main architectural layers and their relationships: - -- **🔵 Presentation Layer**: User interfaces (CLI, GUI, Notebooks) -- **🟢 Domain Services**: Business logic modules for specific functionality -- **🟠 Platform Layer**: API client and authentication services -- **🔴 Infrastructure Layer**: Cross-cutting concerns and utilities -- **🟣 External Services**: Third-party integrations - -### 1.3 Language and Frameworks - -**Core Technologies:** - -- **[Python 3.11+](https://www.python.org/)**: Primary programming language with full type hints and modern features -- **[Typer](https://typer.tiangolo.com/)**: CLI framework for building intuitive command-line interfaces with automatic help generation -- **[NiceGUI](https://nicegui.io/)**: Modern web-based GUI framework for creating responsive user interfaces -- **[FastAPI](https://fastapi.tiangolo.com/)**: High-performance web framework for API endpoints (inherited from template) -- **[Pydantic](https://docs.pydantic.dev/)**: Data validation and settings management with type safety -- **[Requests](https://docs.python-requests.org/)**: HTTP client library for API communication - -**Key Dependencies:** - -- **[aignx-codegen](https://github.com/aignostics/aignx-codegen)**: Auto-generated API client for Aignostics Platform -- **[Marimo](https://marimo.io/)**: Interactive notebook environment for data exploration -- **[Google CRC32C](https://github.com/googleapis/python-crc32c)**: Data integrity verification for file transfers -- **[Humanize](https://github.com/python-humanize/humanize)**: Human-readable formatting for file sizes, dates, and progress - -**Optional Extensions:** - -- **[QuPath](https://qupath.github.io/) 
Integration**: Advanced pathology image analysis capabilities
-- **WSI Processing**: Whole slide image format support and processing
-- **[Jupyter Notebook](https://jupyter.org/)**: Alternative notebook environment support
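
To make the Typer-based CLI layer from this section concrete, here is a minimal sketch of how a domain module can expose commands. The module name, command, and option are illustrative assumptions, not the SDK's actual CLI surface:

```python
import typer

# Hypothetical sub-app; the real SDK auto-discovers and mounts per-module CLIs.
cli = typer.Typer(name="dataset", help="Dataset operations (illustrative).")


@cli.command()
def describe(dataset_id: str, verbose: bool = typer.Option(False, "--verbose")) -> None:
    """Print basic information about a dataset."""
    typer.echo(f"dataset={dataset_id} verbose={verbose}")


if __name__ == "__main__":
    cli()
```

Typer derives argument parsing and `--help` output from the type hints, which is one reason the framework pairs well with a fully type-annotated codebase.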
-### 1.4 Build Chain and CI/CD
-
-The project implements a comprehensive DevOps pipeline with multiple quality gates and automated processes:
-
-```mermaid
-flowchart TD
-    subgraph "Development Phase"
-        DEV[👩‍💻 Developer]
-        CODE[📝 Code Changes]
-        PRECOMMIT[🔍 Pre-commit Hooks]
-    end
-
-    subgraph "Quality Gates"
-        LINT[🧹 Linting
Ruff formatter]
-        TYPE[🔍 Type Checking
MyPy strict mode] - TEST[🧪 Testing
pytest + coverage] - SEC[🛡️ Security
pip-audit + secrets] - end - - subgraph "Build & Package" - BUILD[📦 Build Package
Python wheel] - DOCKER[🐳 Docker Images
Slim + Full variants] - DOCS[📚 Documentation
Sphinx + API docs] - end - - subgraph "Release & Deploy" - PYPI[🐍 PyPI Release] - REGISTRY[🗂️ Container Registry] - RTD[📖 Read the Docs] - MONITOR[📊 Monitoring
Sentry + Logfire] - end - - DEV --> CODE - CODE --> PRECOMMIT - PRECOMMIT --> LINT - PRECOMMIT --> TYPE - PRECOMMIT --> TEST - PRECOMMIT --> SEC - - LINT --> BUILD - TYPE --> BUILD - TEST --> BUILD - SEC --> BUILD - - BUILD --> DOCKER - BUILD --> DOCS - BUILD --> PYPI - - DOCKER --> REGISTRY - DOCS --> RTD - PYPI --> MONITOR - - %% Annotations for clarity - PRECOMMIT -.->|"Runs automatically
on git commit"| LINT - TEST -.->|"85% coverage
requirement"| BUILD - SEC -.->|"Scans dependencies
& secrets"| BUILD - DOCKER -.->|"Multi-arch builds
ARM64 + AMD64"| REGISTRY - - classDef dev fill:#e3f2fd,stroke:#1976d2,stroke-width:2px - classDef quality fill:#fff3e0,stroke:#f57c00,stroke-width:2px - classDef build fill:#e8f5e8,stroke:#388e3c,stroke-width:2px - classDef deploy fill:#fce4ec,stroke:#c2185b,stroke-width:2px - - class DEV,CODE,PRECOMMIT dev - class LINT,TYPE,TEST,SEC quality - class BUILD,DOCKER,DOCS build - class PYPI,REGISTRY,RTD,MONITOR deploy -``` - -**Key Pipeline Features:** - -- **Code Generation**: Automated API client generation from OpenAPI specifications -- **Pre-commit Hooks**: Automated quality checks including `detect-secrets` and `pygrep` -- **Multi-environment Testing**: Matrix testing across Python versions and operating systems -- **Security Scanning**: `pip-audit` dependency vulnerability scanning and secret detection -- **Documentation Generation**: Automated API docs and user guides using Sphinx -- **Multi-platform Builds**: Docker images for both ARM64 and AMD64 architectures -- **Compliance Integration**: Automated reporting to compliance platforms - -**Code Quality & Analysis:** - -- **Linting with Ruff**: Fast Python linter and formatter following Black code style -- **Static Type Checking with MyPy**: Strict type checking in all code paths -- **Pre-commit Hooks**: Automated quality checks including `detect-secrets` and `pygrep` -- **Code Quality Analysis**: SonarQube and GitHub CodeQL integration - -**Testing & Coverage:** - -- **Unit and E2E Testing with pytest**: Comprehensive test suite with parallel execution -- **Matrix Testing with Nox**: Multi-environment testing across Python versions -- **Test Coverage Reporting**: Codecov integration with coverage artifacts -- **Regression Testing**: Automated detection of breaking changes - -**Security & Compliance:** - -- **Dependency Monitoring**: Renovate and GitHub Dependabot for automated updates -- **Vulnerability Scanning**: `pip-audit` and Trivy security analysis -- **License Compliance**: `pip-licenses` with allowlist validation and attribution generation -- **SBOM Generation**: Software Bill of Materials in CycloneDX and SPDX formats - -**Documentation & Release:** - -- **Documentation with Sphinx**: Automated generation of HTML/PDF documentation -- **API Documentation**: Interactive OpenAPI specification with Swagger UI -- **Version Management**: `bump-my-version` for semantic versioning -- **Changelog Generation**: `git-cliff` for automated release notes -- **Multi-format Publishing**: PyPI packages, Docker images, and Read The Docs - -**Monitoring & Observability:** - -- **Error Monitoring**: Sentry integration for production error tracking -- **Logging & Metrics**: Logfire integration for structured logging -- **Uptime Monitoring**: Prepared integration with monitoring services - -**Deployment & Distribution:** - -- **Multi-stage Docker Builds**: Fat (all extras) and slim (core only) variants -- **Multi-architecture Support**: ARM64 and AMD64 container images -- **Container Security**: Non-root execution within immutable containers -- **Registry Publishing**: Docker.io and GitHub Container Registry with attestations - -**Development Environment:** - -- **Dev Containers**: One-click development environments with GitHub Codespaces -- **VSCode Integration**: Optimized settings and extensions for development all found under ./vscode directory -- **GitHub Copilot**: Custom instructions and prompts for AI-assisted development -- **Local CI/CD**: Act integration for running GitHub Actions locally - -### 1.5 Layers and Modules - -#### High-Level 
-### 1.5 Layers and Modules
-
-#### High-Level Module Organization
-
-```mermaid
-graph TB
-    subgraph "Presentation Interfaces"
-        CLI["🖥️ CLI Interface
Typer-based Commands"] - GUI["🖱️ GUI Launchpad
NiceGUI Web Interface"] - NOTEBOOK["📓 Interactive Notebooks
Marimo Server"] - end - - subgraph "Domain Modules" - APPLICATION["🧬 Application Module
ML Application Management"] - BUCKET["🗂️ Bucket Module
Cloud File Storage"] - DATASET["📊 Dataset Module
Data Management"] - WSI["🔬 WSI Module
Whole Slide Image Processing"] - QUPATH["🔍 QuPath Module
Pathology Analysis Integration"] - SYSTEM["⚙️ System Module
Health & Diagnostics"] - NOTEBOOK_MOD["📔 Notebook Module
Interactive Computing"] - end - - subgraph "Platform Layer" - PLATFORM["🌐 Platform Module
API Client & Authentication"] - end - - subgraph "Infrastructure Layer" - UTILS["🔧 Utils Module
DI, Settings, Logging"] - end - - subgraph "Third-party Integrations" - THIRDPARTY["🔗 Third-party Module
External Service Connectors"] - end - - %% Direct presentation dependencies - CLI --> APPLICATION - CLI --> BUCKET - CLI --> DATASET - CLI --> WSI - CLI --> QUPATH - CLI --> SYSTEM - CLI --> NOTEBOOK_MOD - - GUI --> APPLICATION - GUI --> BUCKET - GUI --> DATASET - GUI --> WSI - GUI --> SYSTEM - GUI --> NOTEBOOK_MOD - - NOTEBOOK --> APPLICATION - NOTEBOOK --> BUCKET - NOTEBOOK --> DATASET - - %% Domain module dependencies - APPLICATION --> PLATFORM - APPLICATION --> BUCKET - APPLICATION --> WSI - APPLICATION --> QUPATH - - BUCKET --> PLATFORM - DATASET --> PLATFORM - WSI --> PLATFORM - QUPATH --> THIRDPARTY - NOTEBOOK_MOD --> PLATFORM - - %% Infrastructure dependencies (all modules depend on utils) - APPLICATION --> UTILS - BUCKET --> UTILS - DATASET --> UTILS - WSI --> UTILS - QUPATH --> UTILS - SYSTEM --> UTILS - NOTEBOOK_MOD --> UTILS - PLATFORM --> UTILS - THIRDPARTY --> UTILS - - %% Styling with better contrast - classDef presentation fill:#e3f2fd,stroke:#1976d2,stroke-width:2px,color:#000 - classDef domain fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000 - classDef platform fill:#fff3e0,stroke:#f57c00,stroke-width:2px,color:#000 - classDef infrastructure fill:#fce4ec,stroke:#c2185b,stroke-width:2px,color:#000 - classDef thirdparty fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:#000 - - class CLI,GUI,NOTEBOOK presentation - class APPLICATION,BUCKET,DATASET,WSI,QUPATH,SYSTEM,NOTEBOOK_MOD domain - class PLATFORM platform - class UTILS infrastructure - class THIRDPARTY thirdparty -``` - -#### Detailed Data Flow: Application Processing Workflow - -```mermaid -sequenceDiagram - participant User as 👤 User - participant CLI as 🖥️ CLI/GUI - participant App as 🧬 Application Service - participant Bucket as 🗂️ Bucket Service - participant Platform as 🌐 Platform API - participant ML as 🤖 ML Pipeline - participant QuPath as 🔍 QuPath - - Note over User,QuPath: Computational Pathology Analysis Workflow - - User->>CLI: Upload WSI files - CLI->>Bucket: Upload to cloud storage - Bucket->>Platform: Generate signed URLs - Platform-->>Bucket: Return upload URLs - Bucket-->>CLI: Upload progress updates - CLI-->>User: Show upload status - - User->>CLI: Submit application run - CLI->>App: Create application run - App->>Platform: Submit run with metadata - Platform->>ML: Start processing pipeline - - Note over ML: Image Analysis:
• Tissue segmentation
• Cell detection
• Feature extraction - - ML-->>Platform: Processing updates - Platform-->>App: Status notifications - App-->>CLI: Real-time progress - CLI-->>User: Show processing status - - ML->>Platform: Results ready - Platform-->>App: Download URLs available - App->>Bucket: Download results - Bucket-->>App: Result files (GeoJSON, TIFF, CSV) - - App->>QuPath: Generate QuPath project - QuPath-->>App: Project created - App-->>CLI: Results available - CLI-->>User: Open in QuPath/Notebooks -``` - -The SDK is organized into distinct layers, each with specific responsibilities: - -#### Infrastructure Layer (`utils/`) - -**Core Utilities and Cross-cutting Concerns:** - -- **Boot Sequence**: Application initialization and dependency injection setup -- **Dependency Injection**: Dynamic discovery and registration of services and UI components -- **Settings Management**: Environment-aware configuration using Pydantic Settings -- **Logging & Monitoring**: Structured logging with Logfire and Sentry integration -- **Authentication**: Token-based authentication with caching mechanisms -- **Health Monitoring**: Service health checks and status reporting - -#### Platform Layer (`platform/`) - -**Foundation Services:** - -- **API Client**: Auto-generated client for Aignostics Platform REST API -- **Authentication Service**: OAuth/JWT token management and renewal -- **Core Resources**: Applications, versions, runs, and user management -- **Exception Handling**: Standardized error handling and API response processing -- **Configuration**: Platform-specific settings and endpoint management - -#### Domain Modules - -Each domain module follows a consistent internal structure: - -**Application Module (`application/`)** - -- **Service** (`_service.py`): Core business logic for application management and execution -- **CLI** (`_cli.py`): Command-line interface for application operations -- **GUI** (`_gui/`): Web-based interface for application management -- **Settings** (`_settings.py`): Module-specific configuration -- **Utilities** (`_utils.py`): Helper functions and data transformations - -**Bucket Module (`bucket/`)** - -- **Service**: Cloud storage operations, file upload/download with progress tracking -- **CLI**: Command-line file management operations -- **GUI**: Drag-and-drop file manager interface -- **Settings**: Storage configuration and authentication - -**Dataset Module (`dataset/`)** - -- **Service**: Dataset creation, validation, and metadata management -- **CLI**: Batch dataset operations and validation -- **GUI**: Interactive dataset builder and explorer -- **Settings**: Dataset processing configuration - -**WSI Module (`wsi/`)** - -- **Service**: Whole slide image processing and format conversion -- **Utilities**: Image format detection and metadata extraction -- **Integration**: Support for various medical imaging formats (DICOM, TIFF, SVS) - -**QuPath Module (`qupath/`)** - -- **Service**: Integration with QuPath for advanced pathology analysis -- **Annotation Processing**: Import/export of pathology annotations -- **Project Management**: QuPath project creation and synchronization - -**Notebook Module (`notebook/`)** - -- **Service**: Marimo notebook server management -- **Templates**: Pre-configured notebook templates for common workflows -- **Integration**: Seamless data flow between notebooks and platform services - -**System Module (`system/`)** - -- **Service**: System diagnostics and environment information -- **Health Checks**: Comprehensive system health monitoring -- **Configuration**: 
System-level settings and capability detection
-
-**Third-Party Integration (`third_party/`)**
-
-- **Embedded Dependencies**: Vendored third-party libraries for reliability
-- **IDC Index**: Integration with Image Data Commons for medical imaging datasets
-- **Bottle.py**: Lightweight WSGI micro web-framework for specific use cases
-
-#### Presentation Layer
-
-**CLI Interface (`cli.py`)**
-
-- Auto-discovery and registration of module CLI commands
-- Consistent help text and error handling across all commands
-- Progress indicators and interactive prompts
-- Support for both interactive and scripted usage
-
-**GUI Interface (`gui/`)**
-
-- Responsive web-based interface using NiceGUI
-- Consistent theming and layout across all modules
-- Real-time progress tracking and status updates
-- File drag-and-drop capabilities and interactive forms
-
-## 2. Design Principles
-
-### 2.1 Modular Architecture
-
-Each module is designed as a self-contained unit with:
-
-- **Clear Boundaries**: Well-defined interfaces and minimal coupling
-- **Consistent Structure**: Standardized organization across all modules
-- **Independent Testing**: Module-specific test suites with isolated dependencies
-- **Optional Dependencies**: Graceful degradation when optional features are unavailable
-
-### 2.2 Dependency Injection
-
-The SDK uses a sophisticated dependency injection system:
-
-- **Automatic Discovery**: Services and UI components are automatically registered
-- **Dynamic Loading**: Modules are loaded on-demand based on available dependencies
-- **Lifecycle Management**: Proper initialization and cleanup of resources
-- **Configuration Injection**: Settings are automatically injected into services
-
-### 2.3 Configuration Management
-
-Hierarchical configuration system (a minimal sketch follows at the end of this section):
-
-- **Environment Variables**: Platform and module-specific environment variables
-- **Settings Files**: `.env` files for local development configuration
-- **Default Values**: Sensible defaults for all configuration options
-- **Validation**: Type-safe configuration with Pydantic validation
-
-### 2.4 Error Handling and Resilience
-
-Comprehensive error handling strategy:
-
-- **Typed Exceptions**: Domain-specific exception hierarchies
-- **Graceful Degradation**: Fallback behavior when services are unavailable
-- **Retry Logic**: Automatic retry with exponential backoff for transient failures
-- **User-Friendly Messages**: Clear error messages with actionable guidance
-
-### 2.5 Security and Privacy
-
-Security-first design principles:
-
-- **Token-based Authentication**: Secure API authentication with automatic refresh
-- **Sensitive Data Protection**: Automatic masking of secrets in logs and outputs
-- **Input Validation**: Comprehensive validation of all user inputs and API responses
-- **Secure Defaults**: All security settings default to the most secure option
-
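To make the configuration hierarchy from 2.3 concrete, a minimal sketch of a module-level settings class built on Pydantic Settings; the class name, environment prefix, and fields are illustrative assumptions, not the SDK's actual definitions:

```python
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict


class BucketSettings(BaseSettings):
    """Illustrative settings for a storage-oriented module (all names hypothetical)."""

    # Resolution order mirrors the hierarchy in 2.3: environment variables,
    # then the local .env file, then the typed defaults declared below.
    model_config = SettingsConfigDict(env_prefix="AIGNOSTICS_BUCKET_", env_file=".env")

    endpoint_url: str = "https://platform.aignostics.com"  # sensible default
    upload_chunk_size: int = Field(default=8 * 1024 * 1024, gt=0)  # validated on load


settings = BucketSettings()  # raises a typed validation error on malformed input
```

With `env_prefix`, a value such as `AIGNOSTICS_BUCKET_UPLOAD_CHUNK_SIZE=1048576` would override the default without any code changes.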
-## 3. Integration Patterns
-
-### 3.1 Aignostics Platform API Integration
-
-The SDK serves as a comprehensive client for the **Aignostics Platform API**, a RESTful web service that provides access to advanced computational pathology applications and machine learning workflows:
-
-**Core API Services:**
-
-- **Application Management**: Access to computational pathology applications like Atlas H&E-TME, tissue segmentation, cell detection and classification
-- **Run Orchestration**: Submit, monitor, and manage application runs with up to 500 whole slide images per batch
-- **Result Management**: Incremental download of results as processing completes, with automatic 30-day retention
-- **Resource Management**: User quotas, organization management, and usage monitoring
-
-**Machine Learning Applications:**
-
-The platform provides access to a growing ecosystem of computational pathology applications:
-
-- **Atlas H&E-TME**: Tumor microenvironment analysis for H&E stained slides
-- **Test Application**: Free validation application for integration testing
-- **Tissue Quality Control**: Automated assessment of slide quality and artifacts
-- **Cell Detection & Classification**: Advanced machine learning models for cellular analysis
-- **Biomarker Scoring**: Quantitative analysis of immunohistochemical markers
-
-**Technical Integration:**
-
-- **Auto-generated Client**: Type-safe API client generated from OpenAPI specifications using aignx-codegen
-- **Authentication Handling**: OAuth/JWT token management with automatic refresh and secure credential storage
-- **Request/Response Transformation**: Conversion between Platform API models and SDK domain objects
-- **Error Mapping**: Platform API errors mapped to SDK-specific exceptions with actionable error messages
-- **Batch Processing**: Support for high-throughput processing with incremental result delivery
-
-**Data Flow Architecture:**
-
-```mermaid
-sequenceDiagram
-    participant SDK as Aignostics SDK
-    participant Platform as Aignostics Platform API
-    participant ML as ML Processing Pipeline
-    participant Storage as Cloud Storage
-
-    SDK->>Platform: Submit Application Run
-    Platform->>Platform: Validate Input & Schedule
-    Platform->>Storage: Download WSI via Signed URLs
-    Platform->>ML: Execute Computational Pathology Application
-    ML->>ML: Process: Tissue Segmentation, Cell Detection, etc. 
-    ML->>Storage: Store Results (GeoJSON, TIFF, CSV)
-    Platform->>SDK: Notify Results Available
-    SDK->>Storage: Download Results Incrementally
-```
-
-**Supported Image Formats & Standards:**
-
-- **Input Formats**: Pyramidal DICOM, TIFF, SVS, and other digital pathology formats
-- **Output Formats**: QuPath GeoJSON (annotations), TIFF (heatmaps), CSV (measurements and statistics)
-- **Metadata Standards**: DICOM metadata extraction and validation
-- **Quality Assurance**: CRC32C checksums and automated format validation
-
-### 3.2 File System Integration
-
-Comprehensive file system operations optimized for large medical imaging files:
-
-- **Progress Tracking**: Real-time progress indicators for large file operations with human-readable size formatting
-- **Integrity Verification**: CRC32C checksums for data integrity validation during transfers
-- **Resume Capability**: Ability to resume interrupted file transfers for large WSI files
-- **Batch Operations**: Efficient handling of multiple whole slide images with parallel processing
-- **Cross-platform Compatibility**: Consistent behavior across operating systems
-
-### 3.3 External Tool Integration
-
-Seamless integration with external tools:
-
-- **QuPath**: Direct integration for pathology image analysis
-- **Jupyter/Marimo**: Notebook environments for interactive data exploration
-- **File Managers**: Native file manager integration for easy file access
-- **Web Browsers**: Embedded browser components for rich user interfaces
-
-## 4. Application Ecosystem
-
-### 4.1 Computational Pathology Applications
-
-The SDK provides access to Aignostics' portfolio of advanced computational pathology applications, each designed for a specific analysis purpose in digital pathology workflows:
-
-**Atlas H&E-TME (Hematoxylin & Eosin - Tumor Microenvironment)**
-
-- Advanced machine learning-based tissue and cell analysis for H&E stained slides
-- Quantitative tumor microenvironment characterization
-- Automated tissue segmentation and cell classification
-- Spatial analysis of cellular interactions and distributions
-
-**Application Versioning & Management**
-
-- Each application supports multiple versions with semantic versioning
-- Version-specific input requirements and output schemas
-- Backward compatibility for stable integrations
-- Application-specific documentation and constraints (staining methods, tissue types, diseases)
-
-**Processing Pipeline**
-
-- Automated quality control and format validation
-- Parallel processing of multiple whole slide images (up to 500 per batch)
-- Real-time status monitoring with detailed error reporting
-- Incremental result delivery as individual slides complete processing
-
-### 4.2 Data Processing Workflows
-
-**Input Processing**
-
-- Multi-format support: Pyramidal DICOM, TIFF, SVS, and other digital pathology formats
-- Automated metadata extraction from DICOM headers
-- Base resolution (MPP, microns per pixel) detection and validation
-- Image dimension analysis and pyramid level inspection
-
-**Machine Learning Execution**
-
-- Cloud-based processing with enterprise-grade security
-- Scalable compute resources for high-throughput analysis
-- GPU-accelerated inference for complex pathology models
-- Quality control checkpoints throughout the processing pipeline
-
-**Output Generation**
-
-- Standardized result formats for downstream analysis (see the sketch below):
-  - **QuPath GeoJSON**: Polygon annotations for tissue regions and cellular structures
-  - **TIFF Images**: Heatmaps and segmentation masks with spatial information
-  - **CSV Data**: Quantitative measurements, statistics, and biomarker scores
-- Metadata preservation and provenance tracking
-- Result validation and quality assurance checks
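
To illustrate working with the standardized result formats listed above, a minimal post-processing sketch using only the Python standard library; the directory layout and file names are hypothetical, as actual artifact names depend on the application version:

```python
import csv
import json
from pathlib import Path

results_dir = Path("results/slide_001")  # hypothetical download location

# QuPath GeoJSON: a FeatureCollection of polygon annotations
with (results_dir / "tissue_segmentation.geojson").open() as f:
    features = json.load(f)["features"]
print(f"{len(features)} annotated regions")

# CSV measurements: one row per detected object or region
with (results_dir / "measurements.csv").open() as f:
    for row in csv.DictReader(f):
        pass  # e.g., aggregate per-class counts or biomarker scores
```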
-
-### 4.3 Integration Capabilities
-
-**Enterprise Integration**
-
-- RESTful API for language-agnostic integration
-- Support for Laboratory Information Management Systems (LIMS)
-- Integration with Image Management Systems (IMS)
-- SAML/OIDC authentication for enterprise identity providers
-
-**Research Workflows**
-
-- Jupyter and Marimo notebook integration for interactive analysis
-- QuPath project generation for advanced visualization
-- Export capabilities for external analysis tools
-- Batch processing for large-scale studies
-
-**Quality Assurance & Compliance**
-
-- Automated validation of input requirements
-- Processing audit trails and provenance tracking
-- Secure data handling with configurable retention policies
-- Two-factor authentication and role-based access control
-
-## 5. Quality Assurance
-
-### 5.1 Testing Strategy
-
-Multi-layered testing approach:
-
-- **Unit Tests**: Individual component testing with >85% coverage requirement using **[pytest](https://docs.pytest.org/)** with **[pytest-cov](https://pytest-cov.readthedocs.io/)** for coverage reporting
-- **End-to-End Tests**: Complete workflow testing from CLI and GUI using **[pytest](https://docs.pytest.org/)** with the **[NiceGUI testing plugin](https://nicegui.io/documentation/section_testing)**
-- **Regression Tests**: Automated detection of breaking changes using **[pytest-regressions](https://pytest-regressions.readthedocs.io/)**
-- **Parallel Execution**: Multi-process test execution using **[pytest-xdist](https://pytest-xdist.readthedocs.io/)**
-- **Async Testing**: Asynchronous code testing using **[pytest-asyncio](https://pytest-asyncio.readthedocs.io/)**
-- **Long-running Tests**: Scheduled integration tests marked with `@pytest.mark.long_running` and `@pytest.mark.scheduled`
-
-### 5.2 Code Quality
-
-Automated code quality enforcement:
-
-- **Style Consistency**: Automated formatting with Ruff/Black
-- **Type Safety**: 100% type annotation coverage with MyPy strict mode
-- **Complexity Monitoring**: Cyclomatic complexity limits and code metrics
-- **Security Scanning**: Automated detection of security vulnerabilities
-
-### 5.3 Documentation Standards
-
-Comprehensive documentation requirements:
-
-- **API Documentation**: Auto-generated from type hints and docstrings
-- **User Guides**: Step-by-step tutorials for common workflows
-- **Architecture Documentation**: This document and module-specific designs
-- **Release Notes**: Automated changelog generation from commit messages
-
-## 6. 
Deployment and Operations - -### 6.1 Distribution Channels - -Multiple distribution methods: - -- **PyPI Package**: Standard Python package installation via pip/uv -- **Docker Images**: Containerized deployment with multiple variants -- **Source Installation**: Direct installation from GitHub repository -- **Development Setup**: One-click development environment setup - -### 6.2 Configuration Management - -Environment-aware configuration: - -- **Development**: Local `.env` files with development defaults -- **Testing**: Isolated test configuration with mock services -- **Production**: Environment variables with validation and defaults -- **Container**: Container-specific configuration and health checks - -### 6.3 Monitoring and Observability - -Production monitoring capabilities: - -- **Health Endpoints**: Service health checks for monitoring systems -- **Structured Logging**: JSON-formatted logs with correlation IDs -- **Error Tracking**: Automatic error reporting to monitoring services - ---- - -This architecture document reflects the current state of the Aignostics Python SDK as of August 2025. The design emphasizes modularity, maintainability, and extensibility while providing a consistent developer experience across different interaction modes (CLI, GUI, API). diff --git a/VERSION b/VERSION index ecd405e2..e8efeba9 100644 --- a/VERSION +++ b/VERSION @@ -1 +1 @@ -0.2.197 +0.2.189 diff --git a/aignostics.spec b/aignostics.spec index 023a69a1..ee1a2182 100644 --- a/aignostics.spec +++ b/aignostics.spec @@ -137,7 +137,7 @@ else: name='aignostics.app', icon='logo.ico', bundle_identifier='com.aignostics.launchpad', - version='0.2.197', + version='0.2.189', info_plist={ 'NSPrincipalClass': 'NSApplication', 'NSAppleScriptEnabled': False, diff --git a/codecov.yml b/codecov.yml index 20fe235b..027b99ad 100644 --- a/codecov.yml +++ b/codecov.yml @@ -1,11 +1,11 @@ coverage: - range: "70...75" + range: "75...80" status: project: default: - target: 70% + target: 80% informational: false patch: default: - target: 75% + target: 80% informational: false diff --git a/codegen/in/archive/openapi_1.0.0-beta.3.json b/codegen/in/archive/openapi_1.0.0-beta.3.json deleted file mode 100644 index 958ab01b..00000000 --- a/codegen/in/archive/openapi_1.0.0-beta.3.json +++ /dev/null @@ -1,2641 +0,0 @@ -{ - "openapi": "3.1.0", - "info": { - "title": "Aignostics Platform API", - "description": "\nThe Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. \n\nTo begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. \n\nMore information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com).\n\n**How to authorize and test API endpoints:**\n\n1. Click the \"Authorize\" button in the right corner below\n3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials\n4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint\n\n**Note**: You only need to authorize once per session. 
The lock icons next to endpoints will show green when authorized.\n\n", - "version": "1.0.0-beta.3" - }, - "servers": [ - { - "url": "/api" - } - ], - "paths": { - "/v1/applications": { - "get": { - "tags": [ - "Public" - ], - "summary": "List available applications", - "description": "Returns the list of the applications, available to the caller.\n\nThe application is available if any of the versions of the application is assigned to the caller’s organization.\nThe response is paginated and sorted according to the provided parameters.", - "operationId": "list_applications_v1_applications_get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "page", - "in": "query", - "required": false, - "schema": { - "type": "integer", - "minimum": 1, - "default": 1, - "title": "Page" - } - }, - { - "name": "page-size", - "in": "query", - "required": false, - "schema": { - "type": "integer", - "maximum": 100, - "minimum": 5, - "default": 50, - "title": "Page-Size" - } - }, - { - "name": "sort", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "array", - "items": { - "type": "string" - } - }, - { - "type": "null" - } - ], - "description": "Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `application_id`\n- `name`\n- `description`\n- `regulatory_classes`\n\n**Examples:**\n- `?sort=application_id` - Sort by application_id ascending\n- `?sort=-name` - Sort by name descending\n- `?sort=+description&sort=name` - Sort by description ascending, then name descending", - "title": "Sort" - }, - "description": "Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `application_id`\n- `name`\n- `description`\n- `regulatory_classes`\n\n**Examples:**\n- `?sort=application_id` - Sort by application_id ascending\n- `?sort=-name` - Sort by name descending\n- `?sort=+description&sort=name` - Sort by description ascending, then name descending" - } - ], - "responses": { - "200": { - "description": "A list of applications available to the caller", - "content": { - "application/json": { - "schema": { - "type": "array", - "items": { - "$ref": "#/components/schemas/ApplicationReadShortResponse" - }, - "title": "Response List Applications V1 Applications Get" - }, - "example": [ - { - "application_id": "he-tme", - "name": "Atlas H&E-TME", - "regulatory_classes": [ - "RUO" - ], - "description": "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment.", - "latest_version": { - "number": "1.0.0", - "released_at": "2025-09-01T19:01:05.401Z" - } - }, - { - "application_id": "test-app", - "name": "Test Application", - "regulatory_classes": [ - "RUO" - ], - "description": "This is the test application with two algorithms: TissueQc and Tissue Segmentation", - "latest_version": { - "number": "2.0.0", - "released_at": "2025-09-02T19:01:05.401Z" - } - } - ] - } - } - }, - "401": { - "description": "Unauthorized - Invalid or missing authentication" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/applications/{application_id}": { - "get": { - "tags": [ - "Public" - ], - "summary": "Read Application By Id", - "description": 
"Retrieve details of a specific application by its ID.", - "operationId": "read_application_by_id_v1_applications__application_id__get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "application_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "title": "Application Id" - } - } - ], - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/ApplicationReadResponse" - } - } - } - }, - "403": { - "description": "Forbidden - You don't have permission to see this application" - }, - "404": { - "description": "Not Found - Application with the given ID does not exist" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/applications/{application_id}/versions/{version}": { - "get": { - "tags": [ - "Public" - ], - "summary": "Application Version Details", - "description": "Get the application version details\n\nAllows caller to retrieve information about application version based on provided application version ID.", - "operationId": "application_version_details_v1_applications__application_id__versions__version__get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "application_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "title": "Application Id" - } - }, - { - "name": "version", - "in": "path", - "required": true, - "schema": { - "type": "string", - "title": "Version" - } - } - ], - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/VersionReadResponse" - }, - "example": { - "version_number": "0.4.4", - "changelog": "New deployment", - "input_artifacts": [ - { - "name": "whole_slide_image", - "mime_type": "image/tiff", - "metadata_schema": { - "type": "object", - "$defs": { - "LungCancerMetadata": { - "type": "object", - "title": "LungCancerMetadata", - "required": [ - "type", - "tissue" - ], - "properties": { - "type": { - "enum": [ - "lung" - ], - "type": "string", - "const": "lung", - "title": "Type" - }, - "tissue": { - "enum": [ - "lung", - "lymph node", - "liver", - "adrenal gland", - "bone", - "brain" - ], - "type": "string", - "title": "Tissue" - } - }, - "additionalProperties": false - } - }, - "title": "ExternalImageMetadata", - "$schema": "http://json-schema.org/draft-07/schema#", - "required": [ - "checksum_crc32c", - "base_mpp", - "width", - "height", - "cancer" - ], - "properties": { - "stain": { - "enum": [ - "H&E" - ], - "type": "string", - "const": "H&E", - "title": "Stain", - "default": "H&E" - }, - "width": { - "type": "integer", - "title": "Width", - "maximum": 150000, - "minimum": 1 - }, - "cancer": { - "anyOf": [ - { - "$ref": "#/$defs/LungCancerMetadata" - } - ], - "title": "Cancer" - }, - "height": { - "type": "integer", - "title": "Height", - "maximum": 150000, - "minimum": 1 - }, - "base_mpp": { - "type": "number", - "title": "Base Mpp", - "maximum": 0.5, - "minimum": 0.125 - }, - "mime_type": { - "enum": [ - "application/dicom", - "image/tiff" - ], - "type": "string", - "title": "Mime Type", - "default": "image/tiff" - }, - "checksum_crc32c": { - "type": "string", - "title": "Checksum Crc32C" - } - }, - "description": "Metadata corresponding to an external image.", - 
"additionalProperties": false - } - } - ], - "output_artifacts": [ - { - "name": "tissue_qc:tiff_heatmap", - "mime_type": "image/tiff", - "metadata_schema": { - "type": "object", - "title": "HeatmapMetadata", - "$schema": "http://json-schema.org/draft-07/schema#", - "required": [ - "checksum_crc32c", - "width", - "height", - "class_colors" - ], - "properties": { - "width": { - "type": "integer", - "title": "Width" - }, - "height": { - "type": "integer", - "title": "Height" - }, - "base_mpp": { - "type": "number", - "title": "Base Mpp", - "maximum": 0.5, - "minimum": 0.125 - }, - "mime_type": { - "enum": [ - "image/tiff" - ], - "type": "string", - "const": "image/tiff", - "title": "Mime Type", - "default": "image/tiff" - }, - "class_colors": { - "type": "object", - "title": "Class Colors", - "additionalProperties": { - "type": "array", - "maxItems": 3, - "minItems": 3, - "prefixItems": [ - { - "type": "integer", - "maximum": 255, - "minimum": 0 - }, - { - "type": "integer", - "maximum": 255, - "minimum": 0 - }, - { - "type": "integer", - "maximum": 255, - "minimum": 0 - } - ] - } - }, - "checksum_crc32c": { - "type": "string", - "title": "Checksum Crc32C" - } - }, - "description": "Metadata corresponding to a segmentation heatmap file.", - "additionalProperties": false - }, - "scope": "ITEM", - "visibility": "EXTERNAL" - } - ], - "released_at": "2025-04-16T08:45:20.655972Z" - } - } - } - }, - "403": { - "description": "Forbidden - You don't have permission to see this version" - }, - "404": { - "description": "Not Found - Application version with given ID is not available to you or does not exist" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs": { - "get": { - "tags": [ - "Public" - ], - "summary": "List Runs", - "description": "List runs with filtering, sorting, and pagination capabilities.\n\nReturns paginated runs that were submitted by the user.", - "operationId": "list_runs_v1_runs_get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "application_id", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "description": "Optional application ID filter", - "examples": [ - "tissue-segmentation", - "heta" - ], - "title": "Application Id" - }, - "description": "Optional application ID filter" - }, - { - "name": "application_version", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "description": "Optional Version Name", - "examples": [ - "1.0.2", - "1.0.1-beta2" - ], - "title": "Application Version" - }, - "description": "Optional Version Name" - }, - { - "name": "external_id", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "description": "Optionally filter runs by items with this external ID", - "examples": [ - "slide_001", - "patient_12345_sample_A" - ], - "title": "External Id" - }, - "description": "Optionally filter runs by items with this external ID" - }, - { - "name": "custom_metadata", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "string", - "maxLength": 1000 - }, - { - "type": "null" - } - ], - "description": "Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata.\n#### URL Encoding 
Required\n**Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding.\n\n#### Examples (Clear Format):\n- **Field existence**: `$.study` - Runs that have a study field defined\n- **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value\n- **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75\n- **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\"\n- **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements\n\n#### Examples (URL-Encoded Format):\n- **Field existence**: `%24.study`\n- **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)`\n- **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)`\n- **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)`\n- **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)`\n\n#### Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations\n- Regular expressions use `like_regex` with standard regex syntax\n- **Remember to URL-encode the entire JSONPath expression when making HTTP requests**\n\n ", - "title": "Custom Metadata" - }, - "description": "Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata.\n#### URL Encoding Required\n**Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding.\n\n#### Examples (Clear Format):\n- **Field existence**: `$.study` - Runs that have a study field defined\n- **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value\n- **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75\n- **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\"\n- **Complex conditions**: `$.resources ? 
(@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements\n\n#### Examples (URL-Encoded Format):\n- **Field existence**: `%24.study`\n- **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)`\n- **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)`\n- **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)`\n- **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)`\n\n#### Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations\n- Regular expressions use `like_regex` with standard regex syntax\n- **Remember to URL-encode the entire JSONPath expression when making HTTP requests**\n\n ", - "examples": { - "no_filter": { - "summary": "No filter (returns all)", - "description": "Returns all items without filtering by custom metadata", - "value": "$" - }, - "field_exists": { - "summary": "Check if field exists", - "description": "Find applications that have a project field defined", - "value": "$.study" - }, - "field_has_value": { - "summary": "Check if field has a certain value", - "description": "Compare a field value against a certain value", - "value": "$.study ? (@ == \"abc-1\")" - }, - "numeric_comparisons": { - "summary": "Compare to a numeric value of a field", - "description": "Compare a field value against a numeric value of a field", - "value": "$.confidence_score ? (@ > 0.75)" - }, - "array_operations": { - "summary": "Check if an array contains a certain value", - "description": "Check if an array contains a certain value", - "value": "$.tags[*] ? (@ == \"draft\")" - }, - "complex_filters": { - "summary": "Combine multiple checks", - "description": "Combine multiple checks", - "value": "$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)" - } - } - }, - { - "name": "page", - "in": "query", - "required": false, - "schema": { - "type": "integer", - "minimum": 1, - "default": 1, - "title": "Page" - } - }, - { - "name": "page_size", - "in": "query", - "required": false, - "schema": { - "type": "integer", - "maximum": 100, - "minimum": 5, - "default": 50, - "title": "Page Size" - } - }, - { - "name": "sort", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "array", - "items": { - "type": "string" - } - }, - { - "type": "null" - } - ], - "description": "Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `run_id`\n- `application_version_id`\n- `organization_id`\n- `status`\n- `submitted_at`\n- `submitted_by`\n\n**Examples:**\n- `?sort=submitted_at` - Sort by creation time (ascending)\n- `?sort=-submitted_at` - Sort by creation time (descending)\n- `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending)\n", - "title": "Sort" - }, - "description": "Sort the results by one or more fields. 
Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `run_id`\n- `application_version_id`\n- `organization_id`\n- `status`\n- `submitted_at`\n- `submitted_by`\n\n**Examples:**\n- `?sort=submitted_at` - Sort by creation time (ascending)\n- `?sort=-submitted_at` - Sort by creation time (descending)\n- `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending)\n" - } - ], - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "type": "array", - "items": { - "$ref": "#/components/schemas/RunReadResponse" - }, - "title": "Response List Runs V1 Runs Get" - } - } - } - }, - "404": { - "description": "Run not found" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - }, - "post": { - "tags": [ - "Public" - ], - "summary": "Initiate Run", - "description": "This endpoint initiates a processing run for a selected application and version, and returns a `run_id` for tracking purposes.\n\nSlide processing occurs asynchronously, allowing you to retrieve results for individual slides as soon as they\ncomplete processing. The system typically processes slides in batches of four, though this number may be reduced\nduring periods of high demand.\nBelow is an example of the required payload for initiating an Atlas H&E TME processing run.\n\n\n### Payload\n\nThe payload includes `application_id`, optional `version_number`, and `items` base fields.\n\n`application_id` is the unique identifier for the application.\n`version_number` is the semantic version to use. If not provided, the latest available version will be used.\n\n`items` includes the list of the items to process (slides, in case of HETA application).\nEvery item has a set of standard fields defined by the API, plus the custom_metadata, specific to the\nchosen application.\n\nExample payload structure with the comments:\n```\n{\n application_id: \"he-tme\",\n version_number: \"1.0.0-beta\",\n items: [{\n \"external_id\": \"slide_1\",\n \"input_artifacts\": [{\n \"name\": \"user_slide\",\n \"download_url\": \"https://...\",\n \"custom_metadata\": {\n \"specimen\": {\n \"disease\": \"LUNG_CANCER\",\n \"tissue\": \"LUNG\"\n },\n \"staining_method\": \"H&E\",\n \"width_px\": 136223,\n \"height_px\": 87761,\n \"resolution_mpp\": 0.2628238,\n \"media-type\":\"image/tiff\",\n \"checksum_base64_crc32c\": \"64RKKA==\"\n }\n }]\n }]\n}\n```\n\n| Parameter | Description |\n| :---- | :---- |\n| `application_id` required | Unique ID for the application |\n| `version_number` optional | Semantic version of the application. If not provided, the latest available version will be used |\n| `items` required | List of submitted items (WSIs) with parameters described below. |\n| `external_id` required | Unique WSI name or ID for easy reference to items, provided by the caller. The external_id should be unique across all items of the run. 
|\n| `input_artifacts` required | List of provided artifacts for a WSI; at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide itself), but for some other applications this can be a slide and an segmentation map |\n| `name` required | Type of artifact; Atlas H&E-TME supports only `\"input_slide\"` |\n| `download_url` required | Signed URL to the input file in the S3 or GCS; Should be valid for at least 6 days |\n| `specimen: disease` required | Supported cancer types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) |\n| `specimen: tissue` required | Supported tissue types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) |\n| `staining_method` required | WSI stain /bio-marker; Atlas H&E-TME supports only `\"H&E\"` |\n| `width_px` required | Integer value. Number of pixels of the WSI in the X dimension. |\n| `height_px` required | Integer value. Number of pixels of the WSI in the Y dimension. |\n| `resolution_mpp` required | Resolution of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual |\n| `media-type` required | Supported media formats; available values are: image/tiff (for .tiff or .tif WSI) application/dicom (for DICOM ) application/zip (for zipped DICOM) application/octet-stream (for .svs WSI) |\n| `checksum_base64_crc32c` required | Base64 encoded big-endian CRC32C checksum of the WSI image |\n\n\n\n### Response\n\nThe endpoint returns the run UUID. After that the job is scheduled for the\nexecution in the background.\n\nTo check the status of the run call `v1/runs/{run_id}`.\n\n### Rejection\n\nApart from the authentication, authorization and malformed input error, the request can be\nrejected when the quota limit is exceeded. More details on quotas is described in the\ndocumentation", - "operationId": "create_run_v1_runs_post", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "requestBody": { - "required": true, - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/RunCreationRequest" - } - } - } - }, - "responses": { - "201": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/RunCreationResponse" - } - } - } - }, - "404": { - "description": "Application version not found" - }, - "403": { - "description": "Forbidden - You don't have permission to create this run" - }, - "400": { - "description": "Bad Request - Input validation failed" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}": { - "get": { - "tags": [ - "Public" - ], - "summary": "Get run details", - "description": "This endpoint allows the caller to retrieve the current status of a run along with other relevant run details.\n A run becomes available immediately after it is created through the POST `/runs/` endpoint.\n\n To download the output results, use GET `/runs/{run_id}/` items to get outputs for all slides.\nAccess to a run is restricted to the user who created it.", - "operationId": "get_run_v1_runs__run_id__get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "Run id, returned by `POST /runs/` endpoint" - } - ], - 
"responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/RunReadResponse" - } - } - } - }, - "404": { - "description": "Run not found because it was deleted." - }, - "403": { - "description": "Forbidden - You don't have permission to see this run" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/cancel": { - "post": { - "tags": [ - "Public" - ], - "summary": "Cancel Run", - "description": "The run can be canceled by the user who created the run.\n\nThe execution can be canceled any time while the application is not in a final state. The\npending items will not be processed and will not add to the cost.\n\nWhen the application is canceled, the already completed items stay available for download.", - "operationId": "cancel_run_v1_runs__run_id__cancel_post", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "Run id, returned by `POST /runs/` endpoint" - } - ], - "responses": { - "202": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": {} - } - } - }, - "404": { - "description": "Run not found" - }, - "403": { - "description": "Forbidden - You don't have permission to cancel this run" - }, - "409": { - "description": "Conflict - The Run is already cancelled" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/items": { - "get": { - "tags": [ - "Public" - ], - "summary": "List Run Items", - "description": "List items in a run with filtering, sorting, and pagination capabilities.\n\nReturns paginated items within a specific run. Results can be filtered\nby item IDs, external_ids, status, and custom_metadata using JSONPath expressions.\n\n## JSONPath Metadata Filtering\nUse PostgreSQL JSONPath expressions to filter items using their custom_metadata.\n\n### Examples:\n- **Field existence**: `$.case_id` - Results that have a case_id field defined\n- **Exact value match**: `$.priority ? (@ == \"high\")` - Results with high priority\n- **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with high confidence\n- **Array operations**: `$.flags[*] ? (@ == \"reviewed\")` - Results flagged as reviewed\n- **Complex conditions**: `$.metrics ? 
(@.accuracy > 0.9 && @.recall > 0.8)` - Results meeting performance thresholds\n\n## Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations", - "operationId": "list_run_items_v1_runs__run_id__items_get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "Run id, returned by `POST /runs/` endpoint" - }, - { - "name": "item_id__in", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "array", - "items": { - "type": "string", - "format": "uuid" - } - }, - { - "type": "null" - } - ], - "description": "Filter for item ids", - "title": "Item Id In" - }, - "description": "Filter for item ids" - }, - { - "name": "external_id__in", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "array", - "items": { - "type": "string" - } - }, - { - "type": "null" - } - ], - "description": "Filter for items by their external_id from the input payload", - "title": "External Id In" - }, - "description": "Filter for items by their external_id from the input payload" - }, - { - "name": "state", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "$ref": "#/components/schemas/ItemState" - }, - { - "type": "null" - } - ], - "description": "Filter items by their state", - "title": "State" - }, - "description": "Filter items by their state" - }, - { - "name": "termination_reason", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "$ref": "#/components/schemas/ItemTerminationReason" - }, - { - "type": "null" - } - ], - "description": "Filter items by their termination reason. Only applies to TERMINATED items.", - "title": "Termination Reason" - }, - "description": "Filter items by their termination reason. Only applies to TERMINATED items." - }, - { - "name": "custom_metadata", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "string", - "maxLength": 1000 - }, - { - "type": "null" - } - ], - "description": "JSONPath expression to filter items by their custom_metadata", - "title": "Custom Metadata" - }, - "description": "JSONPath expression to filter items by their custom_metadata", - "examples": { - "no_filter": { - "summary": "No filter (returns all)", - "description": "Returns all items without filtering by custom metadata", - "value": "$" - }, - "field_exists": { - "summary": "Check if field exists", - "description": "Find items that have a project field defined", - "value": "$.project" - }, - "field_has_value": { - "summary": "Check if field has a certain value", - "description": "Compare a field value against a certain value", - "value": "$.project ? (@ == \"cancer-research\")" - }, - "numeric_comparisons": { - "summary": "Compare to a numeric value of a field", - "description": "Compare a field value against a numeric value of a field", - "value": "$.duration_hours ? (@ < 2)" - }, - "array_operations": { - "summary": "Check if an array contains a certain value", - "description": "Check if an array contains a certain value", - "value": "$.tags[*] ? 
(@ == \"production\")" - }, - "complex_filters": { - "summary": "Combine multiple checks", - "description": "Combine multiple checks", - "value": "$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)" - } - } - }, - { - "name": "page", - "in": "query", - "required": false, - "schema": { - "type": "integer", - "minimum": 1, - "default": 1, - "title": "Page" - } - }, - { - "name": "page_size", - "in": "query", - "required": false, - "schema": { - "type": "integer", - "maximum": 100, - "minimum": 5, - "default": 50, - "title": "Page Size" - } - }, - { - "name": "sort", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "array", - "items": { - "type": "string" - } - }, - { - "type": "null" - } - ], - "description": "Sort the items by one or more fields. Use `+` for ascending and `-` for descending order.\n **Available fields:**\n- `item_id`\n- `run_id`\n- `external_id`\n- `custom_metadata`\n\n**Examples:**\n- `?sort=item_id` - Sort by id of the item (ascending)\n- `?sort=-external_id` - Sort by external ID (descending)\n- `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending)", - "title": "Sort" - }, - "description": "Sort the items by one or more fields. Use `+` for ascending and `-` for descending order.\n **Available fields:**\n- `item_id`\n- `run_id`\n- `external_id`\n- `custom_metadata`\n\n**Examples:**\n- `?sort=item_id` - Sort by id of the item (ascending)\n- `?sort=-external_id` - Sort by external ID (descending)\n- `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending)" - } - ], - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "type": "array", - "items": { - "$ref": "#/components/schemas/ItemResultReadResponse" - }, - "title": "Response List Run Items V1 Runs Run Id Items Get" - } - } - } - }, - "404": { - "description": "Run not found" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/items/{external_id}": { - "get": { - "tags": [ - "Public" - ], - "summary": "Get Item By Run", - "description": "Retrieve details of a specific item (slide) by its external ID and the run ID.", - "operationId": "get_item_by_run_v1_runs__run_id__items__external_id__get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "The run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "The run id, returned by `POST /runs/` endpoint" - }, - { - "name": "external_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "description": "The `external_id` that was defined for the item by the customer that triggered the run.", - "title": "External Id" - }, - "description": "The `external_id` that was defined for the item by the customer that triggered the run." 
- } - ], - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/ItemResultReadResponse" - } - } - } - }, - "404": { - "description": "Not Found - Item with given ID does not exist" - }, - "403": { - "description": "Forbidden - You don't have permission to see this item" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/artifacts": { - "delete": { - "tags": [ - "Public" - ], - "summary": "Delete Run Items", - "description": "This endpoint allows the caller to explicitly delete artifacts generated by a run.\nIt can only be invoked when the run has reached a final state\n(`TERMINATED` with termination_reason `ALL_ITEMS_PROCESSED`, `CANCELED_BY_SYSTEM` or `CANCELED_BY_USER`).\nNote that by default, all artifacts are automatically deleted 30 days after the run finishes,\n regardless of whether the caller explicitly requests deletion.", - "operationId": "delete_run_items_v1_runs__run_id__artifacts_delete", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "Run id, returned by `POST /runs/` endpoint" - } - ], - "responses": { - "200": { - "description": "Run artifacts deleted", - "content": { - "application/json": { - "schema": {} - } - } - }, - "404": { - "description": "Run not found" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/custom-metadata": { - "put": { - "tags": [ - "Public" - ], - "summary": "Put Run Custom Metadata", - "operationId": "put_run_custom_metadata_v1_runs__run_id__custom_metadata_put", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "Run id, returned by `POST /runs/` endpoint" - } - ], - "requestBody": { - "required": true, - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/CustomMetadataUpdateRequest" - } - } - } - }, - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": {} - } - } - }, - "404": { - "description": "Run not found" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/items/{external_id}/custom-metadata": { - "put": { - "tags": [ - "Public" - ], - "summary": "Put Item Custom Metadata By Run", - "operationId": "put_item_custom_metadata_by_run_v1_runs__run_id__items__external_id__custom_metadata_put", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "The run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "The run id, returned by `POST /runs/` endpoint" - }, - 
{ - "name": "external_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "description": "The `external_id` that was defined for the item by the customer that triggered the run.", - "title": "External Id" - }, - "description": "The `external_id` that was defined for the item by the customer that triggered the run." - } - ], - "requestBody": { - "required": true, - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/CustomMetadataUpdateRequest" - } - } - } - }, - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": {} - } - } - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/me": { - "get": { - "tags": [ - "Public" - ], - "summary": "Get current user", - "description": "Retrieves your identity details, including name, email, and organization.\nThis is useful for verifying that the request is being made under the correct user profile\nand organization context, as well as confirming that the expected environment variables are correctly set\n(in case you are using Python SDK)", - "operationId": "get_me_v1_me_get", - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/MeReadResponse" - } - } - } - } - }, - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ] - } - } - }, - "components": { - "schemas": { - "ApplicationReadResponse": { - "properties": { - "application_id": { - "type": "string", - "title": "Application Id", - "description": "Application ID", - "examples": [ - "he-tme" - ] - }, - "name": { - "type": "string", - "title": "Name", - "description": "Application display name", - "examples": [ - "Atlas H&E-TME" - ] - }, - "regulatory_classes": { - "items": { - "type": "string" - }, - "type": "array", - "title": "Regulatory Classes", - "description": "Regulatory classes, to which the applications comply with. Possible values include: RUO, IVDR, FDA.", - "examples": [ - [ - "RUO" - ] - ] - }, - "description": { - "type": "string", - "title": "Description", - "description": "Describing what the application can do ", - "examples": [ - "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment." 
- ] - }, - "versions": { - "items": { - "$ref": "#/components/schemas/ApplicationVersion" - }, - "type": "array", - "title": "Versions", - "description": "All version numbers available to the user" - } - }, - "type": "object", - "required": [ - "application_id", - "name", - "regulatory_classes", - "description", - "versions" - ], - "title": "ApplicationReadResponse", - "description": "Response schema for `List available applications` and `Read Application by Id` endpoints" - }, - "ApplicationReadShortResponse": { - "properties": { - "application_id": { - "type": "string", - "title": "Application Id", - "description": "Application ID", - "examples": [ - "he-tme" - ] - }, - "name": { - "type": "string", - "title": "Name", - "description": "Application display name", - "examples": [ - "Atlas H&E-TME" - ] - }, - "regulatory_classes": { - "items": { - "type": "string" - }, - "type": "array", - "title": "Regulatory Classes", - "description": "Regulatory classes with which the application complies. Possible values include: RUO, IVDR, FDA.", - "examples": [ - [ - "RUO" - ] - ] - }, - "description": { - "type": "string", - "title": "Description", - "description": "Describes what the application can do", - "examples": [ - "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment." - ] - }, - "latest_version": { - "anyOf": [ - { - "$ref": "#/components/schemas/ApplicationVersion" - }, - { - "type": "null" - } - ], - "description": "The version with the highest version number available to the user" - } - }, - "type": "object", - "required": [ - "application_id", - "name", - "regulatory_classes", - "description" - ], - "title": "ApplicationReadShortResponse", - "description": "Response schema for `List available applications` and `Read Application by Id` endpoints" - }, - "ApplicationVersion": { - "properties": { - "number": { - "type": "string", - "title": "Number", - "description": "The version number", - "examples": [ - "1.0.0" - ] - }, - "released_at": { - "type": "string", - "format": "date-time", - "title": "Released At", - "description": "The timestamp for when the application version was made available in the Platform", - "examples": [ - "2025-09-15T10:30:45.123Z" - ] - } - }, - "type": "object", - "required": [ - "number", - "released_at" - ], - "title": "ApplicationVersion" - }, - "ArtifactOutput": { - "type": "string", - "enum": [ - "NONE", - "AVAILABLE", - "DELETED_BY_USER", - "DELETED_BY_SYSTEM" - ], - "title": "ArtifactOutput" - }, - "ArtifactState": { - "type": "string", - "enum": [ - "PENDING", - "PROCESSING", - "TERMINATED" - ], - "title": "ArtifactState" - }, - "ArtifactTerminationReason": { - "type": "string", - "enum": [ - "SUCCEEDED", - "USER_ERROR", - "SYSTEM_ERROR", - "SKIPPED" - ], - "title": "ArtifactTerminationReason" - }, - "Auth0Organization": { - "properties": { - "id": { - "type": "string", - "title": "Id", - "description": "Unique organization identifier", - "examples": [ - "org_123456" - ] - }, - "name": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Name", - "description": "Organization name (e.g. “aignx”)", - "examples": [ - "aignx" - ] - }, - "display_name": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Display Name", - "description": "Public organization name (e.g. 
“Aignostics GmbH”)", - "examples": [ - "Aignostics GmbH" - ] - }, - "aignostics_bucket_hmac_access_key_id": { - "type": "string", - "title": "Aignostics Bucket Hmac Access Key Id", - "description": "HMAC access key ID for the Aignostics-provided storage bucket. Used to authenticate requests for uploading files and generating signed URLs", - "examples": [ - "YOUR_HMAC_ACCESS_KEY_ID" - ] - }, - "aignostics_bucket_hmac_secret_access_key": { - "type": "string", - "title": "Aignostics Bucket Hmac Secret Access Key", - "description": "HMAC secret access key paired with the access key ID. Keep this credential secure.", - "examples": [ - "YOUR/HMAC/SECRET_ACCESS_KEY" - ] - }, - "aignostics_bucket_name": { - "type": "string", - "title": "Aignostics Bucket Name", - "description": "Name of the bucket provided by Aignostics for storing input artifacts (slide images)", - "examples": [ - "aignostics-platform-bucket" - ] - }, - "aignostics_bucket_protocol": { - "type": "string", - "title": "Aignostics Bucket Protocol", - "description": "Protocol to use for bucket access. Defines the URL scheme for connecting to the storage service", - "examples": [ - "gs" - ] - }, - "aignostics_logfire_token": { - "type": "string", - "title": "Aignostics Logfire Token", - "description": "Authentication token for Logfire observability service. Enables sending application logs and performance metrics to Aignostics for monitoring and support", - "examples": [ - "your-logfire-token" - ] - }, - "aignostics_sentry_dsn": { - "type": "string", - "title": "Aignostics Sentry Dsn", - "description": "Data Source Name (DSN) for Sentry error tracking service. Allows automatic reporting of errors and exceptions to Aignostics support team", - "examples": [ - "https://2354s3#ewsha@o44.ingest.us.sentry.io/34345123432" - ] - } - }, - "type": "object", - "required": [ - "id", - "aignostics_bucket_hmac_access_key_id", - "aignostics_bucket_hmac_secret_access_key", - "aignostics_bucket_name", - "aignostics_bucket_protocol", - "aignostics_logfire_token", - "aignostics_sentry_dsn" - ], - "title": "Auth0Organization", - "description": "Model for Auth0 Organization object returned from Auth0 API.\n\nFor details, see:\nhttps://auth0.com/docs/api/management/v2#!/Organizations/get_organizations_by_id\n\nAignostics-specific metadata fields are extracted from the `metadata` field." 
- }, - "Auth0User": { - "properties": { - "id": { - "type": "string", - "title": "Id", - "description": "Unique user identifier", - "examples": [ - "auth0|123456" - ] - }, - "email": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Email", - "description": "User email", - "examples": [ - "user@domain.com" - ] - }, - "email_verified": { - "anyOf": [ - { - "type": "boolean" - }, - { - "type": "null" - } - ], - "title": "Email Verified", - "examples": [ - true - ] - }, - "name": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Name", - "description": "First and last name of the user", - "examples": [ - "Jane Doe" - ] - }, - "given_name": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Given Name", - "examples": [ - "Jane" - ] - }, - "family_name": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Family Name", - "examples": [ - "Doe" - ] - }, - "nickname": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Nickname", - "examples": [ - "jdoe" - ] - }, - "picture": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Picture", - "examples": [ - "https://example.com/jdoe.jpg" - ] - }, - "updated_at": { - "anyOf": [ - { - "type": "string", - "format": "date-time" - }, - { - "type": "null" - } - ], - "title": "Updated At", - "examples": [ - "2023-10-05T14:48:00.000Z" - ] - } - }, - "type": "object", - "required": [ - "id" - ], - "title": "Auth0User", - "description": "Model for Auth0 User object returned from Auth0 API.\n\nFor details, see:\nhttps://auth0.com/docs/api/management/v2/users/get-users-by-id" - }, - "CustomMetadataUpdateRequest": { - "properties": { - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "JSON metadata that should be set for the run", - "examples": [ - { - "department": "D1", - "study": "abc-1" - } - ] - }, - "custom_metadata_checksum": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata Checksum", - "description": "Optional field to verify that the latest custom metadata was known. If set to the checksum retrieved via the /runs endpoint, it must match the checksum of the current value in the database.", - "examples": [ - "f54fe109" - ] - } - }, - "type": "object", - "title": "CustomMetadataUpdateRequest" - }, - "HTTPValidationError": { - "properties": { - "detail": { - "items": { - "$ref": "#/components/schemas/ValidationError" - }, - "type": "array", - "title": "Detail" - } - }, - "type": "object", - "title": "HTTPValidationError" - }, - "InputArtifact": { - "properties": { - "name": { - "type": "string", - "title": "Name" - }, - "mime_type": { - "type": "string", - "pattern": "^\\w+\\/\\w+[-+.|\\w+]+\\w+$", - "title": "Mime Type", - "examples": [ - "image/tiff" - ] - }, - "metadata_schema": { - "type": "object", - "title": "Metadata Schema" - } - }, - "type": "object", - "required": [ - "name", - "mime_type", - "metadata_schema" - ], - "title": "InputArtifact" - }, - "InputArtifactCreationRequest": { - "properties": { - "name": { - "type": "string", - "title": "Name", - "description": "Type of artifact. 
For Atlas H&E-TME, use \"input_slide\"", - "examples": [ - "input_slide" - ] - }, - "download_url": { - "type": "string", - "maxLength": 2083, - "minLength": 1, - "format": "uri", - "title": "Download Url", - "description": "[Signed URL](https://cloud.google.com/cdn/docs/using-signed-urls) to the input artifact file. The URL should be valid for at least 6 days from the payload submission time.", - "examples": [ - "https://example.com/case-no-1-slide.tiff" - ] - }, - "metadata": { - "type": "object", - "title": "Metadata", - "description": "The metadata of the artifact, required by the application version. The JSON schema of the metadata can be requested by `/v1/versions/{application_version_id}`. The schema is located in `input_artifacts.[].metadata_schema`", - "examples": [ - { - "checksum_base64_crc32c": "752f9554", - "height": 2000, - "height_mpp": 0.5, - "width": 10000, - "width_mpp": 0.5 - } - ] - } - }, - "type": "object", - "required": [ - "name", - "download_url", - "metadata" - ], - "title": "InputArtifactCreationRequest", - "description": "Input artifact containing the slide image and associated metadata." - }, - "ItemCreationRequest": { - "properties": { - "external_id": { - "type": "string", - "maxLength": 255, - "title": "External Id", - "description": "Unique identifier for this item within the run. Used for referencing items. Must be unique across all items in the same run", - "examples": [ - "slide_1", - "patient_001_slide_A", - "sample_12345" - ] - }, - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "Optional JSON custom_metadata to store additional information alongside an item.", - "examples": [ - { - "case": "abc" - } - ] - }, - "input_artifacts": { - "items": { - "$ref": "#/components/schemas/InputArtifactCreationRequest" - }, - "type": "array", - "title": "Input Artifacts", - "description": "List of input artifacts for this item. For Atlas H&E-TME, typically contains one artifact (the slide image)", - "examples": [ - [ - { - "download_url": "https://example-bucket.s3.amazonaws.com/slide1.tiff", - "metadata": { - "checksum_base64_crc32c": "64RKKA==", - "height_px": 87761, - "media-type": "image/tiff", - "resolution_mpp": 0.2628238, - "specimen": { - "disease": "LUNG_CANCER", - "tissue": "LUNG" - }, - "staining_method": "H&E", - "width_px": 136223 - }, - "name": "input_slide" - } - ] - ] - } - }, - "type": "object", - "required": [ - "external_id", - "input_artifacts" - ], - "title": "ItemCreationRequest", - "description": "Individual item (slide) to be processed in a run." - }, - "ItemOutput": { - "type": "string", - "enum": [ - "NONE", - "FULL" - ], - "title": "ItemOutput" - }, - "ItemResultReadResponse": { - "properties": { - "item_id": { - "type": "string", - "format": "uuid", - "title": "Item Id", - "description": "Item UUID generated by the Platform" - }, - "external_id": { - "type": "string", - "title": "External Id", - "description": "The external_id of the item from the user payload", - "examples": [ - "slide_1" - ] - }, - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "The custom_metadata of the item that has been provided by the user on run creation." 
- }, - "custom_metadata_checksum": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata Checksum", - "description": "The checksum of the `custom_metadata` field.\nCan be used in the `PUT /runs/{run-id}/items/{external_id}/custom_metadata`\nrequest to avoid unwanted override of the values in concurrent requests.", - "examples": [ - "f54fe109" - ] - }, - "state": { - "$ref": "#/components/schemas/ItemState", - "description": "\nThe item moves from `PENDING` to `PROCESSING` to `TERMINATED` state.\nWhen terminated, consult the `termination_reason` property to see whether it was successful.\n " - }, - "output": { - "$ref": "#/components/schemas/ItemOutput", - "description": "The output status of the item (NONE, FULL)" - }, - "termination_reason": { - "anyOf": [ - { - "$ref": "#/components/schemas/ItemTerminationReason" - }, - { - "type": "null" - } - ], - "description": "\nWhen the `state` is `TERMINATED` this will explain why\n`SUCCEEDED` -> Successful processing.\n`USER_ERROR` -> Failed because the provided input was invalid.\n`SYSTEM_ERROR` -> There was an error in the model or platform.\n`SKIPPED` -> Was cancelled\n" - }, - "error_message": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Message", - "description": "\n The error message in case the `termination_reason` is in `USER_ERROR` or `SYSTEM_ERROR`\n ", - "examples": [ - "This item was not processed because the threshold of 3 items finishing in error state (user or system error) was reached before the item was processed.", - "The item was not processed because the run was cancelled by the user before the item was processed.User error raised by Application because the input data provided by the user cannot be processed:\nThe image width is 123000 px, but the maximum width is 100000 px", - "A system error occurred during the item execution:\n System went out of memory in cell classification", - "An unknown system error occurred during the item execution" - ] - }, - "terminated_at": { - "anyOf": [ - { - "type": "string", - "format": "date-time" - }, - { - "type": "null" - } - ], - "title": "Terminated At", - "description": "Timestamp showing when the item reached a terminal state.", - "examples": [ - "2024-01-15T10:30:45.123Z" - ] - }, - "output_artifacts": { - "items": { - "$ref": "#/components/schemas/OutputArtifactResultReadResponse" - }, - "type": "array", - "title": "Output Artifacts", - "description": "\nThe list of the results generated by the application algorithm. 
The number of files and their\ntypes depend on the particular application version; call `/v1/versions/{version_id}` to get\nthe details.\n " - } - }, - "type": "object", - "required": [ - "item_id", - "external_id", - "custom_metadata", - "state", - "output", - "output_artifacts" - ], - "title": "ItemResultReadResponse", - "description": "Response schema for items in `List Run Items` endpoint" - }, - "ItemState": { - "type": "string", - "enum": [ - "PENDING", - "PROCESSING", - "TERMINATED" - ], - "title": "ItemState" - }, - "ItemTerminationReason": { - "type": "string", - "enum": [ - "SUCCEEDED", - "USER_ERROR", - "SYSTEM_ERROR", - "SKIPPED" - ], - "title": "ItemTerminationReason" - }, - "MeReadResponse": { - "properties": { - "user": { - "$ref": "#/components/schemas/Auth0User" - }, - "organization": { - "$ref": "#/components/schemas/Auth0Organization" - } - }, - "type": "object", - "required": [ - "user", - "organization" - ], - "title": "MeReadResponse", - "description": "Response schema for `Get current user` endpoint" - }, - "OutputArtifact": { - "properties": { - "name": { - "type": "string", - "title": "Name" - }, - "mime_type": { - "type": "string", - "pattern": "^\\w+\\/\\w+[-+.|\\w+]+\\w+$", - "title": "Mime Type", - "examples": [ - "application/vnd.apache.parquet" - ] - }, - "metadata_schema": { - "type": "object", - "title": "Metadata Schema" - }, - "scope": { - "$ref": "#/components/schemas/OutputArtifactScope" - }, - "visibility": { - "$ref": "#/components/schemas/OutputArtifactVisibility" - } - }, - "type": "object", - "required": [ - "name", - "mime_type", - "metadata_schema", - "scope", - "visibility" - ], - "title": "OutputArtifact" - }, - "OutputArtifactResultReadResponse": { - "properties": { - "output_artifact_id": { - "type": "string", - "format": "uuid", - "title": "Output Artifact Id", - "description": "The Id of the artifact. Used internally" - }, - "name": { - "type": "string", - "title": "Name", - "description": "\nName of the output, as defined in the output schema of the `/v1/versions/{version_id}` endpoint.\n ", - "examples": [ - "tissue_qc:tiff_heatmap" - ] - }, - "metadata": { - "type": "object", - "title": "Metadata", - "description": "The metadata of the output artifact, provided by the application" - }, - "state": { - "$ref": "#/components/schemas/ArtifactState", - "description": "The current state of the artifact (PENDING, PROCESSING, TERMINATED)" - }, - "termination_reason": { - "anyOf": [ - { - "$ref": "#/components/schemas/ArtifactTerminationReason" - }, - { - "type": "null" - } - ], - "description": "The reason for termination when state is TERMINATED" - }, - "output": { - "$ref": "#/components/schemas/ArtifactOutput", - "description": "The output status of the artifact (NONE, AVAILABLE, DELETED_BY_USER, DELETED_BY_SYSTEM)" - }, - "error_message": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Message", - "description": "Error message when the artifact is in an error state" - }, - "download_url": { - "anyOf": [ - { - "type": "string", - "maxLength": 2083, - "minLength": 1, - "format": "uri" - }, - { - "type": "null" - } - ], - "title": "Download Url", - "description": "\nThe download URL to the output file. 
The URL is valid for 1 hour after the endpoint is called.\nA new URL is generated every time the endpoint is called.\n " - } - }, - "type": "object", - "required": [ - "output_artifact_id", - "name", - "metadata", - "state", - "output", - "download_url" - ], - "title": "OutputArtifactResultReadResponse" - }, - "OutputArtifactScope": { - "type": "string", - "enum": [ - "ITEM", - "GLOBAL" - ], - "title": "OutputArtifactScope" - }, - "OutputArtifactVisibility": { - "type": "string", - "enum": [ - "INTERNAL", - "EXTERNAL" - ], - "title": "OutputArtifactVisibility" - }, - "RunCreationRequest": { - "properties": { - "application_id": { - "type": "string", - "title": "Application Id", - "description": "Unique ID for the application to use for processing", - "examples": [ - "he-tme" - ] - }, - "version_number": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Version Number", - "description": "Semantic version of the application to use for processing. If not provided, the latest available version will be used", - "examples": [ - "1.0.0-beta1" - ] - }, - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "Optional JSON metadata to store additional information alongside the run", - "examples": [ - { - "department": "D1", - "study": "abc-1" - } - ] - }, - "items": { - "items": { - "$ref": "#/components/schemas/ItemCreationRequest" - }, - "type": "array", - "minItems": 1, - "title": "Items", - "description": "List of items (slides) to process. Each item represents a whole slide image (WSI) with its associated metadata and artifacts", - "examples": [ - [ - { - "external_id": "slide_1", - "input_artifacts": [ - { - "download_url": "https://example-bucket.s3.amazonaws.com/slide1.tiff?signature=...", - "metadata": { - "checksum_base64_crc32c": "64RKKA==", - "height_px": 87761, - "media-type": "image/tiff", - "resolution_mpp": 0.2628238, - "specimen": { - "disease": "LUNG_CANCER", - "tissue": "LUNG" - }, - "staining_method": "H&E", - "width_px": 136223 - }, - "name": "input_slide" - } - ] - } - ] - ] - } - }, - "type": "object", - "required": [ - "application_id", - "items" - ], - "title": "RunCreationRequest", - "description": "Request schema for `Initiate Run` endpoint.\nIt describes which application version is chosen, and which user data should be processed." 
- }, - "RunCreationResponse": { - "properties": { - "run_id": { - "type": "string", - "format": "uuid", - "title": "Run Id", - "default": "Run id", - "examples": [ - "3fa85f64-5717-4562-b3fc-2c963f66afa6" - ] - } - }, - "type": "object", - "title": "RunCreationResponse" - }, - "RunItemStatistics": { - "properties": { - "item_count": { - "type": "integer", - "title": "Item Count", - "description": "Total number of the items in the run" - }, - "item_pending_count": { - "type": "integer", - "title": "Item Pending Count", - "description": "The number of items in `PENDING` state" - }, - "item_processing_count": { - "type": "integer", - "title": "Item Processing Count", - "description": "The number of items in `PROCESSING` state" - }, - "item_user_error_count": { - "type": "integer", - "title": "Item User Error Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `USER_ERROR`" - }, - "item_system_error_count": { - "type": "integer", - "title": "Item System Error Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `SYSTEM_ERROR`" - }, - "item_skipped_count": { - "type": "integer", - "title": "Item Skipped Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `SKIPPED`" - }, - "item_succeeded_count": { - "type": "integer", - "title": "Item Succeeded Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `SUCCEEDED`" - } - }, - "type": "object", - "required": [ - "item_count", - "item_pending_count", - "item_processing_count", - "item_user_error_count", - "item_system_error_count", - "item_skipped_count", - "item_succeeded_count" - ], - "title": "RunItemStatistics" - }, - "RunOutput": { - "type": "string", - "enum": [ - "NONE", - "PARTIAL", - "FULL" - ], - "title": "RunOutput" - }, - "RunReadResponse": { - "properties": { - "run_id": { - "type": "string", - "format": "uuid", - "title": "Run Id", - "description": "UUID of the application" - }, - "application_id": { - "type": "string", - "title": "Application Id", - "description": "Application id", - "examples": [ - "he-tme" - ] - }, - "version_number": { - "type": "string", - "title": "Version Number", - "description": "Application version number", - "examples": [ - "0.4.4" - ] - }, - "state": { - "$ref": "#/components/schemas/RunState", - "description": "When the run request is received by the Platform, the `state` of it is set to\n`PENDING`. The state changes to `PROCESSING` when at least one item is being processed. After `PROCESSING`, the\nstate of the run can switch back to `PENDING` if there are no processing items, or to `TERMINATED` when the run\nfinished processing." - }, - "output": { - "$ref": "#/components/schemas/RunOutput", - "description": "The status of the output of the run. When 0 items are successfully processed the output is\n`NONE`, after one item is successfully processed, the value is set to `PARTIAL`. When all items of the run are\nsuccessfully processed, the output is set to `FULL`." - }, - "termination_reason": { - "anyOf": [ - { - "$ref": "#/components/schemas/RunTerminationReason" - }, - { - "type": "null" - } - ], - "description": "The termination reason of the run. When the run is not in `TERMINATED` state, the\n termination_reason is `null`. If all items of of the run are processed (successfully or with an error), then\n termination_reason is set to `ALL_ITEMS_PROCESSED`. 
If the run is canceled by the user, the value is set to\n `CANCELED_BY_USER`. If the run reaches the threshold of failed items, the Platform cancels the run\n and sets the termination_reason to `CANCELED_BY_SYSTEM`.\n " - }, - "error_code": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Code", - "description": "When the termination_reason is set to CANCELED_BY_SYSTEM, the error_code is set to provide a\n structured description of the error.", - "examples": [ - "SCHEDULER.ITEMS_WITH_ERROR_THRESHOLD_REACHED" - ] - }, - "error_message": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Message", - "description": "When the termination_reason is set to CANCELED_BY_SYSTEM, the error_message is set to provide\n more insight into the error cause.", - "examples": [ - "Run canceled given errors on more than 10 items." - ] - }, - "statistics": { - "$ref": "#/components/schemas/RunItemStatistics", - "description": "Aggregated statistics of the run execution" - }, - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "Optional JSON metadata that was stored alongside the run by the user", - "examples": [ - { - "department": "D1", - "study": "abc-1" - } - ] - }, - "custom_metadata_checksum": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata Checksum", - "description": "The checksum of the `custom_metadata` field. Can be used in the `PUT /runs/{run_id}/custom-metadata`\nrequest to avoid unwanted override of the values in concurrent requests.", - "examples": [ - "f54fe109" - ] - }, - "submitted_at": { - "type": "string", - "format": "date-time", - "title": "Submitted At", - "description": "Timestamp showing when the run was triggered" - }, - "submitted_by": { - "type": "string", - "title": "Submitted By", - "description": "Id of the user who triggered the run", - "examples": [ - "auth0|123456" - ] - }, - "terminated_at": { - "anyOf": [ - { - "type": "string", - "format": "date-time" - }, - { - "type": "null" - } - ], - "title": "Terminated At", - "description": "Timestamp showing when the run reached a terminal state.", - "examples": [ - "2024-01-15T10:30:45.123Z" - ] - } - }, - "type": "object", - "required": [ - "run_id", - "application_id", - "version_number", - "state", - "output", - "termination_reason", - "error_code", - "error_message", - "statistics", - "submitted_at", - "submitted_by" - ], - "title": "RunReadResponse", - "description": "Response schema for `Get run details` endpoint" - }, - "RunState": { - "type": "string", - "enum": [ - "PENDING", - "PROCESSING", - "TERMINATED" - ], - "title": "RunState" - }, - "RunTerminationReason": { - "type": "string", - "enum": [ - "ALL_ITEMS_PROCESSED", - "CANCELED_BY_SYSTEM", - "CANCELED_BY_USER" - ], - "title": "RunTerminationReason" - }, - "ValidationError": { - "properties": { - "loc": { - "items": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "integer" - } - ] - }, - "type": "array", - "title": "Location" - }, - "msg": { - "type": "string", - "title": "Message" - }, - "type": { - "type": "string", - "title": "Error Type" - } - }, - "type": "object", - "required": [ - "loc", - "msg", - "type" - ], - "title": "ValidationError" - }, - "VersionReadResponse": { - "properties": { - "version_number": { - "type": "string", - "title": "Version Number", - "description": "Semantic version of the application" - }, - 
"changelog": { - "type": "string", - "title": "Changelog", - "description": "Description of the changes relative to the previous version" - }, - "input_artifacts": { - "items": { - "$ref": "#/components/schemas/InputArtifact" - }, - "type": "array", - "title": "Input Artifacts", - "description": "List of the input fields, provided by the User" - }, - "output_artifacts": { - "items": { - "$ref": "#/components/schemas/OutputArtifact" - }, - "type": "array", - "title": "Output Artifacts", - "description": "List of the output fields, generated by the application" - }, - "released_at": { - "type": "string", - "format": "date-time", - "title": "Released At", - "description": "The timestamp when the application version was registered" - } - }, - "type": "object", - "required": [ - "version_number", - "changelog", - "input_artifacts", - "output_artifacts", - "released_at" - ], - "title": "VersionReadResponse", - "description": "Base Response schema for the `Application Version Details` endpoint" - } - }, - "securitySchemes": { - "OAuth2AuthorizationCodeBearer": { - "type": "oauth2", - "flows": { - "authorizationCode": { - "scopes": {}, - "authorizationUrl": "https://aignostics-platform-staging.eu.auth0.com/authorize", - "tokenUrl": "https://aignostics-platform-staging.eu.auth0.com/oauth/token" - } - } - } - } - } -} diff --git a/codegen/in/archive/openapi_1.0.0.beta7.json b/codegen/in/archive/openapi_1.0.0.beta7.json deleted file mode 100644 index ed47ea22..00000000 --- a/codegen/in/archive/openapi_1.0.0.beta7.json +++ /dev/null @@ -1,2675 +0,0 @@ -{ - "openapi": "3.1.0", - "info": { - "title": "Aignostics Platform API", - "description": "\nThe Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. \n\nTo begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. \n\nMore information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com).\n\n**How to authorize and test API endpoints:**\n\n1. Click the \"Authorize\" button in the right corner below\n3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials\n4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint\n\n**Note**: You only need to authorize once per session. 
The lock icons next to endpoints will show green when authorized.\n\n", - "version": "1.0.0.beta7" - }, - "servers": [ - { - "url": "/api" - } - ], - "paths": { - "/v1/applications": { - "get": { - "tags": [ - "Public" - ], - "summary": "List available applications", - "description": "Returns the list of the applications available to the caller.\n\nThe application is available if any of the versions of the application is assigned to the caller’s organization.\nThe response is paginated and sorted according to the provided parameters.", - "operationId": "list_applications_v1_applications_get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "page", - "in": "query", - "required": false, - "schema": { - "type": "integer", - "minimum": 1, - "default": 1, - "title": "Page" - } - }, - { - "name": "page-size", - "in": "query", - "required": false, - "schema": { - "type": "integer", - "maximum": 100, - "minimum": 5, - "default": 50, - "title": "Page-Size" - } - }, - { - "name": "sort", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "array", - "items": { - "type": "string" - } - }, - { - "type": "null" - } - ], - "description": "Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `application_id`\n- `name`\n- `description`\n- `regulatory_classes`\n\n**Examples:**\n- `?sort=application_id` - Sort by application_id ascending\n- `?sort=-name` - Sort by name descending\n- `?sort=+description&sort=name` - Sort by description ascending, then by name ascending", - "title": "Sort" - }, - "description": "Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `application_id`\n- `name`\n- `description`\n- `regulatory_classes`\n\n**Examples:**\n- `?sort=application_id` - Sort by application_id ascending\n- `?sort=-name` - Sort by name descending\n- `?sort=+description&sort=name` - Sort by description ascending, then by name ascending" - } - ], - "responses": { - "200": { - "description": "A list of applications available to the caller", - "content": { - "application/json": { - "schema": { - "type": "array", - "items": { - "$ref": "#/components/schemas/ApplicationReadShortResponse" - }, - "title": "Response List Applications V1 Applications Get" - }, - "example": [ - { - "application_id": "he-tme", - "name": "Atlas H&E-TME", - "regulatory_classes": [ - "RUO" - ], - "description": "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment.", - "latest_version": { - "number": "1.0.0", - "released_at": "2025-09-01T19:01:05.401Z" - } - }, - { - "application_id": "test-app", - "name": "Test Application", - "regulatory_classes": [ - "RUO" - ], - "description": "This is the test application with two algorithms: TissueQc and Tissue Segmentation", - "latest_version": { - "number": "2.0.0", - "released_at": "2025-09-02T19:01:05.401Z" - } - } - ] - } - } - }, - "401": { - "description": "Unauthorized - Invalid or missing authentication" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/applications/{application_id}": { - "get": { - "tags": [ - "Public" - ], - "summary": "Read Application By Id", - "description": 
"Retrieve details of a specific application by its ID.", - "operationId": "read_application_by_id_v1_applications__application_id__get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "application_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "title": "Application Id" - } - } - ], - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/ApplicationReadResponse" - } - } - } - }, - "403": { - "description": "Forbidden - You don't have permission to see this application" - }, - "404": { - "description": "Not Found - Application with the given ID does not exist" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/applications/{application_id}/versions/{version}": { - "get": { - "tags": [ - "Public" - ], - "summary": "Application Version Details", - "description": "Get the application version details\n\nAllows caller to retrieve information about application version based on provided application version ID.", - "operationId": "application_version_details_v1_applications__application_id__versions__version__get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "application_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "title": "Application Id" - } - }, - { - "name": "version", - "in": "path", - "required": true, - "schema": { - "type": "string", - "title": "Version" - } - } - ], - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/VersionReadResponse" - }, - "example": { - "version_number": "0.4.4", - "changelog": "New deployment", - "input_artifacts": [ - { - "name": "whole_slide_image", - "mime_type": "image/tiff", - "metadata_schema": { - "type": "object", - "$defs": { - "LungCancerMetadata": { - "type": "object", - "title": "LungCancerMetadata", - "required": [ - "type", - "tissue" - ], - "properties": { - "type": { - "enum": [ - "lung" - ], - "type": "string", - "const": "lung", - "title": "Type" - }, - "tissue": { - "enum": [ - "lung", - "lymph node", - "liver", - "adrenal gland", - "bone", - "brain" - ], - "type": "string", - "title": "Tissue" - } - }, - "additionalProperties": false - } - }, - "title": "ExternalImageMetadata", - "$schema": "http://json-schema.org/draft-07/schema#", - "required": [ - "checksum_crc32c", - "base_mpp", - "width", - "height", - "cancer" - ], - "properties": { - "stain": { - "enum": [ - "H&E" - ], - "type": "string", - "const": "H&E", - "title": "Stain", - "default": "H&E" - }, - "width": { - "type": "integer", - "title": "Width", - "maximum": 150000, - "minimum": 1 - }, - "cancer": { - "anyOf": [ - { - "$ref": "#/$defs/LungCancerMetadata" - } - ], - "title": "Cancer" - }, - "height": { - "type": "integer", - "title": "Height", - "maximum": 150000, - "minimum": 1 - }, - "base_mpp": { - "type": "number", - "title": "Base Mpp", - "maximum": 0.5, - "minimum": 0.125 - }, - "mime_type": { - "enum": [ - "application/dicom", - "image/tiff" - ], - "type": "string", - "title": "Mime Type", - "default": "image/tiff" - }, - "checksum_crc32c": { - "type": "string", - "title": "Checksum Crc32C" - } - }, - "description": "Metadata corresponding to an external image.", - 
"additionalProperties": false - } - } - ], - "output_artifacts": [ - { - "name": "tissue_qc:tiff_heatmap", - "mime_type": "image/tiff", - "metadata_schema": { - "type": "object", - "title": "HeatmapMetadata", - "$schema": "http://json-schema.org/draft-07/schema#", - "required": [ - "checksum_crc32c", - "width", - "height", - "class_colors" - ], - "properties": { - "width": { - "type": "integer", - "title": "Width" - }, - "height": { - "type": "integer", - "title": "Height" - }, - "base_mpp": { - "type": "number", - "title": "Base Mpp", - "maximum": 0.5, - "minimum": 0.125 - }, - "mime_type": { - "enum": [ - "image/tiff" - ], - "type": "string", - "const": "image/tiff", - "title": "Mime Type", - "default": "image/tiff" - }, - "class_colors": { - "type": "object", - "title": "Class Colors", - "additionalProperties": { - "type": "array", - "maxItems": 3, - "minItems": 3, - "prefixItems": [ - { - "type": "integer", - "maximum": 255, - "minimum": 0 - }, - { - "type": "integer", - "maximum": 255, - "minimum": 0 - }, - { - "type": "integer", - "maximum": 255, - "minimum": 0 - } - ] - } - }, - "checksum_crc32c": { - "type": "string", - "title": "Checksum Crc32C" - } - }, - "description": "Metadata corresponding to a segmentation heatmap file.", - "additionalProperties": false - }, - "scope": "ITEM", - "visibility": "EXTERNAL" - } - ], - "released_at": "2025-04-16T08:45:20.655972Z" - } - } - } - }, - "403": { - "description": "Forbidden - You don't have permission to see this version" - }, - "404": { - "description": "Not Found - Application version with given ID is not available to you or does not exist" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs": { - "get": { - "tags": [ - "Public" - ], - "summary": "List Runs", - "description": "List runs with filtering, sorting, and pagination capabilities.\n\nReturns paginated runs that were submitted by the user.", - "operationId": "list_runs_v1_runs_get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "application_id", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "description": "Optional application ID filter", - "examples": [ - "tissue-segmentation", - "heta" - ], - "title": "Application Id" - }, - "description": "Optional application ID filter" - }, - { - "name": "application_version", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "description": "Optional Version Name", - "examples": [ - "1.0.2", - "1.0.1-beta2" - ], - "title": "Application Version" - }, - "description": "Optional Version Name" - }, - { - "name": "external_id", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "description": "Optionally filter runs by items with this external ID", - "examples": [ - "slide_001", - "patient_12345_sample_A" - ], - "title": "External Id" - }, - "description": "Optionally filter runs by items with this external ID" - }, - { - "name": "custom_metadata", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "string", - "maxLength": 1000 - }, - { - "type": "null" - } - ], - "description": "Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata.\n#### URL Encoding 
Required\n**Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding.\n\n#### Examples (Clear Format):\n- **Field existence**: `$.study` - Runs that have a study field defined\n- **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value\n- **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75\n- **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\"\n- **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements\n\n#### Examples (URL-Encoded Format):\n- **Field existence**: `%24.study`\n- **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)`\n- **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)`\n- **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)`\n- **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)`\n\n#### Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations\n- Regular expressions use `like_regex` with standard regex syntax\n- **Remember to URL-encode the entire JSONPath expression when making HTTP requests**\n\n ", - "title": "Custom Metadata" - }, - "description": "Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata.\n#### URL Encoding Required\n**Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding.\n\n#### Examples (Clear Format):\n- **Field existence**: `$.study` - Runs that have a study field defined\n- **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value\n- **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75\n- **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\"\n- **Complex conditions**: `$.resources ? 
(@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements\n\n#### Examples (URL-Encoded Format):\n- **Field existence**: `%24.study`\n- **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)`\n- **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)`\n- **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)`\n- **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)`\n\n#### Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations\n- Regular expressions use `like_regex` with standard regex syntax\n- **Remember to URL-encode the entire JSONPath expression when making HTTP requests**\n\n ", - "examples": { - "no_filter": { - "summary": "No filter (returns all)", - "description": "Returns all items without filtering by custom metadata", - "value": "$" - }, - "field_exists": { - "summary": "Check if field exists", - "description": "Find applications that have a project field defined", - "value": "$.study" - }, - "field_has_value": { - "summary": "Check if field has a certain value", - "description": "Compare a field value against a certain value", - "value": "$.study ? (@ == \"abc-1\")" - }, - "numeric_comparisons": { - "summary": "Compare to a numeric value of a field", - "description": "Compare a field value against a numeric value of a field", - "value": "$.confidence_score ? (@ > 0.75)" - }, - "array_operations": { - "summary": "Check if an array contains a certain value", - "description": "Check if an array contains a certain value", - "value": "$.tags[*] ? (@ == \"draft\")" - }, - "complex_filters": { - "summary": "Combine multiple checks", - "description": "Combine multiple checks", - "value": "$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)" - } - } - }, - { - "name": "page", - "in": "query", - "required": false, - "schema": { - "type": "integer", - "minimum": 1, - "default": 1, - "title": "Page" - } - }, - { - "name": "page_size", - "in": "query", - "required": false, - "schema": { - "type": "integer", - "maximum": 100, - "minimum": 5, - "default": 50, - "title": "Page Size" - } - }, - { - "name": "sort", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "array", - "items": { - "type": "string" - } - }, - { - "type": "null" - } - ], - "description": "Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `run_id`\n- `application_version_id`\n- `organization_id`\n- `status`\n- `submitted_at`\n- `submitted_by`\n\n**Examples:**\n- `?sort=submitted_at` - Sort by creation time (ascending)\n- `?sort=-submitted_at` - Sort by creation time (descending)\n- `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending)\n", - "title": "Sort" - }, - "description": "Sort the results by one or more fields. 
Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `run_id`\n- `application_version_id`\n- `organization_id`\n- `status`\n- `submitted_at`\n- `submitted_by`\n\n**Examples:**\n- `?sort=submitted_at` - Sort by creation time (ascending)\n- `?sort=-submitted_at` - Sort by creation time (descending)\n- `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending)\n" - } - ], - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "type": "array", - "items": { - "$ref": "#/components/schemas/RunReadResponse" - }, - "title": "Response List Runs V1 Runs Get" - } - } - } - }, - "404": { - "description": "Run not found" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - }, - "post": { - "tags": [ - "Public" - ], - "summary": "Initiate Run", - "description": "This endpoint initiates a processing run for a selected application and version, and returns a `run_id` for tracking purposes.\n\nSlide processing occurs asynchronously, allowing you to retrieve results for individual slides as soon as they\ncomplete processing. The system typically processes slides in batches of four, though this number may be reduced\nduring periods of high demand.\nBelow is an example of the required payload for initiating an Atlas H&E TME processing run.\n\n\n### Payload\n\nThe payload includes `application_id`, optional `version_number`, and `items` base fields.\n\n`application_id` is the unique identifier for the application.\n`version_number` is the semantic version to use. If not provided, the latest available version will be used.\n\n`items` includes the list of the items to process (slides, in case of HETA application).\nEvery item has a set of standard fields defined by the API, plus the custom_metadata, specific to the\nchosen application.\n\nExample payload structure with the comments:\n```\n{\n application_id: \"he-tme\",\n version_number: \"1.0.0-beta\",\n items: [{\n \"external_id\": \"slide_1\",\n \"input_artifacts\": [{\n \"name\": \"user_slide\",\n \"download_url\": \"https://...\",\n \"custom_metadata\": {\n \"specimen\": {\n \"disease\": \"LUNG_CANCER\",\n \"tissue\": \"LUNG\"\n },\n \"staining_method\": \"H&E\",\n \"width_px\": 136223,\n \"height_px\": 87761,\n \"resolution_mpp\": 0.2628238,\n \"media-type\":\"image/tiff\",\n \"checksum_base64_crc32c\": \"64RKKA==\"\n }\n }]\n }]\n}\n```\n\n| Parameter | Description |\n| :---- | :---- |\n| `application_id` required | Unique ID for the application |\n| `version_number` optional | Semantic version of the application. If not provided, the latest available version will be used |\n| `items` required | List of submitted items (WSIs) with parameters described below. |\n| `external_id` required | Unique WSI name or ID for easy reference to items, provided by the caller. The external_id should be unique across all items of the run. 
|\n| `input_artifacts` required | List of provided artifacts for a WSI; at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide itself), but for some other applications this can be a slide and a segmentation map |\n| `name` required | Type of artifact; Atlas H&E-TME supports only `\"input_slide\"` |\n| `download_url` required | Signed URL to the input file in S3 or GCS; should be valid for at least 6 days |\n| `specimen: disease` required | Supported cancer types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) |\n| `specimen: tissue` required | Supported tissue types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) |\n| `staining_method` required | WSI stain / bio-marker; Atlas H&E-TME supports only `\"H&E\"` |\n| `width_px` required | Integer value. Number of pixels of the WSI in the X dimension. |\n| `height_px` required | Integer value. Number of pixels of the WSI in the Y dimension. |\n| `resolution_mpp` required | Resolution of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual |\n| `media-type` required | Supported media formats; available values are: image/tiff (for .tiff or .tif WSI), application/dicom (for DICOM), application/zip (for zipped DICOM), application/octet-stream (for .svs WSI) |\n| `checksum_base64_crc32c` required | Base64 encoded big-endian CRC32C checksum of the WSI image |\n\n\n\n### Response\n\nThe endpoint returns the run UUID. After that, the job is scheduled for\nexecution in the background.\n\nTo check the status of the run, call `v1/runs/{run_id}`.\n\n### Rejection\n\nApart from authentication, authorization, and malformed input errors, the request can be\nrejected when the quota limit is exceeded. More details on quotas are described in the\ndocumentation", - "operationId": "create_run_v1_runs_post", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "requestBody": { - "required": true, - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/RunCreationRequest" - } - } - } - }, - "responses": { - "201": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/RunCreationResponse" - } - } - } - }, - "404": { - "description": "Application version not found" - }, - "403": { - "description": "Forbidden - You don't have permission to create this run" - }, - "400": { - "description": "Bad Request - Input validation failed" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}": { - "get": { - "tags": [ - "Public" - ], - "summary": "Get run details", - "description": "This endpoint allows the caller to retrieve the current status of a run along with other relevant run details.\n A run becomes available immediately after it is created through the POST `/runs/` endpoint.\n\n To download the output results, use GET `/runs/{run_id}/items` to get outputs for all slides.\nAccess to a run is restricted to the user who created it.", - "operationId": "get_run_v1_runs__run_id__get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "Run id, returned by `POST /runs/` endpoint" - } - ], - 
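As a hedged end-to-end sketch of the flow just described: compute the base64 big-endian CRC32C checksum, initiate the run, then poll `GET /v1/runs/{run_id}` until it terminates. The host, token, signed URL, and local file are placeholders; `google-crc32c` is one library choice for the checksum, not a platform requirement; and the artifact metadata is nested under `metadata`, following the `InputArtifactCreationRequest` schema defined later in this spec:

```python
import base64
import time

import google_crc32c  # pip install google-crc32c; one option for CRC32C
import requests

BASE_URL = "https://platform.example.com"  # placeholder: your Platform API host
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder OAuth2 token


def checksum_base64_crc32c(path: str) -> str:
    """Base64-encoded big-endian CRC32C, as required for input artifacts."""
    checksum = google_crc32c.Checksum()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            checksum.update(chunk)
    return base64.b64encode(checksum.digest()).decode("ascii")  # digest() is big-endian


payload = {
    "application_id": "he-tme",
    "version_number": "1.0.0-beta",
    "items": [{
        "external_id": "slide_1",
        "input_artifacts": [{
            "name": "input_slide",
            "download_url": "https://example.com/slide_1.tiff?signature=...",  # signed URL, valid >= 6 days
            "metadata": {
                "specimen": {"disease": "LUNG_CANCER", "tissue": "LUNG"},
                "staining_method": "H&E",
                "width_px": 136223,
                "height_px": 87761,
                "resolution_mpp": 0.2628238,
                "media-type": "image/tiff",
                "checksum_base64_crc32c": checksum_base64_crc32c("slide_1.tiff"),
            },
        }],
    }],
}
response = requests.post(f"{BASE_URL}/v1/runs", json=payload, headers=HEADERS, timeout=60)
response.raise_for_status()
run_id = response.json()["run_id"]

# Slides are processed asynchronously; poll until the run reaches TERMINATED.
while True:
    run = requests.get(f"{BASE_URL}/v1/runs/{run_id}", headers=HEADERS, timeout=30).json()
    if run["state"] == "TERMINATED":
        break
    time.sleep(60)
```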
"responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/RunReadResponse" - } - } - } - }, - "404": { - "description": "Run not found because it was deleted." - }, - "403": { - "description": "Forbidden - You don't have permission to see this run" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/cancel": { - "post": { - "tags": [ - "Public" - ], - "summary": "Cancel Run", - "description": "The run can be canceled by the user who created the run.\n\nThe execution can be canceled any time while the application is not in a final state. The\npending items will not be processed and will not add to the cost.\n\nWhen the application is canceled, the already completed items stay available for download.", - "operationId": "cancel_run_v1_runs__run_id__cancel_post", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "Run id, returned by `POST /runs/` endpoint" - } - ], - "responses": { - "202": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": {} - } - } - }, - "404": { - "description": "Run not found" - }, - "403": { - "description": "Forbidden - You don't have permission to cancel this run" - }, - "409": { - "description": "Conflict - The Run is already cancelled" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/items": { - "get": { - "tags": [ - "Public" - ], - "summary": "List Run Items", - "description": "List items in a run with filtering, sorting, and pagination capabilities.\n\nReturns paginated items within a specific run. Results can be filtered\nby item IDs, external_ids, status, and custom_metadata using JSONPath expressions.\n\n## JSONPath Metadata Filtering\nUse PostgreSQL JSONPath expressions to filter items using their custom_metadata.\n\n### Examples:\n- **Field existence**: `$.case_id` - Results that have a case_id field defined\n- **Exact value match**: `$.priority ? (@ == \"high\")` - Results with high priority\n- **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with high confidence\n- **Array operations**: `$.flags[*] ? (@ == \"reviewed\")` - Results flagged as reviewed\n- **Complex conditions**: `$.metrics ? 
(@.accuracy > 0.9 && @.recall > 0.8)` - Results meeting performance thresholds\n\n## Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations", - "operationId": "list_run_items_v1_runs__run_id__items_get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "Run id, returned by `POST /runs/` endpoint" - }, - { - "name": "item_id__in", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "array", - "items": { - "type": "string", - "format": "uuid" - } - }, - { - "type": "null" - } - ], - "description": "Filter for item ids", - "title": "Item Id In" - }, - "description": "Filter for item ids" - }, - { - "name": "external_id__in", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "array", - "items": { - "type": "string" - } - }, - { - "type": "null" - } - ], - "description": "Filter for items by their external_id from the input payload", - "title": "External Id In" - }, - "description": "Filter for items by their external_id from the input payload" - }, - { - "name": "state", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "$ref": "#/components/schemas/ItemState" - }, - { - "type": "null" - } - ], - "description": "Filter items by their state", - "title": "State" - }, - "description": "Filter items by their state" - }, - { - "name": "termination_reason", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "$ref": "#/components/schemas/ItemTerminationReason" - }, - { - "type": "null" - } - ], - "description": "Filter items by their termination reason. Only applies to TERMINATED items.", - "title": "Termination Reason" - }, - "description": "Filter items by their termination reason. Only applies to TERMINATED items." - }, - { - "name": "custom_metadata", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "string", - "maxLength": 1000 - }, - { - "type": "null" - } - ], - "description": "JSONPath expression to filter items by their custom_metadata", - "title": "Custom Metadata" - }, - "description": "JSONPath expression to filter items by their custom_metadata", - "examples": { - "no_filter": { - "summary": "No filter (returns all)", - "description": "Returns all items without filtering by custom metadata", - "value": "$" - }, - "field_exists": { - "summary": "Check if field exists", - "description": "Find items that have a project field defined", - "value": "$.project" - }, - "field_has_value": { - "summary": "Check if field has a certain value", - "description": "Compare a field value against a certain value", - "value": "$.project ? (@ == \"cancer-research\")" - }, - "numeric_comparisons": { - "summary": "Compare to a numeric value of a field", - "description": "Compare a field value against a numeric value of a field", - "value": "$.duration_hours ? (@ < 2)" - }, - "array_operations": { - "summary": "Check if an array contains a certain value", - "description": "Check if an array contains a certain value", - "value": "$.tags[*] ? 
(@ == \"production\")" - }, - "complex_filters": { - "summary": "Combine multiple checks", - "description": "Combine multiple checks", - "value": "$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)" - } - } - }, - { - "name": "page", - "in": "query", - "required": false, - "schema": { - "type": "integer", - "minimum": 1, - "default": 1, - "title": "Page" - } - }, - { - "name": "page_size", - "in": "query", - "required": false, - "schema": { - "type": "integer", - "maximum": 100, - "minimum": 5, - "default": 50, - "title": "Page Size" - } - }, - { - "name": "sort", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "array", - "items": { - "type": "string" - } - }, - { - "type": "null" - } - ], - "description": "Sort the items by one or more fields. Use `+` for ascending and `-` for descending order.\n **Available fields:**\n- `item_id`\n- `run_id`\n- `external_id`\n- `custom_metadata`\n\n**Examples:**\n- `?sort=item_id` - Sort by id of the item (ascending)\n- `?sort=-external_id` - Sort by external ID (descending)\n- `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending)", - "title": "Sort" - }, - "description": "Sort the items by one or more fields. Use `+` for ascending and `-` for descending order.\n **Available fields:**\n- `item_id`\n- `run_id`\n- `external_id`\n- `custom_metadata`\n\n**Examples:**\n- `?sort=item_id` - Sort by id of the item (ascending)\n- `?sort=-external_id` - Sort by external ID (descending)\n- `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending)" - } - ], - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "type": "array", - "items": { - "$ref": "#/components/schemas/ItemResultReadResponse" - }, - "title": "Response List Run Items V1 Runs Run Id Items Get" - } - } - } - }, - "404": { - "description": "Run not found" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/items/{external_id}": { - "get": { - "tags": [ - "Public" - ], - "summary": "Get Item By Run", - "description": "Retrieve details of a specific item (slide) by its external ID and the run ID.", - "operationId": "get_item_by_run_v1_runs__run_id__items__external_id__get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "The run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "The run id, returned by `POST /runs/` endpoint" - }, - { - "name": "external_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "description": "The `external_id` that was defined for the item by the customer that triggered the run.", - "title": "External Id" - }, - "description": "The `external_id` that was defined for the item by the customer that triggered the run." 
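Since each page of `GET /v1/runs/{run_id}/items` is a plain JSON array, the caller drives `page`/`page_size` itself. One possible sketch (placeholder host and token; it stops at the first short page, and the per-item summary branches on the `state` and `termination_reason` fields described in `ItemResultReadResponse` below):

```python
import requests

BASE_URL = "https://platform.example.com"  # placeholder: your Platform API host
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder OAuth2 token


def iter_run_items(run_id: str, page_size: int = 50):
    """Yield every item of a run, driving page/page_size until a short page."""
    page = 1
    while True:
        response = requests.get(
            f"{BASE_URL}/v1/runs/{run_id}/items",
            params={"page": page, "page_size": page_size, "sort": "external_id"},
            headers=HEADERS,
            timeout=30,
        )
        response.raise_for_status()
        items = response.json()
        yield from items
        if len(items) < page_size:  # last page: fewer items than requested
            return
        page += 1


def summarize(item: dict) -> str:
    """One line per ItemResultReadResponse, branching on state/termination_reason."""
    if item["state"] != "TERMINATED":
        return f"{item['external_id']}: still {item['state']}"
    if item["termination_reason"] == "SUCCEEDED":
        names = [artifact["name"] for artifact in item["output_artifacts"]]
        return f"{item['external_id']}: succeeded, outputs {names}"
    # USER_ERROR / SYSTEM_ERROR / SKIPPED carry an explanatory error_message
    return f"{item['external_id']}: {item['termination_reason']}: {item.get('error_message')}"


for item in iter_run_items("3fa85f64-5717-4562-b3fc-2c963f66afa6"):
    print(summarize(item))
```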
- } - ], - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/ItemResultReadResponse" - } - } - } - }, - "404": { - "description": "Not Found - Item with given ID does not exist" - }, - "403": { - "description": "Forbidden - You don't have permission to see this item" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/artifacts": { - "delete": { - "tags": [ - "Public" - ], - "summary": "Delete Run Items", - "description": "This endpoint allows the caller to explicitly delete artifacts generated by a run.\nIt can only be invoked when the run has reached a final state\n(PROCESSED, CANCELED_SYSTEM, CANCELED_USER).\nNote that by default, all artifacts are automatically deleted 30 days after the run finishes,\n regardless of whether the caller explicitly requests deletion.", - "operationId": "delete_run_items_v1_runs__run_id__artifacts_delete", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "Run id, returned by `POST /runs/` endpoint" - } - ], - "responses": { - "200": { - "description": "Run artifacts deleted", - "content": { - "application/json": { - "schema": {} - } - } - }, - "404": { - "description": "Run not found" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/custom-metadata": { - "put": { - "tags": [ - "Public" - ], - "summary": "Put Run Custom Metadata", - "operationId": "put_run_custom_metadata_v1_runs__run_id__custom_metadata_put", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "Run id, returned by `POST /runs/` endpoint" - } - ], - "requestBody": { - "required": true, - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/CustomMetadataUpdateRequest" - } - } - } - }, - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": {} - } - } - }, - "404": { - "description": "Run not found" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/items/{external_id}/custom-metadata": { - "put": { - "tags": [ - "Public" - ], - "summary": "Put Item Custom Metadata By Run", - "operationId": "put_item_custom_metadata_by_run_v1_runs__run_id__items__external_id__custom_metadata_put", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "The run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "The run id, returned by `POST /runs/` endpoint" - }, - 
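A small sketch of the artifact-deletion rule above: the endpoint only applies once the run has reached a final state, so a defensive client can check the run first. Host and token are placeholders, and the state check uses the `RunState` enum from the schemas below:

```python
import requests

BASE_URL = "https://platform.example.com"  # placeholder: your Platform API host
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder OAuth2 token


def delete_artifacts_if_finished(run_id: str) -> bool:
    """DELETE a run's artifacts, but only once the run has terminated."""
    run = requests.get(f"{BASE_URL}/v1/runs/{run_id}", headers=HEADERS, timeout=30)
    run.raise_for_status()
    if run.json()["state"] != "TERMINATED":  # still PENDING or PROCESSING
        return False
    response = requests.delete(f"{BASE_URL}/v1/runs/{run_id}/artifacts", headers=HEADERS, timeout=30)
    response.raise_for_status()
    return True
```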
{ - "name": "external_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "description": "The `external_id` that was defined for the item by the customer that triggered the run.", - "title": "External Id" - }, - "description": "The `external_id` that was defined for the item by the customer that triggered the run." - } - ], - "requestBody": { - "required": true, - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/CustomMetadataUpdateRequest" - } - } - } - }, - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": {} - } - } - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/me": { - "get": { - "tags": [ - "Public" - ], - "summary": "Get current user", - "description": "Retrieves your identity details, including name, email, and organization.\nThis is useful for verifying that the request is being made under the correct user profile\nand organization context, as well as confirming that the expected environment variables are correctly set\n(in case you are using the Python SDK)", - "operationId": "get_me_v1_me_get", - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/MeReadResponse" - } - } - } - } - }, - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ] - } - } - }, - "components": { - "schemas": { - "ApplicationReadResponse": { - "properties": { - "application_id": { - "type": "string", - "title": "Application Id", - "description": "Application ID", - "examples": [ - "he-tme" - ] - }, - "name": { - "type": "string", - "title": "Name", - "description": "Application display name", - "examples": [ - "Atlas H&E-TME" - ] - }, - "regulatory_classes": { - "items": { - "type": "string" - }, - "type": "array", - "title": "Regulatory Classes", - "description": "Regulatory classes with which the application complies. Possible values include: RUO, IVDR, FDA.", - "examples": [ - [ - "RUO" - ] - ] - }, - "description": { - "type": "string", - "title": "Description", - "description": "Describes what the application can do", - "examples": [ - "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment." 
- ] - }, - "versions": { - "items": { - "$ref": "#/components/schemas/ApplicationVersion" - }, - "type": "array", - "title": "Versions", - "description": "All version numbers available to the user" - } - }, - "type": "object", - "required": [ - "application_id", - "name", - "regulatory_classes", - "description", - "versions" - ], - "title": "ApplicationReadResponse", - "description": "Response schema for `List available applications` and `Read Application by Id` endpoints" - }, - "ApplicationReadShortResponse": { - "properties": { - "application_id": { - "type": "string", - "title": "Application Id", - "description": "Application ID", - "examples": [ - "he-tme" - ] - }, - "name": { - "type": "string", - "title": "Name", - "description": "Application display name", - "examples": [ - "Atlas H&E-TME" - ] - }, - "regulatory_classes": { - "items": { - "type": "string" - }, - "type": "array", - "title": "Regulatory Classes", - "description": "Regulatory classes with which the application complies. Possible values include: RUO, IVDR, FDA.", - "examples": [ - [ - "RUO" - ] - ] - }, - "description": { - "type": "string", - "title": "Description", - "description": "Describes what the application can do", - "examples": [ - "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment." - ] - }, - "latest_version": { - "anyOf": [ - { - "$ref": "#/components/schemas/ApplicationVersion" - }, - { - "type": "null" - } - ], - "description": "The version with the highest version number available to the user" - } - }, - "type": "object", - "required": [ - "application_id", - "name", - "regulatory_classes", - "description" - ], - "title": "ApplicationReadShortResponse", - "description": "Response schema for `List available applications` and `Read Application by Id` endpoints" - }, - "ApplicationVersion": { - "properties": { - "number": { - "type": "string", - "title": "Number", - "description": "The number of the latest version", - "examples": [ - "1.0.0" - ] - }, - "released_at": { - "type": "string", - "format": "date-time", - "title": "Released At", - "description": "The timestamp for when the application version was made available in the Platform", - "examples": [ - "2025-09-15T10:30:45.123Z" - ] - } - }, - "type": "object", - "required": [ - "number", - "released_at" - ], - "title": "ApplicationVersion" - }, - "ArtifactOutput": { - "type": "string", - "enum": [ - "NONE", - "AVAILABLE", - "DELETED_BY_USER", - "DELETED_BY_SYSTEM" - ], - "title": "ArtifactOutput" - }, - "ArtifactState": { - "type": "string", - "enum": [ - "PENDING", - "PROCESSING", - "TERMINATED" - ], - "title": "ArtifactState" - }, - "ArtifactTerminationReason": { - "type": "string", - "enum": [ - "SUCCEEDED", - "USER_ERROR", - "SYSTEM_ERROR", - "SKIPPED" - ], - "title": "ArtifactTerminationReason" - }, - "CustomMetadataUpdateRequest": { - "properties": { - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "JSON metadata that should be set for the run", - "examples": [ - { - "department": "D1", - "study": "abc-1" - } - ] - }, - "custom_metadata_checksum": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata Checksum", - "description": "Optional field to verify that the latest custom metadata was known. 
If set to the checksum retrieved via the /runs endpoint, it must match the checksum of the current value in the database.", - "examples": [ - "f54fe109" - ] - } - }, - "type": "object", - "title": "CustomMetadataUpdateRequest" - }, - "HTTPValidationError": { - "properties": { - "detail": { - "items": { - "$ref": "#/components/schemas/ValidationError" - }, - "type": "array", - "title": "Detail" - } - }, - "type": "object", - "title": "HTTPValidationError" - }, - "InputArtifact": { - "properties": { - "name": { - "type": "string", - "title": "Name" - }, - "mime_type": { - "type": "string", - "pattern": "^\\w+\\/\\w+[-+.|\\w+]+\\w+$", - "title": "Mime Type", - "examples": [ - "image/tiff" - ] - }, - "metadata_schema": { - "type": "object", - "title": "Metadata Schema" - } - }, - "type": "object", - "required": [ - "name", - "mime_type", - "metadata_schema" - ], - "title": "InputArtifact" - }, - "InputArtifactCreationRequest": { - "properties": { - "name": { - "type": "string", - "title": "Name", - "description": "Type of artifact. For Atlas H&E-TME, use \"input_slide\"", - "examples": [ - "input_slide" - ] - }, - "download_url": { - "type": "string", - "maxLength": 2083, - "minLength": 1, - "format": "uri", - "title": "Download Url", - "description": "[Signed URL](https://cloud.google.com/cdn/docs/using-signed-urls) to the input artifact file. The URL should be valid for at least 6 days from the payload submission time.", - "examples": [ - "https://example.com/case-no-1-slide.tiff" - ] - }, - "metadata": { - "type": "object", - "title": "Metadata", - "description": "The metadata of the artifact, required by the application version. The JSON schema of the metadata can be requested by `/v1/versions/{application_version_id}`. The schema is located in `input_artifacts.[].metadata_schema`", - "examples": [ - { - "checksum_base64_crc32c": "752f9554", - "height": 2000, - "height_mpp": 0.5, - "width": 10000, - "width_mpp": 0.5 - } - ] - } - }, - "type": "object", - "required": [ - "name", - "download_url", - "metadata" - ], - "title": "InputArtifactCreationRequest", - "description": "Input artifact containing the slide image and associated metadata." - }, - "ItemCreationRequest": { - "properties": { - "external_id": { - "type": "string", - "maxLength": 255, - "title": "External Id", - "description": "Unique identifier for this item within the run. Used for referencing items. Must be unique across all items in the same run", - "examples": [ - "slide_1", - "patient_001_slide_A", - "sample_12345" - ] - }, - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "Optional JSON custom_metadata to store additional information alongside an item.", - "examples": [ - { - "case": "abc" - } - ] - }, - "input_artifacts": { - "items": { - "$ref": "#/components/schemas/InputArtifactCreationRequest" - }, - "type": "array", - "title": "Input Artifacts", - "description": "List of input artifacts for this item. 
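The checksum field above enables an optimistic-concurrency read-modify-write on custom metadata. A minimal sketch (placeholder host and token): by echoing the checksum we read, the platform can reject the update if the metadata changed in between, instead of silently overwriting it:

```python
import requests

BASE_URL = "https://platform.example.com"  # placeholder: your Platform API host
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder OAuth2 token


def update_run_metadata(run_id: str, changes: dict) -> None:
    """Read-modify-write of a run's custom_metadata, guarded by its checksum."""
    run = requests.get(f"{BASE_URL}/v1/runs/{run_id}", headers=HEADERS, timeout=30).json()
    body = {
        "custom_metadata": {**(run.get("custom_metadata") or {}), **changes},
        # Echo the checksum we read; a concurrent edit invalidates it and the
        # PUT is rejected rather than clobbering the other writer's values.
        "custom_metadata_checksum": run.get("custom_metadata_checksum"),
    }
    response = requests.put(
        f"{BASE_URL}/v1/runs/{run_id}/custom-metadata", json=body, headers=HEADERS, timeout=30
    )
    response.raise_for_status()
```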
For Atlas H&E-TME, typically contains one artifact (the slide image)", - "examples": [ - [ - { - "download_url": "https://example-bucket.s3.amazonaws.com/slide1.tiff", - "metadata": { - "checksum_base64_crc32c": "64RKKA==", - "height_px": 87761, - "media-type": "image/tiff", - "resolution_mpp": 0.2628238, - "specimen": { - "disease": "LUNG_CANCER", - "tissue": "LUNG" - }, - "staining_method": "H&E", - "width_px": 136223 - }, - "name": "input_slide" - } - ] - ] - } - }, - "type": "object", - "required": [ - "external_id", - "input_artifacts" - ], - "title": "ItemCreationRequest", - "description": "Individual item (slide) to be processed in a run." - }, - "ItemOutput": { - "type": "string", - "enum": [ - "NONE", - "FULL" - ], - "title": "ItemOutput" - }, - "ItemResultReadResponse": { - "properties": { - "item_id": { - "type": "string", - "format": "uuid", - "title": "Item Id", - "description": "Item UUID generated by the Platform" - }, - "external_id": { - "type": "string", - "title": "External Id", - "description": "The external_id of the item from the user payload", - "examples": [ - "slide_1" - ] - }, - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "The custom_metadata of the item that has been provided by the user on run creation." - }, - "custom_metadata_checksum": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata Checksum", - "description": "The checksum of the `custom_metadata` field.\nCan be used in the `PUT /runs/{run-id}/items/{external_id}/custom_metadata`\nrequest to avoid unwanted override of the values in concurrent requests.", - "examples": [ - "f54fe109" - ] - }, - "state": { - "$ref": "#/components/schemas/ItemState", - "description": "\nThe item moves from `PENDING` to `PROCESSING` to `TERMINATED` state.\nWhen terminated, consult the `termination_reason` property to see whether it was successful.\n " - }, - "output": { - "$ref": "#/components/schemas/ItemOutput", - "description": "The output status of the item (NONE, FULL)" - }, - "termination_reason": { - "anyOf": [ - { - "$ref": "#/components/schemas/ItemTerminationReason" - }, - { - "type": "null" - } - ], - "description": "\nWhen the `state` is `TERMINATED` this will explain why\n`SUCCEEDED` -> Successful processing.\n`USER_ERROR` -> Failed because the provided input was invalid.\n`SYSTEM_ERROR` -> There was an error in the model or platform.\n`SKIPPED` -> Was cancelled\n" - }, - "error_message": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Message", - "description": "\n The error message in case the `termination_reason` is in `USER_ERROR` or `SYSTEM_ERROR`\n ", - "examples": [ - "This item was not processed because the threshold of 3 items finishing in error state (user or system error) was reached before the item was processed.", - "The item was not processed because the run was cancelled by the user before the item was processed.User error raised by Application because the input data provided by the user cannot be processed:\nThe image width is 123000 px, but the maximum width is 100000 px", - "A system error occurred during the item execution:\n System went out of memory in cell classification", - "An unknown system error occurred during the item execution" - ] - }, - "terminated_at": { - "anyOf": [ - { - "type": "string", - "format": "date-time" - }, - { - "type": "null" - } - ], - "title": "Terminated At", - "description": 
"Timestamp showing when the item reached a terminal state.", - "examples": [ - "2024-01-15T10:30:45.123Z" - ] - }, - "output_artifacts": { - "items": { - "$ref": "#/components/schemas/OutputArtifactResultReadResponse" - }, - "type": "array", - "title": "Output Artifacts", - "description": "\nThe list of the results generated by the application algorithm. The number of files and their\ntypes depend on the particular application version, call `/v1/versions/{version_id}` to get\nthe details.\n " - }, - "error_code": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Code", - "description": "Error code describing the error that occurred during item processing.", - "readOnly": true - } - }, - "type": "object", - "required": [ - "item_id", - "external_id", - "custom_metadata", - "state", - "output", - "output_artifacts", - "error_code" - ], - "title": "ItemResultReadResponse", - "description": "Response schema for items in `List Run Items` endpoint" - }, - "ItemState": { - "type": "string", - "enum": [ - "PENDING", - "PROCESSING", - "TERMINATED" - ], - "title": "ItemState" - }, - "ItemTerminationReason": { - "type": "string", - "enum": [ - "SUCCEEDED", - "USER_ERROR", - "SYSTEM_ERROR", - "SKIPPED" - ], - "title": "ItemTerminationReason" - }, - "MeReadResponse": { - "properties": { - "user": { - "$ref": "#/components/schemas/UserReadResponse" - }, - "organization": { - "$ref": "#/components/schemas/OrganizationReadResponse" - } - }, - "type": "object", - "required": [ - "user", - "organization" - ], - "title": "MeReadResponse", - "description": "Response schema for `Get current user` endpoint" - }, - "OrganizationReadResponse": { - "properties": { - "id": { - "type": "string", - "title": "Id", - "description": "Unique organization identifier", - "examples": [ - "org_123456" - ] - }, - "name": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Name", - "description": "Organization name (E.g. “aignx”)", - "examples": [ - "aignx" - ] - }, - "display_name": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Display Name", - "description": "Public organization name (E.g. “Aignostics GmbH”)", - "examples": [ - "Aignostics GmbH" - ] - }, - "aignostics_bucket_hmac_access_key_id": { - "type": "string", - "title": "Aignostics Bucket Hmac Access Key Id", - "description": "HMAC access key ID for the Aignostics-provided storage bucket. Used to authenticate requests for uploading files and generating signed URLs", - "examples": [ - "YOUR_HMAC_ACCESS_KEY_ID" - ] - }, - "aignostics_bucket_hmac_secret_access_key": { - "type": "string", - "title": "Aignostics Bucket Hmac Secret Access Key", - "description": "HMAC secret access key paired with the access key ID. Keep this credential secure.", - "examples": [ - "YOUR/HMAC/SECRET_ACCESS_KEY" - ] - }, - "aignostics_bucket_name": { - "type": "string", - "title": "Aignostics Bucket Name", - "description": "Name of the bucket provided by Aignostics for storing input artifacts (slide images)", - "examples": [ - "aignostics-platform-bucket" - ] - }, - "aignostics_bucket_protocol": { - "type": "string", - "title": "Aignostics Bucket Protocol", - "description": "Protocol to use for bucket access. Defines the URL scheme for connecting to the storage service", - "examples": [ - "gs" - ] - }, - "aignostics_logfire_token": { - "type": "string", - "title": "Aignostics Logfire Token", - "description": "Authentication token for Logfire observability service. 
Enables sending application logs and performance metrics to Aignostics for monitoring and support", - "examples": [ - "your-logfire-token" - ] - }, - "aignostics_sentry_dsn": { - "type": "string", - "title": "Aignostics Sentry Dsn", - "description": "Data Source Name (DSN) for Sentry error tracking service. Allows automatic reporting of errors and exceptions to Aignostics support team", - "examples": [ - "https://2354s3#ewsha@o44.ingest.us.sentry.io/34345123432" - ] - } - }, - "type": "object", - "required": [ - "id", - "aignostics_bucket_hmac_access_key_id", - "aignostics_bucket_hmac_secret_access_key", - "aignostics_bucket_name", - "aignostics_bucket_protocol", - "aignostics_logfire_token", - "aignostics_sentry_dsn" - ], - "title": "OrganizationReadResponse", - "description": "Part of response schema for Organization object in `Get current user` endpoint.\nThis model corresponds to the response schema returned from\nAuth0 GET /v2/organizations/{id} endpoint, flattens out the metadata out\nand doesn't return branding or token_quota objects.\nFor details, see:\nhttps://auth0.com/docs/api/management/v2/organizations/get-organizations-by-id\n\n#### Configuration for integrating with Aignostics Platform services.\n\nThe Aignostics Platform API requires signed URLs for input artifacts (slide images). To simplify this process,\nAignostics provides a dedicated storage bucket. The HMAC credentials below grant read and write\naccess to this bucket, allowing you to upload files and generate the signed URLs needed for API calls.\n\nAdditionally, logging and error reporting tokens enable Aignostics to provide better support and monitor\nsystem performance for your integration." - }, - "OutputArtifact": { - "properties": { - "name": { - "type": "string", - "title": "Name" - }, - "mime_type": { - "type": "string", - "pattern": "^\\w+\\/\\w+[-+.|\\w+]+\\w+$", - "title": "Mime Type", - "examples": [ - "application/vnd.apache.parquet" - ] - }, - "metadata_schema": { - "type": "object", - "title": "Metadata Schema" - }, - "scope": { - "$ref": "#/components/schemas/OutputArtifactScope" - }, - "visibility": { - "$ref": "#/components/schemas/OutputArtifactVisibility" - } - }, - "type": "object", - "required": [ - "name", - "mime_type", - "metadata_schema", - "scope", - "visibility" - ], - "title": "OutputArtifact" - }, - "OutputArtifactResultReadResponse": { - "properties": { - "output_artifact_id": { - "type": "string", - "format": "uuid", - "title": "Output Artifact Id", - "description": "The Id of the artifact. Used internally" - }, - "name": { - "type": "string", - "title": "Name", - "description": "\nName of the output from the output schema from the `/v1/versions/{version_id}` endpoint.\n ", - "examples": [ - "tissue_qc:tiff_heatmap" - ] - }, - "metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Metadata", - "description": "The metadata of the output artifact, provided by the application. Can only be None if the artifact itself was deleted." 
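One way to use the organization's HMAC credentials above is through the S3-interoperable XML endpoint of Google Cloud Storage, for example with `boto3`. This is a sketch under that assumption, with the credential values, bucket, and object key taken as placeholders from the schema examples; it produces the long-lived signed `download_url` a run payload needs:

```python
import boto3

# Values come from GET /v1/me (OrganizationReadResponse); placeholders here.
HMAC_KEY_ID = "YOUR_HMAC_ACCESS_KEY_ID"
HMAC_SECRET = "YOUR/HMAC/SECRET_ACCESS_KEY"
BUCKET = "aignostics-platform-bucket"

# GCS accepts S3-style requests signed with an HMAC key pair, so a plain S3
# client can upload slides and mint the signed URLs the run payload needs.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.googleapis.com",
    aws_access_key_id=HMAC_KEY_ID,
    aws_secret_access_key=HMAC_SECRET,
)
s3.upload_file("slide_1.tiff", BUCKET, "runs/slide_1.tiff")
download_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": "runs/slide_1.tiff"},
    ExpiresIn=6 * 24 * 60 * 60,  # the spec asks for at least 6 days of validity
)
```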
- }, - "state": { - "$ref": "#/components/schemas/ArtifactState", - "description": "The current state of the artifact (PENDING, PROCESSING, TERMINATED)" - }, - "termination_reason": { - "anyOf": [ - { - "$ref": "#/components/schemas/ArtifactTerminationReason" - }, - { - "type": "null" - } - ], - "description": "The reason for termination when state is TERMINATED" - }, - "output": { - "$ref": "#/components/schemas/ArtifactOutput", - "description": "The output status of the artifact (NONE, FULL)" - }, - "error_message": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Message", - "description": "Error message when artifact is in error state" - }, - "download_url": { - "anyOf": [ - { - "type": "string", - "maxLength": 2083, - "minLength": 1, - "format": "uri" - }, - { - "type": "null" - } - ], - "title": "Download Url", - "description": "\nThe download URL to the output file. The URL is valid for 1 hour after the endpoint is called.\nA new URL is generated every time the endpoint is called.\n " - }, - "error_code": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Code", - "description": "Error code describing the error that occurred during artifact processing.", - "readOnly": true - } - }, - "type": "object", - "required": [ - "output_artifact_id", - "name", - "state", - "output", - "download_url", - "error_code" - ], - "title": "OutputArtifactResultReadResponse" - }, - "OutputArtifactScope": { - "type": "string", - "enum": [ - "ITEM", - "GLOBAL" - ], - "title": "OutputArtifactScope" - }, - "OutputArtifactVisibility": { - "type": "string", - "enum": [ - "INTERNAL", - "EXTERNAL" - ], - "title": "OutputArtifactVisibility" - }, - "RunCreationRequest": { - "properties": { - "application_id": { - "type": "string", - "title": "Application Id", - "description": "Unique ID for the application to use for processing", - "examples": [ - "he-tme" - ] - }, - "version_number": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Version Number", - "description": "Semantic version of the application to use for processing. If not provided, the latest available version will be used", - "examples": [ - "1.0.0-beta1" - ] - }, - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "Optional JSON metadata to store additional information alongside the run", - "examples": [ - { - "department": "D1", - "study": "abc-1" - } - ] - }, - "items": { - "items": { - "$ref": "#/components/schemas/ItemCreationRequest" - }, - "type": "array", - "minItems": 1, - "title": "Items", - "description": "List of items (slides) to process. 
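Because `download_url` above is only valid for about an hour and is regenerated on every read, a downloader should fetch the item immediately before streaming. A hedged sketch with placeholder host, token, and file naming:

```python
import requests

BASE_URL = "https://platform.example.com"  # placeholder: your Platform API host
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder OAuth2 token


def download_outputs(run_id: str, external_id: str, target_dir: str = ".") -> None:
    """Re-read the item for fresh URLs, then stream each artifact to disk."""
    item = requests.get(
        f"{BASE_URL}/v1/runs/{run_id}/items/{external_id}", headers=HEADERS, timeout=30
    ).json()
    for artifact in item["output_artifacts"]:
        url = artifact.get("download_url")
        if not url:
            continue  # artifact deleted, skipped, or not yet produced
        # Artifact names like "tissue_qc:tiff_heatmap" are not filesystem-safe.
        filename = f"{external_id}_{artifact['name'].replace(':', '_')}"
        with requests.get(url, stream=True, timeout=300) as response:
            response.raise_for_status()
            with open(f"{target_dir}/{filename}", "wb") as f:
                for chunk in response.iter_content(1024 * 1024):
                    f.write(chunk)
```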
Each item represents a whole slide image (WSI) with its associated metadata and artifacts", - "examples": [ - [ - { - "external_id": "slide_1", - "input_artifacts": [ - { - "download_url": "https://example-bucket.s3.amazonaws.com/slide1.tiff?signature=...", - "metadata": { - "checksum_base64_crc32c": "64RKKA==", - "height_px": 87761, - "media-type": "image/tiff", - "resolution_mpp": 0.2628238, - "specimen": { - "disease": "LUNG_CANCER", - "tissue": "LUNG" - }, - "staining_method": "H&E", - "width_px": 136223 - }, - "name": "input_slide" - } - ] - } - ] - ] - } - }, - "type": "object", - "required": [ - "application_id", - "items" - ], - "title": "RunCreationRequest", - "description": "Request schema for the `Initiate Run` endpoint.\nIt describes which application version is chosen, and which user data should be processed." - }, - "RunCreationResponse": { - "properties": { - "run_id": { - "type": "string", - "format": "uuid", - "title": "Run Id", - "default": "Run id", - "examples": [ - "3fa85f64-5717-4562-b3fc-2c963f66afa6" - ] - } - }, - "type": "object", - "title": "RunCreationResponse" - }, - "RunItemStatistics": { - "properties": { - "item_count": { - "type": "integer", - "title": "Item Count", - "description": "Total number of the items in the run" - }, - "item_pending_count": { - "type": "integer", - "title": "Item Pending Count", - "description": "The number of items in `PENDING` state" - }, - "item_processing_count": { - "type": "integer", - "title": "Item Processing Count", - "description": "The number of items in `PROCESSING` state" - }, - "item_user_error_count": { - "type": "integer", - "title": "Item User Error Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `USER_ERROR`" - }, - "item_system_error_count": { - "type": "integer", - "title": "Item System Error Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `SYSTEM_ERROR`" - }, - "item_skipped_count": { - "type": "integer", - "title": "Item Skipped Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `SKIPPED`" - }, - "item_succeeded_count": { - "type": "integer", - "title": "Item Succeeded Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `SUCCEEDED`" - } - }, - "type": "object", - "required": [ - "item_count", - "item_pending_count", - "item_processing_count", - "item_user_error_count", - "item_system_error_count", - "item_skipped_count", - "item_succeeded_count" - ], - "title": "RunItemStatistics" - }, - "RunOutput": { - "type": "string", - "enum": [ - "NONE", - "PARTIAL", - "FULL" - ], - "title": "RunOutput" - }, - "RunReadResponse": { - "properties": { - "run_id": { - "type": "string", - "format": "uuid", - "title": "Run Id", - "description": "UUID of the run" - }, - "application_id": { - "type": "string", - "title": "Application Id", - "description": "Application id", - "examples": [ - "he-tme" - ] - }, - "version_number": { - "type": "string", - "title": "Version Number", - "description": "Application version number", - "examples": [ - "0.4.4" - ] - }, - "state": { - "$ref": "#/components/schemas/RunState", - "description": "When the run request is received by the Platform, its `state` is set to\n`PENDING`. The state changes to `PROCESSING` when at least one item is being processed. 
After `PROCESSING`, the\nstate of the run can switch back to `PENDING` if there are no processing items, or to `TERMINATED` when the run\nhas finished processing." - }, - "output": { - "$ref": "#/components/schemas/RunOutput", - "description": "The status of the output of the run. When 0 items are successfully processed, the output is\n`NONE`; after one item is successfully processed, the value is set to `PARTIAL`. When all items of the run are\nsuccessfully processed, the output is set to `FULL`." - }, - "termination_reason": { - "anyOf": [ - { - "$ref": "#/components/schemas/RunTerminationReason" - }, - { - "type": "null" - } - ], - "description": "The termination reason of the run. When the run is not in `TERMINATED` state, the\n termination_reason is `null`. If all items of the run are processed (successfully or with an error), then\n termination_reason is set to `ALL_ITEMS_PROCESSED`. If the run is cancelled by the user, the value is set to\n `CANCELED_BY_USER`. If the run reaches the threshold of failed items, the Platform cancels the run\n and sets the termination_reason to `CANCELED_BY_SYSTEM`.\n " - }, - "error_code": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Code", - "description": "When the termination_reason is set to CANCELED_BY_SYSTEM, the error_code is set to define the\n structured description of the error.", - "examples": [ - "SCHEDULER.ITEMS_WITH_ERROR_THRESHOLD_REACHED" - ] - }, - "error_message": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Message", - "description": "When the termination_reason is set to CANCELED_BY_SYSTEM, the error_message is set to provide\n more insight into the error cause.", - "examples": [ - "Run canceled given errors on more than 10 items." - ] - }, - "statistics": { - "$ref": "#/components/schemas/RunItemStatistics", - "description": "Aggregated statistics of the run execution" - }, - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "Optional JSON metadata that was stored alongside the run by the user", - "examples": [ - { - "department": "D1", - "study": "abc-1" - } - ] - }, - "custom_metadata_checksum": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata Checksum", - "description": "The checksum of the `custom_metadata` field. 
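Given the `RunItemStatistics` fields defined above, progress reporting is simple arithmetic; a small illustrative helper over a `RunReadResponse` payload:

```python
def progress(statistics: dict) -> str:
    """Condense RunItemStatistics into a single progress line."""
    terminated = (
        statistics["item_succeeded_count"]
        + statistics["item_user_error_count"]
        + statistics["item_system_error_count"]
        + statistics["item_skipped_count"]
    )
    return (
        f"{terminated}/{statistics['item_count']} items terminated "
        f"({statistics['item_succeeded_count']} succeeded, "
        f"{statistics['item_processing_count']} processing, "
        f"{statistics['item_pending_count']} pending)"
    )
```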
Can be used in the `PUT /runs/{run_id}/custom-metadata`\nrequest to avoid unwanted override of the values in concurrent requests.", - "examples": [ - "f54fe109" - ] - }, - "submitted_at": { - "type": "string", - "format": "date-time", - "title": "Submitted At", - "description": "Timestamp showing when the run was triggered" - }, - "submitted_by": { - "type": "string", - "title": "Submitted By", - "description": "Id of the user who triggered the run", - "examples": [ - "auth0|123456" - ] - }, - "terminated_at": { - "anyOf": [ - { - "type": "string", - "format": "date-time" - }, - { - "type": "null" - } - ], - "title": "Terminated At", - "description": "Timestamp showing when the run reached a terminal state.", - "examples": [ - "2024-01-15T10:30:45.123Z" - ] - } - }, - "type": "object", - "required": [ - "run_id", - "application_id", - "version_number", - "state", - "output", - "termination_reason", - "error_code", - "error_message", - "statistics", - "submitted_at", - "submitted_by" - ], - "title": "RunReadResponse", - "description": "Response schema for `Get run details` endpoint" - }, - "RunState": { - "type": "string", - "enum": [ - "PENDING", - "PROCESSING", - "TERMINATED" - ], - "title": "RunState" - }, - "RunTerminationReason": { - "type": "string", - "enum": [ - "ALL_ITEMS_PROCESSED", - "CANCELED_BY_SYSTEM", - "CANCELED_BY_USER" - ], - "title": "RunTerminationReason" - }, - "UserReadResponse": { - "properties": { - "id": { - "type": "string", - "title": "Id", - "description": "Unique user identifier", - "examples": [ - "auth0|123456" - ] - }, - "email": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Email", - "description": "User email", - "examples": [ - "user@domain.com" - ] - }, - "email_verified": { - "anyOf": [ - { - "type": "boolean" - }, - { - "type": "null" - } - ], - "title": "Email Verified", - "examples": [ - true - ] - }, - "name": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Name", - "description": "First and last name of the user", - "examples": [ - "Jane Doe" - ] - }, - "given_name": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Given Name", - "examples": [ - "Jane" - ] - }, - "family_name": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Family Name", - "examples": [ - "Doe" - ] - }, - "nickname": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Nickname", - "examples": [ - "jdoe" - ] - }, - "picture": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Picture", - "examples": [ - "https://example.com/jdoe.jpg" - ] - }, - "updated_at": { - "anyOf": [ - { - "type": "string", - "format": "date-time" - }, - { - "type": "null" - } - ], - "title": "Updated At", - "examples": [ - "2023-10-05T14:48:00.000Z" - ] - } - }, - "type": "object", - "required": [ - "id" - ], - "title": "UserReadResponse", - "description": "Part of response schema for User object in `Get current user` endpoint.\nThis model corresponds to the response schema returned from\nAuth0 GET /v2/users/{id} endpoint.\nFor details, see:\nhttps://auth0.com/docs/api/management/v2/users/get-users-by-id" - }, - "ValidationError": { - "properties": { - "loc": { - "items": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "integer" - } - ] - }, - "type": "array", - "title": "Location" - }, - "msg": { - "type": "string", - "title": "Message" - }, - "type": { - "type": "string", - "title": 
"Error Type" - } - }, - "type": "object", - "required": [ - "loc", - "msg", - "type" - ], - "title": "ValidationError" - }, - "VersionReadResponse": { - "properties": { - "version_number": { - "type": "string", - "title": "Version Number", - "description": "Semantic version of the application" - }, - "changelog": { - "type": "string", - "title": "Changelog", - "description": "Description of the changes relative to the previous version" - }, - "input_artifacts": { - "items": { - "$ref": "#/components/schemas/InputArtifact" - }, - "type": "array", - "title": "Input Artifacts", - "description": "List of the input fields, provided by the user" - }, - "output_artifacts": { - "items": { - "$ref": "#/components/schemas/OutputArtifact" - }, - "type": "array", - "title": "Output Artifacts", - "description": "List of the output fields, generated by the application" - }, - "released_at": { - "type": "string", - "format": "date-time", - "title": "Released At", - "description": "The timestamp when the application version was registered" - } - }, - "type": "object", - "required": [ - "version_number", - "changelog", - "input_artifacts", - "output_artifacts", - "released_at" - ], - "title": "VersionReadResponse", - "description": "Base Response schema for the `Application Version Details` endpoint" - } - }, - "securitySchemes": { - "OAuth2AuthorizationCodeBearer": { - "type": "oauth2", - "flows": { - "authorizationCode": { - "scopes": {}, - "authorizationUrl": "https://aignostics-platform-staging.eu.auth0.com/authorize", - "tokenUrl": "https://aignostics-platform-staging.eu.auth0.com/oauth/token" - } - } - } - } - } -} diff --git a/codegen/in/archive/openapi_1.0.0-alpha.json b/codegen/in/archive/openapi_alpha.json similarity index 100% rename from codegen/in/archive/openapi_1.0.0-alpha.json rename to codegen/in/archive/openapi_alpha.json diff --git a/codegen/in/openapi.json b/codegen/in/openapi.json index ed47ea22..12d1e1d5 100644 --- a/codegen/in/openapi.json +++ b/codegen/in/openapi.json @@ -3,7 +3,7 @@ "info": { "title": "Aignostics Platform API", "description": "\nThe Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. \n\nTo begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. \n\nMore information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com).\n\n**How to authorize and test API endpoints:**\n\n1. Click the \"Authorize\" button in the right corner below\n2. Click the \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials\n3. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint\n\n**Note**: You only need to authorize once per session. 
The lock icons next to endpoints will show green when authorized.\n\n", - "version": "1.0.0.beta7" + "version": "1.0.0-beta6" }, "servers": [ { @@ -37,7 +37,7 @@ } }, { - "name": "page-size", + "name": "page_size", "in": "query", "required": false, "schema": { @@ -45,7 +45,7 @@ "maximum": 100, "minimum": 5, "default": 50, - "title": "Page-Size" + "title": "Page Size" } }, { @@ -78,7 +78,7 @@ "schema": { "type": "array", "items": { - "$ref": "#/components/schemas/ApplicationReadShortResponse" + "$ref": "#/components/schemas/ApplicationReadResponse" }, "title": "Response List Applications V1 Applications Get" }, @@ -89,11 +89,7 @@ "regulatory_classes": [ "RUO" ], - "description": "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment.", - "latest_version": { - "number": "1.0.0", - "released_at": "2025-09-01T19:01:05.401Z" - } + "description": "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment." }, { "application_id": "test-app", @@ -101,11 +97,7 @@ "regulatory_classes": [ "RUO" ], - "description": "This is the test application with two algorithms: TissueQc and Tissue Segmentation", - "latest_version": { - "number": "2.0.0", - "released_at": "2025-09-02T19:01:05.401Z" - } + "description": "This is the test application with two algorithms: TissueQc and Tissue Segmentation" } ] } @@ -181,14 +173,14 @@ } } }, - "/v1/applications/{application_id}/versions/{version}": { + "/v1/applications/{application_id}/versions": { "get": { "tags": [ "Public" ], - "summary": "Application Version Details", - "description": "Get the application version details\n\nAllows caller to retrieve information about application version based on provided application version ID.", - "operationId": "application_version_details_v1_applications__application_id__versions__version__get", + "summary": "List Available Application Versions", + "description": "Returns a list of available application versions for a specific application.\n\nA version is considered available when it has been assigned to your organization. 
Within a major version,\nall minor and patch updates are automatically accessible unless a specific version has been deprecated.\nMajor version upgrades, however, require explicit assignment and may be subject to contract modifications\nbefore becoming available to your organization.", + "operationId": "list_versions_by_application_id_v1_applications__application_id__versions_get", "security": [ { "OAuth2AuthorizationCodeBearer": [] @@ -204,13 +196,274 @@ "title": "Application Id" } }, + { + "name": "page", + "in": "query", + "required": false, + "schema": { + "type": "integer", + "minimum": 1, + "default": 1, + "title": "Page" + } + }, + { + "name": "page_size", + "in": "query", + "required": false, + "schema": { + "type": "integer", + "maximum": 100, + "minimum": 5, + "default": 50, + "title": "Page Size" + } + }, { "name": "version", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "description": "Semantic version of the application, example: `1.0.13`", + "examples": [ + "1.0.0", + "1.0.13" + ], + "title": "Version" + }, + "description": "Semantic version of the application, example: `1.0.13`" + }, + { + "name": "sort", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "array", + "items": { + "type": "string" + } + }, + { + "type": "null" + } + ], + "description": "Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `application_version_id`\n- `version`\n- `application_id`\n- `changelog`\n- `created_at`\n\n**Examples:**\n- `?sort=application_id` - Sort by application_id ascending\n- `?sort=-version` - Sort by version descending\n- `?sort=+application_id&sort=-created_at` - Sort by application_id ascending, then created_at descending", + "title": "Sort" + }, + "description": "Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `application_version_id`\n- `version`\n- `application_id`\n- `changelog`\n- `created_at`\n\n**Examples:**\n- `?sort=application_id` - Sort by application_id ascending\n- `?sort=-version` - Sort by version descending\n- `?sort=+application_id&sort=-created_at` - Sort by application_id ascending, then created_at descending" + } + ], + "responses": { + "200": { + "description": "A list of application versions for a given application ID available to the caller", + "content": { + "application/json": { + "schema": { + "type": "array", + "items": { + "$ref": "#/components/schemas/ApplicationVersionReadResponse" + }, + "title": "Response List Versions By Application Id V1 Applications Application Id Versions Get" + }, + "example": [ + { + "application_version_id": "he-tme:v0.5.0", + "version": "0.5.0", + "application_id": "he-tme", + "changelog": "Redeployed after metadata name changes. 
", + "input_artifacts": [ + { + "name": "user_slide", + "mime_type": "image/tiff", + "metadata_schema": { + "type": "object", + "$defs": { + "LungCancerSpecimen": { + "type": "object", + "title": "LungCancerSpecimen", + "required": [ + "disease", + "tissue" + ], + "properties": { + "tissue": { + "enum": [ + "LUNG", + "LYMPH_NODE", + "LIVER", + "ADRENAL_GLAND", + "BONE", + "BRAIN" + ], + "type": "string", + "title": "Tissue" + }, + "disease": { + "enum": [ + "LUNG_CANCER" + ], + "type": "string", + "const": "LUNG_CANCER", + "title": "Disease" + } + }, + "additionalProperties": false + } + }, + "title": "Slide Schema", + "$schema": "http://json-schema.org/draft-07/schema#", + "required": [ + "media_type", + "checksum_base64_crc32c", + "specimen", + "resolution_mpp", + "width_px", + "height_px", + "staining_method" + ], + "properties": { + "specimen": { + "anyOf": [ + { + "$ref": "#/$defs/LungCancerSpecimen" + } + ], + "title": "Specimen" + }, + "width_px": { + "type": "integer", + "title": "Width (px)", + "minimum": 1 + }, + "height_px": { + "type": "integer", + "title": "Height (px)", + "minimum": 1 + }, + "media_type": { + "enum": [ + "application/dicom", + "image/tiff", + "application/octet-stream", + "application/zip" + ], + "type": "string", + "title": "Media Type" + }, + "resolution_mpp": { + "type": "number", + "title": "Resolution (mpp)", + "maximum": 0.5, + "minimum": 0.125 + }, + "staining_method": { + "enum": [ + "H&E" + ], + "type": "string", + "const": "H&E", + "title": "Staining Method" + }, + "checksum_base64_crc32c": { + "type": "string", + "title": "Base64 encoded big-endian CRC32C checksum" + } + }, + "description": "Schema of a slide.", + "additionalProperties": false + } + } + ], + "output_artifacts": [ + { + "name": "tissue_qc:geojson_polygons", + "mime_type": "image/tiff", + "metadata_schema": { + "type": "object", + "title": "GeoJsonPolygonsMetadata", + "$schema": "http://json-schema.org/draft-07/schema#", + "required": [ + "checksum_base64_crc32c", + "polygon_mpp" + ], + "properties": { + "media_type": { + "enum": [ + "application/geo+json" + ], + "type": "string", + "const": "application/geo+json", + "title": "Media Type", + "default": "application/geo+json" + }, + "polygon_mpp": { + "type": "number", + "title": "Polygon Mpp" + }, + "checksum_base64_crc32c": { + "type": "string", + "title": "Base64 encoded big-endian CRC32C checksum" + } + }, + "description": "Metadata corresponding to GeoJSON polygons.", + "additionalProperties": false + }, + "scope": "ITEM" + } + ], + "created_at": "2025-06-03T11:45:55.646211Z" + } + ] + } + } + }, + "401": { + "description": "Unauthorized - Invalid or missing authentication" + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + } + }, + "/v1/versions/{application_version_id}": { + "get": { + "tags": [ + "Public" + ], + "summary": "Application Version Details", + "description": "Get the application version details\n\nAllows caller to retrieve information about application version based on provided application version ID.", + "operationId": "application_version_details_v1_versions__application_version_id__get", + "security": [ + { + "OAuth2AuthorizationCodeBearer": [] + } + ], + "parameters": [ + { + "name": "application_version_id", "in": "path", "required": true, "schema": { "type": "string", - "title": "Version" + "title": "Application Version Id" } } ], @@ -223,11 +476,13 @@ "$ref": 
"#/components/schemas/VersionReadResponse" }, "example": { - "version_number": "0.4.4", + "application_version_id": "he-tme:v0.4.4", + "version": "0.4.4", + "application_id": "he-tme", "changelog": "New deployment", "input_artifacts": [ { - "name": "whole_slide_image", + "name": "user_slide", "mime_type": "image/tiff", "metadata_schema": { "type": "object", @@ -404,7 +659,7 @@ "visibility": "EXTERNAL" } ], - "released_at": "2025-04-16T08:45:20.655972Z" + "created_at": "2025-04-16T08:45:20.655972Z" } } } @@ -413,7 +668,7 @@ "description": "Forbidden - You don't have permission to see this version" }, "404": { - "description": "Not Found - Application version with given ID is not available to you or does not exist" + "description": "Not Found - Application version with given ID does not exist" }, "422": { "description": "Validation Error", @@ -433,9 +688,9 @@ "tags": [ "Public" ], - "summary": "List Runs", - "description": "List runs with filtering, sorting, and pagination capabilities.\n\nReturns paginated runs that were submitted by the user.", - "operationId": "list_runs_v1_runs_get", + "summary": "List Application Runs", + "description": "List application runs with filtering, sorting, and pagination capabilities.\n\nReturns paginated application runs that were triggered by the user.", + "operationId": "list_application_runs_v1_runs_get", "security": [ { "OAuth2AuthorizationCodeBearer": [] @@ -456,10 +711,6 @@ } ], "description": "Optional application ID filter", - "examples": [ - "tissue-segmentation", - "heta" - ], "title": "Application Id" }, "description": "Optional application ID filter" @@ -477,39 +728,13 @@ "type": "null" } ], - "description": "Optional Version Name", - "examples": [ - "1.0.2", - "1.0.1-beta2" - ], + "description": "Optional application version filter", "title": "Application Version" }, - "description": "Optional Version Name" - }, - { - "name": "external_id", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "description": "Optionally filter runs by items with this external ID", - "examples": [ - "slide_001", - "patient_12345_sample_A" - ], - "title": "External Id" - }, - "description": "Optionally filter runs by items with this external ID" + "description": "Optional application version filter" }, { - "name": "custom_metadata", + "name": "metadata", "in": "query", "required": false, "schema": { @@ -522,35 +747,30 @@ "type": "null" } ], - "description": "Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata.\n#### URL Encoding Required\n**Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding.\n\n#### Examples (Clear Format):\n- **Field existence**: `$.study` - Runs that have a study field defined\n- **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value\n- **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75\n- **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\"\n- **Complex conditions**: `$.resources ? 
(@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements\n\n#### Examples (URL-Encoded Format):\n- **Field existence**: `%24.study`\n- **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)`\n- **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)`\n- **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)`\n- **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)`\n\n#### Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations\n- Regular expressions use `like_regex` with standard regex syntax\n- **Remember to URL-encode the entire JSONPath expression when making HTTP requests**\n\n ", - "title": "Custom Metadata" + "description": "Use PostgreSQL JSONPath expressions to filter runs by their metadata.\n#### URL Encoding Required\n**Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding.\n\n#### Examples (Clear Format):\n- **Field existence**: `$.project` - Runs that have a project field defined\n- **Exact value match**: `$.project ? (@ == \"cancer-research\")` - Runs with specific project value\n- **Numeric comparison**: `$.duration_hours ? (@ < 2)` - Runs with duration less than 2 hours\n- **Array operations**: `$.tags[*] ? (@ == \"production\")` - Runs tagged with \"production\"\n- **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements\n\n#### Examples (URL-Encoded Format):\n- **Field existence**: `%24.project`\n- **Exact value match**: `%24.project%20%3F%20(%40%20%3D%3D%20%22cancer-research%22)`\n- **Numeric comparison**: `%24.duration_hours%20%3F%20(%40%20%3C%202)`\n- **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22production%22)`\n- **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)`\n\n#### Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations\n- Regular expressions use `like_regex` with standard regex syntax\n- **Remember to URL-encode the entire JSONPath expression when making HTTP requests**\n\n ", + "title": "Metadata" }, - "description": "Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata.\n#### URL Encoding Required\n**Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding.\n\n#### Examples (Clear Format):\n- **Field existence**: `$.study` - Runs that have a study field defined\n- **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value\n- **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75\n- **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\"\n- **Complex conditions**: `$.resources ? 
(@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements\n\n#### Examples (URL-Encoded Format):\n- **Field existence**: `%24.study`\n- **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)`\n- **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)`\n- **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)`\n- **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)`\n\n#### Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations\n- Regular expressions use `like_regex` with standard regex syntax\n- **Remember to URL-encode the entire JSONPath expression when making HTTP requests**\n\n ", + "description": "Use PostgreSQL JSONPath expressions to filter runs by their metadata.\n#### URL Encoding Required\n**Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding.\n\n#### Examples (Clear Format):\n- **Field existence**: `$.project` - Runs that have a project field defined\n- **Exact value match**: `$.project ? (@ == \"cancer-research\")` - Runs with specific project value\n- **Numeric comparison**: `$.duration_hours ? (@ < 2)` - Runs with duration less than 2 hours\n- **Array operations**: `$.tags[*] ? (@ == \"production\")` - Runs tagged with \"production\"\n- **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements\n\n#### Examples (URL-Encoded Format):\n- **Field existence**: `%24.project`\n- **Exact value match**: `%24.project%20%3F%20(%40%20%3D%3D%20%22cancer-research%22)`\n- **Numeric comparison**: `%24.duration_hours%20%3F%20(%40%20%3C%202)`\n- **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22production%22)`\n- **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)`\n\n#### Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations\n- Regular expressions use `like_regex` with standard regex syntax\n- **Remember to URL-encode the entire JSONPath expression when making HTTP requests**\n\n ", "examples": { - "no_filter": { - "summary": "No filter (returns all)", - "description": "Returns all items without filtering by custom metadata", - "value": "$" - }, "field_exists": { "summary": "Check if field exists", "description": "Find applications that have a project field defined", - "value": "$.study" + "value": "$.project" }, "field_has_value": { "summary": "Check if field has a certain value", "description": "Compare a field value against a certain value", - "value": "$.study ? (@ == \"abc-1\")" + "value": "$.project ? (@ == \"cancer-research\")" }, "numeric_comparisons": { "summary": "Compare to a numeric value of a field", "description": "Compare a field value against a numeric value of a field", - "value": "$.confidence_score ? (@ > 0.75)" + "value": "$.duration_hours ? 
(@ < 2)" }, "array_operations": { "summary": "Check if an array contains a certain value", "description": "Check if an array contains a certain value", - "value": "$.tags[*] ? (@ == \"draft\")" + "value": "$.tags[*] ? (@ == \"production\")" }, "complex_filters": { "summary": "Combine multiple checks", @@ -598,10 +818,10 @@ "type": "null" } ], - "description": "Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `run_id`\n- `application_version_id`\n- `organization_id`\n- `status`\n- `submitted_at`\n- `submitted_by`\n\n**Examples:**\n- `?sort=submitted_at` - Sort by creation time (ascending)\n- `?sort=-submitted_at` - Sort by creation time (descending)\n- `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending)\n", + "description": "Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `application_run_id`\n- `application_version_id`\n- `organization_id`\n- `status`\n- `triggered_at`\n- `triggered_by`\n\n**Examples:**\n- `?sort=triggered_at` - Sort by creation time (ascending)\n- `?sort=-triggered_at` - Sort by creation time (descending)\n- `?sort=status&sort=-triggered_at` - Sort by status, then by time (descending)\n", "title": "Sort" }, - "description": "Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `run_id`\n- `application_version_id`\n- `organization_id`\n- `status`\n- `submitted_at`\n- `submitted_by`\n\n**Examples:**\n- `?sort=submitted_at` - Sort by creation time (ascending)\n- `?sort=-submitted_at` - Sort by creation time (descending)\n- `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending)\n" + "description": "Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `application_run_id`\n- `application_version_id`\n- `organization_id`\n- `status`\n- `triggered_at`\n- `triggered_by`\n\n**Examples:**\n- `?sort=triggered_at` - Sort by creation time (ascending)\n- `?sort=-triggered_at` - Sort by creation time (descending)\n- `?sort=status&sort=-triggered_at` - Sort by status, then by time (descending)\n" } ], "responses": { @@ -614,13 +834,13 @@ "items": { "$ref": "#/components/schemas/RunReadResponse" }, - "title": "Response List Runs V1 Runs Get" + "title": "Response List Application Runs V1 Runs Get" } } } }, "404": { - "description": "Run not found" + "description": "Application run not found" }, "422": { "description": "Validation Error", @@ -638,9 +858,9 @@ "tags": [ "Public" ], - "summary": "Initiate Run", - "description": "This endpoint initiates a processing run for a selected application and version, and returns a `run_id` for tracking purposes.\n\nSlide processing occurs asynchronously, allowing you to retrieve results for individual slides as soon as they\ncomplete processing. The system typically processes slides in batches of four, though this number may be reduced\nduring periods of high demand.\nBelow is an example of the required payload for initiating an Atlas H&E TME processing run.\n\n\n### Payload\n\nThe payload includes `application_id`, optional `version_number`, and `items` base fields.\n\n`application_id` is the unique identifier for the application.\n`version_number` is the semantic version to use. 
If not provided, the latest available version will be used.\n\n`items` includes the list of the items to process (slides, in case of HETA application).\nEvery item has a set of standard fields defined by the API, plus the custom_metadata, specific to the\nchosen application.\n\nExample payload structure with the comments:\n```\n{\n application_id: \"he-tme\",\n version_number: \"1.0.0-beta\",\n items: [{\n \"external_id\": \"slide_1\",\n \"input_artifacts\": [{\n \"name\": \"user_slide\",\n \"download_url\": \"https://...\",\n \"custom_metadata\": {\n \"specimen\": {\n \"disease\": \"LUNG_CANCER\",\n \"tissue\": \"LUNG\"\n },\n \"staining_method\": \"H&E\",\n \"width_px\": 136223,\n \"height_px\": 87761,\n \"resolution_mpp\": 0.2628238,\n \"media-type\":\"image/tiff\",\n \"checksum_base64_crc32c\": \"64RKKA==\"\n }\n }]\n }]\n}\n```\n\n| Parameter | Description |\n| :---- | :---- |\n| `application_id` required | Unique ID for the application |\n| `version_number` optional | Semantic version of the application. If not provided, the latest available version will be used |\n| `items` required | List of submitted items (WSIs) with parameters described below. |\n| `external_id` required | Unique WSI name or ID for easy reference to items, provided by the caller. The external_id should be unique across all items of the run. |\n| `input_artifacts` required | List of provided artifacts for a WSI; at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide itself), but for some other applications this can be a slide and an segmentation map |\n| `name` required | Type of artifact; Atlas H&E-TME supports only `\"input_slide\"` |\n| `download_url` required | Signed URL to the input file in the S3 or GCS; Should be valid for at least 6 days |\n| `specimen: disease` required | Supported cancer types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) |\n| `specimen: tissue` required | Supported tissue types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) |\n| `staining_method` required | WSI stain /bio-marker; Atlas H&E-TME supports only `\"H&E\"` |\n| `width_px` required | Integer value. Number of pixels of the WSI in the X dimension. |\n| `height_px` required | Integer value. Number of pixels of the WSI in the Y dimension. |\n| `resolution_mpp` required | Resolution of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual |\n| `media-type` required | Supported media formats; available values are: image/tiff (for .tiff or .tif WSI) application/dicom (for DICOM ) application/zip (for zipped DICOM) application/octet-stream (for .svs WSI) |\n| `checksum_base64_crc32c` required | Base64 encoded big-endian CRC32C checksum of the WSI image |\n\n\n\n### Response\n\nThe endpoint returns the run UUID. After that the job is scheduled for the\nexecution in the background.\n\nTo check the status of the run call `v1/runs/{run_id}`.\n\n### Rejection\n\nApart from the authentication, authorization and malformed input error, the request can be\nrejected when the quota limit is exceeded. More details on quotas is described in the\ndocumentation", - "operationId": "create_run_v1_runs_post", + "summary": "Initiate Application Run", + "description": "This endpoint initiates a processing run for a selected application version and returns an `application_run_id` for tracking purposes.\n\nSlide processing occurs asynchronously, allowing you to retrieve results for individual slides as soon as they\ncomplete processing. 
The system typically processes slides in batches of four, though this number may be reduced\nduring periods of high demand.\nBelow is an example of the required payload for initiating an Atlas H&E TME processing run.\n\n\n### Payload\n\nThe payload includes `application_version_id` and `items` base fields.\n\n`application_version_id` is the id used for the `/v1/versions/{application_version_id}` endpoint.\n\n`items` includes the list of the items to process (slides, in case of HETA application).\nEvery item has a set of standard fields defined by the API, plus the metadata specific to the\nchosen application.\n\nExample payload structure with comments:\n```\n{\n application_version_id: \"he-tme:v1.0.0-beta\",\n items: [{\n \"reference\": \"slide_1\",\n \"input_artifacts\": [{\n \"name\": \"user_slide\",\n \"download_url\": \"https://...\",\n \"metadata\": {\n \"specimen\": {\n \"disease\": \"LUNG_CANCER\",\n \"tissue\": \"LUNG\"\n },\n \"staining_method\": \"H&E\",\n \"width_px\": 136223,\n \"height_px\": 87761,\n \"resolution_mpp\": 0.2628238,\n \"media-type\":\"image/tiff\",\n \"checksum_base64_crc32c\": \"64RKKA==\"\n }\n }]\n }]\n}\n```\n\n| Parameter | Description |\n| :---- | :---- |\n| `application_version_id` required | Unique ID for the application (must include version) |\n| `items` required | List of submitted items (WSIs) with parameters described below. |\n| `reference` required | Unique WSI name or ID for easy reference to results, provided by the caller. The reference should be unique across all items of the application run. |\n| `input_artifacts` required | List of provided artifacts for a WSI; at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide itself), but for some other applications this can be a slide and a segmentation map |\n| `name` required | Type of artifact; Atlas H&E-TME supports only `\"input_slide\"` |\n| `download_url` required | Signed URL to the input file in S3 or GCS; should be valid for at least 6 days |\n| `specimen: disease` required | Supported cancer types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) |\n| `specimen: tissue` required | Supported tissue types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) |\n| `staining_method` required | WSI stain/bio-marker; Atlas H&E-TME supports only `\"H&E\"` |\n| `width_px` required | Integer value. Number of pixels of the WSI in the X dimension. |\n| `height_px` required | Integer value. Number of pixels of the WSI in the Y dimension. |\n| `resolution_mpp` required | Resolution of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual |\n| `media-type` required | Supported media formats; available values are: image/tiff (for .tiff or .tif WSI), application/dicom (for DICOM), application/zip (for zipped DICOM), application/octet-stream (for .svs WSI) |\n| `checksum_base64_crc32c` required | Base64 encoded big-endian CRC32C checksum of the WSI image |\n\n\n\n### Response\n\nThe endpoint returns the application run UUID. After that, the job is scheduled for\nexecution in the background.\n\nTo check the status of the run, call `v1/runs/{application_run_id}`.\n\n### Rejection\n\nApart from authentication, authorization, and malformed input errors, the request can be\nrejected when the quota limit is exceeded. 
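The `checksum_base64_crc32c` value required above can be produced with any CRC32C implementation. One possible sketch using the third-party `google-crc32c` package (the library choice is an assumption, not mandated by the spec):

```python
# Compute the base64-encoded big-endian CRC32C checksum of a slide file.
# Requires: pip install google-crc32c (library choice is an assumption).
import base64

import google_crc32c

def checksum_base64_crc32c(path: str) -> str:
    checksum = google_crc32c.Checksum()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            checksum.update(chunk)
    # Checksum.digest() returns the 4 checksum bytes in big-endian order.
    return base64.b64encode(checksum.digest()).decode("ascii")

print(checksum_base64_crc32c("slide_1.tiff"))  # e.g. "64RKKA=="
```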
More details on quotas are described in the\ndocumentation", +      "operationId": "create_application_run_v1_runs_post", "security": [ { "OAuth2AuthorizationCodeBearer": [] } ], @@ -689,14 +909,14 @@ } } }, -  "/v1/runs/{run_id}": { +  "/v1/runs/{application_run_id}": { "get": { "tags": [ "Public" ], "summary": "Get run details", -      "description": "This endpoint allows the caller to retrieve the current status of a run along with other relevant run details.\n A run becomes available immediately after it is created through the POST `/runs/` endpoint.\n\n To download the output results, use GET `/runs/{run_id}/` items to get outputs for all slides.\nAccess to a run is restricted to the user who created it.", -      "operationId": "get_run_v1_runs__run_id__get", +      "description": "This endpoint allows the caller to retrieve the current status of an application run along with other relevant run details.\n A run becomes available immediately after it is created through the POST `/runs/` endpoint.\n\n To download the output results, use GET `/runs/{application_run_id}/results` to get outputs for all slides.\nAccess to a run is restricted to the user who created it.", +      "operationId": "get_run_v1_runs__application_run_id__get", "security": [ { "OAuth2AuthorizationCodeBearer": [] @@ -704,16 +924,16 @@ ], "parameters": [ { -          "name": "run_id", +          "name": "application_run_id", "in": "path", "required": true, "schema": { "type": "string", "format": "uuid", -            "description": "Run id, returned by `POST /runs/` endpoint", -            "title": "Run Id" +            "description": "Application run id, returned by `POST /runs/` endpoint", +            "title": "Application Run Id" }, -          "description": "Run id, returned by `POST /runs/` endpoint" +          "description": "Application run id, returned by `POST /runs/` endpoint" } ], "responses": { @@ -728,7 +948,7 @@ } }, "404": { -            "description": "Run not found because it was deleted." +            "description": "Application run not found because it was deleted." }, "403": { "description": "Forbidden - You don't have permission to see this run" @@ -746,14 +966,14 @@ } } }, -  "/v1/runs/{run_id}/cancel": { +  "/v1/runs/{application_run_id}/cancel": { "post": { "tags": [ "Public" ], -      "summary": "Cancel Run", -      "description": "The run can be canceled by the user who created the run.\n\nThe execution can be canceled any time while the application is not in a final state. The\npending items will not be processed and will not add to the cost.\n\nWhen the application is canceled, the already completed items stay available for download.", -      "operationId": "cancel_run_v1_runs__run_id__cancel_post", +      "summary": "Cancel Application Run", +      "description": "The application run can be canceled by the user who created it.\n\nThe execution can be canceled at any time while the run is not in a final state. 
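The run-details endpoint above lends itself to a polling helper. A sketch, with the final-state set taken from the ApplicationRunStatus enum defined later in this spec; the run response carrying a `status` field with those values is an assumption based on that enum:

```python
# Sketch: poll GET /v1/runs/{application_run_id} until a final state is
# reached. BASE_URL/TOKEN are the placeholders from the earlier sketches.
import time

import requests

FINAL_STATES = {"COMPLETED", "COMPLETED_WITH_ERROR",
                "CANCELED_USER", "CANCELED_SYSTEM", "REJECTED"}

def wait_for_run(run_id: str, poll_seconds: int = 60) -> dict:
    while True:
        resp = requests.get(
            f"{BASE_URL}/v1/runs/{run_id}",
            headers={"Authorization": f"Bearer {TOKEN}"},
        )
        resp.raise_for_status()
        run = resp.json()
        if run["status"] in FINAL_STATES:  # assumed field name, see lead-in
            return run
        time.sleep(poll_seconds)
```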
The\npending items will not be processed and will not add to the cost.\n\nWhen the application run is canceled, the already completed items stay available for download.", +      "operationId": "cancel_application_run_v1_runs__application_run_id__cancel_post", "security": [ { "OAuth2AuthorizationCodeBearer": [] @@ -761,26 +981,21 @@ ], "parameters": [ { -          "name": "run_id", +          "name": "application_run_id", "in": "path", "required": true, "schema": { "type": "string", "format": "uuid", -            "description": "Run id, returned by `POST /runs/` endpoint", -            "title": "Run Id" +            "description": "Application run id, returned by `POST /runs/` endpoint", +            "title": "Application Run Id" }, -          "description": "Run id, returned by `POST /runs/` endpoint" +          "description": "Application run id, returned by `POST /runs/` endpoint" } ], "responses": { "202": { -          "description": "Successful Response", -          "content": { -            "application/json": { -              "schema": {} -            } -          } +          "description": "Run cancelled successfully" }, "404": { "description": "Run not found" }, @@ -788,9 +1003,6 @@ "403": { "description": "Forbidden - You don't have permission to cancel this run" }, -        "409": { -          "description": "Conflict - The Run is already cancelled" -        }, "422": { "description": "Validation Error", "content": { @@ -804,14 +1016,14 @@ } } }, -  "/v1/runs/{run_id}/items": { +  "/v1/runs/{application_run_id}/results": { "get": { "tags": [ "Public" ], -      "summary": "List Run Items", -      "description": "List items in a run with filtering, sorting, and pagination capabilities.\n\nReturns paginated items within a specific run. Results can be filtered\nby item IDs, external_ids, status, and custom_metadata using JSONPath expressions.\n\n## JSONPath Metadata Filtering\nUse PostgreSQL JSONPath expressions to filter items using their custom_metadata.\n\n### Examples:\n- **Field existence**: `$.case_id` - Results that have a case_id field defined\n- **Exact value match**: `$.priority ? (@ == \"high\")` - Results with high priority\n- **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with high confidence\n- **Array operations**: `$.flags[*] ? (@ == \"reviewed\")` - Results flagged as reviewed\n- **Complex conditions**: `$.metrics ? (@.accuracy > 0.9 && @.recall > 0.8)` - Results meeting performance thresholds\n\n## Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations", -      "operationId": "list_run_items_v1_runs__run_id__items_get", +      "summary": "List Run Results", +      "description": "List results for items in an application run with filtering, sorting, and pagination capabilities.\n\nReturns paginated results for items within a specific application run. Results can be filtered\nby item IDs, references, status, and custom metadata using JSONPath expressions.\n\n## JSONPath Metadata Filtering\nUse PostgreSQL JSONPath expressions to filter results by their metadata.\n\n### Examples:\n- **Field existence**: `$.case_id` - Results that have a case_id field defined\n- **Exact value match**: `$.priority ? (@ == \"high\")` - Results with high priority\n- **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with high confidence\n- **Array operations**: `$.flags[*] ? (@ == \"reviewed\")` - Results flagged as reviewed\n- **Complex conditions**: `$.metrics ? 
(@.accuracy > 0.9 && @.recall > 0.8)` - Results meeting performance thresholds\n\n## Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations", +      "operationId": "list_run_results_v1_runs__application_run_id__results_get", "security": [ { "OAuth2AuthorizationCodeBearer": [] @@ -819,16 +1031,16 @@ ], "parameters": [ { -          "name": "run_id", +          "name": "application_run_id", "in": "path", "required": true, "schema": { "type": "string", "format": "uuid", -            "description": "Run id, returned by `POST /runs/` endpoint", -            "title": "Run Id" +            "description": "Application run id, returned by `POST /runs/` endpoint", +            "title": "Application Run Id" }, -          "description": "Run id, returned by `POST /runs/` endpoint" +          "description": "Application run id, returned by `POST /runs/` endpoint" }, { "name": "item_id__in", @@ -847,13 +1059,13 @@ "type": "null" } ], -            "description": "Filter for item ids", +            "description": "Filter for item IDs", "title": "Item Id In" }, -          "description": "Filter for item ids" +          "description": "Filter for item IDs" }, { -          "name": "external_id__in", +          "name": "reference__in", "in": "query", "required": false, "schema": { @@ -868,49 +1080,34 @@ "type": "null" } ], -            "description": "Filter for items by their external_id from the input payload", -            "title": "External Id In" -          }, -          "description": "Filter for items by their external_id from the input payload" -        }, -        { -          "name": "state", -          "in": "query", -          "required": false, -          "schema": { -            "anyOf": [ -              { -                "$ref": "#/components/schemas/ItemState" -              }, -              { -                "type": "null" -              } -            ], -            "description": "Filter items by their state", -            "title": "State" +            "description": "Filter for items by their reference from the input payload", +            "title": "Reference In" }, -          "description": "Filter items by their state" +          "description": "Filter for items by their reference from the input payload" }, { -          "name": "termination_reason", +          "name": "status__in", "in": "query", "required": false, "schema": { "anyOf": [ { -                "$ref": "#/components/schemas/ItemTerminationReason" +                "type": "array", +                "items": { +                  "$ref": "#/components/schemas/ItemStatus" +                } }, { "type": "null" } ], -            "description": "Filter items by their termination reason. Only applies to TERMINATED items.", -            "title": "Termination Reason" +            "description": "Filter for items in certain statuses", +            "title": "Status In" }, -          "description": "Filter items by their termination reason. Only applies to TERMINATED items." 
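The filter parameters above combine naturally in a single call. A sketch fetching only selected items of a run (placeholders as in the earlier examples; `requests` sends list parameters as repeated query keys, e.g. `reference__in=slide_1&reference__in=slide_2`):

```python
# Sketch: list results for specific references and statuses within a run.
import requests

resp = requests.get(
    f"{BASE_URL}/v1/runs/{run_id}/results",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={
        "reference__in": ["slide_1", "slide_2"],
        "status__in": ["SUCCEEDED", "ERROR_USER"],
        "sort": ["-item_id"],
    },
)
resp.raise_for_status()
results = resp.json()
```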
+ "description": "Filter for items in certain statuses" }, { - "name": "custom_metadata", + "name": "metadata", "in": "query", "required": false, "schema": { @@ -923,19 +1120,14 @@ "type": "null" } ], - "description": "JSONPath expression to filter items by their custom_metadata", - "title": "Custom Metadata" + "description": "JSONPath expression to filter results by their metadata", + "title": "Metadata" }, - "description": "JSONPath expression to filter items by their custom_metadata", + "description": "JSONPath expression to filter results by their metadata", "examples": { - "no_filter": { - "summary": "No filter (returns all)", - "description": "Returns all items without filtering by custom metadata", - "value": "$" - }, "field_exists": { "summary": "Check if field exists", - "description": "Find items that have a project field defined", + "description": "Find results that have a project field defined", "value": "$.project" }, "field_has_value": { @@ -999,10 +1191,10 @@ "type": "null" } ], - "description": "Sort the items by one or more fields. Use `+` for ascending and `-` for descending order.\n **Available fields:**\n- `item_id`\n- `run_id`\n- `external_id`\n- `custom_metadata`\n\n**Examples:**\n- `?sort=item_id` - Sort by id of the item (ascending)\n- `?sort=-external_id` - Sort by external ID (descending)\n- `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending)", + "description": "Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.\n **Available fields:**\n- `item_id`\n- `application_run_id`\n- `reference`\n- `status`\n- `metadata`\n\n**Examples:**\n- `?sort=item_id` - Sort by id of the item (ascending)\n- `?sort=-application_run_id` - Sort by id of the run (descending)\n- `?sort=status&sort=-item_idt` - Sort by status, then by id of the item (descending)", "title": "Sort" }, - "description": "Sort the items by one or more fields. Use `+` for ascending and `-` for descending order.\n **Available fields:**\n- `item_id`\n- `run_id`\n- `external_id`\n- `custom_metadata`\n\n**Examples:**\n- `?sort=item_id` - Sort by id of the item (ascending)\n- `?sort=-external_id` - Sort by external ID (descending)\n- `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending)" + "description": "Sort the results by one or more fields. 
Use `+` for ascending and `-` for descending order.\n **Available fields:**\n- `item_id`\n- `application_run_id`\n- `reference`\n- `status`\n- `metadata`\n\n**Examples:**\n- `?sort=item_id` - Sort by id of the item (ascending)\n- `?sort=-application_run_id` - Sort by id of the run (descending)\n- `?sort=status&sort=-item_id` - Sort by status, then by id of the item (descending)" } ], "responses": { @@ -1015,13 +1207,13 @@ "items": { "$ref": "#/components/schemas/ItemResultReadResponse" }, -                "title": "Response List Run Items V1 Runs Run Id Items Get" +                "title": "Response List Run Results V1 Runs Application Run Id Results Get" } } } }, "404": { -          "description": "Run not found" +          "description": "Application run not found" }, "422": { "description": "Validation Error", @@ -1034,16 +1226,14 @@ } } } -    } -  }, -  "/v1/runs/{run_id}/items/{external_id}": { -    "get": { +    }, +    "delete": { "tags": [ "Public" ], -      "summary": "Get Item By Run", -      "description": "Retrieve details of a specific item (slide) by its external ID and the run ID.", -      "operationId": "get_item_by_run_v1_runs__run_id__items__external_id__get", +      "summary": "Delete Application Run Results", +      "description": "This endpoint allows the caller to explicitly delete outputs generated by an application run.\nIt can only be invoked when the application run has reached a final state\n(COMPLETED, COMPLETED_WITH_ERROR, CANCELED_USER, or CANCELED_SYSTEM).\nNote that by default, all outputs are automatically deleted 30 days after the application run finishes,\n regardless of whether the caller explicitly requests deletion.", +      "operationId": "delete_application_run_results_v1_runs__application_run_id__results_delete", "security": [ { "OAuth2AuthorizationCodeBearer": [] @@ -1051,45 +1241,24 @@ ], "parameters": [ { -          "name": "run_id", +          "name": "application_run_id", "in": "path", "required": true, "schema": { "type": "string", "format": "uuid", -            "description": "The run id, returned by `POST /runs/` endpoint", -            "title": "Run Id" -          }, -          "description": "The run id, returned by `POST /runs/` endpoint" -        }, -        { -          "name": "external_id", -          "in": "path", -          "required": true, -          "schema": { -            "type": "string", -            "description": "The `external_id` that was defined for the item by the customer that triggered the run.", -            "title": "External Id" +            "description": "Application run id, returned by `POST /runs/` endpoint", +            "title": "Application Run Id" }, -          "description": "The `external_id` that was defined for the item by the customer that triggered the run." 
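Once a run is in a final state, its outputs can be removed explicitly rather than waiting for the 30-day automatic deletion. A sketch against the delete endpoint above (placeholders as before):

```python
# Sketch: explicitly delete the outputs of a finished application run.
import requests

resp = requests.delete(
    f"{BASE_URL}/v1/runs/{run_id}/results",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
if resp.status_code == 204:
    print("All application outputs deleted")
else:
    resp.raise_for_status()
```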
+ "description": "Application run id, returned by `POST /runs/` endpoint" } ], "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/ItemResultReadResponse" - } - } - } + "204": { + "description": "All application outputs successfully deleted" }, "404": { - "description": "Not Found - Item with given ID does not exist" - }, - "403": { - "description": "Forbidden - You don't have permission to see this item" + "description": "Application run not found" }, "422": { "description": "Validation Error", @@ -1104,14 +1273,14 @@ } } }, - "/v1/runs/{run_id}/artifacts": { - "delete": { + "/v1/items/{item_id}": { + "get": { "tags": [ "Public" ], - "summary": "Delete Run Items", - "description": "This endpoint allows the caller to explicitly delete artifacts generated by a run.\nIt can only be invoked when the run has reached a final state\n(PROCESSED, CANCELED_SYSTEM, CANCELED_USER).\nNote that by default, all artifacts are automatically deleted 30 days after the run finishes,\n regardless of whether the caller explicitly requests deletion.", - "operationId": "delete_run_items_v1_runs__run_id__artifacts_delete", + "summary": "Get Item", + "description": "Retrieve details of a specific item (slide) by its ID.", + "operationId": "get_item_v1_items__item_id__get", "security": [ { "OAuth2AuthorizationCodeBearer": [] @@ -1119,29 +1288,32 @@ ], "parameters": [ { - "name": "run_id", + "name": "item_id", "in": "path", "required": true, "schema": { "type": "string", "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "Run id, returned by `POST /runs/` endpoint" + "title": "Item Id" + } } ], "responses": { "200": { - "description": "Run artifacts deleted", + "description": "Successful Response", "content": { "application/json": { - "schema": {} + "schema": { + "$ref": "#/components/schemas/ItemReadResponse" + } } } }, + "403": { + "description": "Forbidden - You don't have permission to see this item" + }, "404": { - "description": "Run not found" + "description": "Not Found - Item with given ID does not exist" }, "422": { "description": "Validation Error", @@ -1156,152 +1328,22 @@ } } }, - "/v1/runs/{run_id}/custom-metadata": { - "put": { + "/v1/me": { + "get": { "tags": [ "Public" ], - "summary": "Put Run Custom Metadata", - "operationId": "put_run_custom_metadata_v1_runs__run_id__custom_metadata_put", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "Run id, returned by `POST /runs/` endpoint" - } - ], - "requestBody": { - "required": true, - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/CustomMetadataUpdateRequest" - } - } - } - }, + "summary": "Get current user", + "description": "Retrieves your identity details, including name, email, and organization.\nThis is useful for verifying that the request is being made under the correct user profile\nand organization context, as well as confirming that the expected environment variables are correctly set\n(in case you are using Python SDK)", + "operationId": "get_me_v1_me_get", "responses": { "200": { "description": "Successful Response", "content": { "application/json": { - "schema": {} - } - } - }, - "404": 
{ - "description": "Run not found" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/items/{external_id}/custom-metadata": { - "put": { - "tags": [ - "Public" - ], - "summary": "Put Item Custom Metadata By Run", - "operationId": "put_item_custom_metadata_by_run_v1_runs__run_id__items__external_id__custom_metadata_put", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "The run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "The run id, returned by `POST /runs/` endpoint" - }, - { - "name": "external_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "description": "The `external_id` that was defined for the item by the customer that triggered the run.", - "title": "External Id" - }, - "description": "The `external_id` that was defined for the item by the customer that triggered the run." - } - ], - "requestBody": { - "required": true, - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/CustomMetadataUpdateRequest" - } - } - } - }, - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": {} - } - } - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/me": { - "get": { - "tags": [ - "Public" - ], - "summary": "Get current user", - "description": "Retrieves your identity details, including name, email, and organization.\nThis is useful for verifying that the request is being made under the correct user profile\nand organization context, as well as confirming that the expected environment variables are correctly set\n(in case you are using Python SDK)", - "operationId": "get_me_v1_me_get", - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/MeReadResponse" - } + "schema": { + "$ref": "#/components/schemas/MeReadResponse" + } } } } @@ -1354,14 +1396,6 @@ "examples": [ "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment." 
] - }, - "versions": { - "items": { - "$ref": "#/components/schemas/ApplicationVersion" - }, - "type": "array", - "title": "Versions", - "description": "All version numbers available to the user" } }, "type": "object", @@ -1369,167 +1403,101 @@ "application_id", "name", "regulatory_classes", - "description", - "versions" + "description" ], "title": "ApplicationReadResponse", "description": "Response schema for `List available applications` and `Read Application by Id` endpoints" }, - "ApplicationReadShortResponse": { + "ApplicationRunStatus": { + "type": "string", + "enum": [ + "CANCELED_SYSTEM", + "CANCELED_USER", + "COMPLETED", + "COMPLETED_WITH_ERROR", + "RECEIVED", + "REJECTED", + "RUNNING", + "SCHEDULED" + ], + "title": "ApplicationRunStatus" + }, + "ApplicationVersionReadResponse": { "properties": { - "application_id": { + "application_version_id": { "type": "string", - "title": "Application Id", - "description": "Application ID", + "title": "Application Version Id", + "description": "Application version ID", "examples": [ - "he-tme" + "he-tme:v0.0.1" ] }, - "name": { + "version": { "type": "string", - "title": "Name", - "description": "Application display name", + "title": "Version", + "description": "Semantic version of the application", "examples": [ - "Atlas H&E-TME" + "0.0.1" ] }, - "regulatory_classes": { - "items": { - "type": "string" - }, - "type": "array", - "title": "Regulatory Classes", - "description": "Regulatory classes, to which the applications comply with. Possible values include: RUO, IVDR, FDA.", - "examples": [ - [ - "RUO" - ] - ] - }, - "description": { + "application_id": { "type": "string", - "title": "Description", - "description": "Describing what the application can do ", - "examples": [ - "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment." 
- ] + "title": "Application Id", + "description": "Application ID" }, - "latest_version": { + "flow_id": { "anyOf": [ { - "$ref": "#/components/schemas/ApplicationVersion" + "type": "string", + "format": "uuid" }, { "type": "null" } ], - "description": "The version with highest version number available to the user" - } - }, - "type": "object", - "required": [ - "application_id", - "name", - "regulatory_classes", - "description" - ], - "title": "ApplicationReadShortResponse", - "description": "Response schema for `List available applications` and `Read Application by Id` endpoints" - }, - "ApplicationVersion": { - "properties": { - "number": { + "title": "Flow Id", + "description": "Flow ID, used internally by the platform" + }, + "changelog": { "type": "string", - "title": "Number", - "description": "The number of the latest version", - "examples": [ - "1.0.0" - ] + "title": "Changelog", + "description": "Description of the changes relative to the previous version" + }, + "input_artifacts": { + "items": { + "$ref": "#/components/schemas/InputArtifactReadResponse" + }, + "type": "array", + "title": "Input Artifacts", + "description": "Lists required input fields, that should be provided by the caller" + }, + "output_artifacts": { + "items": { + "$ref": "#/components/schemas/OutputArtifactReadResponse" + }, + "type": "array", + "title": "Output Artifacts", + "description": "Lists the structure of the output artifacts generated by the application" }, - "released_at": { + "created_at": { "type": "string", "format": "date-time", - "title": "Released At", - "description": "The timestamp for when the application version was made available in the Platform", - "examples": [ - "2025-09-15T10:30:45.123Z" - ] + "title": "Created At", + "description": "The timestamp when the application version was registered" } }, "type": "object", "required": [ - "number", - "released_at" - ], - "title": "ApplicationVersion" - }, - "ArtifactOutput": { - "type": "string", - "enum": [ - "NONE", - "AVAILABLE", - "DELETED_BY_USER", - "DELETED_BY_SYSTEM" - ], - "title": "ArtifactOutput" - }, - "ArtifactState": { - "type": "string", - "enum": [ - "PENDING", - "PROCESSING", - "TERMINATED" - ], - "title": "ArtifactState" - }, - "ArtifactTerminationReason": { - "type": "string", - "enum": [ - "SUCCEEDED", - "USER_ERROR", - "SYSTEM_ERROR", - "SKIPPED" + "application_version_id", + "version", + "application_id", + "changelog", + "input_artifacts", + "output_artifacts", + "created_at" ], - "title": "ArtifactTerminationReason" - }, - "CustomMetadataUpdateRequest": { - "properties": { - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "JSON metadata that should be set for the run", - "examples": [ - { - "department": "D1", - "study": "abc-1" - } - ] - }, - "custom_metadata_checksum": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata Checksum", - "description": "Optional field to verify that the latest custom metadata was known. 
If set to the checksum retrieved via the /runs endpoint, it must match the checksum of the current value in the database.", - "examples": [ - "f54fe109" - ] - } - }, - "type": "object", - "title": "CustomMetadataUpdateRequest" + "title": "ApplicationVersionReadResponse", + "description": "Response schema for `List Available Application Versions` endpoint" }, "HTTPValidationError": { "properties": { @@ -1616,20 +1584,46 @@ "title": "InputArtifactCreationRequest", "description": "Input artifact containing the slide image and associated metadata." }, + "InputArtifactReadResponse": { + "properties": { + "name": { + "type": "string", + "title": "Name" + }, + "mime_type": { + "type": "string", + "pattern": "^\\w+\\/\\w+[-+.|\\w+]+\\w+$", + "title": "Mime Type", + "examples": [ + "image/tiff" + ] + }, + "metadata_schema": { + "type": "object", + "title": "Metadata Schema" + } + }, + "type": "object", + "required": [ + "name", + "mime_type", + "metadata_schema" + ], + "title": "InputArtifactReadResponse" + }, "ItemCreationRequest": { "properties": { - "external_id": { + "reference": { "type": "string", - "maxLength": 255, - "title": "External Id", - "description": "Unique identifier for this item within the run. Used for referencing items. Must be unique across all items in the same run", + "title": "Reference", + "description": "Unique identifier for this item within the run. Used for referencing results. Must be unique across all items in the same application run", "examples": [ "slide_1", "patient_001_slide_A", "sample_12345" ] }, - "custom_metadata": { + "metadata": { "anyOf": [ { "type": "object" @@ -1638,8 +1632,8 @@ "type": "null" } ], - "title": "Custom Metadata", - "description": "Optional JSON custom_metadata to store additional information alongside an item.", + "title": "Metadata", + "description": "Optional JSON metadata to store additional information alongside an item.", "examples": [ { "case": "abc" @@ -1677,83 +1671,136 @@ }, "type": "object", "required": [ - "external_id", + "reference", "input_artifacts" ], "title": "ItemCreationRequest", - "description": "Individual item (slide) to be processed in a run." + "description": "Individual item (slide) to be processed in an application run." }, - "ItemOutput": { - "type": "string", - "enum": [ - "NONE", - "FULL" - ], - "title": "ItemOutput" - }, - "ItemResultReadResponse": { + "ItemReadResponse": { "properties": { "item_id": { "type": "string", "format": "uuid", - "title": "Item Id", - "description": "Item UUID generated by the Platform" + "title": "Item Id" }, - "external_id": { + "application_run_id": { + "anyOf": [ + { + "type": "string", + "format": "uuid" + }, + { + "type": "null" + } + ], + "title": "Application Run Id", + "examples": [ + "3fa85f64-5717-4562-b3fc-2c963f66afa6" + ] + }, + "reference": { "type": "string", - "title": "External Id", - "description": "The external_id of the item from the user payload", + "title": "Reference", "examples": [ - "slide_1" + "sample-123" ] }, - "custom_metadata": { + "status": { + "$ref": "#/components/schemas/ItemStatus" + }, + "message": { "anyOf": [ { - "type": "object" + "type": "string" }, { "type": "null" } ], - "title": "Custom Metadata", - "description": "The custom_metadata of the item that has been provided by the user on run creation." 
+ "title": "Message", + "examples": [ + "Processing started" + ] }, - "custom_metadata_checksum": { + "terminated_at": { "anyOf": [ { - "type": "string" + "type": "string", + "format": "date-time" }, { "type": "null" } ], - "title": "Custom Metadata Checksum", - "description": "The checksum of the `custom_metadata` field.\nCan be used in the `PUT /runs/{run-id}/items/{external_id}/custom_metadata`\nrequest to avoid unwanted override of the values in concurrent requests.", + "title": "Terminated At", + "description": "Timestamp showing when the item reached a terminal state.", + "examples": [ + "2024-01-15T10:30:45.123Z" + ] + } + }, + "type": "object", + "required": [ + "item_id", + "reference", + "status" + ], + "title": "ItemReadResponse", + "description": "Response schema for `Get Item` endpoint" + }, + "ItemResultReadResponse": { + "properties": { + "item_id": { + "type": "string", + "format": "uuid", + "title": "Item Id", + "description": "Item UUID generated by the Platform" + }, + "application_run_id": { + "type": "string", + "format": "uuid", + "title": "Application Run Id", + "description": "Application run UUID to which the item belongs" + }, + "reference": { + "type": "string", + "title": "Reference", + "description": "The reference of the item from the user payload", "examples": [ - "f54fe109" + "slide_1" ] }, - "state": { - "$ref": "#/components/schemas/ItemState", - "description": "\nThe item moves from `PENDING` to `PROCESSING` to `TERMINATED` state.\nWhen terminated, consult the `termination_reason` property to see whether it was successful.\n " + "metadata": { + "anyOf": [ + { + "type": "object" + }, + { + "type": "null" + } + ], + "title": "Metadata", + "description": "The metadata of the item that has been provided by the user on application run creation." }, - "output": { - "$ref": "#/components/schemas/ItemOutput", - "description": "The output status of the item (NONE, FULL)" + "status": { + "$ref": "#/components/schemas/ItemStatus", + "description": "\nWhen the item is not processed yet, the status is set to `pending`.\n\nWhen the item is successfully finished, status is set to `succeeded`, and the processing results\nbecome available for download in `output_artifacts` field.\n\nWhen the item processing is failed because the provided item is invalid, the status is set to\n`error_user`. When the item processing failed because of the error in the model or platform,\nthe status is set to `error_system`. 
When the application run is canceled, the status of all\npending items is set to either `canceled_user` or `canceled_system`.\n " }, -        "termination_reason": { +        "error": { "anyOf": [ { -              "$ref": "#/components/schemas/ItemTerminationReason" +              "type": "string" }, { "type": "null" } ], -          "description": "\nWhen the `state` is `TERMINATED` this will explain why\n`SUCCEEDED` -> Successful processing.\n`USER_ERROR` -> Failed because the provided input was invalid.\n`SYSTEM_ERROR` -> There was an error in the model or platform.\n`SKIPPED` -> Was cancelled\n" +          "title": "Error", +          "description": "\n The error message in case the item is in `error_system` or `error_user` state\n ", +          "deprecated": true }, -        "error_message": { +        "message": { "anyOf": [ { "type": "string" }, { "type": "null" } ], -          "title": "Error Message", -          "description": "\n The error message in case the `termination_reason` is in `USER_ERROR` or `SYSTEM_ERROR`\n ", +          "title": "Message", +          "description": "\nThe error message in case the item is in `error_system` or `error_user` state\n ", "examples": [ "This item was not processed because the threshold of 3 items finishing in error state (user or system error) was reached before the item was processed.", "The item was not processed because the run was cancelled by the user before the item was processed.User error raised by Application because the input data provided by the user cannot be processed:\nThe image width is 123000 px, but the maximum width is 100000 px", @@ -1794,52 +1841,32 @@ "type": "array", "title": "Output Artifacts", "description": "\nThe list of the results generated by the application algorithm. The number of files and their\ntypes depend on the particular application version; call `/v1/versions/{application_version_id}` to get\nthe details.\n " -        }, -        "error_code": { -          "anyOf": [ -            { -              "type": "string" -            }, -            { -              "type": "null" -            } -          ], -          "title": "Error Code", -          "description": "Error code describing the error that occurred during item processing.", -          "readOnly": true } }, "type": "object", "required": [ "item_id", -        "external_id", -        "custom_metadata", -        "state", -        "output", -        "output_artifacts", -        "error_code" +        "application_run_id", +        "reference", +        "metadata", +        "status", +        "message", +        "output_artifacts" ], "title": "ItemResultReadResponse", -      "description": "Response schema for items in `List Run Items` endpoint" +      "description": "Response schema for items in `List Run Results` endpoint" }, -    "ItemState": { +    "ItemStatus": { "type": "string", "enum": [ "PENDING", -        "PROCESSING", -        "TERMINATED" -      ], -      "title": "ItemState" -    }, -    "ItemTerminationReason": { -      "type": "string", -      "enum": [ -        "SUCCEEDED", -        "USER_ERROR", -        "SYSTEM_ERROR", -        "SKIPPED" +        "CANCELED_USER", +        "CANCELED_SYSTEM", +        "ERROR_USER", +        "ERROR_SYSTEM", +        "SUCCEEDED" ], -      "title": "ItemTerminationReason" +      "title": "ItemStatus" }, "MeReadResponse": { "properties": { @@ -1903,7 +1930,7 @@ "title": "Aignostics Bucket Hmac Access Key Id", "description": "HMAC access key ID for the Aignostics-provided storage bucket. Used to authenticate requests for uploading files and generating signed URLs", "examples": [ -            "YOUR_HMAC_ACCESS_KEY_ID" +            "AKIAIOSFODNN7EXAMPLE" ] }, "aignostics_bucket_hmac_secret_access_key": { @@ -1911,7 +1938,7 @@ "title": "Aignostics Bucket Hmac Secret Access Key", "description": "HMAC secret access key paired with the access key ID. 
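The HMAC credentials in MeReadResponse can be used to presign the `download_url` values that the run payload expects. One possible sketch via `boto3`; the S3-compatible endpoint shown (Google Cloud Storage interop) is an assumption, since this spec excerpt does not name the storage endpoint:

```python
# Sketch: presign a download URL for a slide uploaded to the Aignostics
# bucket, using the HMAC credentials returned by GET /v1/me.
import boto3

def presign_slide_url(me: dict, key: str) -> str:
    s3 = boto3.client(
        "s3",
        endpoint_url="https://storage.googleapis.com",  # assumed endpoint
        aws_access_key_id=me["aignostics_bucket_hmac_access_key_id"],
        aws_secret_access_key=me["aignostics_bucket_hmac_secret_access_key"],
    )
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": me["aignostics_bucket_name"], "Key": key},
        ExpiresIn=6 * 24 * 3600,  # download_url must stay valid >= 6 days
    )
```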
Keep this credential secure.", "examples": [ - "YOUR/HMAC/SECRET_ACCESS_KEY" + "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" ] }, "aignostics_bucket_name": { @@ -1995,6 +2022,37 @@ ], "title": "OutputArtifact" }, + "OutputArtifactReadResponse": { + "properties": { + "name": { + "type": "string", + "title": "Name" + }, + "mime_type": { + "type": "string", + "pattern": "^\\w+\\/\\w+[-+.|\\w+]+\\w+$", + "title": "Mime Type", + "examples": [ + "application/vnd.apache.parquet" + ] + }, + "metadata_schema": { + "type": "object", + "title": "Metadata Schema" + }, + "scope": { + "$ref": "#/components/schemas/OutputArtifactScope" + } + }, + "type": "object", + "required": [ + "name", + "mime_type", + "metadata_schema", + "scope" + ], + "title": "OutputArtifactReadResponse" + }, "OutputArtifactResultReadResponse": { "properties": { "output_artifact_id": { @@ -2012,47 +2070,9 @@ ] }, "metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], + "type": "object", "title": "Metadata", - "description": "The metadata of the output artifact, provided by the application. Can only be None if the artifact itself was deleted." - }, - "state": { - "$ref": "#/components/schemas/ArtifactState", - "description": "The current state of the artifact (PENDING, PROCESSING, TERMINATED)" - }, - "termination_reason": { - "anyOf": [ - { - "$ref": "#/components/schemas/ArtifactTerminationReason" - }, - { - "type": "null" - } - ], - "description": "The reason for termination when state is TERMINATED" - }, - "output": { - "$ref": "#/components/schemas/ArtifactOutput", - "description": "The output status of the artifact (NONE, FULL)" - }, - "error_message": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Message", - "description": "Error message when artifact is in error state" + "description": "The metadata of the output artifact, provided by the application" }, "download_url": { "anyOf": [ @@ -2068,29 +2088,14 @@ ], "title": "Download Url", "description": "\nThe download URL to the output file. The URL is valid for 1 hour after the endpoint is called.\nA new URL is generated every time the endpoint is called.\n " - }, - "error_code": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Code", - "description": "Error code describing the error that occurred during artifact processing.", - "readOnly": true } }, "type": "object", "required": [ "output_artifact_id", "name", - "state", - "output", - "download_url", - "error_code" + "metadata", + "download_url" ], "title": "OutputArtifactResultReadResponse" }, @@ -2110,32 +2115,94 @@ ], "title": "OutputArtifactVisibility" }, - "RunCreationRequest": { + "PayloadInputArtifact": { "properties": { - "application_id": { + "input_artifact_id": { "type": "string", - "title": "Application Id", - "description": "Unique ID for the application to use for processing", - "examples": [ - "he-tme" - ] + "format": "uuid", + "title": "Input Artifact Id" }, - "version_number": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Version Number", - "description": "Semantic version of the application to use for processing. 
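Because each `download_url` in an output artifact is only valid for one hour after the results endpoint is called, results should be fetched promptly. A sketch building on the results listing above:

```python
# Sketch: download every available output artifact of an item right away.
import requests

def download_outputs(item: dict) -> None:
    for artifact in item["output_artifacts"]:
        url = artifact.get("download_url")
        if not url:
            continue
        blob = requests.get(url, timeout=300)
        blob.raise_for_status()
        # ":" in artifact names (e.g. tissue_qc:geojson_polygons) is replaced
        # to keep file names portable.
        name = artifact["name"].replace(":", "_")
        with open(f'{item["reference"]}_{name}', "wb") as f:
            f.write(blob.content)
```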
If not provided, the latest available version will be used", + "metadata": { + "type": "object", + "title": "Metadata" + }, + "download_url": { + "type": "string", + "minLength": 1, + "format": "uri", + "title": "Download Url" + } + }, + "type": "object", + "required": [ + "metadata", + "download_url" + ], + "title": "PayloadInputArtifact" + }, + "PayloadItem": { + "properties": { + "item_id": { + "type": "string", + "format": "uuid", + "title": "Item Id" + }, + "input_artifacts": { + "additionalProperties": { + "$ref": "#/components/schemas/PayloadInputArtifact" + }, + "type": "object", + "title": "Input Artifacts" + }, + "output_artifacts": { + "additionalProperties": { + "$ref": "#/components/schemas/PayloadOutputArtifact" + }, + "type": "object", + "title": "Output Artifacts" + } + }, + "type": "object", + "required": [ + "item_id", + "input_artifacts", + "output_artifacts" + ], + "title": "PayloadItem" + }, + "PayloadOutputArtifact": { + "properties": { + "output_artifact_id": { + "type": "string", + "format": "uuid", + "title": "Output Artifact Id" + }, + "data": { + "$ref": "#/components/schemas/TransferUrls" + }, + "metadata": { + "$ref": "#/components/schemas/TransferUrls" + } + }, + "type": "object", + "required": [ + "output_artifact_id", + "data", + "metadata" + ], + "title": "PayloadOutputArtifact" + }, + "RunCreationRequest": { + "properties": { + "application_version_id": { + "type": "string", + "title": "Application Version Id", + "description": "Unique ID for the application version to use for processing. Must include version suffix (e.g., 'he-tme:v1.0.0-beta')", "examples": [ - "1.0.0-beta1" + "he-tme:v1.0.0-beta" ] }, - "custom_metadata": { + "metadata": { "anyOf": [ { "type": "object" @@ -2144,8 +2211,8 @@ "type": "null" } ], - "title": "Custom Metadata", - "description": "Optional JSON metadata to store additional information alongside the run", + "title": "Metadata", + "description": "Optional JSON metadata to store additional information alongside the application run", "examples": [ { "department": "D1", @@ -2164,7 +2231,6 @@ "examples": [ [ { - "external_id": "slide_1", "input_artifacts": [ { "download_url": "https://example-bucket.s3.amazonaws.com/slide1.tiff?signature=...", @@ -2182,7 +2248,8 @@ }, "name": "input_slide" } - ] + ], + "reference": "slide_1" } ] ] @@ -2190,19 +2257,19 @@ }, "type": "object", "required": [ - "application_id", + "application_version_id", "items" ], "title": "RunCreationRequest", - "description": "Request schema for `Initiate Run` endpoint.\nIt describes which application version is chosen, and which user data should be processed." + "description": "Request schema for `Initiate Application Run` endpoint.\nIt describes which application version is chosen, and which user data should be processed." 
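For orientation while reviewing this schema change: below is a minimal sketch of a request body matching the revised `RunCreationRequest` — a single `application_version_id` replaces the old `application_id`/`version_number` pair, `metadata` replaces `custom_metadata`, and each item carries a `reference` instead of an `external_id`. Values are the illustrative examples from the spec itself, not a working payload.

```python
# Sketch of a RunCreationRequest body under the revised schema.
# download_url and the artifact metadata values are the spec's own
# examples; they are placeholders, not live credentials or URLs.
run_creation_request = {
    "application_version_id": "he-tme:v1.0.0-beta",  # must include the version suffix
    "metadata": {"department": "D1"},  # optional free-form JSON stored with the run
    "items": [
        {
            "reference": "slide_1",  # unique across all items of the run
            "input_artifacts": [
                {
                    "name": "input_slide",
                    "download_url": "https://example-bucket.s3.amazonaws.com/slide1.tiff?signature=...",
                    "metadata": {
                        "specimen": {"disease": "LUNG_CANCER", "tissue": "LUNG"},
                        "staining_method": "H&E",
                        "width_px": 136223,
                        "height_px": 87761,
                        "resolution_mpp": 0.2628238,
                        "media-type": "image/tiff",
                        "checksum_base64_crc32c": "64RKKA==",
                    },
                }
            ],
        }
    ],
}
```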
}, "RunCreationResponse": { "properties": { - "run_id": { + "application_run_id": { "type": "string", "format": "uuid", - "title": "Run Id", - "default": "Run id", + "title": "Application Run Id", + "default": "Application run id", "examples": [ "3fa85f64-5717-4562-b3fc-2c963f66afa6" ] @@ -2211,124 +2278,46 @@ "type": "object", "title": "RunCreationResponse" }, - "RunItemStatistics": { - "properties": { - "item_count": { - "type": "integer", - "title": "Item Count", - "description": "Total number of the items in the run" - }, - "item_pending_count": { - "type": "integer", - "title": "Item Pending Count", - "description": "The number of items in `PENDING` state" - }, - "item_processing_count": { - "type": "integer", - "title": "Item Processing Count", - "description": "The number of items in `PROCESSING` state" - }, - "item_user_error_count": { - "type": "integer", - "title": "Item User Error Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `USER_ERROR`" - }, - "item_system_error_count": { - "type": "integer", - "title": "Item System Error Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `SYSTEM_ERROR`" - }, - "item_skipped_count": { - "type": "integer", - "title": "Item Skipped Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `SKIPPED`" - }, - "item_succeeded_count": { - "type": "integer", - "title": "Item Succeeded Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `SUCCEEDED`" - } - }, - "type": "object", - "required": [ - "item_count", - "item_pending_count", - "item_processing_count", - "item_user_error_count", - "item_system_error_count", - "item_skipped_count", - "item_succeeded_count" - ], - "title": "RunItemStatistics" - }, - "RunOutput": { - "type": "string", - "enum": [ - "NONE", - "PARTIAL", - "FULL" - ], - "title": "RunOutput" - }, "RunReadResponse": { "properties": { - "run_id": { + "application_run_id": { "type": "string", "format": "uuid", - "title": "Run Id", + "title": "Application Run Id", "description": "UUID of the application" }, - "application_id": { + "application_version_id": { "type": "string", - "title": "Application Id", - "description": "Application id", + "title": "Application Version Id", + "description": "ID of the application version", "examples": [ - "he-tme" + "he-tme:v0.4.4" ] }, - "version_number": { + "organization_id": { "type": "string", - "title": "Version Number", - "description": "Application version number", + "title": "Organization Id", + "description": "Organization of the owner of the application run", "examples": [ - "0.4.4" + "org-123" ] }, - "state": { - "$ref": "#/components/schemas/RunState", - "description": "When the run request is received by the Platform, the `state` of it is set to\n`PENDING`. The state changes to `PROCESSING` when at least one item is being processed. After `PROCESSING`, the\nstate of the run can switch back to `PENDING` if there are no processing items, or to `TERMINATED` when the run\nfinished processing." - }, - "output": { - "$ref": "#/components/schemas/RunOutput", - "description": "The status of the output of the run. When 0 items are successfully processed the output is\n`NONE`, after one item is successfully processed, the value is set to `PARTIAL`. When all items of the run are\nsuccessfully processed, the output is set to `FULL`." 
- }, - "termination_reason": { + "user_payload": { "anyOf": [ { - "$ref": "#/components/schemas/RunTerminationReason" + "$ref": "#/components/schemas/UserPayload" }, { "type": "null" } ], - "description": "The termination reason of the run. When the run is not in `TERMINATED` state, the\n termination_reason is `null`. If all items of of the run are processed (successfully or with an error), then\n termination_reason is set to `ALL_ITEMS_PROCESSED`. If the run is cancelled by the user, the value is set to\n `CANCELED_BY_USER`. If the run reaches the threshold of number of failed items, the Platform cancels the run\n and sets the termination_reason to `CANCELED_BY_SYSTEM`.\n " + "description": "Field used internally by the Platform" }, - "error_code": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Code", - "description": "When the termination_reason is set to CANCELED_BY_SYSTEM, the error_code is set to define the\n structured description of the error.", - "examples": [ - "SCHEDULER.ITEMS_WITH_ERROR_THRESHOLD_REACHED" - ] + "status": { + "$ref": "#/components/schemas/ApplicationRunStatus", + "description": "When the application run request is received by the Platform, the `status` of it is set to\n`running`. When the application run is scheduled, the input items will be processed and the result will\nbe generated incrementally. The results can be downloaded via `/v1/runs/{run_id}/results` endpoint.\nWhen all items are processed and all results are generated, the application status is set to\n`completed`. If the processing is done, but some items fail, the status is set to\n`completed_with_error`.\n\nWhen the application run reaches the threshold of number of failed items, the whole\napplication run is set to `canceled_system` and the remaining pending items are not processed.\nWhen the application run fails, the finished item results are available for download.\n\nIf the application run is canceled by calling `POST /v1/runs/{run_id}/cancel` endpoint, the\nprocessing of the items is stopped, and the application status is set to `cancelled_user`\n " }, - "error_message": { + "message": { "anyOf": [ { "type": "string" @@ -2337,17 +2326,15 @@ "type": "null" } ], - "title": "Error Message", - "description": "When the termination_reason is set to CANCELED_BY_SYSTEM, the error_message is set to provide\n more insights to the error cause.", + "title": "Message", + "description": "The description of the run error", "examples": [ - "Run canceled given errors on more than 10 items." + "The run was cancelled because the threshold of 3 items finishing in error state was reached. Query /runs/{run_id}/results to get the error message per item.", + "The run was cancelled because one item finished in error state despite 2 retries. Query /runs/{run_id}/results to get the error message per item.", + "The run is cancelled by the user. Query /runs/{run_id}/results to get the items processed before the cancellation." 
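With the run lifecycle now surfaced through a single `status` field plus a human-readable `message` (replacing the removed `state`/`output`/`termination_reason` triple), a client-side polling sketch may help reviewers: the base URL, bearer-token handling, and use of `requests` are assumptions for illustration, the `GET /api/v1/runs/{application_run_id}` path is inferred from the resource paths later in this diff, and the terminal-status spellings follow the description text above verbatim.

```python
import time

import requests  # assumed plain-HTTP client; the generated SDK offers typed wrappers too

BASE_URL = "https://platform.aignostics.com"  # hypothetical host, for illustration only

# Terminal statuses per the status description above (spellings as written there).
TERMINAL_STATUSES = {"completed", "completed_with_error", "canceled_system", "cancelled_user"}


def wait_for_run(application_run_id: str, token: str, poll_seconds: float = 30.0) -> dict:
    """Poll GET /api/v1/runs/{application_run_id} until the run reaches a terminal status."""
    while True:
        resp = requests.get(
            f"{BASE_URL}/api/v1/runs/{application_run_id}",
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        run = resp.json()
        if run["status"] in TERMINAL_STATUSES:
            return run  # run["message"] explains cancellations and errors, per the schema
        time.sleep(poll_seconds)
```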
] }, - "statistics": { - "$ref": "#/components/schemas/RunItemStatistics", - "description": "Aggregated statistics of the run execution" - }, - "custom_metadata": { + "metadata": { "anyOf": [ { "type": "object" @@ -2356,8 +2343,8 @@ "type": "null" } ], - "title": "Custom Metadata", - "description": "Optional JSON metadata that was stored in alongside the run by the user", + "title": "Metadata", + "description": "Optional JSON metadata that was stored in alongside the application run by the user", "examples": [ { "department": "D1", @@ -2365,31 +2352,16 @@ } ] }, - "custom_metadata_checksum": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata Checksum", - "description": "The checksum of the `custom_metadata` field. Can be used in the `PUT /runs/{run-id}/custom_metadata`\nrequest to avoid unwanted override of the values in concurrent requests.", - "examples": [ - "f54fe109" - ] - }, - "submitted_at": { + "triggered_at": { "type": "string", "format": "date-time", - "title": "Submitted At", - "description": "Timestamp showing when the run was triggered" + "title": "Triggered At", + "description": "Timestamp showing when the application run was triggered" }, - "submitted_by": { + "triggered_by": { "type": "string", - "title": "Submitted By", - "description": "Id of the user who triggered the run", + "title": "Triggered By", + "description": "Id of the user who triggered the application run", "examples": [ "auth0|123456" ] @@ -2405,7 +2377,7 @@ } ], "title": "Terminated At", - "description": "Timestamp showing when the run reached a terminal state.", + "description": "Timestamp showing when the application run reached a terminal state.", "examples": [ "2024-01-15T10:30:45.123Z" ] @@ -2413,38 +2385,80 @@ }, "type": "object", "required": [ - "run_id", - "application_id", - "version_number", - "state", - "output", - "termination_reason", - "error_code", - "error_message", - "statistics", - "submitted_at", - "submitted_by" + "application_run_id", + "application_version_id", + "organization_id", + "status", + "message", + "triggered_at", + "triggered_by" ], "title": "RunReadResponse", "description": "Response schema for `Get run details` endpoint" }, - "RunState": { - "type": "string", - "enum": [ - "PENDING", - "PROCESSING", - "TERMINATED" + "TransferUrls": { + "properties": { + "upload_url": { + "type": "string", + "minLength": 1, + "format": "uri", + "title": "Upload Url" + }, + "download_url": { + "type": "string", + "minLength": 1, + "format": "uri", + "title": "Download Url" + } + }, + "type": "object", + "required": [ + "upload_url", + "download_url" ], - "title": "RunState" + "title": "TransferUrls" }, - "RunTerminationReason": { - "type": "string", - "enum": [ - "ALL_ITEMS_PROCESSED", - "CANCELED_BY_SYSTEM", - "CANCELED_BY_USER" + "UserPayload": { + "properties": { + "application_id": { + "type": "string", + "title": "Application Id" + }, + "application_run_id": { + "type": "string", + "format": "uuid", + "title": "Application Run Id" + }, + "global_output_artifacts": { + "anyOf": [ + { + "additionalProperties": { + "$ref": "#/components/schemas/PayloadOutputArtifact" + }, + "type": "object" + }, + { + "type": "null" + } + ], + "title": "Global Output Artifacts" + }, + "items": { + "items": { + "$ref": "#/components/schemas/PayloadItem" + }, + "type": "array", + "title": "Items" + } + }, + "type": "object", + "required": [ + "application_id", + "application_run_id", + "global_output_artifacts", + "items" ], - "title": "RunTerminationReason" + 
"title": "UserPayload" }, "UserReadResponse": { "properties": { @@ -2614,11 +2628,34 @@ }, "VersionReadResponse": { "properties": { - "version_number": { + "application_version_id": { "type": "string", - "title": "Version Number", + "title": "Application Version Id", + "description": "Application version ID" + }, + "version": { + "type": "string", + "title": "Version", "description": "Semantic version of the application" }, + "application_id": { + "type": "string", + "title": "Application Id", + "description": "Application ID" + }, + "flow_id": { + "anyOf": [ + { + "type": "string", + "format": "uuid" + }, + { + "type": "null" + } + ], + "title": "Flow Id", + "description": "Flow ID, used internally by the platform" + }, "changelog": { "type": "string", "title": "Changelog", @@ -2640,23 +2677,25 @@ "title": "Output Artifacts", "description": "List of the output fields, generated by the application" }, - "released_at": { + "created_at": { "type": "string", "format": "date-time", - "title": "Released At", + "title": "Created At", "description": "The timestamp when the application version was registered" } }, "type": "object", "required": [ - "version_number", + "application_version_id", + "version", + "application_id", "changelog", "input_artifacts", "output_artifacts", - "released_at" + "created_at" ], "title": "VersionReadResponse", - "description": "Base Response schema for the `Application Version Details` endpoint" + "description": "Response schema for `Application Version Details` endpoint" } }, "securitySchemes": { @@ -2665,8 +2704,8 @@ "flows": { "authorizationCode": { "scopes": {}, - "authorizationUrl": "https://aignostics-platform-staging.eu.auth0.com/authorize", - "tokenUrl": "https://aignostics-platform-staging.eu.auth0.com/oauth/token" + "authorizationUrl": "https://aignostics-platform.eu.auth0.com/authorize", + "tokenUrl": "https://aignostics-platform.eu.auth0.com/oauth/token" } } } diff --git a/codegen/out/.openapi-generator/FILES b/codegen/out/.openapi-generator/FILES index 73b10186..0d71d6dc 100644 --- a/codegen/out/.openapi-generator/FILES +++ b/codegen/out/.openapi-generator/FILES @@ -4,33 +4,31 @@ aignx/codegen/api_response.py aignx/codegen/configuration.py aignx/codegen/exceptions.py aignx/codegen/models/application_read_response.py -aignx/codegen/models/application_read_short_response.py -aignx/codegen/models/application_version.py -aignx/codegen/models/artifact_output.py -aignx/codegen/models/artifact_state.py -aignx/codegen/models/artifact_termination_reason.py -aignx/codegen/models/custom_metadata_update_request.py +aignx/codegen/models/application_run_status.py +aignx/codegen/models/application_version_read_response.py aignx/codegen/models/http_validation_error.py aignx/codegen/models/input_artifact.py aignx/codegen/models/input_artifact_creation_request.py +aignx/codegen/models/input_artifact_read_response.py aignx/codegen/models/item_creation_request.py -aignx/codegen/models/item_output.py +aignx/codegen/models/item_read_response.py aignx/codegen/models/item_result_read_response.py -aignx/codegen/models/item_state.py -aignx/codegen/models/item_termination_reason.py +aignx/codegen/models/item_status.py aignx/codegen/models/me_read_response.py aignx/codegen/models/organization_read_response.py aignx/codegen/models/output_artifact.py +aignx/codegen/models/output_artifact_read_response.py aignx/codegen/models/output_artifact_result_read_response.py aignx/codegen/models/output_artifact_scope.py aignx/codegen/models/output_artifact_visibility.py 
+aignx/codegen/models/payload_input_artifact.py +aignx/codegen/models/payload_item.py +aignx/codegen/models/payload_output_artifact.py aignx/codegen/models/run_creation_request.py aignx/codegen/models/run_creation_response.py -aignx/codegen/models/run_item_statistics.py -aignx/codegen/models/run_output.py aignx/codegen/models/run_read_response.py -aignx/codegen/models/run_state.py -aignx/codegen/models/run_termination_reason.py +aignx/codegen/models/transfer_urls.py +aignx/codegen/models/user_payload.py aignx/codegen/models/user_read_response.py aignx/codegen/models/validation_error.py aignx/codegen/models/validation_error_loc_inner.py diff --git a/codegen/out/aignx/codegen/api/public_api.py b/codegen/out/aignx/codegen/api/public_api.py index ce166906..c457a1f2 100644 --- a/codegen/out/aignx/codegen/api/public_api.py +++ b/codegen/out/aignx/codegen/api/public_api.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. 
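The renamed client methods in the hunks below collapse the old `(application_id, version)` path-parameter pair into one `application_version_id`. A usage sketch, assuming the standard openapi-generator Python layout (`ApiClient`, `Configuration`, and a `PublicApi` class; those exact module and class names are not visible in this diff):

```python
from aignx.codegen.api.public_api import PublicApi  # assumed class name for this module
from aignx.codegen.api_client import ApiClient      # standard generator layout, not shown above
from aignx.codegen.configuration import Configuration

# Hypothetical host and token, for illustration only.
configuration = Configuration(host="https://platform.aignostics.com")
configuration.access_token = "<ACCESS_TOKEN>"

with ApiClient(configuration) as api_client:
    api = PublicApi(api_client)
    # One application_version_id (e.g. "he-tme:v1.0.0-beta") replaces the
    # old application_id + version path parameters.
    version = api.application_version_details_v1_versions_application_version_id_get(
        application_version_id="he-tme:v1.0.0-beta",
    )
    print(version.version, version.created_at)
```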
@@ -17,14 +17,13 @@ from typing_extensions import Annotated from pydantic import Field, StrictStr -from typing import Any, List, Optional +from typing import List, Optional from typing_extensions import Annotated from aignx.codegen.models.application_read_response import ApplicationReadResponse -from aignx.codegen.models.application_read_short_response import ApplicationReadShortResponse -from aignx.codegen.models.custom_metadata_update_request import CustomMetadataUpdateRequest +from aignx.codegen.models.application_version_read_response import ApplicationVersionReadResponse +from aignx.codegen.models.item_read_response import ItemReadResponse from aignx.codegen.models.item_result_read_response import ItemResultReadResponse -from aignx.codegen.models.item_state import ItemState -from aignx.codegen.models.item_termination_reason import ItemTerminationReason +from aignx.codegen.models.item_status import ItemStatus from aignx.codegen.models.me_read_response import MeReadResponse from aignx.codegen.models.run_creation_request import RunCreationRequest from aignx.codegen.models.run_creation_response import RunCreationResponse @@ -50,10 +49,9 @@ def __init__(self, api_client=None) -> None: @validate_call - def application_version_details_v1_applications_application_id_versions_version_get( + def application_version_details_v1_versions_application_version_id_get( self, - application_id: StrictStr, - version: StrictStr, + application_version_id: StrictStr, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -71,10 +69,8 @@ def application_version_details_v1_applications_application_id_versions_version_ Get the application version details Allows caller to retrieve information about application version based on provided application version ID. - :param application_id: (required) - :type application_id: str - :param version: (required) - :type version: str + :param application_version_id: (required) + :type application_version_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -97,9 +93,8 @@ def application_version_details_v1_applications_application_id_versions_version_ :return: Returns the result object. """ # noqa: E501 - _param = self._application_version_details_v1_applications_application_id_versions_version_get_serialize( - application_id=application_id, - version=version, + _param = self._application_version_details_v1_versions_application_version_id_get_serialize( + application_version_id=application_version_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -124,10 +119,9 @@ def application_version_details_v1_applications_application_id_versions_version_ @validate_call - def application_version_details_v1_applications_application_id_versions_version_get_with_http_info( + def application_version_details_v1_versions_application_version_id_get_with_http_info( self, - application_id: StrictStr, - version: StrictStr, + application_version_id: StrictStr, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -145,10 +139,8 @@ def application_version_details_v1_applications_application_id_versions_version_ Get the application version details Allows caller to retrieve information about application version based on provided application version ID. 
- :param application_id: (required) - :type application_id: str - :param version: (required) - :type version: str + :param application_version_id: (required) + :type application_version_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -171,9 +163,8 @@ def application_version_details_v1_applications_application_id_versions_version_ :return: Returns the result object. """ # noqa: E501 - _param = self._application_version_details_v1_applications_application_id_versions_version_get_serialize( - application_id=application_id, - version=version, + _param = self._application_version_details_v1_versions_application_version_id_get_serialize( + application_version_id=application_version_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -198,10 +189,9 @@ def application_version_details_v1_applications_application_id_versions_version_ @validate_call - def application_version_details_v1_applications_application_id_versions_version_get_without_preload_content( + def application_version_details_v1_versions_application_version_id_get_without_preload_content( self, - application_id: StrictStr, - version: StrictStr, + application_version_id: StrictStr, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -219,10 +209,8 @@ def application_version_details_v1_applications_application_id_versions_version_ Get the application version details Allows caller to retrieve information about application version based on provided application version ID. - :param application_id: (required) - :type application_id: str - :param version: (required) - :type version: str + :param application_version_id: (required) + :type application_version_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -245,9 +233,8 @@ def application_version_details_v1_applications_application_id_versions_version_ :return: Returns the result object. 
""" # noqa: E501 - _param = self._application_version_details_v1_applications_application_id_versions_version_get_serialize( - application_id=application_id, - version=version, + _param = self._application_version_details_v1_versions_application_version_id_get_serialize( + application_version_id=application_version_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -267,10 +254,9 @@ def application_version_details_v1_applications_application_id_versions_version_ return response_data.response - def _application_version_details_v1_applications_application_id_versions_version_get_serialize( + def _application_version_details_v1_versions_application_version_id_get_serialize( self, - application_id, - version, + application_version_id, _request_auth, _content_type, _headers, @@ -292,10 +278,8 @@ def _application_version_details_v1_applications_application_id_versions_version _body_params: Optional[bytes] = None # process the path parameters - if application_id is not None: - _path_params['application_id'] = application_id - if version is not None: - _path_params['version'] = version + if application_version_id is not None: + _path_params['application_version_id'] = application_version_id # process the query parameters # process the header parameters # process the form parameters @@ -318,7 +302,7 @@ def _application_version_details_v1_applications_application_id_versions_version return self.api_client.param_serialize( method='GET', - resource_path='/api/v1/applications/{application_id}/versions/{version}', + resource_path='/api/v1/versions/{application_version_id}', path_params=_path_params, query_params=_query_params, header_params=_header_params, @@ -335,9 +319,9 @@ def _application_version_details_v1_applications_application_id_versions_version @validate_call - def cancel_run_v1_runs_run_id_cancel_post( + def cancel_application_run_v1_runs_application_run_id_cancel_post( self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], + application_run_id: Annotated[StrictStr, Field(description="Application run id, returned by `POST /runs/` endpoint")], _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -350,13 +334,13 @@ def cancel_run_v1_runs_run_id_cancel_post( _content_type: Optional[StrictStr] = None, _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> object: - """Cancel Run + ) -> None: + """Cancel Application Run - The run can be canceled by the user who created the run. The execution can be canceled any time while the application is not in a final state. The pending items will not be processed and will not add to the cost. When the application is canceled, the already completed items stay available for download. + The application run can be canceled by the user who created the application run. The execution can be canceled any time while the application is not in a final state. The pending items will not be processed and will not add to the cost. When the application is canceled, the already completed items stay available for download. - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str + :param application_run_id: Application run id, returned by `POST /runs/` endpoint (required) + :type application_run_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. 
It can also be a pair (tuple) of @@ -379,8 +363,8 @@ def cancel_run_v1_runs_run_id_cancel_post( :return: Returns the result object. """ # noqa: E501 - _param = self._cancel_run_v1_runs_run_id_cancel_post_serialize( - run_id=run_id, + _param = self._cancel_application_run_v1_runs_application_run_id_cancel_post_serialize( + application_run_id=application_run_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -388,10 +372,9 @@ def cancel_run_v1_runs_run_id_cancel_post( ) _response_types_map: Dict[str, Optional[str]] = { - '202': "object", + '202': None, '404': None, '403': None, - '409': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -406,9 +389,9 @@ def cancel_run_v1_runs_run_id_cancel_post( @validate_call - def cancel_run_v1_runs_run_id_cancel_post_with_http_info( + def cancel_application_run_v1_runs_application_run_id_cancel_post_with_http_info( self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], + application_run_id: Annotated[StrictStr, Field(description="Application run id, returned by `POST /runs/` endpoint")], _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -421,13 +404,13 @@ def cancel_run_v1_runs_run_id_cancel_post_with_http_info( _content_type: Optional[StrictStr] = None, _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> ApiResponse[object]: - """Cancel Run + ) -> ApiResponse[None]: + """Cancel Application Run - The run can be canceled by the user who created the run. The execution can be canceled any time while the application is not in a final state. The pending items will not be processed and will not add to the cost. When the application is canceled, the already completed items stay available for download. + The application run can be canceled by the user who created the application run. The execution can be canceled any time while the application is not in a final state. The pending items will not be processed and will not add to the cost. When the application is canceled, the already completed items stay available for download. - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str + :param application_run_id: Application run id, returned by `POST /runs/` endpoint (required) + :type application_run_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -450,8 +433,8 @@ def cancel_run_v1_runs_run_id_cancel_post_with_http_info( :return: Returns the result object. 
""" # noqa: E501 - _param = self._cancel_run_v1_runs_run_id_cancel_post_serialize( - run_id=run_id, + _param = self._cancel_application_run_v1_runs_application_run_id_cancel_post_serialize( + application_run_id=application_run_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -459,10 +442,9 @@ def cancel_run_v1_runs_run_id_cancel_post_with_http_info( ) _response_types_map: Dict[str, Optional[str]] = { - '202': "object", + '202': None, '404': None, '403': None, - '409': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -477,9 +459,9 @@ def cancel_run_v1_runs_run_id_cancel_post_with_http_info( @validate_call - def cancel_run_v1_runs_run_id_cancel_post_without_preload_content( + def cancel_application_run_v1_runs_application_run_id_cancel_post_without_preload_content( self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], + application_run_id: Annotated[StrictStr, Field(description="Application run id, returned by `POST /runs/` endpoint")], _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -493,12 +475,12 @@ def cancel_run_v1_runs_run_id_cancel_post_without_preload_content( _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, ) -> RESTResponseType: - """Cancel Run + """Cancel Application Run - The run can be canceled by the user who created the run. The execution can be canceled any time while the application is not in a final state. The pending items will not be processed and will not add to the cost. When the application is canceled, the already completed items stay available for download. + The application run can be canceled by the user who created the application run. The execution can be canceled any time while the application is not in a final state. The pending items will not be processed and will not add to the cost. When the application is canceled, the already completed items stay available for download. - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str + :param application_run_id: Application run id, returned by `POST /runs/` endpoint (required) + :type application_run_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -521,8 +503,8 @@ def cancel_run_v1_runs_run_id_cancel_post_without_preload_content( :return: Returns the result object. 
""" # noqa: E501 - _param = self._cancel_run_v1_runs_run_id_cancel_post_serialize( - run_id=run_id, + _param = self._cancel_application_run_v1_runs_application_run_id_cancel_post_serialize( + application_run_id=application_run_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -530,10 +512,9 @@ def cancel_run_v1_runs_run_id_cancel_post_without_preload_content( ) _response_types_map: Dict[str, Optional[str]] = { - '202': "object", + '202': None, '404': None, '403': None, - '409': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -543,9 +524,9 @@ def cancel_run_v1_runs_run_id_cancel_post_without_preload_content( return response_data.response - def _cancel_run_v1_runs_run_id_cancel_post_serialize( + def _cancel_application_run_v1_runs_application_run_id_cancel_post_serialize( self, - run_id, + application_run_id, _request_auth, _content_type, _headers, @@ -567,8 +548,8 @@ def _cancel_run_v1_runs_run_id_cancel_post_serialize( _body_params: Optional[bytes] = None # process the path parameters - if run_id is not None: - _path_params['run_id'] = run_id + if application_run_id is not None: + _path_params['application_run_id'] = application_run_id # process the query parameters # process the header parameters # process the form parameters @@ -591,7 +572,7 @@ def _cancel_run_v1_runs_run_id_cancel_post_serialize( return self.api_client.param_serialize( method='POST', - resource_path='/api/v1/runs/{run_id}/cancel', + resource_path='/api/v1/runs/{application_run_id}/cancel', path_params=_path_params, query_params=_query_params, header_params=_header_params, @@ -608,7 +589,7 @@ def _cancel_run_v1_runs_run_id_cancel_post_serialize( @validate_call - def create_run_v1_runs_post( + def create_application_run_v1_runs_post( self, run_creation_request: RunCreationRequest, _request_timeout: Union[ @@ -624,9 +605,9 @@ def create_run_v1_runs_post( _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, ) -> RunCreationResponse: - """Initiate Run + """Initiate Application Run - This endpoint initiates a processing run for a selected application and version, and returns a `run_id` for tracking purposes. Slide processing occurs asynchronously, allowing you to retrieve results for individual slides as soon as they complete processing. The system typically processes slides in batches of four, though this number may be reduced during periods of high demand. Below is an example of the required payload for initiating an Atlas H&E TME processing run. ### Payload The payload includes `application_id`, optional `version_number`, and `items` base fields. `application_id` is the unique identifier for the application. `version_number` is the semantic version to use. If not provided, the latest available version will be used. `items` includes the list of the items to process (slides, in case of HETA application). Every item has a set of standard fields defined by the API, plus the custom_metadata, specific to the chosen application. 
Example payload structure with the comments: ``` { application_id: \"he-tme\", version_number: \"1.0.0-beta\", items: [{ \"external_id\": \"slide_1\", \"input_artifacts\": [{ \"name\": \"user_slide\", \"download_url\": \"https://...\", \"custom_metadata\": { \"specimen\": { \"disease\": \"LUNG_CANCER\", \"tissue\": \"LUNG\" }, \"staining_method\": \"H&E\", \"width_px\": 136223, \"height_px\": 87761, \"resolution_mpp\": 0.2628238, \"media-type\":\"image/tiff\", \"checksum_base64_crc32c\": \"64RKKA==\" } }] }] } ``` | Parameter | Description | | :---- | :---- | | `application_id` required | Unique ID for the application | | `version_number` optional | Semantic version of the application. If not provided, the latest available version will be used | | `items` required | List of submitted items (WSIs) with parameters described below. | | `external_id` required | Unique WSI name or ID for easy reference to items, provided by the caller. The external_id should be unique across all items of the run. | | `input_artifacts` required | List of provided artifacts for a WSI; at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide itself), but for some other applications this can be a slide and an segmentation map | | `name` required | Type of artifact; Atlas H&E-TME supports only `\"input_slide\"` | | `download_url` required | Signed URL to the input file in the S3 or GCS; Should be valid for at least 6 days | | `specimen: disease` required | Supported cancer types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `specimen: tissue` required | Supported tissue types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `staining_method` required | WSI stain /bio-marker; Atlas H&E-TME supports only `\"H&E\"` | | `width_px` required | Integer value. Number of pixels of the WSI in the X dimension. | | `height_px` required | Integer value. Number of pixels of the WSI in the Y dimension. | | `resolution_mpp` required | Resolution of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual | | `media-type` required | Supported media formats; available values are: image/tiff (for .tiff or .tif WSI) application/dicom (for DICOM ) application/zip (for zipped DICOM) application/octet-stream (for .svs WSI) | | `checksum_base64_crc32c` required | Base64 encoded big-endian CRC32C checksum of the WSI image | ### Response The endpoint returns the run UUID. After that the job is scheduled for the execution in the background. To check the status of the run call `v1/runs/{run_id}`. ### Rejection Apart from the authentication, authorization and malformed input error, the request can be rejected when the quota limit is exceeded. More details on quotas is described in the documentation + This endpoint initiates a processing run for a selected application version and returns an `application_run_id` for tracking purposes. Slide processing occurs asynchronously, allowing you to retrieve results for individual slides as soon as they complete processing. The system typically processes slides in batches of four, though this number may be reduced during periods of high demand. Below is an example of the required payload for initiating an Atlas H&E TME processing run. ### Payload The payload includes `application_version_id` and `items` base fields. `application_version_id` is the id used for `/v1/versions/{application_id}` endpoint. `items` includes the list of the items to process (slides, in case of HETA application). 
Every item has a set of standard fields defined by the API, plus the metadata, specific to the chosen application. Example payload structure with the comments: ``` { application_version_id: \"he-tme:v1.0.0-beta\", items: [{ \"reference\": \"slide_1\", \"input_artifacts\": [{ \"name\": \"user_slide\", \"download_url\": \"https://...\", \"metadata\": { \"specimen\": { \"disease\": \"LUNG_CANCER\", \"tissue\": \"LUNG\" }, \"staining_method\": \"H&E\", \"width_px\": 136223, \"height_px\": 87761, \"resolution_mpp\": 0.2628238, \"media-type\":\"image/tiff\", \"checksum_base64_crc32c\": \"64RKKA==\" } }] }] } ``` | Parameter | Description | | :---- | :---- | | `application_version_id` required | Unique ID for the application (must include version) | | `items` required | List of submitted items (WSIs) with parameters described below. | | `reference` required | Unique WSI name or ID for easy reference to results, provided by the caller. The reference should be unique across all items of the application run. | | `input_artifacts` required | List of provided artifacts for a WSI; at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide itself), but for some other applications this can be a slide and an segmentation map | | `name` required | Type of artifact; Atlas H&E-TME supports only `\"input_slide\"` | | `download_url` required | Signed URL to the input file in the S3 or GCS; Should be valid for at least 6 days | | `specimen: disease` required | Supported cancer types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `specimen: tissue` required | Supported tissue types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `staining_method` required | WSI stain /bio-marker; Atlas H&E-TME supports only `\"H&E\"` | | `width_px` required | Integer value. Number of pixels of the WSI in the X dimension. | | `height_px` required | Integer value. Number of pixels of the WSI in the Y dimension. | | `resolution_mpp` required | Resolution of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual | | `media-type` required | Supported media formats; available values are: image/tiff (for .tiff or .tif WSI) application/dicom (for DICOM ) application/zip (for zipped DICOM) application/octet-stream (for .svs WSI) | | `checksum_base64_crc32c` required | Base64 encoded big-endian CRC32C checksum of the WSI image | ### Response The endpoint returns the application run UUID. After that the job is scheduled for the execution in the background. To check the status of the run call `v1/runs/{application_run_id}`. ### Rejection Apart from the authentication, authorization and malformed input error, the request can be rejected when the quota limit is exceeded. More details on quotas is described in the documentation :param run_creation_request: (required) :type run_creation_request: RunCreationRequest @@ -652,7 +633,7 @@ def create_run_v1_runs_post( :return: Returns the result object. 
""" # noqa: E501 - _param = self._create_run_v1_runs_post_serialize( + _param = self._create_application_run_v1_runs_post_serialize( run_creation_request=run_creation_request, _request_auth=_request_auth, _content_type=_content_type, @@ -679,7 +660,7 @@ def create_run_v1_runs_post( @validate_call - def create_run_v1_runs_post_with_http_info( + def create_application_run_v1_runs_post_with_http_info( self, run_creation_request: RunCreationRequest, _request_timeout: Union[ @@ -695,9 +676,9 @@ def create_run_v1_runs_post_with_http_info( _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, ) -> ApiResponse[RunCreationResponse]: - """Initiate Run + """Initiate Application Run - This endpoint initiates a processing run for a selected application and version, and returns a `run_id` for tracking purposes. Slide processing occurs asynchronously, allowing you to retrieve results for individual slides as soon as they complete processing. The system typically processes slides in batches of four, though this number may be reduced during periods of high demand. Below is an example of the required payload for initiating an Atlas H&E TME processing run. ### Payload The payload includes `application_id`, optional `version_number`, and `items` base fields. `application_id` is the unique identifier for the application. `version_number` is the semantic version to use. If not provided, the latest available version will be used. `items` includes the list of the items to process (slides, in case of HETA application). Every item has a set of standard fields defined by the API, plus the custom_metadata, specific to the chosen application. Example payload structure with the comments: ``` { application_id: \"he-tme\", version_number: \"1.0.0-beta\", items: [{ \"external_id\": \"slide_1\", \"input_artifacts\": [{ \"name\": \"user_slide\", \"download_url\": \"https://...\", \"custom_metadata\": { \"specimen\": { \"disease\": \"LUNG_CANCER\", \"tissue\": \"LUNG\" }, \"staining_method\": \"H&E\", \"width_px\": 136223, \"height_px\": 87761, \"resolution_mpp\": 0.2628238, \"media-type\":\"image/tiff\", \"checksum_base64_crc32c\": \"64RKKA==\" } }] }] } ``` | Parameter | Description | | :---- | :---- | | `application_id` required | Unique ID for the application | | `version_number` optional | Semantic version of the application. If not provided, the latest available version will be used | | `items` required | List of submitted items (WSIs) with parameters described below. | | `external_id` required | Unique WSI name or ID for easy reference to items, provided by the caller. The external_id should be unique across all items of the run. | | `input_artifacts` required | List of provided artifacts for a WSI; at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide itself), but for some other applications this can be a slide and an segmentation map | | `name` required | Type of artifact; Atlas H&E-TME supports only `\"input_slide\"` | | `download_url` required | Signed URL to the input file in the S3 or GCS; Should be valid for at least 6 days | | `specimen: disease` required | Supported cancer types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `specimen: tissue` required | Supported tissue types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `staining_method` required | WSI stain /bio-marker; Atlas H&E-TME supports only `\"H&E\"` | | `width_px` required | Integer value. Number of pixels of the WSI in the X dimension. 
| | `height_px` required | Integer value. Number of pixels of the WSI in the Y dimension. | | `resolution_mpp` required | Resolution of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual | | `media-type` required | Supported media formats; available values are: image/tiff (for .tiff or .tif WSI) application/dicom (for DICOM ) application/zip (for zipped DICOM) application/octet-stream (for .svs WSI) | | `checksum_base64_crc32c` required | Base64 encoded big-endian CRC32C checksum of the WSI image | ### Response The endpoint returns the run UUID. After that the job is scheduled for the execution in the background. To check the status of the run call `v1/runs/{run_id}`. ### Rejection Apart from the authentication, authorization and malformed input error, the request can be rejected when the quota limit is exceeded. More details on quotas is described in the documentation + This endpoint initiates a processing run for a selected application version and returns an `application_run_id` for tracking purposes. Slide processing occurs asynchronously, allowing you to retrieve results for individual slides as soon as they complete processing. The system typically processes slides in batches of four, though this number may be reduced during periods of high demand. Below is an example of the required payload for initiating an Atlas H&E TME processing run. ### Payload The payload includes `application_version_id` and `items` base fields. `application_version_id` is the id used for `/v1/versions/{application_id}` endpoint. `items` includes the list of the items to process (slides, in case of HETA application). Every item has a set of standard fields defined by the API, plus the metadata, specific to the chosen application. Example payload structure with the comments: ``` { application_version_id: \"he-tme:v1.0.0-beta\", items: [{ \"reference\": \"slide_1\", \"input_artifacts\": [{ \"name\": \"user_slide\", \"download_url\": \"https://...\", \"metadata\": { \"specimen\": { \"disease\": \"LUNG_CANCER\", \"tissue\": \"LUNG\" }, \"staining_method\": \"H&E\", \"width_px\": 136223, \"height_px\": 87761, \"resolution_mpp\": 0.2628238, \"media-type\":\"image/tiff\", \"checksum_base64_crc32c\": \"64RKKA==\" } }] }] } ``` | Parameter | Description | | :---- | :---- | | `application_version_id` required | Unique ID for the application (must include version) | | `items` required | List of submitted items (WSIs) with parameters described below. | | `reference` required | Unique WSI name or ID for easy reference to results, provided by the caller. The reference should be unique across all items of the application run. | | `input_artifacts` required | List of provided artifacts for a WSI; at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide itself), but for some other applications this can be a slide and an segmentation map | | `name` required | Type of artifact; Atlas H&E-TME supports only `\"input_slide\"` | | `download_url` required | Signed URL to the input file in the S3 or GCS; Should be valid for at least 6 days | | `specimen: disease` required | Supported cancer types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `specimen: tissue` required | Supported tissue types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `staining_method` required | WSI stain /bio-marker; Atlas H&E-TME supports only `\"H&E\"` | | `width_px` required | Integer value. Number of pixels of the WSI in the X dimension. | | `height_px` required | Integer value. 
Number of pixels of the WSI in the Y dimension. | | `resolution_mpp` required | Resolution of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual | | `media-type` required | Supported media formats; available values are: image/tiff (for .tiff or .tif WSI) application/dicom (for DICOM ) application/zip (for zipped DICOM) application/octet-stream (for .svs WSI) | | `checksum_base64_crc32c` required | Base64 encoded big-endian CRC32C checksum of the WSI image | ### Response The endpoint returns the application run UUID. After that the job is scheduled for the execution in the background. To check the status of the run call `v1/runs/{application_run_id}`. ### Rejection Apart from the authentication, authorization and malformed input error, the request can be rejected when the quota limit is exceeded. More details on quotas is described in the documentation :param run_creation_request: (required) :type run_creation_request: RunCreationRequest @@ -723,7 +704,7 @@ def create_run_v1_runs_post_with_http_info( :return: Returns the result object. """ # noqa: E501 - _param = self._create_run_v1_runs_post_serialize( + _param = self._create_application_run_v1_runs_post_serialize( run_creation_request=run_creation_request, _request_auth=_request_auth, _content_type=_content_type, @@ -750,7 +731,7 @@ def create_run_v1_runs_post_with_http_info( @validate_call - def create_run_v1_runs_post_without_preload_content( + def create_application_run_v1_runs_post_without_preload_content( self, run_creation_request: RunCreationRequest, _request_timeout: Union[ @@ -766,9 +747,9 @@ def create_run_v1_runs_post_without_preload_content( _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, ) -> RESTResponseType: - """Initiate Run + """Initiate Application Run - This endpoint initiates a processing run for a selected application and version, and returns a `run_id` for tracking purposes. Slide processing occurs asynchronously, allowing you to retrieve results for individual slides as soon as they complete processing. The system typically processes slides in batches of four, though this number may be reduced during periods of high demand. Below is an example of the required payload for initiating an Atlas H&E TME processing run. ### Payload The payload includes `application_id`, optional `version_number`, and `items` base fields. `application_id` is the unique identifier for the application. `version_number` is the semantic version to use. If not provided, the latest available version will be used. `items` includes the list of the items to process (slides, in case of HETA application). Every item has a set of standard fields defined by the API, plus the custom_metadata, specific to the chosen application. Example payload structure with the comments: ``` { application_id: \"he-tme\", version_number: \"1.0.0-beta\", items: [{ \"external_id\": \"slide_1\", \"input_artifacts\": [{ \"name\": \"user_slide\", \"download_url\": \"https://...\", \"custom_metadata\": { \"specimen\": { \"disease\": \"LUNG_CANCER\", \"tissue\": \"LUNG\" }, \"staining_method\": \"H&E\", \"width_px\": 136223, \"height_px\": 87761, \"resolution_mpp\": 0.2628238, \"media-type\":\"image/tiff\", \"checksum_base64_crc32c\": \"64RKKA==\" } }] }] } ``` | Parameter | Description | | :---- | :---- | | `application_id` required | Unique ID for the application | | `version_number` optional | Semantic version of the application. 
If not provided, the latest available version will be used | | `items` required | List of submitted items (WSIs) with parameters described below. | | `external_id` required | Unique WSI name or ID for easy reference to items, provided by the caller. The external_id should be unique across all items of the run. | | `input_artifacts` required | List of provided artifacts for a WSI; at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide itself), but for some other applications this can be a slide and an segmentation map | | `name` required | Type of artifact; Atlas H&E-TME supports only `\"input_slide\"` | | `download_url` required | Signed URL to the input file in the S3 or GCS; Should be valid for at least 6 days | | `specimen: disease` required | Supported cancer types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `specimen: tissue` required | Supported tissue types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `staining_method` required | WSI stain /bio-marker; Atlas H&E-TME supports only `\"H&E\"` | | `width_px` required | Integer value. Number of pixels of the WSI in the X dimension. | | `height_px` required | Integer value. Number of pixels of the WSI in the Y dimension. | | `resolution_mpp` required | Resolution of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual | | `media-type` required | Supported media formats; available values are: image/tiff (for .tiff or .tif WSI) application/dicom (for DICOM ) application/zip (for zipped DICOM) application/octet-stream (for .svs WSI) | | `checksum_base64_crc32c` required | Base64 encoded big-endian CRC32C checksum of the WSI image | ### Response The endpoint returns the run UUID. After that the job is scheduled for the execution in the background. To check the status of the run call `v1/runs/{run_id}`. ### Rejection Apart from the authentication, authorization and malformed input error, the request can be rejected when the quota limit is exceeded. More details on quotas is described in the documentation + This endpoint initiates a processing run for a selected application version and returns an `application_run_id` for tracking purposes. Slide processing occurs asynchronously, allowing you to retrieve results for individual slides as soon as they complete processing. The system typically processes slides in batches of four, though this number may be reduced during periods of high demand. Below is an example of the required payload for initiating an Atlas H&E TME processing run. ### Payload The payload includes `application_version_id` and `items` base fields. `application_version_id` is the id used for `/v1/versions/{application_id}` endpoint. `items` includes the list of the items to process (slides, in case of HETA application). Every item has a set of standard fields defined by the API, plus the metadata, specific to the chosen application. 
Example payload structure with the comments: ``` { application_version_id: \"he-tme:v1.0.0-beta\", items: [{ \"reference\": \"slide_1\", \"input_artifacts\": [{ \"name\": \"user_slide\", \"download_url\": \"https://...\", \"metadata\": { \"specimen\": { \"disease\": \"LUNG_CANCER\", \"tissue\": \"LUNG\" }, \"staining_method\": \"H&E\", \"width_px\": 136223, \"height_px\": 87761, \"resolution_mpp\": 0.2628238, \"media-type\":\"image/tiff\", \"checksum_base64_crc32c\": \"64RKKA==\" } }] }] } ``` | Parameter | Description | | :---- | :---- | | `application_version_id` required | Unique ID for the application (must include version) | | `items` required | List of submitted items (WSIs) with parameters described below. | | `reference` required | Unique WSI name or ID for easy reference to results, provided by the caller. The reference should be unique across all items of the application run. | | `input_artifacts` required | List of provided artifacts for a WSI; at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide itself), but for some other applications this can be a slide and an segmentation map | | `name` required | Type of artifact; Atlas H&E-TME supports only `\"input_slide\"` | | `download_url` required | Signed URL to the input file in the S3 or GCS; Should be valid for at least 6 days | | `specimen: disease` required | Supported cancer types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `specimen: tissue` required | Supported tissue types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `staining_method` required | WSI stain /bio-marker; Atlas H&E-TME supports only `\"H&E\"` | | `width_px` required | Integer value. Number of pixels of the WSI in the X dimension. | | `height_px` required | Integer value. Number of pixels of the WSI in the Y dimension. | | `resolution_mpp` required | Resolution of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual | | `media-type` required | Supported media formats; available values are: image/tiff (for .tiff or .tif WSI) application/dicom (for DICOM ) application/zip (for zipped DICOM) application/octet-stream (for .svs WSI) | | `checksum_base64_crc32c` required | Base64 encoded big-endian CRC32C checksum of the WSI image | ### Response The endpoint returns the application run UUID. After that the job is scheduled for the execution in the background. To check the status of the run call `v1/runs/{application_run_id}`. ### Rejection Apart from the authentication, authorization and malformed input error, the request can be rejected when the quota limit is exceeded. More details on quotas is described in the documentation :param run_creation_request: (required) :type run_creation_request: RunCreationRequest @@ -794,7 +775,7 @@ def create_run_v1_runs_post_without_preload_content( :return: Returns the result object. 
""" # noqa: E501 - _param = self._create_run_v1_runs_post_serialize( + _param = self._create_application_run_v1_runs_post_serialize( run_creation_request=run_creation_request, _request_auth=_request_auth, _content_type=_content_type, @@ -816,7 +797,7 @@ def create_run_v1_runs_post_without_preload_content( return response_data.response - def _create_run_v1_runs_post_serialize( + def _create_application_run_v1_runs_post_serialize( self, run_creation_request, _request_auth, @@ -894,9 +875,9 @@ def _create_run_v1_runs_post_serialize( @validate_call - def delete_run_items_v1_runs_run_id_artifacts_delete( + def delete_application_run_results_v1_runs_application_run_id_results_delete( self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], + application_run_id: Annotated[StrictStr, Field(description="Application run id, returned by `POST /runs/` endpoint")], _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -909,13 +890,13 @@ def delete_run_items_v1_runs_run_id_artifacts_delete( _content_type: Optional[StrictStr] = None, _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> object: - """Delete Run Items + ) -> None: + """Delete Application Run Results - This endpoint allows the caller to explicitly delete artifacts generated by a run. It can only be invoked when the run has reached a final state (PROCESSED, CANCELED_SYSTEM, CANCELED_USER). Note that by default, all artifacts are automatically deleted 30 days after the run finishes, regardless of whether the caller explicitly requests deletion. + This endpoint allows the caller to explicitly delete outputs generated by an application. It can only be invoked when the application run has reached a final state (COMPLETED, COMPLETED_WITH_ERROR, CANCELED_USER, or CANCELED_SYSTEM). Note that by default, all outputs are automatically deleted 30 days after the application run finishes, regardless of whether the caller explicitly requests deletion. - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str + :param application_run_id: Application run id, returned by `POST /runs/` endpoint (required) + :type application_run_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -938,8 +919,8 @@ def delete_run_items_v1_runs_run_id_artifacts_delete( :return: Returns the result object. 
""" # noqa: E501 - _param = self._delete_run_items_v1_runs_run_id_artifacts_delete_serialize( - run_id=run_id, + _param = self._delete_application_run_results_v1_runs_application_run_id_results_delete_serialize( + application_run_id=application_run_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -947,7 +928,7 @@ def delete_run_items_v1_runs_run_id_artifacts_delete( ) _response_types_map: Dict[str, Optional[str]] = { - '200': "object", + '204': None, '404': None, '422': "HTTPValidationError", } @@ -963,9 +944,9 @@ def delete_run_items_v1_runs_run_id_artifacts_delete( @validate_call - def delete_run_items_v1_runs_run_id_artifacts_delete_with_http_info( + def delete_application_run_results_v1_runs_application_run_id_results_delete_with_http_info( self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], + application_run_id: Annotated[StrictStr, Field(description="Application run id, returned by `POST /runs/` endpoint")], _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -978,13 +959,13 @@ def delete_run_items_v1_runs_run_id_artifacts_delete_with_http_info( _content_type: Optional[StrictStr] = None, _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> ApiResponse[object]: - """Delete Run Items + ) -> ApiResponse[None]: + """Delete Application Run Results - This endpoint allows the caller to explicitly delete artifacts generated by a run. It can only be invoked when the run has reached a final state (PROCESSED, CANCELED_SYSTEM, CANCELED_USER). Note that by default, all artifacts are automatically deleted 30 days after the run finishes, regardless of whether the caller explicitly requests deletion. + This endpoint allows the caller to explicitly delete outputs generated by an application. It can only be invoked when the application run has reached a final state (COMPLETED, COMPLETED_WITH_ERROR, CANCELED_USER, or CANCELED_SYSTEM). Note that by default, all outputs are automatically deleted 30 days after the application run finishes, regardless of whether the caller explicitly requests deletion. - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str + :param application_run_id: Application run id, returned by `POST /runs/` endpoint (required) + :type application_run_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -1007,8 +988,8 @@ def delete_run_items_v1_runs_run_id_artifacts_delete_with_http_info( :return: Returns the result object. 
""" # noqa: E501 - _param = self._delete_run_items_v1_runs_run_id_artifacts_delete_serialize( - run_id=run_id, + _param = self._delete_application_run_results_v1_runs_application_run_id_results_delete_serialize( + application_run_id=application_run_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -1016,7 +997,7 @@ def delete_run_items_v1_runs_run_id_artifacts_delete_with_http_info( ) _response_types_map: Dict[str, Optional[str]] = { - '200': "object", + '204': None, '404': None, '422': "HTTPValidationError", } @@ -1032,9 +1013,9 @@ def delete_run_items_v1_runs_run_id_artifacts_delete_with_http_info( @validate_call - def delete_run_items_v1_runs_run_id_artifacts_delete_without_preload_content( + def delete_application_run_results_v1_runs_application_run_id_results_delete_without_preload_content( self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], + application_run_id: Annotated[StrictStr, Field(description="Application run id, returned by `POST /runs/` endpoint")], _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -1048,12 +1029,12 @@ def delete_run_items_v1_runs_run_id_artifacts_delete_without_preload_content( _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, ) -> RESTResponseType: - """Delete Run Items + """Delete Application Run Results - This endpoint allows the caller to explicitly delete artifacts generated by a run. It can only be invoked when the run has reached a final state (PROCESSED, CANCELED_SYSTEM, CANCELED_USER). Note that by default, all artifacts are automatically deleted 30 days after the run finishes, regardless of whether the caller explicitly requests deletion. + This endpoint allows the caller to explicitly delete outputs generated by an application. It can only be invoked when the application run has reached a final state (COMPLETED, COMPLETED_WITH_ERROR, CANCELED_USER, or CANCELED_SYSTEM). Note that by default, all outputs are automatically deleted 30 days after the application run finishes, regardless of whether the caller explicitly requests deletion. - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str + :param application_run_id: Application run id, returned by `POST /runs/` endpoint (required) + :type application_run_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -1076,8 +1057,8 @@ def delete_run_items_v1_runs_run_id_artifacts_delete_without_preload_content( :return: Returns the result object. 
""" # noqa: E501 - _param = self._delete_run_items_v1_runs_run_id_artifacts_delete_serialize( - run_id=run_id, + _param = self._delete_application_run_results_v1_runs_application_run_id_results_delete_serialize( + application_run_id=application_run_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -1085,7 +1066,7 @@ def delete_run_items_v1_runs_run_id_artifacts_delete_without_preload_content( ) _response_types_map: Dict[str, Optional[str]] = { - '200': "object", + '204': None, '404': None, '422': "HTTPValidationError", } @@ -1096,9 +1077,9 @@ def delete_run_items_v1_runs_run_id_artifacts_delete_without_preload_content( return response_data.response - def _delete_run_items_v1_runs_run_id_artifacts_delete_serialize( + def _delete_application_run_results_v1_runs_application_run_id_results_delete_serialize( self, - run_id, + application_run_id, _request_auth, _content_type, _headers, @@ -1120,8 +1101,8 @@ def _delete_run_items_v1_runs_run_id_artifacts_delete_serialize( _body_params: Optional[bytes] = None # process the path parameters - if run_id is not None: - _path_params['run_id'] = run_id + if application_run_id is not None: + _path_params['application_run_id'] = application_run_id # process the query parameters # process the header parameters # process the form parameters @@ -1144,7 +1125,7 @@ def _delete_run_items_v1_runs_run_id_artifacts_delete_serialize( return self.api_client.param_serialize( method='DELETE', - resource_path='/api/v1/runs/{run_id}/artifacts', + resource_path='/api/v1/runs/{application_run_id}/results', path_params=_path_params, query_params=_query_params, header_params=_header_params, @@ -1161,10 +1142,9 @@ def _delete_run_items_v1_runs_run_id_artifacts_delete_serialize( @validate_call - def get_item_by_run_v1_runs_run_id_items_external_id_get( + def get_item_v1_items_item_id_get( self, - run_id: Annotated[StrictStr, Field(description="The run id, returned by `POST /runs/` endpoint")], - external_id: Annotated[StrictStr, Field(description="The `external_id` that was defined for the item by the customer that triggered the run.")], + item_id: StrictStr, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -1177,15 +1157,13 @@ def get_item_by_run_v1_runs_run_id_items_external_id_get( _content_type: Optional[StrictStr] = None, _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> ItemResultReadResponse: - """Get Item By Run + ) -> ItemReadResponse: + """Get Item - Retrieve details of a specific item (slide) by its external ID and the run ID. + Retrieve details of a specific item (slide) by its ID. - :param run_id: The run id, returned by `POST /runs/` endpoint (required) - :type run_id: str - :param external_id: The `external_id` that was defined for the item by the customer that triggered the run. (required) - :type external_id: str + :param item_id: (required) + :type item_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -1208,9 +1186,8 @@ def get_item_by_run_v1_runs_run_id_items_external_id_get( :return: Returns the result object. 
""" # noqa: E501 - _param = self._get_item_by_run_v1_runs_run_id_items_external_id_get_serialize( - run_id=run_id, - external_id=external_id, + _param = self._get_item_v1_items_item_id_get_serialize( + item_id=item_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -1218,9 +1195,9 @@ def get_item_by_run_v1_runs_run_id_items_external_id_get( ) _response_types_map: Dict[str, Optional[str]] = { - '200': "ItemResultReadResponse", - '404': None, + '200': "ItemReadResponse", '403': None, + '404': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -1235,10 +1212,9 @@ def get_item_by_run_v1_runs_run_id_items_external_id_get( @validate_call - def get_item_by_run_v1_runs_run_id_items_external_id_get_with_http_info( + def get_item_v1_items_item_id_get_with_http_info( self, - run_id: Annotated[StrictStr, Field(description="The run id, returned by `POST /runs/` endpoint")], - external_id: Annotated[StrictStr, Field(description="The `external_id` that was defined for the item by the customer that triggered the run.")], + item_id: StrictStr, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -1251,15 +1227,13 @@ def get_item_by_run_v1_runs_run_id_items_external_id_get_with_http_info( _content_type: Optional[StrictStr] = None, _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> ApiResponse[ItemResultReadResponse]: - """Get Item By Run + ) -> ApiResponse[ItemReadResponse]: + """Get Item - Retrieve details of a specific item (slide) by its external ID and the run ID. + Retrieve details of a specific item (slide) by its ID. - :param run_id: The run id, returned by `POST /runs/` endpoint (required) - :type run_id: str - :param external_id: The `external_id` that was defined for the item by the customer that triggered the run. (required) - :type external_id: str + :param item_id: (required) + :type item_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -1282,9 +1256,8 @@ def get_item_by_run_v1_runs_run_id_items_external_id_get_with_http_info( :return: Returns the result object. 
""" # noqa: E501 - _param = self._get_item_by_run_v1_runs_run_id_items_external_id_get_serialize( - run_id=run_id, - external_id=external_id, + _param = self._get_item_v1_items_item_id_get_serialize( + item_id=item_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -1292,9 +1265,9 @@ def get_item_by_run_v1_runs_run_id_items_external_id_get_with_http_info( ) _response_types_map: Dict[str, Optional[str]] = { - '200': "ItemResultReadResponse", - '404': None, + '200': "ItemReadResponse", '403': None, + '404': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -1309,10 +1282,9 @@ def get_item_by_run_v1_runs_run_id_items_external_id_get_with_http_info( @validate_call - def get_item_by_run_v1_runs_run_id_items_external_id_get_without_preload_content( + def get_item_v1_items_item_id_get_without_preload_content( self, - run_id: Annotated[StrictStr, Field(description="The run id, returned by `POST /runs/` endpoint")], - external_id: Annotated[StrictStr, Field(description="The `external_id` that was defined for the item by the customer that triggered the run.")], + item_id: StrictStr, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -1326,14 +1298,12 @@ def get_item_by_run_v1_runs_run_id_items_external_id_get_without_preload_content _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, ) -> RESTResponseType: - """Get Item By Run + """Get Item - Retrieve details of a specific item (slide) by its external ID and the run ID. + Retrieve details of a specific item (slide) by its ID. - :param run_id: The run id, returned by `POST /runs/` endpoint (required) - :type run_id: str - :param external_id: The `external_id` that was defined for the item by the customer that triggered the run. (required) - :type external_id: str + :param item_id: (required) + :type item_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -1356,9 +1326,8 @@ def get_item_by_run_v1_runs_run_id_items_external_id_get_without_preload_content :return: Returns the result object. 
""" # noqa: E501 - _param = self._get_item_by_run_v1_runs_run_id_items_external_id_get_serialize( - run_id=run_id, - external_id=external_id, + _param = self._get_item_v1_items_item_id_get_serialize( + item_id=item_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -1366,9 +1335,9 @@ def get_item_by_run_v1_runs_run_id_items_external_id_get_without_preload_content ) _response_types_map: Dict[str, Optional[str]] = { - '200': "ItemResultReadResponse", - '404': None, + '200': "ItemReadResponse", '403': None, + '404': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -1378,10 +1347,9 @@ def get_item_by_run_v1_runs_run_id_items_external_id_get_without_preload_content return response_data.response - def _get_item_by_run_v1_runs_run_id_items_external_id_get_serialize( + def _get_item_v1_items_item_id_get_serialize( self, - run_id, - external_id, + item_id, _request_auth, _content_type, _headers, @@ -1403,10 +1371,8 @@ def _get_item_by_run_v1_runs_run_id_items_external_id_get_serialize( _body_params: Optional[bytes] = None # process the path parameters - if run_id is not None: - _path_params['run_id'] = run_id - if external_id is not None: - _path_params['external_id'] = external_id + if item_id is not None: + _path_params['item_id'] = item_id # process the query parameters # process the header parameters # process the form parameters @@ -1429,7 +1395,7 @@ def _get_item_by_run_v1_runs_run_id_items_external_id_get_serialize( return self.api_client.param_serialize( method='GET', - resource_path='/api/v1/runs/{run_id}/items/{external_id}', + resource_path='/api/v1/items/{item_id}', path_params=_path_params, query_params=_query_params, header_params=_header_params, @@ -1692,9 +1658,9 @@ def _get_me_v1_me_get_serialize( @validate_call - def get_run_v1_runs_run_id_get( + def get_run_v1_runs_application_run_id_get( self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], + application_run_id: Annotated[StrictStr, Field(description="Application run id, returned by `POST /runs/` endpoint")], _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -1710,10 +1676,10 @@ def get_run_v1_runs_run_id_get( ) -> RunReadResponse: """Get run details - This endpoint allows the caller to retrieve the current status of a run along with other relevant run details. A run becomes available immediately after it is created through the POST `/runs/` endpoint. To download the output results, use GET `/runs/{run_id}/` items to get outputs for all slides. Access to a run is restricted to the user who created it. + This endpoint allows the caller to retrieve the current status of an application run along with other relevant run details. A run becomes available immediately after it is created through the POST `/runs/` endpoint. To download the output results, use GET `/runs/{application_run_id}/` results to get outputs for all slides. Access to a run is restricted to the user who created it. - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str + :param application_run_id: Application run id, returned by `POST /runs/` endpoint (required) + :type application_run_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -1736,8 +1702,8 @@ def get_run_v1_runs_run_id_get( :return: Returns the result object. 
""" # noqa: E501 - _param = self._get_run_v1_runs_run_id_get_serialize( - run_id=run_id, + _param = self._get_run_v1_runs_application_run_id_get_serialize( + application_run_id=application_run_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -1762,9 +1728,9 @@ def get_run_v1_runs_run_id_get( @validate_call - def get_run_v1_runs_run_id_get_with_http_info( + def get_run_v1_runs_application_run_id_get_with_http_info( self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], + application_run_id: Annotated[StrictStr, Field(description="Application run id, returned by `POST /runs/` endpoint")], _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -1780,10 +1746,10 @@ def get_run_v1_runs_run_id_get_with_http_info( ) -> ApiResponse[RunReadResponse]: """Get run details - This endpoint allows the caller to retrieve the current status of a run along with other relevant run details. A run becomes available immediately after it is created through the POST `/runs/` endpoint. To download the output results, use GET `/runs/{run_id}/` items to get outputs for all slides. Access to a run is restricted to the user who created it. + This endpoint allows the caller to retrieve the current status of an application run along with other relevant run details. A run becomes available immediately after it is created through the POST `/runs/` endpoint. To download the output results, use GET `/runs/{application_run_id}/` results to get outputs for all slides. Access to a run is restricted to the user who created it. - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str + :param application_run_id: Application run id, returned by `POST /runs/` endpoint (required) + :type application_run_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -1806,8 +1772,8 @@ def get_run_v1_runs_run_id_get_with_http_info( :return: Returns the result object. """ # noqa: E501 - _param = self._get_run_v1_runs_run_id_get_serialize( - run_id=run_id, + _param = self._get_run_v1_runs_application_run_id_get_serialize( + application_run_id=application_run_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -1832,9 +1798,9 @@ def get_run_v1_runs_run_id_get_with_http_info( @validate_call - def get_run_v1_runs_run_id_get_without_preload_content( + def get_run_v1_runs_application_run_id_get_without_preload_content( self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], + application_run_id: Annotated[StrictStr, Field(description="Application run id, returned by `POST /runs/` endpoint")], _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -1850,10 +1816,10 @@ def get_run_v1_runs_run_id_get_without_preload_content( ) -> RESTResponseType: """Get run details - This endpoint allows the caller to retrieve the current status of a run along with other relevant run details. A run becomes available immediately after it is created through the POST `/runs/` endpoint. To download the output results, use GET `/runs/{run_id}/` items to get outputs for all slides. Access to a run is restricted to the user who created it. + This endpoint allows the caller to retrieve the current status of an application run along with other relevant run details. 
A run becomes available immediately after it is created through the POST `/runs/` endpoint. To download the output results, use GET `/runs/{application_run_id}/results` to get outputs for all slides. Access to a run is restricted to the user who created it. - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str + :param application_run_id: Application run id, returned by `POST /runs/` endpoint (required) + :type application_run_id: str :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -1876,8 +1842,8 @@ :return: Returns the result object. """ # noqa: E501 - _param = self._get_run_v1_runs_run_id_get_serialize( - run_id=run_id, + _param = self._get_run_v1_runs_application_run_id_get_serialize( + application_run_id=application_run_id, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -1897,9 +1863,9 @@ def get_run_v1_runs_run_id_get_without_preload_content( return response_data.response - def _get_run_v1_runs_run_id_get_serialize( + def _get_run_v1_runs_application_run_id_get_serialize( self, - run_id, + application_run_id, _request_auth, _content_type, _headers, @@ -1921,8 +1887,8 @@ def _get_run_v1_runs_run_id_get_serialize( _body_params: Optional[bytes] = None # process the path parameters - if run_id is not None: - _path_params['run_id'] = run_id + if application_run_id is not None: + _path_params['application_run_id'] = application_run_id # process the query parameters # process the header parameters # process the form parameters @@ -1945,7 +1911,7 @@ def _get_run_v1_runs_run_id_get_serialize( return self.api_client.param_serialize( method='GET', - resource_path='/api/v1/runs/{run_id}', + resource_path='/api/v1/runs/{application_run_id}', path_params=_path_params, query_params=_query_params, header_params=_header_params, @@ -1962,11 +1928,14 @@ def _get_run_v1_runs_run_id_get_serialize( @validate_call - def list_applications_v1_applications_get( + def list_application_runs_v1_runs_get( self, + application_id: Annotated[Optional[StrictStr], Field(description="Optional application ID filter")] = None, + application_version: Annotated[Optional[StrictStr], Field(description="Optional application version filter")] = None, + metadata: Annotated[Optional[Annotated[str, Field(strict=True, max_length=1000)]], Field(description="Use PostgreSQL JSONPath expressions to filter runs by their metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.project` - Runs that have a project field defined - **Exact value match**: `$.project ? (@ == \"cancer-research\")` - Runs with specific project value - **Numeric comparison**: `$.duration_hours ? (@ < 2)` - Runs with duration less than 2 hours - **Array operations**: `$.tags[*] ? (@ == \"production\")` - Runs tagged with \"production\" - **Complex conditions**: `$.resources ?
(@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.project` - **Exact value match**: `%24.project%20%3F%20(%40%20%3D%3D%20%22cancer-research%22)` - **Numeric comparison**: `%24.duration_hours%20%3F%20(%40%20%3C%202)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22production%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** ")] = None, page: Optional[Annotated[int, Field(strict=True, ge=1)]] = None, page_size: Optional[Annotated[int, Field(le=100, strict=True, ge=5)]] = None, - sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then name descending")] = None, + sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_run_id` - `application_version_id` - `organization_id` - `status` - `triggered_at` - `triggered_by` **Examples:** - `?sort=triggered_at` - Sort by creation time (ascending) - `?sort=-triggered_at` - Sort by creation time (descending) - `?sort=status&sort=-triggered_at` - Sort by status, then by time (descending) ")] = None, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -1979,16 +1948,22 @@ def list_applications_v1_applications_get( _content_type: Optional[StrictStr] = None, _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> List[ApplicationReadShortResponse]: - """List available applications + ) -> List[RunReadResponse]: + """List Application Runs - Returns the list of the applications, available to the caller. The application is available if any of the versions of the application is assigned to the caller’s organization. The response is paginated and sorted according to the provided parameters. + List application runs with filtering, sorting, and pagination capabilities. Returns paginated application runs that were triggered by the user. + :param application_id: Optional application ID filter + :type application_id: str + :param application_version: Optional application version filter + :type application_version: str + :param metadata: Use PostgreSQL JSONPath expressions to filter runs by their metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. 
#### Examples (Clear Format): - **Field existence**: `$.project` - Runs that have a project field defined - **Exact value match**: `$.project ? (@ == \"cancer-research\")` - Runs with specific project value - **Numeric comparison**: `$.duration_hours ? (@ < 2)` - Runs with duration less than 2 hours - **Array operations**: `$.tags[*] ? (@ == \"production\")` - Runs tagged with \"production\" - **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.project` - **Exact value match**: `%24.project%20%3F%20(%40%20%3D%3D%20%22cancer-research%22)` - **Numeric comparison**: `%24.duration_hours%20%3F%20(%40%20%3C%202)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22production%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** + :type metadata: str :param page: :type page: int :param page_size: :type page_size: int - :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then name descending + :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_run_id` - `application_version_id` - `organization_id` - `status` - `triggered_at` - `triggered_by` **Examples:** - `?sort=triggered_at` - Sort by creation time (ascending) - `?sort=-triggered_at` - Sort by creation time (descending) - `?sort=status&sort=-triggered_at` - Sort by status, then by time (descending) :type sort: List[str] :param _request_timeout: timeout setting for this request. If one number provided, it will be total request @@ -2012,7 +1987,10 @@ def list_applications_v1_applications_get( :return: Returns the result object. 
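The URL-encoding caveat above mostly disappears when the HTTP client encodes query parameters itself. A sketch with `requests` (an assumption, as are the filter values and auth placeholders); note that a list value yields the repeated `?sort=` parameters the endpoint expects:

```python
import requests

BASE_URL = "https://platform.example.com"      # placeholder host
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder auth

params = {
    "application_id": "he-tme",                          # hypothetical filter value
    "metadata": '$.project ? (@ == "cancer-research")',  # plain JSONPath, no encoding
    "page": 1,
    "page_size": 20,                                     # allowed range is 5-100
    "sort": ["status", "-triggered_at"],                 # sent as repeated ?sort=
}

# requests percent-encodes each value, so the JSONPath can be passed verbatim.
resp = requests.get(f"{BASE_URL}/api/v1/runs", headers=HEADERS, params=params)
resp.raise_for_status()
runs = resp.json()  # list of runs (RunReadResponse shape)
```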
""" # noqa: E501 - _param = self._list_applications_v1_applications_get_serialize( + _param = self._list_application_runs_v1_runs_get_serialize( + application_id=application_id, + application_version=application_version, + metadata=metadata, page=page, page_size=page_size, sort=sort, @@ -2023,8 +2001,8 @@ def list_applications_v1_applications_get( ) _response_types_map: Dict[str, Optional[str]] = { - '200': "List[ApplicationReadShortResponse]", - '401': None, + '200': "List[RunReadResponse]", + '404': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -2039,11 +2017,14 @@ def list_applications_v1_applications_get( @validate_call - def list_applications_v1_applications_get_with_http_info( + def list_application_runs_v1_runs_get_with_http_info( self, + application_id: Annotated[Optional[StrictStr], Field(description="Optional application ID filter")] = None, + application_version: Annotated[Optional[StrictStr], Field(description="Optional application version filter")] = None, + metadata: Annotated[Optional[Annotated[str, Field(strict=True, max_length=1000)]], Field(description="Use PostgreSQL JSONPath expressions to filter runs by their metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.project` - Runs that have a project field defined - **Exact value match**: `$.project ? (@ == \"cancer-research\")` - Runs with specific project value - **Numeric comparison**: `$.duration_hours ? (@ < 2)` - Runs with duration less than 2 hours - **Array operations**: `$.tags[*] ? (@ == \"production\")` - Runs tagged with \"production\" - **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.project` - **Exact value match**: `%24.project%20%3F%20(%40%20%3D%3D%20%22cancer-research%22)` - **Numeric comparison**: `%24.duration_hours%20%3F%20(%40%20%3C%202)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22production%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** ")] = None, page: Optional[Annotated[int, Field(strict=True, ge=1)]] = None, page_size: Optional[Annotated[int, Field(le=100, strict=True, ge=5)]] = None, - sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then name descending")] = None, + sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. 
Use `+` for ascending and `-` for descending order. **Available fields:** - `application_run_id` - `application_version_id` - `organization_id` - `status` - `triggered_at` - `triggered_by` **Examples:** - `?sort=triggered_at` - Sort by creation time (ascending) - `?sort=-triggered_at` - Sort by creation time (descending) - `?sort=status&sort=-triggered_at` - Sort by status, then by time (descending) ")] = None, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -2056,16 +2037,22 @@ def list_applications_v1_applications_get_with_http_info( _content_type: Optional[StrictStr] = None, _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> ApiResponse[List[ApplicationReadShortResponse]]: - """List available applications + ) -> ApiResponse[List[RunReadResponse]]: + """List Application Runs - Returns the list of the applications, available to the caller. The application is available if any of the versions of the application is assigned to the caller’s organization. The response is paginated and sorted according to the provided parameters. + List application runs with filtering, sorting, and pagination capabilities. Returns paginated application runs that were triggered by the user. + :param application_id: Optional application ID filter + :type application_id: str + :param application_version: Optional application version filter + :type application_version: str + :param metadata: Use PostgreSQL JSONPath expressions to filter runs by their metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.project` - Runs that have a project field defined - **Exact value match**: `$.project ? (@ == \"cancer-research\")` - Runs with specific project value - **Numeric comparison**: `$.duration_hours ? (@ < 2)` - Runs with duration less than 2 hours - **Array operations**: `$.tags[*] ? (@ == \"production\")` - Runs tagged with \"production\" - **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.project` - **Exact value match**: `%24.project%20%3F%20(%40%20%3D%3D%20%22cancer-research%22)` - **Numeric comparison**: `%24.duration_hours%20%3F%20(%40%20%3C%202)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22production%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** + :type metadata: str :param page: :type page: int :param page_size: :type page_size: int - :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. 
**Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then name descending + :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_run_id` - `application_version_id` - `organization_id` - `status` - `triggered_at` - `triggered_by` **Examples:** - `?sort=triggered_at` - Sort by creation time (ascending) - `?sort=-triggered_at` - Sort by creation time (descending) - `?sort=status&sort=-triggered_at` - Sort by status, then by time (descending) :type sort: List[str] :param _request_timeout: timeout setting for this request. If one number provided, it will be total request @@ -2089,7 +2076,10 @@ def list_applications_v1_applications_get_with_http_info( :return: Returns the result object. """ # noqa: E501 - _param = self._list_applications_v1_applications_get_serialize( + _param = self._list_application_runs_v1_runs_get_serialize( + application_id=application_id, + application_version=application_version, + metadata=metadata, page=page, page_size=page_size, sort=sort, @@ -2100,8 +2090,8 @@ def list_applications_v1_applications_get_with_http_info( ) _response_types_map: Dict[str, Optional[str]] = { - '200': "List[ApplicationReadShortResponse]", - '401': None, + '200': "List[RunReadResponse]", + '404': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -2116,11 +2106,14 @@ def list_applications_v1_applications_get_with_http_info( @validate_call - def list_applications_v1_applications_get_without_preload_content( + def list_application_runs_v1_runs_get_without_preload_content( self, + application_id: Annotated[Optional[StrictStr], Field(description="Optional application ID filter")] = None, + application_version: Annotated[Optional[StrictStr], Field(description="Optional application version filter")] = None, + metadata: Annotated[Optional[Annotated[str, Field(strict=True, max_length=1000)]], Field(description="Use PostgreSQL JSONPath expressions to filter runs by their metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.project` - Runs that have a project field defined - **Exact value match**: `$.project ? (@ == \"cancer-research\")` - Runs with specific project value - **Numeric comparison**: `$.duration_hours ? (@ < 2)` - Runs with duration less than 2 hours - **Array operations**: `$.tags[*] ? (@ == \"production\")` - Runs tagged with \"production\" - **Complex conditions**: `$.resources ? 
(@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.project` - **Exact value match**: `%24.project%20%3F%20(%40%20%3D%3D%20%22cancer-research%22)` - **Numeric comparison**: `%24.duration_hours%20%3F%20(%40%20%3C%202)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22production%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** ")] = None, page: Optional[Annotated[int, Field(strict=True, ge=1)]] = None, page_size: Optional[Annotated[int, Field(le=100, strict=True, ge=5)]] = None, - sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then name descending")] = None, + sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_run_id` - `application_version_id` - `organization_id` - `status` - `triggered_at` - `triggered_by` **Examples:** - `?sort=triggered_at` - Sort by creation time (ascending) - `?sort=-triggered_at` - Sort by creation time (descending) - `?sort=status&sort=-triggered_at` - Sort by status, then by time (descending) ")] = None, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -2134,15 +2127,21 @@ def list_applications_v1_applications_get_without_preload_content( _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, ) -> RESTResponseType: - """List available applications + """List Application Runs - Returns the list of the applications, available to the caller. The application is available if any of the versions of the application is assigned to the caller’s organization. The response is paginated and sorted according to the provided parameters. + List application runs with filtering, sorting, and pagination capabilities. Returns paginated application runs that were triggered by the user. + :param application_id: Optional application ID filter + :type application_id: str + :param application_version: Optional application version filter + :type application_version: str + :param metadata: Use PostgreSQL JSONPath expressions to filter runs by their metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.project` - Runs that have a project field defined - **Exact value match**: `$.project ? 
(@ == \"cancer-research\")` - Runs with specific project value - **Numeric comparison**: `$.duration_hours ? (@ < 2)` - Runs with duration less than 2 hours - **Array operations**: `$.tags[*] ? (@ == \"production\")` - Runs tagged with \"production\" - **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.project` - **Exact value match**: `%24.project%20%3F%20(%40%20%3D%3D%20%22cancer-research%22)` - **Numeric comparison**: `%24.duration_hours%20%3F%20(%40%20%3C%202)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22production%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** + :type metadata: str :param page: :type page: int :param page_size: :type page_size: int - :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then name descending + :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_run_id` - `application_version_id` - `organization_id` - `status` - `triggered_at` - `triggered_by` **Examples:** - `?sort=triggered_at` - Sort by creation time (ascending) - `?sort=-triggered_at` - Sort by creation time (descending) - `?sort=status&sort=-triggered_at` - Sort by status, then by time (descending) :type sort: List[str] :param _request_timeout: timeout setting for this request. If one number provided, it will be total request @@ -2166,7 +2165,10 @@ def list_applications_v1_applications_get_without_preload_content( :return: Returns the result object. 
""" # noqa: E501 - _param = self._list_applications_v1_applications_get_serialize( + _param = self._list_application_runs_v1_runs_get_serialize( + application_id=application_id, + application_version=application_version, + metadata=metadata, page=page, page_size=page_size, sort=sort, @@ -2177,8 +2179,8 @@ def list_applications_v1_applications_get_without_preload_content( ) _response_types_map: Dict[str, Optional[str]] = { - '200': "List[ApplicationReadShortResponse]", - '401': None, + '200': "List[RunReadResponse]", + '404': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -2188,8 +2190,11 @@ def list_applications_v1_applications_get_without_preload_content( return response_data.response - def _list_applications_v1_applications_get_serialize( + def _list_application_runs_v1_runs_get_serialize( self, + application_id, + application_version, + metadata, page, page_size, sort, @@ -2216,13 +2221,25 @@ def _list_applications_v1_applications_get_serialize( # process the path parameters # process the query parameters + if application_id is not None: + + _query_params.append(('application_id', application_id)) + + if application_version is not None: + + _query_params.append(('application_version', application_version)) + + if metadata is not None: + + _query_params.append(('metadata', metadata)) + if page is not None: _query_params.append(('page', page)) if page_size is not None: - _query_params.append(('page-size', page_size)) + _query_params.append(('page_size', page_size)) if sort is not None: @@ -2249,7 +2266,7 @@ def _list_applications_v1_applications_get_serialize( return self.api_client.param_serialize( method='GET', - resource_path='/api/v1/applications', + resource_path='/api/v1/runs', path_params=_path_params, query_params=_query_params, header_params=_header_params, @@ -2266,17 +2283,11 @@ def _list_applications_v1_applications_get_serialize( @validate_call - def list_run_items_v1_runs_run_id_items_get( + def list_applications_v1_applications_get( self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], - item_id__in: Annotated[Optional[List[StrictStr]], Field(description="Filter for item ids")] = None, - external_id__in: Annotated[Optional[List[StrictStr]], Field(description="Filter for items by their external_id from the input payload")] = None, - state: Annotated[Optional[ItemState], Field(description="Filter items by their state")] = None, - termination_reason: Annotated[Optional[ItemTerminationReason], Field(description="Filter items by their termination reason. Only applies to TERMINATED items.")] = None, - custom_metadata: Annotated[Optional[Annotated[str, Field(strict=True, max_length=1000)]], Field(description="JSONPath expression to filter items by their custom_metadata")] = None, page: Optional[Annotated[int, Field(strict=True, ge=1)]] = None, page_size: Optional[Annotated[int, Field(le=100, strict=True, ge=5)]] = None, - sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the items by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `item_id` - `run_id` - `external_id` - `custom_metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-external_id` - Sort by external ID (descending) - `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending)")] = None, + sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. 
Use `+` for ascending and `-` for descending order. **Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then by name ascending")] = None, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -2289,28 +2300,16 @@ def list_run_items_v1_runs_run_id_items_get( _content_type: Optional[StrictStr] = None, _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> List[ItemResultReadResponse]: - """List Run Items + ) -> List[ApplicationReadResponse]: + """List available applications - List items in a run with filtering, sorting, and pagination capabilities. Returns paginated items within a specific run. Results can be filtered by item IDs, external_ids, status, and custom_metadata using JSONPath expressions. ## JSONPath Metadata Filtering Use PostgreSQL JSONPath expressions to filter items using their custom_metadata. ### Examples: - **Field existence**: `$.case_id` - Results that have a case_id field defined - **Exact value match**: `$.priority ? (@ == \"high\")` - Results with high priority - **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with high confidence - **Array operations**: `$.flags[*] ? (@ == \"reviewed\")` - Results flagged as reviewed - **Complex conditions**: `$.metrics ? (@.accuracy > 0.9 && @.recall > 0.8)` - Results meeting performance thresholds ## Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations + Returns the list of applications available to the caller. An application is available if any of its versions is assigned to the caller’s organization. The response is paginated and sorted according to the provided parameters. - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str - :param item_id__in: Filter for item ids - :type item_id__in: List[str] - :param external_id__in: Filter for items by their external_id from the input payload - :type external_id__in: List[str] - :param state: Filter items by their state - :type state: ItemState - :param termination_reason: Filter items by their termination reason. Only applies to TERMINATED items. - :type termination_reason: ItemTerminationReason - :param custom_metadata: JSONPath expression to filter items by their custom_metadata - :type custom_metadata: str :param page: :type page: int :param page_size: :type page_size: int - :param sort: Sort the items by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `item_id` - `run_id` - `external_id` - `custom_metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-external_id` - Sort by external ID (descending) - `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending) + :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.
**Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then by name ascending :type sort: List[str] :param _request_timeout: timeout setting for this request. If one number provided, it will be total request @@ -2334,13 +2333,7 @@ :return: Returns the result object. """ # noqa: E501 - _param = self._list_run_items_v1_runs_run_id_items_get_serialize( - run_id=run_id, - item_id__in=item_id__in, - external_id__in=external_id__in, - state=state, - termination_reason=termination_reason, - custom_metadata=custom_metadata, + _param = self._list_applications_v1_applications_get_serialize( page=page, page_size=page_size, sort=sort, @@ -2351,8 +2344,8 @@ ) _response_types_map: Dict[str, Optional[str]] = { - '200': "List[ItemResultReadResponse]", - '404': None, + '200': "List[ApplicationReadResponse]", + '401': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -2367,17 +2360,11 @@ @validate_call - def list_run_items_v1_runs_run_id_items_get_with_http_info( + def list_applications_v1_applications_get_with_http_info( self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], - item_id__in: Annotated[Optional[List[StrictStr]], Field(description="Filter for item ids")] = None, - external_id__in: Annotated[Optional[List[StrictStr]], Field(description="Filter for items by their external_id from the input payload")] = None, - state: Annotated[Optional[ItemState], Field(description="Filter items by their state")] = None, - termination_reason: Annotated[Optional[ItemTerminationReason], Field(description="Filter items by their termination reason. Only applies to TERMINATED items.")] = None, - custom_metadata: Annotated[Optional[Annotated[str, Field(strict=True, max_length=1000)]], Field(description="JSONPath expression to filter items by their custom_metadata")] = None, page: Optional[Annotated[int, Field(strict=True, ge=1)]] = None, page_size: Optional[Annotated[int, Field(le=100, strict=True, ge=5)]] = None, - sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the items by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `item_id` - `run_id` - `external_id` - `custom_metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-external_id` - Sort by external ID (descending) - `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending)")] = None, + sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.
**Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then name descending")] = None, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -2390,28 +2377,16 @@ def list_run_items_v1_runs_run_id_items_get_with_http_info( _content_type: Optional[StrictStr] = None, _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> ApiResponse[List[ItemResultReadResponse]]: - """List Run Items + ) -> ApiResponse[List[ApplicationReadResponse]]: + """List available applications - List items in a run with filtering, sorting, and pagination capabilities. Returns paginated items within a specific run. Results can be filtered by item IDs, external_ids, status, and custom_metadata using JSONPath expressions. ## JSONPath Metadata Filtering Use PostgreSQL JSONPath expressions to filter items using their custom_metadata. ### Examples: - **Field existence**: `$.case_id` - Results that have a case_id field defined - **Exact value match**: `$.priority ? (@ == \"high\")` - Results with high priority - **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with high confidence - **Array operations**: `$.flags[*] ? (@ == \"reviewed\")` - Results flagged as reviewed - **Complex conditions**: `$.metrics ? (@.accuracy > 0.9 && @.recall > 0.8)` - Results meeting performance thresholds ## Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations + Returns the list of the applications, available to the caller. The application is available if any of the versions of the application is assigned to the caller’s organization. The response is paginated and sorted according to the provided parameters. - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str - :param item_id__in: Filter for item ids - :type item_id__in: List[str] - :param external_id__in: Filter for items by their external_id from the input payload - :type external_id__in: List[str] - :param state: Filter items by their state - :type state: ItemState - :param termination_reason: Filter items by their termination reason. Only applies to TERMINATED items. - :type termination_reason: ItemTerminationReason - :param custom_metadata: JSONPath expression to filter items by their custom_metadata - :type custom_metadata: str :param page: :type page: int :param page_size: :type page_size: int - :param sort: Sort the items by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `item_id` - `run_id` - `external_id` - `custom_metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-external_id` - Sort by external ID (descending) - `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending) + :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. 
**Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then name descending :type sort: List[str] :param _request_timeout: timeout setting for this request. If one number provided, it will be total request @@ -2435,13 +2410,7 @@ def list_run_items_v1_runs_run_id_items_get_with_http_info( :return: Returns the result object. """ # noqa: E501 - _param = self._list_run_items_v1_runs_run_id_items_get_serialize( - run_id=run_id, - item_id__in=item_id__in, - external_id__in=external_id__in, - state=state, - termination_reason=termination_reason, - custom_metadata=custom_metadata, + _param = self._list_applications_v1_applications_get_serialize( page=page, page_size=page_size, sort=sort, @@ -2452,8 +2421,8 @@ def list_run_items_v1_runs_run_id_items_get_with_http_info( ) _response_types_map: Dict[str, Optional[str]] = { - '200': "List[ItemResultReadResponse]", - '404': None, + '200': "List[ApplicationReadResponse]", + '401': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -2468,17 +2437,11 @@ def list_run_items_v1_runs_run_id_items_get_with_http_info( @validate_call - def list_run_items_v1_runs_run_id_items_get_without_preload_content( + def list_applications_v1_applications_get_without_preload_content( self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], - item_id__in: Annotated[Optional[List[StrictStr]], Field(description="Filter for item ids")] = None, - external_id__in: Annotated[Optional[List[StrictStr]], Field(description="Filter for items by their external_id from the input payload")] = None, - state: Annotated[Optional[ItemState], Field(description="Filter items by their state")] = None, - termination_reason: Annotated[Optional[ItemTerminationReason], Field(description="Filter items by their termination reason. Only applies to TERMINATED items.")] = None, - custom_metadata: Annotated[Optional[Annotated[str, Field(strict=True, max_length=1000)]], Field(description="JSONPath expression to filter items by their custom_metadata")] = None, page: Optional[Annotated[int, Field(strict=True, ge=1)]] = None, page_size: Optional[Annotated[int, Field(le=100, strict=True, ge=5)]] = None, - sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the items by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `item_id` - `run_id` - `external_id` - `custom_metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-external_id` - Sort by external ID (descending) - `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending)")] = None, + sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. 
**Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then name descending")] = None, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -2492,27 +2455,15 @@ def list_run_items_v1_runs_run_id_items_get_without_preload_content( _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, ) -> RESTResponseType: - """List Run Items + """List available applications - List items in a run with filtering, sorting, and pagination capabilities. Returns paginated items within a specific run. Results can be filtered by item IDs, external_ids, status, and custom_metadata using JSONPath expressions. ## JSONPath Metadata Filtering Use PostgreSQL JSONPath expressions to filter items using their custom_metadata. ### Examples: - **Field existence**: `$.case_id` - Results that have a case_id field defined - **Exact value match**: `$.priority ? (@ == \"high\")` - Results with high priority - **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with high confidence - **Array operations**: `$.flags[*] ? (@ == \"reviewed\")` - Results flagged as reviewed - **Complex conditions**: `$.metrics ? (@.accuracy > 0.9 && @.recall > 0.8)` - Results meeting performance thresholds ## Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations + Returns the list of the applications, available to the caller. The application is available if any of the versions of the application is assigned to the caller’s organization. The response is paginated and sorted according to the provided parameters. - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str - :param item_id__in: Filter for item ids - :type item_id__in: List[str] - :param external_id__in: Filter for items by their external_id from the input payload - :type external_id__in: List[str] - :param state: Filter items by their state - :type state: ItemState - :param termination_reason: Filter items by their termination reason. Only applies to TERMINATED items. - :type termination_reason: ItemTerminationReason - :param custom_metadata: JSONPath expression to filter items by their custom_metadata - :type custom_metadata: str :param page: :type page: int :param page_size: :type page_size: int - :param sort: Sort the items by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `item_id` - `run_id` - `external_id` - `custom_metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-external_id` - Sort by external ID (descending) - `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending) + :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. 
**Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then name descending :type sort: List[str] :param _request_timeout: timeout setting for this request. If one number provided, it will be total request @@ -2536,13 +2487,7 @@ def list_run_items_v1_runs_run_id_items_get_without_preload_content( :return: Returns the result object. """ # noqa: E501 - _param = self._list_run_items_v1_runs_run_id_items_get_serialize( - run_id=run_id, - item_id__in=item_id__in, - external_id__in=external_id__in, - state=state, - termination_reason=termination_reason, - custom_metadata=custom_metadata, + _param = self._list_applications_v1_applications_get_serialize( page=page, page_size=page_size, sort=sort, @@ -2553,8 +2498,8 @@ def list_run_items_v1_runs_run_id_items_get_without_preload_content( ) _response_types_map: Dict[str, Optional[str]] = { - '200': "List[ItemResultReadResponse]", - '404': None, + '200': "List[ApplicationReadResponse]", + '401': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -2564,14 +2509,8 @@ def list_run_items_v1_runs_run_id_items_get_without_preload_content( return response_data.response - def _list_run_items_v1_runs_run_id_items_get_serialize( + def _list_applications_v1_applications_get_serialize( self, - run_id, - item_id__in, - external_id__in, - state, - termination_reason, - custom_metadata, page, page_size, sort, @@ -2584,8 +2523,6 @@ def _list_run_items_v1_runs_run_id_items_get_serialize( _host = None _collection_formats: Dict[str, str] = { - 'item_id__in': 'multi', - 'external_id__in': 'multi', 'sort': 'multi', } @@ -2599,29 +2536,7 @@ def _list_run_items_v1_runs_run_id_items_get_serialize( _body_params: Optional[bytes] = None # process the path parameters - if run_id is not None: - _path_params['run_id'] = run_id # process the query parameters - if item_id__in is not None: - - _query_params.append(('item_id__in', item_id__in)) - - if external_id__in is not None: - - _query_params.append(('external_id__in', external_id__in)) - - if state is not None: - - _query_params.append(('state', state.value)) - - if termination_reason is not None: - - _query_params.append(('termination_reason', termination_reason.value)) - - if custom_metadata is not None: - - _query_params.append(('custom_metadata', custom_metadata)) - if page is not None: _query_params.append(('page', page)) @@ -2655,7 +2570,7 @@ def _list_run_items_v1_runs_run_id_items_get_serialize( return self.api_client.param_serialize( method='GET', - resource_path='/api/v1/runs/{run_id}/items', + resource_path='/api/v1/applications', path_params=_path_params, query_params=_query_params, header_params=_header_params, @@ -2672,15 +2587,16 @@ def _list_run_items_v1_runs_run_id_items_get_serialize( @validate_call - def list_runs_v1_runs_get( + def list_run_results_v1_runs_application_run_id_results_get( self, - application_id: Annotated[Optional[StrictStr], Field(description="Optional application ID filter")] = None, - application_version: Annotated[Optional[StrictStr], Field(description="Optional Version Name")] = None, - external_id: Annotated[Optional[StrictStr], Field(description="Optionally filter runs by items with this external ID")] = None, - custom_metadata: Annotated[Optional[Annotated[str, Field(strict=True, max_length=1000)]], Field(description="Use PostgreSQL JSONPath 
expressions to filter runs by their custom_metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.study` - Runs that have a study field defined - **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value - **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75 - **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\" - **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.study` - **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)` - **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** ")] = None, + application_run_id: Annotated[StrictStr, Field(description="Application run id, returned by `POST /runs/` endpoint")], + item_id__in: Annotated[Optional[List[StrictStr]], Field(description="Filter for item ids")] = None, + reference__in: Annotated[Optional[List[StrictStr]], Field(description="Filter for items by their reference from the input payload")] = None, + status__in: Annotated[Optional[List[ItemStatus]], Field(description="Filter for items in certain statuses")] = None, + metadata: Annotated[Optional[Annotated[str, Field(strict=True, max_length=1000)]], Field(description="JSONPath expression to filter results by their metadata")] = None, page: Optional[Annotated[int, Field(strict=True, ge=1)]] = None, page_size: Optional[Annotated[int, Field(le=100, strict=True, ge=5)]] = None, - sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `run_id` - `application_version_id` - `organization_id` - `status` - `submitted_at` - `submitted_by` **Examples:** - `?sort=submitted_at` - Sort by creation time (ascending) - `?sort=-submitted_at` - Sort by creation time (descending) - `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending) ")] = None, + sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. 
**Available fields:** - `item_id` - `application_run_id` - `reference` - `status` - `metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-application_run_id` - Sort by id of the run (descending) - `?sort=status&sort=-item_id` - Sort by status, then by id of the item (descending)")] = None, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -2693,24 +2609,26 @@ def list_runs_v1_runs_get( _content_type: Optional[StrictStr] = None, _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> List[RunReadResponse]: - """List Runs + ) -> List[ItemResultReadResponse]: + """List Run Results - List runs with filtering, sorting, and pagination capabilities. Returns paginated runs that were submitted by the user. + List results for items in an application run with filtering, sorting, and pagination capabilities. Returns paginated results for items within a specific application run. Results can be filtered by item IDs, references, status, and custom metadata using JSONPath expressions. ## JSONPath Metadata Filtering Use PostgreSQL JSONPath expressions to filter results by their metadata. ### Examples: - **Field existence**: `$.case_id` - Results that have a case_id field defined - **Exact value match**: `$.priority ? (@ == \"high\")` - Results with high priority - **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with high confidence - **Array operations**: `$.flags[*] ? (@ == \"reviewed\")` - Results flagged as reviewed - **Complex conditions**: `$.metrics ? (@.accuracy > 0.9 && @.recall > 0.8)` - Results meeting performance thresholds ## Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - :param application_id: Optional application ID filter - :type application_id: str - :param application_version: Optional Version Name - :type application_version: str - :param external_id: Optionally filter runs by items with this external ID - :type external_id: str - :param custom_metadata: Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.study` - Runs that have a study field defined - **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value - **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75 - **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\" - **Complex conditions**: `$.resources ? 
(@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.study` - **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)` - **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** - :type custom_metadata: str + :param application_run_id: Application run id, returned by `POST /runs/` endpoint (required) + :type application_run_id: str + :param item_id__in: Filter for item ids + :type item_id__in: List[str] + :param reference__in: Filter for items by their reference from the input payload + :type reference__in: List[str] + :param status__in: Filter for items in certain statuses + :type status__in: List[ItemStatus] + :param metadata: JSONPath expression to filter results by their metadata + :type metadata: str :param page: :type page: int :param page_size: :type page_size: int - :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `run_id` - `application_version_id` - `organization_id` - `status` - `submitted_at` - `submitted_by` **Examples:** - `?sort=submitted_at` - Sort by creation time (ascending) - `?sort=-submitted_at` - Sort by creation time (descending) - `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending) + :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `item_id` - `application_run_id` - `reference` - `status` - `metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-application_run_id` - Sort by id of the run (descending) - `?sort=status&sort=-item_id` - Sort by status, then by id of the item (descending) :type sort: List[str] :param _request_timeout: timeout setting for this request. If one number provided, it will be total request @@ -2734,11 +2652,12 @@ def list_runs_v1_runs_get( :return: Returns the result object. 
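For orientation while reviewing, the following sketch shows how the two renamed methods above might be called together. It is illustrative only: the import path, the `Configuration`/`ApiClient`/`PublicApi` names, the host, and the run id are assumptions, while the method names and their `page`/`page_size`/`sort`/`metadata` parameters come from the generated signatures in this diff.

```python
# Hypothetical usage sketch; names marked "assumed" or "placeholder" are not from this diff.
from aignostics_client import ApiClient, Configuration, PublicApi  # assumed import path

configuration = Configuration(host="https://platform.example.test")  # placeholder host
with ApiClient(configuration) as api_client:
    api = PublicApi(api_client)  # assumed wrapper class name

    # List applications assigned to the caller's organization;
    # page >= 1 and 5 <= page_size <= 100 per the Field annotations above.
    applications = api.list_applications_v1_applications_get(
        page=1,
        page_size=25,
        sort=["+name", "-application_id"],  # '+' ascending, '-' descending
    )

    # Filter run results server-side with a PostgreSQL JSONPath expression;
    # string literals inside conditions use double quotes, per the notes above.
    results = api.list_run_results_v1_runs_application_run_id_results_get(
        application_run_id="00000000-0000-0000-0000-000000000000",  # placeholder id from POST /runs/
        metadata='$.confidence_score ? (@ > 0.95)',
        page=1,
        page_size=50,
    )
```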
""" # noqa: E501 - _param = self._list_runs_v1_runs_get_serialize( - application_id=application_id, - application_version=application_version, - external_id=external_id, - custom_metadata=custom_metadata, + _param = self._list_run_results_v1_runs_application_run_id_results_get_serialize( + application_run_id=application_run_id, + item_id__in=item_id__in, + reference__in=reference__in, + status__in=status__in, + metadata=metadata, page=page, page_size=page_size, sort=sort, @@ -2749,7 +2668,7 @@ def list_runs_v1_runs_get( ) _response_types_map: Dict[str, Optional[str]] = { - '200': "List[RunReadResponse]", + '200': "List[ItemResultReadResponse]", '404': None, '422': "HTTPValidationError", } @@ -2765,15 +2684,16 @@ def list_runs_v1_runs_get( @validate_call - def list_runs_v1_runs_get_with_http_info( + def list_run_results_v1_runs_application_run_id_results_get_with_http_info( self, - application_id: Annotated[Optional[StrictStr], Field(description="Optional application ID filter")] = None, - application_version: Annotated[Optional[StrictStr], Field(description="Optional Version Name")] = None, - external_id: Annotated[Optional[StrictStr], Field(description="Optionally filter runs by items with this external ID")] = None, - custom_metadata: Annotated[Optional[Annotated[str, Field(strict=True, max_length=1000)]], Field(description="Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.study` - Runs that have a study field defined - **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value - **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75 - **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\" - **Complex conditions**: `$.resources ? 
(@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.study` - **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)` - **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** ")] = None, + application_run_id: Annotated[StrictStr, Field(description="Application run id, returned by `POST /runs/` endpoint")], + item_id__in: Annotated[Optional[List[StrictStr]], Field(description="Filter for item ids")] = None, + reference__in: Annotated[Optional[List[StrictStr]], Field(description="Filter for items by their reference from the input payload")] = None, + status__in: Annotated[Optional[List[ItemStatus]], Field(description="Filter for items in certain statuses")] = None, + metadata: Annotated[Optional[Annotated[str, Field(strict=True, max_length=1000)]], Field(description="JSONPath expression to filter results by their metadata")] = None, page: Optional[Annotated[int, Field(strict=True, ge=1)]] = None, page_size: Optional[Annotated[int, Field(le=100, strict=True, ge=5)]] = None, - sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `run_id` - `application_version_id` - `organization_id` - `status` - `submitted_at` - `submitted_by` **Examples:** - `?sort=submitted_at` - Sort by creation time (ascending) - `?sort=-submitted_at` - Sort by creation time (descending) - `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending) ")] = None, + sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `item_id` - `application_run_id` - `reference` - `status` - `metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-application_run_id` - Sort by id of the run (descending) - `?sort=status&sort=-item_id` - Sort by status, then by id of the item (descending)")] = None, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -2786,24 +2706,26 @@ def list_runs_v1_runs_get_with_http_info( _content_type: Optional[StrictStr] = None, _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> ApiResponse[List[RunReadResponse]]: - """List Runs + ) -> ApiResponse[List[ItemResultReadResponse]]: + """List Run Results - List runs with filtering, sorting, and pagination capabilities. Returns paginated runs that were submitted by the user. + List results for items in an application run with filtering, sorting, and pagination capabilities. Returns paginated results for items within a specific application run. Results can be filtered by item IDs, references, status, and custom metadata using JSONPath expressions. 
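The serialize helpers in this diff declare the `sort` parameter with the `multi` collection format, meaning every list element is emitted as its own repeated query parameter. A standard-library sketch of the wire format this produces:

```python
from urllib.parse import urlencode

# 'multi' collection format: each element of the sort list becomes its own
# repeated query parameter, which doseq=True reproduces here.
query = urlencode({"sort": ["status", "-item_id"], "page": 1}, doseq=True)
print(query)  # sort=status&sort=-item_id&page=1
```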
## JSONPath Metadata Filtering Use PostgreSQL JSONPath expressions to filter results by their metadata. ### Examples: - **Field existence**: `$.case_id` - Results that have a case_id field defined - **Exact value match**: `$.priority ? (@ == \"high\")` - Results with high priority - **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with high confidence - **Array operations**: `$.flags[*] ? (@ == \"reviewed\")` - Results flagged as reviewed - **Complex conditions**: `$.metrics ? (@.accuracy > 0.9 && @.recall > 0.8)` - Results meeting performance thresholds ## Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - :param application_id: Optional application ID filter - :type application_id: str - :param application_version: Optional Version Name - :type application_version: str - :param external_id: Optionally filter runs by items with this external ID - :type external_id: str - :param custom_metadata: Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.study` - Runs that have a study field defined - **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value - **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75 - **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\" - **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.study` - **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)` - **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** - :type custom_metadata: str + :param application_run_id: Application run id, returned by `POST /runs/` endpoint (required) + :type application_run_id: str + :param item_id__in: Filter for item ids + :type item_id__in: List[str] + :param reference__in: Filter for items by their reference from the input payload + :type reference__in: List[str] + :param status__in: Filter for items in certain statuses + :type status__in: List[ItemStatus] + :param metadata: JSONPath expression to filter results by their metadata + :type metadata: str :param page: :type page: int :param page_size: :type page_size: int - :param sort: Sort the results by one or more fields. 
Use `+` for ascending and `-` for descending order. **Available fields:** - `run_id` - `application_version_id` - `organization_id` - `status` - `submitted_at` - `submitted_by` **Examples:** - `?sort=submitted_at` - Sort by creation time (ascending) - `?sort=-submitted_at` - Sort by creation time (descending) - `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending) + :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `item_id` - `application_run_id` - `reference` - `status` - `metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-application_run_id` - Sort by id of the run (descending) - `?sort=status&sort=-item_id` - Sort by status, then by id of the item (descending) :type sort: List[str] :param _request_timeout: timeout setting for this request. If one number provided, it will be total request @@ -2827,11 +2749,12 @@ def list_runs_v1_runs_get_with_http_info( :return: Returns the result object. """ # noqa: E501 - _param = self._list_runs_v1_runs_get_serialize( - application_id=application_id, - application_version=application_version, - external_id=external_id, - custom_metadata=custom_metadata, + _param = self._list_run_results_v1_runs_application_run_id_results_get_serialize( + application_run_id=application_run_id, + item_id__in=item_id__in, + reference__in=reference__in, + status__in=status__in, + metadata=metadata, page=page, page_size=page_size, sort=sort, @@ -2842,7 +2765,7 @@ ) _response_types_map: Dict[str, Optional[str]] = { - '200': "List[RunReadResponse]", + '200': "List[ItemResultReadResponse]", '404': None, '422': "HTTPValidationError", } @@ -2858,15 +2781,16 @@ @validate_call - def list_runs_v1_runs_get_without_preload_content( + def list_run_results_v1_runs_application_run_id_results_get_without_preload_content( self, - application_id: Annotated[Optional[StrictStr], Field(description="Optional application ID filter")] = None, - application_version: Annotated[Optional[StrictStr], Field(description="Optional Version Name")] = None, - external_id: Annotated[Optional[StrictStr], Field(description="Optionally filter runs by items with this external ID")] = None, - custom_metadata: Annotated[Optional[Annotated[str, Field(strict=True, max_length=1000)]], Field(description="Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.study` - Runs that have a study field defined - **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value - **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75 - **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\" - **Complex conditions**: `$.resources ? 
(@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.study` - **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)` - **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** ")] = None, + application_run_id: Annotated[StrictStr, Field(description="Application run id, returned by `POST /runs/` endpoint")], + item_id__in: Annotated[Optional[List[StrictStr]], Field(description="Filter for item ids")] = None, + reference__in: Annotated[Optional[List[StrictStr]], Field(description="Filter for items by their reference from the input payload")] = None, + status__in: Annotated[Optional[List[ItemStatus]], Field(description="Filter for items in certain statuses")] = None, + metadata: Annotated[Optional[Annotated[str, Field(strict=True, max_length=1000)]], Field(description="JSONPath expression to filter results by their metadata")] = None, page: Optional[Annotated[int, Field(strict=True, ge=1)]] = None, page_size: Optional[Annotated[int, Field(le=100, strict=True, ge=5)]] = None, - sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `run_id` - `application_version_id` - `organization_id` - `status` - `submitted_at` - `submitted_by` **Examples:** - `?sort=submitted_at` - Sort by creation time (ascending) - `?sort=-submitted_at` - Sort by creation time (descending) - `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending) ")] = None, + sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `item_id` - `application_run_id` - `reference` - `status` - `metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-application_run_id` - Sort by id of the run (descending) - `?sort=status&sort=-item_id` - Sort by status, then by id of the item (descending)")] = None, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -2880,23 +2804,25 @@ def list_runs_v1_runs_get_without_preload_content( _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, ) -> RESTResponseType: - """List Runs + """List Run Results - List runs with filtering, sorting, and pagination capabilities. Returns paginated runs that were submitted by the user. + List results for items in an application run with filtering, sorting, and pagination capabilities. Returns paginated results for items within a specific application run. Results can be filtered by item IDs, references, status, and custom metadata using JSONPath expressions. ## JSONPath Metadata Filtering Use PostgreSQL JSONPath expressions to filter results by their metadata. 
### Examples: - **Field existence**: `$.case_id` - Results that have a case_id field defined - **Exact value match**: `$.priority ? (@ == \"high\")` - Results with high priority - **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with high confidence - **Array operations**: `$.flags[*] ? (@ == \"reviewed\")` - Results flagged as reviewed - **Complex conditions**: `$.metrics ? (@.accuracy > 0.9 && @.recall > 0.8)` - Results meeting performance thresholds ## Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - :param application_id: Optional application ID filter - :type application_id: str - :param application_version: Optional Version Name - :type application_version: str - :param external_id: Optionally filter runs by items with this external ID - :type external_id: str - :param custom_metadata: Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.study` - Runs that have a study field defined - **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value - **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75 - **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\" - **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.study` - **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)` - **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** - :type custom_metadata: str + :param application_run_id: Application run id, returned by `POST /runs/` endpoint (required) + :type application_run_id: str + :param item_id__in: Filter for item ids + :type item_id__in: List[str] + :param reference__in: Filter for items by their reference from the input payload + :type reference__in: List[str] + :param status__in: Filter for items in certain statuses + :type status__in: List[ItemStatus] + :param metadata: JSONPath expression to filter results by their metadata + :type metadata: str :param page: :type page: int :param page_size: :type page_size: int - :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. 
**Available fields:** - `run_id` - `application_version_id` - `organization_id` - `status` - `submitted_at` - `submitted_by` **Examples:** - `?sort=submitted_at` - Sort by creation time (ascending) - `?sort=-submitted_at` - Sort by creation time (descending) - `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending) + :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `item_id` - `application_run_id` - `reference` - `status` - `metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-application_run_id` - Sort by id of the run (descending) - `?sort=status&sort=-item_id` - Sort by status, then by id of the item (descending) :type sort: List[str] :param _request_timeout: timeout setting for this request. If one number provided, it will be total request @@ -2920,11 +2846,12 @@ def list_runs_v1_runs_get_without_preload_content( :return: Returns the result object. """ # noqa: E501 - _param = self._list_runs_v1_runs_get_serialize( - application_id=application_id, - application_version=application_version, - external_id=external_id, - custom_metadata=custom_metadata, + _param = self._list_run_results_v1_runs_application_run_id_results_get_serialize( + application_run_id=application_run_id, + item_id__in=item_id__in, + reference__in=reference__in, + status__in=status__in, + metadata=metadata, page=page, page_size=page_size, sort=sort, @@ -2935,7 +2862,7 @@ ) _response_types_map: Dict[str, Optional[str]] = { - '200': "List[RunReadResponse]", + '200': "List[ItemResultReadResponse]", '404': None, '422': "HTTPValidationError", } @@ -2946,12 +2873,13 @@ return response_data.response - def _list_runs_v1_runs_get_serialize( + def _list_run_results_v1_runs_application_run_id_results_get_serialize( self, - application_id, - application_version, - external_id, - custom_metadata, + application_run_id, + item_id__in, + reference__in, + status__in, + metadata, page, page_size, sort, @@ -2964,6 +2892,9 @@ _host = None _collection_formats: Dict[str, str] = { + 'item_id__in': 'multi', + 'reference__in': 'multi', + 'status__in': 'multi', 'sort': 'multi', } @@ -2977,22 +2908,24 @@ _body_params: Optional[bytes] = None # process the path parameters + if application_run_id is not None: + _path_params['application_run_id'] = application_run_id # process the query parameters - if application_id is not None: + if item_id__in is not None: - _query_params.append(('application_id', application_id)) + _query_params.append(('item_id__in', item_id__in)) - if application_version is not None: + if reference__in is not None: - _query_params.append(('application_version', application_version)) + _query_params.append(('reference__in', reference__in)) - if external_id is not None: + if status__in is not None: - _query_params.append(('external_id', external_id)) + _query_params.append(('status__in', status__in)) - if custom_metadata is not None: + if metadata is not None: - _query_params.append(('custom_metadata', custom_metadata)) + _query_params.append(('metadata', metadata)) if page is not None: _query_params.append(('page', page)) @@ -3027,7 +2960,7 @@ return self.api_client.param_serialize( method='GET', - resource_path='/api/v1/runs', + resource_path='/api/v1/runs/{application_run_id}/results', path_params=_path_params, 
query_params=_query_params, header_params=_header_params, @@ -3044,11 +2977,13 @@ def _list_runs_v1_runs_get_serialize( @validate_call - def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put( + def list_versions_by_application_id_v1_applications_application_id_versions_get( self, - run_id: Annotated[StrictStr, Field(description="The run id, returned by `POST /runs/` endpoint")], - external_id: Annotated[StrictStr, Field(description="The `external_id` that was defined for the item by the customer that triggered the run.")], - custom_metadata_update_request: CustomMetadataUpdateRequest, + application_id: StrictStr, + page: Optional[Annotated[int, Field(strict=True, ge=1)]] = None, + page_size: Optional[Annotated[int, Field(le=100, strict=True, ge=5)]] = None, + version: Annotated[Optional[StrictStr], Field(description="Semantic version of the application, example: `1.0.13`")] = None, + sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_version_id` - `version` - `application_id` - `changelog` - `created_at` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-version` - Sort by version descending - `?sort=+application_id&sort=-created_at` - Sort by application_id ascending, then created_at descending")] = None, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -3061,16 +2996,21 @@ def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_meta _content_type: Optional[StrictStr] = None, _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> object: - """Put Item Custom Metadata By Run + ) -> List[ApplicationVersionReadResponse]: + """List Available Application Versions + Returns a list of available application versions for a specific application. A version is considered available when it has been assigned to your organization. Within a major version, all minor and patch updates are automatically accessible unless a specific version has been deprecated. Major version upgrades, however, require explicit assignment and may be subject to contract modifications before becoming available to your organization. - :param run_id: The run id, returned by `POST /runs/` endpoint (required) - :type run_id: str - :param external_id: The `external_id` that was defined for the item by the customer that triggered the run. (required) - :type external_id: str - :param custom_metadata_update_request: (required) - :type custom_metadata_update_request: CustomMetadataUpdateRequest + :param application_id: (required) + :type application_id: str + :param page: + :type page: int + :param page_size: + :type page_size: int + :param version: Semantic version of the application, example: `1.0.13` + :type version: str + :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_version_id` - `version` - `application_id` - `changelog` - `created_at` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-version` - Sort by version descending - `?sort=+application_id&sort=-created_at` - Sort by application_id ascending, then created_at descending + :type sort: List[str] :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. 
It can also be a pair (tuple) of @@ -3093,10 +3033,12 @@ def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_meta :return: Returns the result object. """ # noqa: E501 - _param = self._put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put_serialize( - run_id=run_id, - external_id=external_id, - custom_metadata_update_request=custom_metadata_update_request, + _param = self._list_versions_by_application_id_v1_applications_application_id_versions_get_serialize( + application_id=application_id, + page=page, + page_size=page_size, + version=version, + sort=sort, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -3104,7 +3046,8 @@ def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_meta ) _response_types_map: Dict[str, Optional[str]] = { - '200': "object", + '200': "List[ApplicationVersionReadResponse]", + '401': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -3119,11 +3062,13 @@ def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_meta @validate_call - def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put_with_http_info( + def list_versions_by_application_id_v1_applications_application_id_versions_get_with_http_info( self, - run_id: Annotated[StrictStr, Field(description="The run id, returned by `POST /runs/` endpoint")], - external_id: Annotated[StrictStr, Field(description="The `external_id` that was defined for the item by the customer that triggered the run.")], - custom_metadata_update_request: CustomMetadataUpdateRequest, + application_id: StrictStr, + page: Optional[Annotated[int, Field(strict=True, ge=1)]] = None, + page_size: Optional[Annotated[int, Field(le=100, strict=True, ge=5)]] = None, + version: Annotated[Optional[StrictStr], Field(description="Semantic version of the application, example: `1.0.13`")] = None, + sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_version_id` - `version` - `application_id` - `changelog` - `created_at` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-version` - Sort by version descending - `?sort=+application_id&sort=-created_at` - Sort by application_id ascending, then created_at descending")] = None, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -3136,16 +3081,21 @@ def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_meta _content_type: Optional[StrictStr] = None, _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> ApiResponse[object]: - """Put Item Custom Metadata By Run + ) -> ApiResponse[List[ApplicationVersionReadResponse]]: + """List Available Application Versions + Returns a list of available application versions for a specific application. A version is considered available when it has been assigned to your organization. Within a major version, all minor and patch updates are automatically accessible unless a specific version has been deprecated. Major version upgrades, however, require explicit assignment and may be subject to contract modifications before becoming available to your organization. 
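A corresponding sketch for the new versions endpoint, using the same assumed `api` wrapper as in the earlier example; the application id is a placeholder:

```python
# List the application versions assigned to the caller's organization,
# newest first; pass e.g. version="1.0.13" instead of None to pin one release.
versions = api.list_versions_by_application_id_v1_applications_application_id_versions_get(
    application_id="some-application-id",  # placeholder
    version=None,
    sort=["-created_at"],
    page=1,
    page_size=10,
)
```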
- :param run_id: The run id, returned by `POST /runs/` endpoint (required) - :type run_id: str - :param external_id: The `external_id` that was defined for the item by the customer that triggered the run. (required) - :type external_id: str - :param custom_metadata_update_request: (required) - :type custom_metadata_update_request: CustomMetadataUpdateRequest + :param application_id: (required) + :type application_id: str + :param page: + :type page: int + :param page_size: + :type page_size: int + :param version: Semantic version of the application, example: `1.0.13` + :type version: str + :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_version_id` - `version` - `application_id` - `changelog` - `created_at` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-version` - Sort by version descending - `?sort=+application_id&sort=-created_at` - Sort by application_id ascending, then created_at descending + :type sort: List[str] :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -3168,10 +3118,12 @@ def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_meta :return: Returns the result object. """ # noqa: E501 - _param = self._put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put_serialize( - run_id=run_id, - external_id=external_id, - custom_metadata_update_request=custom_metadata_update_request, + _param = self._list_versions_by_application_id_v1_applications_application_id_versions_get_serialize( + application_id=application_id, + page=page, + page_size=page_size, + version=version, + sort=sort, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -3179,7 +3131,8 @@ def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_meta ) _response_types_map: Dict[str, Optional[str]] = { - '200': "object", + '200': "List[ApplicationVersionReadResponse]", + '401': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -3194,11 +3147,13 @@ def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_meta @validate_call - def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put_without_preload_content( + def list_versions_by_application_id_v1_applications_application_id_versions_get_without_preload_content( self, - run_id: Annotated[StrictStr, Field(description="The run id, returned by `POST /runs/` endpoint")], - external_id: Annotated[StrictStr, Field(description="The `external_id` that was defined for the item by the customer that triggered the run.")], - custom_metadata_update_request: CustomMetadataUpdateRequest, + application_id: StrictStr, + page: Optional[Annotated[int, Field(strict=True, ge=1)]] = None, + page_size: Optional[Annotated[int, Field(le=100, strict=True, ge=5)]] = None, + version: Annotated[Optional[StrictStr], Field(description="Semantic version of the application, example: `1.0.13`")] = None, + sort: Annotated[Optional[List[StrictStr]], Field(description="Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. 
**Available fields:** - `application_version_id` - `version` - `application_id` - `changelog` - `created_at` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-version` - Sort by version descending - `?sort=+application_id&sort=-created_at` - Sort by application_id ascending, then created_at descending")] = None, _request_timeout: Union[ None, Annotated[StrictFloat, Field(gt=0)], @@ -3212,15 +3167,20 @@ def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_meta _headers: Optional[Dict[StrictStr, Any]] = None, _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, ) -> RESTResponseType: - """Put Item Custom Metadata By Run + """List Available Application Versions + Returns a list of available application versions for a specific application. A version is considered available when it has been assigned to your organization. Within a major version, all minor and patch updates are automatically accessible unless a specific version has been deprecated. Major version upgrades, however, require explicit assignment and may be subject to contract modifications before becoming available to your organization. - :param run_id: The run id, returned by `POST /runs/` endpoint (required) - :type run_id: str - :param external_id: The `external_id` that was defined for the item by the customer that triggered the run. (required) - :type external_id: str - :param custom_metadata_update_request: (required) - :type custom_metadata_update_request: CustomMetadataUpdateRequest + :param application_id: (required) + :type application_id: str + :param page: + :type page: int + :param page_size: + :type page_size: int + :param version: Semantic version of the application, example: `1.0.13` + :type version: str + :param sort: Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_version_id` - `version` - `application_id` - `changelog` - `created_at` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-version` - Sort by version descending - `?sort=+application_id&sort=-created_at` - Sort by application_id ascending, then created_at descending + :type sort: List[str] :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of @@ -3243,10 +3203,12 @@ def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_meta :return: Returns the result object. 
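Since all three listing endpoints added in this diff share the same `page`/`page_size` contract, a small helper can drain any of them. A sketch against the assumed wrapper, not part of the generated client:

```python
def drain(list_page, page_size: int = 100, **filters):
    """Collect every row from one of the paginated list_* methods above.

    Sketch only: relies on the shared page >= 1 and 5 <= page_size <= 100
    bounds declared in the Field annotations, and stops on a short page.
    """
    page, rows = 1, []
    while True:
        batch = list_page(page=page, page_size=page_size, **filters)
        rows.extend(batch)
        if len(batch) < page_size:
            return rows
        page += 1

# e.g. drain(api.list_versions_by_application_id_v1_applications_application_id_versions_get,
#            application_id="some-application-id")
```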
""" # noqa: E501 - _param = self._put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put_serialize( - run_id=run_id, - external_id=external_id, - custom_metadata_update_request=custom_metadata_update_request, + _param = self._list_versions_by_application_id_v1_applications_application_id_versions_get_serialize( + application_id=application_id, + page=page, + page_size=page_size, + version=version, + sort=sort, _request_auth=_request_auth, _content_type=_content_type, _headers=_headers, @@ -3254,7 +3216,8 @@ def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_meta ) _response_types_map: Dict[str, Optional[str]] = { - '200': "object", + '200': "List[ApplicationVersionReadResponse]", + '401': None, '422': "HTTPValidationError", } response_data = self.api_client.call_api( @@ -3264,11 +3227,13 @@ def put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_meta return response_data.response - def _put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put_serialize( + def _list_versions_by_application_id_v1_applications_application_id_versions_get_serialize( self, - run_id, - external_id, - custom_metadata_update_request, + application_id, + page, + page_size, + version, + sort, _request_auth, _content_type, _headers, @@ -3278,6 +3243,7 @@ def _put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_met _host = None _collection_formats: Dict[str, str] = { + 'sort': 'multi', } _path_params: Dict[str, str] = {} @@ -3290,308 +3256,28 @@ def _put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_met _body_params: Optional[bytes] = None # process the path parameters - if run_id is not None: - _path_params['run_id'] = run_id - if external_id is not None: - _path_params['external_id'] = external_id - # process the query parameters - # process the header parameters - # process the form parameters - # process the body parameter - if custom_metadata_update_request is not None: - _body_params = custom_metadata_update_request - - - # set the HTTP header `Accept` - if 'Accept' not in _header_params: - _header_params['Accept'] = self.api_client.select_header_accept( - [ - 'application/json' - ] - ) - - # set the HTTP header `Content-Type` - if _content_type: - _header_params['Content-Type'] = _content_type - else: - _default_content_type = ( - self.api_client.select_header_content_type( - [ - 'application/json' - ] - ) - ) - if _default_content_type is not None: - _header_params['Content-Type'] = _default_content_type - - # authentication setting - _auth_settings: List[str] = [ - 'OAuth2AuthorizationCodeBearer' - ] - - return self.api_client.param_serialize( - method='PUT', - resource_path='/api/v1/runs/{run_id}/items/{external_id}/custom-metadata', - path_params=_path_params, - query_params=_query_params, - header_params=_header_params, - body=_body_params, - post_params=_form_params, - files=_files, - auth_settings=_auth_settings, - collection_formats=_collection_formats, - _host=_host, - _request_auth=_request_auth - ) - - - - - @validate_call - def put_run_custom_metadata_v1_runs_run_id_custom_metadata_put( - self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], - custom_metadata_update_request: CustomMetadataUpdateRequest, - _request_timeout: Union[ - None, - Annotated[StrictFloat, Field(gt=0)], - Tuple[ - Annotated[StrictFloat, Field(gt=0)], - Annotated[StrictFloat, Field(gt=0)] - ] - ] = None, - _request_auth: 
Optional[Dict[StrictStr, Any]] = None, - _content_type: Optional[StrictStr] = None, - _headers: Optional[Dict[StrictStr, Any]] = None, - _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> object: - """Put Run Custom Metadata - - - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str - :param custom_metadata_update_request: (required) - :type custom_metadata_update_request: CustomMetadataUpdateRequest - :param _request_timeout: timeout setting for this request. If one - number provided, it will be total request - timeout. It can also be a pair (tuple) of - (connection, read) timeouts. - :type _request_timeout: int, tuple(int, int), optional - :param _request_auth: set to override the auth_settings for an a single - request; this effectively ignores the - authentication in the spec for a single request. - :type _request_auth: dict, optional - :param _content_type: force content-type for the request. - :type _content_type: str, Optional - :param _headers: set to override the headers for a single - request; this effectively ignores the headers - in the spec for a single request. - :type _headers: dict, optional - :param _host_index: set to override the host_index for a single - request; this effectively ignores the host_index - in the spec for a single request. - :type _host_index: int, optional - :return: Returns the result object. - """ # noqa: E501 - - _param = self._put_run_custom_metadata_v1_runs_run_id_custom_metadata_put_serialize( - run_id=run_id, - custom_metadata_update_request=custom_metadata_update_request, - _request_auth=_request_auth, - _content_type=_content_type, - _headers=_headers, - _host_index=_host_index - ) - - _response_types_map: Dict[str, Optional[str]] = { - '200': "object", - '404': None, - '422': "HTTPValidationError", - } - response_data = self.api_client.call_api( - *_param, - _request_timeout=_request_timeout - ) - response_data.read() - return self.api_client.response_deserialize( - response_data=response_data, - response_types_map=_response_types_map, - ).data - - - @validate_call - def put_run_custom_metadata_v1_runs_run_id_custom_metadata_put_with_http_info( - self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], - custom_metadata_update_request: CustomMetadataUpdateRequest, - _request_timeout: Union[ - None, - Annotated[StrictFloat, Field(gt=0)], - Tuple[ - Annotated[StrictFloat, Field(gt=0)], - Annotated[StrictFloat, Field(gt=0)] - ] - ] = None, - _request_auth: Optional[Dict[StrictStr, Any]] = None, - _content_type: Optional[StrictStr] = None, - _headers: Optional[Dict[StrictStr, Any]] = None, - _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> ApiResponse[object]: - """Put Run Custom Metadata - - - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str - :param custom_metadata_update_request: (required) - :type custom_metadata_update_request: CustomMetadataUpdateRequest - :param _request_timeout: timeout setting for this request. If one - number provided, it will be total request - timeout. It can also be a pair (tuple) of - (connection, read) timeouts. - :type _request_timeout: int, tuple(int, int), optional - :param _request_auth: set to override the auth_settings for an a single - request; this effectively ignores the - authentication in the spec for a single request. - :type _request_auth: dict, optional - :param _content_type: force content-type for the request. 
- :type _content_type: str, Optional - :param _headers: set to override the headers for a single - request; this effectively ignores the headers - in the spec for a single request. - :type _headers: dict, optional - :param _host_index: set to override the host_index for a single - request; this effectively ignores the host_index - in the spec for a single request. - :type _host_index: int, optional - :return: Returns the result object. - """ # noqa: E501 - - _param = self._put_run_custom_metadata_v1_runs_run_id_custom_metadata_put_serialize( - run_id=run_id, - custom_metadata_update_request=custom_metadata_update_request, - _request_auth=_request_auth, - _content_type=_content_type, - _headers=_headers, - _host_index=_host_index - ) - - _response_types_map: Dict[str, Optional[str]] = { - '200': "object", - '404': None, - '422': "HTTPValidationError", - } - response_data = self.api_client.call_api( - *_param, - _request_timeout=_request_timeout - ) - response_data.read() - return self.api_client.response_deserialize( - response_data=response_data, - response_types_map=_response_types_map, - ) - - - @validate_call - def put_run_custom_metadata_v1_runs_run_id_custom_metadata_put_without_preload_content( - self, - run_id: Annotated[StrictStr, Field(description="Run id, returned by `POST /runs/` endpoint")], - custom_metadata_update_request: CustomMetadataUpdateRequest, - _request_timeout: Union[ - None, - Annotated[StrictFloat, Field(gt=0)], - Tuple[ - Annotated[StrictFloat, Field(gt=0)], - Annotated[StrictFloat, Field(gt=0)] - ] - ] = None, - _request_auth: Optional[Dict[StrictStr, Any]] = None, - _content_type: Optional[StrictStr] = None, - _headers: Optional[Dict[StrictStr, Any]] = None, - _host_index: Annotated[StrictInt, Field(ge=0, le=0)] = 0, - ) -> RESTResponseType: - """Put Run Custom Metadata - - - :param run_id: Run id, returned by `POST /runs/` endpoint (required) - :type run_id: str - :param custom_metadata_update_request: (required) - :type custom_metadata_update_request: CustomMetadataUpdateRequest - :param _request_timeout: timeout setting for this request. If one - number provided, it will be total request - timeout. It can also be a pair (tuple) of - (connection, read) timeouts. - :type _request_timeout: int, tuple(int, int), optional - :param _request_auth: set to override the auth_settings for an a single - request; this effectively ignores the - authentication in the spec for a single request. - :type _request_auth: dict, optional - :param _content_type: force content-type for the request. - :type _content_type: str, Optional - :param _headers: set to override the headers for a single - request; this effectively ignores the headers - in the spec for a single request. - :type _headers: dict, optional - :param _host_index: set to override the host_index for a single - request; this effectively ignores the host_index - in the spec for a single request. - :type _host_index: int, optional - :return: Returns the result object. 
- """ # noqa: E501 - - _param = self._put_run_custom_metadata_v1_runs_run_id_custom_metadata_put_serialize( - run_id=run_id, - custom_metadata_update_request=custom_metadata_update_request, - _request_auth=_request_auth, - _content_type=_content_type, - _headers=_headers, - _host_index=_host_index - ) - - _response_types_map: Dict[str, Optional[str]] = { - '200': "object", - '404': None, - '422': "HTTPValidationError", - } - response_data = self.api_client.call_api( - *_param, - _request_timeout=_request_timeout - ) - return response_data.response - - - def _put_run_custom_metadata_v1_runs_run_id_custom_metadata_put_serialize( - self, - run_id, - custom_metadata_update_request, - _request_auth, - _content_type, - _headers, - _host_index, - ) -> RequestSerialized: - - _host = None - - _collection_formats: Dict[str, str] = { - } - - _path_params: Dict[str, str] = {} - _query_params: List[Tuple[str, str]] = [] - _header_params: Dict[str, Optional[str]] = _headers or {} - _form_params: List[Tuple[str, str]] = [] - _files: Dict[ - str, Union[str, bytes, List[str], List[bytes], List[Tuple[str, bytes]]] - ] = {} - _body_params: Optional[bytes] = None - - # process the path parameters - if run_id is not None: - _path_params['run_id'] = run_id + if application_id is not None: + _path_params['application_id'] = application_id # process the query parameters + if page is not None: + + _query_params.append(('page', page)) + + if page_size is not None: + + _query_params.append(('page_size', page_size)) + + if version is not None: + + _query_params.append(('version', version)) + + if sort is not None: + + _query_params.append(('sort', sort)) + # process the header parameters # process the form parameters # process the body parameter - if custom_metadata_update_request is not None: - _body_params = custom_metadata_update_request # set the HTTP header `Accept` @@ -3602,19 +3288,6 @@ def _put_run_custom_metadata_v1_runs_run_id_custom_metadata_put_serialize( ] ) - # set the HTTP header `Content-Type` - if _content_type: - _header_params['Content-Type'] = _content_type - else: - _default_content_type = ( - self.api_client.select_header_content_type( - [ - 'application/json' - ] - ) - ) - if _default_content_type is not None: - _header_params['Content-Type'] = _default_content_type # authentication setting _auth_settings: List[str] = [ @@ -3622,8 +3295,8 @@ def _put_run_custom_metadata_v1_runs_run_id_custom_metadata_put_serialize( ] return self.api_client.param_serialize( - method='PUT', - resource_path='/api/v1/runs/{run_id}/custom-metadata', + method='GET', + resource_path='/api/v1/applications/{application_id}/versions', path_params=_path_params, query_params=_query_params, header_params=_header_params, diff --git a/codegen/out/aignx/codegen/api_client.py b/codegen/out/aignx/codegen/api_client.py index 317c18a9..84b62252 100644 --- a/codegen/out/aignx/codegen/api_client.py +++ b/codegen/out/aignx/codegen/api_client.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. 
More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. diff --git a/codegen/out/aignx/codegen/configuration.py b/codegen/out/aignx/codegen/configuration.py index 66f18632..aeff4ef7 100644 --- a/codegen/out/aignx/codegen/configuration.py +++ b/codegen/out/aignx/codegen/configuration.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -502,7 +502,7 @@ def to_debug_report(self) -> str: return "Python SDK Debug Report:\n"\ "OS: {env}\n"\ "Python Version: {pyversion}\n"\ - "Version of the API: 1.0.0.beta7\n"\ + "Version of the API: 1.0.0-beta6\n"\ "SDK Package Version: 1.0.0".\ format(env=sys.platform, pyversion=sys.version) diff --git a/codegen/out/aignx/codegen/exceptions.py b/codegen/out/aignx/codegen/exceptions.py index 861e13dc..70b89ded 100644 --- a/codegen/out/aignx/codegen/exceptions.py +++ b/codegen/out/aignx/codegen/exceptions.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. 
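Since the OpenAPI document version recorded in the generated client changes here (to `1.0.0-beta6`), `Configuration.to_debug_report()` is a quick way to confirm which spec a locally installed client was generated from:

```python
from aignx.codegen.configuration import Configuration

# Prints OS, Python version, API version (now 1.0.0-beta6) and SDK package version.
print(Configuration().to_debug_report())
```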
Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. diff --git a/codegen/out/aignx/codegen/models/__init__.py b/codegen/out/aignx/codegen/models/__init__.py index 0f80e5c9..43dd75cf 100644 --- a/codegen/out/aignx/codegen/models/__init__.py +++ b/codegen/out/aignx/codegen/models/__init__.py @@ -1,34 +1,30 @@ from .item_result_read_response import * -from .run_output import * -from .artifact_state import * from .validation_error_loc_inner import * -from .run_state import * -from .item_output import * +from .application_version_read_response import * +from .item_status import * from .run_creation_response import * +from .input_artifact_read_response import * from .organization_read_response import * +from .user_payload import * from .validation_error import * from .application_read_response import * -from .application_read_short_response import * from .output_artifact_scope import * +from .item_read_response import * from .me_read_response import * from .input_artifact_creation_request import * from .item_creation_request import * -from .item_state import * -from .auth0_organization import * -from .application_version import * from .http_validation_error import * +from .transfer_urls import * from .user_read_response import * -from .run_termination_reason import * from .input_artifact import * from .output_artifact_result_read_response import * from .version_read_response import * -from .auth0_user import * from .run_read_response import * -from .artifact_termination_reason import * +from .application_run_status import * from .run_creation_request import * -from .item_termination_reason import * -from .run_item_statistics import * +from .payload_output_artifact import * +from .payload_input_artifact import * from .output_artifact_visibility import * -from .custom_metadata_update_request import * -from .artifact_output import * +from .payload_item import * +from .output_artifact_read_response import * from .output_artifact import * diff --git a/codegen/out/aignx/codegen/models/application_read_response.py b/codegen/out/aignx/codegen/models/application_read_response.py index de8d31f4..50eba226 100644 --- a/codegen/out/aignx/codegen/models/application_read_response.py +++ b/codegen/out/aignx/codegen/models/application_read_response.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. 
Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -19,7 +19,6 @@ from pydantic import BaseModel, ConfigDict, Field, StrictStr from typing import Any, ClassVar, Dict, List -from aignx.codegen.models.application_version import ApplicationVersion from typing import Optional, Set from typing_extensions import Self @@ -31,8 +30,7 @@ class ApplicationReadResponse(BaseModel): name: StrictStr = Field(description="Application display name") regulatory_classes: List[StrictStr] = Field(description="Regulatory classes, to which the applications comply with. Possible values include: RUO, IVDR, FDA.") description: StrictStr = Field(description="Describing what the application can do ") - versions: List[ApplicationVersion] = Field(description="All version numbers available to the user") - __properties: ClassVar[List[str]] = ["application_id", "name", "regulatory_classes", "description", "versions"] + __properties: ClassVar[List[str]] = ["application_id", "name", "regulatory_classes", "description"] model_config = ConfigDict( populate_by_name=True, @@ -73,13 +71,6 @@ def to_dict(self) -> Dict[str, Any]: exclude=excluded_fields, exclude_none=True, ) - # override the default output from pydantic by calling `to_dict()` of each item in versions (list) - _items = [] - if self.versions: - for _item_versions in self.versions: - if _item_versions: - _items.append(_item_versions.to_dict()) - _dict['versions'] = _items return _dict @classmethod @@ -95,8 +86,7 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: "application_id": obj.get("application_id"), "name": obj.get("name"), "regulatory_classes": obj.get("regulatory_classes"), - "description": obj.get("description"), - "versions": [ApplicationVersion.from_dict(_item) for _item in obj["versions"]] if obj.get("versions") is not None else None + "description": obj.get("description") }) return _obj diff --git a/codegen/out/aignx/codegen/models/run_termination_reason.py b/codegen/out/aignx/codegen/models/application_run_status.py similarity index 78% rename from codegen/out/aignx/codegen/models/run_termination_reason.py rename to codegen/out/aignx/codegen/models/application_run_status.py index 4fd6006c..5198e17e 100644 --- a/codegen/out/aignx/codegen/models/run_termination_reason.py +++ b/codegen/out/aignx/codegen/models/application_run_status.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). 
**How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -18,21 +18,26 @@ from typing_extensions import Self -class RunTerminationReason(str, Enum): +class ApplicationRunStatus(str, Enum): """ - RunTerminationReason + ApplicationRunStatus """ """ allowed enum values """ - ALL_ITEMS_PROCESSED = 'ALL_ITEMS_PROCESSED' - CANCELED_BY_SYSTEM = 'CANCELED_BY_SYSTEM' - CANCELED_BY_USER = 'CANCELED_BY_USER' + CANCELED_SYSTEM = 'CANCELED_SYSTEM' + CANCELED_USER = 'CANCELED_USER' + COMPLETED = 'COMPLETED' + COMPLETED_WITH_ERROR = 'COMPLETED_WITH_ERROR' + RECEIVED = 'RECEIVED' + REJECTED = 'REJECTED' + RUNNING = 'RUNNING' + SCHEDULED = 'SCHEDULED' @classmethod def from_json(cls, json_str: str) -> Self: - """Create an instance of RunTerminationReason from a JSON string""" + """Create an instance of ApplicationRunStatus from a JSON string""" return cls(json.loads(json_str)) diff --git a/codegen/out/aignx/codegen/models/auth0_organization.py b/codegen/out/aignx/codegen/models/application_version_read_response.py similarity index 51% rename from codegen/out/aignx/codegen/models/auth0_organization.py rename to codegen/out/aignx/codegen/models/application_version_read_response.py index f61d0157..190d1f40 100644 --- a/codegen/out/aignx/codegen/models/auth0_organization.py +++ b/codegen/out/aignx/codegen/models/application_version_read_response.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. 
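The renamed `ApplicationRunStatus` enum folds run state and termination reason into a single value set. A small sketch of a terminal-state check built on the values above; the split between terminal and in-flight states is an assumption inferred from the names, not stated in this diff:

```python
from aignx.codegen.models.application_run_status import ApplicationRunStatus

# Assumed terminal states: the run will not change once it reaches one of these.
TERMINAL_STATUSES = {
    ApplicationRunStatus.COMPLETED,
    ApplicationRunStatus.COMPLETED_WITH_ERROR,
    ApplicationRunStatus.CANCELED_USER,
    ApplicationRunStatus.CANCELED_SYSTEM,
    ApplicationRunStatus.REJECTED,
}

def is_finished(status: ApplicationRunStatus) -> bool:
    """RECEIVED, SCHEDULED and RUNNING are treated as still in flight (assumption)."""
    return status in TERMINAL_STATUSES

print(is_finished(ApplicationRunStatus.from_json('"RUNNING"')))    # False
print(is_finished(ApplicationRunStatus.from_json('"COMPLETED"')))  # True
```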
@@ -17,25 +17,27 @@ import re # noqa: F401 import json +from datetime import datetime from pydantic import BaseModel, ConfigDict, Field, StrictStr from typing import Any, ClassVar, Dict, List, Optional +from aignx.codegen.models.input_artifact_read_response import InputArtifactReadResponse +from aignx.codegen.models.output_artifact_read_response import OutputArtifactReadResponse from typing import Optional, Set from typing_extensions import Self -class Auth0Organization(BaseModel): +class ApplicationVersionReadResponse(BaseModel): """ - Model for Auth0 Organization object returned from Auth0 API. For details, see: https://auth0.com/docs/api/management/v2#!/Organizations/get_organizations_by_id Aignostics-specific metadata fields are extracted from the `metadata` field. + Response schema for `List Available Application Versions` endpoint """ # noqa: E501 - id: StrictStr = Field(description="Unique organization identifier") - name: Optional[StrictStr] = None - display_name: Optional[StrictStr] = None - aignostics_bucket_hmac_access_key_id: StrictStr = Field(description="HMAC access key ID for the Aignostics-provided storage bucket. Used to authenticate requests for uploading files and generating signed URLs") - aignostics_bucket_hmac_secret_access_key: StrictStr = Field(description="HMAC secret access key paired with the access key ID. Keep this credential secure.") - aignostics_bucket_name: StrictStr = Field(description="Name of the bucket provided by Aignostics for storing input artifacts (slide images)") - aignostics_bucket_protocol: StrictStr = Field(description="Protocol to use for bucket access. Defines the URL scheme for connecting to the storage service") - aignostics_logfire_token: StrictStr = Field(description="Authentication token for Logfire observability service. Enables sending application logs and performance metrics to Aignostics for monitoring and support") - aignostics_sentry_dsn: StrictStr = Field(description="Data Source Name (DSN) for Sentry error tracking service. 
Allows automatic reporting of errors and exceptions to Aignostics support team") - __properties: ClassVar[List[str]] = ["id", "name", "display_name", "aignostics_bucket_hmac_access_key_id", "aignostics_bucket_hmac_secret_access_key", "aignostics_bucket_name", "aignostics_bucket_protocol", "aignostics_logfire_token", "aignostics_sentry_dsn"] + application_version_id: StrictStr = Field(description="Application version ID") + version: StrictStr = Field(description="Semantic version of the application") + application_id: StrictStr = Field(description="Application ID") + flow_id: Optional[StrictStr] = None + changelog: StrictStr = Field(description="Description of the changes relative to the previous version") + input_artifacts: List[InputArtifactReadResponse] = Field(description="Lists required input fields, that should be provided by the caller") + output_artifacts: List[OutputArtifactReadResponse] = Field(description="Lists the structure of the output artifacts generated by the application") + created_at: datetime = Field(description="The timestamp when the application version was registered") + __properties: ClassVar[List[str]] = ["application_version_id", "version", "application_id", "flow_id", "changelog", "input_artifacts", "output_artifacts", "created_at"] model_config = ConfigDict( populate_by_name=True, @@ -55,7 +57,7 @@ def to_json(self) -> str: @classmethod def from_json(cls, json_str: str) -> Optional[Self]: - """Create an instance of Auth0Organization from a JSON string""" + """Create an instance of ApplicationVersionReadResponse from a JSON string""" return cls.from_dict(json.loads(json_str)) def to_dict(self) -> Dict[str, Any]: @@ -76,21 +78,30 @@ def to_dict(self) -> Dict[str, Any]: exclude=excluded_fields, exclude_none=True, ) - # set to None if name (nullable) is None + # override the default output from pydantic by calling `to_dict()` of each item in input_artifacts (list) + _items = [] + if self.input_artifacts: + for _item_input_artifacts in self.input_artifacts: + if _item_input_artifacts: + _items.append(_item_input_artifacts.to_dict()) + _dict['input_artifacts'] = _items + # override the default output from pydantic by calling `to_dict()` of each item in output_artifacts (list) + _items = [] + if self.output_artifacts: + for _item_output_artifacts in self.output_artifacts: + if _item_output_artifacts: + _items.append(_item_output_artifacts.to_dict()) + _dict['output_artifacts'] = _items + # set to None if flow_id (nullable) is None # and model_fields_set contains the field - if self.name is None and "name" in self.model_fields_set: - _dict['name'] = None - - # set to None if display_name (nullable) is None - # and model_fields_set contains the field - if self.display_name is None and "display_name" in self.model_fields_set: - _dict['display_name'] = None + if self.flow_id is None and "flow_id" in self.model_fields_set: + _dict['flow_id'] = None return _dict @classmethod def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: - """Create an instance of Auth0Organization from a dict""" + """Create an instance of ApplicationVersionReadResponse from a dict""" if obj is None: return None @@ -98,14 +109,15 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: return cls.model_validate(obj) _obj = cls.model_validate({ - "id": obj.get("id"), - "name": obj.get("name"), - "display_name": obj.get("display_name"), - "aignostics_bucket_hmac_access_key_id": obj.get("aignostics_bucket_hmac_access_key_id"), - "aignostics_bucket_hmac_secret_access_key": 
obj.get("aignostics_bucket_hmac_secret_access_key"), - "aignostics_bucket_name": obj.get("aignostics_bucket_name"), - "aignostics_bucket_protocol": obj.get("aignostics_bucket_protocol"), - "aignostics_logfire_token": obj.get("aignostics_logfire_token"), - "aignostics_sentry_dsn": obj.get("aignostics_sentry_dsn") + "application_version_id": obj.get("application_version_id"), + "version": obj.get("version"), + "application_id": obj.get("application_id"), + "flow_id": obj.get("flow_id"), + "changelog": obj.get("changelog"), + "input_artifacts": [InputArtifactReadResponse.from_dict(_item) for _item in obj["input_artifacts"]] if obj.get("input_artifacts") is not None else None, + "output_artifacts": [OutputArtifactReadResponse.from_dict(_item) for _item in obj["output_artifacts"]] if obj.get("output_artifacts") is not None else None, + "created_at": obj.get("created_at") }) return _obj + + diff --git a/codegen/out/aignx/codegen/models/artifact_output.py b/codegen/out/aignx/codegen/models/artifact_output.py deleted file mode 100644 index b51944d9..00000000 --- a/codegen/out/aignx/codegen/models/artifact_output.py +++ /dev/null @@ -1,39 +0,0 @@ -# coding: utf-8 - -""" - Aignostics Platform API - - The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - - The version of the OpenAPI document: 1.0.0.beta7 - Generated by OpenAPI Generator (https://openapi-generator.tech) - - Do not edit the class manually. -""" # noqa: E501 - - -from __future__ import annotations -import json -from enum import Enum -from typing_extensions import Self - - -class ArtifactOutput(str, Enum): - """ - ArtifactOutput - """ - - """ - allowed enum values - """ - NONE = 'NONE' - AVAILABLE = 'AVAILABLE' - DELETED_BY_USER = 'DELETED_BY_USER' - DELETED_BY_SYSTEM = 'DELETED_BY_SYSTEM' - - @classmethod - def from_json(cls, json_str: str) -> Self: - """Create an instance of ArtifactOutput from a JSON string""" - return cls(json.loads(json_str)) - - diff --git a/codegen/out/aignx/codegen/models/artifact_state.py b/codegen/out/aignx/codegen/models/artifact_state.py deleted file mode 100644 index b1a63c5f..00000000 --- a/codegen/out/aignx/codegen/models/artifact_state.py +++ /dev/null @@ -1,38 +0,0 @@ -# coding: utf-8 - -""" - Aignostics Platform API - - The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. 
The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - - The version of the OpenAPI document: 1.0.0.beta7 - Generated by OpenAPI Generator (https://openapi-generator.tech) - - Do not edit the class manually. -""" # noqa: E501 - - -from __future__ import annotations -import json -from enum import Enum -from typing_extensions import Self - - -class ArtifactState(str, Enum): - """ - ArtifactState - """ - - """ - allowed enum values - """ - PENDING = 'PENDING' - PROCESSING = 'PROCESSING' - TERMINATED = 'TERMINATED' - - @classmethod - def from_json(cls, json_str: str) -> Self: - """Create an instance of ArtifactState from a JSON string""" - return cls(json.loads(json_str)) - - diff --git a/codegen/out/aignx/codegen/models/artifact_termination_reason.py b/codegen/out/aignx/codegen/models/artifact_termination_reason.py deleted file mode 100644 index 8be2863b..00000000 --- a/codegen/out/aignx/codegen/models/artifact_termination_reason.py +++ /dev/null @@ -1,39 +0,0 @@ -# coding: utf-8 - -""" - Aignostics Platform API - - The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - - The version of the OpenAPI document: 1.0.0.beta7 - Generated by OpenAPI Generator (https://openapi-generator.tech) - - Do not edit the class manually. 
-""" # noqa: E501 - - -from __future__ import annotations -import json -from enum import Enum -from typing_extensions import Self - - -class ArtifactTerminationReason(str, Enum): - """ - ArtifactTerminationReason - """ - - """ - allowed enum values - """ - SUCCEEDED = 'SUCCEEDED' - USER_ERROR = 'USER_ERROR' - SYSTEM_ERROR = 'SYSTEM_ERROR' - SKIPPED = 'SKIPPED' - - @classmethod - def from_json(cls, json_str: str) -> Self: - """Create an instance of ArtifactTerminationReason from a JSON string""" - return cls(json.loads(json_str)) - - diff --git a/codegen/out/aignx/codegen/models/auth0_user.py b/codegen/out/aignx/codegen/models/auth0_user.py deleted file mode 100644 index 82780d17..00000000 --- a/codegen/out/aignx/codegen/models/auth0_user.py +++ /dev/null @@ -1,142 +0,0 @@ -# coding: utf-8 - -""" - Aignostics Platform API - - The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - - The version of the OpenAPI document: 1.0.0.beta7 - Generated by OpenAPI Generator (https://openapi-generator.tech) - - Do not edit the class manually. -""" # noqa: E501 - - -from __future__ import annotations -import pprint -import re # noqa: F401 -import json - -from datetime import datetime -from pydantic import BaseModel, ConfigDict, Field, StrictBool, StrictStr -from typing import Any, ClassVar, Dict, List, Optional -from typing import Optional, Set -from typing_extensions import Self - -class Auth0User(BaseModel): - """ - Model for Auth0 User object returned from Auth0 API. 
For details, see: https://auth0.com/docs/api/management/v2/users/get-users-by-id - """ # noqa: E501 - id: StrictStr = Field(description="Unique user identifier") - email: Optional[StrictStr] = None - email_verified: Optional[StrictBool] = None - name: Optional[StrictStr] = None - given_name: Optional[StrictStr] = None - family_name: Optional[StrictStr] = None - nickname: Optional[StrictStr] = None - picture: Optional[StrictStr] = None - updated_at: Optional[datetime] = None - __properties: ClassVar[List[str]] = ["id", "email", "email_verified", "name", "given_name", "family_name", "nickname", "picture", "updated_at"] - - model_config = ConfigDict( - populate_by_name=True, - validate_assignment=True, - protected_namespaces=(), - ) - - - def to_str(self) -> str: - """Returns the string representation of the model using alias""" - return pprint.pformat(self.model_dump(by_alias=True)) - - def to_json(self) -> str: - """Returns the JSON representation of the model using alias""" - # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead - return json.dumps(self.to_dict()) - - @classmethod - def from_json(cls, json_str: str) -> Optional[Self]: - """Create an instance of Auth0User from a JSON string""" - return cls.from_dict(json.loads(json_str)) - - def to_dict(self) -> Dict[str, Any]: - """Return the dictionary representation of the model using alias. - - This has the following differences from calling pydantic's - `self.model_dump(by_alias=True)`: - - * `None` is only added to the output dict for nullable fields that - were set at model initialization. Other fields with value `None` - are ignored. - """ - excluded_fields: Set[str] = set([ - ]) - - _dict = self.model_dump( - by_alias=True, - exclude=excluded_fields, - exclude_none=True, - ) - # set to None if email (nullable) is None - # and model_fields_set contains the field - if self.email is None and "email" in self.model_fields_set: - _dict['email'] = None - - # set to None if email_verified (nullable) is None - # and model_fields_set contains the field - if self.email_verified is None and "email_verified" in self.model_fields_set: - _dict['email_verified'] = None - - # set to None if name (nullable) is None - # and model_fields_set contains the field - if self.name is None and "name" in self.model_fields_set: - _dict['name'] = None - - # set to None if given_name (nullable) is None - # and model_fields_set contains the field - if self.given_name is None and "given_name" in self.model_fields_set: - _dict['given_name'] = None - - # set to None if family_name (nullable) is None - # and model_fields_set contains the field - if self.family_name is None and "family_name" in self.model_fields_set: - _dict['family_name'] = None - - # set to None if nickname (nullable) is None - # and model_fields_set contains the field - if self.nickname is None and "nickname" in self.model_fields_set: - _dict['nickname'] = None - - # set to None if picture (nullable) is None - # and model_fields_set contains the field - if self.picture is None and "picture" in self.model_fields_set: - _dict['picture'] = None - - # set to None if updated_at (nullable) is None - # and model_fields_set contains the field - if self.updated_at is None and "updated_at" in self.model_fields_set: - _dict['updated_at'] = None - - return _dict - - @classmethod - def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: - """Create an instance of Auth0User from a dict""" - if obj is None: - return None - - if not isinstance(obj, dict): - return 
cls.model_validate(obj) - - _obj = cls.model_validate({ - "id": obj.get("id"), - "email": obj.get("email"), - "email_verified": obj.get("email_verified"), - "name": obj.get("name"), - "given_name": obj.get("given_name"), - "family_name": obj.get("family_name"), - "nickname": obj.get("nickname"), - "picture": obj.get("picture"), - "updated_at": obj.get("updated_at") - }) - return _obj diff --git a/codegen/out/aignx/codegen/models/http_validation_error.py b/codegen/out/aignx/codegen/models/http_validation_error.py index 90afde9c..4c326404 100644 --- a/codegen/out/aignx/codegen/models/http_validation_error.py +++ b/codegen/out/aignx/codegen/models/http_validation_error.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. diff --git a/codegen/out/aignx/codegen/models/input_artifact.py b/codegen/out/aignx/codegen/models/input_artifact.py index 39c5fa5a..108920cc 100644 --- a/codegen/out/aignx/codegen/models/input_artifact.py +++ b/codegen/out/aignx/codegen/models/input_artifact.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. 
- The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. diff --git a/codegen/out/aignx/codegen/models/input_artifact_creation_request.py b/codegen/out/aignx/codegen/models/input_artifact_creation_request.py index 7a3717c8..d7614bda 100644 --- a/codegen/out/aignx/codegen/models/input_artifact_creation_request.py +++ b/codegen/out/aignx/codegen/models/input_artifact_creation_request.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. diff --git a/codegen/out/aignx/codegen/models/run_item_statistics.py b/codegen/out/aignx/codegen/models/input_artifact_read_response.py similarity index 62% rename from codegen/out/aignx/codegen/models/run_item_statistics.py rename to codegen/out/aignx/codegen/models/input_artifact_read_response.py index a8812d35..a9a2496d 100644 --- a/codegen/out/aignx/codegen/models/run_item_statistics.py +++ b/codegen/out/aignx/codegen/models/input_artifact_read_response.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. 
- The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -17,23 +17,27 @@ import re # noqa: F401 import json -from pydantic import BaseModel, ConfigDict, Field, StrictInt +from pydantic import BaseModel, ConfigDict, Field, StrictStr, field_validator from typing import Any, ClassVar, Dict, List +from typing_extensions import Annotated from typing import Optional, Set from typing_extensions import Self -class RunItemStatistics(BaseModel): +class InputArtifactReadResponse(BaseModel): """ - RunItemStatistics + InputArtifactReadResponse """ # noqa: E501 - item_count: StrictInt = Field(description="Total number of the items in the run") - item_pending_count: StrictInt = Field(description="The number of items in `PENDING` state") - item_processing_count: StrictInt = Field(description="The number of items in `PROCESSING` state") - item_user_error_count: StrictInt = Field(description="The number of items in `TERMINATED` state, and the item termination reason is `USER_ERROR`") - item_system_error_count: StrictInt = Field(description="The number of items in `TERMINATED` state, and the item termination reason is `SYSTEM_ERROR`") - item_skipped_count: StrictInt = Field(description="The number of items in `TERMINATED` state, and the item termination reason is `SKIPPED`") - item_succeeded_count: StrictInt = Field(description="The number of items in `TERMINATED` state, and the item termination reason is `SUCCEEDED`") - __properties: ClassVar[List[str]] = ["item_count", "item_pending_count", "item_processing_count", "item_user_error_count", "item_system_error_count", "item_skipped_count", "item_succeeded_count"] + name: StrictStr + mime_type: Annotated[str, Field(strict=True)] + metadata_schema: Dict[str, Any] + __properties: ClassVar[List[str]] = ["name", "mime_type", "metadata_schema"] + + @field_validator('mime_type') + def mime_type_validate_regular_expression(cls, value): + """Validates the regular expression""" + if not re.match(r"^\w+\/\w+[-+.|\w+]+\w+$", value): + raise ValueError(r"must validate the regular expression /^\w+\/\w+[-+.|\w+]+\w+$/") + return value model_config = ConfigDict( populate_by_name=True, @@ -53,7 +57,7 @@ def to_json(self) -> str: @classmethod def from_json(cls, json_str: str) -> Optional[Self]: - """Create an instance of RunItemStatistics from a JSON string""" + """Create an instance of InputArtifactReadResponse from a JSON string""" return cls.from_dict(json.loads(json_str)) def to_dict(self) -> Dict[str, Any]: @@ -78,7 +82,7 @@ def to_dict(self) -> Dict[str, Any]: @classmethod def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: - """Create an instance of RunItemStatistics from a dict""" + """Create an instance of InputArtifactReadResponse from a dict""" if obj is None: return None @@ -86,13 +90,9 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: return cls.model_validate(obj) _obj = cls.model_validate({ - "item_count": obj.get("item_count"), - "item_pending_count": obj.get("item_pending_count"), - "item_processing_count": obj.get("item_processing_count"), - "item_user_error_count": obj.get("item_user_error_count"), - "item_system_error_count": obj.get("item_system_error_count"), - "item_skipped_count": obj.get("item_skipped_count"), - "item_succeeded_count": obj.get("item_succeeded_count") + "name": obj.get("name"), + "mime_type": obj.get("mime_type"), + "metadata_schema": 
obj.get("metadata_schema") }) return _obj diff --git a/codegen/out/aignx/codegen/models/item_creation_request.py b/codegen/out/aignx/codegen/models/item_creation_request.py index 619abe3b..fe71d69c 100644 --- a/codegen/out/aignx/codegen/models/item_creation_request.py +++ b/codegen/out/aignx/codegen/models/item_creation_request.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -17,21 +17,20 @@ import re # noqa: F401 import json -from pydantic import BaseModel, ConfigDict, Field +from pydantic import BaseModel, ConfigDict, Field, StrictStr from typing import Any, ClassVar, Dict, List, Optional -from typing_extensions import Annotated from aignx.codegen.models.input_artifact_creation_request import InputArtifactCreationRequest from typing import Optional, Set from typing_extensions import Self class ItemCreationRequest(BaseModel): """ - Individual item (slide) to be processed in a run. + Individual item (slide) to be processed in an application run. """ # noqa: E501 - external_id: Annotated[str, Field(strict=True, max_length=255)] = Field(description="Unique identifier for this item within the run. Used for referencing items. Must be unique across all items in the same run") - custom_metadata: Optional[Dict[str, Any]] = None + reference: StrictStr = Field(description="Unique identifier for this item within the run. Used for referencing results. Must be unique across all items in the same application run") + metadata: Optional[Dict[str, Any]] = None input_artifacts: List[InputArtifactCreationRequest] = Field(description="List of input artifacts for this item. 
For Atlas H&E-TME, typically contains one artifact (the slide image)") - __properties: ClassVar[List[str]] = ["external_id", "custom_metadata", "input_artifacts"] + __properties: ClassVar[List[str]] = ["reference", "metadata", "input_artifacts"] model_config = ConfigDict( populate_by_name=True, @@ -79,10 +78,10 @@ def to_dict(self) -> Dict[str, Any]: if _item_input_artifacts: _items.append(_item_input_artifacts.to_dict()) _dict['input_artifacts'] = _items - # set to None if custom_metadata (nullable) is None + # set to None if metadata (nullable) is None # and model_fields_set contains the field - if self.custom_metadata is None and "custom_metadata" in self.model_fields_set: - _dict['custom_metadata'] = None + if self.metadata is None and "metadata" in self.model_fields_set: + _dict['metadata'] = None return _dict @@ -96,8 +95,8 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: return cls.model_validate(obj) _obj = cls.model_validate({ - "external_id": obj.get("external_id"), - "custom_metadata": obj.get("custom_metadata"), + "reference": obj.get("reference"), + "metadata": obj.get("metadata"), "input_artifacts": [InputArtifactCreationRequest.from_dict(_item) for _item in obj["input_artifacts"]] if obj.get("input_artifacts") is not None else None }) return _obj diff --git a/codegen/out/aignx/codegen/models/item_output.py b/codegen/out/aignx/codegen/models/item_output.py deleted file mode 100644 index 03ced6c7..00000000 --- a/codegen/out/aignx/codegen/models/item_output.py +++ /dev/null @@ -1,37 +0,0 @@ -# coding: utf-8 - -""" - Aignostics Platform API - - The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - - The version of the OpenAPI document: 1.0.0.beta7 - Generated by OpenAPI Generator (https://openapi-generator.tech) - - Do not edit the class manually. 
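For callers migrating across this rename, a minimal sketch of building an item under the new field names, assuming the regenerated `aignx.codegen` package is importable; all values are hypothetical:

```python
# Hedged sketch of the renamed request fields: `reference` replaces
# `external_id` and `metadata` replaces `custom_metadata`.
from aignx.codegen.models.item_creation_request import ItemCreationRequest

item = ItemCreationRequest(
    reference="slide-001",       # was external_id; must be unique per run
    metadata={"stain": "H&E"},   # was custom_metadata; optional free-form dict
    input_artifacts=[],          # real requests carry one artifact per slide
)
print(item.to_json())
```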
-""" # noqa: E501 - - -from __future__ import annotations -import json -from enum import Enum -from typing_extensions import Self - - -class ItemOutput(str, Enum): - """ - ItemOutput - """ - - """ - allowed enum values - """ - NONE = 'NONE' - FULL = 'FULL' - - @classmethod - def from_json(cls, json_str: str) -> Self: - """Create an instance of ItemOutput from a JSON string""" - return cls(json.loads(json_str)) - - diff --git a/codegen/out/aignx/codegen/models/application_read_short_response.py b/codegen/out/aignx/codegen/models/item_read_response.py similarity index 63% rename from codegen/out/aignx/codegen/models/application_read_short_response.py rename to codegen/out/aignx/codegen/models/item_read_response.py index f7505858..00c4b987 100644 --- a/codegen/out/aignx/codegen/models/application_read_short_response.py +++ b/codegen/out/aignx/codegen/models/item_read_response.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -17,22 +17,24 @@ import re # noqa: F401 import json -from pydantic import BaseModel, ConfigDict, Field, StrictStr +from datetime import datetime +from pydantic import BaseModel, ConfigDict, StrictStr from typing import Any, ClassVar, Dict, List, Optional -from aignx.codegen.models.application_version import ApplicationVersion +from aignx.codegen.models.item_status import ItemStatus from typing import Optional, Set from typing_extensions import Self -class ApplicationReadShortResponse(BaseModel): +class ItemReadResponse(BaseModel): """ - Response schema for `List available applications` and `Read Application by Id` endpoints + Response schema for `Get Item` endpoint """ # noqa: E501 - application_id: StrictStr = Field(description="Application ID") - name: StrictStr = Field(description="Application display name") - regulatory_classes: List[StrictStr] = Field(description="Regulatory classes, to which the applications comply with. 
Possible values include: RUO, IVDR, FDA.") - description: StrictStr = Field(description="Describing what the application can do ") - latest_version: Optional[ApplicationVersion] = None - __properties: ClassVar[List[str]] = ["application_id", "name", "regulatory_classes", "description", "latest_version"] + item_id: StrictStr + application_run_id: Optional[StrictStr] = None + reference: StrictStr + status: ItemStatus + message: Optional[StrictStr] = None + terminated_at: Optional[datetime] = None + __properties: ClassVar[List[str]] = ["item_id", "application_run_id", "reference", "status", "message", "terminated_at"] model_config = ConfigDict( populate_by_name=True, @@ -52,7 +54,7 @@ def to_json(self) -> str: @classmethod def from_json(cls, json_str: str) -> Optional[Self]: - """Create an instance of ApplicationReadShortResponse from a JSON string""" + """Create an instance of ItemReadResponse from a JSON string""" return cls.from_dict(json.loads(json_str)) def to_dict(self) -> Dict[str, Any]: @@ -73,19 +75,26 @@ def to_dict(self) -> Dict[str, Any]: exclude=excluded_fields, exclude_none=True, ) - # override the default output from pydantic by calling `to_dict()` of latest_version - if self.latest_version: - _dict['latest_version'] = self.latest_version.to_dict() - # set to None if latest_version (nullable) is None + # set to None if application_run_id (nullable) is None # and model_fields_set contains the field - if self.latest_version is None and "latest_version" in self.model_fields_set: - _dict['latest_version'] = None + if self.application_run_id is None and "application_run_id" in self.model_fields_set: + _dict['application_run_id'] = None + + # set to None if message (nullable) is None + # and model_fields_set contains the field + if self.message is None and "message" in self.model_fields_set: + _dict['message'] = None + + # set to None if terminated_at (nullable) is None + # and model_fields_set contains the field + if self.terminated_at is None and "terminated_at" in self.model_fields_set: + _dict['terminated_at'] = None return _dict @classmethod def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: - """Create an instance of ApplicationReadShortResponse from a dict""" + """Create an instance of ItemReadResponse from a dict""" if obj is None: return None @@ -93,11 +102,12 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: return cls.model_validate(obj) _obj = cls.model_validate({ - "application_id": obj.get("application_id"), - "name": obj.get("name"), - "regulatory_classes": obj.get("regulatory_classes"), - "description": obj.get("description"), - "latest_version": ApplicationVersion.from_dict(obj["latest_version"]) if obj.get("latest_version") is not None else None + "item_id": obj.get("item_id"), + "application_run_id": obj.get("application_run_id"), + "reference": obj.get("reference"), + "status": obj.get("status"), + "message": obj.get("message"), + "terminated_at": obj.get("terminated_at") }) return _obj diff --git a/codegen/out/aignx/codegen/models/item_result_read_response.py b/codegen/out/aignx/codegen/models/item_result_read_response.py index 6077fd0e..6f99c215 100644 --- a/codegen/out/aignx/codegen/models/item_result_read_response.py +++ b/codegen/out/aignx/codegen/models/item_result_read_response.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. 
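A hedged round-trip for the new `Get Item` schema; the payload below is illustrative, not captured from the API:

```python
from aignx.codegen.models.item_read_response import ItemReadResponse

item = ItemReadResponse.from_dict({
    "item_id": "0f7a3c1e-0000-0000-0000-000000000001",         # hypothetical
    "application_run_id": "9b2d4e6f-0000-0000-0000-000000000002",
    "reference": "slide-001",
    "status": "SUCCEEDED",             # coerced into the ItemStatus str enum
    "message": None,
    "terminated_at": "2025-01-01T12:00:00Z",
})
print(item.status, item.terminated_at)
```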
The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -20,29 +20,25 @@ from datetime import datetime from pydantic import BaseModel, ConfigDict, Field, StrictStr from typing import Any, ClassVar, Dict, List, Optional -from aignx.codegen.models.item_output import ItemOutput -from aignx.codegen.models.item_state import ItemState -from aignx.codegen.models.item_termination_reason import ItemTerminationReason +from aignx.codegen.models.item_status import ItemStatus from aignx.codegen.models.output_artifact_result_read_response import OutputArtifactResultReadResponse from typing import Optional, Set from typing_extensions import Self class ItemResultReadResponse(BaseModel): """ - Response schema for items in `List Run Items` endpoint + Response schema for items in `List Run Results` endpoint """ # noqa: E501 item_id: StrictStr = Field(description="Item UUID generated by the Platform") - external_id: StrictStr = Field(description="The external_id of the item from the user payload") - custom_metadata: Optional[Dict[str, Any]] - custom_metadata_checksum: Optional[StrictStr] = None - state: ItemState = Field(description=" The item moves from `PENDING` to `PROCESSING` to `TERMINATED` state. When terminated, consult the `termination_reason` property to see whether it was successful. ") - output: ItemOutput = Field(description="The output status of the item (NONE, FULL)") - termination_reason: Optional[ItemTerminationReason] = None - error_message: Optional[StrictStr] = None + application_run_id: StrictStr = Field(description="Application run UUID to which the item belongs") + reference: StrictStr = Field(description="The reference of the item from the user payload") + metadata: Optional[Dict[str, Any]] + status: ItemStatus = Field(description=" When the item is not processed yet, the status is set to `pending`. When the item is successfully finished, status is set to `succeeded`, and the processing results become available for download in `output_artifacts` field. When the item processing is failed because the provided item is invalid, the status is set to `error_user`. When the item processing failed because of the error in the model or platform, the status is set to `error_system`. When the application_run is canceled, the status of all pending items is set to either `cancelled_user` or `cancelled_system`. 
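To make the new result shape concrete, a hedged consumer sketch follows; note that the description above spells statuses in lowercase (`cancelled_*`) while the `ItemStatus` enum further down defines uppercase `CANCELED_*` values. The `summarize` helper is ours, not part of the SDK:

```python
from aignx.codegen.models.item_result_read_response import ItemResultReadResponse
from aignx.codegen.models.item_status import ItemStatus

def summarize(result: ItemResultReadResponse) -> str:
    # status/error/message replace the old state/termination_reason/
    # error_message/error_code fields.
    if result.status is ItemStatus.SUCCEEDED:
        return f"{result.reference}: {len(result.output_artifacts)} artifact(s) ready"
    return f"{result.reference}: {result.status.value} ({result.error or result.message})"
```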
") + error: Optional[StrictStr] = None + message: Optional[StrictStr] terminated_at: Optional[datetime] = None output_artifacts: List[OutputArtifactResultReadResponse] = Field(description=" The list of the results generated by the application algorithm. The number of files and their types depend on the particular application version, call `/v1/versions/{version_id}` to get the details. ") - error_code: Optional[StrictStr] - __properties: ClassVar[List[str]] = ["item_id", "external_id", "custom_metadata", "custom_metadata_checksum", "state", "output", "termination_reason", "error_message", "terminated_at", "output_artifacts", "error_code"] + __properties: ClassVar[List[str]] = ["item_id", "application_run_id", "reference", "metadata", "status", "error", "message", "terminated_at", "output_artifacts"] model_config = ConfigDict( populate_by_name=True, @@ -90,36 +86,26 @@ def to_dict(self) -> Dict[str, Any]: if _item_output_artifacts: _items.append(_item_output_artifacts.to_dict()) _dict['output_artifacts'] = _items - # set to None if custom_metadata (nullable) is None + # set to None if metadata (nullable) is None # and model_fields_set contains the field - if self.custom_metadata is None and "custom_metadata" in self.model_fields_set: - _dict['custom_metadata'] = None + if self.metadata is None and "metadata" in self.model_fields_set: + _dict['metadata'] = None - # set to None if custom_metadata_checksum (nullable) is None + # set to None if error (nullable) is None # and model_fields_set contains the field - if self.custom_metadata_checksum is None and "custom_metadata_checksum" in self.model_fields_set: - _dict['custom_metadata_checksum'] = None + if self.error is None and "error" in self.model_fields_set: + _dict['error'] = None - # set to None if termination_reason (nullable) is None + # set to None if message (nullable) is None # and model_fields_set contains the field - if self.termination_reason is None and "termination_reason" in self.model_fields_set: - _dict['termination_reason'] = None - - # set to None if error_message (nullable) is None - # and model_fields_set contains the field - if self.error_message is None and "error_message" in self.model_fields_set: - _dict['error_message'] = None + if self.message is None and "message" in self.model_fields_set: + _dict['message'] = None # set to None if terminated_at (nullable) is None # and model_fields_set contains the field if self.terminated_at is None and "terminated_at" in self.model_fields_set: _dict['terminated_at'] = None - # set to None if error_code (nullable) is None - # and model_fields_set contains the field - if self.error_code is None and "error_code" in self.model_fields_set: - _dict['error_code'] = None - return _dict @classmethod @@ -133,16 +119,14 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: _obj = cls.model_validate({ "item_id": obj.get("item_id"), - "external_id": obj.get("external_id"), - "custom_metadata": obj.get("custom_metadata"), - "custom_metadata_checksum": obj.get("custom_metadata_checksum"), - "state": obj.get("state"), - "output": obj.get("output"), - "termination_reason": obj.get("termination_reason"), - "error_message": obj.get("error_message"), + "application_run_id": obj.get("application_run_id"), + "reference": obj.get("reference"), + "metadata": obj.get("metadata"), + "status": obj.get("status"), + "error": obj.get("error"), + "message": obj.get("message"), "terminated_at": obj.get("terminated_at"), - "output_artifacts": [OutputArtifactResultReadResponse.from_dict(_item) 
for _item in obj["output_artifacts"]] if obj.get("output_artifacts") is not None else None, - "error_code": obj.get("error_code") + "output_artifacts": [OutputArtifactResultReadResponse.from_dict(_item) for _item in obj["output_artifacts"]] if obj.get("output_artifacts") is not None else None }) return _obj diff --git a/codegen/out/aignx/codegen/models/item_state.py b/codegen/out/aignx/codegen/models/item_state.py deleted file mode 100644 index 3be7db50..00000000 --- a/codegen/out/aignx/codegen/models/item_state.py +++ /dev/null @@ -1,38 +0,0 @@ -# coding: utf-8 - -""" - Aignostics Platform API - - The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - - The version of the OpenAPI document: 1.0.0.beta7 - Generated by OpenAPI Generator (https://openapi-generator.tech) - - Do not edit the class manually. -""" # noqa: E501 - - -from __future__ import annotations -import json -from enum import Enum -from typing_extensions import Self - - -class ItemState(str, Enum): - """ - ItemState - """ - - """ - allowed enum values - """ - PENDING = 'PENDING' - PROCESSING = 'PROCESSING' - TERMINATED = 'TERMINATED' - - @classmethod - def from_json(cls, json_str: str) -> Self: - """Create an instance of ItemState from a JSON string""" - return cls(json.loads(json_str)) - - diff --git a/codegen/out/aignx/codegen/models/item_termination_reason.py b/codegen/out/aignx/codegen/models/item_status.py similarity index 82% rename from codegen/out/aignx/codegen/models/item_termination_reason.py rename to codegen/out/aignx/codegen/models/item_status.py index fee8ba98..a05a3300 100644 --- a/codegen/out/aignx/codegen/models/item_termination_reason.py +++ b/codegen/out/aignx/codegen/models/item_status.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. 
Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -18,22 +18,24 @@ from typing_extensions import Self -class ItemTerminationReason(str, Enum): +class ItemStatus(str, Enum): """ - ItemTerminationReason + ItemStatus """ """ allowed enum values """ + PENDING = 'PENDING' + CANCELED_USER = 'CANCELED_USER' + CANCELED_SYSTEM = 'CANCELED_SYSTEM' + ERROR_USER = 'ERROR_USER' + ERROR_SYSTEM = 'ERROR_SYSTEM' SUCCEEDED = 'SUCCEEDED' - USER_ERROR = 'USER_ERROR' - SYSTEM_ERROR = 'SYSTEM_ERROR' - SKIPPED = 'SKIPPED' @classmethod def from_json(cls, json_str: str) -> Self: - """Create an instance of ItemTerminationReason from a JSON string""" + """Create an instance of ItemStatus from a JSON string""" return cls(json.loads(json_str)) diff --git a/codegen/out/aignx/codegen/models/me_read_response.py b/codegen/out/aignx/codegen/models/me_read_response.py index c5d15358..01a7dada 100644 --- a/codegen/out/aignx/codegen/models/me_read_response.py +++ b/codegen/out/aignx/codegen/models/me_read_response.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. diff --git a/codegen/out/aignx/codegen/models/organization_read_response.py b/codegen/out/aignx/codegen/models/organization_read_response.py index 6a097cf5..ccdeda69 100644 --- a/codegen/out/aignx/codegen/models/organization_read_response.py +++ b/codegen/out/aignx/codegen/models/organization_read_response.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. 
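Since the single `ItemStatus` enum now folds together the deleted `ItemState` and `ItemTerminationReason`, a small sketch of the resulting terminality test (our reading of the values above, not an SDK helper):

```python
from aignx.codegen.models.item_status import ItemStatus

TERMINAL_STATUSES = {
    ItemStatus.SUCCEEDED,
    ItemStatus.ERROR_USER,
    ItemStatus.ERROR_SYSTEM,
    ItemStatus.CANCELED_USER,
    ItemStatus.CANCELED_SYSTEM,
}

def is_done(status: ItemStatus) -> bool:
    # Only PENDING marks an item still awaiting processing.
    return status in TERMINAL_STATUSES
```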
If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. diff --git a/codegen/out/aignx/codegen/models/output_artifact.py b/codegen/out/aignx/codegen/models/output_artifact.py index a6eb745e..0c01758e 100644 --- a/codegen/out/aignx/codegen/models/output_artifact.py +++ b/codegen/out/aignx/codegen/models/output_artifact.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. diff --git a/codegen/out/aignx/codegen/models/output_artifact_read_response.py b/codegen/out/aignx/codegen/models/output_artifact_read_response.py new file mode 100644 index 00000000..8aeba8c0 --- /dev/null +++ b/codegen/out/aignx/codegen/models/output_artifact_read_response.py @@ -0,0 +1,102 @@ +# coding: utf-8 + +""" + Aignostics Platform API + + The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. 
Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. + + The version of the OpenAPI document: 1.0.0-beta6 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, Field, StrictStr, field_validator +from typing import Any, ClassVar, Dict, List +from typing_extensions import Annotated +from aignx.codegen.models.output_artifact_scope import OutputArtifactScope +from typing import Optional, Set +from typing_extensions import Self + +class OutputArtifactReadResponse(BaseModel): + """ + OutputArtifactReadResponse + """ # noqa: E501 + name: StrictStr + mime_type: Annotated[str, Field(strict=True)] + metadata_schema: Dict[str, Any] + scope: OutputArtifactScope + __properties: ClassVar[List[str]] = ["name", "mime_type", "metadata_schema", "scope"] + + @field_validator('mime_type') + def mime_type_validate_regular_expression(cls, value): + """Validates the regular expression""" + if not re.match(r"^\w+\/\w+[-+.|\w+]+\w+$", value): + raise ValueError(r"must validate the regular expression /^\w+\/\w+[-+.|\w+]+\w+$/") + return value + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of OutputArtifactReadResponse from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
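The `mime_type` validator above reduces to a plain regular-expression check; a stdlib-only demonstration with made-up candidates:

```python
import re

MIME_RE = r"^\w+\/\w+[-+.|\w+]+\w+$"  # pattern from the generated validator

for candidate in ("image/tiff", "application/geo+json", "not-a-mime"):
    print(candidate, bool(re.match(MIME_RE, candidate)))
# image/tiff True, application/geo+json True, not-a-mime False
```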
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of OutputArtifactReadResponse from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "name": obj.get("name"), + "mime_type": obj.get("mime_type"), + "metadata_schema": obj.get("metadata_schema"), + "scope": obj.get("scope") + }) + return _obj + + diff --git a/codegen/out/aignx/codegen/models/output_artifact_result_read_response.py b/codegen/out/aignx/codegen/models/output_artifact_result_read_response.py index 1ccd0c57..d971af05 100644 --- a/codegen/out/aignx/codegen/models/output_artifact_result_read_response.py +++ b/codegen/out/aignx/codegen/models/output_artifact_result_read_response.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -20,9 +20,6 @@ from pydantic import BaseModel, ConfigDict, Field, StrictStr from typing import Any, ClassVar, Dict, List, Optional from typing_extensions import Annotated -from aignx.codegen.models.artifact_output import ArtifactOutput -from aignx.codegen.models.artifact_state import ArtifactState -from aignx.codegen.models.artifact_termination_reason import ArtifactTerminationReason from typing import Optional, Set from typing_extensions import Self @@ -32,14 +29,9 @@ class OutputArtifactResultReadResponse(BaseModel): """ # noqa: E501 output_artifact_id: StrictStr = Field(description="The Id of the artifact. Used internally") name: StrictStr = Field(description=" Name of the output from the output schema from the `/v1/versions/{version_id}` endpoint. 
") - metadata: Optional[Dict[str, Any]] = None - state: ArtifactState = Field(description="The current state of the artifact (PENDING, PROCESSING, TERMINATED)") - termination_reason: Optional[ArtifactTerminationReason] = None - output: ArtifactOutput = Field(description="The output status of the artifact (NONE, FULL)") - error_message: Optional[StrictStr] = None + metadata: Dict[str, Any] = Field(description="The metadata of the output artifact, provided by the application") download_url: Optional[Annotated[str, Field(min_length=1, strict=True, max_length=2083)]] - error_code: Optional[StrictStr] - __properties: ClassVar[List[str]] = ["output_artifact_id", "name", "metadata", "state", "termination_reason", "output", "error_message", "download_url", "error_code"] + __properties: ClassVar[List[str]] = ["output_artifact_id", "name", "metadata", "download_url"] model_config = ConfigDict( populate_by_name=True, @@ -80,31 +72,11 @@ def to_dict(self) -> Dict[str, Any]: exclude=excluded_fields, exclude_none=True, ) - # set to None if metadata (nullable) is None - # and model_fields_set contains the field - if self.metadata is None and "metadata" in self.model_fields_set: - _dict['metadata'] = None - - # set to None if termination_reason (nullable) is None - # and model_fields_set contains the field - if self.termination_reason is None and "termination_reason" in self.model_fields_set: - _dict['termination_reason'] = None - - # set to None if error_message (nullable) is None - # and model_fields_set contains the field - if self.error_message is None and "error_message" in self.model_fields_set: - _dict['error_message'] = None - # set to None if download_url (nullable) is None # and model_fields_set contains the field if self.download_url is None and "download_url" in self.model_fields_set: _dict['download_url'] = None - # set to None if error_code (nullable) is None - # and model_fields_set contains the field - if self.error_code is None and "error_code" in self.model_fields_set: - _dict['error_code'] = None - return _dict @classmethod @@ -120,12 +92,7 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: "output_artifact_id": obj.get("output_artifact_id"), "name": obj.get("name"), "metadata": obj.get("metadata"), - "state": obj.get("state"), - "termination_reason": obj.get("termination_reason"), - "output": obj.get("output"), - "error_message": obj.get("error_message"), - "download_url": obj.get("download_url"), - "error_code": obj.get("error_code") + "download_url": obj.get("download_url") }) return _obj diff --git a/codegen/out/aignx/codegen/models/output_artifact_scope.py b/codegen/out/aignx/codegen/models/output_artifact_scope.py index 0428e0c8..4ebfd703 100644 --- a/codegen/out/aignx/codegen/models/output_artifact_scope.py +++ b/codegen/out/aignx/codegen/models/output_artifact_scope.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). 
**How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. diff --git a/codegen/out/aignx/codegen/models/output_artifact_visibility.py b/codegen/out/aignx/codegen/models/output_artifact_visibility.py index 94ecf8c6..37f96998 100644 --- a/codegen/out/aignx/codegen/models/output_artifact_visibility.py +++ b/codegen/out/aignx/codegen/models/output_artifact_visibility.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. diff --git a/codegen/out/aignx/codegen/models/custom_metadata_update_request.py b/codegen/out/aignx/codegen/models/payload_input_artifact.py similarity index 72% rename from codegen/out/aignx/codegen/models/custom_metadata_update_request.py rename to codegen/out/aignx/codegen/models/payload_input_artifact.py index 65e03027..d6d82585 100644 --- a/codegen/out/aignx/codegen/models/custom_metadata_update_request.py +++ b/codegen/out/aignx/codegen/models/payload_input_artifact.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. 
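With the per-artifact state machine removed, a result artifact reduces to name, metadata, and a nullable `download_url`. A hedged stdlib download helper (the null check matters precisely because the URL stays nullable):

```python
import urllib.request
from typing import Optional

from aignx.codegen.models.output_artifact_result_read_response import (
    OutputArtifactResultReadResponse,
)

def fetch(artifact: OutputArtifactResultReadResponse) -> Optional[bytes]:
    if artifact.download_url is None:  # URL may be absent, e.g. not yet issued
        return None
    with urllib.request.urlopen(artifact.download_url) as resp:
        return resp.read()
```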
Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -17,18 +17,20 @@ import re # noqa: F401 import json -from pydantic import BaseModel, ConfigDict, StrictStr +from pydantic import BaseModel, ConfigDict, Field, StrictStr from typing import Any, ClassVar, Dict, List, Optional +from typing_extensions import Annotated from typing import Optional, Set from typing_extensions import Self -class CustomMetadataUpdateRequest(BaseModel): +class PayloadInputArtifact(BaseModel): """ - CustomMetadataUpdateRequest + PayloadInputArtifact """ # noqa: E501 - custom_metadata: Optional[Dict[str, Any]] = None - custom_metadata_checksum: Optional[StrictStr] = None - __properties: ClassVar[List[str]] = ["custom_metadata", "custom_metadata_checksum"] + input_artifact_id: Optional[StrictStr] = None + metadata: Dict[str, Any] + download_url: Annotated[str, Field(min_length=1, strict=True)] + __properties: ClassVar[List[str]] = ["input_artifact_id", "metadata", "download_url"] model_config = ConfigDict( populate_by_name=True, @@ -48,7 +50,7 @@ def to_json(self) -> str: @classmethod def from_json(cls, json_str: str) -> Optional[Self]: - """Create an instance of CustomMetadataUpdateRequest from a JSON string""" + """Create an instance of PayloadInputArtifact from a JSON string""" return cls.from_dict(json.loads(json_str)) def to_dict(self) -> Dict[str, Any]: @@ -69,21 +71,11 @@ def to_dict(self) -> Dict[str, Any]: exclude=excluded_fields, exclude_none=True, ) - # set to None if custom_metadata (nullable) is None - # and model_fields_set contains the field - if self.custom_metadata is None and "custom_metadata" in self.model_fields_set: - _dict['custom_metadata'] = None - - # set to None if custom_metadata_checksum (nullable) is None - # and model_fields_set contains the field - if self.custom_metadata_checksum is None and "custom_metadata_checksum" in self.model_fields_set: - _dict['custom_metadata_checksum'] = None - return _dict @classmethod def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: - """Create an instance of CustomMetadataUpdateRequest from a dict""" + """Create an instance of PayloadInputArtifact from a dict""" if obj is None: return None @@ -91,8 +83,9 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: return cls.model_validate(obj) _obj = cls.model_validate({ - "custom_metadata": obj.get("custom_metadata"), - "custom_metadata_checksum": obj.get("custom_metadata_checksum") + "input_artifact_id": obj.get("input_artifact_id"), + "metadata": obj.get("metadata"), + "download_url": obj.get("download_url") }) return _obj diff --git a/codegen/out/aignx/codegen/models/payload_item.py b/codegen/out/aignx/codegen/models/payload_item.py new file mode 100644 index 00000000..33d2c021 --- /dev/null +++ b/codegen/out/aignx/codegen/models/payload_item.py @@ -0,0 +1,117 @@ +# coding: utf-8 + +""" + Aignostics Platform API + + The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. 
The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. + + The version of the OpenAPI document: 1.0.0-beta6 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictStr +from typing import Any, ClassVar, Dict, List +from aignx.codegen.models.payload_input_artifact import PayloadInputArtifact +from aignx.codegen.models.payload_output_artifact import PayloadOutputArtifact +from typing import Optional, Set +from typing_extensions import Self + +class PayloadItem(BaseModel): + """ + PayloadItem + """ # noqa: E501 + item_id: StrictStr + input_artifacts: Dict[str, PayloadInputArtifact] + output_artifacts: Dict[str, PayloadOutputArtifact] + __properties: ClassVar[List[str]] = ["item_id", "input_artifacts", "output_artifacts"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of PayloadItem from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
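A hedged sketch of deserializing the new `PayloadItem`, with a hypothetical payload; the `TransferUrls` fields are not shown in this diff, so output artifacts are left empty here:

```python
from aignx.codegen.models.payload_item import PayloadItem

payload = PayloadItem.from_dict({
    "item_id": "item-123",  # hypothetical identifier
    "input_artifacts": {
        "slide": {
            "metadata": {"width": 100000, "height": 80000},
            "download_url": "https://example.com/signed/slide",
        },
    },
    "output_artifacts": {},
})
print(payload.input_artifacts["slide"].download_url)
```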
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each value in input_artifacts (dict) + _field_dict = {} + if self.input_artifacts: + for _key_input_artifacts in self.input_artifacts: + if self.input_artifacts[_key_input_artifacts]: + _field_dict[_key_input_artifacts] = self.input_artifacts[_key_input_artifacts].to_dict() + _dict['input_artifacts'] = _field_dict + # override the default output from pydantic by calling `to_dict()` of each value in output_artifacts (dict) + _field_dict = {} + if self.output_artifacts: + for _key_output_artifacts in self.output_artifacts: + if self.output_artifacts[_key_output_artifacts]: + _field_dict[_key_output_artifacts] = self.output_artifacts[_key_output_artifacts].to_dict() + _dict['output_artifacts'] = _field_dict + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of PayloadItem from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "item_id": obj.get("item_id"), + "input_artifacts": dict( + (_k, PayloadInputArtifact.from_dict(_v)) + for _k, _v in obj["input_artifacts"].items() + ) + if obj.get("input_artifacts") is not None + else None, + "output_artifacts": dict( + (_k, PayloadOutputArtifact.from_dict(_v)) + for _k, _v in obj["output_artifacts"].items() + ) + if obj.get("output_artifacts") is not None + else None + }) + return _obj + + diff --git a/codegen/out/aignx/codegen/models/payload_output_artifact.py b/codegen/out/aignx/codegen/models/payload_output_artifact.py new file mode 100644 index 00000000..f03a3161 --- /dev/null +++ b/codegen/out/aignx/codegen/models/payload_output_artifact.py @@ -0,0 +1,98 @@ +# coding: utf-8 + +""" + Aignostics Platform API + + The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. + + The version of the OpenAPI document: 1.0.0-beta6 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. 
+""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictStr +from typing import Any, ClassVar, Dict, List +from aignx.codegen.models.transfer_urls import TransferUrls +from typing import Optional, Set +from typing_extensions import Self + +class PayloadOutputArtifact(BaseModel): + """ + PayloadOutputArtifact + """ # noqa: E501 + output_artifact_id: StrictStr + data: TransferUrls + metadata: TransferUrls + __properties: ClassVar[List[str]] = ["output_artifact_id", "data", "metadata"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of PayloadOutputArtifact from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. + """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of data + if self.data: + _dict['data'] = self.data.to_dict() + # override the default output from pydantic by calling `to_dict()` of metadata + if self.metadata: + _dict['metadata'] = self.metadata.to_dict() + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of PayloadOutputArtifact from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "output_artifact_id": obj.get("output_artifact_id"), + "data": TransferUrls.from_dict(obj["data"]) if obj.get("data") is not None else None, + "metadata": TransferUrls.from_dict(obj["metadata"]) if obj.get("metadata") is not None else None + }) + return _obj + + diff --git a/codegen/out/aignx/codegen/models/run_creation_request.py b/codegen/out/aignx/codegen/models/run_creation_request.py index 605e222a..cdd45bda 100644 --- a/codegen/out/aignx/codegen/models/run_creation_request.py +++ b/codegen/out/aignx/codegen/models/run_creation_request.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. 
More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -26,13 +26,12 @@ class RunCreationRequest(BaseModel): """ - Request schema for `Initiate Run` endpoint. It describes which application version is chosen, and which user data should be processed. + Request schema for `Initiate Application Run` endpoint. It describes which application version is chosen, and which user data should be processed. """ # noqa: E501 - application_id: StrictStr = Field(description="Unique ID for the application to use for processing") - version_number: Optional[StrictStr] = None - custom_metadata: Optional[Dict[str, Any]] = None + application_version_id: StrictStr = Field(description="Unique ID for the application version to use for processing. Must include version suffix (e.g., 'he-tme:v1.0.0-beta')") + metadata: Optional[Dict[str, Any]] = None items: Annotated[List[ItemCreationRequest], Field(min_length=1)] = Field(description="List of items (slides) to process. Each item represents a whole slide image (WSI) with its associated metadata and artifacts") - __properties: ClassVar[List[str]] = ["application_id", "version_number", "custom_metadata", "items"] + __properties: ClassVar[List[str]] = ["application_version_id", "metadata", "items"] model_config = ConfigDict( populate_by_name=True, @@ -80,15 +79,10 @@ def to_dict(self) -> Dict[str, Any]: if _item_items: _items.append(_item_items.to_dict()) _dict['items'] = _items - # set to None if version_number (nullable) is None + # set to None if metadata (nullable) is None # and model_fields_set contains the field - if self.version_number is None and "version_number" in self.model_fields_set: - _dict['version_number'] = None - - # set to None if custom_metadata (nullable) is None - # and model_fields_set contains the field - if self.custom_metadata is None and "custom_metadata" in self.model_fields_set: - _dict['custom_metadata'] = None + if self.metadata is None and "metadata" in self.model_fields_set: + _dict['metadata'] = None return _dict @@ -102,9 +96,8 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: return cls.model_validate(obj) _obj = cls.model_validate({ - "application_id": obj.get("application_id"), - "version_number": obj.get("version_number"), - "custom_metadata": obj.get("custom_metadata"), + "application_version_id": obj.get("application_version_id"), + "metadata": obj.get("metadata"), "items": [ItemCreationRequest.from_dict(_item) for _item in obj["items"]] if obj.get("items") is not None else None }) return _obj diff --git a/codegen/out/aignx/codegen/models/run_creation_response.py b/codegen/out/aignx/codegen/models/run_creation_response.py index 89c6d72c..a02b25d2 100644 --- a/codegen/out/aignx/codegen/models/run_creation_response.py +++ b/codegen/out/aignx/codegen/models/run_creation_response.py @@ -5,7 +5,7 @@ The Aignostics Platform is a 
cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -26,8 +26,8 @@ class RunCreationResponse(BaseModel): """ RunCreationResponse """ # noqa: E501 - run_id: Optional[StrictStr] = 'Run id' - __properties: ClassVar[List[str]] = ["run_id"] + application_run_id: Optional[StrictStr] = 'Application run id' + __properties: ClassVar[List[str]] = ["application_run_id"] model_config = ConfigDict( populate_by_name=True, @@ -80,7 +80,7 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: return cls.model_validate(obj) _obj = cls.model_validate({ - "run_id": obj.get("run_id") if obj.get("run_id") is not None else 'Run id' + "application_run_id": obj.get("application_run_id") if obj.get("application_run_id") is not None else 'Application run id' }) return _obj diff --git a/codegen/out/aignx/codegen/models/run_output.py b/codegen/out/aignx/codegen/models/run_output.py deleted file mode 100644 index 46d466a4..00000000 --- a/codegen/out/aignx/codegen/models/run_output.py +++ /dev/null @@ -1,38 +0,0 @@ -# coding: utf-8 - -""" - Aignostics Platform API - - The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - - The version of the OpenAPI document: 1.0.0.beta7 - Generated by OpenAPI Generator (https://openapi-generator.tech) - - Do not edit the class manually. 
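Putting the request-side renames together: `application_version_id` supplants the old `application_id` plus `version_number` pair, and the response key becomes `application_run_id`. A hedged construction with hypothetical values (the version string follows the example given in the field description above):

```python
from aignx.codegen.models.item_creation_request import ItemCreationRequest
from aignx.codegen.models.run_creation_request import RunCreationRequest

request = RunCreationRequest(
    application_version_id="he-tme:v1.0.0-beta",  # version suffix is required
    metadata=None,
    items=[ItemCreationRequest(reference="slide-001", input_artifacts=[])],
)
print(request.to_json())
```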
-""" # noqa: E501 - - -from __future__ import annotations -import json -from enum import Enum -from typing_extensions import Self - - -class RunOutput(str, Enum): - """ - RunOutput - """ - - """ - allowed enum values - """ - NONE = 'NONE' - PARTIAL = 'PARTIAL' - FULL = 'FULL' - - @classmethod - def from_json(cls, json_str: str) -> Self: - """Create an instance of RunOutput from a JSON string""" - return cls(json.loads(json_str)) - - diff --git a/codegen/out/aignx/codegen/models/run_read_response.py b/codegen/out/aignx/codegen/models/run_read_response.py index 723288ac..84c30a62 100644 --- a/codegen/out/aignx/codegen/models/run_read_response.py +++ b/codegen/out/aignx/codegen/models/run_read_response.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -20,10 +20,8 @@ from datetime import datetime from pydantic import BaseModel, ConfigDict, Field, StrictStr from typing import Any, ClassVar, Dict, List, Optional -from aignx.codegen.models.run_item_statistics import RunItemStatistics -from aignx.codegen.models.run_output import RunOutput -from aignx.codegen.models.run_state import RunState -from aignx.codegen.models.run_termination_reason import RunTerminationReason +from aignx.codegen.models.application_run_status import ApplicationRunStatus +from aignx.codegen.models.user_payload import UserPayload from typing import Optional, Set from typing_extensions import Self @@ -31,21 +29,17 @@ class RunReadResponse(BaseModel): """ Response schema for `Get run details` endpoint """ # noqa: E501 - run_id: StrictStr = Field(description="UUID of the application") - application_id: StrictStr = Field(description="Application id") - version_number: StrictStr = Field(description="Application version number") - state: RunState = Field(description="When the run request is received by the Platform, the `state` of it is set to `PENDING`. The state changes to `PROCESSING` when at least one item is being processed. After `PROCESSING`, the state of the run can switch back to `PENDING` if there are no processing items, or to `TERMINATED` when the run finished processing.") - output: RunOutput = Field(description="The status of the output of the run. 
When 0 items are successfully processed the output is `NONE`, after one item is successfully processed, the value is set to `PARTIAL`. When all items of the run are successfully processed, the output is set to `FULL`.") - termination_reason: Optional[RunTerminationReason] - error_code: Optional[StrictStr] - error_message: Optional[StrictStr] - statistics: RunItemStatistics = Field(description="Aggregated statistics of the run execution") - custom_metadata: Optional[Dict[str, Any]] = None - custom_metadata_checksum: Optional[StrictStr] = None - submitted_at: datetime = Field(description="Timestamp showing when the run was triggered") - submitted_by: StrictStr = Field(description="Id of the user who triggered the run") + application_run_id: StrictStr = Field(description="UUID of the application") + application_version_id: StrictStr = Field(description="ID of the application version") + organization_id: StrictStr = Field(description="Organization of the owner of the application run") + user_payload: Optional[UserPayload] = None + status: ApplicationRunStatus = Field(description="When the application run request is received by the Platform, the `status` of it is set to `running`. When the application run is scheduled, the input items will be processed and the result will be generated incrementally. The results can be downloaded via `/v1/runs/{run_id}/results` endpoint. When all items are processed and all results are generated, the application status is set to `completed`. If the processing is done, but some items fail, the status is set to `completed_with_error`. When the application run reaches the threshold of number of failed items, the whole application run is set to `canceled_system` and the remaining pending items are not processed. When the application run fails, the finished item results are available for download. 
If the application run is canceled by calling `POST /v1/runs/{run_id}/cancel` endpoint, the processing of the items is stopped, and the application status is set to `cancelled_user` ") + message: Optional[StrictStr] + metadata: Optional[Dict[str, Any]] = None + triggered_at: datetime = Field(description="Timestamp showing when the application run was triggered") + triggered_by: StrictStr = Field(description="Id of the user who triggered the application run") terminated_at: Optional[datetime] = None - __properties: ClassVar[List[str]] = ["run_id", "application_id", "version_number", "state", "output", "termination_reason", "error_code", "error_message", "statistics", "custom_metadata", "custom_metadata_checksum", "submitted_at", "submitted_by", "terminated_at"] + __properties: ClassVar[List[str]] = ["application_run_id", "application_version_id", "organization_id", "user_payload", "status", "message", "metadata", "triggered_at", "triggered_by", "terminated_at"] model_config = ConfigDict( populate_by_name=True, @@ -86,33 +80,23 @@ def to_dict(self) -> Dict[str, Any]: exclude=excluded_fields, exclude_none=True, ) - # override the default output from pydantic by calling `to_dict()` of statistics - if self.statistics: - _dict['statistics'] = self.statistics.to_dict() - # set to None if termination_reason (nullable) is None + # override the default output from pydantic by calling `to_dict()` of user_payload + if self.user_payload: + _dict['user_payload'] = self.user_payload.to_dict() + # set to None if user_payload (nullable) is None # and model_fields_set contains the field - if self.termination_reason is None and "termination_reason" in self.model_fields_set: - _dict['termination_reason'] = None + if self.user_payload is None and "user_payload" in self.model_fields_set: + _dict['user_payload'] = None - # set to None if error_code (nullable) is None + # set to None if message (nullable) is None # and model_fields_set contains the field - if self.error_code is None and "error_code" in self.model_fields_set: - _dict['error_code'] = None + if self.message is None and "message" in self.model_fields_set: + _dict['message'] = None - # set to None if error_message (nullable) is None + # set to None if metadata (nullable) is None # and model_fields_set contains the field - if self.error_message is None and "error_message" in self.model_fields_set: - _dict['error_message'] = None - - # set to None if custom_metadata (nullable) is None - # and model_fields_set contains the field - if self.custom_metadata is None and "custom_metadata" in self.model_fields_set: - _dict['custom_metadata'] = None - - # set to None if custom_metadata_checksum (nullable) is None - # and model_fields_set contains the field - if self.custom_metadata_checksum is None and "custom_metadata_checksum" in self.model_fields_set: - _dict['custom_metadata_checksum'] = None + if self.metadata is None and "metadata" in self.model_fields_set: + _dict['metadata'] = None # set to None if terminated_at (nullable) is None # and model_fields_set contains the field @@ -131,19 +115,15 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: return cls.model_validate(obj) _obj = cls.model_validate({ - "run_id": obj.get("run_id"), - "application_id": obj.get("application_id"), - "version_number": obj.get("version_number"), - "state": obj.get("state"), - "output": obj.get("output"), - "termination_reason": obj.get("termination_reason"), - "error_code": obj.get("error_code"), - "error_message": obj.get("error_message"), - "statistics": 
RunItemStatistics.from_dict(obj["statistics"]) if obj.get("statistics") is not None else None, - "custom_metadata": obj.get("custom_metadata"), - "custom_metadata_checksum": obj.get("custom_metadata_checksum"), - "submitted_at": obj.get("submitted_at"), - "submitted_by": obj.get("submitted_by"), + "application_run_id": obj.get("application_run_id"), + "application_version_id": obj.get("application_version_id"), + "organization_id": obj.get("organization_id"), + "user_payload": UserPayload.from_dict(obj["user_payload"]) if obj.get("user_payload") is not None else None, + "status": obj.get("status"), + "message": obj.get("message"), + "metadata": obj.get("metadata"), + "triggered_at": obj.get("triggered_at"), + "triggered_by": obj.get("triggered_by"), "terminated_at": obj.get("terminated_at") }) return _obj diff --git a/codegen/out/aignx/codegen/models/run_state.py b/codegen/out/aignx/codegen/models/run_state.py deleted file mode 100644 index 5881053b..00000000 --- a/codegen/out/aignx/codegen/models/run_state.py +++ /dev/null @@ -1,38 +0,0 @@ -# coding: utf-8 - -""" - Aignostics Platform API - - The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - - The version of the OpenAPI document: 1.0.0.beta7 - Generated by OpenAPI Generator (https://openapi-generator.tech) - - Do not edit the class manually. -""" # noqa: E501 - - -from __future__ import annotations -import json -from enum import Enum -from typing_extensions import Self - - -class RunState(str, Enum): - """ - RunState - """ - - """ - allowed enum values - """ - PENDING = 'PENDING' - PROCESSING = 'PROCESSING' - TERMINATED = 'TERMINATED' - - @classmethod - def from_json(cls, json_str: str) -> Self: - """Create an instance of RunState from a JSON string""" - return cls(json.loads(json_str)) - - diff --git a/codegen/out/aignx/codegen/models/application_version.py b/codegen/out/aignx/codegen/models/transfer_urls.py similarity index 81% rename from codegen/out/aignx/codegen/models/application_version.py rename to codegen/out/aignx/codegen/models/transfer_urls.py index 10668a1f..a29ebd30 100644 --- a/codegen/out/aignx/codegen/models/application_version.py +++ b/codegen/out/aignx/codegen/models/transfer_urls.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. 
The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -17,19 +17,19 @@ import re # noqa: F401 import json -from datetime import datetime -from pydantic import BaseModel, ConfigDict, Field, StrictStr +from pydantic import BaseModel, ConfigDict, Field from typing import Any, ClassVar, Dict, List +from typing_extensions import Annotated from typing import Optional, Set from typing_extensions import Self -class ApplicationVersion(BaseModel): +class TransferUrls(BaseModel): """ - ApplicationVersion + TransferUrls """ # noqa: E501 - number: StrictStr = Field(description="The number of the latest version") - released_at: datetime = Field(description="The timestamp for when the application version was made available in the Platform") - __properties: ClassVar[List[str]] = ["number", "released_at"] + upload_url: Annotated[str, Field(min_length=1, strict=True)] + download_url: Annotated[str, Field(min_length=1, strict=True)] + __properties: ClassVar[List[str]] = ["upload_url", "download_url"] model_config = ConfigDict( populate_by_name=True, @@ -49,7 +49,7 @@ def to_json(self) -> str: @classmethod def from_json(cls, json_str: str) -> Optional[Self]: - """Create an instance of ApplicationVersion from a JSON string""" + """Create an instance of TransferUrls from a JSON string""" return cls.from_dict(json.loads(json_str)) def to_dict(self) -> Dict[str, Any]: @@ -74,7 +74,7 @@ def to_dict(self) -> Dict[str, Any]: @classmethod def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: - """Create an instance of ApplicationVersion from a dict""" + """Create an instance of TransferUrls from a dict""" if obj is None: return None @@ -82,8 +82,8 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: return cls.model_validate(obj) _obj = cls.model_validate({ - "number": obj.get("number"), - "released_at": obj.get("released_at") + "upload_url": obj.get("upload_url"), + "download_url": obj.get("download_url") }) return _obj diff --git a/codegen/out/aignx/codegen/models/user_payload.py b/codegen/out/aignx/codegen/models/user_payload.py new file mode 100644 index 00000000..48300f62 --- /dev/null +++ b/codegen/out/aignx/codegen/models/user_payload.py @@ -0,0 +1,119 @@ +# coding: utf-8 + +""" + Aignostics Platform API + + The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. 
The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. + + The version of the OpenAPI document: 1.0.0-beta6 + Generated by OpenAPI Generator (https://openapi-generator.tech) + + Do not edit the class manually. +""" # noqa: E501 + + +from __future__ import annotations +import pprint +import re # noqa: F401 +import json + +from pydantic import BaseModel, ConfigDict, StrictStr +from typing import Any, ClassVar, Dict, List, Optional +from aignx.codegen.models.payload_item import PayloadItem +from aignx.codegen.models.payload_output_artifact import PayloadOutputArtifact +from typing import Optional, Set +from typing_extensions import Self + +class UserPayload(BaseModel): + """ + UserPayload + """ # noqa: E501 + application_id: StrictStr + application_run_id: StrictStr + global_output_artifacts: Optional[Dict[str, PayloadOutputArtifact]] + items: List[PayloadItem] + __properties: ClassVar[List[str]] = ["application_id", "application_run_id", "global_output_artifacts", "items"] + + model_config = ConfigDict( + populate_by_name=True, + validate_assignment=True, + protected_namespaces=(), + ) + + + def to_str(self) -> str: + """Returns the string representation of the model using alias""" + return pprint.pformat(self.model_dump(by_alias=True)) + + def to_json(self) -> str: + """Returns the JSON representation of the model using alias""" + # TODO: pydantic v2: use .model_dump_json(by_alias=True, exclude_unset=True) instead + return json.dumps(self.to_dict()) + + @classmethod + def from_json(cls, json_str: str) -> Optional[Self]: + """Create an instance of UserPayload from a JSON string""" + return cls.from_dict(json.loads(json_str)) + + def to_dict(self) -> Dict[str, Any]: + """Return the dictionary representation of the model using alias. + + This has the following differences from calling pydantic's + `self.model_dump(by_alias=True)`: + + * `None` is only added to the output dict for nullable fields that + were set at model initialization. Other fields with value `None` + are ignored. 
+ """ + excluded_fields: Set[str] = set([ + ]) + + _dict = self.model_dump( + by_alias=True, + exclude=excluded_fields, + exclude_none=True, + ) + # override the default output from pydantic by calling `to_dict()` of each value in global_output_artifacts (dict) + _field_dict = {} + if self.global_output_artifacts: + for _key_global_output_artifacts in self.global_output_artifacts: + if self.global_output_artifacts[_key_global_output_artifacts]: + _field_dict[_key_global_output_artifacts] = self.global_output_artifacts[_key_global_output_artifacts].to_dict() + _dict['global_output_artifacts'] = _field_dict + # override the default output from pydantic by calling `to_dict()` of each item in items (list) + _items = [] + if self.items: + for _item_items in self.items: + if _item_items: + _items.append(_item_items.to_dict()) + _dict['items'] = _items + # set to None if global_output_artifacts (nullable) is None + # and model_fields_set contains the field + if self.global_output_artifacts is None and "global_output_artifacts" in self.model_fields_set: + _dict['global_output_artifacts'] = None + + return _dict + + @classmethod + def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: + """Create an instance of UserPayload from a dict""" + if obj is None: + return None + + if not isinstance(obj, dict): + return cls.model_validate(obj) + + _obj = cls.model_validate({ + "application_id": obj.get("application_id"), + "application_run_id": obj.get("application_run_id"), + "global_output_artifacts": dict( + (_k, PayloadOutputArtifact.from_dict(_v)) + for _k, _v in obj["global_output_artifacts"].items() + ) + if obj.get("global_output_artifacts") is not None + else None, + "items": [PayloadItem.from_dict(_item) for _item in obj["items"]] if obj.get("items") is not None else None + }) + return _obj + + diff --git a/codegen/out/aignx/codegen/models/user_read_response.py b/codegen/out/aignx/codegen/models/user_read_response.py index 8926ff6c..e772a3da 100644 --- a/codegen/out/aignx/codegen/models/user_read_response.py +++ b/codegen/out/aignx/codegen/models/user_read_response.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. 
diff --git a/codegen/out/aignx/codegen/models/validation_error.py b/codegen/out/aignx/codegen/models/validation_error.py index 15ea51b3..2a82c6fc 100644 --- a/codegen/out/aignx/codegen/models/validation_error.py +++ b/codegen/out/aignx/codegen/models/validation_error.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. diff --git a/codegen/out/aignx/codegen/models/validation_error_loc_inner.py b/codegen/out/aignx/codegen/models/validation_error_loc_inner.py index 2f752037..9631f2c9 100644 --- a/codegen/out/aignx/codegen/models/validation_error_loc_inner.py +++ b/codegen/out/aignx/codegen/models/validation_error_loc_inner.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. 
diff --git a/codegen/out/aignx/codegen/models/version_read_response.py b/codegen/out/aignx/codegen/models/version_read_response.py index f122822a..5e922f89 100644 --- a/codegen/out/aignx/codegen/models/version_read_response.py +++ b/codegen/out/aignx/codegen/models/version_read_response.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. @@ -19,7 +19,7 @@ from datetime import datetime from pydantic import BaseModel, ConfigDict, Field, StrictStr -from typing import Any, ClassVar, Dict, List +from typing import Any, ClassVar, Dict, List, Optional from aignx.codegen.models.input_artifact import InputArtifact from aignx.codegen.models.output_artifact import OutputArtifact from typing import Optional, Set @@ -27,14 +27,17 @@ class VersionReadResponse(BaseModel): """ - Base Response schema for the `Application Version Details` endpoint + Response schema for `Application Version Details` endpoint """ # noqa: E501 - version_number: StrictStr = Field(description="Semantic version of the application") + application_version_id: StrictStr = Field(description="Application version ID") + version: StrictStr = Field(description="Semantic version of the application") + application_id: StrictStr = Field(description="Application ID") + flow_id: Optional[StrictStr] = None changelog: StrictStr = Field(description="Description of the changes relative to the previous version") input_artifacts: List[InputArtifact] = Field(description="List of the input fields, provided by the User") output_artifacts: List[OutputArtifact] = Field(description="List of the output fields, generated by the application") - released_at: datetime = Field(description="The timestamp when the application version was registered") - __properties: ClassVar[List[str]] = ["version_number", "changelog", "input_artifacts", "output_artifacts", "released_at"] + created_at: datetime = Field(description="The timestamp when the application version was registered") + __properties: ClassVar[List[str]] = ["application_version_id", "version", "application_id", "flow_id", "changelog", "input_artifacts", "output_artifacts", "created_at"] model_config = ConfigDict( populate_by_name=True, @@ -89,6 +92,11 @@ def to_dict(self) -> Dict[str, Any]: if _item_output_artifacts: _items.append(_item_output_artifacts.to_dict()) 
_dict['output_artifacts'] = _items + # set to None if flow_id (nullable) is None + # and model_fields_set contains the field + if self.flow_id is None and "flow_id" in self.model_fields_set: + _dict['flow_id'] = None + return _dict @classmethod @@ -101,11 +109,14 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: return cls.model_validate(obj) _obj = cls.model_validate({ - "version_number": obj.get("version_number"), + "application_version_id": obj.get("application_version_id"), + "version": obj.get("version"), + "application_id": obj.get("application_id"), + "flow_id": obj.get("flow_id"), "changelog": obj.get("changelog"), "input_artifacts": [InputArtifact.from_dict(_item) for _item in obj["input_artifacts"]] if obj.get("input_artifacts") is not None else None, "output_artifacts": [OutputArtifact.from_dict(_item) for _item in obj["output_artifacts"]] if obj.get("output_artifacts") is not None else None, - "released_at": obj.get("released_at") + "created_at": obj.get("created_at") }) return _obj diff --git a/codegen/out/aignx/codegen/rest.py b/codegen/out/aignx/codegen/rest.py index 88f1ea20..b1af27c9 100644 --- a/codegen/out/aignx/codegen/rest.py +++ b/codegen/out/aignx/codegen/rest.py @@ -5,7 +5,7 @@ The Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. To begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. More information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com). **How to authorize and test API endpoints:** 1. Click the \"Authorize\" button in the right corner below 3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials 4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint **Note**: You only need to authorize once per session. The lock icons next to endpoints will show green when authorized. - The version of the OpenAPI document: 1.0.0.beta7 + The version of the OpenAPI document: 1.0.0-beta6 Generated by OpenAPI Generator (https://openapi-generator.tech) Do not edit the class manually. 
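For context, the reshaped `VersionReadResponse` parses as sketched below; all values are hypothetical, while the field names are taken from the diff above:

```python
# Sketch only: the new VersionReadResponse shape after this change.
from aignx.codegen.models.version_read_response import VersionReadResponse

version = VersionReadResponse.from_dict({
    "application_version_id": "he-tme:v1.0.0-beta",  # new composite id
    "version": "1.0.0-beta",               # replaces version_number
    "application_id": "he-tme",
    "flow_id": None,                       # new nullable field
    "changelog": "Hypothetical changelog entry",
    "input_artifacts": [],                 # InputArtifact entries omitted
    "output_artifacts": [],                # OutputArtifact entries omitted
    "created_at": "2025-01-01T00:00:00Z",  # replaces released_at
})
print(version.application_version_id, version.created_at)
```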
diff --git a/codegen/out/docs/PublicApi.md b/codegen/out/docs/PublicApi.md index 751a2681..d742b75e 100644 --- a/codegen/out/docs/PublicApi.md +++ b/codegen/out/docs/PublicApi.md @@ -4,23 +4,22 @@ All URIs are relative to */api* Method | HTTP request | Description ------------- | ------------- | ------------- -[**application_version_details_v1_applications_application_id_versions_version_get**](PublicApi.md#application_version_details_v1_applications_application_id_versions_version_get) | **GET** /v1/applications/{application_id}/versions/{version} | Application Version Details -[**cancel_run_v1_runs_run_id_cancel_post**](PublicApi.md#cancel_run_v1_runs_run_id_cancel_post) | **POST** /v1/runs/{run_id}/cancel | Cancel Run -[**create_run_v1_runs_post**](PublicApi.md#create_run_v1_runs_post) | **POST** /v1/runs | Initiate Run -[**delete_run_items_v1_runs_run_id_artifacts_delete**](PublicApi.md#delete_run_items_v1_runs_run_id_artifacts_delete) | **DELETE** /v1/runs/{run_id}/artifacts | Delete Run Items -[**get_item_by_run_v1_runs_run_id_items_external_id_get**](PublicApi.md#get_item_by_run_v1_runs_run_id_items_external_id_get) | **GET** /v1/runs/{run_id}/items/{external_id} | Get Item By Run +[**application_version_details_v1_versions_application_version_id_get**](PublicApi.md#application_version_details_v1_versions_application_version_id_get) | **GET** /v1/versions/{application_version_id} | Application Version Details +[**cancel_application_run_v1_runs_application_run_id_cancel_post**](PublicApi.md#cancel_application_run_v1_runs_application_run_id_cancel_post) | **POST** /v1/runs/{application_run_id}/cancel | Cancel Application Run +[**create_application_run_v1_runs_post**](PublicApi.md#create_application_run_v1_runs_post) | **POST** /v1/runs | Initiate Application Run +[**delete_application_run_results_v1_runs_application_run_id_results_delete**](PublicApi.md#delete_application_run_results_v1_runs_application_run_id_results_delete) | **DELETE** /v1/runs/{application_run_id}/results | Delete Application Run Results +[**get_item_v1_items_item_id_get**](PublicApi.md#get_item_v1_items_item_id_get) | **GET** /v1/items/{item_id} | Get Item [**get_me_v1_me_get**](PublicApi.md#get_me_v1_me_get) | **GET** /v1/me | Get current user -[**get_run_v1_runs_run_id_get**](PublicApi.md#get_run_v1_runs_run_id_get) | **GET** /v1/runs/{run_id} | Get run details +[**get_run_v1_runs_application_run_id_get**](PublicApi.md#get_run_v1_runs_application_run_id_get) | **GET** /v1/runs/{application_run_id} | Get run details +[**list_application_runs_v1_runs_get**](PublicApi.md#list_application_runs_v1_runs_get) | **GET** /v1/runs | List Application Runs [**list_applications_v1_applications_get**](PublicApi.md#list_applications_v1_applications_get) | **GET** /v1/applications | List available applications -[**list_run_items_v1_runs_run_id_items_get**](PublicApi.md#list_run_items_v1_runs_run_id_items_get) | **GET** /v1/runs/{run_id}/items | List Run Items -[**list_runs_v1_runs_get**](PublicApi.md#list_runs_v1_runs_get) | **GET** /v1/runs | List Runs -[**put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put**](PublicApi.md#put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put) | **PUT** /v1/runs/{run_id}/items/{external_id}/custom-metadata | Put Item Custom Metadata By Run -[**put_run_custom_metadata_v1_runs_run_id_custom_metadata_put**](PublicApi.md#put_run_custom_metadata_v1_runs_run_id_custom_metadata_put) | **PUT** /v1/runs/{run_id}/custom-metadata | Put Run 
Custom Metadata +[**list_run_results_v1_runs_application_run_id_results_get**](PublicApi.md#list_run_results_v1_runs_application_run_id_results_get) | **GET** /v1/runs/{application_run_id}/results | List Run Results +[**list_versions_by_application_id_v1_applications_application_id_versions_get**](PublicApi.md#list_versions_by_application_id_v1_applications_application_id_versions_get) | **GET** /v1/applications/{application_id}/versions | List Available Application Versions [**read_application_by_id_v1_applications_application_id_get**](PublicApi.md#read_application_by_id_v1_applications_application_id_get) | **GET** /v1/applications/{application_id} | Read Application By Id -# **application_version_details_v1_applications_application_id_versions_version_get** -> VersionReadResponse application_version_details_v1_applications_application_id_versions_version_get(application_id, version) +# **application_version_details_v1_versions_application_version_id_get** +> VersionReadResponse application_version_details_v1_versions_application_version_id_get(application_version_id) Application Version Details @@ -53,16 +52,15 @@ configuration.access_token = os.environ["ACCESS_TOKEN"] with aignx.codegen.ApiClient(configuration) as api_client: # Create an instance of the API class api_instance = aignx.codegen.PublicApi(api_client) - application_id = 'application_id_example' # str | - version = 'version_example' # str | + application_version_id = 'application_version_id_example' # str | try: # Application Version Details - api_response = api_instance.application_version_details_v1_applications_application_id_versions_version_get(application_id, version) - print("The response of PublicApi->application_version_details_v1_applications_application_id_versions_version_get:\n") + api_response = api_instance.application_version_details_v1_versions_application_version_id_get(application_version_id) + print("The response of PublicApi->application_version_details_v1_versions_application_version_id_get:\n") pprint(api_response) except Exception as e: - print("Exception when calling PublicApi->application_version_details_v1_applications_application_id_versions_version_get: %s\n" % e) + print("Exception when calling PublicApi->application_version_details_v1_versions_application_version_id_get: %s\n" % e) ``` @@ -72,8 +70,7 @@ with aignx.codegen.ApiClient(configuration) as api_client: Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- - **application_id** | **str**| | - **version** | **str**| | + **application_version_id** | **str**| | ### Return type @@ -94,17 +91,17 @@ Name | Type | Description | Notes |-------------|-------------|------------------| **200** | Successful Response | - | **403** | Forbidden - You don't have permission to see this version | - | -**404** | Not Found - Application version with given ID is not available to you or does not exist | - | +**404** | Not Found - Application version with given ID does not exist | - | **422** | Validation Error | - | [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) -# **cancel_run_v1_runs_run_id_cancel_post** -> object cancel_run_v1_runs_run_id_cancel_post(run_id) +# **cancel_application_run_v1_runs_application_run_id_cancel_post** +> cancel_application_run_v1_runs_application_run_id_cancel_post(application_run_id) -Cancel Run +Cancel Application Run -The run can be canceled by the user 
who created the run. The execution can be canceled any time while the application is not in a final state. The pending items will not be processed and will not add to the cost. When the application is canceled, the already completed items stay available for download. +The application run can be canceled by the user who created the application run. The execution can be canceled any time while the application is not in a final state. The pending items will not be processed and will not add to the cost. When the application is canceled, the already completed items stay available for download. ### Example @@ -132,15 +129,13 @@ configuration.access_token = os.environ["ACCESS_TOKEN"] with aignx.codegen.ApiClient(configuration) as api_client: # Create an instance of the API class api_instance = aignx.codegen.PublicApi(api_client) - run_id = 'run_id_example' # str | Run id, returned by `POST /runs/` endpoint + application_run_id = 'application_run_id_example' # str | Application run id, returned by `POST /runs/` endpoint try: - # Cancel Run - api_response = api_instance.cancel_run_v1_runs_run_id_cancel_post(run_id) - print("The response of PublicApi->cancel_run_v1_runs_run_id_cancel_post:\n") - pprint(api_response) + # Cancel Application Run + api_instance.cancel_application_run_v1_runs_application_run_id_cancel_post(application_run_id) except Exception as e: - print("Exception when calling PublicApi->cancel_run_v1_runs_run_id_cancel_post: %s\n" % e) + print("Exception when calling PublicApi->cancel_application_run_v1_runs_application_run_id_cancel_post: %s\n" % e) ``` @@ -150,11 +145,11 @@ with aignx.codegen.ApiClient(configuration) as api_client: Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- - **run_id** | **str**| Run id, returned by `POST /runs/` endpoint | + **application_run_id** | **str**| Application run id, returned by `POST /runs/` endpoint | ### Return type -**object** +void (empty response body) ### Authorization @@ -169,20 +164,19 @@ Name | Type | Description | Notes | Status code | Description | Response headers | |-------------|-------------|------------------| -**202** | Successful Response | - | +**202** | Run cancelled successfully | - | **404** | Run not found | - | **403** | Forbidden - You don't have permission to cancel this run | - | -**409** | Conflict - The Run is already cancelled | - | **422** | Validation Error | - | [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) -# **create_run_v1_runs_post** -> RunCreationResponse create_run_v1_runs_post(run_creation_request) +# **create_application_run_v1_runs_post** +> RunCreationResponse create_application_run_v1_runs_post(run_creation_request) -Initiate Run +Initiate Application Run -This endpoint initiates a processing run for a selected application and version, and returns a `run_id` for tracking purposes. Slide processing occurs asynchronously, allowing you to retrieve results for individual slides as soon as they complete processing. The system typically processes slides in batches of four, though this number may be reduced during periods of high demand. Below is an example of the required payload for initiating an Atlas H&E TME processing run. ### Payload The payload includes `application_id`, optional `version_number`, and `items` base fields. `application_id` is the unique identifier for the application. 
`version_number` is the semantic version to use. If not provided, the latest available version will be used. `items` includes the list of the items to process (slides, in case of HETA application). Every item has a set of standard fields defined by the API, plus the custom_metadata, specific to the chosen application. Example payload structure with the comments: ``` { application_id: \"he-tme\", version_number: \"1.0.0-beta\", items: [{ \"external_id\": \"slide_1\", \"input_artifacts\": [{ \"name\": \"user_slide\", \"download_url\": \"https://...\", \"custom_metadata\": { \"specimen\": { \"disease\": \"LUNG_CANCER\", \"tissue\": \"LUNG\" }, \"staining_method\": \"H&E\", \"width_px\": 136223, \"height_px\": 87761, \"resolution_mpp\": 0.2628238, \"media-type\":\"image/tiff\", \"checksum_base64_crc32c\": \"64RKKA==\" } }] }] } ``` | Parameter | Description | | :---- | :---- | | `application_id` required | Unique ID for the application | | `version_number` optional | Semantic version of the application. If not provided, the latest available version will be used | | `items` required | List of submitted items (WSIs) with parameters described below. | | `external_id` required | Unique WSI name or ID for easy reference to items, provided by the caller. The external_id should be unique across all items of the run. | | `input_artifacts` required | List of provided artifacts for a WSI; at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide itself), but for some other applications this can be a slide and an segmentation map | | `name` required | Type of artifact; Atlas H&E-TME supports only `\"input_slide\"` | | `download_url` required | Signed URL to the input file in the S3 or GCS; Should be valid for at least 6 days | | `specimen: disease` required | Supported cancer types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `specimen: tissue` required | Supported tissue types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `staining_method` required | WSI stain /bio-marker; Atlas H&E-TME supports only `\"H&E\"` | | `width_px` required | Integer value. Number of pixels of the WSI in the X dimension. | | `height_px` required | Integer value. Number of pixels of the WSI in the Y dimension. | | `resolution_mpp` required | Resolution of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual | | `media-type` required | Supported media formats; available values are: image/tiff (for .tiff or .tif WSI) application/dicom (for DICOM ) application/zip (for zipped DICOM) application/octet-stream (for .svs WSI) | | `checksum_base64_crc32c` required | Base64 encoded big-endian CRC32C checksum of the WSI image | ### Response The endpoint returns the run UUID. After that the job is scheduled for the execution in the background. To check the status of the run call `v1/runs/{run_id}`. ### Rejection Apart from the authentication, authorization and malformed input error, the request can be rejected when the quota limit is exceeded. More details on quotas is described in the documentation +This endpoint initiates a processing run for a selected application version and returns an `application_run_id` for tracking purposes. Slide processing occurs asynchronously, allowing you to retrieve results for individual slides as soon as they complete processing. The system typically processes slides in batches of four, though this number may be reduced during periods of high demand. 
Below is an example of the required payload for initiating an Atlas H&E TME processing run. ### Payload The payload includes `application_version_id` and `items` base fields. `application_version_id` is the id used for the `/v1/versions/{application_version_id}` endpoint. `items` includes the list of items to process (slides, in the case of the HETA application). Every item has a set of standard fields defined by the API, plus the metadata specific to the chosen application. Example payload structure: ``` { application_version_id: \"he-tme:v1.0.0-beta\", items: [{ \"reference\": \"slide_1\", \"input_artifacts\": [{ \"name\": \"user_slide\", \"download_url\": \"https://...\", \"metadata\": { \"specimen\": { \"disease\": \"LUNG_CANCER\", \"tissue\": \"LUNG\" }, \"staining_method\": \"H&E\", \"width_px\": 136223, \"height_px\": 87761, \"resolution_mpp\": 0.2628238, \"media-type\":\"image/tiff\", \"checksum_base64_crc32c\": \"64RKKA==\" } }] }] } ``` | Parameter | Description | | :---- | :---- | | `application_version_id` required | Unique ID for the application (must include version) | | `items` required | List of submitted items (WSIs) with parameters described below. | | `reference` required | Unique WSI name or ID for easy reference to results, provided by the caller. The reference should be unique across all items of the application run. | | `input_artifacts` required | List of provided artifacts for a WSI; at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide itself), but for some other applications this can be a slide and a segmentation map | | `name` required | Type of artifact; Atlas H&E-TME supports only `\"input_slide\"` | | `download_url` required | Signed URL to the input file in S3 or GCS; should be valid for at least 6 days | | `specimen: disease` required | Supported cancer types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `specimen: tissue` required | Supported tissue types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) | | `staining_method` required | WSI stain/bio-marker; Atlas H&E-TME supports only `\"H&E\"` | | `width_px` required | Integer value. Number of pixels of the WSI in the X dimension. | | `height_px` required | Integer value. Number of pixels of the WSI in the Y dimension. | | `resolution_mpp` required | Resolution of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual | | `media-type` required | Supported media formats; available values are: image/tiff (for .tiff or .tif WSI), application/dicom (for DICOM), application/zip (for zipped DICOM), application/octet-stream (for .svs WSI) | | `checksum_base64_crc32c` required | Base64 encoded big-endian CRC32C checksum of the WSI image | ### Response The endpoint returns the application run UUID. After that the job is scheduled for execution in the background. To check the status of the run, call `/v1/runs/{application_run_id}`. ### Rejection Apart from authentication, authorization, and malformed input errors, the request can be rejected when the quota limit is exceeded.
More details on quotas are described in the documentation. ### Example @@ -215,12 +209,12 @@ with aignx.codegen.ApiClient(configuration) as api_client: run_creation_request = aignx.codegen.RunCreationRequest() # RunCreationRequest | try: - # Initiate Run - api_response = api_instance.create_run_v1_runs_post(run_creation_request) - print("The response of PublicApi->create_run_v1_runs_post:\n") + # Initiate Application Run + api_response = api_instance.create_application_run_v1_runs_post(run_creation_request) + print("The response of PublicApi->create_application_run_v1_runs_post:\n") pprint(api_response) except Exception as e: - print("Exception when calling PublicApi->create_run_v1_runs_post: %s\n" % e) + print("Exception when calling PublicApi->create_application_run_v1_runs_post: %s\n" % e) ``` @@ -257,12 +251,12 @@ Name | Type | Description | Notes [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) -# **delete_run_items_v1_runs_run_id_artifacts_delete** -> object delete_run_items_v1_runs_run_id_artifacts_delete(run_id) +# **delete_application_run_results_v1_runs_application_run_id_results_delete** +> delete_application_run_results_v1_runs_application_run_id_results_delete(application_run_id) -Delete Run Items +Delete Application Run Results -This endpoint allows the caller to explicitly delete artifacts generated by a run. It can only be invoked when the run has reached a final state (PROCESSED, CANCELED_SYSTEM, CANCELED_USER). Note that by default, all artifacts are automatically deleted 30 days after the run finishes, regardless of whether the caller explicitly requests deletion. +This endpoint allows the caller to explicitly delete outputs generated by an application. It can only be invoked when the application run has reached a final state (COMPLETED, COMPLETED_WITH_ERROR, CANCELED_USER, or CANCELED_SYSTEM). Note that by default, all outputs are automatically deleted 30 days after the application run finishes, regardless of whether the caller explicitly requests deletion.
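To illustrate the final-state constraint described above, a hedged sketch; the method names are taken verbatim from this diff, while the status literals mirror the wording in the `RunReadResponse.status` description and should be verified against the generated `ApplicationRunStatus` enum, which is not shown here:

```python
# Sketch only: delete run results once the run has reached a final state.
import aignx.codegen

# Literals as spelled in the status description above; note the source mixes
# "canceled" and "cancelled" -- confirm against ApplicationRunStatus.
FINAL_STATES = {"completed", "completed_with_error", "canceled_system", "cancelled_user"}

def delete_results_if_finished(api_client, application_run_id: str) -> bool:
    api = aignx.codegen.PublicApi(api_client)
    run = api.get_run_v1_runs_application_run_id_get(application_run_id)
    if run.status.value not in FINAL_STATES:
        return False  # not final yet; the DELETE would be rejected
    api.delete_application_run_results_v1_runs_application_run_id_results_delete(application_run_id)
    return True
```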
### Example @@ -290,15 +284,13 @@ configuration.access_token = os.environ["ACCESS_TOKEN"] with aignx.codegen.ApiClient(configuration) as api_client: # Create an instance of the API class api_instance = aignx.codegen.PublicApi(api_client) - run_id = 'run_id_example' # str | Run id, returned by `POST /runs/` endpoint + application_run_id = 'application_run_id_example' # str | Application run id, returned by `POST /runs/` endpoint try: - # Delete Run Items - api_response = api_instance.delete_run_items_v1_runs_run_id_artifacts_delete(run_id) - print("The response of PublicApi->delete_run_items_v1_runs_run_id_artifacts_delete:\n") - pprint(api_response) + # Delete Application Run Results + api_instance.delete_application_run_results_v1_runs_application_run_id_results_delete(application_run_id) except Exception as e: - print("Exception when calling PublicApi->delete_run_items_v1_runs_run_id_artifacts_delete: %s\n" % e) + print("Exception when calling PublicApi->delete_application_run_results_v1_runs_application_run_id_results_delete: %s\n" % e) ``` @@ -308,11 +300,11 @@ with aignx.codegen.ApiClient(configuration) as api_client: Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- - **run_id** | **str**| Run id, returned by `POST /runs/` endpoint | + **application_run_id** | **str**| Application run id, returned by `POST /runs/` endpoint | ### Return type -**object** +void (empty response body) ### Authorization @@ -327,18 +319,18 @@ Name | Type | Description | Notes | Status code | Description | Response headers | |-------------|-------------|------------------| -**200** | Run artifacts deleted | - | -**404** | Run not found | - | +**204** | All application outputs successfully deleted | - | +**404** | Application run not found | - | **422** | Validation Error | - | [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) -# **get_item_by_run_v1_runs_run_id_items_external_id_get** -> ItemResultReadResponse get_item_by_run_v1_runs_run_id_items_external_id_get(run_id, external_id) +# **get_item_v1_items_item_id_get** +> ItemReadResponse get_item_v1_items_item_id_get(item_id) -Get Item By Run +Get Item -Retrieve details of a specific item (slide) by its external ID and the run ID. +Retrieve details of a specific item (slide) by its ID. ### Example @@ -346,7 +338,7 @@ Retrieve details of a specific item (slide) by its external ID and the run ID. ```python import aignx.codegen -from aignx.codegen.models.item_result_read_response import ItemResultReadResponse +from aignx.codegen.models.item_read_response import ItemReadResponse from aignx.codegen.rest import ApiException from pprint import pprint @@ -367,16 +359,15 @@ configuration.access_token = os.environ["ACCESS_TOKEN"] with aignx.codegen.ApiClient(configuration) as api_client: # Create an instance of the API class api_instance = aignx.codegen.PublicApi(api_client) - run_id = 'run_id_example' # str | The run id, returned by `POST /runs/` endpoint - external_id = 'external_id_example' # str | The `external_id` that was defined for the item by the customer that triggered the run. 
+ item_id = 'item_id_example' # str | try: - # Get Item By Run - api_response = api_instance.get_item_by_run_v1_runs_run_id_items_external_id_get(run_id, external_id) - print("The response of PublicApi->get_item_by_run_v1_runs_run_id_items_external_id_get:\n") + # Get Item + api_response = api_instance.get_item_v1_items_item_id_get(item_id) + print("The response of PublicApi->get_item_v1_items_item_id_get:\n") pprint(api_response) except Exception as e: - print("Exception when calling PublicApi->get_item_by_run_v1_runs_run_id_items_external_id_get: %s\n" % e) + print("Exception when calling PublicApi->get_item_v1_items_item_id_get: %s\n" % e) ``` @@ -386,12 +377,11 @@ with aignx.codegen.ApiClient(configuration) as api_client: Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- - **run_id** | **str**| The run id, returned by `POST /runs/` endpoint | - **external_id** | **str**| The `external_id` that was defined for the item by the customer that triggered the run. | + **item_id** | **str**| | ### Return type -[**ItemResultReadResponse**](ItemResultReadResponse.md) +[**ItemReadResponse**](ItemReadResponse.md) ### Authorization @@ -407,8 +397,8 @@ Name | Type | Description | Notes | Status code | Description | Response headers | |-------------|-------------|------------------| **200** | Successful Response | - | -**404** | Not Found - Item with given ID does not exist | - | **403** | Forbidden - You don't have permission to see this item | - | +**404** | Not Found - Item with given ID does not exist | - | **422** | Validation Error | - | [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) @@ -484,12 +474,12 @@ This endpoint does not need any parameter. [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) -# **get_run_v1_runs_run_id_get** -> RunReadResponse get_run_v1_runs_run_id_get(run_id) +# **get_run_v1_runs_application_run_id_get** +> RunReadResponse get_run_v1_runs_application_run_id_get(application_run_id) Get run details -This endpoint allows the caller to retrieve the current status of a run along with other relevant run details. A run becomes available immediately after it is created through the POST `/runs/` endpoint. To download the output results, use GET `/runs/{run_id}/` items to get outputs for all slides. Access to a run is restricted to the user who created it. +This endpoint allows the caller to retrieve the current status of an application run along with other relevant run details. A run becomes available immediately after it is created through the POST `/runs/` endpoint. To download the output results, use GET `/runs/{application_run_id}/results` to get outputs for all slides. Access to a run is restricted to the user who created it.
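For completeness, a polling sketch built on the renamed endpoint; the 30-second interval is arbitrary, and this assumes `terminated_at` is populated once a run reaches a final state, per the `RunReadResponse` model earlier in this diff:

```python
# Sketch only: block until the application run terminates.
import time

import aignx.codegen

def wait_for_run(api_client, application_run_id: str, interval_s: float = 30.0):
    api = aignx.codegen.PublicApi(api_client)
    while True:
        run = api.get_run_v1_runs_application_run_id_get(application_run_id)
        if run.terminated_at is not None:  # assumed set on reaching a final state
            return run
        time.sleep(interval_s)
```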
### Example @@ -518,15 +508,15 @@ configuration.access_token = os.environ["ACCESS_TOKEN"] with aignx.codegen.ApiClient(configuration) as api_client: # Create an instance of the API class api_instance = aignx.codegen.PublicApi(api_client) - run_id = 'run_id_example' # str | Run id, returned by `POST /runs/` endpoint + application_run_id = 'application_run_id_example' # str | Application run id, returned by `POST /runs/` endpoint try: # Get run details - api_response = api_instance.get_run_v1_runs_run_id_get(run_id) - print("The response of PublicApi->get_run_v1_runs_run_id_get:\n") + api_response = api_instance.get_run_v1_runs_application_run_id_get(application_run_id) + print("The response of PublicApi->get_run_v1_runs_application_run_id_get:\n") pprint(api_response) except Exception as e: - print("Exception when calling PublicApi->get_run_v1_runs_run_id_get: %s\n" % e) + print("Exception when calling PublicApi->get_run_v1_runs_application_run_id_get: %s\n" % e) ``` @@ -536,7 +526,7 @@ with aignx.codegen.ApiClient(configuration) as api_client: Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- - **run_id** | **str**| Run id, returned by `POST /runs/` endpoint | + **application_run_id** | **str**| Application run id, returned by `POST /runs/` endpoint | ### Return type @@ -556,18 +546,18 @@ Name | Type | Description | Notes | Status code | Description | Response headers | |-------------|-------------|------------------| **200** | Successful Response | - | -**404** | Run not found because it was deleted. | - | +**404** | Application run not found because it was deleted. | - | **403** | Forbidden - You don't have permission to see this run | - | **422** | Validation Error | - | [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) -# **list_applications_v1_applications_get** -> List[ApplicationReadShortResponse] list_applications_v1_applications_get(page=page, page_size=page_size, sort=sort) +# **list_application_runs_v1_runs_get** +> List[RunReadResponse] list_application_runs_v1_runs_get(application_id=application_id, application_version=application_version, metadata=metadata, page=page, page_size=page_size, sort=sort) -List available applications +List Application Runs -Returns the list of the applications, available to the caller. The application is available if any of the versions of the application is assigned to the caller’s organization. The response is paginated and sorted according to the provided parameters. +List application runs with filtering, sorting, and pagination capabilities. Returns paginated application runs that were triggered by the user. ### Example @@ -575,7 +565,7 @@ Returns the list of the applications, available to the caller. 
The application ```python import aignx.codegen -from aignx.codegen.models.application_read_short_response import ApplicationReadShortResponse +from aignx.codegen.models.run_read_response import RunReadResponse from aignx.codegen.rest import ApiException from pprint import pprint @@ -596,17 +586,20 @@ configuration.access_token = os.environ["ACCESS_TOKEN"] with aignx.codegen.ApiClient(configuration) as api_client: # Create an instance of the API class api_instance = aignx.codegen.PublicApi(api_client) + application_id = 'application_id_example' # str | Optional application ID filter (optional) + application_version = 'application_version_example' # str | Optional application version filter (optional) + metadata = '$.project' # str | Use PostgreSQL JSONPath expressions to filter runs by their metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.project` - Runs that have a project field defined - **Exact value match**: `$.project ? (@ == \"cancer-research\")` - Runs with specific project value - **Numeric comparison**: `$.duration_hours ? (@ < 2)` - Runs with duration less than 2 hours - **Array operations**: `$.tags[*] ? (@ == \"production\")` - Runs tagged with \"production\" - **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.project` - **Exact value match**: `%24.project%20%3F%20(%40%20%3D%3D%20%22cancer-research%22)` - **Numeric comparison**: `%24.duration_hours%20%3F%20(%40%20%3C%202)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22production%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** (optional) page = 1 # int | (optional) (default to 1) page_size = 50 # int | (optional) (default to 50) - sort = ['sort_example'] # List[str] | Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then name descending (optional) + sort = ['sort_example'] # List[str] | Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. 
**Available fields:** - `application_run_id` - `application_version_id` - `organization_id` - `status` - `triggered_at` - `triggered_by` **Examples:** - `?sort=triggered_at` - Sort by creation time (ascending) - `?sort=-triggered_at` - Sort by creation time (descending) - `?sort=status&sort=-triggered_at` - Sort by status, then by time (descending) (optional) try: - # List available applications - api_response = api_instance.list_applications_v1_applications_get(page=page, page_size=page_size, sort=sort) - print("The response of PublicApi->list_applications_v1_applications_get:\n") + # List Application Runs + api_response = api_instance.list_application_runs_v1_runs_get(application_id=application_id, application_version=application_version, metadata=metadata, page=page, page_size=page_size, sort=sort) + print("The response of PublicApi->list_application_runs_v1_runs_get:\n") pprint(api_response) except Exception as e: - print("Exception when calling PublicApi->list_applications_v1_applications_get: %s\n" % e) + print("Exception when calling PublicApi->list_application_runs_v1_runs_get: %s\n" % e) ``` @@ -616,13 +609,16 @@ with aignx.codegen.ApiClient(configuration) as api_client: Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- + **application_id** | **str**| Optional application ID filter | [optional] + **application_version** | **str**| Optional application version filter | [optional] + **metadata** | **str**| Use PostgreSQL JSONPath expressions to filter runs by their metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.project` - Runs that have a project field defined - **Exact value match**: `$.project ? (@ == \"cancer-research\")` - Runs with specific project value - **Numeric comparison**: `$.duration_hours ? (@ < 2)` - Runs with duration less than 2 hours - **Array operations**: `$.tags[*] ? (@ == \"production\")` - Runs tagged with \"production\" - **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.project` - **Exact value match**: `%24.project%20%3F%20(%40%20%3D%3D%20%22cancer-research%22)` - **Numeric comparison**: `%24.duration_hours%20%3F%20(%40%20%3C%202)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22production%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** | [optional] **page** | **int**| | [optional] [default to 1] **page_size** | **int**| | [optional] [default to 50] - **sort** | [**List[str]**](str.md)| Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. 
**Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then name descending | [optional] + **sort** | [**List[str]**](str.md)| Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_run_id` - `application_version_id` - `organization_id` - `status` - `triggered_at` - `triggered_by` **Examples:** - `?sort=triggered_at` - Sort by creation time (ascending) - `?sort=-triggered_at` - Sort by creation time (descending) - `?sort=status&sort=-triggered_at` - Sort by status, then by time (descending) | [optional] ### Return type -[**List[ApplicationReadShortResponse]**](ApplicationReadShortResponse.md) +[**List[RunReadResponse]**](RunReadResponse.md) ### Authorization @@ -637,18 +633,18 @@ Name | Type | Description | Notes | Status code | Description | Response headers | |-------------|-------------|------------------| -**200** | A list of applications available to the caller | - | -**401** | Unauthorized - Invalid or missing authentication | - | +**200** | Successful Response | - | +**404** | Application run not found | - | **422** | Validation Error | - | [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) -# **list_run_items_v1_runs_run_id_items_get** -> List[ItemResultReadResponse] list_run_items_v1_runs_run_id_items_get(run_id, item_id__in=item_id__in, external_id__in=external_id__in, state=state, termination_reason=termination_reason, custom_metadata=custom_metadata, page=page, page_size=page_size, sort=sort) +# **list_applications_v1_applications_get** +> List[ApplicationReadResponse] list_applications_v1_applications_get(page=page, page_size=page_size, sort=sort) -List Run Items +List available applications -List items in a run with filtering, sorting, and pagination capabilities. Returns paginated items within a specific run. Results can be filtered by item IDs, external_ids, status, and custom_metadata using JSONPath expressions. ## JSONPath Metadata Filtering Use PostgreSQL JSONPath expressions to filter items using their custom_metadata. ### Examples: - **Field existence**: `$.case_id` - Results that have a case_id field defined - **Exact value match**: `$.priority ? (@ == \"high\")` - Results with high priority - **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with high confidence - **Array operations**: `$.flags[*] ? (@ == \"reviewed\")` - Results flagged as reviewed - **Complex conditions**: `$.metrics ? (@.accuracy > 0.9 && @.recall > 0.8)` - Results meeting performance thresholds ## Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations +Returns the list of the applications, available to the caller. The application is available if any of the versions of the application is assigned to the caller’s organization. The response is paginated and sorted according to the provided parameters. ### Example @@ -656,9 +652,7 @@ List items in a run with filtering, sorting, and pagination capabilities. 
Retur ```python import aignx.codegen -from aignx.codegen.models.item_result_read_response import ItemResultReadResponse -from aignx.codegen.models.item_state import ItemState -from aignx.codegen.models.item_termination_reason import ItemTerminationReason +from aignx.codegen.models.application_read_response import ApplicationReadResponse from aignx.codegen.rest import ApiException from pprint import pprint @@ -679,23 +673,17 @@ configuration.access_token = os.environ["ACCESS_TOKEN"] with aignx.codegen.ApiClient(configuration) as api_client: # Create an instance of the API class api_instance = aignx.codegen.PublicApi(api_client) - run_id = 'run_id_example' # str | Run id, returned by `POST /runs/` endpoint - item_id__in = ['item_id__in_example'] # List[str] | Filter for item ids (optional) - external_id__in = ['external_id__in_example'] # List[str] | Filter for items by their external_id from the input payload (optional) - state = aignx.codegen.ItemState() # ItemState | Filter items by their state (optional) - termination_reason = aignx.codegen.ItemTerminationReason() # ItemTerminationReason | Filter items by their termination reason. Only applies to TERMINATED items. (optional) - custom_metadata = '$' # str | JSONPath expression to filter items by their custom_metadata (optional) page = 1 # int | (optional) (default to 1) page_size = 50 # int | (optional) (default to 50) - sort = ['sort_example'] # List[str] | Sort the items by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `item_id` - `run_id` - `external_id` - `custom_metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-external_id` - Sort by external ID (descending) - `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending) (optional) + sort = ['sort_example'] # List[str] | Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. 
**Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then name descending (optional) try: - # List Run Items - api_response = api_instance.list_run_items_v1_runs_run_id_items_get(run_id, item_id__in=item_id__in, external_id__in=external_id__in, state=state, termination_reason=termination_reason, custom_metadata=custom_metadata, page=page, page_size=page_size, sort=sort) - print("The response of PublicApi->list_run_items_v1_runs_run_id_items_get:\n") + # List available applications + api_response = api_instance.list_applications_v1_applications_get(page=page, page_size=page_size, sort=sort) + print("The response of PublicApi->list_applications_v1_applications_get:\n") pprint(api_response) except Exception as e: - print("Exception when calling PublicApi->list_run_items_v1_runs_run_id_items_get: %s\n" % e) + print("Exception when calling PublicApi->list_applications_v1_applications_get: %s\n" % e) ``` @@ -705,19 +693,13 @@ with aignx.codegen.ApiClient(configuration) as api_client: Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- - **run_id** | **str**| Run id, returned by `POST /runs/` endpoint | - **item_id__in** | [**List[str]**](str.md)| Filter for item ids | [optional] - **external_id__in** | [**List[str]**](str.md)| Filter for items by their external_id from the input payload | [optional] - **state** | [**ItemState**](.md)| Filter items by their state | [optional] - **termination_reason** | [**ItemTerminationReason**](.md)| Filter items by their termination reason. Only applies to TERMINATED items. | [optional] - **custom_metadata** | **str**| JSONPath expression to filter items by their custom_metadata | [optional] **page** | **int**| | [optional] [default to 1] **page_size** | **int**| | [optional] [default to 50] - **sort** | [**List[str]**](str.md)| Sort the items by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `item_id` - `run_id` - `external_id` - `custom_metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-external_id` - Sort by external ID (descending) - `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending) | [optional] + **sort** | [**List[str]**](str.md)| Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. 
**Available fields:** - `application_id` - `name` - `description` - `regulatory_classes` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-name` - Sort by name descending - `?sort=+description&sort=name` - Sort by description ascending, then name descending | [optional] ### Return type -[**List[ItemResultReadResponse]**](ItemResultReadResponse.md) +[**List[ApplicationReadResponse]**](ApplicationReadResponse.md) ### Authorization @@ -732,18 +714,18 @@ Name | Type | Description | Notes | Status code | Description | Response headers | |-------------|-------------|------------------| -**200** | Successful Response | - | -**404** | Run not found | - | +**200** | A list of applications available to the caller | - | +**401** | Unauthorized - Invalid or missing authentication | - | **422** | Validation Error | - | [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) -# **list_runs_v1_runs_get** -> List[RunReadResponse] list_runs_v1_runs_get(application_id=application_id, application_version=application_version, external_id=external_id, custom_metadata=custom_metadata, page=page, page_size=page_size, sort=sort) +# **list_run_results_v1_runs_application_run_id_results_get** +> List[ItemResultReadResponse] list_run_results_v1_runs_application_run_id_results_get(application_run_id, item_id__in=item_id__in, reference__in=reference__in, status__in=status__in, metadata=metadata, page=page, page_size=page_size, sort=sort) -List Runs +List Run Results -List runs with filtering, sorting, and pagination capabilities. Returns paginated runs that were submitted by the user. +List results for items in an application run with filtering, sorting, and pagination capabilities. Returns paginated results for items within a specific application run. Results can be filtered by item IDs, references, status, and custom metadata using JSONPath expressions. ## JSONPath Metadata Filtering Use PostgreSQL JSONPath expressions to filter results by their metadata. ### Examples: - **Field existence**: `$.case_id` - Results that have a case_id field defined - **Exact value match**: `$.priority ? (@ == \"high\")` - Results with high priority - **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with high confidence - **Array operations**: `$.flags[*] ? (@ == \"reviewed\")` - Results flagged as reviewed - **Complex conditions**: `$.metrics ? (@.accuracy > 0.9 && @.recall > 0.8)` - Results meeting performance thresholds ## Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations ### Example @@ -751,7 +733,8 @@ List runs with filtering, sorting, and pagination capabilities. 
Returns paginat ```python import aignx.codegen -from aignx.codegen.models.run_read_response import RunReadResponse +from aignx.codegen.models.item_result_read_response import ItemResultReadResponse +from aignx.codegen.models.item_status import ItemStatus from aignx.codegen.rest import ApiException from pprint import pprint @@ -772,21 +755,22 @@ configuration.access_token = os.environ["ACCESS_TOKEN"] with aignx.codegen.ApiClient(configuration) as api_client: # Create an instance of the API class api_instance = aignx.codegen.PublicApi(api_client) - application_id = 'application_id_example' # str | Optional application ID filter (optional) - application_version = 'application_version_example' # str | Optional Version Name (optional) - external_id = 'external_id_example' # str | Optionally filter runs by items with this external ID (optional) - custom_metadata = '$' # str | Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.study` - Runs that have a study field defined - **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value - **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75 - **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\" - **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.study` - **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)` - **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** (optional) + application_run_id = 'application_run_id_example' # str | Application run id, returned by `POST /runs/` endpoint + item_id__in = ['item_id__in_example'] # List[str] | Filter for item ids (optional) + reference__in = ['reference__in_example'] # List[str] | Filter for items by their reference from the input payload (optional) + status__in = [aignx.codegen.ItemStatus()] # List[ItemStatus] | Filter for items in certain statuses (optional) + metadata = '$.project' # str | JSONPath expression to filter results by their metadata (optional) page = 1 # int | (optional) (default to 1) page_size = 50 # int | (optional) (default to 50) - sort = ['sort_example'] # List[str] | Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.
**Available fields:** - `run_id` - `application_version_id` - `organization_id` - `status` - `submitted_at` - `submitted_by` **Examples:** - `?sort=submitted_at` - Sort by creation time (ascending) - `?sort=-submitted_at` - Sort by creation time (descending) - `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending) (optional) + sort = ['sort_example'] # List[str] | Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `item_id` - `application_run_id` - `reference` - `status` - `metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-application_run_id` - Sort by id of the run (descending) - `?sort=status&sort=-item_id` - Sort by status, then by id of the item (descending) (optional) try: - # List Runs - api_response = api_instance.list_runs_v1_runs_get(application_id=application_id, application_version=application_version, external_id=external_id, custom_metadata=custom_metadata, page=page, page_size=page_size, sort=sort) - print("The response of PublicApi->list_runs_v1_runs_get:\n") + # List Run Results + api_response = api_instance.list_run_results_v1_runs_application_run_id_results_get(application_run_id, item_id__in=item_id__in, reference__in=reference__in, status__in=status__in, metadata=metadata, page=page, page_size=page_size, sort=sort) + print("The response of PublicApi->list_run_results_v1_runs_application_run_id_results_get:\n") pprint(api_response) except Exception as e: - print("Exception when calling PublicApi->list_runs_v1_runs_get: %s\n" % e) + print("Exception when calling PublicApi->list_run_results_v1_runs_application_run_id_results_get: %s\n" % e) ``` @@ -796,17 +780,18 @@ with aignx.codegen.ApiClient(configuration) as api_client: Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- - **application_id** | **str**| Optional application ID filter | [optional] - **application_version** | **str**| Optional Version Name | [optional] - **external_id** | **str**| Optionally filter runs by items with this external ID | [optional] - **custom_metadata** | **str**| Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata. #### URL Encoding Required **Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding. #### Examples (Clear Format): - **Field existence**: `$.study` - Runs that have a study field defined - **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value - **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75 - **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\" - **Complex conditions**: `$.resources ?
(@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements #### Examples (URL-Encoded Format): - **Field existence**: `%24.study` - **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)` - **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)` - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)` - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)` #### Notes - JSONPath expressions are evaluated using PostgreSQL's `@?` operator - The `$.` prefix is automatically added to root-level field references if missing - String values in conditions must be enclosed in double quotes - Use `&&` for AND operations and `||` for OR operations - Regular expressions use `like_regex` with standard regex syntax - **Remember to URL-encode the entire JSONPath expression when making HTTP requests** | [optional] + **application_run_id** | **str**| Application run id, returned by `POST /runs/` endpoint | + **item_id__in** | [**List[str]**](str.md)| Filter for item ids | [optional] + **reference__in** | [**List[str]**](str.md)| Filter for items by their reference from the input payload | [optional] + **status__in** | [**List[ItemStatus]**](ItemStatus.md)| Filter for items in certain statuses | [optional] + **metadata** | **str**| JSONPath expression to filter results by their metadata | [optional] **page** | **int**| | [optional] [default to 1] **page_size** | **int**| | [optional] [default to 50] - **sort** | [**List[str]**](str.md)| Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.
**Available fields:** - `item_id` - `application_run_id` - `reference` - `status` - `metadata` **Examples:** - `?sort=item_id` - Sort by id of the item (ascending) - `?sort=-application_run_id` - Sort by id of the run (descending) - `?sort=status&sort=-item_id` - Sort by status, then by id of the item (descending) | [optional] ### Return type -[**List[RunReadResponse]**](RunReadResponse.md) +[**List[ItemResultReadResponse]**](ItemResultReadResponse.md) ### Authorization @@ -822,93 +807,17 @@ Name | Type | Description | Notes | Status code | Description | Response headers | |-------------|-------------|------------------| **200** | Successful Response | - | -**404** | Run not found | - | +**404** | Application run not found | - | **422** | Validation Error | - | [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) -# **put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put** -> object put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put(run_id, external_id, custom_metadata_update_request) - -Put Item Custom Metadata By Run - -### Example - -* OAuth Authentication (OAuth2AuthorizationCodeBearer): - -```python -import aignx.codegen -from aignx.codegen.models.custom_metadata_update_request import CustomMetadataUpdateRequest -from aignx.codegen.rest import ApiException -from pprint import pprint - -# Defining the host is optional and defaults to /api -# See configuration.py for a list of all supported configuration parameters. -configuration = aignx.codegen.Configuration( - host = "/api" -) - -# The client must configure the authentication and authorization parameters -# in accordance with the API server security policy. -# Examples for each auth method are provided below, use the example that -# satisfies your auth use case. - -configuration.access_token = os.environ["ACCESS_TOKEN"] - -# Enter a context with an instance of the API client -with aignx.codegen.ApiClient(configuration) as api_client: - # Create an instance of the API class - api_instance = aignx.codegen.PublicApi(api_client) - run_id = 'run_id_example' # str | The run id, returned by `POST /runs/` endpoint - external_id = 'external_id_example' # str | The `external_id` that was defined for the item by the customer that triggered the run. - custom_metadata_update_request = aignx.codegen.CustomMetadataUpdateRequest() # CustomMetadataUpdateRequest | - - try: - # Put Item Custom Metadata By Run - api_response = api_instance.put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put(run_id, external_id, custom_metadata_update_request) - print("The response of PublicApi->put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put:\n") - pprint(api_response) - except Exception as e: - print("Exception when calling PublicApi->put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put: %s\n" % e) -``` - - - -### Parameters - - -Name | Type | Description | Notes -------------- | ------------- | ------------- | ------------- - **run_id** | **str**| The run id, returned by `POST /runs/` endpoint | - **external_id** | **str**| The `external_id` that was defined for the item by the customer that triggered the run.
| - **custom_metadata_update_request** | [**CustomMetadataUpdateRequest**](CustomMetadataUpdateRequest.md)| | - -### Return type - -**object** - -### Authorization - -[OAuth2AuthorizationCodeBearer](../README.md#OAuth2AuthorizationCodeBearer) - -### HTTP request headers - - - **Content-Type**: application/json - - **Accept**: application/json - -### HTTP response details - -| Status code | Description | Response headers | -|-------------|-------------|------------------| -**200** | Successful Response | - | -**422** | Validation Error | - | - -[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) +# **list_versions_by_application_id_v1_applications_application_id_versions_get** +> List[ApplicationVersionReadResponse] list_versions_by_application_id_v1_applications_application_id_versions_get(application_id, page=page, page_size=page_size, version=version, sort=sort) -# **put_run_custom_metadata_v1_runs_run_id_custom_metadata_put** -> object put_run_custom_metadata_v1_runs_run_id_custom_metadata_put(run_id, custom_metadata_update_request) +List Available Application Versions -Put Run Custom Metadata +Returns a list of available application versions for a specific application. A version is considered available when it has been assigned to your organization. Within a major version, all minor and patch updates are automatically accessible unless a specific version has been deprecated. Major version upgrades, however, require explicit assignment and may be subject to contract modifications before becoming available to your organization. ### Example @@ -916,7 +825,7 @@ Put Run Custom Metadata ```python import aignx.codegen -from aignx.codegen.models.custom_metadata_update_request import CustomMetadataUpdateRequest +from aignx.codegen.models.application_version_read_response import ApplicationVersionReadResponse from aignx.codegen.rest import ApiException from pprint import pprint @@ -937,16 +846,19 @@ configuration.access_token = os.environ["ACCESS_TOKEN"] with aignx.codegen.ApiClient(configuration) as api_client: # Create an instance of the API class api_instance = aignx.codegen.PublicApi(api_client) - run_id = 'run_id_example' # str | Run id, returned by `POST /runs/` endpoint - custom_metadata_update_request = aignx.codegen.CustomMetadataUpdateRequest() # CustomMetadataUpdateRequest | + application_id = 'application_id_example' # str | + page = 1 # int | (optional) (default to 1) + page_size = 50 # int | (optional) (default to 50) + version = 'version_example' # str | Semantic version of the application, example: `1.0.13` (optional) + sort = ['sort_example'] # List[str] | Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. 
**Available fields:** - `application_version_id` - `version` - `application_id` - `changelog` - `created_at` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-version` - Sort by version descending - `?sort=+application_id&sort=-created_at` - Sort by application_id ascending, then created_at descending (optional) try: - # Put Run Custom Metadata - api_response = api_instance.put_run_custom_metadata_v1_runs_run_id_custom_metadata_put(run_id, custom_metadata_update_request) - print("The response of PublicApi->put_run_custom_metadata_v1_runs_run_id_custom_metadata_put:\n") + # List Available Application Versions + api_response = api_instance.list_versions_by_application_id_v1_applications_application_id_versions_get(application_id, page=page, page_size=page_size, version=version, sort=sort) + print("The response of PublicApi->list_versions_by_application_id_v1_applications_application_id_versions_get:\n") pprint(api_response) except Exception as e: - print("Exception when calling PublicApi->put_run_custom_metadata_v1_runs_run_id_custom_metadata_put: %s\n" % e) + print("Exception when calling PublicApi->list_versions_by_application_id_v1_applications_application_id_versions_get: %s\n" % e) ``` @@ -956,12 +868,15 @@ with aignx.codegen.ApiClient(configuration) as api_client: Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- - **run_id** | **str**| Run id, returned by `POST /runs/` endpoint | - **custom_metadata_update_request** | [**CustomMetadataUpdateRequest**](CustomMetadataUpdateRequest.md)| | + **application_id** | **str**| | + **page** | **int**| | [optional] [default to 1] + **page_size** | **int**| | [optional] [default to 50] + **version** | **str**| Semantic version of the application, example: `1.0.13` | [optional] + **sort** | [**List[str]**](str.md)| Sort the results by one or more fields. Use `+` for ascending and `-` for descending order. **Available fields:** - `application_version_id` - `version` - `application_id` - `changelog` - `created_at` **Examples:** - `?sort=application_id` - Sort by application_id ascending - `?sort=-version` - Sort by version descending - `?sort=+application_id&sort=-created_at` - Sort by application_id ascending, then created_at descending | [optional] ### Return type -**object** +[**List[ApplicationVersionReadResponse]**](ApplicationVersionReadResponse.md) ### Authorization @@ -969,15 +884,15 @@ Name | Type | Description | Notes ### HTTP request headers - - **Content-Type**: application/json + - **Content-Type**: Not defined - **Accept**: application/json ### HTTP response details | Status code | Description | Response headers | |-------------|-------------|------------------| -**200** | Successful Response | - | -**404** | Run not found | - | +**200** | A list of application versions for a given application ID available to the caller | - | +**401** | Unauthorized - Invalid or missing authentication | - | **422** | Validation Error | - | [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) diff --git a/docs/partials/README_main.md b/docs/partials/README_main.md index 3e0d3466..b4bf4b82 100644 --- a/docs/partials/README_main.md +++ b/docs/partials/README_main.md @@ -3,7 +3,7 @@ The **Aignostics Python SDK** includes multiple pathways to interact with the **Aignostics Platform**: -1. 
Use the **Aignostics Launchpad** to analyze whole slide images with advanced computational pathology applications like [Atlas H&E-TME](https://www.aignostics.com/products/he-tme-profiling-product) directly from your desktop. View your results by launching popular tools such as [QuPath](https://qupath.github.io/) and Python Notebooks with one click. The app runs on Mac OS X, Windows, and Linux. @@ -38,34 +38,33 @@ more about how we achieve ## Quick Start > [!Note] -> See as follows for a quick start guide to get you up and running with the Aignostics Python SDK as quickly as possible. -> If you first want to learn bout the basic concepts and components of the Aignostics Platform skip to that section below. -> The further reading section points you to reference documentation listing all available CLI commands, methods and classes provided by the client library, operations of the API, how we achieve operational excellence, security, and more. +> See as follows for a quick start guide to get you up and running with the Aignostics Python SDK as quickly as possible. +> If you first want to learn about the basic concepts and components of the Aignostics Platform, skip to that section below. +> The further reading section points you to reference documentation listing all available CLI commands, methods and classes provided by the client library, operations of the API, how we achieve operational excellence, security, and more. > If you are not familiar with terminology please check the glossary at the end of this document. ### Launchpad: Run your first computational pathology analysis in 10 minutes from your desktop The **Aignostics Launchpad** is a graphical desktop application that allows you to run -applications on whole slide images (WSIs) from your computer, and inspect results with QuPath and Python Notebooks with one click. It is designed to be user-friendly and intuitive, for use by Research Pathologists and Data Scientists. +applications on whole slide images (WSIs) from your computer, and inspect results with QuPath and Python Notebooks with one click. It is designed to be user-friendly and intuitive, for use by Research Pathologists and Data Scientists. The Launchpad is available for Mac OS X, Windows, and Linux, and can be installed easily: -1. Visit the [Quick Start](https://platform.aignostics.com/getting-started/quick-start) +1. Visit the [Quick Start](https://platform.aignostics.com/getting-started/quick-start) page in the Aignostics Console. 2. Copy the installation script and paste it into your terminal - compatible with MacOS, Windows, and Linux. 3. Launch the application by running `uvx aignostics launchpad`. -4. Follow the intuitive graphical interface to analyze public datasets or your own whole slide images +4. Follow the intuitive graphical interface to analyze public datasets or your own whole slide images with [Atlas H&E-TME](https://www.aignostics.com/products/he-tme-profiling-product) and other computational pathology applications. > [!Note] -> The Launchpad features a growing ecosystem of extensions that seamlessly integrate with standard digital pathology tools.
To use the Launchpad with all available extensions, run `uvx --with aignostics[qupath,marimo] aignostics launchpad`. Currently available extensions are: > 1. **QuPath extension**: View your application results in [QuPath](https://qupath.github.io/) with a single click. The Launchpad creates QuPath projects on-the-fly. > 2. **Marimo extension**: Analyze your application results using [Marimo](https://marimo.io/) notebooks embedded in the Launchpad. You don't have to leave the Launchpad to do real data science. ### CLI: Manage datasets and application runs from your terminal -The Python SDK includes the **Aignostics CLI**, a Command-Line Interface (CLI) that allows you to +The Python SDK includes the **Aignostics CLI**, a Command-Line Interface that allows you to interact with the Aignostics Platform directly from your terminal or shell script. See as follows for a simple example where we download a sample dataset for the [Atlas @@ -86,7 +85,7 @@ nano tcga_luad/metadata.csv uvx aignostics application run upload he-tme data/tcga_luad/run.csv # Submit the application run and print the run id uvx aignostics application run submit he-tme data/tcga_luad/run.csv -# Check the status of the application run you submitted +# Check the status of the application run you triggered uvx aignostics application run list # Incrementally download results when they become available # Fill in the id from the output in the previous step @@ -123,10 +122,10 @@ to learn about all commands and options available. > [your personal dashboard on the Aignostics Platform website](https://platform.aignostics.com/getting-started/quick-start) > and follow the steps outlined in the `Use in Python Notebooks` section. -The Python SDK includes Jupyter and Marimo notebooks to help you get started interacting +The Python SDK includes Jupyter and Marimo notebooks to help you get started interacting with the Aignostics Platform in your notebook environment. -The notebooks showcase the interaction with the Aignostics Platform using our "Test Application". To run one them, +The notebooks showcase the interaction with the Aignostics Platform using our "Test Application". To run one of them, please follow the steps outlined in the snippet below to clone this repository and start either the [Jupyter](https://docs.jupyter.org/en/latest/index.html) ([examples/notebook.ipynb](https://github.com/aignostics/python-sdk/blob/main/examples/notebook.ipynb)) @@ -160,12 +159,12 @@ uv run marimo edit examples/notebook.py Next to using the Launchpad, CLI and example notebooks, the Python SDK includes the *Aignostics Client Library* for integration with your Python Codebase. -The following sections outline how to install the Python SDK for this purpose and +The following sections outline how to install the Python SDK for this purpose and interact with the Client.
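The usage snippet in the next section passes a `checksum_crc32c` value for each input artifact. Below is a minimal sketch of computing that base64-encoded CRC32C digest for a local slide file; it assumes the third-party `google-crc32c` package (installable via `pip install google-crc32c`), which is not part of the SDK itself:

```python
import base64

import google_crc32c  # third-party package, assumed here for CRC32C support


def crc32c_base64(path: str) -> str:
    """Return the base64-encoded CRC32C digest of a file for artifact metadata."""
    checksum = google_crc32c.Checksum()
    with open(path, "rb") as f:
        # Stream in chunks so large whole slide images need not fit in memory
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            checksum.update(chunk)
    return base64.b64encode(checksum.digest()).decode("ascii")


print(crc32c_base64("slide.tiff"))  # e.g. "N+LWCg=="
```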
### Installation -The Aignostics Python SDK is published on the [Python Package Index (PyPI)](https://pypi.org/project/aignostics/), +The Aignostics Python SDK is published on the [Python Package Index (PyPI)](https://pypi.org/project/aignostics/), is compatible with Python 3.11 and above, and can be installed via `uv` or `pip`: **Install with [uv](https://docs.astral.sh/uv/):** If you don't have uv @@ -185,7 +184,7 @@ pip install aignostics #### Usage -The following snippet shows how to use the Client to submit an application +The following snippet shows how to use the Client to trigger an application run: ```python @@ -193,21 +192,21 @@ from aignostics import platform # initialize the client client = platform.Client() -# submit an application run -application_run = client.runs.submit( - application_id="test-app", +# trigger an application run +application_run = client.runs.create( + application_version="two-task-dummy:v0.35.0", items=[ platform.InputItem( - external_id="slide-1", + reference="slide-1", input_artifacts=[ platform.InputArtifact( - name="whole_slide_image", + name="user_slide", download_url="
", metadata={ - "checksum_base64_crc32c": "AAAAAA==", - "resolution_mpp": 0.25, - "width_px": 1000, - "height_px": 1000, + "checksum_crc32c": "AAAAAA==", + "base_mpp": 0.25, + "width": 1000, + "height": 1000, }, ) ], @@ -225,23 +224,22 @@ to learn about all classes and methods. ##### Defining the input for an application run -When creating an application run, you need to specify the `application_id` and optionally the -`application_version` (version number) of the application you want to run. If you omit the version, -the latest version will be used automatically. Additionally, you need to define the input items you -want to process in the run. The input items are defined as follows: +Next to the `application_version` of the application you want to run, you have +to define the input items you want to process in the run. The input items are +defined as follows: ```python platform.InputItem( - external_id="1", + reference="1", input_artifacts=[ platform.InputArtifact( - name="whole_slide_image", # defined by the application version's input artifact schema + name="user_slide", # defined by the application version input_artifact schema download_url="", - metadata={ # defined by the application version's input artifact schema - "checksum_base64_crc32c": "N+LWCg==", - "resolution_mpp": 0.46499982, - "width_px": 3728, - "height_px": 3640, + metadata={ # defined by the application version input_artifact schema + "checksum_crc32c": "N+LWCg==", + "base_mpp": 0.46499982, + "width": 3728, + "height": 3640, }, ) ], @@ -254,7 +252,8 @@ string. This is used to identify the item in the results later on. The data & metadata you need to provide for each item. The required artifacts depend on the application version you want to run - in the case of test application, there is only one artifact required, which is the image to process on. The -artifact name is defined as `whole_slide_image` for this application. +artifact name is defined as `user_slide` for the `two-task-dummy` application +and `whole_slide_image` for the `he-tme` application. The `download_url` is a signed URL that allows the Aignostics Platform to download the image data later during processing. diff --git a/docs/source/_static/openapi_v1.json b/docs/source/_static/openapi_v1.json index ed47ea22..6888f1ae 100644 --- a/docs/source/_static/openapi_v1.json +++ b/docs/source/_static/openapi_v1.json @@ -1,9 +1,9 @@ { "openapi": "3.1.0", "info": { - "title": "Aignostics Platform API", - "description": "\nThe Aignostics Platform is a cloud-based service that enables organizations to access advanced computational pathology applications through a secure API. The platform provides standardized access to Aignostics' portfolio of computational pathology solutions, with Atlas H&E-TME serving as an example of the available API endpoints. \n\nTo begin using the platform, your organization must first be registered by our business support team. If you don't have an account yet, please contact your account manager or email support@aignostics.com to get started. \n\nMore information about our applications can be found on [https://platform.aignostics.com](https://platform.aignostics.com).\n\n**How to authorize and test API endpoints:**\n\n1. Click the \"Authorize\" button in the right corner below\n3. Click \"Authorize\" button in the dialog to log in with your Aignostics Platform credentials\n4. After successful login, you'll be redirected back and can use \"Try it out\" on any endpoint\n\n**Note**: You only need to authorize once per session. 
The lock icons next to endpoints will show green when authorized.\n\n", - "version": "1.0.0.beta7" + "title": "Aignostics Platform API reference", + "description": "\nPagination is done via `page` and `page_size`. Sorting via `sort` query parameter.\nThe `sort` query parameter can be provided multiple times. The sorting direction can be indicated via\n`+` (ascending) or `-` (descending) (e.g. `/v1/applications?sort=+name`).", + "version": "1.0.0" }, "servers": [ { @@ -16,8 +16,8 @@ "tags": [ "Public" ], - "summary": "List available applications", - "description": "Returns the list of the applications, available to the caller.\n\nThe application is available if any of the versions of the application is assigned to the caller’s organization.\nThe response is paginated and sorted according to the provided parameters.", + "summary": "List Applications", + "description": "Returns the list of the applications, available to the caller.\n\nThe application is available if any of the versions of the application is assigned to the\nuser's organization. To switch between organizations, the user should re-login and choose the\nneeded organization.", "operationId": "list_applications_v1_applications_get", "security": [ { @@ -37,7 +37,7 @@ } }, { - "name": "page-size", + "name": "page_size", "in": "query", "required": false, "schema": { @@ -45,7 +45,7 @@ "maximum": 100, "minimum": 5, "default": 50, - "title": "Page-Size" + "title": "Page Size" } }, { @@ -64,56 +64,25 @@ "type": "null" } ], - "description": "Sort the results by one or more fields.
Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `application_id`\n- `name`\n- `description`\n- `regulatory_classes`\n\n**Examples:**\n- `?sort=application_id` - Sort by application_id ascending\n- `?sort=-name` - Sort by name descending\n- `?sort=+description&sort=name` - Sort by description ascending, then name descending" + } } ], "responses": { "200": { - "description": "A list of applications available to the caller", + "description": "Successful Response", "content": { "application/json": { "schema": { "type": "array", "items": { - "$ref": "#/components/schemas/ApplicationReadShortResponse" + "$ref": "#/components/schemas/ApplicationReadResponse" }, "title": "Response List Applications V1 Applications Get" - }, - "example": [ - { - "application_id": "he-tme", - "name": "Atlas H&E-TME", - "regulatory_classes": [ - "RUO" - ], - "description": "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment.", - "latest_version": { - "number": "1.0.0", - "released_at": "2025-09-01T19:01:05.401Z" - } - }, - { - "application_id": "test-app", - "name": "Test Application", - "regulatory_classes": [ - "RUO" - ], - "description": "This is the test application with two algorithms: TissueQc and Tissue Segmentation", - "latest_version": { - "number": "2.0.0", - "released_at": "2025-09-02T19:01:05.401Z" - } - } - ] + } } } }, - "401": { - "description": "Unauthorized - Invalid or missing authentication" - }, "422": { "description": "Validation Error", "content": { @@ -127,14 +96,14 @@ } } }, - "/v1/applications/{application_id}": { + "/v1/applications/{application_id}/versions": { "get": { "tags": [ "Public" ], - "summary": "Read Application By Id", - "description": "Retrieve details of a specific application by its ID.", - "operationId": "read_application_by_id_v1_applications__application_id__get", + "summary": "List Versions By Application Id", + "description": "Returns the list of the application versions for this application, available to the caller.\n\nThe application version is available if it is assigned to the user's organization.\n\nThe application versions are assigned to the organization by the Aignostics admin. 
To\nassign or unassign a version from your organization, please contact Aignostics support team.", + "operationId": "list_versions_by_application_id_v1_applications__application_id__versions_get", "security": [ { "OAuth2AuthorizationCodeBearer": [] @@ -149,68 +118,86 @@ "type": "string", "title": "Application Id" } - } - ], - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/ApplicationReadResponse" - } - } - } }, - "403": { - "description": "Forbidden - You don't have permission to see this application" + { + "name": "page", + "in": "query", + "required": false, + "schema": { + "type": "integer", + "minimum": 1, + "default": 1, + "title": "Page" + } }, - "404": { - "description": "Not Found - Application with the given ID does not exist" + { + "name": "page_size", + "in": "query", + "required": false, + "schema": { + "type": "integer", + "maximum": 100, + "minimum": 5, + "default": 50, + "title": "Page Size" + } }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" + { + "name": "version", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" } - } + ], + "title": "Version" } - } - } - } - }, - "/v1/applications/{application_id}/versions/{version}": { - "get": { - "tags": [ - "Public" - ], - "summary": "Application Version Details", - "description": "Get the application version details\n\nAllows caller to retrieve information about application version based on provided application version ID.", - "operationId": "application_version_details_v1_applications__application_id__versions__version__get", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ + }, { - "name": "application_id", - "in": "path", - "required": true, + "name": "include", + "in": "query", + "required": false, "schema": { - "type": "string", - "title": "Application Id" + "anyOf": [ + { + "type": "array", + "prefixItems": [ + { + "type": "string" + } + ], + "minItems": 1, + "maxItems": 1 + }, + { + "type": "null" + } + ], + "title": "Include" } }, { - "name": "version", - "in": "path", - "required": true, + "name": "sort", + "in": "query", + "required": false, "schema": { - "type": "string", - "title": "Version" + "anyOf": [ + { + "type": "array", + "items": { + "type": "string" + } + }, + { + "type": "null" + } + ], + "title": "Sort" } } ], @@ -220,201 +207,15 @@ "content": { "application/json": { "schema": { - "$ref": "#/components/schemas/VersionReadResponse" - }, - "example": { - "version_number": "0.4.4", - "changelog": "New deployment", - "input_artifacts": [ - { - "name": "whole_slide_image", - "mime_type": "image/tiff", - "metadata_schema": { - "type": "object", - "$defs": { - "LungCancerMetadata": { - "type": "object", - "title": "LungCancerMetadata", - "required": [ - "type", - "tissue" - ], - "properties": { - "type": { - "enum": [ - "lung" - ], - "type": "string", - "const": "lung", - "title": "Type" - }, - "tissue": { - "enum": [ - "lung", - "lymph node", - "liver", - "adrenal gland", - "bone", - "brain" - ], - "type": "string", - "title": "Tissue" - } - }, - "additionalProperties": false - } - }, - "title": "ExternalImageMetadata", - "$schema": "http://json-schema.org/draft-07/schema#", - "required": [ - "checksum_crc32c", - "base_mpp", - "width", - "height", - "cancer" - ], - "properties": { - "stain": { - "enum": [ - 
"H&E" - ], - "type": "string", - "const": "H&E", - "title": "Stain", - "default": "H&E" - }, - "width": { - "type": "integer", - "title": "Width", - "maximum": 150000, - "minimum": 1 - }, - "cancer": { - "anyOf": [ - { - "$ref": "#/$defs/LungCancerMetadata" - } - ], - "title": "Cancer" - }, - "height": { - "type": "integer", - "title": "Height", - "maximum": 150000, - "minimum": 1 - }, - "base_mpp": { - "type": "number", - "title": "Base Mpp", - "maximum": 0.5, - "minimum": 0.125 - }, - "mime_type": { - "enum": [ - "application/dicom", - "image/tiff" - ], - "type": "string", - "title": "Mime Type", - "default": "image/tiff" - }, - "checksum_crc32c": { - "type": "string", - "title": "Checksum Crc32C" - } - }, - "description": "Metadata corresponding to an external image.", - "additionalProperties": false - } - } - ], - "output_artifacts": [ - { - "name": "tissue_qc:tiff_heatmap", - "mime_type": "image/tiff", - "metadata_schema": { - "type": "object", - "title": "HeatmapMetadata", - "$schema": "http://json-schema.org/draft-07/schema#", - "required": [ - "checksum_crc32c", - "width", - "height", - "class_colors" - ], - "properties": { - "width": { - "type": "integer", - "title": "Width" - }, - "height": { - "type": "integer", - "title": "Height" - }, - "base_mpp": { - "type": "number", - "title": "Base Mpp", - "maximum": 0.5, - "minimum": 0.125 - }, - "mime_type": { - "enum": [ - "image/tiff" - ], - "type": "string", - "const": "image/tiff", - "title": "Mime Type", - "default": "image/tiff" - }, - "class_colors": { - "type": "object", - "title": "Class Colors", - "additionalProperties": { - "type": "array", - "maxItems": 3, - "minItems": 3, - "prefixItems": [ - { - "type": "integer", - "maximum": 255, - "minimum": 0 - }, - { - "type": "integer", - "maximum": 255, - "minimum": 0 - }, - { - "type": "integer", - "maximum": 255, - "minimum": 0 - } - ] - } - }, - "checksum_crc32c": { - "type": "string", - "title": "Checksum Crc32C" - } - }, - "description": "Metadata corresponding to a segmentation heatmap file.", - "additionalProperties": false - }, - "scope": "ITEM", - "visibility": "EXTERNAL" - } - ], - "released_at": "2025-04-16T08:45:20.655972Z" + "type": "array", + "items": { + "$ref": "#/components/schemas/ApplicationVersionReadResponse" + }, + "title": "Response List Versions By Application Id V1 Applications Application Id Versions Get" } } } }, - "403": { - "description": "Forbidden - You don't have permission to see this version" - }, - "404": { - "description": "Not Found - Application version with given ID is not available to you or does not exist" - }, "422": { "description": "Validation Error", "content": { @@ -433,9 +234,9 @@ "tags": [ "Public" ], - "summary": "List Runs", - "description": "List runs with filtering, sorting, and pagination capabilities.\n\nReturns paginated runs that were submitted by the user.", - "operationId": "list_runs_v1_runs_get", + "summary": "List Application Runs", + "description": "The endpoint returns the application runs triggered by the caller. 
After the application run\nis created by POST /v1/runs, it becomes available for the current endpoint", + "operationId": "list_application_runs_v1_runs_get", "security": [ { "OAuth2AuthorizationCodeBearer": [] @@ -456,10 +257,6 @@ } ], "description": "Optional application ID filter", - "examples": [ - "tissue-segmentation", - "heta" - ], "title": "Application Id" }, "description": "Optional application ID filter" @@ -477,87 +274,35 @@ "type": "null" } ], - "description": "Optional Version Name", - "examples": [ - "1.0.2", - "1.0.1-beta2" - ], + "description": "Optional application version filter", "title": "Application Version" }, - "description": "Optional Version Name" - }, - { - "name": "external_id", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "description": "Optionally filter runs by items with this external ID", - "examples": [ - "slide_001", - "patient_12345_sample_A" - ], - "title": "External Id" - }, - "description": "Optionally filter runs by items with this external ID" + "description": "Optional application version filter" }, { - "name": "custom_metadata", + "name": "include", "in": "query", "required": false, "schema": { "anyOf": [ { - "type": "string", - "maxLength": 1000 + "type": "array", + "prefixItems": [ + { + "type": "string" + } + ], + "minItems": 1, + "maxItems": 1 }, { "type": "null" } ], - "description": "Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata.\n#### URL Encoding Required\n**Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding.\n\n#### Examples (Clear Format):\n- **Field existence**: `$.study` - Runs that have a study field defined\n- **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value\n- **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75\n- **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\"\n- **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements\n\n#### Examples (URL-Encoded Format):\n- **Field existence**: `%24.study`\n- **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)`\n- **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)`\n- **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)`\n- **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)`\n\n#### Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations\n- Regular expressions use `like_regex` with standard regex syntax\n- **Remember to URL-encode the entire JSONPath expression when making HTTP requests**\n\n ", - "title": "Custom Metadata" + "description": "Request optional output values. Used internally by the platform", + "title": "Include" }, - "description": "Use PostgreSQL JSONPath expressions to filter runs by their custom_metadata.\n#### URL Encoding Required\n**Important**: JSONPath expressions contain special characters that must be URL-encoded when used in query parameters. 
Most HTTP clients handle this automatically, but when constructing URLs manually, ensure proper encoding.\n\n#### Examples (Clear Format):\n- **Field existence**: `$.study` - Runs that have a study field defined\n- **Exact value match**: `$.study ? (@ == \"high\")` - Runs with specific study value\n- **Numeric comparison**: `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater than 0.75\n- **Array operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing \"draft\"\n- **Complex conditions**: `$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high resource requirements\n\n#### Examples (URL-Encoded Format):\n- **Field existence**: `%24.study`\n- **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)`\n- **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)`\n- **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)`\n- **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)`\n\n#### Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations\n- Regular expressions use `like_regex` with standard regex syntax\n- **Remember to URL-encode the entire JSONPath expression when making HTTP requests**\n\n ", - "examples": { - "no_filter": { - "summary": "No filter (returns all)", - "description": "Returns all items without filtering by custom metadata", - "value": "$" - }, - "field_exists": { - "summary": "Check if field exists", - "description": "Find applications that have a project field defined", - "value": "$.study" - }, - "field_has_value": { - "summary": "Check if field has a certain value", - "description": "Compare a field value against a certain value", - "value": "$.study ? (@ == \"abc-1\")" - }, - "numeric_comparisons": { - "summary": "Compare to a numeric value of a field", - "description": "Compare a field value against a numeric value of a field", - "value": "$.confidence_score ? (@ > 0.75)" - }, - "array_operations": { - "summary": "Check if an array contains a certain value", - "description": "Check if an array contains a certain value", - "value": "$.tags[*] ? (@ == \"draft\")" - }, - "complex_filters": { - "summary": "Combine multiple checks", - "description": "Combine multiple checks", - "value": "$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)" - } - } + "description": "Request optional output values. Used internally by the platform" }, { "name": "page", @@ -598,10 +343,8 @@ "type": "null" } ], - "description": "Sort the results by one or more fields. Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `run_id`\n- `application_version_id`\n- `organization_id`\n- `status`\n- `submitted_at`\n- `submitted_by`\n\n**Examples:**\n- `?sort=submitted_at` - Sort by creation time (ascending)\n- `?sort=-submitted_at` - Sort by creation time (descending)\n- `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending)\n", "title": "Sort" - }, - "description": "Sort the results by one or more fields. 
Use `+` for ascending and `-` for descending order.\n\n**Available fields:**\n- `run_id`\n- `application_version_id`\n- `organization_id`\n- `status`\n- `submitted_at`\n- `submitted_by`\n\n**Examples:**\n- `?sort=submitted_at` - Sort by creation time (ascending)\n- `?sort=-submitted_at` - Sort by creation time (descending)\n- `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending)\n" + } } ], "responses": { @@ -614,13 +357,13 @@ "items": { "$ref": "#/components/schemas/RunReadResponse" }, - "title": "Response List Runs V1 Runs Get" + "title": "Response List Application Runs V1 Runs Get" } } } }, "404": { - "description": "Run not found" + "description": "Application run not found" }, "422": { "description": "Validation Error", @@ -638,9 +381,9 @@ "tags": [ "Public" ], - "summary": "Initiate Run", - "description": "This endpoint initiates a processing run for a selected application and version, and returns a `run_id` for tracking purposes.\n\nSlide processing occurs asynchronously, allowing you to retrieve results for individual slides as soon as they\ncomplete processing. The system typically processes slides in batches of four, though this number may be reduced\nduring periods of high demand.\nBelow is an example of the required payload for initiating an Atlas H&E TME processing run.\n\n\n### Payload\n\nThe payload includes `application_id`, optional `version_number`, and `items` base fields.\n\n`application_id` is the unique identifier for the application.\n`version_number` is the semantic version to use. If not provided, the latest available version will be used.\n\n`items` includes the list of the items to process (slides, in case of HETA application).\nEvery item has a set of standard fields defined by the API, plus the custom_metadata, specific to the\nchosen application.\n\nExample payload structure with the comments:\n```\n{\n application_id: \"he-tme\",\n version_number: \"1.0.0-beta\",\n items: [{\n \"external_id\": \"slide_1\",\n \"input_artifacts\": [{\n \"name\": \"user_slide\",\n \"download_url\": \"https://...\",\n \"custom_metadata\": {\n \"specimen\": {\n \"disease\": \"LUNG_CANCER\",\n \"tissue\": \"LUNG\"\n },\n \"staining_method\": \"H&E\",\n \"width_px\": 136223,\n \"height_px\": 87761,\n \"resolution_mpp\": 0.2628238,\n \"media-type\":\"image/tiff\",\n \"checksum_base64_crc32c\": \"64RKKA==\"\n }\n }]\n }]\n}\n```\n\n| Parameter | Description |\n| :---- | :---- |\n| `application_id` required | Unique ID for the application |\n| `version_number` optional | Semantic version of the application. If not provided, the latest available version will be used |\n| `items` required | List of submitted items (WSIs) with parameters described below. |\n| `external_id` required | Unique WSI name or ID for easy reference to items, provided by the caller. The external_id should be unique across all items of the run. 
|\n| `input_artifacts` required | List of provided artifacts for a WSI; at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide itself), but for some other applications this can be a slide and an segmentation map |\n| `name` required | Type of artifact; Atlas H&E-TME supports only `\"input_slide\"` |\n| `download_url` required | Signed URL to the input file in the S3 or GCS; Should be valid for at least 6 days |\n| `specimen: disease` required | Supported cancer types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) |\n| `specimen: tissue` required | Supported tissue types for Atlas H&E-TME (see full list in Atlas H&E-TME manual) |\n| `staining_method` required | WSI stain /bio-marker; Atlas H&E-TME supports only `\"H&E\"` |\n| `width_px` required | Integer value. Number of pixels of the WSI in the X dimension. |\n| `height_px` required | Integer value. Number of pixels of the WSI in the Y dimension. |\n| `resolution_mpp` required | Resolution of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual |\n| `media-type` required | Supported media formats; available values are: image/tiff (for .tiff or .tif WSI) application/dicom (for DICOM ) application/zip (for zipped DICOM) application/octet-stream (for .svs WSI) |\n| `checksum_base64_crc32c` required | Base64 encoded big-endian CRC32C checksum of the WSI image |\n\n\n\n### Response\n\nThe endpoint returns the run UUID. After that the job is scheduled for the\nexecution in the background.\n\nTo check the status of the run call `v1/runs/{run_id}`.\n\n### Rejection\n\nApart from the authentication, authorization and malformed input error, the request can be\nrejected when the quota limit is exceeded. More details on quotas is described in the\ndocumentation", - "operationId": "create_run_v1_runs_post", + "summary": "Create Application Run", + "description": "The endpoint is used to process the input items by the chosen application version. The endpoint\nreturns the `application_run_id`. The processing of the items is done asynchronously.\n\nTo check the status or cancel the execution, use the /v1/runs/{application_run_id} endpoint.\n\n### Payload\n\nThe payload includes `application_version_id` and `items` base fields.\n\n`application_version_id` is the ID used for the `/v1/versions/{application_id}` endpoint.\n\n`items` includes the list of the items to process (slides, in the case of the HETA application).\nEvery item has a set of standard fields defined by the API, plus the metadata specific to the\nchosen application.\n\nExample payload structure with comments:\n```\n{\n application_version_id: \"test-app:v0.0.2\",\n items: [{\n \"reference\": \"slide_1\", <-- Input ID to connect the input and the output artifact\n \"input_artifacts\": [{\n \"name\": \"input_slide\", <-- Name of the artifact defined by the application (For HETA it is \"input_slide\")\n \"download_url\": \"https://...\", <-- Signed URL to the input file in S3 or GCS. Should be valid for more than 6 days\n \"metadata\": { <-- The metadata fields defined by the application. (An example field set for a slide file is provided)\n \"checksum_base64_crc32c\": \"abc12==\",\n \"mime_type\": \"image/tiff\",\n \"height\": 100,\n \"width\": 500,\n \"mpp\": 0.543\n }\n }]\n }]\n}\n```\n\n### Response\n\nThe endpoint returns the application run UUID. 
After that the job is scheduled for\nexecution in the background.\n\nTo check the status of the run call `v1/runs/{application_run_id}`.\n\n### Rejection\n\nApart from authentication, authorization and malformed input errors, the request can be\nrejected when the quota limit is exceeded. More details on quotas are described in the\ndocumentation", + "operationId": "create_application_run_v1_runs_post", "security": [ { "OAuth2AuthorizationCodeBearer": [] } ], @@ -668,13 +411,7 @@ } }, "404": { - "description": "Application version not found" - }, - "403": { - "description": "Forbidden - You don't have permission to create this run" - }, - "400": { - "description": "Bad Request - Input validation failed" + "description": "Application run not found" }, "422": { "description": "Validation Error", @@ -689,14 +426,14 @@ } } }, - "/v1/runs/{run_id}": { + "/v1/runs/{application_run_id}": { "get": { "tags": [ "Public" ], - "summary": "Get run details", - "description": "This endpoint allows the caller to retrieve the current status of a run along with other relevant run details.\n A run becomes available immediately after it is created through the POST `/runs/` endpoint.\n\n To download the output results, use GET `/runs/{run_id}/` items to get outputs for all slides.\nAccess to a run is restricted to the user who created it.", - "operationId": "get_run_v1_runs__run_id__get", + "summary": "Get Run", + "description": "Returns the details of the application run. The application run is available as soon as it is\ncreated via the `POST /runs/` endpoint. To download the item results, call\n`/runs/{application_run_id}/results`.\n\nThe application run is only available to the user who triggered it, regardless of their role.", + "operationId": "get_run_v1_runs__application_run_id__get", "security": [ { "OAuth2AuthorizationCodeBearer": [] @@ -704,16 +441,39 @@ ], "parameters": [ { - "name": "run_id", + "name": "application_run_id", "in": "path", "required": true, "schema": { "type": "string", "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" + "description": "Application run id, returned by `POST /runs/` endpoint", + "title": "Application Run Id" }, - "description": "Run id, returned by `POST /runs/` endpoint" + "description": "Application run id, returned by `POST /runs/` endpoint" + }, + { + "name": "include", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "array", + "prefixItems": [ + { + "type": "string" + } + ], + "minItems": 1, + "maxItems": 1 + }, + { + "type": "null" + } + ], + "title": "Include" + } } ], "responses": { @@ -728,10 +488,7 @@ } }, "404": { - "description": "Run not found because it was deleted." - }, - "403": { - "description": "Forbidden - You don't have permission to see this run" + "description": "Application run not found" }, "422": { "description": "Validation Error", @@ -746,14 +503,14 @@ } } }, - "/v1/runs/{run_id}/cancel": { + "/v1/runs/{application_run_id}/cancel": { "post": { "tags": [ "Public" ], - "summary": "Cancel Run", - "description": "The run can be canceled by the user who created the run.\n\nThe execution can be canceled any time while the application is not in a final state. 
The\npending items will not be processed and will not add to the cost.\n\nWhen the application is canceled, the already completed items stay available for download.", - "operationId": "cancel_run_v1_runs__run_id__cancel_post", + "summary": "Cancel Application Run", + "description": "The application run can be canceled by the user who created the application run.\n\nThe execution can be canceled any time while the application is not in a final state. The\npending items will not be processed and will not add to the cost.\n\nWhen the application is canceled, the already completed items stay available for download.", + "operationId": "cancel_application_run_v1_runs__application_run_id__cancel_post", "security": [ { "OAuth2AuthorizationCodeBearer": [] @@ -761,16 +518,16 @@ ], "parameters": [ { - "name": "run_id", + "name": "application_run_id", "in": "path", "required": true, "schema": { "type": "string", "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" + "description": "Application run id, returned by `POST /runs/` endpoint", + "title": "Application Run Id" }, - "description": "Run id, returned by `POST /runs/` endpoint" + "description": "Application run id, returned by `POST /runs/` endpoint" } ], "responses": { @@ -783,13 +540,7 @@ } }, "404": { - "description": "Run not found" - }, - "403": { - "description": "Forbidden - You don't have permission to cancel this run" - }, - "409": { - "description": "Conflict - The Run is already cancelled" + "description": "Application run not found" }, "422": { "description": "Validation Error", @@ -804,14 +555,14 @@ } } }, - "/v1/runs/{run_id}/items": { + "/v1/runs/{application_run_id}/results": { "get": { "tags": [ "Public" ], - "summary": "List Run Items", - "description": "List items in a run with filtering, sorting, and pagination capabilities.\n\nReturns paginated items within a specific run. Results can be filtered\nby item IDs, external_ids, status, and custom_metadata using JSONPath expressions.\n\n## JSONPath Metadata Filtering\nUse PostgreSQL JSONPath expressions to filter items using their custom_metadata.\n\n### Examples:\n- **Field existence**: `$.case_id` - Results that have a case_id field defined\n- **Exact value match**: `$.priority ? (@ == \"high\")` - Results with high priority\n- **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with high confidence\n- **Array operations**: `$.flags[*] ? (@ == \"reviewed\")` - Results flagged as reviewed\n- **Complex conditions**: `$.metrics ? 
(@.accuracy > 0.9 && @.recall > 0.8)` - Results meeting performance thresholds\n\n## Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?` operator\n- The `$.` prefix is automatically added to root-level field references if missing\n- String values in conditions must be enclosed in double quotes\n- Use `&&` for AND operations and `||` for OR operations", - "operationId": "list_run_items_v1_runs__run_id__items_get", + "summary": "List Run Results", + "description": "Get the list of results for the run items", + "operationId": "list_run_results_v1_runs__application_run_id__results_get", "security": [ { "OAuth2AuthorizationCodeBearer": [] @@ -819,16 +570,16 @@ ], "parameters": [ { - "name": "run_id", + "name": "application_run_id", "in": "path", "required": true, "schema": { "type": "string", "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" + "description": "Application run id, returned by `POST /runs/` endpoint", + "title": "Application Run Id" }, - "description": "Run id, returned by `POST /runs/` endpoint" + "description": "Application run id, returned by `POST /runs/` endpoint" }, { "name": "item_id__in", @@ -847,13 +598,13 @@ "type": "null" } ], - "description": "Filter for item ids", + "description": "Filter for item IDs", "title": "Item Id In" }, - "description": "Filter for item ids" + "description": "Filter for item IDs" }, { - "name": "external_id__in", + "name": "reference__in", "in": "query", "required": false, "schema": { "anyOf": [ { "type": "array", "items": { "type": "string" } }, { "type": "null" } ], - "description": "Filter for items by their external_id from the input payload", - "title": "External Id In" - }, - "description": "Filter for items by their external_id from the input payload" - }, - { - "name": "state", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "$ref": "#/components/schemas/ItemState" - }, - { - "type": "null" - } - ], - "description": "Filter items by their state", - "title": "State" - }, - "description": "Filter items by their state" - }, - { - "name": "termination_reason", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "$ref": "#/components/schemas/ItemTerminationReason" - }, - { - "type": "null" - } - ], - "description": "Filter items by their termination reason. Only applies to TERMINATED items.", - "title": "Termination Reason" + "description": "Filter for items by their reference from the input payload", + "title": "Reference In" }, - "description": "Filter items by their termination reason. Only applies to TERMINATED items." 
+ "description": "Filter for items by their reference from the input payload" }, { - "name": "custom_metadata", + "name": "status__in", "in": "query", "required": false, "schema": { "anyOf": [ { - "type": "string", - "maxLength": 1000 + "type": "array", + "items": { + "$ref": "#/components/schemas/ItemStatus" + } }, { "type": "null" } ], - "description": "JSONPath expression to filter items by their custom_metadata", - "title": "Custom Metadata" + "description": "Filter for items in certain statuses", + "title": "Status In" }, - "description": "JSONPath expression to filter items by their custom_metadata", - "examples": { - "no_filter": { - "summary": "No filter (returns all)", - "description": "Returns all items without filtering by custom metadata", - "value": "$" - }, - "field_exists": { - "summary": "Check if field exists", - "description": "Find items that have a project field defined", - "value": "$.project" - }, - "field_has_value": { - "summary": "Check if field has a certain value", - "description": "Compare a field value against a certain value", - "value": "$.project ? (@ == \"cancer-research\")" - }, - "numeric_comparisons": { - "summary": "Compare to a numeric value of a field", - "description": "Compare a field value against a numeric value of a field", - "value": "$.duration_hours ? (@ < 2)" - }, - "array_operations": { - "summary": "Check if an array contains a certain value", - "description": "Check if an array contains a certain value", - "value": "$.tags[*] ? (@ == \"production\")" - }, - "complex_filters": { - "summary": "Combine multiple checks", - "description": "Combine multiple checks", - "value": "$.resources ? (@.gpu_count > 2 && @.memory_gb >= 16)" - } - } + "description": "Filter for items in certain statuses" }, { "name": "page", @@ -999,10 +684,8 @@ "type": "null" } ], - "description": "Sort the items by one or more fields. Use `+` for ascending and `-` for descending order.\n **Available fields:**\n- `item_id`\n- `run_id`\n- `external_id`\n- `custom_metadata`\n\n**Examples:**\n- `?sort=item_id` - Sort by id of the item (ascending)\n- `?sort=-external_id` - Sort by external ID (descending)\n- `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending)", "title": "Sort" - }, - "description": "Sort the items by one or more fields. Use `+` for ascending and `-` for descending order.\n **Available fields:**\n- `item_id`\n- `run_id`\n- `external_id`\n- `custom_metadata`\n\n**Examples:**\n- `?sort=item_id` - Sort by id of the item (ascending)\n- `?sort=-external_id` - Sort by external ID (descending)\n- `?sort=custom_metadata&sort=-external_id` - Sort by metadata, then by external ID (descending)" + } } ], "responses": { @@ -1015,13 +698,13 @@ "items": { "$ref": "#/components/schemas/ItemResultReadResponse" }, - "title": "Response List Run Items V1 Runs Run Id Items Get" + "title": "Response List Run Results V1 Runs Application Run Id Results Get" } } } }, "404": { - "description": "Run not found" + "description": "Application run not found" }, "422": { "description": "Validation Error", @@ -1034,16 +717,14 @@ } } } - } - }, - "/v1/runs/{run_id}/items/{external_id}": { - "get": { + }, + "delete": { "tags": [ "Public" ], - "summary": "Get Item By Run", - "description": "Retrieve details of a specific item (slide) by its external ID and the run ID.", - "operationId": "get_item_by_run_v1_runs__run_id__items__external_id__get", + "summary": "Delete Application Run Results", + "description": "Delete the application run results. 
It can only be called when the application is in a final\nstate (meaning it's not in `received` or `pending` states). To delete the results of the running\nartifacts, first call `POST /v1/runs/{application_run_id}/cancel` to cancel the application run.\n\nThe output results are deleted automatically 30 days after the application run is finished.", + "operationId": "delete_application_run_results_v1_runs__application_run_id__results_delete", "security": [ { "OAuth2AuthorizationCodeBearer": [] @@ -1051,45 +732,24 @@ ], "parameters": [ { - "name": "run_id", + "name": "application_run_id", "in": "path", "required": true, "schema": { "type": "string", "format": "uuid", - "description": "The run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "The run id, returned by `POST /runs/` endpoint" - }, - { - "name": "external_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "description": "The `external_id` that was defined for the item by the customer that triggered the run.", - "title": "External Id" + "description": "Application run id, returned by `POST /runs/` endpoint", + "title": "Application Run Id" }, - "description": "The `external_id` that was defined for the item by the customer that triggered the run." + "description": "Application run id, returned by `POST /runs/` endpoint" } ], "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/ItemResultReadResponse" - } - } - } + "204": { + "description": "Successful Response" }, "404": { - "description": "Not Found - Item with given ID does not exist" - }, - "403": { - "description": "Forbidden - You don't have permission to see this item" + "description": "Application run not found" }, "422": { "description": "Validation Error", @@ -1104,195 +764,12 @@ } } }, - "/v1/runs/{run_id}/artifacts": { - "delete": { + "/v1/me": { + "get": { "tags": [ "Public" ], - "summary": "Delete Run Items", - "description": "This endpoint allows the caller to explicitly delete artifacts generated by a run.\nIt can only be invoked when the run has reached a final state\n(PROCESSED, CANCELED_SYSTEM, CANCELED_USER).\nNote that by default, all artifacts are automatically deleted 30 days after the run finishes,\n regardless of whether the caller explicitly requests deletion.", - "operationId": "delete_run_items_v1_runs__run_id__artifacts_delete", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "Run id, returned by `POST /runs/` endpoint" - } - ], - "responses": { - "200": { - "description": "Run artifacts deleted", - "content": { - "application/json": { - "schema": {} - } - } - }, - "404": { - "description": "Run not found" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/custom-metadata": { - "put": { - "tags": [ - "Public" - ], - "summary": "Put Run Custom Metadata", - "operationId": "put_run_custom_metadata_v1_runs__run_id__custom_metadata_put", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - 
"type": "string", - "format": "uuid", - "description": "Run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "Run id, returned by `POST /runs/` endpoint" - } - ], - "requestBody": { - "required": true, - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/CustomMetadataUpdateRequest" - } - } - } - }, - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": {} - } - } - }, - "404": { - "description": "Run not found" - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/runs/{run_id}/items/{external_id}/custom-metadata": { - "put": { - "tags": [ - "Public" - ], - "summary": "Put Item Custom Metadata By Run", - "operationId": "put_item_custom_metadata_by_run_v1_runs__run_id__items__external_id__custom_metadata_put", - "security": [ - { - "OAuth2AuthorizationCodeBearer": [] - } - ], - "parameters": [ - { - "name": "run_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "format": "uuid", - "description": "The run id, returned by `POST /runs/` endpoint", - "title": "Run Id" - }, - "description": "The run id, returned by `POST /runs/` endpoint" - }, - { - "name": "external_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "description": "The `external_id` that was defined for the item by the customer that triggered the run.", - "title": "External Id" - }, - "description": "The `external_id` that was defined for the item by the customer that triggered the run." - } - ], - "requestBody": { - "required": true, - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/CustomMetadataUpdateRequest" - } - } - } - }, - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": {} - } - } - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/v1/me": { - "get": { - "tags": [ - "Public" - ], - "summary": "Get current user", - "description": "Retrieves your identity details, including name, email, and organization.\nThis is useful for verifying that the request is being made under the correct user profile\nand organization context, as well as confirming that the expected environment variables are correctly set\n(in case you are using Python SDK)", + "summary": "Get Me", "operationId": "get_me_v1_me_get", "responses": { "200": { @@ -1323,7 +800,7 @@ "title": "Application Id", "description": "Application ID", "examples": [ - "he-tme" + "h-e-tme" ] }, "name": { @@ -1331,7 +808,7 @@ "title": "Name", "description": "Application display name", "examples": [ - "Atlas H&E-TME" + "HETA" ] }, "regulatory_classes": { @@ -1340,28 +817,17 @@ }, "type": "array", "title": "Regulatory Classes", - "description": "Regulatory classes, to which the applications comply with. 
Possible values include: RUO, IVDR, FDA.", + "description": "Regulatory classes with which the application complies", "examples": [ [ - "RUO" + "RUO" ] ] }, "description": { "type": "string", "title": "Description", - "description": "Describing what the application can do ", - "examples": [ - "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment." - ] - }, - "versions": { - "items": { - "$ref": "#/components/schemas/ApplicationVersion" - }, - "type": "array", - "title": "Versions", - "description": "All version numbers available to the user" + "description": "Application documentation" } }, "type": "object", "required": [ "application_id", "name", "regulatory_classes", - "description", - "versions" + "description" ], - "title": "ApplicationReadResponse", - "description": "Response schema for `List available applications` and `Read Application by Id` endpoints" + "title": "ApplicationReadResponse" }, + "ApplicationRunStatus": { + "type": "string", + "enum": [ + "CANCELED_SYSTEM", + "CANCELED_USER", + "COMPLETED", + "COMPLETED_WITH_ERROR", + "RECEIVED", + "REJECTED", + "RUNNING", + "SCHEDULED" + ], + "title": "ApplicationRunStatus" }, - "ApplicationReadShortResponse": { + "ApplicationVersionReadResponse": { "properties": { - "application_id": { + "application_version_id": { "type": "string", - "title": "Application Id", - "description": "Application ID", + "title": "Application Version Id", + "description": "Application version ID", "examples": [ - "he-tme" + "h-e-tme:v0.0.1" ] }, - "name": { + "version": { "type": "string", - "title": "Name", - "description": "Application display name", - "examples": [ - "Atlas H&E-TME" - ] - }, - "regulatory_classes": { - "items": { - "type": "string" - }, - "type": "array", - "title": "Regulatory Classes", - "description": "Regulatory classes, to which the applications comply with. Possible values include: RUO, IVDR, FDA.", - "examples": [ - [ - "RUO" - ] - ] + "title": "Version", + "description": "Semantic version of the application" }, - "description": { + "application_id": { "type": "string", - "title": "Description", - "description": "Describing what the application can do ", - "examples": [ - "The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering comprehensive insights into the tumor microenvironment." 
- ] + "title": "Application Id", + "description": "Application ID" }, - "latest_version": { + "flow_id": { "anyOf": [ { - "$ref": "#/components/schemas/ApplicationVersion" + "type": "string", + "format": "uuid" }, { "type": "null" } ], - "description": "The version with highest version number available to the user" - } - }, - "type": "object", - "required": [ - "application_id", - "name", - "regulatory_classes", - "description" - ], - "title": "ApplicationReadShortResponse", - "description": "Response schema for `List available applications` and `Read Application by Id` endpoints" - }, - "ApplicationVersion": { - "properties": { - "number": { + "title": "Flow Id", + "description": "Flow ID, used internally by the platform" + }, + "changelog": { "type": "string", - "title": "Number", - "description": "The number of the latest version", - "examples": [ - "1.0.0" - ] + "title": "Changelog", + "description": "Description of the changes relative to the previous version" }, - "released_at": { + "input_artifacts": { + "items": { + "$ref": "#/components/schemas/InputArtifactReadResponse" + }, + "type": "array", + "title": "Input Artifacts", + "description": "List of the input fields, provided by the User" + }, + "output_artifacts": { + "items": { + "$ref": "#/components/schemas/OutputArtifactReadResponse" + }, + "type": "array", + "title": "Output Artifacts", + "description": "List of the output fields, generated by the application" + }, + "created_at": { "type": "string", "format": "date-time", - "title": "Released At", - "description": "The timestamp for when the application version was made available in the Platform", - "examples": [ - "2025-09-15T10:30:45.123Z" - ] + "title": "Created At", + "description": "The timestamp when the application version was registered" } }, "type": "object", "required": [ - "number", - "released_at" - ], - "title": "ApplicationVersion" - }, - "ArtifactOutput": { - "type": "string", - "enum": [ - "NONE", - "AVAILABLE", - "DELETED_BY_USER", - "DELETED_BY_SYSTEM" - ], - "title": "ArtifactOutput" - }, - "ArtifactState": { - "type": "string", - "enum": [ - "PENDING", - "PROCESSING", - "TERMINATED" - ], - "title": "ArtifactState" - }, - "ArtifactTerminationReason": { - "type": "string", - "enum": [ - "SUCCEEDED", - "USER_ERROR", - "SYSTEM_ERROR", - "SKIPPED" + "application_version_id", + "version", + "application_id", + "changelog", + "input_artifacts", + "output_artifacts", + "created_at" ], - "title": "ArtifactTerminationReason" - }, - "CustomMetadataUpdateRequest": { - "properties": { - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "JSON metadata that should be set for the run", - "examples": [ - { - "department": "D1", - "study": "abc-1" - } - ] - }, - "custom_metadata_checksum": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata Checksum", - "description": "Optional field to verify that the latest custom metadata was known. 
If set to the checksum retrieved via the /runs endpoint, it must match the checksum of the current value in the database.", - "examples": [ - "f54fe109" - ] - } - }, - "type": "object", - "title": "CustomMetadataUpdateRequest" + "title": "ApplicationVersionReadResponse" }, "HTTPValidationError": { "properties": { @@ -1544,41 +939,14 @@ "type": "object", "title": "HTTPValidationError" }, - "InputArtifact": { - "properties": { - "name": { - "type": "string", - "title": "Name" - }, - "mime_type": { - "type": "string", - "pattern": "^\\w+\\/\\w+[-+.|\\w+]+\\w+$", - "title": "Mime Type", - "examples": [ - "image/tiff" - ] - }, - "metadata_schema": { - "type": "object", - "title": "Metadata Schema" - } - }, - "type": "object", - "required": [ - "name", - "mime_type", - "metadata_schema" - ], - "title": "InputArtifact" - }, "InputArtifactCreationRequest": { "properties": { "name": { "type": "string", "title": "Name", - "description": "Type of artifact. For Atlas H&E-TME, use \"input_slide\"", + "description": "The artifact name according to the application version. List of required artifacts is returned by `/v1/versions/{application_version_id}`. The artifact names are located in the `input_artifacts.[].name` value", "examples": [ - "input_slide" + "slide" ] }, "download_url": { @@ -1613,37 +981,43 @@ "download_url", "metadata" ], - "title": "InputArtifactCreationRequest", - "description": "Input artifact containing the slide image and associated metadata." + "title": "InputArtifactCreationRequest" }, - "ItemCreationRequest": { + "InputArtifactReadResponse": { "properties": { - "external_id": { + "name": { + "type": "string", + "title": "Name" + }, + "mime_type": { "type": "string", - "maxLength": 255, - "title": "External Id", - "description": "Unique identifier for this item within the run. Used for referencing items. Must be unique across all items in the same run", + "pattern": "^\\w+\\/\\w+[-+.|\\w+]+\\w+$", + "title": "Mime Type", "examples": [ - "slide_1", - "patient_001_slide_A", - "sample_12345" + "image/tiff" ] }, - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "Optional JSON custom_metadata to store additional information alongside an item.", + "metadata_schema": { + "type": "object", + "title": "Metadata Schema" + } + }, + "type": "object", + "required": [ + "name", + "mime_type", + "metadata_schema" + ], + "title": "InputArtifactReadResponse" + }, + "ItemCreationRequest": { + "properties": { + "reference": { + "type": "string", + "title": "Reference", + "description": "The ID of the slide provided by the caller. The reference should be unique across all items of the application run", "examples": [ - { - "case": "abc" - } + "case-no-1" ] }, "input_artifacts": { @@ -1652,44 +1026,15 @@ }, "type": "array", "title": "Input Artifacts", - "description": "List of input artifacts for this item. 
For Atlas H&E-TME, typically contains one artifact (the slide image)", - "examples": [ - [ - { - "download_url": "https://example-bucket.s3.amazonaws.com/slide1.tiff", - "metadata": { - "checksum_base64_crc32c": "64RKKA==", - "height_px": 87761, - "media-type": "image/tiff", - "resolution_mpp": 0.2628238, - "specimen": { - "disease": "LUNG_CANCER", - "tissue": "LUNG" - }, - "staining_method": "H&E", - "width_px": 136223 - }, - "name": "input_slide" - } - ] - ] + "description": "All the input files of the item, required by the application version" } }, "type": "object", "required": [ - "external_id", + "reference", "input_artifacts" ], - "title": "ItemCreationRequest", - "description": "Individual item (slide) to be processed in a run." + "title": "ItemCreationRequest" }, - "ItemOutput": { - "type": "string", - "enum": [ - "NONE", - "FULL" - ], - "title": "ItemOutput" - }, "ItemResultReadResponse": { "properties": { "item_id": { "type": "string", "format": "uuid", "title": "Item Id", "description": "Item UUID generated by the Platform" }, - "external_id": { + "application_run_id": { "type": "string", - "title": "External Id", - "description": "The external_id of the item from the user payload", - "examples": [ - "slide_1" - ] - }, - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "The custom_metadata of the item that has been provided by the user on run creation." - }, - "custom_metadata_checksum": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata Checksum", - "description": "The checksum of the `custom_metadata` field.\nCan be used in the `PUT /runs/{run-id}/items/{external_id}/custom_metadata`\nrequest to avoid unwanted override of the values in concurrent requests.", - "examples": [ - "f54fe109" - ] - }, - "state": { - "$ref": "#/components/schemas/ItemState", - "description": "\nThe item moves from `PENDING` to `PROCESSING` to `TERMINATED` state.\nWhen terminated, consult the `termination_reason` property to see whether it was successful.\n " + "format": "uuid", + "title": "Application Run Id", + "description": "Application run UUID to which the item belongs" }, - "output": { - "$ref": "#/components/schemas/ItemOutput", - "description": "The output status of the item (NONE, FULL)" + "reference": { + "type": "string", + "title": "Reference", + "description": "The reference of the item from the user payload" }, - "termination_reason": { - "anyOf": [ - { - "$ref": "#/components/schemas/ItemTerminationReason" - }, - { - "type": "null" - } - ], - "description": "\nWhen the `state` is `TERMINATED` this will explain why\n`SUCCEEDED` -> Successful processing.\n`USER_ERROR` -> Failed because the provided input was invalid.\n`SYSTEM_ERROR` -> There was an error in the model or platform.\n`SKIPPED` -> Was cancelled\n" + "status": { + "$ref": "#/components/schemas/ItemStatus", + "description": "\nWhen the item is not processed yet, the status is set to `pending`.\n\nWhen the item is successfully finished, the status is set to `succeeded`, and the processing results\nbecome available for download in the `output_artifacts` field.\n\nWhen the item processing fails because the provided item is invalid, the status is set to\n`error_user`. When the item processing fails because of an error in the model or platform,\nthe status is set to `error_system`. 
When the application_run is canceled, the status of all\npending items is set to either `cancelled_user` or `cancelled_system`.\n " }, - "error_message": { + "error": { "anyOf": [ { "type": "string" @@ -1762,30 +1068,8 @@ "type": "null" } ], - "title": "Error Message", - "description": "\n The error message in case the `termination_reason` is in `USER_ERROR` or `SYSTEM_ERROR`\n ", - "examples": [ - "This item was not processed because the threshold of 3 items finishing in error state (user or system error) was reached before the item was processed.", - "The item was not processed because the run was cancelled by the user before the item was processed.User error raised by Application because the input data provided by the user cannot be processed:\nThe image width is 123000 px, but the maximum width is 100000 px", - "A system error occurred during the item execution:\n System went out of memory in cell classification", - "An unknown system error occurred during the item execution" - ] - }, - "terminated_at": { - "anyOf": [ - { - "type": "string", - "format": "date-time" - }, - { - "type": "null" - } - ], - "title": "Terminated At", - "description": "Timestamp showing when the item reached a terminal state.", - "examples": [ - "2024-01-15T10:30:45.123Z" - ] + "title": "Error", + "description": "\nThe error message in case the item is in `error_system` or `error_user` state\n " }, "output_artifacts": { "items": { @@ -1794,52 +1078,30 @@ "type": "array", "title": "Output Artifacts", "description": "\nThe list of the results generated by the application algorithm. The number of files and their\ntypes depend on the particular application version, call `/v1/versions/{version_id}` to get\nthe details.\n " - }, - "error_code": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Code", - "description": "Error code describing the error that occurred during item processing.", - "readOnly": true } }, "type": "object", "required": [ "item_id", - "external_id", - "custom_metadata", - "state", - "output", - "output_artifacts", - "error_code" + "application_run_id", + "reference", + "status", + "error", + "output_artifacts" ], - "title": "ItemResultReadResponse", - "description": "Response schema for items in `List Run Items` endpoint" + "title": "ItemResultReadResponse" }, - "ItemState": { + "ItemStatus": { "type": "string", "enum": [ "PENDING", - "PROCESSING", - "TERMINATED" - ], - "title": "ItemState" - }, - "ItemTerminationReason": { - "type": "string", - "enum": [ - "SUCCEEDED", - "USER_ERROR", - "SYSTEM_ERROR", - "SKIPPED" + "CANCELED_USER", + "CANCELED_SYSTEM", + "ERROR_USER", + "ERROR_SYSTEM", + "SUCCEEDED" ], - "title": "ItemTerminationReason" + "title": "ItemStatus" }, "MeReadResponse": { "properties": { @@ -1855,18 +1117,13 @@ "user", "organization" ], - "title": "MeReadResponse", - "description": "Response schema for `Get current user` endpoint" + "title": "MeReadResponse" }, "OrganizationReadResponse": { "properties": { "id": { "type": "string", - "title": "Id", - "description": "Unique organization identifier", - "examples": [ - "org_123456" - ] + "title": "Id" }, "name": { "anyOf": [ @@ -1877,11 +1134,7 @@ "type": "null" } ], - "title": "Name", - "description": "Organization name (E.g. “aignx”)", - "examples": [ - "aignx" - ] + "title": "Name" }, "display_name": { "anyOf": [ @@ -1892,59 +1145,31 @@ "type": "null" } ], - "title": "Display Name", - "description": "Public organization name (E.g. 
“Aignostics GmbH”)", - "examples": [ - "Aignostics GmbH" - ] + "title": "Display Name" }, "aignostics_bucket_hmac_access_key_id": { "type": "string", - "title": "Aignostics Bucket Hmac Access Key Id", - "description": "HMAC access key ID for the Aignostics-provided storage bucket. Used to authenticate requests for uploading files and generating signed URLs", - "examples": [ - "YOUR_HMAC_ACCESS_KEY_ID" - ] + "title": "Aignostics Bucket Hmac Access Key Id" }, "aignostics_bucket_hmac_secret_access_key": { "type": "string", - "title": "Aignostics Bucket Hmac Secret Access Key", - "description": "HMAC secret access key paired with the access key ID. Keep this credential secure.", - "examples": [ - "YOUR/HMAC/SECRET_ACCESS_KEY" - ] + "title": "Aignostics Bucket Hmac Secret Access Key" }, "aignostics_bucket_name": { "type": "string", - "title": "Aignostics Bucket Name", - "description": "Name of the bucket provided by Aignostics for storing input artifacts (slide images)", - "examples": [ - "aignostics-platform-bucket" - ] + "title": "Aignostics Bucket Name" }, "aignostics_bucket_protocol": { "type": "string", - "title": "Aignostics Bucket Protocol", - "description": "Protocol to use for bucket access. Defines the URL scheme for connecting to the storage service", - "examples": [ - "gs" - ] + "title": "Aignostics Bucket Protocol" }, "aignostics_logfire_token": { "type": "string", - "title": "Aignostics Logfire Token", - "description": "Authentication token for Logfire observability service. Enables sending application logs and performance metrics to Aignostics for monitoring and support", - "examples": [ - "your-logfire-token" - ] + "title": "Aignostics Logfire Token" }, "aignostics_sentry_dsn": { "type": "string", - "title": "Aignostics Sentry Dsn", - "description": "Data Source Name (DSN) for Sentry error tracking service. Allows automatic reporting of errors and exceptions to Aignostics support team", - "examples": [ - "https://2354s3#ewsha@o44.ingest.us.sentry.io/34345123432" - ] + "title": "Aignostics Sentry Dsn" } }, "type": "object", @@ -1958,9 +1183,9 @@ "aignostics_sentry_dsn" ], "title": "OrganizationReadResponse", - "description": "Part of response schema for Organization object in `Get current user` endpoint.\nThis model corresponds to the response schema returned from\nAuth0 GET /v2/organizations/{id} endpoint, flattens out the metadata out\nand doesn't return branding or token_quota objects.\nFor details, see:\nhttps://auth0.com/docs/api/management/v2/organizations/get-organizations-by-id\n\n#### Configuration for integrating with Aignostics Platform services.\n\nThe Aignostics Platform API requires signed URLs for input artifacts (slide images). To simplify this process,\nAignostics provides a dedicated storage bucket. The HMAC credentials below grant read and write\naccess to this bucket, allowing you to upload files and generate the signed URLs needed for API calls.\n\nAdditionally, logging and error reporting tokens enable Aignostics to provide better support and monitor\nsystem performance for your integration." 
+ "description": "This model corresponds to the response schema returned from\nAuth0 GET /v2/organizations/{id} endpoint, flattens out the metadata out\nand doesn't return branding or token_quota objects.\nFor details, see:\nhttps://auth0.com/docs/api/management/v2/organizations/get-organizations-by-id" }, - "OutputArtifact": { + "OutputArtifactReadResponse": { "properties": { "name": { "type": "string", @@ -1980,9 +1205,6 @@ }, "scope": { "$ref": "#/components/schemas/OutputArtifactScope" - }, - "visibility": { - "$ref": "#/components/schemas/OutputArtifactVisibility" } }, "type": "object", @@ -1990,10 +1212,9 @@ "name", "mime_type", "metadata_schema", - "scope", - "visibility" + "scope" ], - "title": "OutputArtifact" + "title": "OutputArtifactReadResponse" }, "OutputArtifactResultReadResponse": { "properties": { @@ -2006,53 +1227,12 @@ "name": { "type": "string", "title": "Name", - "description": "\nName of the output from the output schema from the `/v1/versions/{version_id}` endpoint.\n ", - "examples": [ - "tissue_qc:tiff_heatmap" - ] + "description": "\nName of the output from the output schema from the `/v1/versions/{version_id}` endpoint.\n " }, "metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], + "type": "object", "title": "Metadata", - "description": "The metadata of the output artifact, provided by the application. Can only be None if the artifact itself was deleted." - }, - "state": { - "$ref": "#/components/schemas/ArtifactState", - "description": "The current state of the artifact (PENDING, PROCESSING, TERMINATED)" - }, - "termination_reason": { - "anyOf": [ - { - "$ref": "#/components/schemas/ArtifactTerminationReason" - }, - { - "type": "null" - } - ], - "description": "The reason for termination when state is TERMINATED" - }, - "output": { - "$ref": "#/components/schemas/ArtifactOutput", - "description": "The output status of the artifact (NONE, FULL)" - }, - "error_message": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Message", - "description": "Error message when artifact is in error state" + "description": "The metadata of the output artifact, provided by the application" }, "download_url": { "anyOf": [ @@ -2067,30 +1247,15 @@ } ], "title": "Download Url", - "description": "\nThe download URL to the output file. The URL is valid for 1 hour after the endpoint is called.\nA new URL is generated every time the endpoint is called.\n " - }, - "error_code": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Code", - "description": "Error code describing the error that occurred during artifact processing.", - "readOnly": true + "description": "\nThe download URL to the output file. 
The URL is valid for 1 hour after the endpoint is called.\nA new URL is generated every time the endpoint is called.\n " } }, "type": "object", "required": [ "output_artifact_id", "name", - "state", - "output", - "download_url", - "error_code" + "metadata", + "download_url" ], "title": "OutputArtifactResultReadResponse" }, @@ -2102,55 +1267,91 @@ ], "title": "OutputArtifactScope" }, - "OutputArtifactVisibility": { - "type": "string", - "enum": [ - "INTERNAL", - "EXTERNAL" + "PayloadInputArtifact": { + "properties": { + "input_artifact_id": { + "type": "string", + "format": "uuid", + "title": "Input Artifact Id" + }, + "metadata": { + "type": "object", + "title": "Metadata" + }, + "download_url": { + "type": "string", + "minLength": 1, + "format": "uri", + "title": "Download Url" + } + }, + "type": "object", + "required": [ + "metadata", + "download_url" ], - "title": "OutputArtifactVisibility" + "title": "PayloadInputArtifact" }, - "RunCreationRequest": { + "PayloadItem": { "properties": { - "application_id": { + "item_id": { "type": "string", - "title": "Application Id", - "description": "Unique ID for the application to use for processing", - "examples": [ - "he-tme" - ] + "format": "uuid", + "title": "Item Id" }, - "version_number": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Version Number", - "description": "Semantic version of the application to use for processing. If not provided, the latest available version will be used", - "examples": [ - "1.0.0-beta1" - ] + "input_artifacts": { + "additionalProperties": { + "$ref": "#/components/schemas/PayloadInputArtifact" + }, + "type": "object", + "title": "Input Artifacts" }, - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "Optional JSON metadata to store additional information alongside the run", + "output_artifacts": { + "additionalProperties": { + "$ref": "#/components/schemas/PayloadOutputArtifact" + }, + "type": "object", + "title": "Output Artifacts" + } + }, + "type": "object", + "required": [ + "item_id", + "input_artifacts", + "output_artifacts" + ], + "title": "PayloadItem" + }, + "PayloadOutputArtifact": { + "properties": { + "output_artifact_id": { + "type": "string", + "format": "uuid", + "title": "Output Artifact Id" + }, + "data": { + "$ref": "#/components/schemas/TransferUrls" + }, + "metadata": { + "$ref": "#/components/schemas/TransferUrls" + } + }, + "type": "object", + "required": [ + "output_artifact_id", + "data", + "metadata" + ], + "title": "PayloadOutputArtifact" + }, + "RunCreationRequest": { + "properties": { + "application_version_id": { + "type": "string", + "title": "Application Version Id", + "description": "Application version ID", "examples": [ - { - "department": "D1", - "study": "abc-1" - } + "h-e-tme:v1.2.3" ] }, "items": { @@ -2158,303 +1359,155 @@ "$ref": "#/components/schemas/ItemCreationRequest" }, "type": "array", - "minItems": 1, "title": "Items", - "description": "List of items (slides) to process. 
Each item represents a whole slide image (WSI) with its associated metadata and artifacts", - "examples": [ - [ - { - "external_id": "slide_1", - "input_artifacts": [ - { - "download_url": "https://example-bucket.s3.amazonaws.com/slide1.tiff?signature=...", - "metadata": { - "checksum_base64_crc32c": "64RKKA==", - "height_px": 87761, - "media-type": "image/tiff", - "resolution_mpp": 0.2628238, - "specimen": { - "disease": "LUNG_CANCER", - "tissue": "LUNG" - }, - "staining_method": "H&E", - "width_px": 136223 - }, - "name": "input_slide" - } - ] - } - ] - ] + "description": "List of the items to process by the application" } }, "type": "object", "required": [ - "application_id", + "application_version_id", "items" ], "title": "RunCreationRequest", - "description": "Request schema for `Initiate Run` endpoint.\nIt describes which application version is chosen, and which user data should be processed." + "description": "Application run payload. It describes which application version is chosen, and which user data\nshould be processed." }, "RunCreationResponse": { "properties": { - "run_id": { + "application_run_id": { "type": "string", "format": "uuid", - "title": "Run Id", - "default": "Run id", - "examples": [ - "3fa85f64-5717-4562-b3fc-2c963f66afa6" - ] + "title": "Application Run Id", + "default": "Application run id" } }, "type": "object", "title": "RunCreationResponse" }, - "RunItemStatistics": { - "properties": { - "item_count": { - "type": "integer", - "title": "Item Count", - "description": "Total number of the items in the run" - }, - "item_pending_count": { - "type": "integer", - "title": "Item Pending Count", - "description": "The number of items in `PENDING` state" - }, - "item_processing_count": { - "type": "integer", - "title": "Item Processing Count", - "description": "The number of items in `PROCESSING` state" - }, - "item_user_error_count": { - "type": "integer", - "title": "Item User Error Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `USER_ERROR`" - }, - "item_system_error_count": { - "type": "integer", - "title": "Item System Error Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `SYSTEM_ERROR`" - }, - "item_skipped_count": { - "type": "integer", - "title": "Item Skipped Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `SKIPPED`" - }, - "item_succeeded_count": { - "type": "integer", - "title": "Item Succeeded Count", - "description": "The number of items in `TERMINATED` state, and the item termination reason is `SUCCEEDED`" - } - }, - "type": "object", - "required": [ - "item_count", - "item_pending_count", - "item_processing_count", - "item_user_error_count", - "item_system_error_count", - "item_skipped_count", - "item_succeeded_count" - ], - "title": "RunItemStatistics" - }, - "RunOutput": { - "type": "string", - "enum": [ - "NONE", - "PARTIAL", - "FULL" - ], - "title": "RunOutput" - }, "RunReadResponse": { "properties": { - "run_id": { + "application_run_id": { "type": "string", "format": "uuid", - "title": "Run Id", + "title": "Application Run Id", "description": "UUID of the application" }, - "application_id": { + "application_version_id": { "type": "string", - "title": "Application Id", - "description": "Application id", - "examples": [ - "he-tme" - ] + "title": "Application Version Id", + "description": "ID of the application version" }, - "version_number": { + "organization_id": { "type": "string", - 
"title": "Version Number", - "description": "Application version number", - "examples": [ - "0.4.4" - ] - }, - "state": { - "$ref": "#/components/schemas/RunState", - "description": "When the run request is received by the Platform, the `state` of it is set to\n`PENDING`. The state changes to `PROCESSING` when at least one item is being processed. After `PROCESSING`, the\nstate of the run can switch back to `PENDING` if there are no processing items, or to `TERMINATED` when the run\nfinished processing." - }, - "output": { - "$ref": "#/components/schemas/RunOutput", - "description": "The status of the output of the run. When 0 items are successfully processed the output is\n`NONE`, after one item is successfully processed, the value is set to `PARTIAL`. When all items of the run are\nsuccessfully processed, the output is set to `FULL`." - }, - "termination_reason": { - "anyOf": [ - { - "$ref": "#/components/schemas/RunTerminationReason" - }, - { - "type": "null" - } - ], - "description": "The termination reason of the run. When the run is not in `TERMINATED` state, the\n termination_reason is `null`. If all items of of the run are processed (successfully or with an error), then\n termination_reason is set to `ALL_ITEMS_PROCESSED`. If the run is cancelled by the user, the value is set to\n `CANCELED_BY_USER`. If the run reaches the threshold of number of failed items, the Platform cancels the run\n and sets the termination_reason to `CANCELED_BY_SYSTEM`.\n " - }, - "error_code": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Error Code", - "description": "When the termination_reason is set to CANCELED_BY_SYSTEM, the error_code is set to define the\n structured description of the error.", - "examples": [ - "SCHEDULER.ITEMS_WITH_ERROR_THRESHOLD_REACHED" - ] + "title": "Organization Id", + "description": "Organization of the owner of the application run" }, - "error_message": { + "user_payload": { "anyOf": [ { - "type": "string" + "$ref": "#/components/schemas/UserPayload" }, { "type": "null" } ], - "title": "Error Message", - "description": "When the termination_reason is set to CANCELED_BY_SYSTEM, the error_message is set to provide\n more insights to the error cause.", - "examples": [ - "Run canceled given errors on more than 10 items." - ] + "description": "Field used internally by the Platform" }, - "statistics": { - "$ref": "#/components/schemas/RunItemStatistics", - "description": "Aggregated statistics of the run execution" + "status": { + "$ref": "#/components/schemas/ApplicationRunStatus", + "description": "\nWhen the application run request is received by the Platform, the `status` of it is set to\n`received`. Then it is transitioned to `scheduled`, when it is scheduled for the processing.\nWhen the application run is scheduled, it will process the input items and generate the result\nincrementally. As soon as the first result is generated, the state is changed to `running`.\nThe results can be downloaded via `/v1/runs/{run_id}/results` endpoint.\nWhen all items are processed and all results are generated, the application status is set to\n`completed`. If the processing is done, but some items fail, the status is set to\n`completed_with_error`.\n\nWhen the application run request is rejected by the Platform before scheduling, it is transferred\nto `rejected`. 
When the application run reaches the threshold for the number of failed items, the whole\napplication run is set to `canceled_system` and the remaining pending items are not processed.\nWhen the application run fails, the finished item results are available for download.\n\nIf the application run is canceled by calling the `POST /v1/runs/{run_id}/cancel` endpoint, the\nprocessing of the items is stopped, and the application status is set to `cancelled_user`\n " }, - "custom_metadata": { - "anyOf": [ - { - "type": "object" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata", - "description": "Optional JSON metadata that was stored in alongside the run by the user", - "examples": [ - { - "department": "D1", - "study": "abc-1" - } - ] + "triggered_at": { + "type": "string", + "format": "date-time", + "title": "Triggered At", + "description": "Timestamp showing when the application run was triggered" }, - "custom_metadata_checksum": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "title": "Custom Metadata Checksum", - "description": "The checksum of the `custom_metadata` field. Can be used in the `PUT /runs/{run-id}/custom_metadata`\nrequest to avoid unwanted override of the values in concurrent requests.", - "examples": [ - "f54fe109" - ] + "triggered_by": { + "type": "string", + "title": "Triggered By", + "description": "Id of the user who triggered the application run" + } + }, + "type": "object", + "required": [ + "application_run_id", + "application_version_id", + "organization_id", + "status", + "triggered_at", + "triggered_by" + ], + "title": "RunReadResponse" + }, + "TransferUrls": { + "properties": { + "upload_url": { + "type": "string", + "minLength": 1, + "format": "uri", + "title": "Upload Url" }, - "submitted_at": { + "download_url": { "type": "string", - "format": "date-time", - "title": "Submitted At", - "description": "Timestamp showing when the run was triggered" + "minLength": 1, + "format": "uri", + "title": "Download Url" + } + }, + "type": "object", + "required": [ + "upload_url", + "download_url" + ], + "title": "TransferUrls" + }, + "UserPayload": { + "properties": { + "application_id": { + "type": "string", + "title": "Application Id" }, - "submitted_by": { + "application_run_id": { "type": "string", - "title": "Submitted By", - "description": "Id of the user who triggered the run", - "examples": [ - "auth0|123456" - ] + "format": "uuid", + "title": "Application Run Id" }, - "terminated_at": { + "global_output_artifacts": { "anyOf": [ { - "type": "string", - "format": "date-time" + "additionalProperties": { + "$ref": "#/components/schemas/PayloadOutputArtifact" + }, + "type": "object" }, { "type": "null" } ], - "title": "Terminated At", - "description": "Timestamp showing when the run reached a terminal state.", - "examples": [ - "2024-01-15T10:30:45.123Z" - ] + "title": "Global Output Artifacts" + }, + "items": { + "items": { + "$ref": "#/components/schemas/PayloadItem" + }, + "type": "array", + "title": "Items" } }, "type": "object", "required": [ - "run_id", "application_id", - "version_number", - "state", - "output", - "termination_reason", - "error_code", - "error_message", - "statistics", - "submitted_at", - "submitted_by" - ], - "title": "RunReadResponse", - "description": "Response schema for `Get run details` endpoint" - }, - "RunState": { - "type": "string", - "enum": [ - "PENDING", - "PROCESSING", - "TERMINATED" - ], - "title": "RunState" - }, - "RunTerminationReason": { - "type": "string", - "enum": [ - "ALL_ITEMS_PROCESSED", - 
"CANCELED_BY_SYSTEM", - "CANCELED_BY_USER" + "application_run_id", + "global_output_artifacts", + "items" ], - "title": "RunTerminationReason" + "title": "UserPayload" }, "UserReadResponse": { "properties": { "id": { "type": "string", - "title": "Id", - "description": "Unique user identifier", - "examples": [ - "auth0|123456" - ] + "title": "Id" }, "email": { "anyOf": [ @@ -2465,11 +1518,7 @@ "type": "null" } ], - "title": "Email", - "description": "User email", - "examples": [ - "user@domain.com" - ] + "title": "Email" }, "email_verified": { "anyOf": [ @@ -2480,10 +1529,7 @@ "type": "null" } ], - "title": "Email Verified", - "examples": [ - true - ] + "title": "Email Verified" }, "name": { "anyOf": [ @@ -2494,11 +1540,7 @@ "type": "null" } ], - "title": "Name", - "description": "First and last name of the user", - "examples": [ - "Jane Doe" - ] + "title": "Name" }, "given_name": { "anyOf": [ @@ -2509,10 +1551,7 @@ "type": "null" } ], - "title": "Given Name", - "examples": [ - "Jane" - ] + "title": "Given Name" }, "family_name": { "anyOf": [ @@ -2523,10 +1562,7 @@ "type": "null" } ], - "title": "Family Name", - "examples": [ - "Doe" - ] + "title": "Family Name" }, "nickname": { "anyOf": [ @@ -2537,10 +1573,7 @@ "type": "null" } ], - "title": "Nickname", - "examples": [ - "jdoe" - ] + "title": "Nickname" }, "picture": { "anyOf": [ @@ -2551,10 +1584,7 @@ "type": "null" } ], - "title": "Picture", - "examples": [ - "https://example.com/jdoe.jpg" - ] + "title": "Picture" }, "updated_at": { "anyOf": [ @@ -2566,10 +1596,7 @@ "type": "null" } ], - "title": "Updated At", - "examples": [ - "2023-10-05T14:48:00.000Z" - ] + "title": "Updated At" } }, "type": "object", @@ -2577,7 +1604,7 @@ "id" ], "title": "UserReadResponse", - "description": "Part of response schema for User object in `Get current user` endpoint.\nThis model corresponds to the response schema returned from\nAuth0 GET /v2/users/{id} endpoint.\nFor details, see:\nhttps://auth0.com/docs/api/management/v2/users/get-users-by-id" + "description": "This model corresponds to the response schema returned from\nAuth0 GET /v2/users/{id} endpoint.\nFor details, see:\nhttps://auth0.com/docs/api/management/v2/users/get-users-by-id" }, "ValidationError": { "properties": { @@ -2611,52 +1638,6 @@ "type" ], "title": "ValidationError" - }, - "VersionReadResponse": { - "properties": { - "version_number": { - "type": "string", - "title": "Version Number", - "description": "Semantic version of the application" - }, - "changelog": { - "type": "string", - "title": "Changelog", - "description": "Description of the changes relative to the previous version" - }, - "input_artifacts": { - "items": { - "$ref": "#/components/schemas/InputArtifact" - }, - "type": "array", - "title": "Input Artifacts", - "description": "List of the input fields, provided by the User" - }, - "output_artifacts": { - "items": { - "$ref": "#/components/schemas/OutputArtifact" - }, - "type": "array", - "title": "Output Artifacts", - "description": "List of the output fields, generated by the application" - }, - "released_at": { - "type": "string", - "format": "date-time", - "title": "Released At", - "description": "The timestamp when the application version was registered" - } - }, - "type": "object", - "required": [ - "version_number", - "changelog", - "input_artifacts", - "output_artifacts", - "released_at" - ], - "title": "VersionReadResponse", - "description": "Base Response schema for the `Application Version Details` endpoint" } }, "securitySchemes": { @@ -2665,8 +1646,8 @@ 
"flows": { "authorizationCode": { "scopes": {}, - "authorizationUrl": "https://aignostics-platform-staging.eu.auth0.com/authorize", - "tokenUrl": "https://aignostics-platform-staging.eu.auth0.com/oauth/token" + "authorizationUrl": "https://aignostics-platform.eu.auth0.com/authorize", + "tokenUrl": "https://aignostics-platform.eu.auth0.com/oauth/token" } } } diff --git a/docs/source/_static/openapi_v1.yaml b/docs/source/_static/openapi_v1.yaml index 38d8d50c..e6108903 100644 --- a/docs/source/_static/openapi_v1.yaml +++ b/docs/source/_static/openapi_v1.yaml @@ -1,2203 +1,72 @@ -components: - schemas: - ApplicationReadResponse: - description: Response schema for `List available applications` and `Read Application - by Id` endpoints - properties: - application_id: - description: Application ID - examples: - - he-tme - title: Application Id - type: string - description: - description: 'Describing what the application can do ' - examples: - - The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, - paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering - comprehensive insights into the tumor microenvironment. - title: Description - type: string - name: - description: Application display name - examples: - - Atlas H&E-TME - title: Name - type: string - regulatory_classes: - description: 'Regulatory classes, to which the applications comply with. - Possible values include: RUO, IVDR, FDA.' - examples: - - - RUO - items: - type: string - title: Regulatory Classes - type: array - versions: - description: All version numbers available to the user - items: - $ref: '#/components/schemas/ApplicationVersion' - title: Versions - type: array - required: - - application_id - - name - - regulatory_classes - - description - - versions - title: ApplicationReadResponse - type: object - ApplicationReadShortResponse: - description: Response schema for `List available applications` and `Read Application - by Id` endpoints - properties: - application_id: - description: Application ID - examples: - - he-tme - title: Application Id - type: string - description: - description: 'Describing what the application can do ' - examples: - - The Atlas H&E TME is an AI application designed to examine FFPE (formalin-fixed, - paraffin-embedded) tissues stained with H&E (hematoxylin and eosin), delivering - comprehensive insights into the tumor microenvironment. - title: Description - type: string - latest_version: - anyOf: - - $ref: '#/components/schemas/ApplicationVersion' - - type: 'null' - description: The version with highest version number available to the user - name: - description: Application display name - examples: - - Atlas H&E-TME - title: Name - type: string - regulatory_classes: - description: 'Regulatory classes, to which the applications comply with. - Possible values include: RUO, IVDR, FDA.' 
- examples: - - - RUO - items: - type: string - title: Regulatory Classes - type: array - required: - - application_id - - name - - regulatory_classes - - description - title: ApplicationReadShortResponse - type: object - ApplicationVersion: - properties: - number: - description: The number of the latest version - examples: - - 1.0.0 - title: Number - type: string - released_at: - description: The timestamp for when the application version was made available - in the Platform - examples: - - '2025-09-15T10:30:45.123Z' - format: date-time - title: Released At - type: string - required: - - number - - released_at - title: ApplicationVersion - type: object - ArtifactOutput: - enum: - - NONE - - AVAILABLE - - DELETED_BY_USER - - DELETED_BY_SYSTEM - title: ArtifactOutput - type: string - ArtifactState: - enum: - - PENDING - - PROCESSING - - TERMINATED - title: ArtifactState - type: string - ArtifactTerminationReason: - enum: - - SUCCEEDED - - USER_ERROR - - SYSTEM_ERROR - - SKIPPED - title: ArtifactTerminationReason - type: string - CustomMetadataUpdateRequest: - properties: - custom_metadata: - anyOf: - - type: object - - type: 'null' - description: JSON metadata that should be set for the run - examples: - - department: D1 - study: abc-1 - title: Custom Metadata - custom_metadata_checksum: - anyOf: - - type: string - - type: 'null' - description: Optional field to verify that the latest custom metadata was - known. If set to the checksum retrieved via the /runs endpoint, it must - match the checksum of the current value in the database. - examples: - - f54fe109 - title: Custom Metadata Checksum - title: CustomMetadataUpdateRequest - type: object - HTTPValidationError: - properties: - detail: - items: - $ref: '#/components/schemas/ValidationError' - title: Detail - type: array - title: HTTPValidationError - type: object - InputArtifact: - properties: - metadata_schema: - title: Metadata Schema - type: object - mime_type: - examples: - - image/tiff - pattern: ^\w+\/\w+[-+.|\w+]+\w+$ - title: Mime Type - type: string - name: - title: Name - type: string - required: - - name - - mime_type - - metadata_schema - title: InputArtifact - type: object - InputArtifactCreationRequest: - description: Input artifact containing the slide image and associated metadata. - properties: - download_url: - description: '[Signed URL](https://cloud.google.com/cdn/docs/using-signed-urls) - to the input artifact file. The URL should be valid for at least 6 days - from the payload submission time.' - examples: - - https://example.com/case-no-1-slide.tiff - format: uri - maxLength: 2083 - minLength: 1 - title: Download Url - type: string - metadata: - description: The metadata of the artifact, required by the application version. - The JSON schema of the metadata can be requested by `/v1/versions/{application_version_id}`. - The schema is located in `input_artifacts.[].metadata_schema` - examples: - - checksum_base64_crc32c: 752f9554 - height: 2000 - height_mpp: 0.5 - width: 10000 - width_mpp: 0.5 - title: Metadata - type: object - name: - description: Type of artifact. For Atlas H&E-TME, use "input_slide" - examples: - - input_slide - title: Name - type: string - required: - - name - - download_url - - metadata - title: InputArtifactCreationRequest - type: object - ItemCreationRequest: - description: Individual item (slide) to be processed in a run. 
- properties: - custom_metadata: - anyOf: - - type: object - - type: 'null' - description: Optional JSON custom_metadata to store additional information - alongside an item. - examples: - - case: abc - title: Custom Metadata - external_id: - description: Unique identifier for this item within the run. Used for referencing - items. Must be unique across all items in the same run - examples: - - slide_1 - - patient_001_slide_A - - sample_12345 - maxLength: 255 - title: External Id - type: string - input_artifacts: - description: List of input artifacts for this item. For Atlas H&E-TME, typically - contains one artifact (the slide image) - examples: - - - download_url: https://example-bucket.s3.amazonaws.com/slide1.tiff - metadata: - checksum_base64_crc32c: 64RKKA== - height_px: 87761 - media-type: image/tiff - resolution_mpp: 0.2628238 - specimen: - disease: LUNG_CANCER - tissue: LUNG - staining_method: H&E - width_px: 136223 - name: input_slide - items: - $ref: '#/components/schemas/InputArtifactCreationRequest' - title: Input Artifacts - type: array - required: - - external_id - - input_artifacts - title: ItemCreationRequest - type: object - ItemOutput: - enum: - - NONE - - FULL - title: ItemOutput - type: string - ItemResultReadResponse: - description: Response schema for items in `List Run Items` endpoint - properties: - custom_metadata: - anyOf: - - type: object - - type: 'null' - description: The custom_metadata of the item that has been provided by the - user on run creation. - title: Custom Metadata - custom_metadata_checksum: - anyOf: - - type: string - - type: 'null' - description: 'The checksum of the `custom_metadata` field. - - Can be used in the `PUT /runs/{run-id}/items/{external_id}/custom_metadata` - - request to avoid unwanted override of the values in concurrent requests.' - examples: - - f54fe109 - title: Custom Metadata Checksum - error_code: - anyOf: - - type: string - - type: 'null' - description: Error code describing the error that occurred during item processing. - readOnly: true - title: Error Code - error_message: - anyOf: - - type: string - - type: 'null' - description: "\n The error message in case the `termination_reason` is\ - \ in `USER_ERROR` or `SYSTEM_ERROR`\n " - examples: - - This item was not processed because the threshold of 3 items finishing - in error state (user or system error) was reached before the item was - processed. 
- - 'The item was not processed because the run was cancelled by the user - before the item was processed.User error raised by Application because - the input data provided by the user cannot be processed: - - The image width is 123000 px, but the maximum width is 100000 px' - - "A system error occurred during the item execution:\n System went out\ - \ of memory in cell classification" - - An unknown system error occurred during the item execution - title: Error Message - external_id: - description: The external_id of the item from the user payload - examples: - - slide_1 - title: External Id - type: string - item_id: - description: Item UUID generated by the Platform - format: uuid - title: Item Id - type: string - output: - $ref: '#/components/schemas/ItemOutput' - description: The output status of the item (NONE, FULL) - output_artifacts: - description: "\nThe list of the results generated by the application algorithm.\ - \ The number of files and their\ntypes depend on the particular application\ - \ version, call `/v1/versions/{version_id}` to get\nthe details.\n " - items: - $ref: '#/components/schemas/OutputArtifactResultReadResponse' - title: Output Artifacts - type: array - state: - $ref: '#/components/schemas/ItemState' - description: "\nThe item moves from `PENDING` to `PROCESSING` to `TERMINATED`\ - \ state.\nWhen terminated, consult the `termination_reason` property to\ - \ see whether it was successful.\n " - terminated_at: - anyOf: - - format: date-time - type: string - - type: 'null' - description: Timestamp showing when the item reached a terminal state. - examples: - - '2024-01-15T10:30:45.123Z' - title: Terminated At - termination_reason: - anyOf: - - $ref: '#/components/schemas/ItemTerminationReason' - - type: 'null' - description: ' - - When the `state` is `TERMINATED` this will explain why - - `SUCCEEDED` -> Successful processing. - - `USER_ERROR` -> Failed because the provided input was invalid. - - `SYSTEM_ERROR` -> There was an error in the model or platform. - - `SKIPPED` -> Was cancelled - - ' - required: - - item_id - - external_id - - custom_metadata - - state - - output - - output_artifacts - - error_code - title: ItemResultReadResponse - type: object - ItemState: - enum: - - PENDING - - PROCESSING - - TERMINATED - title: ItemState - type: string - ItemTerminationReason: - enum: - - SUCCEEDED - - USER_ERROR - - SYSTEM_ERROR - - SKIPPED - title: ItemTerminationReason - type: string - MeReadResponse: - description: Response schema for `Get current user` endpoint - properties: - organization: - $ref: '#/components/schemas/OrganizationReadResponse' - user: - $ref: '#/components/schemas/UserReadResponse' - required: - - user - - organization - title: MeReadResponse - type: object - OrganizationReadResponse: - description: 'Part of response schema for Organization object in `Get current - user` endpoint. - - This model corresponds to the response schema returned from - - Auth0 GET /v2/organizations/{id} endpoint, flattens out the metadata out - - and doesn''t return branding or token_quota objects. - - For details, see: - - https://auth0.com/docs/api/management/v2/organizations/get-organizations-by-id - - - #### Configuration for integrating with Aignostics Platform services. - - - The Aignostics Platform API requires signed URLs for input artifacts (slide - images). To simplify this process, - - Aignostics provides a dedicated storage bucket. 
The HMAC credentials below - grant read and write - - access to this bucket, allowing you to upload files and generate the signed - URLs needed for API calls. - - - Additionally, logging and error reporting tokens enable Aignostics to provide - better support and monitor - - system performance for your integration.' - properties: - aignostics_bucket_hmac_access_key_id: - description: HMAC access key ID for the Aignostics-provided storage bucket. - Used to authenticate requests for uploading files and generating signed - URLs - examples: - - YOUR_HMAC_ACCESS_KEY_ID - title: Aignostics Bucket Hmac Access Key Id - type: string - aignostics_bucket_hmac_secret_access_key: - description: HMAC secret access key paired with the access key ID. Keep - this credential secure. - examples: - - YOUR/HMAC/SECRET_ACCESS_KEY - title: Aignostics Bucket Hmac Secret Access Key - type: string - aignostics_bucket_name: - description: Name of the bucket provided by Aignostics for storing input - artifacts (slide images) - examples: - - aignostics-platform-bucket - title: Aignostics Bucket Name - type: string - aignostics_bucket_protocol: - description: Protocol to use for bucket access. Defines the URL scheme for - connecting to the storage service - examples: - - gs - title: Aignostics Bucket Protocol - type: string - aignostics_logfire_token: - description: Authentication token for Logfire observability service. Enables - sending application logs and performance metrics to Aignostics for monitoring - and support - examples: - - your-logfire-token - title: Aignostics Logfire Token - type: string - aignostics_sentry_dsn: - description: Data Source Name (DSN) for Sentry error tracking service. Allows - automatic reporting of errors and exceptions to Aignostics support team - examples: - - https://2354s3#ewsha@o44.ingest.us.sentry.io/34345123432 - title: Aignostics Sentry Dsn - type: string - display_name: - anyOf: - - type: string - - type: 'null' - description: "Public organization name (E.g. \u201CAignostics GmbH\u201D\ - )" - examples: - - Aignostics GmbH - title: Display Name - id: - description: Unique organization identifier - examples: - - org_123456 - title: Id - type: string - name: - anyOf: - - type: string - - type: 'null' - description: "Organization name (E.g. \u201Caignx\u201D)" - examples: - - aignx - title: Name - required: - - id - - aignostics_bucket_hmac_access_key_id - - aignostics_bucket_hmac_secret_access_key - - aignostics_bucket_name - - aignostics_bucket_protocol - - aignostics_logfire_token - - aignostics_sentry_dsn - title: OrganizationReadResponse - type: object - OutputArtifact: - properties: - metadata_schema: - title: Metadata Schema - type: object - mime_type: - examples: - - application/vnd.apache.parquet - pattern: ^\w+\/\w+[-+.|\w+]+\w+$ - title: Mime Type - type: string - name: - title: Name - type: string - scope: - $ref: '#/components/schemas/OutputArtifactScope' - visibility: - $ref: '#/components/schemas/OutputArtifactVisibility' - required: - - name - - mime_type - - metadata_schema - - scope - - visibility - title: OutputArtifact - type: object - OutputArtifactResultReadResponse: - properties: - download_url: - anyOf: - - format: uri - maxLength: 2083 - minLength: 1 - type: string - - type: 'null' - description: "\nThe download URL to the output file. 
The URL is valid for\ - \ 1 hour after the endpoint is called.\nA new URL is generated every time\ - \ the endpoint is called.\n " - title: Download Url - error_code: - anyOf: - - type: string - - type: 'null' - description: Error code describing the error that occurred during artifact - processing. - readOnly: true - title: Error Code - error_message: - anyOf: - - type: string - - type: 'null' - description: Error message when artifact is in error state - title: Error Message - metadata: - anyOf: - - type: object - - type: 'null' - description: The metadata of the output artifact, provided by the application. - Can only be None if the artifact itself was deleted. - title: Metadata - name: - description: "\nName of the output from the output schema from the `/v1/versions/{version_id}`\ - \ endpoint.\n " - examples: - - tissue_qc:tiff_heatmap - title: Name - type: string - output: - $ref: '#/components/schemas/ArtifactOutput' - description: The output status of the artifact (NONE, FULL) - output_artifact_id: - description: The Id of the artifact. Used internally - format: uuid - title: Output Artifact Id - type: string - state: - $ref: '#/components/schemas/ArtifactState' - description: The current state of the artifact (PENDING, PROCESSING, TERMINATED) - termination_reason: - anyOf: - - $ref: '#/components/schemas/ArtifactTerminationReason' - - type: 'null' - description: The reason for termination when state is TERMINATED - required: - - output_artifact_id - - name - - state - - output - - download_url - - error_code - title: OutputArtifactResultReadResponse - type: object - OutputArtifactScope: - enum: - - ITEM - - GLOBAL - title: OutputArtifactScope - type: string - OutputArtifactVisibility: - enum: - - INTERNAL - - EXTERNAL - title: OutputArtifactVisibility - type: string - RunCreationRequest: - description: 'Request schema for `Initiate Run` endpoint. - - It describes which application version is chosen, and which user data should - be processed.' - properties: - application_id: - description: Unique ID for the application to use for processing - examples: - - he-tme - title: Application Id - type: string - custom_metadata: - anyOf: - - type: object - - type: 'null' - description: Optional JSON metadata to store additional information alongside - the run - examples: - - department: D1 - study: abc-1 - title: Custom Metadata - items: - description: List of items (slides) to process. Each item represents a whole - slide image (WSI) with its associated metadata and artifacts - examples: - - - external_id: slide_1 - input_artifacts: - - download_url: https://example-bucket.s3.amazonaws.com/slide1.tiff?signature=... - metadata: - checksum_base64_crc32c: 64RKKA== - height_px: 87761 - media-type: image/tiff - resolution_mpp: 0.2628238 - specimen: - disease: LUNG_CANCER - tissue: LUNG - staining_method: H&E - width_px: 136223 - name: input_slide - items: - $ref: '#/components/schemas/ItemCreationRequest' - minItems: 1 - title: Items - type: array - version_number: - anyOf: - - type: string - - type: 'null' - description: Semantic version of the application to use for processing. 
- If not provided, the latest available version will be used - examples: - - 1.0.0-beta1 - title: Version Number - required: - - application_id - - items - title: RunCreationRequest - type: object - RunCreationResponse: - properties: - run_id: - default: Run id - examples: - - 3fa85f64-5717-4562-b3fc-2c963f66afa6 - format: uuid - title: Run Id - type: string - title: RunCreationResponse - type: object - RunItemStatistics: - properties: - item_count: - description: Total number of the items in the run - title: Item Count - type: integer - item_pending_count: - description: The number of items in `PENDING` state - title: Item Pending Count - type: integer - item_processing_count: - description: The number of items in `PROCESSING` state - title: Item Processing Count - type: integer - item_skipped_count: - description: The number of items in `TERMINATED` state, and the item termination - reason is `SKIPPED` - title: Item Skipped Count - type: integer - item_succeeded_count: - description: The number of items in `TERMINATED` state, and the item termination - reason is `SUCCEEDED` - title: Item Succeeded Count - type: integer - item_system_error_count: - description: The number of items in `TERMINATED` state, and the item termination - reason is `SYSTEM_ERROR` - title: Item System Error Count - type: integer - item_user_error_count: - description: The number of items in `TERMINATED` state, and the item termination - reason is `USER_ERROR` - title: Item User Error Count - type: integer - required: - - item_count - - item_pending_count - - item_processing_count - - item_user_error_count - - item_system_error_count - - item_skipped_count - - item_succeeded_count - title: RunItemStatistics - type: object - RunOutput: - enum: - - NONE - - PARTIAL - - FULL - title: RunOutput - type: string - RunReadResponse: - description: Response schema for `Get run details` endpoint - properties: - application_id: - description: Application id - examples: - - he-tme - title: Application Id - type: string - custom_metadata: - anyOf: - - type: object - - type: 'null' - description: Optional JSON metadata that was stored in alongside the run - by the user - examples: - - department: D1 - study: abc-1 - title: Custom Metadata - custom_metadata_checksum: - anyOf: - - type: string - - type: 'null' - description: 'The checksum of the `custom_metadata` field. Can be used in - the `PUT /runs/{run-id}/custom_metadata` - - request to avoid unwanted override of the values in concurrent requests.' - examples: - - f54fe109 - title: Custom Metadata Checksum - error_code: - anyOf: - - type: string - - type: 'null' - description: "When the termination_reason is set to CANCELED_BY_SYSTEM,\ - \ the error_code is set to define the\n structured description\ - \ of the error." - examples: - - SCHEDULER.ITEMS_WITH_ERROR_THRESHOLD_REACHED - title: Error Code - error_message: - anyOf: - - type: string - - type: 'null' - description: "When the termination_reason is set to CANCELED_BY_SYSTEM,\ - \ the error_message is set to provide\n more insights to the error\ - \ cause." - examples: - - Run canceled given errors on more than 10 items. - title: Error Message - output: - $ref: '#/components/schemas/RunOutput' - description: 'The status of the output of the run. When 0 items are successfully - processed the output is - - `NONE`, after one item is successfully processed, the value is set to - `PARTIAL`. When all items of the run are - - successfully processed, the output is set to `FULL`.' 
- run_id: - description: UUID of the application - format: uuid - title: Run Id - type: string - state: - $ref: '#/components/schemas/RunState' - description: 'When the run request is received by the Platform, the `state` - of it is set to - - `PENDING`. The state changes to `PROCESSING` when at least one item is - being processed. After `PROCESSING`, the - - state of the run can switch back to `PENDING` if there are no processing - items, or to `TERMINATED` when the run - - finished processing.' - statistics: - $ref: '#/components/schemas/RunItemStatistics' - description: Aggregated statistics of the run execution - submitted_at: - description: Timestamp showing when the run was triggered - format: date-time - title: Submitted At - type: string - submitted_by: - description: Id of the user who triggered the run - examples: - - auth0|123456 - title: Submitted By - type: string - terminated_at: - anyOf: - - format: date-time - type: string - - type: 'null' - description: Timestamp showing when the run reached a terminal state. - examples: - - '2024-01-15T10:30:45.123Z' - title: Terminated At - termination_reason: - anyOf: - - $ref: '#/components/schemas/RunTerminationReason' - - type: 'null' - description: "The termination reason of the run. When the run is not in\ - \ `TERMINATED` state, the\n termination_reason is `null`. If all\ - \ items of of the run are processed (successfully or with an error), then\n\ - \ termination_reason is set to `ALL_ITEMS_PROCESSED`. If the run\ - \ is cancelled by the user, the value is set to\n `CANCELED_BY_USER`.\ - \ If the run reaches the threshold of number of failed items, the Platform\ - \ cancels the run\n and sets the termination_reason to `CANCELED_BY_SYSTEM`.\n\ - \ " - version_number: - description: Application version number - examples: - - 0.4.4 - title: Version Number - type: string - required: - - run_id - - application_id - - version_number - - state - - output - - termination_reason - - error_code - - error_message - - statistics - - submitted_at - - submitted_by - title: RunReadResponse - type: object - RunState: - enum: - - PENDING - - PROCESSING - - TERMINATED - title: RunState - type: string - RunTerminationReason: - enum: - - ALL_ITEMS_PROCESSED - - CANCELED_BY_SYSTEM - - CANCELED_BY_USER - title: RunTerminationReason - type: string - UserReadResponse: - description: 'Part of response schema for User object in `Get current user` - endpoint. - - This model corresponds to the response schema returned from - - Auth0 GET /v2/users/{id} endpoint. 
- - For details, see: - - https://auth0.com/docs/api/management/v2/users/get-users-by-id' - properties: - email: - anyOf: - - type: string - - type: 'null' - description: User email - examples: - - user@domain.com - title: Email - email_verified: - anyOf: - - type: boolean - - type: 'null' - examples: - - true - title: Email Verified - family_name: - anyOf: - - type: string - - type: 'null' - examples: - - Doe - title: Family Name - given_name: - anyOf: - - type: string - - type: 'null' - examples: - - Jane - title: Given Name - id: - description: Unique user identifier - examples: - - auth0|123456 - title: Id - type: string - name: - anyOf: - - type: string - - type: 'null' - description: First and last name of the user - examples: - - Jane Doe - title: Name - nickname: - anyOf: - - type: string - - type: 'null' - examples: - - jdoe - title: Nickname - picture: - anyOf: - - type: string - - type: 'null' - examples: - - https://example.com/jdoe.jpg - title: Picture - updated_at: - anyOf: - - format: date-time - type: string - - type: 'null' - examples: - - '2023-10-05T14:48:00.000Z' - title: Updated At - required: - - id - title: UserReadResponse - type: object - ValidationError: - properties: - loc: - items: - anyOf: - - type: string - - type: integer - title: Location - type: array - msg: - title: Message - type: string - type: - title: Error Type - type: string - required: - - loc - - msg - - type - title: ValidationError - type: object - VersionReadResponse: - description: Base Response schema for the `Application Version Details` endpoint - properties: - changelog: - description: Description of the changes relative to the previous version - title: Changelog - type: string - input_artifacts: - description: List of the input fields, provided by the User - items: - $ref: '#/components/schemas/InputArtifact' - title: Input Artifacts - type: array - output_artifacts: - description: List of the output fields, generated by the application - items: - $ref: '#/components/schemas/OutputArtifact' - title: Output Artifacts - type: array - released_at: - description: The timestamp when the application version was registered - format: date-time - title: Released At - type: string - version_number: - description: Semantic version of the application - title: Version Number - type: string - required: - - version_number - - changelog - - input_artifacts - - output_artifacts - - released_at - title: VersionReadResponse - type: object - securitySchemes: - OAuth2AuthorizationCodeBearer: - flows: - authorizationCode: - authorizationUrl: https://aignostics-platform-staging.eu.auth0.com/authorize - scopes: {} - tokenUrl: https://aignostics-platform-staging.eu.auth0.com/oauth/token - type: oauth2 -info: - description: "\nThe Aignostics Platform is a cloud-based service that enables organizations\ - \ to access advanced computational pathology applications through a secure API.\ - \ The platform provides standardized access to Aignostics' portfolio of computational\ - \ pathology solutions, with Atlas H&E-TME serving as an example of the available\ - \ API endpoints. \n\nTo begin using the platform, your organization must first\ - \ be registered by our business support team. If you don't have an account yet,\ - \ please contact your account manager or email support@aignostics.com to get started.\ - \ \n\nMore information about our applications can be found on (https://platform.aignostics.com).\n\ - \n**How to authorize and test API endpoints:**\n\n1. 
Click the \"Authorize\" button\ - \ in the right corner below\n3. Click \"Authorize\" button in the dialog to log\ - \ in with your Aignostics Platform credentials\n4. After successful login, you'll\ - \ be redirected back and can use \"Try it out\" on any endpoint\n\n**Note**: You\ - \ only need to authorize once per session. The lock icons next to endpoints will\ - \ show green when authorized.\n\n" - title: Aignostics Platform API - version: 1.0.0.beta7 -openapi: 3.1.0 -paths: - /v1/applications: - get: - description: "Returns the list of the applications, available to the caller.\n\ - \nThe application is available if any of the versions of the application is\ - \ assigned to the caller\u2019s organization.\nThe response is paginated and\ - \ sorted according to the provided parameters." - operationId: list_applications_v1_applications_get - parameters: - - in: query - name: page - required: false - schema: - default: 1 - minimum: 1 - title: Page - type: integer - - in: query - name: page-size - required: false - schema: - default: 50 - maximum: 100 - minimum: 5 - title: Page-Size - type: integer - - description: 'Sort the results by one or more fields. Use `+` for ascending - and `-` for descending order. - - - **Available fields:** - - - `application_id` - - - `name` - - - `description` - - - `regulatory_classes` - - - **Examples:** - - - `?sort=application_id` - Sort by application_id ascending - - - `?sort=-name` - Sort by name descending - - - `?sort=+description&sort=name` - Sort by description ascending, then name - descending' - in: query - name: sort - required: false - schema: - anyOf: - - items: - type: string - type: array - - type: 'null' - description: 'Sort the results by one or more fields. Use `+` for ascending - and `-` for descending order. - - - **Available fields:** - - - `application_id` - - - `name` - - - `description` - - - `regulatory_classes` - - - **Examples:** - - - `?sort=application_id` - Sort by application_id ascending - - - `?sort=-name` - Sort by name descending - - - `?sort=+description&sort=name` - Sort by description ascending, then - name descending' - title: Sort - responses: - '200': - content: - application/json: - example: - - application_id: he-tme - description: The Atlas H&E TME is an AI application designed to examine - FFPE (formalin-fixed, paraffin-embedded) tissues stained with H&E - (hematoxylin and eosin), delivering comprehensive insights into - the tumor microenvironment. - latest_version: - number: 1.0.0 - released_at: '2025-09-01T19:01:05.401Z' - name: Atlas H&E-TME - regulatory_classes: - - RUO - - application_id: test-app - description: 'This is the test application with two algorithms: TissueQc - and Tissue Segmentation' - latest_version: - number: 2.0.0 - released_at: '2025-09-02T19:01:05.401Z' - name: Test Application - regulatory_classes: - - RUO - schema: - items: - $ref: '#/components/schemas/ApplicationReadShortResponse' - title: Response List Applications V1 Applications Get - type: array - description: A list of applications available to the caller - '401': - description: Unauthorized - Invalid or missing authentication - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - security: - - OAuth2AuthorizationCodeBearer: [] - summary: List available applications - tags: - - Public - /v1/applications/{application_id}: - get: - description: Retrieve details of a specific application by its ID. 
- operationId: read_application_by_id_v1_applications__application_id__get - parameters: - - in: path - name: application_id - required: true - schema: - title: Application Id - type: string - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/ApplicationReadResponse' - description: Successful Response - '403': - description: Forbidden - You don't have permission to see this application - '404': - description: Not Found - Application with the given ID does not exist - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - security: - - OAuth2AuthorizationCodeBearer: [] - summary: Read Application By Id - tags: - - Public - /v1/applications/{application_id}/versions/{version}: - get: - description: 'Get the application version details - - - Allows caller to retrieve information about application version based on - provided application version ID.' - operationId: application_version_details_v1_applications__application_id__versions__version__get - parameters: - - in: path - name: application_id - required: true - schema: - title: Application Id - type: string - - in: path - name: version - required: true - schema: - title: Version - type: string - responses: - '200': - content: - application/json: - example: - changelog: New deployment - input_artifacts: - - metadata_schema: - $defs: - LungCancerMetadata: - additionalProperties: false - properties: - tissue: - enum: - - lung - - lymph node - - liver - - adrenal gland - - bone - - brain - title: Tissue - type: string - type: - const: lung - enum: - - lung - title: Type - type: string - required: - - type - - tissue - title: LungCancerMetadata - type: object - $schema: http://json-schema.org/draft-07/schema# - additionalProperties: false - description: Metadata corresponding to an external image. - properties: - base_mpp: - maximum: 0.5 - minimum: 0.125 - title: Base Mpp - type: number - cancer: - anyOf: - - $ref: '#/$defs/LungCancerMetadata' - title: Cancer - checksum_crc32c: - title: Checksum Crc32C - type: string - height: - maximum: 150000 - minimum: 1 - title: Height - type: integer - mime_type: - default: image/tiff - enum: - - application/dicom - - image/tiff - title: Mime Type - type: string - stain: - const: H&E - default: H&E - enum: - - H&E - title: Stain - type: string - width: - maximum: 150000 - minimum: 1 - title: Width - type: integer - required: - - checksum_crc32c - - base_mpp - - width - - height - - cancer - title: ExternalImageMetadata - type: object - mime_type: image/tiff - name: whole_slide_image - output_artifacts: - - metadata_schema: - $schema: http://json-schema.org/draft-07/schema# - additionalProperties: false - description: Metadata corresponding to a segmentation heatmap - file. 
- properties: - base_mpp: - maximum: 0.5 - minimum: 0.125 - title: Base Mpp - type: number - checksum_crc32c: - title: Checksum Crc32C - type: string - class_colors: - additionalProperties: - maxItems: 3 - minItems: 3 - prefixItems: - - maximum: 255 - minimum: 0 - type: integer - - maximum: 255 - minimum: 0 - type: integer - - maximum: 255 - minimum: 0 - type: integer - type: array - title: Class Colors - type: object - height: - title: Height - type: integer - mime_type: - const: image/tiff - default: image/tiff - enum: - - image/tiff - title: Mime Type - type: string - width: - title: Width - type: integer - required: - - checksum_crc32c - - width - - height - - class_colors - title: HeatmapMetadata - type: object - mime_type: image/tiff - name: tissue_qc:tiff_heatmap - scope: ITEM - visibility: EXTERNAL - released_at: '2025-04-16T08:45:20.655972Z' - version_number: 0.4.4 - schema: - $ref: '#/components/schemas/VersionReadResponse' - description: Successful Response - '403': - description: Forbidden - You don't have permission to see this version - '404': - description: Not Found - Application version with given ID is not available - to you or does not exist - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - security: - - OAuth2AuthorizationCodeBearer: [] - summary: Application Version Details - tags: - - Public - /v1/me: - get: - description: 'Retrieves your identity details, including name, email, and organization. - - This is useful for verifying that the request is being made under the correct - user profile - - and organization context, as well as confirming that the expected environment - variables are correctly set - - (in case you are using Python SDK)' - operationId: get_me_v1_me_get - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/MeReadResponse' - description: Successful Response - security: - - OAuth2AuthorizationCodeBearer: [] - summary: Get current user - tags: - - Public - /v1/runs: - get: - description: 'List runs with filtering, sorting, and pagination capabilities. - - - Returns paginated runs that were submitted by the user.' - operationId: list_runs_v1_runs_get - parameters: - - description: Optional application ID filter - in: query - name: application_id - required: false - schema: - anyOf: - - type: string - - type: 'null' - description: Optional application ID filter - examples: - - tissue-segmentation - - heta - title: Application Id - - description: Optional Version Name - in: query - name: application_version - required: false - schema: - anyOf: - - type: string - - type: 'null' - description: Optional Version Name - examples: - - 1.0.2 - - 1.0.1-beta2 - title: Application Version - - description: Optionally filter runs by items with this external ID - in: query - name: external_id - required: false - schema: - anyOf: - - type: string - - type: 'null' - description: Optionally filter runs by items with this external ID - examples: - - slide_001 - - patient_12345_sample_A - title: External Id - - description: "Use PostgreSQL JSONPath expressions to filter runs by their\ - \ custom_metadata.\n#### URL Encoding Required\n**Important**: JSONPath\ - \ expressions contain special characters that must be URL-encoded when used\ - \ in query parameters. 
Most HTTP clients handle this automatically, but\ - \ when constructing URLs manually, ensure proper encoding.\n\n#### Examples\ - \ (Clear Format):\n- **Field existence**: `$.study` - Runs that have a study\ - \ field defined\n- **Exact value match**: `$.study ? (@ == \"high\")` -\ - \ Runs with specific study value\n- **Numeric comparison**: `$.confidence_score\ - \ ? (@ > 0.75)` - Runs with confidence score greater than 0.75\n- **Array\ - \ operations**: `$.tags[*] ? (@ == \"draft\")` - Runs with tags array containing\ - \ \"draft\"\n- **Complex conditions**: `$.resources ? (@.gpu_count > 2 &&\ - \ @.memory_gb >= 16)` - Runs with high resource requirements\n\n#### Examples\ - \ (URL-Encoded Format):\n- **Field existence**: `%24.study`\n- **Exact value\ - \ match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)`\n- **Numeric\ - \ comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)`\n- **Array\ - \ operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)`\n\ - - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)`\n\ - \n#### Notes\n- JSONPath expressions are evaluated using PostgreSQL's `@?`\ - \ operator\n- The `$.` prefix is automatically added to root-level field\ - \ references if missing\n- String values in conditions must be enclosed\ - \ in double quotes\n- Use `&&` for AND operations and `||` for OR operations\n\ - - Regular expressions use `like_regex` with standard regex syntax\n- **Remember\ - \ to URL-encode the entire JSONPath expression when making HTTP requests**\n\ - \n " - examples: - array_operations: - description: Check if an array contains a certain value - summary: Check if an array contains a certain value - value: $.tags[*] ? (@ == "draft") - complex_filters: - description: Combine multiple checks - summary: Combine multiple checks - value: $.resources ? (@.gpu_count > 2 && @.memory_gb >= 16) - field_exists: - description: Find applications that have a project field defined - summary: Check if field exists - value: $.study - field_has_value: - description: Compare a field value against a certain value - summary: Check if field has a certain value - value: $.study ? (@ == "abc-1") - no_filter: - description: Returns all items without filtering by custom metadata - summary: No filter (returns all) - value: $ - numeric_comparisons: - description: Compare a field value against a numeric value of a field - summary: Compare to a numeric value of a field - value: $.confidence_score ? (@ > 0.75) - in: query - name: custom_metadata - required: false - schema: - anyOf: - - maxLength: 1000 - type: string - - type: 'null' - description: "Use PostgreSQL JSONPath expressions to filter runs by their\ - \ custom_metadata.\n#### URL Encoding Required\n**Important**: JSONPath\ - \ expressions contain special characters that must be URL-encoded when\ - \ used in query parameters. Most HTTP clients handle this automatically,\ - \ but when constructing URLs manually, ensure proper encoding.\n\n####\ - \ Examples (Clear Format):\n- **Field existence**: `$.study` - Runs that\ - \ have a study field defined\n- **Exact value match**: `$.study ? (@ ==\ - \ \"high\")` - Runs with specific study value\n- **Numeric comparison**:\ - \ `$.confidence_score ? (@ > 0.75)` - Runs with confidence score greater\ - \ than 0.75\n- **Array operations**: `$.tags[*] ? (@ == \"draft\")` -\ - \ Runs with tags array containing \"draft\"\n- **Complex conditions**:\ - \ `$.resources ? 
(@.gpu_count > 2 && @.memory_gb >= 16)` - Runs with high\ - \ resource requirements\n\n#### Examples (URL-Encoded Format):\n- **Field\ - \ existence**: `%24.study`\n- **Exact value match**: `%24.study%20%3F%20(%40%20%3D%3D%20%22high%22)`\n\ - - **Numeric comparison**: `%24.confidence_score%20%3F%20(%40%20%3E%200.75)`\n\ - - **Array operations**: `%24.tags%5B*%5D%20%3F%20(%40%20%3D%3D%20%22draft%22)`\n\ - - **Complex conditions**: `%24.resources%20%3F%20(%40.gpu_count%20%3E%202%20%26%26%20%40.memory_gb%20%3E%3D%2016)`\n\ - \n#### Notes\n- JSONPath expressions are evaluated using PostgreSQL's\ - \ `@?` operator\n- The `$.` prefix is automatically added to root-level\ - \ field references if missing\n- String values in conditions must be enclosed\ - \ in double quotes\n- Use `&&` for AND operations and `||` for OR operations\n\ - - Regular expressions use `like_regex` with standard regex syntax\n- **Remember\ - \ to URL-encode the entire JSONPath expression when making HTTP requests**\n\ - \n " - title: Custom Metadata - - in: query - name: page - required: false - schema: - default: 1 - minimum: 1 - title: Page - type: integer - - in: query - name: page_size - required: false - schema: - default: 50 - maximum: 100 - minimum: 5 - title: Page Size - type: integer - - description: 'Sort the results by one or more fields. Use `+` for ascending - and `-` for descending order. - - - **Available fields:** - - - `run_id` - - - `application_version_id` - - - `organization_id` - - - `status` - - - `submitted_at` - - - `submitted_by` - - - **Examples:** - - - `?sort=submitted_at` - Sort by creation time (ascending) - - - `?sort=-submitted_at` - Sort by creation time (descending) - - - `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending) - - ' - in: query - name: sort - required: false - schema: - anyOf: - - items: - type: string - type: array - - type: 'null' - description: 'Sort the results by one or more fields. Use `+` for ascending - and `-` for descending order. - - - **Available fields:** - - - `run_id` - - - `application_version_id` - - - `organization_id` - - - `status` - - - `submitted_at` - - - `submitted_by` - - - **Examples:** - - - `?sort=submitted_at` - Sort by creation time (ascending) - - - `?sort=-submitted_at` - Sort by creation time (descending) - - - `?sort=status&sort=-submitted_at` - Sort by status, then by time (descending) - - ' - title: Sort - responses: - '200': - content: - application/json: - schema: - items: - $ref: '#/components/schemas/RunReadResponse' - title: Response List Runs V1 Runs Get - type: array - description: Successful Response - '404': - description: Run not found - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - security: - - OAuth2AuthorizationCodeBearer: [] - summary: List Runs - tags: - - Public - post: - description: "This endpoint initiates a processing run for a selected application\ - \ and version, and returns a `run_id` for tracking purposes.\n\nSlide processing\ - \ occurs asynchronously, allowing you to retrieve results for individual slides\ - \ as soon as they\ncomplete processing. 
-    post:
-      description: "This endpoint initiates a processing run for a selected application\
-        \ and version, and returns a `run_id` for tracking purposes.\n\nSlide processing\
-        \ occurs asynchronously, allowing you to retrieve results for individual slides\
-        \ as soon as they\ncomplete processing. The system typically processes slides\
-        \ in batches of four, though this number may be reduced\nduring periods of\
-        \ high demand.\nBelow is an example of the required payload for initiating\
-        \ an Atlas H&E-TME processing run.\n\n\n### Payload\n\nThe payload includes\
-        \ the base fields `application_id`, optional `version_number`, and `items`.\n\n\
-        `application_id` is the unique identifier for the application.\n`version_number`\
-        \ is the semantic version to use. If not provided, the latest available version\
-        \ will be used.\n\n`items` is the list of items to process (slides, in the\
-        \ case of the HETA application).\nEvery item has a set of standard fields defined\
-        \ by the API, plus `custom_metadata` specific to the\nchosen application.\n\
-        \nExample payload structure:\n```\n{\n  \"application_id\": \"he-tme\",\n\
-        \  \"version_number\": \"1.0.0-beta\",\n  \"items\": [{\n    \"external_id\"\
-        : \"slide_1\",\n    \"input_artifacts\": [{\n      \"name\": \"user_slide\"\
-        ,\n      \"download_url\": \"https://...\",\n      \"custom_metadata\": {\n\
-        \        \"specimen\": {\n          \"disease\": \"LUNG_CANCER\",\n        \
-        \  \"tissue\": \"LUNG\"\n        },\n        \"staining_method\": \"H&E\",\n\
-        \        \"width_px\": 136223,\n        \"height_px\": 87761,\n        \"resolution_mpp\"\
-        : 0.2628238,\n        \"media-type\": \"image/tiff\",\n        \"checksum_base64_crc32c\"\
-        : \"64RKKA==\"\n      }\n    }]\n  }]\n}\n```\n\n| Parameter | Description\
-        \ |\n| :---- | :---- |\n| `application_id` required | Unique ID for the application\
-        \ |\n| `version_number` optional | Semantic version of the application. If\
-        \ not provided, the latest available version will be used |\n| `items` required\
-        \ | List of submitted items (WSIs) with parameters described below. |\n| `external_id`\
-        \ required | Unique WSI name or ID for easy reference to items, provided by\
-        \ the caller. The external_id should be unique across all items of the run.\
-        \ |\n| `input_artifacts` required | List of provided artifacts for a WSI;\
-        \ at the moment Atlas H&E-TME receives only 1 artifact per slide (the slide\
-        \ itself), but for some other applications this can be a slide and a segmentation\
-        \ map |\n| `name` required | Type of artifact; Atlas H&E-TME supports only\
-        \ `\"input_slide\"` |\n| `download_url` required | Signed URL to the input\
-        \ file in S3 or GCS; should be valid for at least 6 days |\n| `specimen: disease`\
-        \ required | Supported cancer types for Atlas H&E-TME (see full list in Atlas\
-        \ H&E-TME manual) |\n| `specimen: tissue` required | Supported tissue types\
-        \ for Atlas H&E-TME (see full list in Atlas H&E-TME manual) |\n| `staining_method`\
-        \ required | WSI stain/biomarker; Atlas H&E-TME supports only `\"H&E\"` |\n\
-        | `width_px` required | Integer value. Number of pixels of the WSI in the\
-        \ X dimension. |\n| `height_px` required | Integer value. Number of pixels\
-        \ of the WSI in the Y dimension. |\n| `resolution_mpp` required | Resolution\
-        \ of WSI in micrometers per pixel; check allowed range in Atlas H&E-TME manual\
-        \ |\n| `media-type` required | Supported media formats; available values are:\
-        \ image/tiff (for .tiff or .tif WSI), application/dicom (for DICOM), application/zip\
-        \ (for zipped DICOM), application/octet-stream (for .svs WSI) |\n| `checksum_base64_crc32c`\
-        \ required | Base64 encoded big-endian CRC32C checksum of the WSI image |\
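The `checksum_base64_crc32c` field is the one entry in the table that callers typically have to compute themselves. A minimal sketch, assuming the third-party `crc32c` package (any CRC32C implementation works; `google-crc32c` is similar), that produces the base64-encoded big-endian checksum the table calls for:

```python
import base64
import struct

import crc32c  # third-party CRC32C implementation; an assumption, not pinned by this spec


def checksum_base64_crc32c(path: str) -> str:
    """Base64-encoded, big-endian CRC32C of a file, computed in streaming fashion."""
    crc = 0
    with open(path, "rb") as f:
        # Stream in 1 MiB chunks so multi-gigabyte WSI files never load fully into memory.
        while chunk := f.read(1024 * 1024):
            crc = crc32c.crc32c(chunk, crc)
    # ">I" packs the 32-bit checksum big-endian before base64 encoding, per the table above.
    return base64.b64encode(struct.pack(">I", crc)).decode("ascii")


# Illustrative use with the example item from the payload above; the path is hypothetical.
print(checksum_base64_crc32c("slide_1.tiff"))
```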
-        \n\n\n### Response\n\nThe endpoint returns the run UUID. After that, the job\
-        \ is scheduled for execution in the background.\n\nTo check the status of the\
-        \ run, call GET `/v1/runs/{run_id}`.\n\n### Rejection\n\nApart from authentication,\
-        \ authorization, and malformed input errors, the request can be\nrejected when\
-        \ the quota limit is exceeded. More details on quotas are described in the\n\
-        documentation."
-      operationId: create_run_v1_runs_post
-      requestBody:
-        content:
-          application/json:
-            schema:
-              $ref: '#/components/schemas/RunCreationRequest'
-        required: true
-      responses:
-        '201':
-          content:
-            application/json:
-              schema:
-                $ref: '#/components/schemas/RunCreationResponse'
-          description: Successful Response
-        '400':
-          description: Bad Request - Input validation failed
-        '403':
-          description: Forbidden - You don't have permission to create this run
-        '404':
-          description: Application version not found
-        '422':
-          content:
-            application/json:
-              schema:
-                $ref: '#/components/schemas/HTTPValidationError'
-          description: Validation Error
-      security:
-      - OAuth2AuthorizationCodeBearer: []
-      summary: Initiate Run
-      tags:
-      - Public
-  /v1/runs/{run_id}:
-    get:
-      description: "This endpoint allows the caller to retrieve the current status\
-        \ of a run along with other relevant run details.\nA run becomes available\
-        \ immediately after it is created through the POST `/runs/` endpoint.\n\n\
-        To download the output results, use GET `/runs/{run_id}/items` to get outputs\
-        \ for all slides.\nAccess to a run is restricted to the user who created it."
-      operationId: get_run_v1_runs__run_id__get
-      parameters:
-      - description: Run id, returned by `POST /runs/` endpoint
-        in: path
-        name: run_id
-        required: true
-        schema:
-          description: Run id, returned by `POST /runs/` endpoint
-          format: uuid
-          title: Run Id
-          type: string
-      responses:
-        '200':
-          content:
-            application/json:
-              schema:
-                $ref: '#/components/schemas/RunReadResponse'
-          description: Successful Response
-        '403':
-          description: Forbidden - You don't have permission to see this run
-        '404':
-          description: Run not found because it was deleted.
-        '422':
-          content:
-            application/json:
-              schema:
-                $ref: '#/components/schemas/HTTPValidationError'
-          description: Validation Error
-      security:
-      - OAuth2AuthorizationCodeBearer: []
-      summary: Get run details
-      tags:
-      - Public
-  /v1/runs/{run_id}/artifacts:
-    delete:
-      description: "This endpoint allows the caller to explicitly delete artifacts\
-        \ generated by a run.\nIt can only be invoked when the run has reached a final\
-        \ state\n(PROCESSED, CANCELED_SYSTEM, CANCELED_USER).\nNote that by default,\
-        \ all artifacts are automatically deleted 30 days after the run finishes,\n\
-        regardless of whether the caller explicitly requests deletion."
-      operationId: delete_run_items_v1_runs__run_id__artifacts_delete
-      parameters:
-      - description: Run id, returned by `POST /runs/` endpoint
-        in: path
-        name: run_id
-        required: true
-        schema:
-          description: Run id, returned by `POST /runs/` endpoint
-          format: uuid
-          title: Run Id
-          type: string
-      responses:
-        '200':
-          content:
-            application/json:
-              schema: {}
-          description: Run artifacts deleted
-        '404':
-          description: Run not found
-        '422':
-          content:
-            application/json:
-              schema:
-                $ref: '#/components/schemas/HTTPValidationError'
-          description: Validation Error
-      security:
-      - OAuth2AuthorizationCodeBearer: []
-      summary: Delete Run Items
-      tags:
-      - Public
-  /v1/runs/{run_id}/cancel:
-    post:
-      description: 'The run can be canceled by the user who created the run.
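Because processing is asynchronous, callers typically poll GET `/v1/runs/{run_id}` until the run settles. A minimal sketch with `requests` (the base URL, token handling, and the `status` field name are assumptions; the final states are the ones listed for the artifacts endpoint above):

```python
import time

import requests  # plain HTTP for illustration; the SDK client wraps these endpoints

# Final states as listed in the artifacts endpoint description above.
FINAL_STATES = {"PROCESSED", "CANCELED_SYSTEM", "CANCELED_USER"}


def wait_for_run(base_url: str, token: str, run_id: str, poll_seconds: float = 30.0) -> dict:
    """Poll GET /v1/runs/{run_id} until the run reaches a final state; a sketch."""
    headers = {"Authorization": f"Bearer {token}"}
    while True:
        response = requests.get(f"{base_url}/v1/runs/{run_id}", headers=headers, timeout=30)
        response.raise_for_status()
        run = response.json()
        if run.get("status") in FINAL_STATES:
            return run
        time.sleep(poll_seconds)
```

Completed slides can be fetched earlier via the items endpoint below; polling the run itself is only needed to learn when the run as a whole is done.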
- - - The execution can be canceled any time while the application is not in a final - state. The - - pending items will not be processed and will not add to the cost. - - - When the application is canceled, the already completed items stay available - for download.' - operationId: cancel_run_v1_runs__run_id__cancel_post - parameters: - - description: Run id, returned by `POST /runs/` endpoint - in: path - name: run_id - required: true - schema: - description: Run id, returned by `POST /runs/` endpoint - format: uuid - title: Run Id - type: string - responses: - '202': - content: - application/json: - schema: {} - description: Successful Response - '403': - description: Forbidden - You don't have permission to cancel this run - '404': - description: Run not found - '409': - description: Conflict - The Run is already cancelled - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - security: - - OAuth2AuthorizationCodeBearer: [] - summary: Cancel Run - tags: - - Public - /v1/runs/{run_id}/custom-metadata: - put: - operationId: put_run_custom_metadata_v1_runs__run_id__custom_metadata_put - parameters: - - description: Run id, returned by `POST /runs/` endpoint - in: path - name: run_id - required: true - schema: - description: Run id, returned by `POST /runs/` endpoint - format: uuid - title: Run Id - type: string - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/CustomMetadataUpdateRequest' - required: true - responses: - '200': - content: - application/json: - schema: {} - description: Successful Response - '404': - description: Run not found - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - security: - - OAuth2AuthorizationCodeBearer: [] - summary: Put Run Custom Metadata - tags: - - Public - /v1/runs/{run_id}/items: - get: - description: 'List items in a run with filtering, sorting, and pagination capabilities. - - - Returns paginated items within a specific run. Results can be filtered - - by item IDs, external_ids, status, and custom_metadata using JSONPath expressions. - - - ## JSONPath Metadata Filtering - - Use PostgreSQL JSONPath expressions to filter items using their custom_metadata. - - - ### Examples: - - - **Field existence**: `$.case_id` - Results that have a case_id field defined - - - **Exact value match**: `$.priority ? (@ == "high")` - Results with high - priority - - - **Numeric comparison**: `$.confidence_score ? (@ > 0.95)` - Results with - high confidence - - - **Array operations**: `$.flags[*] ? (@ == "reviewed")` - Results flagged - as reviewed - - - **Complex conditions**: `$.metrics ? 
(@.accuracy > 0.9 && @.recall > 0.8)` - - Results meeting performance thresholds - - - ## Notes - - - JSONPath expressions are evaluated using PostgreSQL''s `@?` operator - - - The `$.` prefix is automatically added to root-level field references if - missing - - - String values in conditions must be enclosed in double quotes - - - Use `&&` for AND operations and `||` for OR operations' - operationId: list_run_items_v1_runs__run_id__items_get - parameters: - - description: Run id, returned by `POST /runs/` endpoint - in: path - name: run_id - required: true - schema: - description: Run id, returned by `POST /runs/` endpoint - format: uuid - title: Run Id - type: string - - description: Filter for item ids - in: query - name: item_id__in - required: false - schema: - anyOf: - - items: - format: uuid - type: string - type: array - - type: 'null' - description: Filter for item ids - title: Item Id In - - description: Filter for items by their external_id from the input payload - in: query - name: external_id__in - required: false - schema: - anyOf: - - items: - type: string - type: array - - type: 'null' - description: Filter for items by their external_id from the input payload - title: External Id In - - description: Filter items by their state - in: query - name: state - required: false - schema: - anyOf: - - $ref: '#/components/schemas/ItemState' - - type: 'null' - description: Filter items by their state - title: State - - description: Filter items by their termination reason. Only applies to TERMINATED - items. - in: query - name: termination_reason - required: false - schema: - anyOf: - - $ref: '#/components/schemas/ItemTerminationReason' - - type: 'null' - description: Filter items by their termination reason. Only applies to TERMINATED - items. - title: Termination Reason - - description: JSONPath expression to filter items by their custom_metadata - examples: - array_operations: - description: Check if an array contains a certain value - summary: Check if an array contains a certain value - value: $.tags[*] ? (@ == "production") - complex_filters: - description: Combine multiple checks - summary: Combine multiple checks - value: $.resources ? (@.gpu_count > 2 && @.memory_gb >= 16) - field_exists: - description: Find items that have a project field defined - summary: Check if field exists - value: $.project - field_has_value: - description: Compare a field value against a certain value - summary: Check if field has a certain value - value: $.project ? (@ == "cancer-research") - no_filter: - description: Returns all items without filtering by custom metadata - summary: No filter (returns all) - value: $ - numeric_comparisons: - description: Compare a field value against a numeric value of a field - summary: Compare to a numeric value of a field - value: $.duration_hours ? (@ < 2) - in: query - name: custom_metadata - required: false - schema: - anyOf: - - maxLength: 1000 - type: string - - type: 'null' - description: JSONPath expression to filter items by their custom_metadata - title: Custom Metadata - - in: query - name: page - required: false - schema: - default: 1 - minimum: 1 - title: Page - type: integer - - in: query - name: page_size - required: false - schema: - default: 50 - maximum: 100 - minimum: 5 - title: Page Size - type: integer - - description: "Sort the items by one or more fields. 
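Since `page_size` is capped at 100, retrieving every item of a large run means walking the pages until a short page comes back. A minimal sketch under the same assumptions as above (plain `requests`, caller-supplied base URL and token):

```python
import requests


def iter_run_items(base_url: str, token: str, run_id: str, page_size: int = 100):
    """Yield every item of a run, walking /v1/runs/{run_id}/items page by page; a sketch."""
    headers = {"Authorization": f"Bearer {token}"}
    page = 1
    while True:
        response = requests.get(
            f"{base_url}/v1/runs/{run_id}/items",
            headers=headers,
            # requests URL-encodes query parameters automatically; a JSONPath
            # custom_metadata filter could be added to this dict as well.
            params={"page": page, "page_size": page_size},
            timeout=30,
        )
        response.raise_for_status()
        items = response.json()  # the 200 response is a JSON array of items
        yield from items
        if len(items) < page_size:  # a short page signals the end of the collection
            return
        page += 1
```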
Use `+` for ascending\ - \ and `-` for descending order.\n **Available fields:**\n\ - - `item_id`\n- `run_id`\n- `external_id`\n- `custom_metadata`\n\n**Examples:**\n\ - - `?sort=item_id` - Sort by id of the item (ascending)\n- `?sort=-external_id`\ - \ - Sort by external ID (descending)\n- `?sort=custom_metadata&sort=-external_id`\ - \ - Sort by metadata, then by external ID (descending)" - in: query - name: sort - required: false - schema: - anyOf: - - items: - type: string - type: array - - type: 'null' - description: "Sort the items by one or more fields. Use `+` for ascending\ - \ and `-` for descending order.\n **Available fields:**\n\ - - `item_id`\n- `run_id`\n- `external_id`\n- `custom_metadata`\n\n**Examples:**\n\ - - `?sort=item_id` - Sort by id of the item (ascending)\n- `?sort=-external_id`\ - \ - Sort by external ID (descending)\n- `?sort=custom_metadata&sort=-external_id`\ - \ - Sort by metadata, then by external ID (descending)" - title: Sort - responses: - '200': - content: - application/json: - schema: - items: - $ref: '#/components/schemas/ItemResultReadResponse' - title: Response List Run Items V1 Runs Run Id Items Get - type: array - description: Successful Response - '404': - description: Run not found - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - security: - - OAuth2AuthorizationCodeBearer: [] - summary: List Run Items - tags: - - Public - /v1/runs/{run_id}/items/{external_id}: - get: - description: Retrieve details of a specific item (slide) by its external ID - and the run ID. - operationId: get_item_by_run_v1_runs__run_id__items__external_id__get - parameters: - - description: The run id, returned by `POST /runs/` endpoint - in: path - name: run_id - required: true - schema: - description: The run id, returned by `POST /runs/` endpoint - format: uuid - title: Run Id - type: string - - description: The `external_id` that was defined for the item by the customer - that triggered the run. - in: path - name: external_id - required: true - schema: - description: The `external_id` that was defined for the item by the customer - that triggered the run. - title: External Id - type: string - responses: - '200': - content: - application/json: - schema: - $ref: '#/components/schemas/ItemResultReadResponse' - description: Successful Response - '403': - description: Forbidden - You don't have permission to see this item - '404': - description: Not Found - Item with given ID does not exist - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - security: - - OAuth2AuthorizationCodeBearer: [] - summary: Get Item By Run - tags: - - Public - /v1/runs/{run_id}/items/{external_id}/custom-metadata: - put: - operationId: put_item_custom_metadata_by_run_v1_runs__run_id__items__external_id__custom_metadata_put - parameters: - - description: The run id, returned by `POST /runs/` endpoint - in: path - name: run_id - required: true - schema: - description: The run id, returned by `POST /runs/` endpoint - format: uuid - title: Run Id - type: string - - description: The `external_id` that was defined for the item by the customer - that triggered the run. - in: path - name: external_id - required: true - schema: - description: The `external_id` that was defined for the item by the customer - that triggered the run. 
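Both the run-level and the item-level `custom-metadata` endpoints take a `CustomMetadataUpdateRequest` body and replace the stored metadata, which then becomes filterable through the JSONPath queries shown earlier. A minimal sketch for the item-level PUT (all values are placeholders, and the body shape is an assumption; the exact contract is the `CustomMetadataUpdateRequest` component schema):

```python
import requests

BASE_URL = "https://example.invalid/api"  # placeholder host; not documented in this spec
TOKEN = "..."                             # OAuth2 bearer token, obtained elsewhere
run_id = "00000000-0000-0000-0000-000000000000"  # returned by POST /v1/runs/
external_id = "slide_1"                          # from the original run payload

response = requests.put(
    f"{BASE_URL}/v1/runs/{run_id}/items/{external_id}/custom-metadata",
    headers={"Authorization": f"Bearer {TOKEN}"},
    # Assumed body shape; field names echo the JSONPath filter examples above.
    json={"custom_metadata": {"project": "cancer-research", "tags": ["reviewed"]}},
    timeout=30,
)
response.raise_for_status()
```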
- title: External Id - type: string - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/CustomMetadataUpdateRequest' - required: true - responses: - '200': - content: - application/json: - schema: {} - description: Successful Response - '422': - content: - application/json: - schema: - $ref: '#/components/schemas/HTTPValidationError' - description: Validation Error - security: - - OAuth2AuthorizationCodeBearer: [] - summary: Put Item Custom Metadata By Run - tags: - - Public -servers: -- url: /api +Traceback (most recent call last): + File "/Users/helmut/Code/python-sdk/.nox/docs-3-13/lib/python3.13/site-packages/jsonschema/_format.py", line 304, in + import rfc3987 +ModuleNotFoundError: No module named 'rfc3987' + +During handling of the above exception, another exception occurred: + +Traceback (most recent call last): + File "/Users/helmut/Code/python-sdk/.nox/docs-3-13/bin/aignostics", line 4, in + from aignostics.cli import cli + File "/Users/helmut/Code/python-sdk/src/aignostics/cli.py", line 67, in + prepare_cli(cli, f"🔬 Aignostics Python SDK v{__version__} - built with love in Berlin 🐻") + ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/Users/helmut/Code/python-sdk/src/aignostics/utils/_cli.py", line 19, in prepare_cli + for _cli in locate_implementations(typer.Typer): + ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^ + File "/Users/helmut/Code/python-sdk/src/aignostics/utils/_di.py", line 37, in locate_implementations + module = importlib.import_module(f"{__project_name__}.{name}") + File "/Users/helmut/.local/share/uv/python/cpython-3.13.6-macos-aarch64-none/lib/python3.13/importlib/__init__.py", line 88, in import_module + return _bootstrap._gcd_import(name[level:], package, level) + ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/Users/helmut/Code/python-sdk/src/aignostics/application/__init__.py", line 3, in + from ._cli import cli + File "/Users/helmut/Code/python-sdk/src/aignostics/application/_cli.py", line 12, in + from aignostics.bucket import Service as BucketService + File "/Users/helmut/Code/python-sdk/src/aignostics/bucket/__init__.py", line 7, in + from ._cli import cli + File "/Users/helmut/Code/python-sdk/src/aignostics/bucket/_cli.py", line 16, in + from ._service import DownloadProgress, Service + File "/Users/helmut/Code/python-sdk/src/aignostics/bucket/_service.py", line 16, in + from aignostics.platform import Service as PlatformService + File "/Users/helmut/Code/python-sdk/src/aignostics/platform/__init__.py", line 28, in + from ._cli import cli + File "/Users/helmut/Code/python-sdk/src/aignostics/platform/_cli.py", line 10, in + from ._service import Service + File "/Users/helmut/Code/python-sdk/src/aignostics/platform/_service.py", line 17, in + from ._client import Client + File "/Users/helmut/Code/python-sdk/src/aignostics/platform/_client.py", line 13, in + from aignostics.platform.resources.runs import ApplicationRun, Runs + File "/Users/helmut/Code/python-sdk/src/aignostics/platform/resources/runs.py", line 28, in + from jsonschema.exceptions import ValidationError + File "/Users/helmut/Code/python-sdk/.nox/docs-3-13/lib/python3.13/site-packages/jsonschema/__init__.py", line 13, in + from jsonschema._format import FormatChecker + File "/Users/helmut/Code/python-sdk/.nox/docs-3-13/lib/python3.13/site-packages/jsonschema/_format.py", line 328, in + from rfc3987_syntax import is_valid_syntax as _rfc3987_is_valid_syntax + File 
"/Users/helmut/Code/python-sdk/.nox/docs-3-13/lib/python3.13/site-packages/rfc3987_syntax/__init__.py", line 1, in + from .syntax_helpers import * + File "/Users/helmut/Code/python-sdk/.nox/docs-3-13/lib/python3.13/site-packages/rfc3987_syntax/syntax_helpers.py", line 88, in + is_valid_syntax_ihier_part = make_syntax_validator("ihier_part") + File "/Users/helmut/Code/python-sdk/.nox/docs-3-13/lib/python3.13/site-packages/rfc3987_syntax/syntax_helpers.py", line 66, in make_syntax_validator + parser = Lark(grammar, start=rule_name, parser=RFC3987_SYNTAX_PARSER_TYPE) + File "/Users/helmut/Code/python-sdk/.nox/docs-3-13/lib/python3.13/site-packages/lark/lark.py", line 407, in __init__ + self.terminals, self.rules, self.ignore_tokens = self.grammar.compile(self.options.start, terminals_to_keep) + ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/Users/helmut/Code/python-sdk/.nox/docs-3-13/lib/python3.13/site-packages/lark/load_grammar.py", line 738, in compile + tree = transformer.transform(rule_tree) + File "/Users/helmut/Code/python-sdk/.nox/docs-3-13/lib/python3.13/site-packages/lark/visitors.py", line 264, in transform + tree = t.transform(tree) + File "/Users/helmut/Code/python-sdk/.nox/docs-3-13/lib/python3.13/site-packages/lark/visitors.py", line 284, in transform + subtree.children = list(self._transform_children(subtree.children)) + ~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/Users/helmut/Code/python-sdk/.nox/docs-3-13/lib/python3.13/site-packages/lark/visitors.py", line 146, in _transform_children + res = self._transform_tree(c) + File "/Users/helmut/Code/python-sdk/.nox/docs-3-13/lib/python3.13/site-packages/lark/visitors.py", line 280, in _transform_tree + return self._call_userfunc(tree) + ~~~~~~~~~~~~~~~~~~~^^^^^^ + File "/Users/helmut/Code/python-sdk/.nox/docs-3-13/lib/python3.13/site-packages/lark/visitors.py", line 115, in _call_userfunc + f = getattr(self, tree.data) + File "/Users/helmut/Code/python-sdk/.nox/docs-3-13/lib/python3.13/site-packages/lark/visitors.py", line 484, in __get__ + def __get__(self, instance, owner=None): + +KeyboardInterrupt diff --git a/docs/source/_static/sdk_item_custom_metadata_schema_latest.json b/docs/source/_static/sdk_item_custom_metadata_schema_latest.json deleted file mode 100644 index d0fe3df0..00000000 --- a/docs/source/_static/sdk_item_custom_metadata_schema_latest.json +++ /dev/null @@ -1,89 +0,0 @@ -{ - "$defs": { - "PlatformBucketMetadata": { - "description": "Platform bucket storage metadata for items.", - "properties": { - "bucket_name": { - "description": "Name of the cloud storage bucket", - "title": "Bucket Name", - "type": "string" - }, - "object_key": { - "description": "Object key/path within the bucket", - "title": "Object Key", - "type": "string" - }, - "signed_download_url": { - "description": "Signed URL for downloading the object", - "title": "Signed Download Url", - "type": "string" - } - }, - "required": [ - "bucket_name", - "object_key", - "signed_download_url" - ], - "title": "PlatformBucketMetadata", - "type": "object" - } - }, - "additionalProperties": false, - "description": "Complete Item SDK metadata schema.\n\nThis model defines the structure and validation rules for SDK metadata\nthat is attached to individual items within application runs. 
It includes\ninformation about where the item is stored in the platform's cloud storage.", - "properties": { - "schema_version": { - "description": "Schema version for this metadata format", - "pattern": "^\\d+\\.\\d+\\.\\d+-?.*$", - "title": "Schema Version", - "type": "string" - }, - "created_at": { - "description": "ISO 8601 timestamp when the metadata was first created", - "title": "Created At", - "type": "string" - }, - "updated_at": { - "description": "ISO 8601 timestamp when the metadata was last updated", - "title": "Updated At", - "type": "string" - }, - "tags": { - "anyOf": [ - { - "items": { - "type": "string" - }, - "type": "array", - "uniqueItems": true - }, - { - "type": "null" - } - ], - "default": null, - "description": "Optional list of tags associated with the item", - "title": "Tags" - }, - "platform_bucket": { - "anyOf": [ - { - "$ref": "#/$defs/PlatformBucketMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Platform bucket storage information" - } - }, - "required": [ - "schema_version", - "created_at", - "updated_at" - ], - "title": "ItemSdkMetadata", - "type": "object", - "$schema": "https://json-schema.org/draft/2020-12/schema", - "$id": "https://raw.githubusercontent.com/aignostics/python-sdk/main/docs/source/_static/item_sdk_metadata_schema_v0.0.3.json" -} \ No newline at end of file diff --git a/docs/source/_static/sdk_item_custom_metadata_schema_v0.0.1.json b/docs/source/_static/sdk_item_custom_metadata_schema_v0.0.1.json deleted file mode 100644 index 38ac2668..00000000 --- a/docs/source/_static/sdk_item_custom_metadata_schema_v0.0.1.json +++ /dev/null @@ -1,60 +0,0 @@ -{ - "$defs": { - "PlatformBucketMetadata": { - "description": "Platform bucket storage metadata for items.", - "properties": { - "bucket_name": { - "description": "Name of the cloud storage bucket", - "title": "Bucket Name", - "type": "string" - }, - "object_key": { - "description": "Object key/path within the bucket", - "title": "Object Key", - "type": "string" - }, - "signed_download_url": { - "description": "Signed URL for downloading the object", - "title": "Signed Download Url", - "type": "string" - } - }, - "required": [ - "bucket_name", - "object_key", - "signed_download_url" - ], - "title": "PlatformBucketMetadata", - "type": "object" - } - }, - "additionalProperties": false, - "description": "Complete Item SDK metadata schema.\n\nThis model defines the structure and validation rules for SDK metadata\nthat is attached to individual items within application runs. 
It includes\ninformation about where the item is stored in the platform's cloud storage.", - "properties": { - "schema_version": { - "description": "Schema version for this metadata format", - "pattern": "^\\d+\\.\\d+\\.\\d+-?.*$", - "title": "Schema Version", - "type": "string" - }, - "platform_bucket": { - "anyOf": [ - { - "$ref": "#/$defs/PlatformBucketMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Platform bucket storage information" - } - }, - "required": [ - "schema_version" - ], - "title": "ItemSdkMetadata", - "type": "object", - "$schema": "https://json-schema.org/draft/2020-12/schema", - "$id": "https://raw.githubusercontent.com/aignostics/python-sdk/main/docs/source/_static/item_sdk_metadata_schema_v0.0.1.json" -} \ No newline at end of file diff --git a/docs/source/_static/sdk_item_custom_metadata_schema_v0.0.3.json b/docs/source/_static/sdk_item_custom_metadata_schema_v0.0.3.json deleted file mode 100644 index d0fe3df0..00000000 --- a/docs/source/_static/sdk_item_custom_metadata_schema_v0.0.3.json +++ /dev/null @@ -1,89 +0,0 @@ -{ - "$defs": { - "PlatformBucketMetadata": { - "description": "Platform bucket storage metadata for items.", - "properties": { - "bucket_name": { - "description": "Name of the cloud storage bucket", - "title": "Bucket Name", - "type": "string" - }, - "object_key": { - "description": "Object key/path within the bucket", - "title": "Object Key", - "type": "string" - }, - "signed_download_url": { - "description": "Signed URL for downloading the object", - "title": "Signed Download Url", - "type": "string" - } - }, - "required": [ - "bucket_name", - "object_key", - "signed_download_url" - ], - "title": "PlatformBucketMetadata", - "type": "object" - } - }, - "additionalProperties": false, - "description": "Complete Item SDK metadata schema.\n\nThis model defines the structure and validation rules for SDK metadata\nthat is attached to individual items within application runs. 
It includes\ninformation about where the item is stored in the platform's cloud storage.", - "properties": { - "schema_version": { - "description": "Schema version for this metadata format", - "pattern": "^\\d+\\.\\d+\\.\\d+-?.*$", - "title": "Schema Version", - "type": "string" - }, - "created_at": { - "description": "ISO 8601 timestamp when the metadata was first created", - "title": "Created At", - "type": "string" - }, - "updated_at": { - "description": "ISO 8601 timestamp when the metadata was last updated", - "title": "Updated At", - "type": "string" - }, - "tags": { - "anyOf": [ - { - "items": { - "type": "string" - }, - "type": "array", - "uniqueItems": true - }, - { - "type": "null" - } - ], - "default": null, - "description": "Optional list of tags associated with the item", - "title": "Tags" - }, - "platform_bucket": { - "anyOf": [ - { - "$ref": "#/$defs/PlatformBucketMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Platform bucket storage information" - } - }, - "required": [ - "schema_version", - "created_at", - "updated_at" - ], - "title": "ItemSdkMetadata", - "type": "object", - "$schema": "https://json-schema.org/draft/2020-12/schema", - "$id": "https://raw.githubusercontent.com/aignostics/python-sdk/main/docs/source/_static/item_sdk_metadata_schema_v0.0.3.json" -} \ No newline at end of file diff --git a/docs/source/_static/sdk_run_custom_metadata_schema_latest.json b/docs/source/_static/sdk_run_custom_metadata_schema_latest.json deleted file mode 100644 index 674f060c..00000000 --- a/docs/source/_static/sdk_run_custom_metadata_schema_latest.json +++ /dev/null @@ -1,489 +0,0 @@ -{ - "$defs": { - "CIMetadata": { - "description": "CI/CD environment metadata.", - "properties": { - "github": { - "anyOf": [ - { - "$ref": "#/$defs/GitHubCIMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "GitHub Actions metadata" - }, - "pytest": { - "anyOf": [ - { - "$ref": "#/$defs/PytestCIMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Pytest test metadata" - } - }, - "title": "CIMetadata", - "type": "object" - }, - "GitHubCIMetadata": { - "description": "GitHub Actions CI metadata.", - "properties": { - "action": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "GitHub Action name", - "title": "Action" - }, - "job": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "GitHub job name", - "title": "Job" - }, - "ref": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git reference", - "title": "Ref" - }, - "ref_name": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git reference name", - "title": "Ref Name" - }, - "ref_type": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git reference type (branch, tag)", - "title": "Ref Type" - }, - "repository": { - "description": "Repository name (owner/repo)", - "title": "Repository", - "type": "string" - }, - "run_attempt": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Attempt number for this run", - "title": "Run Attempt" - }, - "run_id": { - "description": "Unique ID for this workflow run", - "title": "Run Id", - "type": "string" - }, - "run_number": { - "anyOf": [ - { - "type": "string" - }, - { 
- "type": "null" - } - ], - "default": null, - "description": "Run number for this workflow", - "title": "Run Number" - }, - "run_url": { - "description": "URL to the workflow run", - "title": "Run Url", - "type": "string" - }, - "runner_arch": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Runner architecture (x64, ARM64, etc.)", - "title": "Runner Arch" - }, - "runner_os": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Runner operating system", - "title": "Runner Os" - }, - "sha": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git commit SHA", - "title": "Sha" - }, - "workflow": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Workflow name", - "title": "Workflow" - }, - "workflow_ref": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Reference to the workflow file", - "title": "Workflow Ref" - } - }, - "required": [ - "repository", - "run_id", - "run_url" - ], - "title": "GitHubCIMetadata", - "type": "object" - }, - "PytestCIMetadata": { - "description": "Pytest test execution metadata.", - "properties": { - "current_test": { - "description": "Current test being executed", - "title": "Current Test", - "type": "string" - }, - "markers": { - "anyOf": [ - { - "items": { - "type": "string" - }, - "type": "array" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Pytest markers applied to the test", - "title": "Markers" - } - }, - "required": [ - "current_test" - ], - "title": "PytestCIMetadata", - "type": "object" - }, - "SchedulingMetadata": { - "description": "Scheduling metadata for run execution.", - "properties": { - "due_date": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Requested completion time (ISO 8601). Scheduler will try to complete before this time.", - "title": "Due Date" - }, - "deadline": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Hard deadline (ISO 8601). 
Run may be aborted if processing exceeds this time.", - "title": "Deadline" - } - }, - "title": "SchedulingMetadata", - "type": "object" - }, - "SubmissionMetadata": { - "description": "Metadata about how the SDK was invoked.", - "properties": { - "date": { - "description": "ISO 8601 timestamp of submission", - "title": "Date", - "type": "string" - }, - "interface": { - "description": "How the SDK was accessed (script, cli, launchpad)", - "enum": [ - "script", - "cli", - "launchpad" - ], - "title": "Interface", - "type": "string" - }, - "initiator": { - "description": "Who/what initiated the run (user, test, bridge)", - "enum": [ - "user", - "test", - "bridge" - ], - "title": "Initiator", - "type": "string" - } - }, - "required": [ - "date", - "interface", - "initiator" - ], - "title": "SubmissionMetadata", - "type": "object" - }, - "UserMetadata": { - "description": "User information metadata.", - "properties": { - "organization_id": { - "description": "User's organization ID", - "title": "Organization Id", - "type": "string" - }, - "organization_name": { - "description": "User's organization name", - "title": "Organization Name", - "type": "string" - }, - "user_email": { - "description": "User's email address", - "title": "User Email", - "type": "string" - }, - "user_id": { - "description": "User's unique ID", - "title": "User Id", - "type": "string" - } - }, - "required": [ - "organization_id", - "organization_name", - "user_email", - "user_id" - ], - "title": "UserMetadata", - "type": "object" - }, - "WorkflowMetadata": { - "description": "Workflow control metadata.", - "properties": { - "onboard_to_aignostics_portal": { - "default": false, - "description": "Whether to onboard results to the Aignostics Portal", - "title": "Onboard To Aignostics Portal", - "type": "boolean" - }, - "validate_only": { - "default": false, - "description": "Whether to only validate without running analysis", - "title": "Validate Only", - "type": "boolean" - } - }, - "title": "WorkflowMetadata", - "type": "object" - } - }, - "additionalProperties": false, - "description": "Complete Run SDK metadata schema.\n\nThis model defines the structure and validation rules for SDK metadata\nthat is attached to application runs. 
It includes information about:\n- SDK version and timestamps\n- User information (when available)\n- CI/CD environment context (GitHub Actions, pytest)\n- Workflow control flags\n- Scheduling information\n- Optional user note", - "properties": { - "schema_version": { - "description": "Schema version for this metadata format", - "pattern": "^\\d+\\.\\d+\\.\\d+-?.*$", - "title": "Schema Version", - "type": "string" - }, - "created_at": { - "description": "ISO 8601 timestamp when the metadata was first created", - "title": "Created At", - "type": "string" - }, - "updated_at": { - "description": "ISO 8601 timestamp when the metadata was last updated", - "title": "Updated At", - "type": "string" - }, - "tags": { - "anyOf": [ - { - "items": { - "type": "string" - }, - "type": "array", - "uniqueItems": true - }, - { - "type": "null" - } - ], - "default": null, - "description": "Optional list of tags associated with the run", - "title": "Tags" - }, - "submission": { - "$ref": "#/$defs/SubmissionMetadata", - "description": "Submission context metadata" - }, - "user_agent": { - "description": "User agent string for the SDK client", - "title": "User Agent", - "type": "string" - }, - "user": { - "anyOf": [ - { - "$ref": "#/$defs/UserMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "User information (when authenticated)" - }, - "ci": { - "anyOf": [ - { - "$ref": "#/$defs/CIMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "CI/CD environment metadata" - }, - "note": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Optional user note for the run", - "title": "Note" - }, - "workflow": { - "anyOf": [ - { - "$ref": "#/$defs/WorkflowMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Workflow control flags" - }, - "scheduling": { - "anyOf": [ - { - "$ref": "#/$defs/SchedulingMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Scheduling information" - } - }, - "required": [ - "schema_version", - "created_at", - "updated_at", - "submission", - "user_agent" - ], - "title": "RunSdkMetadata", - "type": "object", - "$schema": "https://json-schema.org/draft/2020-12/schema", - "$id": "https://raw.githubusercontent.com/aignostics/python-sdk/main/docs/source/_static/sdk_metadata_schema_v0.0.4.json" -} \ No newline at end of file diff --git a/docs/source/_static/sdk_run_custom_metadata_schema_v0.0.1.json b/docs/source/_static/sdk_run_custom_metadata_schema_v0.0.1.json deleted file mode 100644 index 4200eb60..00000000 --- a/docs/source/_static/sdk_run_custom_metadata_schema_v0.0.1.json +++ /dev/null @@ -1,460 +0,0 @@ -{ - "$defs": { - "CIMetadata": { - "description": "CI/CD environment metadata.", - "properties": { - "github": { - "anyOf": [ - { - "$ref": "#/$defs/GitHubCIMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "GitHub Actions metadata" - }, - "pytest": { - "anyOf": [ - { - "$ref": "#/$defs/PytestCIMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Pytest test metadata" - } - }, - "title": "CIMetadata", - "type": "object" - }, - "GitHubCIMetadata": { - "description": "GitHub Actions CI metadata.", - "properties": { - "action": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "GitHub Action name", - "title": "Action" - }, - "job": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - 
"default": null, - "description": "GitHub job name", - "title": "Job" - }, - "ref": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git reference", - "title": "Ref" - }, - "ref_name": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git reference name", - "title": "Ref Name" - }, - "ref_type": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git reference type (branch, tag)", - "title": "Ref Type" - }, - "repository": { - "description": "Repository name (owner/repo)", - "title": "Repository", - "type": "string" - }, - "run_attempt": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Attempt number for this run", - "title": "Run Attempt" - }, - "run_id": { - "description": "Unique ID for this workflow run", - "title": "Run Id", - "type": "string" - }, - "run_number": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Run number for this workflow", - "title": "Run Number" - }, - "run_url": { - "description": "URL to the workflow run", - "title": "Run Url", - "type": "string" - }, - "runner_arch": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Runner architecture (x64, ARM64, etc.)", - "title": "Runner Arch" - }, - "runner_os": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Runner operating system", - "title": "Runner Os" - }, - "sha": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git commit SHA", - "title": "Sha" - }, - "workflow": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Workflow name", - "title": "Workflow" - }, - "workflow_ref": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Reference to the workflow file", - "title": "Workflow Ref" - } - }, - "required": [ - "repository", - "run_id", - "run_url" - ], - "title": "GitHubCIMetadata", - "type": "object" - }, - "PytestCIMetadata": { - "description": "Pytest test execution metadata.", - "properties": { - "current_test": { - "description": "Current test being executed", - "title": "Current Test", - "type": "string" - }, - "markers": { - "anyOf": [ - { - "items": { - "type": "string" - }, - "type": "array" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Pytest markers applied to the test", - "title": "Markers" - } - }, - "required": [ - "current_test" - ], - "title": "PytestCIMetadata", - "type": "object" - }, - "SchedulingMetadata": { - "description": "Scheduling metadata for run execution.", - "properties": { - "due_date": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Requested completion time (ISO 8601). Scheduler will try to complete before this time.", - "title": "Due Date" - }, - "deadline": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Hard deadline (ISO 8601). 
Run may be aborted if processing exceeds this time.", - "title": "Deadline" - } - }, - "title": "SchedulingMetadata", - "type": "object" - }, - "SubmissionMetadata": { - "description": "Metadata about how the SDK was invoked.", - "properties": { - "date": { - "description": "ISO 8601 timestamp of submission", - "title": "Date", - "type": "string" - }, - "interface": { - "description": "How the SDK was accessed (script, cli, launchpad)", - "enum": [ - "script", - "cli", - "launchpad" - ], - "title": "Interface", - "type": "string" - }, - "source": { - "description": "Who/what initiated the run (user, test, bridge)", - "enum": [ - "user", - "test", - "bridge" - ], - "title": "Source", - "type": "string" - } - }, - "required": [ - "date", - "interface", - "source" - ], - "title": "SubmissionMetadata", - "type": "object" - }, - "UserMetadata": { - "description": "User information metadata.", - "properties": { - "organization_id": { - "description": "User's organization ID", - "title": "Organization Id", - "type": "string" - }, - "organization_name": { - "description": "User's organization name", - "title": "Organization Name", - "type": "string" - }, - "user_email": { - "description": "User's email address", - "title": "User Email", - "type": "string" - }, - "user_id": { - "description": "User's unique ID", - "title": "User Id", - "type": "string" - } - }, - "required": [ - "organization_id", - "organization_name", - "user_email", - "user_id" - ], - "title": "UserMetadata", - "type": "object" - }, - "WorkflowMetadata": { - "description": "Workflow control metadata.", - "properties": { - "onboard_to_aignostics_portal": { - "default": false, - "description": "Whether to onboard results to the Aignostics Portal", - "title": "Onboard To Aignostics Portal", - "type": "boolean" - }, - "validate_only": { - "default": false, - "description": "Whether to only validate without running analysis", - "title": "Validate Only", - "type": "boolean" - } - }, - "title": "WorkflowMetadata", - "type": "object" - } - }, - "additionalProperties": false, - "description": "Complete SDK metadata schema.\n\nThis model defines the structure and validation rules for SDK metadata\nthat is attached to application runs. 
It includes information about:\n- SDK version and submission details\n- User information (when available)\n- CI/CD environment context (GitHub Actions, pytest)\n- Workflow control flags\n- Scheduling information\n- Optional user note", - "properties": { - "schema_version": { - "description": "Schema version for this metadata format", - "pattern": "^\\d+\\.\\d+\\.\\d+-?.*$", - "title": "Schema Version", - "type": "string" - }, - "submission": { - "$ref": "#/$defs/SubmissionMetadata", - "description": "Submission context metadata" - }, - "user_agent": { - "description": "User agent string for the SDK client", - "title": "User Agent", - "type": "string" - }, - "user": { - "anyOf": [ - { - "$ref": "#/$defs/UserMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "User information (when authenticated)" - }, - "ci": { - "anyOf": [ - { - "$ref": "#/$defs/CIMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "CI/CD environment metadata" - }, - "note": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Optional user note for the run", - "title": "Note" - }, - "workflow": { - "anyOf": [ - { - "$ref": "#/$defs/WorkflowMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Workflow control flags" - }, - "scheduling": { - "anyOf": [ - { - "$ref": "#/$defs/SchedulingMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Scheduling information" - } - }, - "required": [ - "schema_version", - "submission", - "user_agent" - ], - "title": "SdkMetadata", - "type": "object", - "$schema": "https://json-schema.org/draft/2020-12/schema", - "$id": "https://raw.githubusercontent.com/aignostics/python-sdk/main/docs/source/_static/sdk_metadata_schema_v0.0.1.json" -} \ No newline at end of file diff --git a/docs/source/_static/sdk_run_custom_metadata_schema_v0.0.2.json b/docs/source/_static/sdk_run_custom_metadata_schema_v0.0.2.json deleted file mode 100644 index c299f8b3..00000000 --- a/docs/source/_static/sdk_run_custom_metadata_schema_v0.0.2.json +++ /dev/null @@ -1,460 +0,0 @@ -{ - "$defs": { - "CIMetadata": { - "description": "CI/CD environment metadata.", - "properties": { - "github": { - "anyOf": [ - { - "$ref": "#/$defs/GitHubCIMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "GitHub Actions metadata" - }, - "pytest": { - "anyOf": [ - { - "$ref": "#/$defs/PytestCIMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Pytest test metadata" - } - }, - "title": "CIMetadata", - "type": "object" - }, - "GitHubCIMetadata": { - "description": "GitHub Actions CI metadata.", - "properties": { - "action": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "GitHub Action name", - "title": "Action" - }, - "job": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "GitHub job name", - "title": "Job" - }, - "ref": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git reference", - "title": "Ref" - }, - "ref_name": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git reference name", - "title": "Ref Name" - }, - "ref_type": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git reference type (branch, tag)", - 
"title": "Ref Type" - }, - "repository": { - "description": "Repository name (owner/repo)", - "title": "Repository", - "type": "string" - }, - "run_attempt": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Attempt number for this run", - "title": "Run Attempt" - }, - "run_id": { - "description": "Unique ID for this workflow run", - "title": "Run Id", - "type": "string" - }, - "run_number": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Run number for this workflow", - "title": "Run Number" - }, - "run_url": { - "description": "URL to the workflow run", - "title": "Run Url", - "type": "string" - }, - "runner_arch": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Runner architecture (x64, ARM64, etc.)", - "title": "Runner Arch" - }, - "runner_os": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Runner operating system", - "title": "Runner Os" - }, - "sha": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git commit SHA", - "title": "Sha" - }, - "workflow": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Workflow name", - "title": "Workflow" - }, - "workflow_ref": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Reference to the workflow file", - "title": "Workflow Ref" - } - }, - "required": [ - "repository", - "run_id", - "run_url" - ], - "title": "GitHubCIMetadata", - "type": "object" - }, - "PytestCIMetadata": { - "description": "Pytest test execution metadata.", - "properties": { - "current_test": { - "description": "Current test being executed", - "title": "Current Test", - "type": "string" - }, - "markers": { - "anyOf": [ - { - "items": { - "type": "string" - }, - "type": "array" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Pytest markers applied to the test", - "title": "Markers" - } - }, - "required": [ - "current_test" - ], - "title": "PytestCIMetadata", - "type": "object" - }, - "SchedulingMetadata": { - "description": "Scheduling metadata for run execution.", - "properties": { - "due_date": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Requested completion time (ISO 8601). Scheduler will try to complete before this time.", - "title": "Due Date" - }, - "deadline": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Hard deadline (ISO 8601). 
Run may be aborted if processing exceeds this time.", - "title": "Deadline" - } - }, - "title": "SchedulingMetadata", - "type": "object" - }, - "SubmissionMetadata": { - "description": "Metadata about how the SDK was invoked.", - "properties": { - "date": { - "description": "ISO 8601 timestamp of submission", - "title": "Date", - "type": "string" - }, - "interface": { - "description": "How the SDK was accessed (script, cli, launchpad)", - "enum": [ - "script", - "cli", - "launchpad" - ], - "title": "Interface", - "type": "string" - }, - "initiator": { - "description": "Who/what initiated the run (user, test, bridge)", - "enum": [ - "user", - "test", - "bridge" - ], - "title": "Initiator", - "type": "string" - } - }, - "required": [ - "date", - "interface", - "initiator" - ], - "title": "SubmissionMetadata", - "type": "object" - }, - "UserMetadata": { - "description": "User information metadata.", - "properties": { - "organization_id": { - "description": "User's organization ID", - "title": "Organization Id", - "type": "string" - }, - "organization_name": { - "description": "User's organization name", - "title": "Organization Name", - "type": "string" - }, - "user_email": { - "description": "User's email address", - "title": "User Email", - "type": "string" - }, - "user_id": { - "description": "User's unique ID", - "title": "User Id", - "type": "string" - } - }, - "required": [ - "organization_id", - "organization_name", - "user_email", - "user_id" - ], - "title": "UserMetadata", - "type": "object" - }, - "WorkflowMetadata": { - "description": "Workflow control metadata.", - "properties": { - "onboard_to_aignostics_portal": { - "default": false, - "description": "Whether to onboard results to the Aignostics Portal", - "title": "Onboard To Aignostics Portal", - "type": "boolean" - }, - "validate_only": { - "default": false, - "description": "Whether to only validate without running analysis", - "title": "Validate Only", - "type": "boolean" - } - }, - "title": "WorkflowMetadata", - "type": "object" - } - }, - "additionalProperties": false, - "description": "Complete Run SDK metadata schema.\n\nThis model defines the structure and validation rules for SDK metadata\nthat is attached to application runs. 
It includes information about:\n- SDK version and submission details\n- User information (when available)\n- CI/CD environment context (GitHub Actions, pytest)\n- Workflow control flags\n- Scheduling information\n- Optional user note", - "properties": { - "schema_version": { - "description": "Schema version for this metadata format", - "pattern": "^\\d+\\.\\d+\\.\\d+-?.*$", - "title": "Schema Version", - "type": "string" - }, - "submission": { - "$ref": "#/$defs/SubmissionMetadata", - "description": "Submission context metadata" - }, - "user_agent": { - "description": "User agent string for the SDK client", - "title": "User Agent", - "type": "string" - }, - "user": { - "anyOf": [ - { - "$ref": "#/$defs/UserMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "User information (when authenticated)" - }, - "ci": { - "anyOf": [ - { - "$ref": "#/$defs/CIMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "CI/CD environment metadata" - }, - "note": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Optional user note for the run", - "title": "Note" - }, - "workflow": { - "anyOf": [ - { - "$ref": "#/$defs/WorkflowMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Workflow control flags" - }, - "scheduling": { - "anyOf": [ - { - "$ref": "#/$defs/SchedulingMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Scheduling information" - } - }, - "required": [ - "schema_version", - "submission", - "user_agent" - ], - "title": "RunSdkMetadata", - "type": "object", - "$schema": "https://json-schema.org/draft/2020-12/schema", - "$id": "https://raw.githubusercontent.com/aignostics/python-sdk/main/docs/source/_static/sdk_metadata_schema_v0.0.2.json" -} \ No newline at end of file diff --git a/docs/source/_static/sdk_run_custom_metadata_schema_v0.0.4.json b/docs/source/_static/sdk_run_custom_metadata_schema_v0.0.4.json deleted file mode 100644 index 674f060c..00000000 --- a/docs/source/_static/sdk_run_custom_metadata_schema_v0.0.4.json +++ /dev/null @@ -1,489 +0,0 @@ -{ - "$defs": { - "CIMetadata": { - "description": "CI/CD environment metadata.", - "properties": { - "github": { - "anyOf": [ - { - "$ref": "#/$defs/GitHubCIMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "GitHub Actions metadata" - }, - "pytest": { - "anyOf": [ - { - "$ref": "#/$defs/PytestCIMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Pytest test metadata" - } - }, - "title": "CIMetadata", - "type": "object" - }, - "GitHubCIMetadata": { - "description": "GitHub Actions CI metadata.", - "properties": { - "action": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "GitHub Action name", - "title": "Action" - }, - "job": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "GitHub job name", - "title": "Job" - }, - "ref": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git reference", - "title": "Ref" - }, - "ref_name": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git reference name", - "title": "Ref Name" - }, - "ref_type": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git reference type (branch, tag)", 
- "title": "Ref Type" - }, - "repository": { - "description": "Repository name (owner/repo)", - "title": "Repository", - "type": "string" - }, - "run_attempt": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Attempt number for this run", - "title": "Run Attempt" - }, - "run_id": { - "description": "Unique ID for this workflow run", - "title": "Run Id", - "type": "string" - }, - "run_number": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Run number for this workflow", - "title": "Run Number" - }, - "run_url": { - "description": "URL to the workflow run", - "title": "Run Url", - "type": "string" - }, - "runner_arch": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Runner architecture (x64, ARM64, etc.)", - "title": "Runner Arch" - }, - "runner_os": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Runner operating system", - "title": "Runner Os" - }, - "sha": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Git commit SHA", - "title": "Sha" - }, - "workflow": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Workflow name", - "title": "Workflow" - }, - "workflow_ref": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Reference to the workflow file", - "title": "Workflow Ref" - } - }, - "required": [ - "repository", - "run_id", - "run_url" - ], - "title": "GitHubCIMetadata", - "type": "object" - }, - "PytestCIMetadata": { - "description": "Pytest test execution metadata.", - "properties": { - "current_test": { - "description": "Current test being executed", - "title": "Current Test", - "type": "string" - }, - "markers": { - "anyOf": [ - { - "items": { - "type": "string" - }, - "type": "array" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Pytest markers applied to the test", - "title": "Markers" - } - }, - "required": [ - "current_test" - ], - "title": "PytestCIMetadata", - "type": "object" - }, - "SchedulingMetadata": { - "description": "Scheduling metadata for run execution.", - "properties": { - "due_date": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Requested completion time (ISO 8601). Scheduler will try to complete before this time.", - "title": "Due Date" - }, - "deadline": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Hard deadline (ISO 8601). 
Run may be aborted if processing exceeds this time.", - "title": "Deadline" - } - }, - "title": "SchedulingMetadata", - "type": "object" - }, - "SubmissionMetadata": { - "description": "Metadata about how the SDK was invoked.", - "properties": { - "date": { - "description": "ISO 8601 timestamp of submission", - "title": "Date", - "type": "string" - }, - "interface": { - "description": "How the SDK was accessed (script, cli, launchpad)", - "enum": [ - "script", - "cli", - "launchpad" - ], - "title": "Interface", - "type": "string" - }, - "initiator": { - "description": "Who/what initiated the run (user, test, bridge)", - "enum": [ - "user", - "test", - "bridge" - ], - "title": "Initiator", - "type": "string" - } - }, - "required": [ - "date", - "interface", - "initiator" - ], - "title": "SubmissionMetadata", - "type": "object" - }, - "UserMetadata": { - "description": "User information metadata.", - "properties": { - "organization_id": { - "description": "User's organization ID", - "title": "Organization Id", - "type": "string" - }, - "organization_name": { - "description": "User's organization name", - "title": "Organization Name", - "type": "string" - }, - "user_email": { - "description": "User's email address", - "title": "User Email", - "type": "string" - }, - "user_id": { - "description": "User's unique ID", - "title": "User Id", - "type": "string" - } - }, - "required": [ - "organization_id", - "organization_name", - "user_email", - "user_id" - ], - "title": "UserMetadata", - "type": "object" - }, - "WorkflowMetadata": { - "description": "Workflow control metadata.", - "properties": { - "onboard_to_aignostics_portal": { - "default": false, - "description": "Whether to onboard results to the Aignostics Portal", - "title": "Onboard To Aignostics Portal", - "type": "boolean" - }, - "validate_only": { - "default": false, - "description": "Whether to only validate without running analysis", - "title": "Validate Only", - "type": "boolean" - } - }, - "title": "WorkflowMetadata", - "type": "object" - } - }, - "additionalProperties": false, - "description": "Complete Run SDK metadata schema.\n\nThis model defines the structure and validation rules for SDK metadata\nthat is attached to application runs. 
It includes information about:\n- SDK version and timestamps\n- User information (when available)\n- CI/CD environment context (GitHub Actions, pytest)\n- Workflow control flags\n- Scheduling information\n- Optional user note", - "properties": { - "schema_version": { - "description": "Schema version for this metadata format", - "pattern": "^\\d+\\.\\d+\\.\\d+-?.*$", - "title": "Schema Version", - "type": "string" - }, - "created_at": { - "description": "ISO 8601 timestamp when the metadata was first created", - "title": "Created At", - "type": "string" - }, - "updated_at": { - "description": "ISO 8601 timestamp when the metadata was last updated", - "title": "Updated At", - "type": "string" - }, - "tags": { - "anyOf": [ - { - "items": { - "type": "string" - }, - "type": "array", - "uniqueItems": true - }, - { - "type": "null" - } - ], - "default": null, - "description": "Optional list of tags associated with the run", - "title": "Tags" - }, - "submission": { - "$ref": "#/$defs/SubmissionMetadata", - "description": "Submission context metadata" - }, - "user_agent": { - "description": "User agent string for the SDK client", - "title": "User Agent", - "type": "string" - }, - "user": { - "anyOf": [ - { - "$ref": "#/$defs/UserMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "User information (when authenticated)" - }, - "ci": { - "anyOf": [ - { - "$ref": "#/$defs/CIMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "CI/CD environment metadata" - }, - "note": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Optional user note for the run", - "title": "Note" - }, - "workflow": { - "anyOf": [ - { - "$ref": "#/$defs/WorkflowMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Workflow control flags" - }, - "scheduling": { - "anyOf": [ - { - "$ref": "#/$defs/SchedulingMetadata" - }, - { - "type": "null" - } - ], - "default": null, - "description": "Scheduling information" - } - }, - "required": [ - "schema_version", - "created_at", - "updated_at", - "submission", - "user_agent" - ], - "title": "RunSdkMetadata", - "type": "object", - "$schema": "https://json-schema.org/draft/2020-12/schema", - "$id": "https://raw.githubusercontent.com/aignostics/python-sdk/main/docs/source/_static/sdk_metadata_schema_v0.0.4.json" -} \ No newline at end of file diff --git a/docs/source/conf.py b/docs/source/conf.py index 6aec4011..f10bd2b4 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -27,7 +27,7 @@ project = "aignostics" author = "Helmut Hoffer von Ankershoffen" copyright = f" (c) 2025-{datetime.now(UTC).year} Aignostics GmbH, Author: {author}" # noqa: A001 -version = "0.2.197" +version = "0.2.189" release = version github_username = "aignostics" github_repository = "python-sdk" diff --git a/examples/notebook.ipynb b/examples/notebook.ipynb index 23dc52c6..fab7b516 100644 --- a/examples/notebook.ipynb +++ b/examples/notebook.ipynb @@ -2,7 +2,6 @@ "cells": [ { "cell_type": "markdown", - "id": "MJUe", "metadata": {}, "source": [ "# Initialize the Client\n", @@ -11,51 +10,53 @@ "- In case you have a browser available, an interactive login flow in your browser is started.\n", "- In case there is no browser available, a device flow is started.\n", "\n", - "**NOTE:** By default, the client caches the access token in your operation systems application cache folder. 
If you do not want to store the access token, set cache_token to False.\n", + "**NOTE:** By default, the client caches the access token in your operating system's application cache folder. If you do not want to store the access token, please initialize the client like this:\n", "\n", "```python\n", - "import aignostics.client as platform\n", + "import aignostics.platform as platform\n", "# initialize the client\n", - "client = platform.Client(cache_token=True)\n", - "```" + "client = platform.Client(cache_token=False)\n", + "```\n", + "\n" ] }, { "cell_type": "code", - "execution_count": 2, - "id": "vblA", + "execution_count": null, "metadata": {}, "outputs": [], "source": [ + "from collections.abc import Iterator\n", + "\n", "import pandas as pd\n", "from pydantic import BaseModel\n", "\n", "\n", "# the following function is used for visualizing the results nicely in this notebook\n", - "def show(models: BaseModel | list[BaseModel]) -> pd.DataFrame:\n", - " if isinstance(models, BaseModel):\n", - " items = [models.model_dump()]\n", - " else:\n", - " items = (a.model_dump() for a in models)\n", + "def show(models: BaseModel | list[BaseModel] | Iterator[BaseModel]) -> pd.DataFrame:\n", + " \"\"\"Visualize the results in a pandas DataFrame.\n", + "\n", + " Returns:\n", + " pd.DataFrame: A DataFrame containing the results.\n", + " \"\"\"\n", + " items = [models.model_dump()] if isinstance(models, BaseModel) else (a.model_dump() for a in models)\n", " return pd.DataFrame(items)" ] }, { "cell_type": "code", - "execution_count": 3, - "id": "bkHC", + "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from aignostics import platform\n", "\n", "# initialize the client\n", - "client = platform.Client(cache_token=True)" + "client = platform.Client(cache_token=False)" ] }, { "cell_type": "markdown", - "id": "lEQa", "metadata": {}, "source": [ "# List our available applications\n", @@ -66,88 +67,8 @@ { "cell_type": "code", "execution_count": null, - "id": "PKri", "metadata": {}, - "outputs": [ - { - "data": { - "text/html": [ - "
[flattened HTML table markup omitted — applications list (application_id, name, regulatory_classes, description, latest_version); the equivalent text/plain rendering follows]
" - ], - "text/plain": [ - " application_id name regulatory_classes \\\n", - "0 ds-8130-custom-he-tme DS-8130-custom-he-tme [unknown] \n", - "1 he-tme Atlas H&E-TME [] \n", - "2 test-app test-app [demo, RuO] \n", - "\n", - " description \\\n", - "0 A custom he-tme application running only on ne... \n", - "1 The Atlas H&E TME is an AI application designe... \n", - "2 This is the test application with two algorith... \n", - "\n", - " latest_version \n", - "0 {'number': '0.1.1', 'released_at': 2025-09-18 ... \n", - "1 {'number': '1.0.0-beta.8', 'released_at': 2025... \n", - "2 {'number': '0.0.4', 'released_at': 2025-10-13 ... " - ] - }, - "execution_count": 4, - "metadata": {}, - "output_type": "execute_result" - } - ], + "outputs": [], "source": [ "applications = client.applications.list()\n", "# visualize\n", @@ -156,166 +77,72 @@ }, { "cell_type": "markdown", - "id": "Xref", "metadata": {}, "source": [ "# List all available versions of an application\n", "\n", - "Now that we know the applications that are available, we can list all the versions of a specific application. In this case, we will use the `test-app` as an example. Using the `application_id`, we can list all the versions of the application:" + "Now that we know the applications that are available, we can list all the versions of a specific application. In this case, we will use the `TwoTask Dummy Application` as an example, which has the `application_id`: `two-task-dummy`. Using the `application_id`, we can list all the versions of the application:" ] }, { "cell_type": "code", - "execution_count": 5, - "id": "SFPL", + "execution_count": null, "metadata": {}, - "outputs": [ - { - "data": { - "text/html": [ - "
[flattened HTML table markup omitted — application versions (number, released_at); the equivalent text/plain rendering follows]
" - ], - "text/plain": [ - " number released_at\n", - "0 0.0.4 2025-10-13 14:57:28.167570+00:00\n", - "1 0.0.4 2025-10-13 14:57:28.167570+00:00\n", - "2 0.0.3 2025-08-07 14:22:36.365984+00:00\n", - "3 0.0.3 2025-08-07 14:22:36.365984+00:00\n", - "4 0.0.1 2025-06-11 16:58:39.825489+00:00\n", - "5 0.0.1 2025-06-11 16:58:39.825489+00:00\n", - "6 0.0.1 2025-06-11 16:58:39.825489+00:00" - ] - }, - "execution_count": 5, - "metadata": {}, - "output_type": "execute_result" - } - ], + "outputs": [], "source": [ - "application_versions = client.applications.versions.list(application=\"test-app\")\n", + "application_versions = client.applications.versions.list(application=\"two-task-dummy\")\n", "# visualize\n", "show(application_versions)" ] }, { "cell_type": "markdown", - "id": "BYtC", "metadata": {}, "source": [ "# Inspect the application version details\n", "\n", - "Now that we have the list of versions, we can inspect the details of a specific version. While we could directly use the list of application version returned by the `list` method, we want to directly query details for a specific application version. In this case, we will use the `test-app` application and version `0.0.4`:" + "Now that we have the list of versions, we can inspect the details of a specific version. While we could directly use the list of application version returned by the `list` method, we want to directly query details for a specific application version. In this case, we will use version `0.35.0`, which has the `application_version_id`: `two-task-dummy:v0.35.0`. We use the `application_version_id` to retrieve further details about the application version:" ] }, { "cell_type": "code", - "execution_count": 5, - "id": "RGSE", + "execution_count": null, "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[InputArtifact(name='whole_slide_image', mime_type='image/tiff', metadata_schema={'type': 'object', 'title': 'RGBImageMetadata', '$schema': 'http://json-schema.org/draft-07/schema#', 'required': ['checksum_base64_crc32c', 'width_px', 'height_px'], 'properties': {'width_px': {'type': 'integer', 'title': 'Width Px'}, 'height_px': {'type': 'integer', 'title': 'Height Px'}, 'media_type': {'enum': ['image/tiff'], 'type': 'string', 'const': 'image/tiff', 'title': 'Media Type', 'default': 'image/tiff'}, 'resolution_mpp': {'type': 'number', 'title': 'Resolution Mpp', 'default': None, 'maximum': 0.5, 'minimum': 0.08}, 'checksum_base64_crc32c': {'type': 'string', 'title': 'Base64 encoded big-endian CRC32C checksum'}}, 'description': 'Metadata corresponding to an RGB image.', 'additionalProperties': False})]" - ] - }, - "execution_count": 5, - "metadata": {}, - "output_type": "execute_result" - } - ], + "outputs": [], "source": [ - "test_app_version = client.applications.versions.details(application_id=\"test-app\", application_version=\"0.0.4\")\n", + "from IPython.display import JSON\n", "\n", - "# view the `input_artifacts` to get insights in the required fields of the input expected by this application version.\n", - "test_app_version.input_artifacts" + "# get the application version details\n", + "two_task_app = client.applications.versions.details(application_version=\"two-task-dummy:v0.35.0\")\n", + "\n", + "# view the `input_artifacts` to get insights in the required fields of the application version payload\n", + "JSON(two_task_app.input_artifacts[0].to_json())" ] }, { "cell_type": "markdown", - "id": "Kclp", "metadata": {}, "source": [ - "# Submit an application run\n", + "# Trigger an application run\n", 
"\n", - "Now, let's submit an application run. We will use the application ID retrieved in the previous steps. We will not specify the version, which automatically uses the latest version. To create an application run, we need to provide a payload that consists of 1 or more items. We provide the Pydantic model `InputItem` an item and the data that comes with it:\n", + "Now, let's trigger an application run for the `Test Application`. We will use the `application_version_id` that we retrieved in the previous step. To create an application run, we need to provide a payload that consists of 1 or more items. We provide the Pydantic model `InputItem` an item and the data that comes with it:\n", "```python\n", "platform.InputItem(\n", - " external_id=\"
\",\n", + " reference=\"\",\n", " input_artifacts=[platform.InputArtifact]\n", ")\n", "```\n", - "The `InputArtifact` defines the actual data that you provide aka. in this case the image that you want to be processed. The expected values are defined by the application version and have to align with the `input_artifacts` schema of the application version. In the case of this application, we only require a single artifact per item, which is the image to process on. The artifact name is defined as `whole_slide_image`. The `download_url` is a signed URL that allows the Aignostics Platform to download the image data later during processing. In addition to the image data itself, you have to provide the metadata defined in the input artifact schema, i.e., `checksum_base64_crc32c`, `resolution_mpp`, `width_px`, and `height_px`. The metadata is used to validate the input data and is required for the processing of the image. The following example shows how to create an item with a single input artifact:\n", + "The `InputArtifact` defines the actual data that you provide, i.e., in this case the image that you want to be processed. The expected values are defined by the application version and have to align with the `input_artifacts` schema of the application version. In the case of the two task dummy application, we only require a single artifact per item, which is the image to process on. The artifact name is defined as `user_slide` for the `two-task-dummy` application and `whole_slide_image` for the `he-tme` application. The `download_url` is a signed URL that allows the Aignostics Platform to download the image data later during processing. In addition to the image data itself, you have to provide the metadata defined in the input artifact schema, i.e., `checksum_crc32c`, `base_mpp`, `width`, and `height`. The metadata is used to validate the input data and is required for the processing of the image.
The following example shows how to create an item with a single input artifact:\n", "\n", "```python\n", "platform.InputArtifact(\n", - " name=\"whole_slide_image\", # as defined by the application version input_artifact schema\n", + " name=\"user_slide\", # as defined by the application version input_artifact schema\n", " download_url=\"\",\n", " metadata={\n", - " \"checksum_base64_crc32c\": \"\",\n", - " \"resolution_mpp\": \"\",\n", - " \"width_px\": \"\",\n", - " \"height_px\": \"\"\n", + " \"checksum_crc32c\": \"\",\n", + " \"base_mpp\": \"\",\n", + " \"width\": \"\",\n", + " \"height\": \"\"\n", " }\n", ")\n", "```" ] }, @@ -324,37 +151,23 @@ { "cell_type": "code", "execution_count": null, - "id": "emfo", "metadata": {}, - "outputs": [ - { - "ename": "ValueError", - "evalue": "Invalid google storage URI", - "output_type": "error", - "traceback": [ [ANSI-escaped traceback omitted: platform.generate_signed_url raised the ValueError at src/aignostics/platform/_utils.py:124] - ] - } - ], + "outputs": [], "source": [ - "application_run = client.runs.submit(\n", - " application_id=\"test-app\",\n", + "application_run = client.runs.create(\n", + " application_version=\"two-task-dummy:v0.0.5\",\n", " items=[\n", " platform.InputItem(\n", - " external_id=\"wsi-1\",\n", + " reference=\"wsi-1\",\n", " input_artifacts=[\n", " platform.InputArtifact(\n", " name=\"user_slide\",\n", " download_url=platform.generate_signed_url(\"\"),\n", " metadata={\n", - " \"checksum_base64_crc32c\": \"AAAAAA==\",\n", - " \"resolution_mpp\": 0.25,\n", - " \"width_px\": 10000,\n", - " \"height_px\": 10000,\n", + " \"checksum_crc32c\": \"AAAAAA==\",\n", + " \"base_mpp\": 0.25,\n", + " \"width\": 10000,\n", + " \"height\": 10000,\n", " },\n", " )\n", " ],\n", @@ -366,12 +179,11 @@ }, { "cell_type": "markdown", - "id": "Hstk", "metadata": {}, "source": [ "# Observe the status of the application run and download\n", "\n", - "While you can observe the status of an application run directly via the `status()` method and also retrieve the results via the `results()` method, you can also download the results directly to a folder of your choice. The `download_to_folder()` method will download all the results to the specified folder. The method will automatically create a sub-folder in the specified folder with the name of the application run. The results for each individual input item will be stored in a separate folder named after the `external_id` you defined in the `InputItem`.\n", + "While you can observe the status of an application run directly via the `status()` method and also retrieve the results via the `results()` method, you can also download the results directly to a folder of your choice. The `download_to_folder()` method will download all the results to the specified folder. The method will automatically create a sub-folder in the specified folder with the name of the application run. The results for each individual input item will be stored in a separate folder named after the `reference` you defined in the `Item`.\n", "\n", "The method downloads the results for a slide as soon as they are available. There is no need to keep the method running until all results are available. The method will automatically check for the status of the application run and download the results as soon as they are available. If you invoke the method on a run you already downloaded some results before, it will only download the missing artifacts." ] }, { "cell_type": "code", "execution_count": null, - "id": "nWHF", "metadata": {}, "outputs": [], "source": [ - "download_folder = \"/tmp/\"\n", + "import tempfile\n", + "\n", + "download_folder = tempfile.gettempdir()\n", "application_run.download_to_folder(download_folder)" ] }, { "cell_type": "markdown", - "id": "iLit", "metadata": {}, "source": [ "# Continue to retrieve results for an application run\n", "\n", - "In case you just submitted an application run and want to check on the results later or you had a connection loss, you can simply initialize an application run object via its `run_id`.
You can then use the `download_to_folder()` method to continue downloading the results." + "In case you just triggered an application run and want to check on the results later or you had a connection loss, you can simply initialize an application run object via its `application_run_id`. If you do not have the `application_run_id` anymore, you can simply list all currently running application runs via the `client.runs.list()` method. The `application_run_id` is part of the `ApplicationRun` object returned by the `list()` method. You can then use the `download_to_folder()` method to continue downloading the results." ] }, { "cell_type": "code", "execution_count": null, - "id": "ZHCJ", "metadata": {}, "outputs": [], "source": [ @@ -411,21 +222,33 @@ ] }, { - "cell_type": "markdown", - "id": "ROlb", + "cell_type": "code", + "execution_count": null, "metadata": {}, + "outputs": [], "source": [ + "import tempfile\n", + "\n", "from aignostics.platform.resources.runs import ApplicationRun\n", - "application_run = ApplicationRun.for_run_id(\"\")\n", + "\n", + "application_run = ApplicationRun.for_application_run_id(\"\")\n", "# download\n", - "download_folder = \"/tmp/\"\n", + "\n", + "download_folder = tempfile.gettempdir()\n", "application_run.download_to_folder(download_folder)" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { "kernelspec": { - "display_name": "aignostics (3.13.7)", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, @@ -439,9 +262,9 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.13.7" + "version": "3.11.9" } }, "nbformat": 4, - "nbformat_minor": 5 + "nbformat_minor": 4 } diff --git a/examples/notebook.py b/examples/notebook.py index e672e048..0610182a 100644 --- a/examples/notebook.py +++ b/examples/notebook.py @@ -2,13 +2,13 @@ # requires-python = ">=3.13" # dependencies = [ # "marimo", -# "aignostics==0.2.197", +# "aignostics==0.2.189", # ] # /// import marimo -__generated_with = "0.16.5" +__generated_with = "0.13.0" app = marimo.App(width="full") @@ -22,20 +22,20 @@ def _(): def _(mo): mo.md( r""" - # Initialize the Client + # Initialize the Client - As a first step, you need to initialize the client to interact with the Aignostics Platform. This will execute an OAuth flow depending on the environment you run: - - In case you have a browser available, an interactive login flow in your browser is started. - - In case there is no browser available, a device flow is started. + As a first step, you need to initialize the client to interact with the Aignostics Platform. This will execute an OAuth flow depending on the environment you run: + - In case you have a browser available, an interactive login flow in your browser is started. + - In case there is no browser available, a device flow is started. - **NOTE:** By default, the client caches the access token in your operation systems application cache folder. If you do not want to store the access token, set cache_token to False. + **NOTE:** By default, the client caches the access token in your operating system's application cache folder.
If you do not want to store the access token, please initialize the client like this: - ```python - import aignostics.client as platform - # initialize the client - client = platform.Client(cache_token=True) - ``` - """ + ```python + import aignostics.client as platform + # initialize the client + client = platform.Client(cache_token=False) + ``` + """ ) return @@ -59,7 +59,7 @@ def show(models: BaseModel | list[BaseModel]) -> pd.DataFrame: def _(): from aignostics import platform # initialize the client - client = platform.Client(cache_token=True) + client = platform.Client(cache_token=False) return client, platform @@ -67,10 +67,10 @@ def _(): def _(mo): mo.md( r""" - # List our available applications + # List our available applications - Next, let us list the applications that are available in your organization: - """ + Next, let us list the applications that are available in your organization: + """ ) return @@ -87,17 +87,17 @@ def _(client, show): def _(mo): mo.md( r""" - # List all available versions of an application + # List all available versions of an application - Now that we know the applications that are available, we can list all the versions of a specific application. In this case, we will use the `test-app` as an example. Using the `application_id`, we can list all the versions of the application: - """ + Now that we know the applications that are available, we can list all the versions of a specific application. In this case, we will use the `TwoTask Dummy Application` as an example, which has the `application_id`: `two-task-dummy`. Using the `application_id`, we can list all the versions of the application: + """ ) return @app.cell def _(client, show): - application_versions = client.applications.versions.list(application="test-app") + application_versions = client.applications.versions.list(application="two-task-dummy") # visualize show(application_versions) return @@ -107,20 +107,20 @@ def _(client, show): def _(mo): mo.md( r""" - # Inspect the application version details + # Inspect the application version details - Now that we have the list of versions, we can inspect the details of a specific version. While we could directly use the list of application version returned by the `list` method, we want to directly query details for a specific application version. In this case, we will use the `test-app` application and version `0.0.4`: - """ + Now that we have the list of versions, we can inspect the details of a specific version. While we could directly use the list of application version returned by the `list` method, we want to directly query details for a specific application version. In this case, we will use version `0.35.0`, which has the `application_version_id`: `two-task-dummy:v0.35.0`. We use the `application_version_id` to retrieve further details about the application version: + """ ) return @app.cell def _(client): - test_app_version = client.applications.versions.details(application_id="test-app",application_version="0.0.4") + two_task_app = client.applications.versions.details(application_version="two-task-dummy:v0.35.0") - # view the `input_artifacts` to get insights in the required fields of the input expected by this application version. 
- test_app_version.input_artifacts + # view the `input_artifacts` to get insights in the required fields of the application version payload + two_task_app.input_artifacts[0].to_json() return @@ -128,50 +128,50 @@ def _(mo): mo.md( r""" - # Submit an application run - - Now, let's submit an application run. We will use the application ID retrieved in the previous steps. We will not specify the version, which automatically uses the latest version. To submit an application run, we need to provide a payload that consists of 1 or more items. We provide the Pydantic model `InputItem` an item and the data that comes with it: - ```python - platform.InputItem( - external_id="", - input_artifacts=[platform.InputArtifact] - ) - ``` - The `InputArtifact` defines the actual data that you provide aka. in this case the image that you want to be processed. The expected values are defined by the application version and have to align with the `input_artifacts` schema of the application version. In the case of this application, we only require a single artifact per item, which is the image to process on. The artifact name is defined as `whole_slide_image`. The `download_url` is a signed URL that allows the Aignostics Platform to download the image data later during processing. In addition to the image data itself, you have to provide the metadata defined in the input artifact schema, i.e., `checksum_base64_crc32c`, `resolution_mpp`, `width_px`, and `height_px`. The metadata is used to validate the input data and is required for the processing of the image. The following example shows how to create an item with a single input artifact: - - ```python - platform.InputArtifact( - name="whole_slide_image", # as defined by the application version input_artifact schema - download_url="", - metadata={ - "checksum_base64_crc32c": "", - "resolution_mpp": "", - "width_px": "", - "height_px": "" - } - ) - ``` - """ + # Trigger an application run + + Now, let's trigger an application run for the `Test Application`. We will use the `application_version_id` that we retrieved in the previous step. To create an application run, we need to provide a payload that consists of 1 or more items. We use the Pydantic model `InputItem` to describe an item and the data that comes with it: + ```python + platform.InputItem( + reference="", + input_artifacts=[platform.InputArtifact] + ) + ``` + The `InputArtifact` defines the actual data that you provide, i.e., in this case the image that you want to be processed. The expected values are defined by the application version and have to align with the `input_artifacts` schema of the application version. In the case of the two task dummy application, we only require a single artifact per item, which is the image to process on. The artifact name is defined as `user_slide` for the `two-task-dummy` application and `whole_slide_image` for the `he-tme` application. The `download_url` is a signed URL that allows the Aignostics Platform to download the image data later during processing. In addition to the image data itself, you have to provide the metadata defined in the input artifact schema, i.e., `checksum_crc32c`, `base_mpp`, `width`, and `height`. The metadata is used to validate the input data and is required for the processing of the image.
The following example shows how to create an item with a single input artifact: + + ```python + platform.InputArtifact( + name="user_slide", # as defined by the application version input_artifact schema + download_url="", + metadata={ + "checksum_crc32c": "", + "base_mpp": "", + "width": "", + "height": "" + } + ) + ``` + """ ) return @app.cell def _(client, platform): - application_run = client.runs.submit( - application_id="test-app", + application_run = client.runs.create( + application_version="two-task-dummy:v0.0.5", items=[ platform.InputItem( - external_id="wsi-1", + reference="wsi-1", input_artifacts=[ platform.InputArtifact( name="user_slide", download_url=platform.generate_signed_url(""), metadata={ - "checksum_base64_crc32c": "AAAAAA==", - "resolution_mpp": 0.25, - "width_px": 10000, - "height_px": 10000, + "checksum_crc32c": "AAAAAA==", + "base_mpp": 0.25, + "width": 10000, + "height": 10000, }, ) ], @@ -186,12 +186,12 @@ def _(client, platform): def _(mo): mo.md( r""" - # Observe the status of the application run and download + # Observe the status of the application run and download - While you can observe the status of an application run directly via the `status()` method and also retrieve the results via the `results()` method, you can also download the results directly to a folder of your choice. The `download_to_folder()` method will download all the results to the specified folder. The method will automatically create a sub-folder in the specified folder with the name of the application run. The results for each individual input item will be stored in a separate folder named after the `external_id` you defined in the `InputItem`. + While you can observe the status of an application run directly via the `status()` method and also retrieve the results via the `results()` method, you can also download the results directly to a folder of your choice. The `download_to_folder()` method will download all the results to the specified folder. The method will automatically create a sub-folder in the specified folder with the name of the application run. The results for each individual input item will be stored in a separate folder named after the `reference` you defined in the `Item`. - The method downloads the results for a slide as soon as they are available. There is no need to keep the method running until all results are available. The method will automatically check for the status of the application run and download the results as soon as they are available. If you invoke the method on a run you already downloaded some results before, it will only download the missing artifacts. - """ + The method downloads the results for a slide as soon as they are available. There is no need to keep the method running until all results are available. The method will automatically check for the status of the application run and download the results as soon as they are available. If you invoke the method on a run you already downloaded some results before, it will only download the missing artifacts. + """ ) return @@ -207,10 +207,10 @@ def _(application_run): def _(mo): mo.md( r""" - # Continue to retrieve results for an application run + # Continue to retrieve results for an application run - In case you just submitted an application run and want to check on the results later or you had a connection loss, you can simply initialize an application run object via its `run_id`. 
If you do not have the `run_id` anymore, you can simply list all currently running application versions via the `client.runs.list()` method. The `run_id` is part of the `ApplicationRun` object returned by the `list()` method. You can then use the `download_to_folder()` method to continue downloading the results. - """ + In case you just triggered an application run and want to check on the results later or you had a connection loss, you can simply initialize an application run object via its `application_run_id`. If you do not have the `application_run_id` anymore, you can simply list all currently running application runs via the `client.runs.list()` method. The `application_run_id` is part of the `ApplicationRun` object returned by the `list()` method. You can then use the `download_to_folder()` method to continue downloading the results. + """ ) return @@ -228,12 +228,12 @@ def _(client): def _(mo): mo.md( r""" - from aignostics.platform.resources.runs import ApplicationRun - application_run = ApplicationRun.for_run_id("") - # download - download_folder = "/tmp/" - application_run.download_to_folder(download_folder) - """ + from aignostics.client.resources.runs import ApplicationRun + application_run = ApplicationRun.for_application_run_id("") + # download + download_folder = "/tmp/" + application_run.download_to_folder(download_folder) + """ ) return diff --git a/examples/script.py b/examples/script.py index 09f86604..87987565 100644 --- a/examples/script.py +++ b/examples/script.py @@ -6,13 +6,13 @@ # initialize the client client = platform.Client() -# submit application run +# create application run # for details, see the IPython or Marimo notebooks for a detailed explanation of the payload -application_run = client.runs.submit( - application_id="two-task-dummy", +application_run = client.runs.create( + application_version="two-task-dummy:v0.35.0", items=[ platform.InputItem( - external_id="1", + reference="1", input_artifacts=[ platform.InputArtifact( name="user_slide", @@ -20,10 +20,10 @@ "gs://aignx-storage-service-dev/sample_data_formatted/9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff" ), metadata={ - "checksum_base64_crc32c": "N+LWCg==", - "resolution_mpp": 0.46499982, - "width_px": 3728, - "height_px": 3640, + "checksum_crc32c": "N+LWCg==", + "base_mpp": 0.46499982, + "width": 3728, + "height": 3640, }, ) ], diff --git a/install.sh b/install.sh index e59d889a..7b2602be 100755 --- a/install.sh +++ b/install.sh @@ -54,8 +54,6 @@ BREW_TOOLS=( "uv;uv;https://docs.astral.sh/uv/;local" "xmllint;libxml2;https://en.wikipedia.org/wiki/Libxml2;" "7z;p7zip;https://github.com/p7zip-project/p7zip;" - "sentry-cli;sentry-cli;https://docs.sentry.io/cli/;" - "git-filter-repo;git-filter-repo;https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/removing-sensitive-data-from-a-repository" ) MAC_BREW_TOOLS=( diff --git a/noxfile.py b/noxfile.py index f54ba416..76dea069 100644 --- a/noxfile.py +++ b/noxfile.py @@ -2,9 +2,7 @@ import json import os -import platform import re -import sys from pathlib import Path import nox @@ -23,56 +21,8 @@ CLI_MODULE = "cli" API_VERSIONS = ["v1"] UTF8 = "utf-8" - - -def _read_python_version() -> str: - """Read Python version from .python-version file.
- - Returns: - str: Python version string (e.g., "3.13" or "3.13.1") - - Raises: - FileNotFoundError: If .python-version file does not exist - ValueError: If version format is invalid (not 2 or 3 segments) - OSError: If reading the file fails - """ - version_file = Path(".python-version") - if not version_file.exists(): - print("Error: .python-version file not found") - sys.exit(1) - - try: - version = version_file.read_text(encoding="utf-8").strip() - except OSError: - print("Error: Failed to read .python-version file") - sys.exit(1) - - if not re.match(r"^\d+\.\d+(?:\.\d+)?$", version): - print(f"Error: Invalid Python version format in .python-version: {version}. Expected X.Y or X.Y.Z") - sys.exit(2) - - return version - - -PYTHON_VERSION = _read_python_version() - - -def _get_test_python_versions() -> list[str]: - """Get Python versions for testing based on platform. - - Returns: - list[str]: List of Python version strings to test against - """ - versions = ["3.11.9", "3.12.12", PYTHON_VERSION] - if platform.system() == "Windows" and platform.machine().lower() in {"arm64", "aarch64"}: - versions = [PYTHON_VERSION] - # Only test with 3.13.x on Windows ARM due to: - # 1. Access denied errors when uv >= 0.9.4 tries to recreate venv directories (all Python versions) - # 2. Instability of Python 3.12.x on Windows ARM platform - return versions - - -TEST_PYTHON_VERSIONS = _get_test_python_versions() +PYTHON_VERSION = "3.13" +TEST_PYTHON_VERSIONS = ["3.11", "3.12", "3.13"] def _setup_venv(session: nox.Session, all_extras: bool = True, no_dev: bool = False) -> None: @@ -123,7 +73,6 @@ def lint(session: nox.Session) -> None: "--check", ".", ) - session.run("pyright", "--pythonversion", PYTHON_VERSION, "--threads") session.run("mypy", "src") @@ -161,7 +110,6 @@ def audit(session: nox.Session) -> None: pip_licenses_base_args.extend([ "--ignore-packages", - "aignostics", "pyinstaller", # https://pyinstaller.org/en/stable/license.html ]) @@ -390,10 +338,10 @@ def _generate_readme(session: nox.Session) -> None: preamble = "\n[//]: # (README.md generated from docs/partials/README_*.md)\n\n" header = Path("docs/partials/README_header.md").read_text(encoding="utf-8") main = Path("docs/partials/README_main.md").read_text(encoding="utf-8") - platform_section = Path("docs/partials/README_platform.md").read_text(encoding="utf-8") + platform = Path("docs/partials/README_platform.md").read_text(encoding="utf-8") glossary = Path("docs/partials/README_glossary.md").read_text(encoding="utf-8") footer = Path("docs/partials/README_footer.md").read_text(encoding="utf-8") - readme_content = f"{preamble}{header}\n\n{main}\n\n{platform_section}\n\n{footer}\n\n{glossary}" + readme_content = f"{preamble}{header}\n\n{main}\n\n{platform}\n\n{footer}\n\n{glossary}" Path("README.md").write_text(readme_content, encoding="utf-8") session.log("Generated README.md file from partials") @@ -427,66 +375,6 @@ def _generate_openapi_schemas(session: nox.Session) -> None: session.log(f"Generated API {version} OpenAPI schema in {format_name} format") -def _generate_sdk_metadata_schema(session: nox.Session, schema_type: str) -> None: - """Generate a single SDK metadata JSON schema with versioned filenames. 
- - Args: - session: The nox session instance - schema_type: Type of schema ("run" or "item") - """ - # Write schema to temp file to extract version - temp_file = Path(f"docs/source/_static/sdk_{schema_type}_custom_metadata_schema_temp.json") - with temp_file.open("w", encoding="utf-8") as f: - session.run( - "aignostics", - "sdk", - f"{schema_type}-metadata-schema", - "--no-pretty", - "--env", - "AIGNOSTICS_LOG_CONSOLE_ENABLED=false", - stdout=f, - external=True, - ) - - # Read back to get the version from $id - with temp_file.open("r", encoding="utf-8") as f: - schema = json.load(f) - - # Extract version from $id URL - schema_id = schema.get("$id", "") - version = schema_id.split("_")[-1].replace(".json", "") if "_" in schema_id else "v0.0.1" - - # Write to final locations (versioned and latest) - output_path_versioned = Path(f"docs/source/_static/sdk_{schema_type}_custom_metadata_schema_{version}.json") - output_path_latest = Path(f"docs/source/_static/sdk_{schema_type}_custom_metadata_schema_latest.json") - - for output_path in [output_path_versioned, output_path_latest]: - with output_path.open("w", encoding="utf-8") as f: - json.dump(schema, f, indent=2) - - # Clean up temp file - temp_file.unlink() - - session.log( - f"Generated {schema_type.capitalize()} SDK metadata JSON schema: " - f"{output_path_versioned.name} and sdk_{schema_type}_custom_metadata_schema_latest.json" - ) - - -def _generate_sdk_metadata_schemas(session: nox.Session) -> None: - """Generate SDK metadata JSON schemas with versioned filenames. - - Args: - session: The nox session instance - """ - # Create directory if it doesn't exist - Path("docs/source/_static").mkdir(parents=True, exist_ok=True) - - # Generate both run and item metadata schemas - _generate_sdk_metadata_schema(session, "run") - _generate_sdk_metadata_schema(session, "item") - - def _generate_cli_reference(session: nox.Session) -> None: """Generate CLI_REFERENCE.md. @@ -625,7 +513,6 @@ def docs(session: nox.Session) -> None: _generate_readme(session) _generate_cli_reference(session) _generate_openapi_schemas(session) - _generate_sdk_metadata_schemas(session) _generate_api_reference(session) _generate_attributions(session, Path(LICENSES_JSON_PATH)) @@ -670,14 +557,13 @@ def docs_pdf(session: nox.Session) -> None: session.error(f"Failed to parse latexmk version information: {e}") -def _prepare_coverage(session: nox.Session, posargs: list[str]) -> None: +def _prepare_coverage(session: nox.Session) -> None: """Clean coverage data unless keep-coverage flag is specified. 
Args: session: The nox session - posargs: Command line arguments """ - if "--cov-append" not in posargs: + if "--cov-append" not in session.posargs: session.run("rm", "-rf", ".coverage", external=True) @@ -793,22 +679,16 @@ def _run_pytest( # Distribute tests across available CPUs if not sequential if not is_sequential: - pytest_args.extend(["-n", "logical", "--dist", "worksteal"]) + pytest_args.extend(["-n", "logical", "--dist", "loadgroup"]) # Add act environment filter if needed if _is_act_environment(): pytest_args.extend(["-k", NOT_SKIP_WITH_ACT]) # Apply the appropriate marker - marker_value = f"({test_type})" + marker_value = f"{test_type}" if custom_marker: marker_value += f" and ({custom_marker})" - - # Exclude scheduled_only tests unless explicitly requested - # scheduled_only tests should only run when called with -m containing "scheduled_only" - if not custom_marker or "scheduled_only" not in custom_marker: - marker_value += " and not scheduled_only" - pytest_args.extend(["-m", marker_value]) # Add additional arguments @@ -862,29 +742,17 @@ def _cleanup_test_execution(session: nox.Session) -> None: ) -def _run_test_suite(session: nox.Session, marker: str = "", cov_append: bool = False) -> None: - """Run test suite with specified marker. - - Args: - session: The nox session - marker: Pytest marker expression - cov_append: Whether to append to existing coverage data - """ +@nox.session(python=TEST_PYTHON_VERSIONS) +def test(session: nox.Session) -> None: + """Run tests with pytest.""" _setup_venv(session) - posargs = session.posargs[:] - if "-m" not in posargs and marker: - posargs.extend(["-m", marker]) - - if cov_append: - posargs.append("--cov-append") - # Conditionally clean coverage data # Will remove .coverage file if --cov-append is not specified - _prepare_coverage(session, posargs) + _prepare_coverage(session) # Extract custom markers from posargs if present - custom_marker, filtered_posargs = _extract_custom_marker(posargs) + custom_marker, filtered_posargs = _extract_custom_marker(session.posargs) # Determine report type from python version and custom marker report_type = _get_report_type(session, custom_marker) @@ -897,30 +765,14 @@ def _run_test_suite(session: nox.Session, marker: str = "", cov_append: bool = F filtered_posargs.extend(["--cov-append"]) _run_pytest(session, "sequential", custom_marker, filtered_posargs, report_type) - # Generate coverage report in markdown (only after last test suite) - # Note: This will be called multiple times, which is fine as it updates the same report + # Generate coverage report in markdown _generate_coverage_report(session) # Clean up post test execution _cleanup_test_execution(session) -@nox.session(python=[PYTHON_VERSION]) -def test_default(session: nox.Session) -> None: - """Run tests as part of 'make' (no further args).""" - # Manually call test logic for each test type - _run_test_suite(session, "unit and not long_running and not very_long_running", cov_append=False) - _run_test_suite(session, "integration and not long_running and not very_long_running", cov_append=True) - _run_test_suite(session, "e2e and not long_running and not very_long_running", cov_append=True) - - -@nox.session(python=TEST_PYTHON_VERSIONS, default=False) -def test(session: nox.Session) -> None: - """Run tests with pytest.""" - _run_test_suite(session) - - -@nox.session(python=[PYTHON_VERSION], default=False) +@nox.session(python=["3.13"], default=False) def setup(session: nox.Session) -> None: """Setup dev environment post project creation.""" 
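[Editor's aside: the `_run_pytest` hunk above changes how the pytest `-m` expression is composed — the removed lines parenthesized the test type and force-excluded `scheduled_only` tests unless a custom marker requested them, while the added lines pass the bare marker. A small, hedged reconstruction of both behaviors; the function name and `legacy` flag are illustrative:]

```python
# Sketch rebuilt from the _run_pytest hunk above; `legacy=True` mirrors the
# removed lines, `legacy=False` the added ones.
def marker_expression(test_type: str, custom_marker: str | None, legacy: bool) -> str:
    marker = f"({test_type})" if legacy else test_type
    if custom_marker:
        marker += f" and ({custom_marker})"
    # The removed code kept scheduled_only tests out unless explicitly requested.
    if legacy and (not custom_marker or "scheduled_only" not in custom_marker):
        marker += " and not scheduled_only"
    return marker

assert marker_expression("unit", None, legacy=True) == "(unit) and not scheduled_only"
assert marker_expression("unit", None, legacy=False) == "unit"
```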
_setup_venv(session) diff --git a/pyproject.toml b/pyproject.toml index cbab4c8a..ffd821ec 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [project] name = "aignostics" -version = "0.2.197" +version = "0.2.189" description = "🔬 Python SDK providing access to the Aignostics Platform. Includes Aignostics Launchpad (Desktop Application), Aignostics CLI (Command-Line Interface), example notebooks, and Aignostics Client Library." readme = "README.md" authors = [ @@ -70,14 +70,14 @@ classifiers = [ "Natural Language :: English", ] -requires-python = ">=3.11, <3.14" +requires-python = ">=3.11, <4.0" dependencies = [ # From Template - "fastapi[standard,all]>=0.120.4,<1", - "humanize>=4.14.0,<5", - "logfire[system-metrics]>=4.14.2,<5", - "nicegui[native]>=3.1.0,<3.2.0", # Regression in 3.2.0 + "fastapi[standard,all]>=0.118.2,<1", + "humanize>=4.13.0,<5", + "logfire[system-metrics]>=4.12.0,<5", + "nicegui[native]>=3.0.3,<4", "opentelemetry-instrumentation-fastapi>=0.53b0,<1", "opentelemetry-instrumentation-httpx>=0.53b0,<1", "opentelemetry-instrumentation-jinja2>=0.53b0,<1", @@ -87,30 +87,29 @@ dependencies = [ "opentelemetry-instrumentation-urllib>=0.53b0,<1", "opentelemetry-instrumentation-urllib3>=0.53b0,<1", "packaging>=25.0,<26", - "platformdirs>=4.5.0,<5", - "psutil>=7.1.2,<8", + "platformdirs>=4.4.0,<5", + "psutil>=7.1.0,<8", "pydantic-settings>=2.11.0,<3", "pywin32>=310,<311 ; sys_platform == 'win32'", "pyyaml>=6.0.3,<7", - "sentry-sdk>=2.43.0,<3", - "typer>=0.20.0,<1", + "sentry-sdk>=2.40.0,<3", + "typer>=0.19.2,<1", "uptime>=3.0.1,<4", # Custom "aiopath>=0.6.11,<1", - "boto3>=1.40.64,<2", + "boto3>=1.40.47,<2", "certifi>=2025.10.5,<2026", - "defusedxml>=0.7.1", - "dicom-validator>=0.7.3,<1", + "dicom-validator>=0.7.2,<1", "dicomweb-client[gcp]>=0.59.3,<1", "duckdb>=0.10.0,<=1.4.1", "fastparquet>=2024.11.0,<2025", - "google-cloud-storage>=3.4.1,<4", + "google-cloud-storage>=3.4.0,<4", "google-crc32c>=1.7.1,<2", "highdicom>=0.26.1,<1", "html-sanitizer>=2.6.0,<3", "httpx>=0.28.1,<1", "idc-index-data==22.0.2", - "ijson>=3.4.0.post0,<4", + "ijson>=3.4.0,<4", "jsf>=0.11.2,<1", "jsonschema[format-nongpl]>=4.25.1,<5", "openslide-bin>=4.0.0.8,<5", @@ -138,8 +137,8 @@ jupyter = ["jupyter>=1.1.1,<2"] marimo = [ "cloudpathlib>=0.23.0,<1", "ipython>=9.6.0,<10", - "marimo>=0.17.2,<1", - "matplotlib>=3.10.7,<4", + "marimo>=0.16.5,<1", + "matplotlib>=3.10.6,<4", "shapely>=2.1.0,<3", ] qupath = [] @@ -153,17 +152,17 @@ dev = [ "furo>=2025.9.25,<2026", "git-cliff>=2.10.1,<3", "mypy>=1.18.2,<2", - "nox[uv]>=2025.10.16,<2026", + "nox[uv]>=2025.5.1,<2026", "pip-audit>=2.9.0,<3", "pip-licenses @ git+https://github.com/neXenio/pip-licenses.git@master", # https://github.com/raimon49/pip-licenses/pull/224 "pre-commit>=4.3.0,<5", - "pyright>=1.1.406,<1.1.407", # Regression in 1.1.407, see https://github.com/microsoft/pyright/issues/11060 + "pyright>=1.1.406,<2", "pytest>=8.4.2,<9", "pytest-asyncio>=1.2.0,<2", "pytest-cov>=7.0.0,<8", "pytest-docker>=3.2.3,<4", "pytest-durations>=1.6.1,<2", - "pytest-env>=1.2.0,<2", + "pytest-env>=1.1.5,<2", "pytest-md-report>=0.7.0,<1", "pytest-regressions>=2.8.3,<3", "pytest-retry>=1.7.0,<2", @@ -172,7 +171,7 @@ dev = [ "pytest-timeout>=2.4.0,<3", "pytest-watcher>=0.4.3,<1", "pytest-xdist[psutil]>=3.8.0,<4", - "ruff>=0.14.3,<1", + "ruff>=0.14.0,<1", "scalene>=1.5.55,<2", "sphinx>=8.2.3,<9", "sphinx-autobuild>=2025.8.25,<2026", @@ -182,35 +181,15 @@ dev = [ "sphinx-mdinclude>=0.6.2,<1", "sphinx-rtd-theme>=3.0.2,<4", "sphinx_selective_exclude>=1.0.3,<2", - 
"sphinx-toolbox>=3.9.0,<4", + "sphinx-toolbox>=4,<5", "sphinxext.opengraph>=0.9.1,<1", - "swagger-plugin-for-sphinx>=5.2.0,<6", - "tomli>=2.3.0,<3", + "swagger-plugin-for-sphinx>=5.1.3,<6", + "tomli>=2.1.0,<3", "types-pyyaml>=6.0.12.20250915,<7", "types-requests>=2.32.4.20250913,<3", "watchdog>=6.0.0,<7", ] -[tool.uv] -required-version = ">=0.9.7" # CVE-2025-54368, GHSA-w476-p2h3-79g9, GHSA-pqhf-p39g-3x64 -override-dependencies = [ # https://github.com/astral-sh/uv/issues/4422 - "rfc3987; sys_platform == 'never'", # GPLv3 - "h11>=0.16.0", # CVE-2025-43859 - "tornado>=6.5.0", # CVE-2025-47287 - "jupyter-core>=5.8.1", # CVE-2025-30167 - "urllib3>=2.5.0", # CVE-2025-50181, CVE-2025-50182, - "pillow>=11.3.0", # CVE-2025-48379, - "aiohttp>=3.12.14", # CVE-2025-53643 - "starlette>=0.47.2", # CVE-2025-54121 - "uv>=0.9.7", # CVE-2025-54368, GHSA-w476-p2h3-79g9, GHSA-pqhf-p39g-3x64 - "jupyterlab>=4.4.9", # CVE-2025-59842 - "pip>=5.3", # CVE-2025-8869 - "starlette>=0.49.1", # GHSA-7f5h-v6xp-fcq8 -] - -[tool.uv.sources] -# No additional sources outside of src/ yet - [project.scripts] aignostics = "aignostics.cli:cli" @@ -234,6 +213,22 @@ packages = ["src/aignostics", "codegen/out/aignx", "examples"] [tool.hatch.metadata] allow-direct-references = true +[tool.uv] +required-version = ">=0.8.9" # CVE-2025-54368 +override-dependencies = [ # https://github.com/astral-sh/uv/issues/4422 + "rfc3987; sys_platform == 'never'", # GPLv3 + "h11>=0.16.0", # CVE-2025-43859 + "tornado>=6.5.0", # CVE-2025-47287 + "jupyter-core>=5.8.1", # CVE-2025-30167 + "urllib3>=2.5.0", # CVE-2025-50181, CVE-2025-50182, + "pillow>=11.3.0", # CVE-2025-48379, + "aiohttp>=3.12.14", # CVE-2025-53643 + "starlette>=0.47.2", # CVE-2025-54121 + "uv>=0.8.9", # CVE-2025-54368 + "jupyterlab>=4.4.9", # CVE-2025-59842 + "pip>=5.3", # CVE-2025-8869 +] + [tool.ruff] target-version = "py311" preview = true @@ -246,7 +241,6 @@ extend-exclude = [ "template/*.py", "**/third_party/*.py", "examples/*.py", - "examples/*.ipynb", "codegen", ] @@ -333,23 +327,17 @@ python_files = ["*_test.py", "test_*.py"] addopts = "-p nicegui.testing.plugin -v --strict-markers --log-disable=aignostics --cov=aignostics --cov-report=term-missing --cov-report=xml:reports/coverage.xml --cov-report=html:reports/coverage_html" asyncio_mode = "auto" asyncio_default_fixture_loop_scope = "function" -timeout = 10 # We use a rather short default timeout. Override with @pytest.mark.timeout(timeout=N) -# timeout_method="signal" env = ["COVERAGE_FILE=.coverage", "COVERAGE_PROCESS_START=pyproject.toml"] markers = [ # From Template "no_extras: Tests that do require no extras installed.", "scheduled: Tests to run on a schedule. They will still be part on non-scheduled test executions.", - "scheduled_only: Tests to run on a schedule only.", "sequential: Exclude from parallel test execution.", "skip_with_act: Don't run with act.", - "docker: That require Docker.", - "long_running: Tests with a test timeout >=5 min and < 60 min. CI/CD will run long running tests with one Python version only. When calling `make` long running tests are excluded - use `make test_long_running` instead.", - "very_long_running: Tests with a test timeout >= 60 min. CI/CD will run very long running tests with one Python version only. 
When calling `make` very long runing tests are excluded - use `make test_very_long_running` instead.", - # Test Categories (following Martin Fowler's Solitary vs Sociable unit test distinction) - "unit: Solitary unit tests - test a layer of a module in isolation with all dependencies mocked, except interaction with shared utils and the systems module. Unit tests must be able to pass offline, i.e. not calls to external services. The timeout should not be bigger than the default 10s, and must be <5 min.", - "integration: Sociable integration tests - test interactions across architectural layers (e.g. CLI/GUI→Service, Service→Utils) or between modules (e.g. Application→Platform), using real SDK collaborators, real file I/O, real subprocesses, and real Docker containers. Integration test must be able to pass offline, i.e. mock external services (Aignostics Platform API, Auth0, S3/GCS buckets, IDC). The timeout should not be bigger than the default 10s, and must be <5 min.", - "e2e: End-to-end tests - test complete workflows with real external network services (Aignostics Platform API, cloud storage, IDC, etc). If the test timeout is >= 5 min and < 60 min, additionally mark as `long_running`, if >= 60min mark as 'very_long_running'.", + "docker: tests That require Docker.", + "long_running: Tests that take a long time to run. Tests marked as long runing excluded from execution by default. Enable by passing any -m your_marker that matches a marker of the test.", + # Custom + # Nothing yet ] md_report = true md_report_output = "reports/pytest.md" @@ -363,11 +351,10 @@ sigterm = true relative_files = true source = ["src"] omit = [ - "src/aignostics/aignostics.py", - "src/aignostics/third_party/*", - "src/aignostics/notebook/_notebook.py", - "src/aignostics/wsi/_pydicom_handler.py", - "src/aignostics/wsi/_openslide_handler.py", + "*/third_party/*", + "*/_notebook.py", + "*/_pydicom_handler.py", + "*/_openslide_handler.py", ] branch = true parallel = true @@ -378,7 +365,7 @@ source = ["src/"] [tool.bumpversion] -current_version = "0.2.197" +current_version = "0.2.189" parse = "(?P\\d+)\\.(?P\\d+)\\.(?P\\d+)" serialize = ["{major}.{minor}.{patch}"] search = "{current_version}" @@ -393,7 +380,7 @@ allow_dirty = false commit = true commit_args = "--no-verify" tag_message = "Bump version: {current_version} → {new_version}" -message = "Bump version: {current_version} → {new_version}" +message = "Bump version: {current_version} → {new_version} [skip:ci]" # Note: Uncomment the following line to avoid running tests and ketryx during version bumps #message = "Bump version: {current_version} → {new_version} [skip:test:all,skip:ketryx]" #tag_message = "Bump version: {current_version} → {new_version} [skip:test:all,skip:ketryx]" @@ -536,16 +523,16 @@ commit_preprocessors = [ ] # regex for parsing and grouping commits commit_parsers = [ - { message = "^feat|^feature", group = "⛰️ Features" }, + { message = "^feat", group = "⛰️ Features" }, { message = ".*security.*", group = "🛡️ Security" }, { message = "^sec", group = "🛡️ Security" }, { message = "^fix", group = "🐛 Bug Fixes" }, - { message = "^doc|^documentation", group = "📚 Documentation" }, - { message = "^perf|^performance", group = "⚡ Performance" }, + { message = "^doc", group = "📚 Documentation" }, + { message = "^perf", group = "⚡ Performance" }, { message = "^refactor\\(clippy\\)", skip = true }, { message = "^refactor", group = "🚜 Refactor" }, { message = "^style", group = "🎨 Styling" }, - { message = "^test|^testing", group = "🧪 Testing" }, + { 
message = "^test", group = "🧪 Testing" }, { message = "^chore\\(release\\): prepare for", skip = true }, { message = "^chore\\(pr\\)", skip = true }, { message = "^chore\\(pull\\)", skip = true }, diff --git a/pyrightconfig.json b/pyrightconfig.json deleted file mode 100644 index 612b268a..00000000 --- a/pyrightconfig.json +++ /dev/null @@ -1,25 +0,0 @@ -{ - "typeCheckingMode": "basic", - "exclude": [ - "**/.nox/**", - "**/.venv/**", - "**/dist-packages/**", - "**/dist_vercel/.vercel/**", - "**/dist_native/**", - "**/site-packages/**", - ], - "ignore": [ - "**/third_party/**", - "dist/**", - "dist_vercel/**", - "dist_native/**", - "template/**", - "tests/**", - "codegen/**", - "src/aignostics/wsi/_pydicom_handler.py", - "src/aignostics/notebook/_notebook.py", - ], - "extraPaths": [ - "./src/aignostics/utils/_" - ] -} \ No newline at end of file diff --git a/renovate.json b/renovate.json index b4b200f4..956ecfe0 100644 --- a/renovate.json +++ b/renovate.json @@ -3,15 +3,7 @@ "extends": [ "config:recommended" ], - "timezone": "Europe/Berlin", - "schedule": "before 2am every weekday", - "labels": [ - "bot", - "renovate", - "dependencies", - "skip:test:long_running" - ], "ignorePaths": [ - "plugins/manifest/package.json" + "plugins/manifest/package.json" ] -} \ No newline at end of file +} diff --git a/requirements/SHR-APPLICATION-1.md b/requirements/SHR-APPLICATION-1.md deleted file mode 100644 index 0213f638..00000000 --- a/requirements/SHR-APPLICATION-1.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SHR-APPLICATION-1 -itemTitle: Application Discovery and Navigation -itemType: Requirement -Requirement type: ENVIRONMENT ---- - -## Description - -Users shall be able to view available AI applications and navigate to specific application views to access application functionality. diff --git a/requirements/SHR-APPLICATION-2.md b/requirements/SHR-APPLICATION-2.md deleted file mode 100644 index 6ce92341..00000000 --- a/requirements/SHR-APPLICATION-2.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SHR-APPLICATION-2 -itemTitle: Application Run Management -itemType: Requirement -Requirement type: ENVIRONMENT ---- - -## Description - -Users shall be able to execute AI applications on their data by preparing data, submitting runs, monitoring run status, managing run lifecycle including cancellation, and accessing results. diff --git a/requirements/SHR-APPLICATION-3.md b/requirements/SHR-APPLICATION-3.md deleted file mode 100644 index f9436697..00000000 --- a/requirements/SHR-APPLICATION-3.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SHR-APPLICATION-3 -itemTitle: Application Results Management -itemType: Requirement -Requirement type: ENVIRONMENT ---- - -## Description - -Users shall be able to download and access the results generated by completed AI application runs. diff --git a/requirements/SHR-BUCKET-1.md b/requirements/SHR-BUCKET-1.md deleted file mode 100644 index ddfef038..00000000 --- a/requirements/SHR-BUCKET-1.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SHR-BUCKET-1 -itemTitle: Cloud Storage File Management -itemType: Requirement -Requirement type: ENVIRONMENT ---- - -## Description - -Users shall be able to organize, find, download, and delete their files in cloud storage for data management and collaboration workflows. Users shall be able to upload files to cloud storage via CLI for data management workflows. 
diff --git a/requirements/SHR-DATASET-1.md b/requirements/SHR-DATASET-1.md deleted file mode 100644 index 45868f36..00000000 --- a/requirements/SHR-DATASET-1.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SHR-DATASET-1 -itemTitle: Dataset Discovery and Download -itemType: Requirement -Requirement type: ENVIRONMENT ---- - -## Description - -Users shall be able to download publicly available datasets using dataset identifiers for use in their analysis workflows. diff --git a/requirements/SHR-NOTEBOOK-1.md b/requirements/SHR-NOTEBOOK-1.md deleted file mode 100644 index 5e926e1a..00000000 --- a/requirements/SHR-NOTEBOOK-1.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SHR-NOTEBOOK-1 -itemTitle: Notebook Environment Management -itemType: Requirement -Requirement type: ENVIRONMENT ---- - -## Description - -Users shall be able to launch notebook environments for interactive data analysis and exploration. diff --git a/requirements/SHR-VISUALIZATION-1.md b/requirements/SHR-VISUALIZATION-1.md deleted file mode 100644 index 20f8eb44..00000000 --- a/requirements/SHR-VISUALIZATION-1.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SHR-VISUALIZATION-1 -itemTitle: Image Visualization and Analysis Tools -itemType: Requirement -Requirement type: ENVIRONMENT ---- - -## Description - -Users shall be able to visualize whole slide images and analysis results using specialized visualization tools for pathology workflows, data examination, and image processing tasks. diff --git a/requirements/SHR_PRIVACY_1.md b/requirements/SHR_PRIVACY_1.md new file mode 100644 index 00000000..2266873a --- /dev/null +++ b/requirements/SHR_PRIVACY_1.md @@ -0,0 +1,8 @@ +--- +itemId: SHR-PRIVACY-1 +itemType: Requirement +Requirement type: Use case +Context: Security +--- + +As a user I expect private information to be stored securely and presented appropriately. diff --git a/requirements/SHR_USABILITY_1.md b/requirements/SHR_USABILITY_1.md new file mode 100644 index 00000000..79e1dab4 --- /dev/null +++ b/requirements/SHR_USABILITY_1.md @@ -0,0 +1,8 @@ +--- +itemId: SHR-USABILITY-1 +itemType: Requirement +Requirement type: Use case +Context: Clinical +--- + +As a user I expect to be informed if the system is not operational so that I can take appropriate action. diff --git a/requirements/SWR-APPLICATION-1-1.md b/requirements/SWR-APPLICATION-1-1.md deleted file mode 100644 index 81241305..00000000 --- a/requirements/SWR-APPLICATION-1-1.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-1-1 -itemTitle: List Available Applications -itemHasParent: SHR-APPLICATION-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall provide a list of available applications for user selection. diff --git a/requirements/SWR-APPLICATION-1-2.md b/requirements/SWR-APPLICATION-1-2.md deleted file mode 100644 index 53b0b998..00000000 --- a/requirements/SWR-APPLICATION-1-2.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-1-2 -itemTitle: List Available Application Versions -itemHasParent: SHR-APPLICATION-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall provide a list of available versions for each application. 
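SWR-APPLICATION-1-1 and SWR-APPLICATION-1-2 surface in the client library through the `Service.applications()` accessor documented in SPEC-APPLICATION-SERVICE.md further down in this diff. A usage sketch under stated assumptions: the exact import path and the printable representation of an application are assumptions, and an installed `aignostics` package with configured authentication is presumed.

```python
# Illustrative only: assumes `aignostics` is installed and authentication is
# configured; the import path is an assumption.
from aignostics.application import Service

service = Service()
for application in service.applications():  # documented to return list[Application]
    print(application)  # per-application version listing is covered by SWR-APPLICATION-1-2
```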
diff --git a/requirements/SWR-APPLICATION-2-1.md b/requirements/SWR-APPLICATION-2-1.md deleted file mode 100644 index 506b0997..00000000 --- a/requirements/SWR-APPLICATION-2-1.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-1 -itemTitle: Validate Slide Metadata Completeness -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall validate that slide metadata contains all required fields for application processing. diff --git a/requirements/SWR-APPLICATION-2-10.md b/requirements/SWR-APPLICATION-2-10.md deleted file mode 100644 index f8ece06f..00000000 --- a/requirements/SWR-APPLICATION-2-10.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-10 -itemTitle: Provide File Selection Interface -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: User Interface (frontend) ---- - -System shall provide a file selection interface that allows users to choose directories containing slide files for analysis. diff --git a/requirements/SWR-APPLICATION-2-11.md b/requirements/SWR-APPLICATION-2-11.md deleted file mode 100644 index f40cd0e9..00000000 --- a/requirements/SWR-APPLICATION-2-11.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-11 -itemTitle: Detect Compatible Slide Files -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall scan selected directories recursively and identify compatible slide files for analysis processing. diff --git a/requirements/SWR-APPLICATION-2-12.md b/requirements/SWR-APPLICATION-2-12.md deleted file mode 100644 index 2b438ff7..00000000 --- a/requirements/SWR-APPLICATION-2-12.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-12 -itemTitle: Poll Application Run Status at Regular Intervals -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- -System shall poll application run status at regular intervals and shall detect when runs reach completed status. diff --git a/requirements/SWR-APPLICATION-2-13.md b/requirements/SWR-APPLICATION-2-13.md deleted file mode 100644 index dce4aa4b..00000000 --- a/requirements/SWR-APPLICATION-2-13.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-13 -itemTitle: Accept Optional Run Name and Description -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- -System shall accept optional user-provided run name and description during application run submission. The system shall validate that run names do not exceed 100 characters and descriptions do not exceed 500 characters when provided, and shall store these metadata fields with the run record. diff --git a/requirements/SWR-APPLICATION-2-14.md b/requirements/SWR-APPLICATION-2-14.md deleted file mode 100644 index ff7cff76..00000000 --- a/requirements/SWR-APPLICATION-2-14.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-14 -itemTitle: Accept Optional Custom Item Names and Descriptions -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- -System shall accept optional user-provided custom names and descriptions for individual items (e.g., slides) during application run submission. 
The system shall validate that custom item names do not exceed 100 characters and descriptions do not exceed 500 characters when provided, and shall store these metadata fields with the corresponding item records. diff --git a/requirements/SWR-APPLICATION-2-15.md b/requirements/SWR-APPLICATION-2-15.md deleted file mode 100644 index 5f5fe2ec..00000000 --- a/requirements/SWR-APPLICATION-2-15.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-15 -itemTitle: Display Run Name and Description in Run List -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (frontend interface) ---- -System shall display user-provided run names and descriptions in the application run list -interface. The system shall show the run name as the primary identifier when provided, otherwise -shall display a system-generated identifier, and shall make descriptions accessible through the -interface when available. diff --git a/requirements/SWR-APPLICATION-2-16.md b/requirements/SWR-APPLICATION-2-16.md deleted file mode 100644 index 88fbb63c..00000000 --- a/requirements/SWR-APPLICATION-2-16.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-16 -itemTitle: Display Run Items with Metadata -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: User Interface (frontend) ---- -System shall display all items included in an application run showing thumbnail images, custom item names when provided otherwise filenames, custom item descriptions when provided, and file paths. diff --git a/requirements/SWR-APPLICATION-2-2.md b/requirements/SWR-APPLICATION-2-2.md deleted file mode 100644 index 8a7a3363..00000000 --- a/requirements/SWR-APPLICATION-2-2.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-2 -itemTitle: Validate MPP Resolution Against Application Limits -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall validate slide resolution in microns per pixel (MPP) against application-specific limits before processing application runs. The system shall reject submissions when MPP exceeds the configured threshold and shall provide error messages indicating the specific resolution value that exceeds limits. diff --git a/requirements/SWR-APPLICATION-2-3.md b/requirements/SWR-APPLICATION-2-3.md deleted file mode 100644 index 28918fe9..00000000 --- a/requirements/SWR-APPLICATION-2-3.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-3 -itemTitle: Upload Slide Files to Platform Storage -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall upload slide files from local storage to platform cloud storage using metadata file references. The system shall complete upload operations successfully when files exist and metadata is valid, and shall provide upload completion confirmation to users. 
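SWR-APPLICATION-2-13 and SWR-APPLICATION-2-14 fix concrete limits: names up to 100 characters, descriptions up to 500. A minimal validation sketch; the exception type is an assumption, since the requirements only mandate rejection:

```python
MAX_NAME_LENGTH = 100  # per SWR-APPLICATION-2-13/2-14
MAX_DESCRIPTION_LENGTH = 500


def validate_optional_metadata(name: str | None, description: str | None) -> None:
    """Reject optional run/item names and descriptions exceeding the documented limits."""
    if name is not None and len(name) > MAX_NAME_LENGTH:
        raise ValueError(f"name exceeds {MAX_NAME_LENGTH} characters")
    if description is not None and len(description) > MAX_DESCRIPTION_LENGTH:
        raise ValueError(f"description exceeds {MAX_DESCRIPTION_LENGTH} characters")
```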
diff --git a/requirements/SWR-APPLICATION-2-4.md b/requirements/SWR-APPLICATION-2-4.md deleted file mode 100644 index 6efc3eb2..00000000 --- a/requirements/SWR-APPLICATION-2-4.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-4 -itemTitle: Submit Application Run with Validation Error Handling -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall submit application runs when metadata validation is successful and reject submissions when validation fails. diff --git a/requirements/SWR-APPLICATION-2-5.md b/requirements/SWR-APPLICATION-2-5.md deleted file mode 100644 index 3117f26d..00000000 --- a/requirements/SWR-APPLICATION-2-5.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-5 -itemTitle: Create Application Run with Unique Identifier -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall create application runs from valid metadata submissions. The system shall return the unique run identifier to users upon successful run creation and shall maintain run state for subsequent operations. diff --git a/requirements/SWR-APPLICATION-2-6.md b/requirements/SWR-APPLICATION-2-6.md deleted file mode 100644 index 8025f56a..00000000 --- a/requirements/SWR-APPLICATION-2-6.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-6 -itemTitle: Provide Application Run Status Information -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall provide detailed status information for application runs when requested by run identifier. The system shall return run details including current status, application version, and run metadata for valid run identifiers. diff --git a/requirements/SWR-APPLICATION-2-7.md b/requirements/SWR-APPLICATION-2-7.md deleted file mode 100644 index 4db16e12..00000000 --- a/requirements/SWR-APPLICATION-2-7.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-7 -itemTitle: Cancel Running Application Runs -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall cancel running application runs when requested by run identifier. The system shall update run status to canceled state, confirm cancellation operation to users, and shall prevent further processing of canceled runs. diff --git a/requirements/SWR-APPLICATION-2-8.md b/requirements/SWR-APPLICATION-2-8.md deleted file mode 100644 index 57e45191..00000000 --- a/requirements/SWR-APPLICATION-2-8.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-8 -itemTitle: Generate Metadata from Slide Files -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall generate technical file metadata from slide files and shall combine this with required user-provided medical metadata for complete slide metadata used in application processing. 
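The interval polling of SWR-APPLICATION-2-12 above amounts to a bounded loop over the run accessor. A sketch, assuming the `Service.application_run()` method documented later in this diff; the `status` attribute name and the terminal status strings are assumptions:

```python
import time


def wait_for_run(service, run_id: str, interval_s: float = 30.0, timeout_s: float = 3600.0) -> str:
    """Poll run status at a fixed interval until a terminal state or timeout (illustrative)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        run = service.application_run(run_id)  # documented Service accessor
        status = str(getattr(run, "status", ""))  # attribute name is an assumption
        if status in {"completed", "canceled", "failed"}:  # terminal states are an assumption
            return status
        time.sleep(interval_s)
    raise TimeoutError(f"run {run_id} did not reach a terminal state within {timeout_s}s")
```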
diff --git a/requirements/SWR-APPLICATION-2-9.md b/requirements/SWR-APPLICATION-2-9.md deleted file mode 100644 index 291c68ff..00000000 --- a/requirements/SWR-APPLICATION-2-9.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-2-9 -itemTitle: Monitor Application Run Status on User Request -itemHasParent: SHR-APPLICATION-2 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall poll and update application run status when users navigate to the main screen or explicitly request status updates. diff --git a/requirements/SWR-APPLICATION-3-1.md b/requirements/SWR-APPLICATION-3-1.md deleted file mode 100644 index 23109cfb..00000000 --- a/requirements/SWR-APPLICATION-3-1.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-3-1 -itemTitle: Download Application Run Results -itemHasParent: SHR-APPLICATION-3 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall download application run results at multiple granularity levels: all items per run, individual items, and individual artifacts per item. The system shall download to specified destination directories when requested by run identifier, retrieve results regardless of run status, and provide download confirmation with status information indicating whether run was completed or canceled. diff --git a/requirements/SWR-APPLICATION-3-2.md b/requirements/SWR-APPLICATION-3-2.md deleted file mode 100644 index 0f0bc877..00000000 --- a/requirements/SWR-APPLICATION-3-2.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-APPLICATION-3-2 -itemTitle: Validate Output Artifact Integrity -itemHasParent: SHR-APPLICATION-3 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall validate integrity of downloaded output artifacts by verifying file sizes against expected specifications and confirming all required output files are present. The system shall report validation success or failure for each artifact file. diff --git a/requirements/SWR-APPLICATION-3-3.md b/requirements/SWR-APPLICATION-3-3.md deleted file mode 100644 index 491c9dcd..00000000 --- a/requirements/SWR-APPLICATION-3-3.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -itemId: SWR-APPLICATION-3-3 -itemTitle: Download Partial Results from Cancelled Runs -itemHasParent: SHR-APPLICATION-3 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- -System shall provide access to download completed partial results when application runs have been cancelled by users. The system shall make available all output artifacts that were successfully generated before cancellation occurred. diff --git a/requirements/SWR-BUCKET-1-1.md b/requirements/SWR-BUCKET-1-1.md deleted file mode 100644 index ac4e1b0b..00000000 --- a/requirements/SWR-BUCKET-1-1.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-BUCKET-1-1 -itemTitle: Upload Directory Structure to Bucket Storage -itemHasParent: SHR-BUCKET-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall upload local directory structures with multiple files and subdirectories to bucket storage while preserving file organization. 
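SWR-APPLICATION-3-2 above reduces to two checks per artifact: the file is present, and its size matches the expected specification. A minimal sketch, assuming the expected sizes are supplied as a mapping from artifact name to byte count:

```python
from pathlib import Path


def validate_artifacts(output_dir: Path, expected_sizes: dict[str, int]) -> dict[str, bool]:
    """Report per-artifact validation: present on disk and exactly the expected byte size."""
    return {
        name: (output_dir / name).is_file() and (output_dir / name).stat().st_size == size
        for name, size in expected_sizes.items()
    }
```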
diff --git a/requirements/SWR-BUCKET-1-2.md b/requirements/SWR-BUCKET-1-2.md deleted file mode 100644 index 6523fb44..00000000 --- a/requirements/SWR-BUCKET-1-2.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-BUCKET-1-2 -itemTitle: List Bucket Files -itemHasParent: SHR-BUCKET-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall list files stored in bucket storage for user access. diff --git a/requirements/SWR-BUCKET-1-3.md b/requirements/SWR-BUCKET-1-3.md deleted file mode 100644 index f3e034f7..00000000 --- a/requirements/SWR-BUCKET-1-3.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-BUCKET-1-3 -itemTitle: Download Files with Content Validation -itemHasParent: SHR-BUCKET-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall download files from bucket storage to specified destinations with identical content preservation. diff --git a/requirements/SWR-BUCKET-1-4.md b/requirements/SWR-BUCKET-1-4.md deleted file mode 100644 index 1ebe9b03..00000000 --- a/requirements/SWR-BUCKET-1-4.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-BUCKET-1-4 -itemTitle: Delete Individual Files from Bucket -itemHasParent: SHR-BUCKET-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall delete individual files from bucket storage when requested by users and confirm successful deletion operations. diff --git a/requirements/SWR-BUCKET-1-5.md b/requirements/SWR-BUCKET-1-5.md deleted file mode 100644 index 4faba34c..00000000 --- a/requirements/SWR-BUCKET-1-5.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-BUCKET-1-5 -itemTitle: Enable Download and Delete Upon File Selection -itemHasParent: SHR-BUCKET-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall enable download and delete operations when users select files from bucket storage. diff --git a/requirements/SWR-BUCKET-1-6.md b/requirements/SWR-BUCKET-1-6.md deleted file mode 100644 index 47e7a386..00000000 --- a/requirements/SWR-BUCKET-1-6.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-BUCKET-1-6 -itemTitle: Remove Selected Files from Bucket Storage -itemHasParent: SHR-BUCKET-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall remove selected files from bucket storage when triggered by user deletion action through GUI controls. diff --git a/requirements/SWR-BUCKET-1-7.md b/requirements/SWR-BUCKET-1-7.md deleted file mode 100644 index 21484e88..00000000 --- a/requirements/SWR-BUCKET-1-7.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-BUCKET-1-7 -itemTitle: Reflect File Removal in User Interface -itemHasParent: SHR-BUCKET-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: User Interface (frontend) ---- - -System shall update the user interface to reflect when files have been removed from bucket storage. diff --git a/requirements/SWR-BUCKET-1-8.md b/requirements/SWR-BUCKET-1-8.md deleted file mode 100644 index 943ea5fe..00000000 --- a/requirements/SWR-BUCKET-1-8.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-BUCKET-1-8 -itemTitle: Disable File Operation Controls Without Selection -itemHasParent: SHR-BUCKET-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: User Interface (frontend) ---- - -System shall disable download and delete controls when no files are selected in the bucket interface. 
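The "preserving file organization" clause of SWR-BUCKET-1-1 above implies that each object key encodes the file's path relative to the uploaded root. A sketch of that mapping, independent of any storage client; `prefix` plays the role of the `destination_prefix` parameter on `Service.upload()` documented later in this diff:

```python
from pathlib import Path


def object_keys_for_directory(source: Path, prefix: str) -> dict[Path, str]:
    """Map every file under `source` to a bucket key that preserves the directory layout."""
    return {
        path: f"{prefix}/{path.relative_to(source).as_posix()}"
        for path in sorted(source.rglob("*"))
        if path.is_file()
    }
```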
diff --git a/requirements/SWR-BUCKET-1-9.md b/requirements/SWR-BUCKET-1-9.md deleted file mode 100644 index eb54bb4c..00000000 --- a/requirements/SWR-BUCKET-1-9.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-BUCKET-1-9 -itemTitle: Confirm File Download Completion -itemHasParent: SHR-BUCKET-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: User Interface (frontend) ---- - -System shall notify users when file download operations have completed successfully. diff --git a/requirements/SWR-DATASET-1-1.md b/requirements/SWR-DATASET-1-1.md deleted file mode 100644 index c492a688..00000000 --- a/requirements/SWR-DATASET-1-1.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-DATASET-1-1 -itemTitle: Download Dataset Instance via CLI Command -itemHasParent: SHR-DATASET-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall download specified dataset with proper directory structure when user provides valid dataset identifier or URL and destination directory. diff --git a/requirements/SWR-DATASET-1-2.md b/requirements/SWR-DATASET-1-2.md deleted file mode 100644 index 3eb711ae..00000000 --- a/requirements/SWR-DATASET-1-2.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-DATASET-1-2 -itemTitle: Verify Downloaded File Integrity -itemHasParent: SHR-DATASET-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall verify that downloaded dataset files are complete, uncorrupted, and have valid format integrity. diff --git a/requirements/SWR-DATASET-1-3.md b/requirements/SWR-DATASET-1-3.md deleted file mode 100644 index 5274df03..00000000 --- a/requirements/SWR-DATASET-1-3.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-DATASET-1-3 -itemTitle: Provide Download Completion Confirmation -itemHasParent: SHR-DATASET-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: User Interface (frontend) ---- - -System shall provide confirmation message when dataset download completes successfully. diff --git a/requirements/SWR-NOTEBOOK-1-1.md b/requirements/SWR-NOTEBOOK-1-1.md deleted file mode 100644 index a29041aa..00000000 --- a/requirements/SWR-NOTEBOOK-1-1.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-NOTEBOOK-1-1 -itemTitle: Launch Notebook Extension -itemHasParent: SHR-NOTEBOOK-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall launch notebook extension when requested by user. diff --git a/requirements/SWR-VISUALIZATION-1-1.md b/requirements/SWR-VISUALIZATION-1-1.md deleted file mode 100644 index 2177c883..00000000 --- a/requirements/SWR-VISUALIZATION-1-1.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-VISUALIZATION-1-1 -itemTitle: Install QuPath Software -itemHasParent: SHR-VISUALIZATION-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall install QuPath software and confirm installation completion with version information. System shall check for and install the latest QuPath version to keep the software up to date. 
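The integrity verification of SWR-DATASET-1-2 pairs naturally with CRC32C, since `google-crc32c` is already a required SDK dependency for data integrity verification (see SPEC-APPLICATION-SERVICE.md below). A sketch using incremental hashing; comparing against a checksum published by the dataset index is an assumption:

```python
from pathlib import Path

import google_crc32c  # required SDK dependency, used here for integrity verification


def crc32c_of_file(path: Path, chunk_size: int = 1024 * 1024) -> int:
    """Compute the CRC32C of a file incrementally, one 1MB chunk at a time."""
    crc = 0
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            crc = google_crc32c.extend(crc, chunk)
    return crc


def verify_download(path: Path, expected_crc: int) -> bool:
    """True when the downloaded file exists and matches the expected checksum."""
    return path.is_file() and crc32c_of_file(path) == expected_crc
```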
diff --git a/requirements/SWR-VISUALIZATION-1-2.md b/requirements/SWR-VISUALIZATION-1-2.md deleted file mode 100644 index 9d908b69..00000000 --- a/requirements/SWR-VISUALIZATION-1-2.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-VISUALIZATION-1-2 -itemTitle: Launch QuPath Application -itemHasParent: SHR-VISUALIZATION-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall launch QuPath application when requested by user. diff --git a/requirements/SWR-VISUALIZATION-1-3.md b/requirements/SWR-VISUALIZATION-1-3.md deleted file mode 100644 index b48e7589..00000000 --- a/requirements/SWR-VISUALIZATION-1-3.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -itemId: SWR-VISUALIZATION-1-3 -itemTitle: Create QuPath Projects from Application Results -itemHasParent: SHR-VISUALIZATION-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- - -System shall create QuPath projects with annotation data from application run results. diff --git a/requirements/SWR-VISUALIZATION-1-4.md b/requirements/SWR-VISUALIZATION-1-4.md deleted file mode 100644 index 7e610309..00000000 --- a/requirements/SWR-VISUALIZATION-1-4.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -itemId: SWR-VISUALIZATION-1-4 -itemTitle: Create QuPath Projects with Segmentation Polygons -itemHasParent: SHR-VISUALIZATION-1 -itemType: Requirement -Requirement type: FUNCTIONAL -Layer: System (backend logic) ---- -System shall create QuPath projects with input images with annotated cells, and segmentation polygons from application run results. diff --git a/sonar-project.properties b/sonar-project.properties index edc217cc..0c7d9347 100644 --- a/sonar-project.properties +++ b/sonar-project.properties @@ -1,6 +1,6 @@ sonar.projectKey=aignostics_python-sdk sonar.organization=aignostics -sonar.projectVersion=0.2.197 +sonar.projectVersion=0.2.189 sonar.projectDescription=🔬 Python SDK providing access to Aignostics AI services. sonar.links.homepage=https://aignostics.readthedocs.io/en/latest/ sonar.links.scm=https://github.com/aignostics/python-sdk diff --git a/specifications/SPEC-APPLICATION-SERVICE.md b/specifications/SPEC-APPLICATION-SERVICE.md deleted file mode 100644 index 087bd349..00000000 --- a/specifications/SPEC-APPLICATION-SERVICE.md +++ /dev/null @@ -1,440 +0,0 @@ ---- -itemId: SPEC-APPLICATION-SERVICE -itemTitle: Application Module Specification -itemType: Software Item Spec -itemFulfills: SWR-APPLICATION-1-1, SWR-APPLICATION-1-2, SWR-APPLICATION-2-3, SWR-APPLICATION-2-4, SHR-APPLICATION-3, SWR-APPLICATION-2-12, SWR-APPLICATION-2-11, SWR-APPLICATION-2-13, SWR-APPLICATION-2-14, SWR-APPLICATION-2-15, SWR-APPLICATION-2-16, SWR-APPLICATION-2-5, SWR-APPLICATION-2-7, SWR-APPLICATION-2-8, SWR-APPLICATION-2-9, SWR-APPLICATION-3-3 -Module: Application -Layer: Domain Service -Version: 0.2.106 -Date: 2025-09-09 ---- - -## 1. Description - -### 1.1 Purpose - -The Application Module provides comprehensive management of AI applications and their execution lifecycle on the Aignostics Platform. It enables users to discover, submit, monitor, and retrieve results from computational pathology applications through a unified interface. - -The module implements a domain service layer that orchestrates interactions between the platform API, local file systems, and multiple specialized services (WSI, bucket, QuPath) to provide a unified interface for AI application workflows. 
It abstracts the complexity of the underlying platform through a multi-modal approach, offering CLI, GUI, and programmatic interfaces that coordinate the complete lifecycle from data preparation through result analysis. - -### 1.2 Functional Requirements - -The Application Module shall: - -- **FR-01** **Application Discovery**: List and browse available applications with filtering capabilities and detailed information retrieval -- **FR-02** **Data Preparation**: Automatically scan directories for whole slide images (WSI), extract comprehensive metadata, and validate file formats -- **FR-03** **File Upload Management**: Provide secure, chunked file upload to cloud storage with progress tracking and integrity verification -- **FR-04** **Run Lifecycle Management**: Submit, monitor, cancel, and delete application runs with real-time status updates -- **FR-05** **Result Download**: Progressive download of analysis results with resumable operations and organized directory hierarchies -- **FR-06** **QuPath Integration**: Automatic QuPath project creation with downloaded results for pathology analysis -- **FR-07** **Multi-Modal Interface**: Provide CLI, GUI, and programmatic interfaces for different user workflows - -### 1.3 Non-Functional Requirements - -- **Performance**: Handle multi-gigabyte whole slide images with memory-efficient streaming processing through bucket and WSI service integration -- **Security**: Implement data integrity verification, secure token-based authentication, and comprehensive input validation -- **Reliability**: Provide graceful error handling with typed exceptions, resumable operations after interruption, and partial failure handling for batch operations -- **Usability**: Offer real-time progress tracking, user-friendly error messages, and consistent interfaces across CLI/GUI/API -- **Scalability**: Support large-scale operations through integration with cloud storage and platform services - -### 1.4 Constraints and Limitations - -- Platform API compatibility requirements with auto-generated client integration -- File format support limited to major medical imaging formats (DICOM, TIFF, SVS) -- Memory management constraints for large file processing with configurable chunk sizes -- Optional QuPath integration requiring ijson dependency for full functionality - ---- - -## 2. 
Architecture and Design - -### 2.1 Module Structure - -``` -application/ -├── _service.py # Core business logic and application lifecycle management -├── _cli.py # Command-line interface for application operations -├── _gui/ # Web-based GUI components -│ ├── _frame.py # Main GUI application frame -│ ├── _page_index.py # Application discovery and selection page -│ ├── _page_application_describe.py # Application details and description page -│ ├── _page_application_run_describe.py # Run details and management page -│ ├── _page_builder.py # Dynamic page builder utilities -│ ├── _utils.py # GUI utility functions -│ └── assets/ # Static assets (images, animations, icons) -├── _settings.py # Module-specific configuration and environment variables -├── _utils.py # Helper functions for metadata processing and validation -└── __init__.py # Module exports and public API -``` - -### 2.2 Key Components - -| Component | Type | Purpose | Public API | -| ----------------------- | ----- | -------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| `DownloadProgressState` | Enum | Enumeration of download progress states | `INITIALIZING`, `QUPATH_ADD_INPUT`, `CHECKING`, `WAITING`, `DOWNLOADING`, `QUPATH_ADD_RESULTS`, `QUPATH_ANNOTATE_INPUT_WITH_RESULTS`, `COMPLETED` | -| `DownloadProgress` | Model | Progress tracking for download operations with computed fields | `total_artifact_count`, `total_artifact_index`, `item_progress_normalized`, `artifact_progress_normalized` | -| `Service` | Class | Main service class for application lifecycle management | `applications()`, `application_run_submit()`, `application_run_download()`, `application_runs()`, `application_run()`, `application_run_cancel()`, `application_run_result_delete()` | - -### 2.3 Design Patterns - -- **Service Layer Pattern**: Core business logic encapsulated in ApplicationService with consistent interfaces -- **Dependency Injection**: Dynamic discovery and lazy initialization of platform clients and dependent services -- **Observer Pattern**: Progress tracking through queue-based communication and callback mechanisms -- **Strategy Pattern**: Multi-modal interface design with CLI, GUI, and programmatic access patterns - ---- - -## 3. 
Inputs and Outputs - -### 3.1 Inputs - -| Input Type | Source | Data Type/Format | Validation Rules | Business Rules | -| -------------------------- | ------------- | ---------------- | ------------------------------------------------------ | ----------------------------------------------- | -| **Supported WSI Files** | CLI/GUI | Path object | Must exist, extension in WSI_SUPPORTED_FILE_EXTENSIONS | File must be readable, format must be supported | -| **Application Version ID** | API | String | Must be valid UUID format | Must correspond to existing application version | -| **Input Items** | API | List[InputItem] | Each item must have valid metadata | Items must match application input schema | -| **Run ID** | API | String | Must be valid UUID format | Must correspond to existing application run | -| **Upload Chunks** | Configuration | Integer | Must be positive value | Configurable based on platform limits | - -### 3.2 Outputs - -| Output Type | Destination | Data Type/Format | Success Criteria | Error Conditions | -| ---------------------- | ---------------- | --------------------- | -------------------------------------------------- | --------------------------------------- | -| **Application Runs** | Platform API | ApplicationRun object | Run successfully submitted with valid ID | Platform API failure, validation errors | -| **Downloaded Results** | Local filesystem | Directory structure | All artifacts downloaded to organized directories | Network failure, permission errors | -| **QuPath Projects** | Local filesystem | .qpproj file | Valid QuPath project with input/result integration | QuPath dependency missing, file errors | -| **Progress Updates** | Callback/GUI | DownloadProgress | Real-time progress tracking with normalized values | Callback execution errors | -| **Metadata Reports** | CLI/GUI | Formatted text/JSON | Human-readable metadata display | Processing errors, missing files | - -### 3.3 Data Schemas - -**InputItem Schema:** - -```yaml -InputItem: - type: object - properties: - path: - type: string - description: File system path to WSI file - metadata: - type: object - description: Extracted WSI metadata including dimensions and format - bucket_key: - type: string - description: Cloud storage key after upload - required: [path, metadata] -``` - -**DownloadProgress Schema:** - -```yaml -DownloadProgress: - type: object - properties: - state: - type: string - enum: [INITIALIZING, CHECKING, DOWNLOADING, QUPATH_ADD_RESULTS, COMPLETED] - total_artifact_count: - type: integer - description: Total number of artifacts to download - total_artifact_index: - type: integer - description: Current artifact being processed - item_progress_normalized: - type: number - minimum: 0 - maximum: 1 - description: Progress for current item (0-1) - artifact_progress_normalized: - type: number - minimum: 0 - maximum: 1 - description: Overall progress across all artifacts (0-1) - required: [state, total_artifact_count, total_artifact_index] -``` - -### 3.4 Data Flow - -```mermaid -graph TD - A[WSI Files] --> B[WSI Service] - B --> C[Metadata Extraction] - C --> D[Input Items Creation] - - D --> E[Bucket Service] - E --> F[File Upload to Cloud Storage] - F --> G[Platform API] - G --> H[Application Run Submission] - - H --> I[Platform Processing] - I --> J[Run Status Monitoring] - J --> K{Run Complete?} - K -->|No| J - K -->|Yes| L[Result Download] - - L --> M[Bucket Service Download] - M --> N[Local File System] - N --> O{QuPath Available?} - O -->|Yes| P[QuPath Integration] - O -->|No| Q[Results 
Only] - - R[Settings/_settings.py] --> B - R --> E - R --> G - - S[Progress Tracking] --> F - S --> L - S --> P - - T[CLI/GUI Input] --> A - U[DownloadProgress Model] --> S - - subgraph "Application Service Layer" - V[Service.applications] - W[Service.application_run_submit] - X[Service.application_run_download] - Y[Service.application_run_cancel] - end - - D --> W - L --> X - J --> Y - - subgraph "External Dependencies" - G - E - B - end - - subgraph "Progress States" - Z1[INITIALIZING] - Z2[CHECKING] - Z3[DOWNLOADING] - Z4[QUPATH_ADD_RESULTS] - Z5[COMPLETED] - end - - S --> Z1 - Z1 --> Z2 - Z2 --> Z3 - Z3 --> Z4 - Z4 --> Z5 -``` - ---- - -## 4. Interface Definitions - -### 4.1 Public API - -#### Core Service Interface - -```python -class Service: - """Service of the application module.""" - - def applications(self) -> list[Application]: - """List all available applications with filtering capabilities - - Returns: - List of Application objects with metadata - - Raises: - Exception: When application list cannot be retrieved - """ - pass - - def application_run_submit( - self, - application_version_id: str, - items: list[InputItem] - ) -> ApplicationRun: - """Submit application run with validated inputs - - Args: - application_version_id: ID of the application version to run - items: List of items to process with metadata - - Returns: - ApplicationRun object with run details - - Raises: - ValueError: When input validation fails - RuntimeError: When submission fails - """ - pass - def application_run_download( - self, - run_id: str, - output_dir: Path, - progress_callback: Optional[Callable] = None - ) -> DownloadProgress: - """Download results with progress tracking - - Args: - run_id: Application run identifier - output_dir: Directory for downloaded results - progress_callback: Optional callback for progress updates - - Returns: - DownloadProgress object with completion status - - Raises: - NotFoundException: When run ID is invalid - RuntimeError: When download operation fails - """ - pass -``` - -### 4.2 CLI Interface - -**Command Structure:** - -```bash -uvx aignostics application [subcommand] [options] -``` - -**Available Commands:** - -- `list`: List all available applications with filtering -- `describe`: Get detailed information about a specific application -- `dump-schemata`: Export application schemata -- `run execute`: Combined prepare, upload, and submit workflow -- `run prepare`: Generate metadata from source directory -- `run upload`: Upload files to cloud storage -- `run submit`: Submit application run -- `run list`: List application runs -- `run describe`: Get detailed run information -- `run cancel`: Cancel running application -- `run result download`: Download run results -- `run result delete`: Delete run results - -### 4.3 GUI Interface - -- **Navigation**: Accessible through main application menu and dashboard -- **Key UI Components**: - - Application discovery and selection interface - - Interactive submission workflow with file management - - Real-time progress tracking with visual indicators - - Results management with download capabilities - - Optional QuPath integration when available -- **User Workflows**: - - Browse → Select → Upload → Submit → Monitor → Download → Analyze - ---- - -## 5. 
Dependencies and Integration - -### 5.1 Internal Dependencies - -| Dependency Module | Usage Purpose | Interface Used | -| ----------------- | --------------------------------------- | ------------------------------------------- | -| Platform Service | Core platform API communication | Client initialization, authentication | -| Bucket Service | Cloud storage operations | File upload/download, signed URL generation | -| WSI Service | Medical image processing and metadata | Format detection, metadata extraction | -| Utils Module | Settings, logging, dependency injection | Configuration management, service discovery | - -### 5.2 External Dependencies - -| Dependency | Version | Purpose | Optional/Required | -| ------------- | -------- | ------------------------------ | ----------------- | -| aignx-codegen | Latest | Platform API client generation | Required | -| ijson | >=3.4.0 | QuPath integration | Optional | -| google-crc32c | >=1.7.1 | Data integrity verification | Required | -| humanize | >=4.12.3 | Progress formatting | Required | -| tqdm | >=4.67.1 | CLI progress indicators | Required | - -### 5.3 Integration Points - -- **Aignostics Platform API**: RESTful API integration for application management, run submission, and result retrieval -- **Cloud Storage Services**: Google Cloud Storage and AWS S3 integration through bucket service -- **QuPath Application**: Optional integration for pathology analysis and annotation management - ---- - -## 6. Configuration and Settings - -### 6.1 Configuration Parameters - -Configuration is managed through environment variables with the prefix `AIGNOSTICS_APPLICATION_`. The module uses Pydantic settings for validation and secure credential handling. - -| Parameter Pattern | Type | Description | Required | -| ------------------------------ | ---- | ---------------------------------- | -------- | -| `AIGNOSTICS_APPLICATION_*` | var | Application-specific configuration | No | -| Platform-specific chunk sizes | int | Configurable through platform API | No | -| Upload/download configurations | var | Managed by bucket and WSI services | No | - -### 6.2 Environment Variables - -| Variable | Purpose | Example Value | -| -------------------------- | ---------------------------------- | --------------------------------- | -| `AIGNOSTICS_APPLICATION_*` | Application-specific configuration | Various configuration parameters | -| `AIGNOSTICS_PLATFORM_URL` | Platform API endpoint | `https://platform.aignostics.com` | -| `AIGNOSTICS_AUTH_TOKEN` | Authentication token | `eyJhbGciOiJSUzI1NiIs...` | - ---- - -## 7. 
Error Handling and Validation - -### 7.1 Error Categories - -| Error Type | Cause | Handling Strategy | User Impact | -| ------------------- | -------------------------------- | ------------------------------ | ------------------------------- | -| `ValueError` | Invalid input data or metadata | Input validation with feedback | Clear validation error messages | -| `RuntimeError` | Platform API or operation errors | Retry with exponential backoff | Error details and guidance | -| `NotFoundException` | Missing runs or applications | Graceful rejection with info | Clear resource not found info | -| `FileNotFoundError` | Missing input files | File validation before upload | File path verification help | -| `ApiException` | Platform API failures | Retry mechanism with recovery | API error details and guidance | - -### 7.2 Input Validation - -- **WSI Files**: Format validation, file existence, size limits, and metadata extraction verification -- **Application Metadata**: Schema validation against application-specific requirements with type checking -- **Directory Paths**: Path existence, read permissions, and recursive access validation - -### 7.3 Graceful Degradation - -- **When QuPath dependencies are unavailable**: QuPath integration features are disabled with informative messages -- **When platform API is unreachable**: Local operations continue with cached data where possible -- **When individual file uploads fail**: Batch operations continue with error reporting for failed items - ---- - -## 8. Security Considerations - -### 8.1 Data Protection - -- **Authentication**: Token-based authentication with automatic session management and secure credential storage -- **Data Encryption**: HTTPS for all API communications and encrypted storage for sensitive configuration -- **Access Control**: Platform-based authorization with organization-level permissions and role-based access - -### 8.2 Security Measures - -- **Input Sanitization**: Comprehensive validation of all user inputs including file paths and metadata -- **Secret Management**: Secure handling of authentication tokens and API keys with automatic masking in logs -- **Audit Logging**: Security events logged including authentication, authorization, and data access operations - ---- - -## 9. 
Implementation Details - -### 9.1 Key Algorithms and Business Logic - -- **Metadata Generation Pipeline**: Multi-stage pipeline for WSI file discovery, metadata extraction, and validation -- **Progress Tracking Algorithm**: Normalized progress calculation with multi-level aggregation across files and operations -- **Chunked Upload Algorithm**: Memory-efficient streaming upload with integrity verification and resume capability - -### 9.2 State Management and Data Flow - -- **Configuration State**: Environment-aware settings management with Pydantic validation and secure credential handling -- **Runtime State**: Progress tracking state persistence for resumable operations and error recovery -- **Cache Management**: Platform client caching with lazy initialization and automatic session management - -### 9.3 Performance and Scalability Considerations - -- **Async Operations**: Asynchronous file upload/download operations with configurable concurrency limits -- **Thread Safety**: Thread-safe progress tracking and state management with queue-based communication -- **Resource Management**: Proper cleanup of network connections and file handles with context managers -- **Memory Efficiency**: Handle multi-gigabyte files through streaming and chunked operations -- **Scalability Patterns**: Integration with cloud storage services for horizontal scaling diff --git a/specifications/SPEC-BUCKET-SERVICE.md b/specifications/SPEC-BUCKET-SERVICE.md deleted file mode 100644 index 7dcfd401..00000000 --- a/specifications/SPEC-BUCKET-SERVICE.md +++ /dev/null @@ -1,376 +0,0 @@ ---- -itemId: SPEC-BUCKET-SERVICE -itemTitle: Bucket Module Specification -itemType: Software Item Spec -itemFulfills: SWR-BUCKET-1-1, SWR-BUCKET-1-2, SWR-BUCKET-1-3, SWR-BUCKET-1-4, SWR-BUCKET-1-5, SWR-BUCKET-1-6, SWR-BUCKET-1-7, SWR-BUCKET-1-8, SWR-BUCKET-1-9 -Module: Bucket -Layer: Domain Service -Version: 0.2.105 -Date: 2025-09-09 ---- - -## 1. Description - -### 1.1 Purpose - -The Bucket Module provides comprehensive integration between the Aignostics Python SDK and S3-compatible cloud storage buckets. It enables secure file upload, download, and management operations with support for both Google Cloud Storage and AWS S3. The module serves as the primary interface for cloud storage operations within the SDK, offering both programmatic and interactive access through CLI and GUI interfaces. - -### 1.2 Functional Requirements - -The Bucket Module shall: - -- **[FR-01]** Provide secure upload and download operations for files and directories to S3-compatible storage. -- **[FR-02]** Support pattern-based file operations using regex for bulk operations and content discovery. -- **[FR-03]** Generate time-limited signed URLs for secure file access without exposing credentials. -- **[FR-04]** Implement ETag-based caching to optimize bandwidth usage by skipping unchanged files. -- **[FR-05]** Offer both command-line and web-based user interfaces for interactive storage management. -- **[FR-06]** Support multiple cloud storage providers (Google Cloud Storage, AWS S3) through configurable protocols. 
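FR-03 in the list above (time-limited signed URLs) maps directly onto the presigning support in boto3, the S3-compatible client this module is documented to build on. A sketch with placeholder credentials, not the module's actual implementation; the endpoint and region mirror the defaults in section 6.1 of this spec:

```python
import boto3

# Placeholders: in the SDK these values come from AIGNOSTICS_BUCKET_* settings (section 6).
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.googleapis.com",  # S3-compatible GCS endpoint
    region_name="EUROPE-WEST3",
    aws_access_key_id="HMAC_ACCESS_ID",
    aws_secret_access_key="HMAC_SECRET",
)

# Time-limited download URL for a single object; no credentials are exposed to the caller.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "alice/slides/a.svs"},
    ExpiresIn=3600,  # seconds until the URL stops working
)
print(url)
```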
- -### 1.3 Non-Functional Requirements - -- **Performance**: Handle files up to several GB with chunked transfer (1MB upload chunks, 10MB download chunks), progress tracking with byte-level granularity -- **Security**: HMAC-based authentication, secure credential management with automatic masking in logs, configurable signed URL expiration -- **Reliability**: ETag-based integrity checking, proper error handling with cleanup, retry mechanisms for network failures -- **Usability**: Type-safe CLI with automatic help generation, intuitive web interface with real-time progress indicators -- **Scalability**: Support concurrent operations, configurable chunk sizes for different file sizes and network conditions - -### 1.4 Constraints and Limitations - -- S3-Compatible API Requirement: Storage backends must support S3-compatible APIs with HMAC authentication -- Protocol Limitation: Currently supports gs:// and s3:// protocols only, no support for other cloud storage APIs -- Single Bucket Operations: Operations are scoped to individual buckets, no cross-bucket operations supported -- Memory Usage: Large file operations may require significant memory for chunked processing and ETag calculation -- **S3-Compatible API**: Uses S3-compatible API endpoints through boto3 library for broad cloud storage compatibility -- **HMAC Authentication**: Requires HMAC access keys for authentication rather than service account credentials -- **Configurable Regions**: Supports configurable regions with EUROPE-WEST3 as default for Google Cloud Storage - ---- - -## 2. Architecture and Design - -### 2.1 Module Structure - -``` -bucket/ -├── _service.py # Core business logic and S3-compatible storage operations -├── _cli.py # Command-line interface with Typer framework -├── _gui.py # Web-based GUI components using NiceGUI -├── _settings.py # Configuration management and environment variables -└── __init__.py # Module exports: Service, cli, and conditional PageBuilder -``` - -### 2.2 Key Components - -| Component | Type | Purpose | Public API | | ------------- | ----- | ---------------------------------------- | ------------------------------------------------------ | | `Service` | Class | Core S3-compatible storage operations | `upload()`, `download()`, `list()`, `delete()` | | `cli` | Typer | Command-line interface for bucket ops | `upload`, `download`, `list`, `delete` commands | | `PageBuilder` | Class | Web interface for interactive management | `register_pages()` with bucket management UI | | `Settings` | Class | Configuration and credential management | Environment variable handling, default value provision | - -### 2.3 Design Patterns - -- **Service Layer Pattern**: Business logic encapsulated in Service class with clear separation from presentation layers -- **Dependency Injection**: Settings injected into Service for configurable behavior and testability -- **Adapter Pattern**: S3-compatible API adapter for Google Cloud Storage using boto3 client -- **Strategy Pattern**: Configurable protocols (gs://, s3://) through protocol-specific URL handling - -**Upload Chunking (1MB chunks):** - -```python -UPLOAD_CHUNK_SIZE = 1024 * 1024  # 1MB - -# Optimized for memory usage during streaming uploads -def read_in_chunks(f): -    """Yield successive 1MB chunks from an open binary file object.""" -    while True: -        chunk = f.read(UPLOAD_CHUNK_SIZE) -        if not chunk: -            break -        yield chunk -``` - ---- - -## 3. 
Inputs and Outputs - -### 3.1 Inputs - -| Input Type | Source | Data Type/Format | Validation Rules | Business Rules | -| ------------------ | ------------- | ---------------- | --------------------------------------------------------- | ------------------------------------------------------- | -| Bucket Name | CLI/GUI/API | String | Must match GCS bucket naming conventions | Must correspond to accessible cloud storage bucket | -| Object Key/Pattern | CLI/GUI/API | String/Regex | Valid path characters, regex patterns for bulk operations | Keys must follow cloud storage path conventions | -| Local File Path | CLI/GUI/API | Path | Must exist for upload, valid directory for download | File must be readable, directories must be writable | -| Credentials | Environment | HMAC Key Pair | Required AIGNOSTICS_BUCKET_HMAC_* variables | Keys must have appropriate bucket permissions | -| Protocol | Configuration | String | Must be "gs" or "s3" | Protocol must match configured cloud storage provider | - -### 3.2 Outputs - -| Output Type | Destination | Data Type/Format | Success Criteria | Error Conditions | -| ---------------- | ---------------- | ---------------- | --------------------------------------------- | ------------------------------------------- | -| Uploaded Files | Cloud Storage | Binary/Metadata | Successful S3 PUT with ETag confirmation | Network failure, permission errors | -| Downloaded Files | Local Filesystem | Binary | Complete download with ETag validation | Disk space issues, permission errors | -| Signed URLs | Client/Platform | HTTPS URL | Valid URL with correct expiration time | Credential errors, invalid object keys | -| Progress Updates | CLI/GUI | Progress Models | Real-time byte-level progress information | Callback execution errors | -| Operation Status | Logs/Console | Structured Logs | Success/failure with detailed error messages | Logging system failures | - -### 3.3 Data Schemas - -**DownloadProgress Schema:** - -```yaml -DownloadProgress: - type: object - properties: - total_bytes: - type: integer - description: Total bytes to download - downloaded_bytes: - type: integer - description: Bytes downloaded so far - current_file: - type: string - description: Current file being downloaded - progress_percentage: - type: number - minimum: 0 - maximum: 100 - description: Download progress as percentage - required: [total_bytes, downloaded_bytes] -``` - -**UploadProgress Schema:** - -```yaml -UploadProgress: - type: object - properties: - total_bytes: - type: integer - description: Total bytes to upload - uploaded_bytes: - type: integer - description: Bytes uploaded so far - current_file: - type: string - description: Current file being uploaded - upload_speed: - type: number - description: Upload speed in bytes per second - required: [total_bytes, uploaded_bytes] -``` - -### 3.4 Data Flow - -```mermaid -graph LR - A[User Input] --> B[Service Layer] --> C[S3-Compatible API] - B --> D[Progress Tracking] - E[Environment Config] --> B - C --> F[Cloud Storage] - D --> G[UI Updates] - B --> H[Local Filesystem] -```` - ---- - -## 4. Interface Definitions - -### 4.1 Public API - -#### Core Service Interface - -```python -class Service(BaseService): - """Bucket service for S3-compatible cloud storage operations.""" - - def upload(self, source_path: Path, destination_prefix: str, - callback: Callable[[int, Path], None] | None = None) -> dict[str, list[str]]: - """Upload file or directory to cloud storage. 
### 4.2 CLI Interface

**Command Structure:**

```bash
uvx aignostics bucket [subcommand] [options]
```

**Available Commands:**

- `upload <source_path> <destination_prefix>`: Upload a file or directory to the bucket under the given destination prefix
- `find [patterns...] [--what-is-key] [--detailed]`: Find and list bucket contents with optional pattern matching
- `download [patterns...] [--destination] [--what-is-key]`: Download from the bucket with optional pattern filtering
- `delete [patterns...] [--what-is-key] [--dry-run]`: Delete objects from the bucket with pattern matching
- `url <object_key> [--download/--upload]`: Generate signed URLs for secure access

### 4.3 GUI Interface

- **Navigation**: Accessible via the main SDK GUI menu under "Cloud Storage"
- **Key UI Components**: File upload drag-and-drop, progress bars, bucket browser, pattern-based filtering
- **User Workflows**: Interactive file management, real-time progress tracking, signed URL generation

---

## 5. 
Dependencies and Integration - -### 5.1 Internal Dependencies - -| Dependency Module | Usage Purpose | Interface Used | -| ----------------- | -------------------------- | ------------------------------- | -| Platform Service | User authentication/config | Environment variable management | -| Utils Module | Logging and base services | `BaseService`, `get_logger` | -| GUI Module | Web interface framework | `frame` component for UI layout | - -### 5.2 External Dependencies - -| Dependency | Version | Purpose | Optional/Required | -| ---------- | -------- | ---------------------------- | ----------------- | -| boto3 | >=1.39.8 | S3-compatible API client | Required | -| pydantic | >=2.0 | Data validation and settings | Required | -| typer | Latest | CLI framework | Required | -| nicegui | Latest | Web GUI framework | Optional | -| rich | Latest | Enhanced console output | Required | - -### 5.3 Integration Points - -- **Aignostics Platform API**: Credential management and user authentication -- **Cloud Storage Services**: Google Cloud Storage (primary), AWS S3 (secondary) via S3-compatible APIs -- **Local Filesystem**: File operations, progress tracking, user data directories - ---- - -## 6. Configuration and Settings - -### 6.1 Configuration Parameters - -| Parameter | Type | Default | Description | Required | -| ---------------------------- | ---- | -------------------------------------------- | ------------------------- | -------- | -| `protocol` | str | "gs" | Storage protocol (gs/s3) | No | -| `endpoint_url` | str | "https://storage.googleapis.com" | S3-compatible endpoint | No | -| `region` | str | "EUROPE-WEST3" | Storage region | No | -| `download_default_directory` | Path | `~/.local/share/aignostics/bucket_downloads` | Default download location | No | - -### 6.2 Environment Variables - -| Variable | Purpose | Example Value | -| ---------------------------------- | ------------------------- | -------------------------- | -| `AIGNOSTICS_BUCKET_HMAC_ACCESS_ID` | S3 access key ID | `GOOG1A2B3C4D5` | -| `AIGNOSTICS_BUCKET_HMAC_SECRET` | S3 secret access key | `secret123...` | -| `AIGNOSTICS_BUCKET_PROTOCOL` | Override default protocol | `s3` | -| `AIGNOSTICS_BUCKET_ENDPOINT_URL` | Override endpoint URL | `https://s3.amazonaws.com` | - ---- - -## 7. 
Error Handling and Validation - -### 7.1 Error Categories - -| Error Type | Cause | Handling Strategy | User Impact | -| ----------------- | ------------------------------- | ------------------------------- | ----------------------------- | -| `CredentialError` | Missing/invalid HMAC keys | Clear error with setup guide | Operation blocked until fixed | -| `NetworkError` | Connection/timeout issues | Retry with exponential backoff | Temporary delay, then retry | -| `ValidationError` | Invalid input parameters | Input validation with feedback | Clear error message shown | -| `PermissionError` | Insufficient bucket permissions | Auth error with troubleshooting | Access denied notification | - -### 7.2 Input Validation - -- **Bucket Names**: Must follow GCS/S3 naming conventions (lowercase, no special chars) -- **Object Keys**: Validated for path safety, no leading/trailing slashes -- **File Paths**: Existence checks for uploads, directory validation for downloads -- **URLs**: Protocol validation (gs:// or s3://), proper bucket/key parsing - -### 7.3 Graceful Degradation - -- **When credentials unavailable**: CLI operations fail with setup instructions, GUI shows configuration needed -- **When cloud storage unreachable**: Operations timeout gracefully with retry options -- **When local filesystem full**: Upload operations pause with disk space warnings - ---- - -## 8. Security Considerations - -### 8.1 Data Protection - -- **Authentication**: HMAC key-based authentication for S3-compatible APIs -- **Data Encryption**: HTTPS for all transfers, cloud provider encryption at rest -- **Access Control**: Bucket-level permissions managed through cloud provider IAM - -### 8.2 Security Measures - -- **Input Sanitization**: All file paths and object keys validated against injection attacks -- **Secret Management**: HMAC keys never logged, masked in output with `mask_secrets()` -- **Audit Logging**: All operations logged with timestamps, user context, and outcomes - ---- - -## 9. 
Implementation Details - -### 9.1 Key Algorithms and Business Logic - -- **Chunked Transfer**: Adaptive chunk sizing based on operation type (1MB upload, 10MB download, 100MB ETag) -- **ETag Caching**: MD5-based content comparison to avoid redundant downloads -- **Progress Calculation**: Byte-level progress tracking with transfer speed estimation -- **Pattern Matching**: Regex-based object filtering for bulk operations and content discovery - -### 9.2 State Management and Data Flow - -- **Configuration State**: Settings cached from environment variables with lazy loading -- **Runtime State**: Progress models maintain operation state with real-time updates -- **Cache Management**: ETag-based file validation cache for efficient re-download detection -- **Session Management**: S3 client connection pooling and automatic retry mechanisms - -### 9.3 Performance and Scalability Considerations - -- **Memory Efficiency**: Streaming operations for large files with configurable chunk sizes -- **Network Optimization**: Connection pooling, retry mechanisms, and bandwidth throttling -- **Concurrent Operations**: Thread-safe progress tracking and parallel transfer support -- **Resource Management**: Proper cleanup of S3 client connections and file handles -- **Scalability Patterns**: Support for high-throughput operations with memory constraints diff --git a/specifications/SPEC-BUILD-CHAIN-CICD-SERVICE.md b/specifications/SPEC-BUILD-CHAIN-CICD-SERVICE.md deleted file mode 100644 index b873dfa8..00000000 --- a/specifications/SPEC-BUILD-CHAIN-CICD-SERVICE.md +++ /dev/null @@ -1,443 +0,0 @@ ---- -itemId: SPEC-BUILD-CHAIN-CICD-SERVICE -itemTitle: Build Chain and CI/CD Module Specification -itemType: Software Item Spec -itemFulfills: TBD _(System service requirements to be defined)_ -Layer: Infrastructure Service -Version: 0.2.140 -Date: 2025-09-11 ---- - -## 1. Description - -### 1.1 Purpose - -The Build Chain and CI/CD Module provides a comprehensive automated pipeline for the Aignostics Python SDK, encompassing code quality assurance, testing, security scanning, documentation generation, and multi-platform distribution. It ensures consistent, reliable, and secure software delivery through a series of automated quality gates and deployment mechanisms. 
- -### 1.2 Functional Requirements - -The Build Chain and CI/CD Module shall: - -- **[FR-01]** Execute automated quality gates including linting, type checking, and security scanning on every code change -- **[FR-02]** Run comprehensive test suites across multiple Python versions and operating systems with coverage reporting -- **[FR-03]** Generate and publish documentation automatically including API references and user guides -- **[FR-04]** Build and distribute Python packages to PyPI with semantic versioning -- **[FR-05]** Create and publish multi-architecture Docker images for both slim and full variants -- **[FR-06]** Generate compliance artifacts including SBOM, license reports, and vulnerability assessments -- **[FR-07]** Provide local development environment consistency through pre-commit hooks and development tools -- **[FR-08]** Support multiple distribution channels including PyPI, Docker registries, and GitHub releases -- **[FR-09]** Enable local CI/CD testing through Act integration for GitHub Actions workflows -- **[FR-10]** Implement automated dependency monitoring and security vulnerability detection - -### 1.3 Non-Functional Requirements - -- **Performance**: Build pipeline optimized for fast feedback with parallel execution across multiple platforms, efficient caching strategies for dependencies and Docker layers -- **Security**: Secrets managed through GitHub secrets, vulnerability scanning integrated, OIDC token-based authentication for secure service communication -- **Reliability**: Manual retry capabilities through GitHub Actions UI, comprehensive error reporting through job summaries and artifacts -- **Usability**: Clear feedback through GitHub status checks, detailed test reports in job summaries, one-command local development setup -- **Scalability**: Matrix builds across multiple platforms, parallel test execution, efficient caching strategies - -### 1.4 Constraints and Limitations - -- GitHub Actions runner limitations for concurrent jobs and execution time -- Docker registry rate limits requiring authenticated access for heavy usage -- Platform-specific testing constraints (e.g., macOS GitHub Actions limitations) -- Secret management restricted to repository administrators and configured environments - ---- - -## 2. 
Architecture and Design - -### 2.1 Module Structure - -``` -.github/ -├── workflows/ # GitHub Actions workflow definitions -│ ├── ci-cd.yml # Main orchestration workflow -│ ├── _lint.yml # Code quality and formatting checks -│ ├── _test.yml # Multi-platform testing pipeline -│ ├── _audit.yml # Security and compliance scanning -│ ├── _package-publish.yml # PyPI package publishing -│ ├── _docker-publish.yml # Container image publishing -│ ├── _codeql.yml # GitHub CodeQL security analysis -│ └── _ketryx_report_and_check.yml # Compliance reporting -├── copilot-instructions.md # AI pair programming guidelines -└── dependabot.yml # Automated dependency updates - -Makefile # Local development task orchestration -noxfile.py # Python environment management and task automation -pyproject.toml # Project configuration and dependencies -.pre-commit-config.yaml # Git hook definitions for quality gates -``` - -### 2.2 Key Components - -| Component | Type | Purpose | Public Interface | Dependencies | -| ---------------------- | ------------ | ----------------------------------------- | ------------------------------ | ---------------------- | -| `ci-cd.yml` | Workflow | Main orchestration of all pipeline stages | GitHub Actions triggers | All sub-workflows | -| `_lint.yml` | Workflow | Code formatting and linting validation | Ruff formatter and linter | UV, Python environment | -| `_test.yml` | Workflow | Multi-platform testing with coverage | Pytest execution across matrix | UV, test dependencies | -| `_audit.yml` | Workflow | Security and license compliance scanning | pip-audit, pip-licenses | Python environment | -| `_package-publish.yml` | Workflow | PyPI package building and publishing | UV build tools, PyPI API | GitHub release tags | -| `_docker-publish.yml` | Workflow | Container image building and publishing | Docker Buildx, registries | Docker Hub, GHCR | -| `Makefile` | Build System | Local development task orchestration | Command-line interface | Nox, UV, system tools | -| `noxfile.py` | Task Runner | Python environment and session management | Python API and CLI | UV, pytest, ruff, mypy | - -### 2.3 Design Patterns - -- **Pipeline as Code**: All CI/CD logic defined in version-controlled YAML files -- **Matrix Strategy**: Parallel execution across multiple platforms and Python versions -- **Dependency Injection**: Environment-specific configuration through secrets and variables -- **Service Layer Pattern**: Reusable workflow components through callable workflows -- **Fail Fast**: Early termination on critical quality gate failures - ---- - -## 3. 
Inputs and Outputs - -### 3.1 Inputs - -| Input Type | Source | Data Type/Format | Validation Rules | Business Rules | -| ------------- | --------------------- | ----------------------------- | ------------------------- | -------------------------- | -| Code Changes | Git Push/PR | Git commits | Pre-commit hooks, linting | Must pass quality gates | -| Version Tags | Git Tags | Semantic versioning (v*.*.\*) | Format validation | Triggers release pipeline | -| Configuration | Environment Variables | Key-value pairs | Secret validation | Secure handling required | -| Dependencies | Package Managers | Python packages, system tools | Vulnerability scanning | License compliance checked | -| Test Data | Fixtures/Mocks | JSON, YAML, binary files | Schema validation | Test isolation maintained | - -### 3.2 Outputs - -| Output Type | Destination | Data Type/Format | Success Criteria | Error Conditions | -| ---------------- | ---------------- | ---------------------------------- | ----------------------------- | --------------------------------------- | -| Python Package | PyPI | Wheel and source distribution | Successful upload | Authentication/validation failures | -| Docker Images | Docker Hub, GHCR | Multi-arch container images | Multi-platform build success | Registry authentication failures | -| Documentation | Read The Docs | HTML/PDF documentation | Successful deployment | Build/rendering failures | -| Test Reports | GitHub Actions | JUnit XML, coverage reports | All tests pass, coverage >85% | Test failures, coverage below threshold | -| Security Reports | Artifacts | JSON vulnerability/license reports | Clean vulnerability scan | Critical vulnerabilities detected | -| Release Assets | GitHub Releases | Binaries, documentation, reports | Successful asset upload | Asset creation/upload failures | - -### 3.3 Data Schemas - -**Workflow Trigger Schema:** - -```yaml -# GitHub Actions event schema -workflow_dispatch: - inputs: - skip_tests: - type: boolean - description: Skip test execution - default: false - -push: - branches: ["**"] - tags: ["v*.*.*"] - -pull_request: - branches: [main] - types: [opened, synchronize, reopened] -``` - -**Test Matrix Schema:** - -```yaml -# Test execution matrix configuration -strategy: - matrix: - runner: [ubuntu-latest, macos-latest, windows-latest] - python-version: ["3.11", "3.12", "3.13"] - experimental: [false] - include: - - runner: ubuntu-24.04-arm - experimental: true -``` - -**Release Artifact Schema:** - -```yaml -# Release asset structure -release_assets: - - type: python_package - format: [wheel, sdist] - platform: any - - type: docker_image - variants: [slim, full] - architectures: [amd64, arm64] - - type: documentation - formats: [html, pdf] - - type: compliance_reports - formats: [json, csv, xml] -``` - -### 3.4 Data Flow - -```mermaid -graph LR - A[Code Commit] --> B[Pre-commit Hooks] - B --> C[CI/CD Pipeline] - C --> D[Quality Gates] - D --> E[Build Artifacts] - E --> F[Distribution Channels] - - D --> G[Test Execution] - D --> H[Security Scanning] - D --> I[Documentation Generation] - - F --> J[PyPI] - F --> K[Docker Registries] - F --> L[GitHub Releases] -``` - ---- - -## 4. 
Interface Definitions - -### 4.1 Public API - -#### GitHub Actions Workflow Interface - -**Main Workflow**: `ci-cd.yml` - -- **Purpose**: Orchestrates the complete CI/CD pipeline from code quality to deployment -- **Triggers**: Git push, pull request, release creation, manual dispatch -- **Key Jobs**: - - `lint`: Code quality validation using Ruff formatter and linter - - `audit`: Security vulnerability and license compliance scanning - - `test`: Multi-platform testing with coverage reporting - - `codeql`: GitHub CodeQL security analysis - - `ketryx_report_and_check`: Compliance reporting and validation - - `package_publish`: PyPI package publishing (tags only) - - `docker_publish`: Container image publishing (tags only) - -**Input/Output Contracts**: - -- **Input Types**: Git events, environment variables, secrets -- **Output Types**: GitHub status checks, artifacts, deployments -- **Error Conditions**: Quality gate failures, authentication errors, platform unavailability - -#### Local Development Interface - -**Make Commands**: - -```bash -# Primary development commands -make install # Setup development environment -make all # Run all quality gates locally -make test # Execute test suite -make lint # Run code quality checks -make audit # Security and compliance scanning -make docs # Generate documentation -make dist # Build distribution packages -``` - -**Nox Sessions**: - -```bash -# Python environment management -uv run nox -s lint # Code quality validation -uv run nox -s test # Test execution with coverage -uv run nox -s audit # Security and compliance -uv run nox -s docs # Documentation generation -uv run nox -s dist # Package building -``` - -### 4.2 CLI Interface - -**Make Interface**: - -| Command | Purpose | Input Requirements | Output Format | -| ------------------- | --------------------------- | ------------------------ | ---------------------------- | -| `make all` | Run complete build pipeline | None | Console output, artifacts | -| `make test` | Execute test suite | Optional: Python version | JUnit XML, coverage reports | -| `make lint` | Code quality checks | None | Console output, exit codes | -| `make audit` | Security/compliance scan | None | JSON reports, console output | -| `make docker_build` | Build container images | None | Docker images (local) | -| `make dist` | Build Python packages | None | Wheel/sdist in dist/ | - -**Common Options**: - -- Environment variables for configuration override -- Skip patterns for CI (e.g., `skip:ci` in commit messages) -- Python version selection for testing -- Platform-specific build targets - -### 4.3 GitHub Actions Interface - -**Workflow Call Interface**: - -| Workflow | Method | Purpose | Permissions Required | Secrets Used | -| ---------------------- | --------------- | ----------------------- | ---------------------------------- | ------------------ | -| `_lint.yml` | `workflow_call` | Code quality validation | `contents: read` | None | -| `_test.yml` | `workflow_call` | Multi-platform testing | `contents: read, packages: write` | Test credentials | -| `_audit.yml` | `workflow_call` | Security scanning | `contents: read` | None | -| `_package-publish.yml` | `workflow_call` | PyPI publishing | `contents: write, packages: write` | `UV_PUBLISH_TOKEN` | -| `_docker-publish.yml` | `workflow_call` | Container publishing | `packages: write` | Docker credentials | - -**Environment Variables**: - -- `GITHUB_TOKEN`: Automatic GitHub authentication -- `PYTHONIOENCODING`: UTF-8 encoding for Python output -- Platform-specific 
configuration through matrix variables - ---- - -## 5. Dependencies and Integration - -### 5.1 Internal Dependencies - -| Dependency Module | Usage Purpose | Interface/Contract Used | Criticality | -| ----------------- | ------------------------------- | -------------------------- | ----------- | -| Source Code | Build target for all operations | Python package structure | Required | -| Test Suite | Quality validation | pytest test discovery | Required | -| Documentation | User guide generation | Sphinx configuration | Required | -| Configuration | Build parameter control | pyproject.toml, noxfile.py | Required | - -### 5.2 External Dependencies - -| Dependency | Min Version | Purpose | Optional/Required | Fallback Behavior | -| ------------------ | ----------- | ----------------------------- | ----------------- | ---------------------- | -| GitHub Actions | N/A | CI/CD execution platform | Required | Local testing with Act | -| UV Package Manager | Latest | Python environment management | Required | Fallback to pip/venv | -| Docker | 20.10+ | Container image building | Required | Skip container builds | -| Ruff | 0.1.0+ | Code formatting and linting | Required | Pipeline failure | -| MyPy | 1.0+ | Static type checking | Required | Pipeline failure | -| pytest | 7.0+ | Test execution framework | Required | Pipeline failure | -| Nox | 2023.4+ | Session management | Required | Direct tool execution | - -### 5.3 Integration Points - -- **Aignostics Platform API**: Authentication and service integration for E2E tests -- **PyPI**: Package publishing and distribution -- **Docker Hub/GHCR**: Container image registry publication -- **Read The Docs**: Automated documentation deployment -- **Codecov**: Test coverage reporting and analysis -- **SonarQube**: Code quality and security analysis -- **Slack**: Release notifications and alerts - ---- - -## 6. Configuration and Settings - -### 6.1 Configuration Parameters - -| Parameter | Type | Default | Description | Required | -| ---------------------- | ------ | ---------------------------- | ---------------------------------- | -------- | -| `TEST_PYTHON_VERSIONS` | List | ["3.11", "3.12", "3.13"] | Python versions for testing | Yes | -| `PYTHON_VERSION` | String | "3.13" | Default Python version | Yes | -| `API_VERSIONS` | List | ["v1"] | API versions to test | Yes | -| `JUNIT_XML_PREFIX` | String | "--junitxml=reports/junit\_" | Test report file prefix | Yes | -| `UTF8` | String | "utf-8" | Default encoding | Yes | -| `LATEXMK_VERSION_MIN` | Float | 4.86 | Minimum LaTeX version for PDF docs | No | - -### 6.2 Environment Variables - -| Variable | Purpose | Example Value | -| ----------------------------- | ------------------------------ | --------------------- | -| `GITHUB_TOKEN` | GitHub API authentication | `ghp_xxxxxxxxxxxx` | -| `UV_PUBLISH_TOKEN` | PyPI publishing authentication | `pypi-xxxxxxx` | -| `DOCKER_USERNAME` | Docker Hub authentication | `username` | -| `DOCKER_PASSWORD` | Docker Hub token | `dckr_pat_xxxxx` | -| `CODECOV_TOKEN` | Coverage reporting | `xxxxxxxx-xxxx-xxxx` | -| `SONAR_TOKEN` | SonarQube authentication | `squ_xxxxxxxx` | -| `AIGNOSTICS_CLIENT_ID_DEVICE` | Platform API testing | `client_id_value` | -| `AIGNOSTICS_REFRESH_TOKEN` | Platform API testing | `refresh_token_value` | -| `GCP_CREDENTIALS` | Google Cloud authentication | `base64_encoded_json` | - ---- - -## 7. 
Error Handling and Validation - -### 7.1 Error Categories - -| Error Type | Cause | Handling Strategy | User Impact | -| -------------------------- | ---------------------------------------------- | ---------------------------------------- | ----------------------------------------- | -| `QualityGateFailure` | Linting, type checking, or test failures | Fail pipeline, provide detailed reports | Development blocked until fixed | -| `SecurityVulnerability` | Critical vulnerabilities in dependencies | Fail pipeline, generate security report | Security review required | -| `AuthenticationFailure` | Invalid credentials for external services | Fail deployment steps only | Development continues, deployment blocked | -| `PlatformUnavailability` | GitHub Actions runner or external service down | Retry with backoff, mark as experimental | Temporary delays, alternative paths used | -| `CoverageThresholdFailure` | Test coverage below 85% requirement | Fail pipeline, highlight uncovered code | Code quality improvement required | - -### 7.2 Input Validation - -- **Version Tags**: Must match semantic versioning pattern `v*.*.*` -- **Commit Messages**: Validated for conventional commits format -- **Environment Variables**: Presence and format validation for required secrets -- **Python Code**: Syntax validation through AST parsing before execution -- **Docker Configuration**: Dockerfile and compose file validation - -### 7.3 Graceful Degradation - -- **When GitHub Actions runners unavailable**: Local testing with Act recommended -- **When external registries unreachable**: Build continues, deployment skipped with warning -- **When optional tools missing**: Core functionality preserved, enhanced features disabled -- **When experimental platforms fail**: Pipeline continues with warnings, main platforms must succeed - ---- - -## 8. Security Considerations - -### 8.1 Data Protection - -- **Authentication**: GitHub OIDC tokens for secure service-to-service communication -- **Secret Management**: All credentials stored in GitHub Secrets with restricted access -- **Access Control**: Repository permissions control workflow execution and secret access -- **Audit Logging**: All pipeline executions logged with full traceability - -### 8.2 Security Measures - -- **Input Sanitization**: All external inputs validated and sanitized before use -- **Dependency Scanning**: pip-audit integration for vulnerability detection in dependencies -- **Secret Detection**: detect-secrets pre-commit hook prevents credential leakage -- **Container Security**: Multi-stage docker builds with minimal base images, non-root execution -- **Supply Chain Security**: SBOM generation in CycloneDX and SPDX formats, dependency vulnerability tracking -- **Code Analysis**: GitHub CodeQL and SonarQube integration for security analysis - ---- - -## 9. 
Implementation Details - -### 9.1 Key Algorithms and Business Logic - -- **Caching Strategy**: UV dependency caching with lock file-based invalidation for optimal performance -- **Parallel Execution**: Test parallelization using pytest-xdist with intelligent load balancing -- **Version Management**: Semantic versioning with bump-my-version for automated releases -- **Artifact Collection**: Systematic gathering of build outputs, test results, and compliance reports - -### 9.2 State Management and Data Flow - -- **State Type**: Stateless pipelines with artifact-based state passing between jobs -- **Data Persistence**: Artifacts stored in GitHub Actions with configurable retention -- **Session Management**: Isolated environments per job with clean setup/teardown -- **Cache Strategy**: Multi-level caching (UV dependencies, Docker layers, build artifacts) - -### 9.3 Performance and Scalability Considerations - -- **Performance Characteristics**: - - Matrix testing: Parallel execution across 6+ platforms - - Caching: UV dependency and Docker layer caching implemented - - Optimization: Efficient resource usage through UV package manager and container builds -- **Scalability Patterns**: Horizontal scaling through GitHub Actions matrix builds -- **Resource Management**: Memory and CPU optimization through UV and container limits -- **Concurrency Model**: Parallel job execution with dependency-based ordering - ---- - -## Documentation Maintenance - -### Verification and Updates - -**Last Verified**: 2025-09-11 when spec was created and accuracy-verified against implementation -**Verification Method**: Comprehensive analysis of .github/workflows/, Makefile, noxfile.py, pyproject.toml, and actual workflow configurations -**Next Review Date**: 2025-12-11 (quarterly review cycle) - -### Change Management - -**Interface Changes**: Changes to workflow interfaces or Make commands require spec updates and version bumps -**Implementation Changes**: Internal workflow changes don't require spec updates unless behavior changes -**Dependency Changes**: Major tool updates (UV, Ruff, pytest) should be reflected in constraints section - -### References - -**Implementation**: See `.github/workflows/`, `Makefile`, `noxfile.py` for current implementation -**Tests**: See `tests/aignostics/` for build system integration tests -**Documentation**: See `CONTRIBUTING.md` and `OPERATIONAL_EXCELLENCE.md` for detailed usage examples diff --git a/specifications/SPEC-DATASET-SERVICE.md b/specifications/SPEC-DATASET-SERVICE.md deleted file mode 100644 index 69f0a0f0..00000000 --- a/specifications/SPEC-DATASET-SERVICE.md +++ /dev/null @@ -1,374 +0,0 @@ ---- -itemId: SPEC-DATASET-SERVICE -itemTitle: Dataset Module Specification -itemType: Software Item Spec -itemFulfills: SHR-DATASET-1, SWR-DATASET-1-1, SWR-DATASET-1-2, SWR-DATASET-1-3, SWR-APPLICATION-2-10 -Module: Dataset -Layer: Domain Service -Version: 0.2.105 -Date: 2025-09-11 ---- - -## 1. Description - -### 1.1 Purpose - -The Dataset Module provides functionality for downloading and managing medical imaging datasets from external sources, specifically the National Cancer Institute's Image Data Commons (IDC) Portal and Aignostics proprietary datasets. It enables users to discover, query, and download DICOM datasets with progress tracking and integration with both command-line and web-based interfaces. 
- -### 1.2 Functional Requirements - -The Dataset Module shall: - -- **[FR-01]** Enable discovery and browsing of IDC Portal datasets through web portal integration -- **[FR-02]** Support SQL-based querying of IDC metadata indices for dataset discovery -- **[FR-03]** Download DICOM datasets using hierarchical identifier matching (collection, patient, study, series, instance) -- **[FR-04]** Provide configurable directory layout templates for organized dataset storage -- **[FR-05]** Support Aignostics proprietary dataset downloads via signed URLs -- **[FR-06]** Implement progress tracking for download operations with real-time updates -- **[FR-07]** Provide both CLI and web-based interfaces for dataset operations -- **[FR-08]** Support dry-run operations for validation before actual downloads - -### 1.3 Non-Functional Requirements - -- **Performance**: Handle large DICOM dataset downloads through subprocess isolation to maintain UI responsiveness -- **Security**: Signed URL generation for Aignostics datasets, secure credential handling, process isolation -- **Reliability**: Process lifecycle management with automatic cleanup, graceful error handling with retry mechanisms -- **Usability**: Web interface with file picker integration, CLI with rich console output and progress indicators -- **Scalability**: Support concurrent download operations, efficient memory usage for large datasets - -### 1.4 Constraints and Limitations - -- Requires external IDC Portal services and metadata availability -- Download operations run in isolated subprocesses for UI responsiveness -- All operations require internet connectivity -- Downloaded DICOM datasets can be large, requiring adequate local storage -- Path length limitations on Windows systems (260 characters) -- Dependencies on external IDC index data updates - ---- - -## 2. Architecture and Design - -### 2.1 Module Structure - -``` -dataset/ -├── _service.py # Core business logic and service implementation -├── _cli.py # Command-line interface implementation -├── _gui.py # Web-based GUI components -├── assets/ # Static assets for web interface -│ └── NIH-IDC-logo.svg -└── __init__.py # Module exports and public API -``` - -### 2.2 Key Components - -| Component | Type | Purpose | Public Interface | Dependencies | -| ------------- | ----------- | ------------------------------------------- | ------------------------------------- | ---------------- | -| `Service` | Class | Core dataset operations and subprocess mgmt | `download_with_queue()`, health, info | Platform, Utils | -| `cli` | Typer CLI | Command-line interface for all operations | `idc`, `aignostics` commands | Service, Typer | -| `PageBuilder` | Class | Web interface for interactive dataset mgmt | Page registration | NiceGUI, Service | -| `IDCClient` | Third-party | Modified IDC client for portal integration | Query and download methods | idc-index-data | - -_Note: For detailed implementation, refer to the source code in the module directory._ - -### 2.3 Design Patterns - -- **Service Layer Pattern**: Business logic encapsulated in Service class with clear separation from presentation layers -- **Subprocess Isolation**: Download operations run in separate processes for UI responsiveness and better resource management -- **Progress Observer Pattern**: Queue-based progress communication between main and subprocess -- **Command Pattern**: CLI commands structured as discrete operations with consistent interfaces - ---- - -## 3. 
Inputs and Outputs - -### 3.1 Inputs - -| Input Type | Source | Data Type/Format | Validation Rules | Business Rules | -| ------------------- | ------------- | ---------------- | --------------------------------------- | ------------------------------- | -| Dataset Identifiers | CLI/GUI/API | String/CSV | Non-empty, comma-separated valid UIDs | Must match IDC hierarchy levels | -| Target Directory | CLI/GUI/API | Path | Existing directory, write permissions | Adequate storage space required | -| Layout Template | Configuration | String | Valid template with supported variables | Default template available | -| SQL Query | CLI/API | String | Valid SQL syntax for IDC indices | Must target available indices | -| Aignostics URL | CLI/GUI | URL | Valid gs:// or https:// protocol | Must be authorized resource | - -### 3.2 Outputs - -| Output Type | Destination | Data Type/Format | Success Criteria | Error Conditions | -| ---------------- | ---------------- | --------------------- | ------------------------------- | ----------------------------- | -| Downloaded DICOM | Local Filesystem | DICOM Files | All files downloaded intact | Network, permissions, space | -| Query Results | CLI/Console | Pandas DataFrame | Valid result set returned | Invalid query, service down | -| Progress Updates | GUI/Queue | Progress Values (0-1) | Continuous progress updates | Subprocess communication fail | -| IDC Metadata | CLI/Console | JSON/Table format | Metadata successfully retrieved | IDC service unavailable | -| Operation Status | Logs/Console | Structured Logs | Operation completion logged | Process errors logged | - -### 3.3 Data Schemas - -**Dataset Identifier Schema:** - -```yaml -DatasetIdentifier: - type: object - properties: - collection_id: - type: string - description: IDC collection identifier - pattern: "^[a-zA-Z0-9_-]+$" - patient_id: - type: string - description: DICOM Patient ID - pattern: "^[0-9.]+$" - study_instance_uid: - type: string - description: DICOM Study Instance UID - pattern: "^[0-9.]+$" - series_instance_uid: - type: string - description: DICOM Series Instance UID - pattern: "^[0-9.]+$" - sop_instance_uid: - type: string - description: DICOM SOP Instance UID - pattern: "^[0-9.]+$" -``` - -**Download Progress Schema:** - -```yaml -DownloadProgress: - type: object - properties: - progress: - type: number - minimum: 0.0 - maximum: 1.0 - description: Progress as decimal (0.0-1.0) - status: - type: string - enum: ["initializing", "downloading", "completed", "error"] - description: Current operation status - message: - type: string - description: Status message for user display -``` - -### 3.4 Data Flow - -```mermaid -graph LR - A[User Input] --> B[Validation Layer] --> C[Service Layer] - C --> D[IDC Client] --> E[External IDC Portal] - C --> F[Subprocess Manager] --> G[Download Process] - G --> H[Progress Queue] --> I[UI Updates] - G --> J[Local Storage] - K[Platform Service] --> L[Signed URLs] --> G -``` - ---- - -## 4. 
Interface Definitions - -### 4.1 Public API - -#### Core Service Interface - -**Service Class**: `Service` - -- **Purpose**: Core dataset operations with subprocess management and progress tracking -- **Key Methods**: - - `info(mask_secrets: bool = True) -> dict[str, Any]`: Service information and configuration - - `health() -> Health`: Service health status - - `download_with_queue(queue, source, target, target_layout, dry_run) -> None`: Download with progress tracking - -**Input/Output Contracts**: - -- **Input Types**: Dataset identifiers (string/CSV), target paths (Path), layout templates (string) -- **Output Types**: Health status, progress updates via queue, downloaded DICOM files -- **Error Conditions**: `ValueError` for invalid inputs, network errors for IDC service issues - -_Note: For detailed method signatures, refer to the module's `__init__.py` and service class documentation._ - -### 4.2 CLI Interface - -**Command Structure:** - -```bash -uvx aignostics dataset [subcommand] [options] -``` - -**Available Commands:** - -| Command | Purpose | Input Requirements | Output Format | -| ---------------------------------- | ------------------------------ | ----------------------------- | ------------------- | -| `idc browse` | Open IDC portal in browser | None | Browser navigation | -| `idc indices` | List available IDC indices | None | Console list | -| `idc columns [index]` | List columns in specific index | Optional index name | Console list | -| `idc query [sql]` | Execute SQL query on IDC data | SQL query string | Pandas DataFrame | -| `idc download [target]` | Download dataset from IDC | Dataset IDs, target directory | Progress indicators | -| `aignostics download [dest]` | Download Aignostics dataset | Signed URL, destination | Progress indicators | - -**Common Options:** - -- `--help`: Display command help -- `--target-layout`: Directory layout template for downloads -- `--dry-run`: Validate without actual download -- `--indices`: Additional indices to sync for queries - -### 4.3 Web Interface - -**Endpoint Structure:** - -| Route | Purpose | Components | User Interactions | -| -------------- | ---------------------------- | --------------------------------- | ----------------------- | -| `/dataset/idc` | Interactive dataset download | ID input, folder picker, progress | Select, download, track | - -**Key Features**: - -- Dataset ID input with validation -- File picker for target directory selection -- Real-time progress tracking with visual indicators -- Integration with IDC portal for dataset discovery - ---- - -## 5. 
Dependencies and Integration - -### 5.1 Internal Dependencies - -| Dependency Module | Usage Purpose | Interface/Contract Used | Criticality | -| ----------------- | ------------------------------ | -------------------------- | ----------- | -| Platform Service | Signed URL generation | `generate_signed_url()` | Required | -| Utils Module | Base services, logging, health | `BaseService`, `Health` | Required | -| GUI Module | Web interface framework | `BasePageBuilder`, routing | Optional | - -### 5.2 External Dependencies - -| Dependency | Min Version | Purpose | Optional/Required | Fallback Behavior | -| -------------- | ----------- | ----------------------------- | ----------------- | ----------------- | -| idc-index-data | ==21.0.0 | IDC metadata and index access | Required | Service fails | -| pandas | <=2.3.1 | DataFrame operations | Required | Service fails | -| requests | >=2.32.3 | HTTP client for downloads | Required | Service fails | -| typer | Latest | CLI framework | Required | CLI unavailable | -| nicegui | Latest | Web GUI framework | Optional | GUI unavailable | - -_Note: For exact version requirements, refer to `pyproject.toml` and dependency lock files._ - -### 5.3 Integration Points - -- **IDC Portal Services**: RESTful APIs for metadata access and DICOM downloads -- **Aignostics Platform API**: Authentication and signed URL generation -- **Local File System**: DICOM file storage with configurable layouts - ---- - -## 6. Configuration and Settings - -### 6.1 Configuration Parameters - -| Parameter | Type | Default | Description | Required | -| -------------------- | ---- | ------------------------------------------------ | ----------------------------- | -------- | -| `target_layout` | str | `%collection_id/%PatientID/%StudyInstanceUID/` | Directory layout template | No | -| `portal_url` | str | `https://portal.imaging.datacommons.cancer.gov/` | IDC portal base URL | No | -| `example_dataset_id` | str | `1.3.6.1.4.1.5962.99.1.1069745200...` | Example dataset for testing | No | -| `path_length_max` | int | 260 | Maximum path length (Windows) | No | - -### 6.2 Environment Variables - -| Variable | Purpose | Example Value | -| --------------------- | ---------------------- | ----------------------------- | -| `AIGNOSTICS_DATA_DIR` | Default data directory | `/Users/user/data/aignostics` | -| `IDC_CLIENT_TIMEOUT` | IDC client timeout | `60` | -| `DOWNLOAD_CHUNK_SIZE` | Download chunk size | `8192` | - ---- - -## 7. 
Error Handling and Validation - -### 7.1 Error Categories - -| Error Type | Cause | Handling Strategy | User Impact | -| ----------------- | ----------------------------- | ------------------------------ | ----------------------------- | -| `ValueError` | Invalid identifiers or paths | Input validation with feedback | Clear error message displayed | -| `NetworkError` | IDC service unavailable | Retry with user notification | Graceful degradation | -| `ProcessError` | Subprocess failure | Cleanup and error logging | Progress tracking stops | -| `PermissionError` | Insufficient file permissions | Path validation | Alternative path suggested | - -### 7.2 Input Validation - -- **Dataset Identifiers**: Validated against IDC metadata indices using hierarchical matching -- **Target Directories**: Existence, write permissions, and available space checks -- **SQL Queries**: Basic syntax validation for IDC metadata querying -- **URLs**: Protocol validation (gs://, https://) for Aignostics dataset URLs - -### 7.3 Graceful Degradation - -- **When IDC service is unavailable**: Cache last known indices, show offline mode message -- **When GUI dependencies missing**: Fall back to CLI-only mode -- **When subprocess fails**: Clean up resources, log detailed error information - ---- - -## 8. Security Considerations - -### 8.1 Data Protection - -- **Authentication**: Signed URL authentication for Aignostics datasets using platform service -- **Data Encryption**: HTTPS for all external communications, no local encryption of DICOM files -- **Access Control**: Process isolation through subprocess architecture, file system permissions - -### 8.2 Security Measures - -- **Input Sanitization**: All file paths and identifiers validated against known patterns -- **Process Management**: Automatic cleanup of subprocesses on exit with graceful termination -- **Audit Logging**: All operations logged with timestamps and user context -- **Secret Management**: No secrets stored in code, signed URLs have expiration times - ---- - -## 9. 
Implementation Details - -### 9.1 Key Algorithms and Business Logic - -- **Progress Monitoring**: Regex pattern matching on subprocess stderr for real-time progress updates -- **Hierarchical Identifier Matching**: Multi-level DICOM hierarchy matching (collection → patient → study → series → instance) -- **Process Lifecycle Management**: Graceful termination with timeout and force-kill fallback -- **Directory Layout Templating**: Variable substitution system for flexible file organization - -### 9.2 State Management and Data Flow - -- **State Type**: Stateless service with transient subprocess state -- **Data Persistence**: No persistent state, downloads to specified local directories -- **Session Management**: Process-based sessions for download operations -- **Cache Strategy**: IDC indices cached temporarily during session - -### 9.3 Performance and Scalability Considerations - -- **Performance Characteristics**: Subprocess isolation prevents UI blocking, concurrent downloads supported -- **Scalability Patterns**: Process-per-download model scales with system resources -- **Resource Management**: Memory-efficient streaming downloads, automatic process cleanup -- **Concurrency Model**: Thread-safe queue communication, daemon threads for monitoring - ---- - -## Documentation Maintenance - -### Verification and Updates - -**Last Verified**: September 11, 2025 -**Verification Method**: Code review against implementation in `src/aignostics/dataset/` -**Next Review Date**: October 11, 2025 - -### Change Management - -**Interface Changes**: Changes to public APIs require spec updates and version bumps -**Implementation Changes**: Internal changes don't require spec updates unless behavior changes -**Dependency Changes**: Major dependency changes should be reflected in constraints section - -### References - -**Implementation**: See `src/aignostics/dataset/` for current implementation -**Tests**: See `tests/aignostics/dataset/` for usage examples and verification -**API Documentation**: Auto-generated from docstrings in service classes - ---- diff --git a/specifications/SPEC-MODULE-SERVICE-TEMPLATE.md b/specifications/SPEC-MODULE-SERVICE-TEMPLATE.md deleted file mode 100644 index 2d3e6b51..00000000 --- a/specifications/SPEC-MODULE-SERVICE-TEMPLATE.md +++ /dev/null @@ -1,402 +0,0 @@ ---- -itemId: SPEC-[MODULE_NAME_UPPER]-SERVICE -itemTitle: [MODULE_NAME] Module Specification -itemType: Software Item Spec -itemFulfills: [REQUIREMENT_ID] _(e.g., FE-6386)_ -Module: [Module Name] _(e.g., Bucket, Application, Dataset)_ -Layer: [Layer Type] _(Domain Service, Platform Service, Infrastructure Service, Presentation Interface)_ -Version: [VERSION] _(e.g., 0.2.105)_ -Date: [DATE] ---- - -## Documentation Guidelines [DO NOT ADD] - -### Code in Specifications - Best Practices - -**INCLUDE Code When:** - -- ✅ Public API signatures (stable contracts) -- ✅ Data structure schemas for inputs/outputs -- ✅ Configuration parameter definitions -- ✅ Error type hierarchies - -**AVOID Code When:** - -- ❌ Internal implementation details -- ❌ Private methods or functions -- ❌ Complete code blocks or algorithms -- ❌ Version-specific dependency details - -**Preferred Approaches:** - -- 📋 Reference interfaces by name and purpose -- 📋 Use schemas (JSON Schema, OpenAPI) for data structures -- 📋 Link to auto-generated documentation for details -- 📋 Focus on behavior and contracts, not implementation - ---- - -## 1. Description - -### 1.1 Purpose - -_[Describe the primary purpose and scope of this software module. 
What business functionality does it provide? What problems does it solve?]_ - -**Example:** The [Module Name] Module provides [core functionality description] for the Aignostics Python SDK. It enables [key capabilities] and serves as [role in overall architecture]. - -### 1.2 Functional Requirements - -_[List the specific functional capabilities this module must provide]_ - -The [Module Name] Module shall: - -- **[FR-01]** [Functional requirement description] -- **[FR-02]** [Functional requirement description] -- **[FR-03]** [Functional requirement description] - -### 1.3 Non-Functional Requirements - -_[Specify performance, security, usability, and reliability requirements]_ - -- **Performance**: [Performance requirements and constraints] -- **Security**: [Security requirements, data protection, authentication] -- **Reliability**: [Availability, error handling, recovery requirements] -- **Usability**: [User interface requirements, accessibility] -- **Scalability**: [Volume, concurrency, resource requirements] - -### 1.4 Constraints and Limitations - -_[Document any technical or business constraints]_ - -- [Constraint 1: Description and impact] -- [Constraint 2: Description and impact] - ---- - -## 2. Architecture and Design - -### 2.1 Module Structure - -_[Describe the internal organization of the module]_ - -``` -[module_name]/ -├── _service.py # Core business logic and service implementation -├── _cli.py # Command-line interface (if applicable) -├── _gui/ # Web-based GUI components (if applicable) -│ ├── __init__.py -│ └── [gui_files].py -├── _settings.py # Module-specific configuration -├── _utils.py # Helper functions and utilities -└── __init__.py # Module exports and public API -``` - -### 2.2 Key Components - -_[List and describe the main classes, functions, and interfaces, focusing on purpose not implementation]_ - -| Component | Type | Purpose | Public Interface | Dependencies | -| -------------- | -------------- | --------------------- | ------------------ | ------------ | -| `[Component1]` | Class/Function | [Purpose description] | [Key capabilities] | [Major deps] | -| `[Component2]` | Class/Function | [Purpose description] | [Key capabilities] | [Major deps] | -| `[Component3]` | Class/Function | [Purpose description] | [Key capabilities] | [Major deps] | - -_Note: For detailed implementation, refer to the source code in the module directory._ - -### 2.3 Design Patterns - -_[Identify architectural patterns used in this module]_ - -- **[Pattern Name]**: [How it's applied and why] -- **Dependency Injection**: [How DI is used for this module] -- **Service Layer Pattern**: [How business logic is encapsulated] - ---- - -## 3. 
Inputs and Outputs - -### 3.1 Inputs - -_[Define what data/parameters the module accepts, focusing on contracts not implementation]_ - -| Input Type | Source | Data Type/Format | Validation Rules | Business Rules | -| ---------- | ------------- | ---------------- | ------------------------ | ---------------- | -| [Input1] | [CLI/GUI/API] | [Schema/Format] | [Validation description] | [Business logic] | -| [Input2] | [CLI/GUI/API] | [Schema/Format] | [Validation description] | [Business logic] | -| [Input3] | [CLI/GUI/API] | [Schema/Format] | [Validation description] | [Business logic] | - -### 3.2 Outputs - -_[Define what data/responses the module produces, focusing on contracts not implementation]_ - -| Output Type | Destination | Data Type/Format | Success Criteria | Error Conditions | -| ----------- | --------------- | ---------------- | -------------------- | ---------------- | -| [Output1] | [Target system] | [Schema/Format] | [Success definition] | [Error cases] | -| [Output2] | [Target system] | [Schema/Format] | [Success definition] | [Error cases] | -| [Output3] | [Target system] | [Schema/Format] | [Success definition] | [Error cases] | - -### 3.3 Data Schemas - -_[Define data structures using schemas rather than code snippets]_ - -**Input Data Schema:** - -```yaml -# Example using YAML schema format -InputType1: - type: object - properties: - field1: - type: string - description: [Field description] - validation: [Validation rules] - field2: - type: integer - minimum: 0 - description: [Field description] - required: [field1] -``` - -**Output Data Schema:** - -```yaml -# Example using YAML schema format -OutputType1: - type: object - properties: - result: - type: string - description: [Result description] - metadata: - type: object - description: [Metadata structure] -``` - -_Note: Actual schemas may be defined in OpenAPI specifications or JSON Schema files._ - -### 3.4 Data Flow - -_[Describe the flow of data through the module]_ - -```mermaid -graph LR - A[Input Source] --> B[Module Processing] --> C[Output Destination] - B --> D[External Service Integration] - E[Configuration] --> B -``` - ---- - -## 4. Interface Definitions - -### 4.1 Public API - -_[Document the main public interfaces that other modules or external systems use. 
Focus on contracts, not implementation.]_ - -#### Core Service Interface - -**Service Class**: `[ModuleName]Service` - -- **Purpose**: [Brief description of the service's responsibility] -- **Key Methods**: - - `[method1](param1: Type1, param2: Type2) -> ReturnType`: [Method purpose and behavior] - - `[method2](param: Type) -> ReturnType`: [Method purpose and behavior] - -**Input/Output Contracts**: - -- **Input Types**: [List expected input data types and validation rules] -- **Output Types**: [List return data types and success criteria] -- **Error Conditions**: [List exception types and when they occur] - -_Note: For detailed method signatures, refer to the module's `__init__.py` and service class documentation._ - -### 4.2 CLI Interface (if applicable) - -_[Document command-line interface specifications focusing on behavior, not implementation]_ - -**Command Structure:** - -```bash -uvx aignostics [module-name] [subcommand] [options] -``` - -**Available Commands:** - -| Command | Purpose | Input Requirements | Output Format | -| ------------ | ----------------------------- | ----------------------------- | -------------------- | -| `[command1]` | [Description of what it does] | [Required parameters/options] | [Output description] | -| `[command2]` | [Description of what it does] | [Required parameters/options] | [Output description] | - -**Common Options:** - -- `--help`: Display command help -- `--verbose`: Enable detailed output -- `[other-options]`: [Description] - -### 4.3 HTTP/Web Interface (if applicable) - -_[Document web interface specifications]_ - -**Endpoint Structure:** - -| Method | Endpoint | Purpose | Request Format | Response Format | -| ------ | -------------- | ------------- | ------------------- | --------------- | -| `GET` | `/[endpoint1]` | [Description] | [Query params/body] | [Response type] | -| `POST` | `/[endpoint2]` | [Description] | [Query params/body] | [Response type] | - -**Authentication**: [Authentication requirements, if any] -**Error Responses**: [Standard error response format] - ---- - -## 5. Dependencies and Integration - -### 5.1 Internal Dependencies - -_[List dependencies on other SDK modules, focusing on interfaces used not implementation details]_ - -| Dependency Module | Usage Purpose | Interface/Contract Used | Criticality | -| ----------------- | ------------------- | ---------------------------- | ------------------- | -| Platform Service | [Usage description] | [Interface name/description] | [Required/Optional] | -| Utils Module | [Usage description] | [Interface name/description] | [Required/Optional] | -| [Other Module] | [Usage description] | [Interface name/description] | [Required/Optional] | - -### 5.2 External Dependencies - -_[List third-party libraries and external services, focusing on purpose not versions]_ - -| Dependency | Min Version | Purpose | Optional/Required | Fallback Behavior | -| ------------------ | ----------- | --------- | ------------------- | ----------------- | -| [Library1] | [Min Ver] | [Purpose] | [Required/Optional] | [If unavailable] | -| [External Service] | [API Ver] | [Purpose] | [Required/Optional] | [If unavailable] | - -_Note: For exact version requirements, refer to `pyproject.toml` and dependency lock files._ - -### 5.3 Integration Points - -_[Describe how this module integrates with external systems]_ - -- **Aignostics Platform API**: [Integration details] -- **Cloud Storage Services**: [Integration details] -- **Third-party Tools**: [Integration details] - ---- - -## 6. 
Configuration and Settings - -### 6.1 Configuration Parameters - -_[Document all configurable settings]_ - -| Parameter | Type | Default | Description | Required | -| ------------ | ------ | --------- | ------------- | -------- | -| `[setting1]` | [Type] | [Default] | [Description] | [Yes/No] | -| `[setting2]` | [Type] | [Default] | [Description] | [Yes/No] | - -### 6.2 Environment Variables - -_[List environment variables used by this module]_ - -| Variable | Purpose | Example Value | -| ------------ | --------- | ----------------- | -| `[ENV_VAR1]` | [Purpose] | `[example_value]` | -| `[ENV_VAR2]` | [Purpose] | `[example_value]` | - ---- - -## 7. Error Handling and Validation - -### 7.1 Error Categories - -_[Define types of errors this module can encounter and how they're handled]_ - -| Error Type | Cause | Handling Strategy | User Impact | -| -------------- | ------------------- | ------------------ | ----------------- | -| `[ErrorType1]` | [Cause description] | [How it's handled] | [User experience] | -| `[ErrorType2]` | [Cause description] | [How it's handled] | [User experience] | - -### 7.2 Input Validation - -_[Specify validation rules for all inputs]_ - -- **[Input Type]**: [Validation rules and error responses] -- **[Input Type]**: [Validation rules and error responses] - -### 7.3 Graceful Degradation - -_[Describe behavior when dependencies are unavailable]_ - -- **When [dependency] is unavailable**: [Fallback behavior] -- **When [external service] is unreachable**: [Fallback behavior] - ---- - -## 8. Security Considerations - -### 8.1 Data Protection - -_[Describe how sensitive data is handled]_ - -- **Authentication**: [How authentication is managed] -- **Data Encryption**: [In-transit and at-rest encryption] -- **Access Control**: [Permission and authorization mechanisms] - -### 8.2 Security Measures [Optional] - -_[List specific security implementations]_ - -- **Input Sanitization**: [How inputs are validated and sanitized] -- **Secret Management**: [How API keys and secrets are handled] -- **Audit Logging**: [What security events are logged] - ---- - -## 9. Implementation Details - -### 9.1 Key Algorithms and Business Logic - -_[Describe significant algorithms or processing logic at a conceptual level]_ - -- **[Algorithm1]**: [Purpose and high-level approach, not implementation details] -- **[Algorithm2]**: [Purpose and high-level approach, not implementation details] -- **[Business Rule]**: [Important business logic or processing rules] - -### 9.2 State Management and Data Flow - -_[Describe how the module manages state and data flow patterns]_ - -- **State Type**: [Stateless/Stateful and why] -- **Data Persistence**: [How data is stored and managed] -- **Session Management**: [How user sessions or context is handled] -- **Cache Strategy**: [Caching approach, if applicable] - -### 9.3 Performance and Scalability Considerations - -_[Describe performance characteristics and scalability approaches]_ - -- **Performance Characteristics**: [Expected performance behavior] -- **Scalability Patterns**: [How the module scales with load] -- **Resource Management**: [Memory, CPU, I/O considerations] -- **Concurrency Model**: [Thread safety, async patterns] - ---- - -## Documentation Maintenance - -### Verification and Updates - -**Last Verified**: [Date when spec was verified against implementation] -**Verification Method**: [How accuracy was confirmed - code review, testing, etc.] 
-**Next Review Date**: [When this spec should be reviewed again] - -### Change Management - -**Interface Changes**: Changes to public APIs require spec updates and version bumps -**Implementation Changes**: Internal changes don't require spec updates unless behavior changes -**Dependency Changes**: Major dependency changes should be reflected in constraints section - -### References - -**Implementation**: See `src/aignostics/[module_name]/` for current implementation -**Tests**: See `tests/aignostics/[module_name]/` for usage examples and verification -**API Documentation**: [Link to auto-generated API docs if available] diff --git a/specifications/SPEC-QUPATH-SERVICE.md b/specifications/SPEC-QUPATH-SERVICE.md deleted file mode 100644 index 4e5c4fba..00000000 --- a/specifications/SPEC-QUPATH-SERVICE.md +++ /dev/null @@ -1,411 +0,0 @@ ---- -itemId: SPEC-QUPATH-SERVICE -itemTitle: QuPath Module Specification -itemType: Software Item Spec -itemFulfills: SWR-VISUALIZATION-1-1, SWR-VISUALIZATION-1-2, SWR-VISUALIZATION-1-3, SHR-APPLICATION-2, SWR-VISUALIZATION-1-4 -Module: QuPath -Layer: Domain Service -Version: 0.2.105 -Date: 2025-09-03 ---- - -## 1. Description - -### 1.1 Purpose - -The QuPath Module provides comprehensive integration between the Aignostics Python SDK and QuPath, an open-source software platform for bioimage analysis. It enables seamless management of QuPath installations, automated project creation from Platform AI results, and batch processing of whole slide image annotations. - -The module uses QuPath programmatically to process Model Run Results obtained from the Aignostics Platform API, transforming AI model outputs into complete QuPath projects with the correct images, annotations, GeoJSON files, and all artifacts generated by the AI models. This allows users to visualize and further analyze AI-generated results in QuPath's rich interactive environment. - -This module serves as the primary bridge between Aignostics Platform analysis results and the QuPath ecosystem for visualization and further analysis. 
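To make the contract concrete, here is a minimal sketch of that bridge using the service interface documented in Section 4.1. The import path follows the module layout in Section 2.1; the run folder and file names are purely illustrative.

```python
from pathlib import Path

from aignostics.qupath import Service  # public API re-exported via __init__.py

# Illustrative paths only; real inputs come from a Platform application run download.
project = Path("runs/run-123/qupath-project")
slides = [Path("runs/run-123/slide-1.svs"), Path("runs/run-123/slide-2.tiff")]
annotations = Path("runs/run-123/slide-1.geojson")

Service.install_qupath()  # download, verify, and install the managed QuPath version
added = Service.add(project=project, paths=slides)  # creates the project if needed
imported = Service.annotate(project=project, image=slides[0], annotations=annotations)
print(Service.inspect(project))  # QuPathProject model with images and metadata
print(f"{added} image(s) added, {imported} annotation(s) imported")
```

Opening the resulting project in QuPath then gives interactive access to the imported images and AI-generated annotations.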
- -### 1.2 Functional Requirements - -The QuPath Module shall: - -- **[FR-01]** Automatically download, verify, and install QuPath from GitHub releases with cross-platform support (Linux x64, macOS x64/ARM64, Windows x64) -- **[FR-02]** Maintain version-specific QuPath installations with health monitoring and compatibility validation -- **[FR-03]** Create QuPath projects automatically when adding images, with support for WSI files from various formats (DICOM, SVS, TIFF) and metadata synchronization -- **[FR-04]** Import and process large-scale annotation datasets with batch processing, progress tracking, and error recovery -- **[FR-05]** Convert between Aignostics Platform annotation formats and QuPath-compatible GeoJSON with coordinate system validation -- **[FR-06]** Provide CLI, GUI, and notebook interfaces for scripted workflows, service management, and exploratory analysis -- **[FR-07]** Execute Groovy scripts through QuPath's command-line interface for project automation and introspection -- **[FR-08]** Manage QuPath processes with secure execution, timeout limits, and resource monitoring, including process termination capabilities - -### 1.3 Non-Functional Requirements - -- **Performance**: Support for whole slide images with efficient memory management through streaming processing, chunked downloads (10MB chunks), and concurrent operations with thread-safe implementations -- **Security**: Verify download integrity from GitHub releases using HTTPS with timeout enforcement, proper file system permissions, and process isolation with timeout limits (30s launch, 2h script execution) -- **Reliability**: Automatic error recovery for batch operations, resumable downloads, and graceful degradation when QuPath is unavailable -- **Usability**: Type-safe CLI commands with automatic help generation, progressive download indicators, and comprehensive error messages -- **Scalability**: Handle large annotation datasets (500K+ annotations per batch), parallel image processing, and concurrent QuPath instances - -### 1.4 Constraints and Limitations - -- QuPath ARM64 Linux support: QuPath is not officially supported on ARM64 Linux architecture -- Version dependency: Currently supports QuPath v0.6.0-rc5 with backward compatibility considerations -- Platform isolation: Uses a managed installation approach to avoid conflicts with existing QuPath installations -- Resource constraints: Long-running operations limited by system memory and processing power for large WSI files - ---- - -## 2.
Architecture and Design - -### 2.1 Module Structure - -``` -qupath/ -├── _service.py # Core business logic and QuPath management -├── _cli.py # Command-line interface with Typer integration -├── _gui.py # Web-based GUI components for interactive management -├── _settings.py # Module-specific configuration and defaults -├── scripts/ # Groovy scripts for QuPath automation -│ ├── add.groovy # Project creation and bulk image addition -│ ├── annotate.groovy # GeoJSON annotation import with batch processing -│ ├── inspect.groovy # Project introspection and metadata extraction -│ └── test.groovy # Installation verification and functionality testing -├── assets/ # GUI assets and animations -│ ├── download.lottie # Download progress animation -│ ├── microscope.lottie # Microscope icon animation -│ └── update.lottie # Update progress animation -└── __init__.py # Module exports and public API -``` - -### 2.2 Key Components - -| Component | Type | Purpose | Public API | -| ------------------ | ------ | ------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------- | -| `Service` | Class | Core QuPath management, installation, and project operations | `install_qupath()`, `uninstall_qupath()`, `health()`, `add()`, `annotate()`, `inspect()` | -| `QuPathVersion` | Model | Version information storage and validation | `version`, `build_time`, `commit_tag` properties | -| `QuPathProject` | Model | Project information and image metadata | `uri`, `version`, `images` properties | -| `QuPathImage` | Model | Individual image information in projects | Image metadata, hierarchy, and file path information | -| `AddProgress` | Model | Progress tracking for image addition operations | `status`, `image_count`, `image_index`, `progress_normalized` properties | -| `AnnotateProgress` | Model | Progress tracking for annotation import operations | `status`, `annotation_count`, `annotation_index`, `progress_normalized` properties | -| `CLI` | Module | Command-line interface with Typer commands | `install`, `launch`, `processes`, `terminate`, `uninstall`, `add`, `annotate`, `inspect`, `run-script` commands | -| `GUI` | Module | Web-based interface using GUI framework | Interactive QuPath management dashboard | - -### 2.3 Design Patterns - -- **Service Layer Pattern**: Business logic encapsulated in `Service` class with clear separation from CLI/GUI interfaces -- **Factory Pattern**: Platform-specific QuPath executable and download URL creation based on system detection -- **Command Pattern**: CLI commands implemented as discrete functions with type-safe parameter validation -- **Observer Pattern**: Progress callbacks for download and extraction operations with real-time updates - ---- - -## 3. 
Inputs and Outputs - -### 3.1 Inputs - -| Input Type | Source | Format/Type | Validation Rules | -| ----------------- | ----------- | --------------- | ----------------------------------------------------------------- | -| QuPath Version | CLI/GUI/API | String (semver) | Must match available GitHub releases, defaults to 0.6.0-rc5 | -| Installation Path | CLI/GUI | Path | Must be writable directory, defaults to user data directory | -| Platform Override | CLI | String | Must be valid platform identifier (Windows/Linux/Darwin) | -| WSI File Paths | CLI/API | List[Path] | Files must exist and be readable, supports DICOM/SVS/TIFF formats | -| Annotation Data | CLI/API | GeoJSON/JSON | Must contain valid coordinate data and classification schemas | -| Project Metadata | API | Dict | Key-value pairs for QuPath project properties | - -### 3.2 Outputs - -| Output Type | Destination | Format/Type | Success Criteria | -| ------------------------- | ------------ | ------------------- | ---------------------------------------------------------------- | -| QuPath Installation | File System | Binary Executable | Executable responds to `--version` command successfully | -| QuPath Project | File System | .qpproj File | Project opens in QuPath with all images and annotations loaded | -| Health Status | CLI/GUI/API | Health Model | Contains installation status, version, and application health | -| Progress Updates | CLI/GUI | Progress Callbacks | Real-time download/extraction progress with percentage and speed | -| Error Messages | CLI/GUI/Logs | Structured Messages | Clear error descriptions with actionable resolution steps | -| Image Addition Results | CLI/API | Integer Count | Number of images successfully added to project | -| Annotation Import Results | CLI/API | Integer Count | Number of annotations successfully imported to image | -| Project Information | CLI/API | QuPathProject Model | Complete project metadata including images and hierarchy info | - -### 3.3 Data Schemas - -**QuPath Version Schema:** - -```yaml -QuPathVersion: - type: object - properties: - version: - type: string - description: QuPath version string - pattern: "^[0-9]+\.[0-9]+\.[0-9]+.*$" - build_time: - type: string - description: Build timestamp - format: date-time - commit_tag: - type: string - description: Git commit tag - pattern: "^[a-f0-9]+$" -``` - -**QuPath Project Schema:** - -```yaml -QuPathProject: - type: object - properties: - uri: - type: string - description: Project file URI - format: uri - version: - type: string - description: QuPath version used - images: - type: array - items: - $ref: "#/definitions/QuPathImage" - description: List of images in project -``` - -### 3.4 Data Flow - -```mermaid -graph LR - A[User Request] --> B[CLI/GUI Interface] - B --> C[Service Layer] - C --> D[Platform Detection] - D --> E[GitHub Release API] - E --> F[Download & Extract] - F --> G[QuPath Installation] - C --> H[Groovy Script Execution] - H --> I[QuPath Process] - I --> J[Project/Annotation Output] - K[Configuration] --> C - L[Progress Callbacks] --> B -``` - ---- - -## 4. 
Interface Definitions - -### 4.1 Public API - -#### Core Service Interface - -```python -class Service(BaseService): - """QuPath service for managing installations and projects.""" - - @staticmethod - def install_qupath( - version: str = QUPATH_VERSION, - path: Path | None = None, - reinstall: bool = True, - platform_system: str | None = None, - platform_machine: str | None = None, - download_progress: Callable | None = None, - extract_progress: Callable | None = None, - progress_queue: queue.Queue[InstallProgress] | None = None, - ) -> Path: ... - - @staticmethod - def uninstall_qupath( - version: str | None = None, - path: Path | None = None, - platform_system: str | None = None, - platform_machine: str | None = None, - ) -> bool: ... - - def health(self) -> Health: ... - - @staticmethod - def add( - project: Path, - paths: list[Path], - progress_callable: Callable | None = None, - ) -> int: ... - - @staticmethod - def annotate( - project: Path, - image: Path, - annotations: Path, - progress_callable: Callable | None = None, - ) -> int: ... - - @staticmethod - def inspect(project: Path) -> QuPathProject: ... -``` - -### 4.2 CLI Interface - -**Command Structure:** - -```bash -uvx aignostics qupath [subcommand] [options] -``` - -**Available Commands:** - -- `install`: Download and install QuPath with version and platform options -- `launch`: Launch QuPath application with optional project and script parameters -- `processes`: List running QuPath processes with detailed information -- `terminate`: Terminate all running QuPath processes -- `uninstall`: Remove QuPath installation with platform-specific cleanup -- `add`: Add images to QuPath project (creates project if needed) -- `annotate`: Import GeoJSON annotations to specific image in project -- `inspect`: Examine project structure and metadata -- `run-script`: Execute custom Groovy scripts with QuPath - -**Example Usage:** - -```bash -# Install QuPath (default version) -uvx aignostics qupath install - -# Install specific version with custom path -uvx aignostics qupath install --version 0.6.0-rc5 --path /custom/path - -# Launch QuPath with project -uvx aignostics qupath launch --project ./my_project.qpproj - -# Add images to project (creates if needed) -uvx aignostics qupath add ./my_project.qpproj image1.svs image2.tiff - -# Add annotations to specific image -uvx aignostics qupath annotate ./my_project.qpproj image1.svs annotations.json - -# Inspect project structure -uvx aignostics qupath inspect ./my_project.qpproj - -# List running processes -uvx aignostics qupath processes - -# Terminate all QuPath processes -uvx aignostics qupath terminate -``` - -### 4.3 GUI Interface - -- **Navigation**: Accessible via main SDK GUI under "QuPath" module section -- **Key UI Components**: - - Installation status dashboard with version information - - Progress bars for download/extraction operations with Lottie animations (download.lottie, microscope.lottie, update.lottie) - - Project creation interface with file selection and metadata entry - - Annotation import interface with batch processing configuration -- **User Workflows**: - 1. Check installation status and install/update QuPath - 2. Launch QuPath application with optional project loading - 3. Add images to projects by selecting WSI files - 4. Import annotations by uploading GeoJSON files with progress tracking - 5. Inspect project structure and image metadata - 6. Monitor and manage running QuPath processes - ---- - -## 5. 
Dependencies and Integration - -### 5.1 Internal Dependencies - -| Dependency Module | Usage Purpose | Interface Used | -| ----------------- | -------------------------------------------------- | --------------------------------------------- | -| Utils Module | Logging, base service class, and common utilities | `BaseService`, `get_logger()`, `console` | -| CLI Module | Command registration and CLI framework integration | CLI router registration and command discovery | -| GUI Module | Web interface framework and component libraries | NiceGUI components and page registration | - -### 5.2 External Dependencies - -| Dependency | Version | Purpose | Optional/Required | -| ----------- | ------- | ------------------------------------------------------------------ | ----------------- | -| `typer` | Latest | Type-safe CLI command framework with automatic help generation | Required | -| `requests` | Latest | HTTP client for GitHub API and file downloads with timeout support | Required | -| `psutil` | Latest | Process management and system monitoring for QuPath instances | Required | -| `appdirs` | Latest | Platform-specific application data directory resolution | Required | -| `packaging` | Latest | Version parsing and comparison for QuPath version management | Required | -| `ijson` | Latest | Streaming JSON parser for large annotation datasets | Required | -| `pydantic` | Latest | Data validation and serialization for configuration models | Required | - -### 5.3 Integration Points - -- **GitHub Releases API**: QuPath version discovery and download URL resolution (REST API) -- **QuPath GitHub Releases**: Binary distribution downloads for all supported platforms (HTTPS downloads with integrity verification) -- **QuPath CLI Interface**: Groovy script execution and project manipulation (Command-line subprocess execution with JSON parameter passing) - ---- - -## 6. Configuration and Settings - -### 6.1 Configuration Parameters - -| Parameter | Type | Default | Description | Required | -| ---------------------------------- | ------ | ----------------- | ----------------------------------------------- | -------- | -| `QUPATH_VERSION` | String | `"0.6.0-rc5"` | Default QuPath version to install | No | -| `DOWNLOAD_CHUNK_SIZE` | Int | `10485760` (10MB) | Size of download chunks for QuPath archive | No | -| `QUPATH_LAUNCH_MAX_WAIT_TIME` | Int | `30` | Maximum wait time for QuPath to start (seconds) | No | -| `QUPATH_SCRIPT_MAX_EXECUTION_TIME` | Int | `7200` | Maximum script execution time (seconds) | No | -| `ANNOTATIONS_BATCH_SIZE` | Int | `500000` | Batch size for annotation processing | No | - -### 6.2 Environment Variables - -| Variable | Purpose | Example Value | -| --------------------- | -------------------------------------- | ------------------------ | -| `AIGNOSTICS_QUPATH_*` | Module-specific configuration settings | Various based on setting | - ---- - -## 7. 
Error Handling and Validation - -### 7.1 Error Categories - -| Error Type | Cause | Handling Strategy | User Impact | -| ------------------------ | ------------------------------------- | ------------------------------------------- | ------------------------------------------- | -| `Installation Errors` | Network failures, corrupted downloads | Retry with exponential backoff | Clear error message with retry option | -| `Platform Compatibility` | Unsupported architectures | Graceful error with supported platform list | Installation blocked with explanation | -| `QuPath Process Errors` | Script execution failures, timeouts | Process termination and error logging | Operation fails with detailed error | -| `File System Errors` | Path permissions, disk space | Permission checks and space validation | User prompted to resolve file system issues | - -### 7.2 Input Validation - -- **QuPath Version**: Must match semver pattern and be available in GitHub releases -- **Installation Path**: Must be writable directory with sufficient disk space -- **WSI File Paths**: Files must exist, be readable, and match supported formats (DICOM, SVS, TIFF) -- **Annotation Data**: Must contain valid GeoJSON structure with coordinate data and classification schemas - -### 7.3 Graceful Degradation - -- **When QuPath installation is unavailable**: Module functions return appropriate error status with installation instructions -- **When GitHub API is unreachable**: Use cached version information or offline installation if available -- **When QuPath process fails**: Cleanup partial operations and provide recovery suggestions - ---- - -## 8. Security Considerations - -### 8.1 Data Protection - -- **Authentication**: No authentication required as QuPath is a local application -- **Data Encryption**: Downloads verified using HTTPS from GitHub releases -- **Access Control**: File system permissions enforced for QuPath installation directory - -### 8.2 Security Measures - -- **Input Sanitization**: All file paths and command arguments are validated and sanitized before execution -- **Process Isolation**: QuPath processes run with timeout limits and resource monitoring -- **Download Verification**: Archive integrity verified through GitHub API checksums - ---- - -## 9. 
Implementation Details - -### 9.1 Key Algorithms and Business Logic - -- **Platform Detection**: Automatic detection of OS and architecture for QuPath binary selection -- **Chunked Download**: Efficient download of large QuPath archives with progress tracking -- **Streaming JSON Processing**: Memory-efficient processing of large annotation datasets using ijson -- **Process Management**: Safe execution and monitoring of QuPath subprocess with timeout handling - -### 9.2 State Management and Data Flow - -- **Configuration State**: Module settings stored using Pydantic settings with environment variable override -- **Runtime State**: QuPath process information tracked in memory with psutil integration -- **Installation State**: QuPath installation status persisted in user data directory - -### 9.3 Performance and Scalability Considerations - -- **Async Operations**: Download and extraction operations run in background with progress callbacks -- **Thread Safety**: All service methods are thread-safe for concurrent access -- **Process Monitoring**: QuPath processes monitored asynchronously with timeout enforcement diff --git a/specifications/SPEC-UTILS-SERVICE.md b/specifications/SPEC-UTILS-SERVICE.md deleted file mode 100644 index 51009049..00000000 --- a/specifications/SPEC-UTILS-SERVICE.md +++ /dev/null @@ -1,377 +0,0 @@ ---- -itemId: SPEC-UTILS-SERVICE -itemTitle: Utils Module Specification -itemType: Software Item Spec -itemFulfills: SWR-SYSTEM-CLI-HEALTH-1, SWR-SYSTEM-GUI-HEALTH-1, SWR-SYSTEM-GUI-SETTINGS-1 -itemIsRelatedTo: SPEC-GUI-SERVICE, SPEC-BUCKET-SERVICE, SPEC-DATASET-SERVICE, SPEC-NOTEBOOK-SERVICE, SPEC-PLATFORM-SERVICE, SPEC-QUPATH-SERVICE, SPEC-SYSTEM-SERVICE, SPEC-WSI-SERVICE -Module: Utils -Layer: Infrastructure Service -Version: 1.0.0 -Date: 2025-10-13 ---- - -## 1. Description - -### 1.1 Purpose - -The Utils Module provides foundational infrastructure utilities for the Aignostics Python SDK. It serves as the core infrastructure layer that enables configuration management, dependency injection, logging, health monitoring, CLI preparation, GUI framework support, and cross-cutting concerns for all other modules in the SDK. 
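As a rough sketch of what this foundation looks like from a consuming module's perspective, the example below uses the `BaseService`, `Health`, and `get_logger` exports named in Section 2.2; the exact `Health` constructor spelling is inferred from the Section 3.3 schema and may differ in the implementation.

```python
from aignostics.utils import BaseService, Health, get_logger

logger = get_logger(__name__)  # logger factory from the logging infrastructure


class ExampleService(BaseService):
    """Hypothetical domain service built on the Utils foundation."""

    def health(self) -> Health:
        # Status values follow the UP/DOWN enum of the Section 3.3 schema;
        # component statuses propagate up into the service-level status.
        return Health(status="UP", components={"cache": Health(status="UP")})


logger.info("health: %s", ExampleService().health())
```

Because `ExampleService` subclasses `BaseService`, the discovery helpers described in Section 4.1 (`locate_subclasses()`, `locate_implementations()`) can find and instantiate it without explicit registration.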
- -### 1.2 Functional Requirements - -The Utils Module shall: - -- **[FR-01]** Provide environment detection and configuration management with automatic .env file loading -- **[FR-02]** Implement dependency injection mechanisms for dynamic service discovery and registration -- **[FR-03]** Provide centralized logging infrastructure with configurable settings and external service integration -- **[FR-04]** Implement health monitoring models and status propagation for service health checks -- **[FR-05]** Support CLI preparation and dynamic command registration for typer-based interfaces -- **[FR-06]** Provide GUI framework abstractions and page builder patterns for web interface development -- **[FR-07]** Implement settings management with validation, serialization, and sensitive data handling -- **[FR-08]** Provide file system utilities for user data directory management and path sanitization -- **[FR-09]** Support process information gathering and runtime environment detection - -### 1.3 Non-Functional Requirements - -- **Performance**: Lightweight initialization with lazy loading of optional dependencies, efficient caching of discovered services -- **Security**: Secure handling of sensitive configuration data, proper sanitization of file paths, environment-based security controls -- **Reliability**: Graceful degradation when optional dependencies unavailable, robust error handling in service discovery -- **Usability**: Simple API surface with sensible defaults, comprehensive logging and debugging support -- **Scalability**: Efficient service discovery caching, support for dynamic module loading and registration - -### 1.4 Constraints and Limitations - -- **Optional Dependency Constraints**: Full functionality requires optional packages (nicegui, logfire, sentry, marimo) -- **Environment Detection**: Some features disabled in containerized or read-only environments -- **Configuration Dependency**: Relies on .env files and environment variables for configuration - ---- - -## 2. 
Architecture and Design - -### 2.1 Module Structure - -``` -utils/ -├── __init__.py # Module exports and conditional loading -├── _constants.py # Runtime environment detection and project constants -├── _service.py # Base service class with settings integration -├── _di.py # Dependency injection and service discovery -├── _settings.py # Pydantic settings utilities and validation -├── _health.py # Health monitoring models and status propagation -├── _log.py # Logging configuration and logger factory -├── _cli.py # CLI preparation and dynamic command registration -├── _gui.py # GUI framework abstractions and page builders -├── _fs.py # File system utilities and path management -├── _process.py # Process information and runtime detection -├── _console.py # Rich console configuration -├── _logfire.py # Logfire integration (optional) -├── _sentry.py # Sentry integration (optional) -├── _notebook.py # Marimo notebook utilities (optional) -└── boot.py # Application bootstrap and initialization -``` - -### 2.2 Key Components - -| Component | Type | Purpose | Public Interface | Dependencies | -| ----------------- | ----- | -------------------------------------- | ------------------------------------------- | ------------ | -| `BaseService` | Class | Abstract base for all service classes | Settings integration, health interface | Pydantic | -| `BasePageBuilder` | Class | Abstract base for GUI page builders | Abstract page registration interface | NiceGUI | -| `Health` | Class | Health status modeling and propagation | Status computation with component hierarchy | Pydantic | -| `LogSettings` | Class | Logging configuration management | Environment-based logging configuration | Rich | -| `OpaqueSettings` | Class | Base for settings with sensitive data | Controlled serialization for sensitive data | Pydantic | -| `ProcessInfo` | Class | Process and runtime information | System and environment metadata collection | Core | - -_Note: For detailed implementation, refer to the source code in the `src/aignostics/utils/` directory._ - -### 2.3 Design Patterns - -- **Abstract Base Class Pattern**: `BaseService` and `BasePageBuilder` provide standardized interfaces for extension across all SDK modules -- **Dependency Injection Pattern**: Dynamic discovery and registration of services using reflection-based service location -- **Settings Pattern**: Centralized configuration management using Pydantic settings with environment variable binding -- **Health Check Pattern**: Hierarchical health status modeling with automatic propagation of failure states -- **Factory Pattern**: Logger factory and service instantiation with settings injection - ---- - -## 3. 
Inputs and Outputs - -### 3.1 Inputs - -| Input Type | Source | Data Type/Format | Validation Rules | Business Rules | -| --------------------- | ---------------- | --------------------- | ------------------------------------ | --------------------------- | -| Environment Variables | System/Container | Key-value strings | Project-specific prefix validation | Optional configuration | -| Configuration Files | Filesystem | .env format | Key-value pairs, optional file | Fallback to defaults | -| Settings Classes | Code | BaseSettings types | Pydantic validation rules | Must inherit from base | -| Service Classes | Code | BaseService types | Must inherit from BaseService | Auto-discovery enabled | -| PageBuilder Classes | Code | BasePageBuilder types | Must implement register_pages() | GUI registration required | -| Health Components | Services | Health objects | Valid status and reason combinations | Component hierarchy support | - -### 3.2 Outputs - -| Output Type | Destination | Data Type/Format | Success Criteria | Error Conditions | -| ----------------- | ---------------- | ---------------- | ------------------------------------- | ------------------------- | -| Logger Instances | Application | logging.Logger | Configured logger with proper level | Configuration errors | -| Service Instances | Dependency Graph | BaseService | Properly initialized with settings | Settings validation fails | -| Health Status | Monitoring | Health objects | Valid status with component hierarchy | Component failures | -| GUI Application | Web Browser | NiceGUI app | Running web application | Port conflicts | -| CLI Application | Terminal | Configured Typer | Registered commands available | Command conflicts | -| Settings Objects | Services | BaseSettings | Validated configuration objects | Environment errors | -| File Paths | Application | Path objects | Sanitized and validated paths | Permission denied | - -### 3.3 Data Schemas - -**Settings Schema:** - -```yaml -# Source: Based on BaseSettings and OpaqueSettings contracts -settings: - type: object - description: Configuration object with environment binding - properties: - sensitive_fields: - type: array - description: Fields to exclude from serialization - validation_rules: - type: object - description: Pydantic validators applied - environment_binding: true -``` - -**Health Status Schema:** - -```yaml -# Source: Health model in _health.py -health: - type: object - required: [status] - properties: - status: - type: string - enum: [UP, DOWN] - description: Service health status - reason: - type: string - nullable: true - description: Optional reason for status - components: - type: object - description: Hierarchical component health - additionalProperties: - $ref: "#/health" -``` - -_Note: Complete schemas available in implementation docstrings and type hints._ - -### 3.4 Data Flow - -``` -Environment/Config → Settings Loading → Service Initialization → Health Monitoring -Service Discovery → Dependency Injection → Component Registration → Application Ready -``` - ---- - -## 4. 
Interface Definitions - -### 4.1 Public API - -#### Core Service Interface - -**BaseService Class** - -- **Purpose**: Provides standardized service pattern for all SDK modules with settings integration -- **Key Capabilities**: - - Settings-based initialization with automatic validation - - Health status reporting with component hierarchy - - Module identification for service discovery - -**Input/Output Contracts**: - -- **Initialization**: Accepts optional settings class, performs validation -- **Health Reporting**: Returns structured health status with components -- **Service Identity**: Provides module-based service identification - -#### Dependency Injection Interface - -**Service Discovery Functions** - -- **locate_subclasses()**: Discovers all subclasses of given base class -- **locate_implementations()**: Creates instances of discovered service classes - -#### Health Monitoring Interface - -**Health Status Management** - -- **Purpose**: Hierarchical health status modeling with automatic propagation -- **Capabilities**: Component health aggregation, status computation, failure propagation - -_Note: For detailed method signatures, refer to the module's `__init__.py` and implementation files._ - -### 4.2 CLI Interface - -**Command Structure:** - -```bash -uvx aignostics [module-name] [subcommand] [options] -``` - -**CLI Preparation:** - -| Function | Purpose | Input Requirements | Output Format | -| ---------------- | ----------------------------- | ------------------ | ---------------------- | -| `prepare_cli()` | Dynamic command registration | Typer instance | Configured CLI | -| Service commands | Module-specific functionality | Service parameters | Module-specific output | - -### 4.3 GUI Interface - -**GUI Framework Support:** - -| Component | Purpose | Requirements | Integration | -| ----------------- | ------------------------- | ----------------------- | ------------------ | -| `BasePageBuilder` | Page registration pattern | Abstract method impl | NiceGUI framework | -| `gui_run()` | Application launcher | Optional configuration | Web browser launch | -| Page registration | Module UI integration | PageBuilder inheritance | Dynamic discovery | - ---- - -## 5. 
Dependencies and Integration - -### 5.1 Internal Dependencies - -| Dependency Module | Usage Purpose | Interface/Contract Used | Criticality | -| ----------------- | -------------------------- | ---------------------------- | ----------- | -| None | Utils is foundation module | Provides base classes to all | Required | - -### 5.2 External Dependencies - -| Dependency | Min Version | Purpose | Optional/Required | Fallback Behavior | -| ------------------- | ----------- | ---------------------------------- | ----------------- | -------------------------- | -| `pydantic` | ^2.0 | Settings validation and modeling | Required | N/A - core functionality | -| `pydantic-settings` | ^2.0 | Environment-based configuration | Required | N/A - core functionality | -| `rich` | ^13.0 | Console output and formatting | Required | Basic console output | -| `typer` | ^0.12 | CLI framework and command handling | Required | N/A - CLI functionality | -| `python-dotenv` | ^1.0 | Environment file loading | Required | Environment variables only | -| `nicegui` | ^1.0 | GUI framework support | Optional | CLI-only mode | -| `logfire` | ^0.41 | Observability and monitoring | Optional | Standard logging | -| `sentry-sdk` | ^2.0 | Error tracking and performance | Optional | Local error handling | -| `marimo` | ^0.8 | Notebook utilities | Optional | Notebook features disabled | - -_Note: For exact version requirements, refer to `pyproject.toml` and dependency lock files._ - -### 5.3 Integration Points - -- **All SDK Modules**: Provides foundational services through BaseService pattern -- **CLI Integration**: Dynamic command discovery and registration for all module CLIs -- **GUI Integration**: Page builder pattern for modular web interface development -- **External Monitoring**: Integration with Logfire and Sentry for observability - ---- - -## 6. Configuration and Settings - -### 6.1 Configuration Parameters - -| Parameter | Type | Default | Description | Required | -| ----------------------------------------- | ---- | ----------- | ---------------------------------------- | -------- | -| `UNHIDE_SENSITIVE_INFO` | str | N/A | Context key for revealing sensitive data | No | -| `__is_development_mode__` | bool | auto-detect | Development vs production mode | No | -| `__is_running_in_container__` | bool | auto-detect | Container environment detection | No | -| `__is_running_in_read_only_environment__` | bool | auto-detect | Read-only environment detection | No | - -### 6.2 Environment Variables - -| Variable | Purpose | Example Value | -| --------------------------------- | ------------------------------ | --------------------- | -| `AIGNOSTICS_ENV_FILE` | Custom environment file path | `/path/to/custom.env` | -| `AIGNOSTICS_RUNNING_IN_CONTAINER` | Container environment flag | `true` | -| `VERCEL_ENV` | Vercel deployment environment | `production` | -| `RAILWAY_ENVIRONMENT` | Railway deployment environment | `production` | - ---- - -## 7. 
Error Handling and Validation - -### 7.1 Error Categories - -| Error Type | Cause | Handling Strategy | User Impact | -| ------------------- | ----------------------------------- | ----------------------- | ---------------------- | -| `ValidationError` | Invalid settings configuration | Log error, fail fast | Configuration required | -| `ImportError` | Missing optional dependencies | Graceful degradation | Feature unavailable | -| `FileNotFoundError` | Missing configuration files | Use defaults | Continue with defaults | -| `TypeError` | Invalid service/builder inheritance | Log error, skip service | Service unavailable | - -### 7.2 Input Validation - -- **Settings Classes**: Pydantic validation with custom validators for sensitive data -- **File Paths**: Sanitization and validation of user data directory paths -- **Environment Variables**: Type coercion and validation of environment configuration -- **Service Discovery**: Type checking for proper inheritance from base classes - -### 7.3 Graceful Degradation - -- **When NiceGUI unavailable**: GUI features disabled, CLI-only mode -- **When Logfire unavailable**: Standard logging without observability features -- **When Sentry unavailable**: Error tracking disabled, local logging only -- **When Marimo unavailable**: Notebook features disabled - ---- - -## 8. Security Considerations - -### 8.1 Data Protection - -- **Sensitive Settings**: OpaqueSettings base class with controlled serialization excludes sensitive data from logs and exports -- **Environment Variables**: Secure loading and validation of configuration with proper access controls -- **File System Access**: Path sanitization and validation prevents directory traversal attacks - -### 8.2 Security Measures - -- **Input Sanitization**: Path component and full path sanitization with Windows drive letter preservation, prevents malicious path injection -- **Secret Management**: Controlled serialization of sensitive data with `OpaqueSettings.serialize_sensitive_info()` method -- **Environment Isolation**: Environment-specific behavior with container and read-only environment detection - ---- - -## 9. 
Implementation Details - -### 9.1 Key Algorithms and Business Logic - -- **Service Discovery**: Reflection-based discovery with performance-optimized caching for dynamic service location -- **Health Propagation**: Recursive tree traversal algorithm for computing hierarchical health status with failure propagation -- **Settings Loading**: Environment variable binding with validation, type coercion, and sensitive data protection -- **Path Sanitization**: Security-focused path validation and normalization to prevent directory traversal - -### 9.2 State Management and Data Flow - -- **State Type**: Primarily stateless design with cached discovery results for performance -- **Data Persistence**: No persistent state maintained; configuration loaded from environment and constants -- **Cache Strategy**: In-memory caching of service discovery results with thread-safe access patterns - -### 9.3 Performance and Scalability Considerations - -- **Performance Characteristics**: Efficient lazy loading of optional dependencies, cached service discovery for repeated operations -- **Scalability Patterns**: Thread-safe service discovery enables concurrent access, stateless design supports horizontal scaling -- **Resource Management**: Memory-efficient caching with automatic cleanup, lightweight service initialization -- **Concurrency Model**: Thread-safe discovery caches, async-compatible patterns for GUI and web components - ---- - -## Documentation Maintenance - -### Verification and Updates - -**Last Verified**: September 11, 2025 -**Verification Method**: Code review against implementation in `src/aignostics/utils/` -**Next Review Date**: December 11, 2025 - -### Change Management - -**Interface Changes**: Changes to BaseService or BasePageBuilder APIs require spec updates and version bumps -**Implementation Changes**: Internal discovery algorithms don't require spec updates unless behavior changes -**Dependency Changes**: Optional dependency changes should be reflected in fallback behavior section - -### References - -**Implementation**: See `src/aignostics/utils/` for current implementation -**Tests**: See `tests/aignostics/utils/` for usage examples and verification -**API Documentation**: Auto-generated from docstrings and type hints diff --git a/specifications/SPEC_CLI.md b/specifications/SPEC_CLI.md new file mode 100644 index 00000000..1950f03f --- /dev/null +++ b/specifications/SPEC_CLI.md @@ -0,0 +1,10 @@ +--- +itemId: SPEC-CLI +itemType: Software Item Spec +itemFulfills: SWR-SYSTEM-CLI-HEALTH-1 +Module: All +Layer: CLI +--- + + +The CLI is built using Typer. diff --git a/specifications/SPEC_GUI.md b/specifications/SPEC_GUI.md new file mode 100644 index 00000000..556a8cca --- /dev/null +++ b/specifications/SPEC_GUI.md @@ -0,0 +1,10 @@ +--- +itemId: SPEC-GUI +itemType: Software Item Spec +itemFulfills: SWR-SYSTEM-GUI-HEALTH-1 +Module: All +Layer: GUI +--- + + +The GUI is built using NiceGUI. 
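These two new one-page specs are intentionally minimal. For orientation, a Typer command of the kind SPEC-CLI refers to might look like the sketch below; this is illustrative only, not the SDK's actual command registration, which is wired up dynamically through the Utils module's `prepare_cli()` discovery described in SPEC-UTILS-SERVICE.

```python
import typer

app = typer.Typer(help="Illustrative health command in the style of the SDK CLI.")


@app.command()
def health() -> None:
    """Report health status (sketch only; the real command aggregates module health)."""
    typer.echo("UP")


if __name__ == "__main__":
    app()
```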
diff --git a/specifications/SPEC_GUI_SERVICE.md b/specifications/SPEC_GUI_SERVICE.md deleted file mode 100644 index fcf9d483..00000000 --- a/specifications/SPEC_GUI_SERVICE.md +++ /dev/null @@ -1,359 +0,0 @@ ---- -itemId: SPEC-GUI-SERVICE -itemTitle: GUI Module Specification -itemType: Software Item Spec -itemFulfills: SWR-SYSTEM-GUI-HEALTH-1, SWR-SYSTEM-GUI-SETTINGS-1 -itemIsRelatedTo: SPEC-APPLICATION-SERVICE, SPEC-BUCKET-SERVICE, SPEC-DATASET-SERVICE, SPEC-NOTEBOOK-SERVICE, SPEC-QUPATH-SERVICE, SPEC-SYSTEM-SERVICE -Module: GUI _(Graphical User Interface Framework)_ -Layer: Presentation Interface -Version: 1.0.0 -Date: 2025-09-11 ---- - -## 1. Description - -### 1.1 Purpose - -The GUI (Graphical User Interface) Module provides a web-based interface framework for the Aignostics Python SDK. The module enables other SDK modules to create consistent web interfaces using the BasePageBuilder pattern, with standardized theming, error handling, and layout components. It serves as the presentation layer that aggregates functionality from domain modules into a unified web application interface. - -### 1.2 Functional Requirements - -The GUI Module shall: - -- **[FR-01]** Provide standardized page layout framework through `frame()` context manager with navigation and branding components -- **[FR-02]** Enable consistent theming through `theme()` function with Aignostics brand colors, fonts, and CSS styling -- **[FR-03]** Implement BasePageBuilder pattern to enable module-specific GUI component registration and discovery -- **[FR-04]** Support static asset management with centralized serving of fonts, logos, and styling resources -- **[FR-05]** Provide error page handling through ErrorPageBuilder with fallback mechanisms for failed operations -- **[FR-06]** Enable health monitoring integration with periodic system health updates in the navigation frame -- **[FR-07]** Support user authentication status display with profile integration and authentication state management - -### 1.3 Non-Functional Requirements - -- **Performance**: Health monitoring updates every 30 seconds, user info updates every 3600 seconds, lazy loading of optional dependencies -- **Security**: Secure handling of user authentication status, safe asset serving, input validation for navigation -- **Reliability**: Graceful degradation when NiceGUI is unavailable, fallback error pages, conditional feature loading -- **Usability**: Consistent navigation patterns, responsive design, accessible theming, cross-platform desktop support -- **Scalability**: Modular architecture supporting dynamic module registration, efficient asset management - -### 1.4 Constraints and Limitations - -- **NiceGUI Framework Dependency**: Full functionality requires NiceGUI installation, graceful degradation when unavailable -- **Platform Integration Dependencies**: Requires Platform and System modules for user authentication and health monitoring features -- **Container Environment**: Health monitoring and authentication features may have limited functionality in containerized environments -- **Browser Compatibility**: Requires modern web browser with JavaScript enabled for full functionality - ---- - -## 2. 
Architecture and Design - -### 2.1 Module Structure - -``` -gui/ -├── __init__.py # Module exports and conditional NiceGUI loading -├── _theme.py # Brand theming, colors, fonts, CSS styling -├── _frame.py # Page layout framework with navigation and health monitoring -├── _error.py # Error page handling and fallback mechanisms -└── assets/ # Static assets for branding and styling - ├── cabin-v27-latin-regular.woff2 # Custom font file - ├── cat.lottie # Animation asset - └── logo.png # Brand logo -``` - -### 2.2 Key Components - -| Component | Type | Purpose | Public Interface | Dependencies | -| ------------------ | -------- | ------------------------------------- | -------------------------------------- | ---------------- | -| `theme` | Function | Apply Aignostics brand styling | Configures UI colors, fonts, CSS | NiceGUI | -| `frame` | Function | Page layout with navigation framework | Context manager for page structure | Platform, System | -| `PageBuilder` | Class | Theme and static asset registration | BasePageBuilder pattern implementation | Utils Module | -| `ErrorPageBuilder` | Class | Error page handling and fallbacks | Error scenario page registration | Utils Module | - -For detailed implementation, refer to the source code in the `src/aignostics/gui/` directory. - -### 2.3 Design Patterns - -- **BasePageBuilder Pattern**: Abstract base class pattern enabling module-specific GUI component registration through standardized interface -- **Context Manager Pattern**: `frame()` function provides consistent page layout structure with automatic resource management -- **Factory Pattern**: Theme application with configurable styling and brand asset loading -- **Observer Pattern**: Health monitoring integration with automatic UI updates and status propagation -- **Conditional Loading Pattern**: Optional dependency detection and graceful degradation when GUI frameworks unavailable -- **Auto-Discovery Pattern**: Automatic detection and registration of module GUI components using `locate_subclasses()` - ---- - -## 3. 
Inputs and Outputs - -### 3.1 Inputs - -| Input Type | Source | Data Type/Format | Validation Rules | Business Rules | -| -------------------- | ---------- | ---------------- | -------------------------------------- | --------------------------------------- | -| Navigation Title | Function | `str` | Required, non-empty string | Must be descriptive page identifier | -| Navigation Icon | Function | `str` or `None` | Optional, valid icon identifier | Must be valid NiceGUI icon name | -| Layout Configuration | Function | `bool` | Boolean flags for sidebar display | Controls page layout structure | -| Static Asset Files | Filesystem | Binary files | Valid file paths, readable permissions | Font, image, and animation files | -| Module PageBuilders | Code | Class instances | Must inherit from BasePageBuilder | Auto-discovered through service pattern | - -### 3.2 Outputs - -| Output Type | Destination | Data Type/Format | Success Criteria | Error Conditions | -| ---------------- | ------------- | ---------------- | ------------------------------------- | ------------------------------------- | -| HTML Page Layout | Web Browser | HTML/CSS/JS | Complete responsive page structure | Rendering errors, missing components | -| Theme Styles | Web Browser | CSS stylesheets | Applied brand colors and typography | Style conflicts, loading failures | -| Static Assets | Web Browser | Binary responses | Correct MIME types, efficient serving | File not found, permission denied | -| Error Pages | Web Browser | HTML content | User-friendly error display | Critical system failures | -| Health Updates | Web Interface | JSON data | Real-time service status display | Health monitoring service unavailable | - -### 3.3 Data Schemas - -**Frame Configuration Schema:** - -```yaml -# Frame function parameters -frame_config: - type: object - required: [navigation_title] - properties: - navigation_title: - type: string - description: Title displayed in navigation bar - validation: Non-empty string - icon: - type: string - description: Icon identifier for navigation - validation: Valid NiceGUI icon name or null - left_sidebar: - type: boolean - description: Enable left sidebar display - default: false -``` - -**Theme Configuration Schema:** - -```yaml -# Theme styling configuration -theme_config: - type: object - properties: - colors: - type: object - description: Aignostics brand color scheme - fonts: - type: object - description: Custom font definitions - css_overrides: - type: string - description: Additional CSS styling -``` - -Actual schemas may be defined in OpenAPI specifications or JSON Schema files. - -### 3.4 Data Flow - -```mermaid -graph LR - A[Module Registration] --> B[PageBuilder Discovery] --> C[Frame Layout] - C --> D[Theme Application] --> E[Asset Serving] - F[Health Monitoring] --> G[Status Updates] --> H[Navigation Display] - I[User Interface] --> J[Error Handling] --> K[Fallback Pages] -``` - ---- - -## 4. 
Interface Definitions - -### 4.1 Public API - -#### Core Service Interface - -**Frame Context Manager**: `frame()` - -- **Purpose**: Provides standardized page layout with navigation, branding, and health monitoring integration -- **Key Methods**: - - `frame(navigation_title: str, icon: str | None = None, left_sidebar: bool = False)`: Creates consistent page layout structure -- **Input/Output Contracts**: - - **Input Types**: Navigation title (required string), optional icon and layout configuration - - **Output Types**: HTML page structure with navigation, sidebar, and content areas - - **Error Conditions**: Graceful fallback when NiceGUI dependencies are unavailable - -**Theme Application Function**: `theme()` - -- **Purpose**: Applies consistent Aignostics branding and styling across all interfaces -- **Key Methods**: - - `theme() -> None`: Applies CSS color scheme, custom fonts, and responsive design patterns - -**PageBuilder Classes**: `PageBuilder`, `ErrorPageBuilder` - -- **Purpose**: Standard interface for module-specific GUI component registration and static asset management -- **Key Methods**: - - `register_pages() -> None`: Abstract method for route and asset registration - -### 4.2 CLI Interface (if applicable) - -**Command Structure:** - -```bash -uvx aignostics launchpad -``` - -**Available Commands:** - -| Command | Purpose | Input Requirements | Output Format | -| ----------- | --------------------------------------------------- | ------------------ | --------------------- | -| `launchpad` | Open the graphical user interface as a desktop app | None | Native desktop window | - -**Common Options:** - -- `--help`: Display command help -- Conditional availability based on NiceGUI and WebView dependencies - -### 4.3 HTTP/Web Interface (if applicable) - -**Endpoint Structure:** - -| Method | Endpoint | Purpose | Request Format | Response Format | -| ------ | ----------------------- | --------------------- | -------------- | --------------- | -| `GET` | `/` | Main application page | None | HTML page | -| `GET` | `/module_name_assets/*` | Static asset serving | File path | Binary content | -| `GET` | `/module_name/*` | Module-specific pages | None | HTML page | - -**Authentication**: User authentication status display integrated in navigation -**Error Responses**: Standardized error pages with fallback mechanisms - ---- - -## 5.
Dependencies and Integration - -### 5.1 Internal Dependencies - -| Dependency Module | Usage Purpose | Interface/Contract Used | Criticality | -| ----------------- | -------------------------------------- | ---------------------------------------- | ----------- | -| Utils Module | BasePageBuilder pattern implementation | `BasePageBuilder`, service discovery | Required | -| Platform Module | User authentication status integration | `UserInfo`, authentication services | Optional | -| System Module | Health monitoring integration | `SystemService`, health status reporting | Optional | -| Constants Module | Version and project metadata | `__version__`, project configuration | Required | - -### 5.2 External Dependencies - -| Dependency | Min Version | Purpose | Optional/Required | Fallback Behavior | -| ---------- | ----------- | ------------------------------------ | ----------------- | ------------------------------------- | -| `nicegui` | ^1.0 | Web framework and UI components | Required | GUI functionality completely disabled | -| `fastapi` | ^0.100 | Static file serving and HTTP routing | Required | Asset serving fails | -| `humanize` | ^4.0 | Human-readable time formatting | Required | Raw timestamp display | -| `webview` | ^4.0 | Native desktop application support | Optional | Web browser launch only | -| `uvicorn` | ^0.23 | ASGI server for development | Optional | Development server unavailable | - -For exact version requirements, refer to `pyproject.toml` and dependency lock files. - -### 5.3 Integration Points - -- **All SDK Modules**: Provides theming and layout framework for module-specific GUI components through PageBuilder pattern -- **CLI Integration**: GUI application launcher integrated with main CLI through conditional command registration -- **Web Browser**: Primary interface through modern web browser with responsive design compatibility -- **Desktop Environment**: Optional native desktop application support through WebView integration - ---- - -## 6. Configuration and Settings - -### 6.1 Configuration Parameters - -| Parameter | Type | Default | Description | Required | -| -------------------------- | ----- | ------- | ------------------------------------- | -------- | -| `HEALTH_UPDATE_INTERVAL` | `int` | `30` | Health check frequency (seconds) | No | -| `USERINFO_UPDATE_INTERVAL` | `int` | `3600` | User info refresh frequency (seconds) | No | - -### 6.2 Environment Variables - -| Variable | Purpose | Example Value | -| ----------------------------- | -------------------------------------- | ------------------ | -| `__is_running_in_container__` | Container detection for feature gating | `"true"`/`"false"` | - ---- - -## 7. 
Error Handling and Validation - -### 7.1 Error Categories - -| Error Type | Cause | Handling Strategy | User Impact | -| --------------------- | ----------------------------- | -------------------------- | ------------------------ | -| `ImportError` | NiceGUI not available | Graceful degradation | GUI features unavailable | -| `ModuleNotFoundError` | WebView not available | Disable desktop features | Web-only interface | -| `ValueError` | Invalid navigation parameters | Log warning, use defaults | Default navigation shown | -| `RuntimeError` | Asset serving failure | Fallback to default assets | Basic styling applied | - -### 7.2 Input Validation - -- **Navigation Title**: Required non-empty string, sanitized for HTML display -- **Icon Parameters**: Optional string validation against known icon set -- **Asset Paths**: Path validation for security, restricted to module directories -- **Boolean Flags**: Type validation with default fallbacks - -### 7.3 Graceful Degradation - -- **When NiceGUI is unavailable**: All GUI functionality disabled, empty exports returned -- **When WebView is unavailable**: Desktop features disabled, web-only mode active -- **When assets are missing**: Fallback to default assets and basic styling - ---- - -## 8. Security Considerations - -### 8.1 Data Protection - -- **Authentication**: Secure display of user authentication status without exposing sensitive data -- **Data Encryption**: In-transit encryption through HTTPS for web interface -- **Access Control**: Module-based permission system for GUI component access - -### 8.2 Security Measures [Optional] - -- **Input Sanitization**: Navigation titles and parameters sanitized for HTML output -- **Secret Management**: No secrets stored in GUI module, authentication handled by Platform module -- **Audit Logging**: Security events logged through standard logging framework - ---- - -## 9. 
Implementation Details - -### 9.1 Key Algorithms and Business Logic - -- **Auto-Discovery Algorithm**: Uses `locate_subclasses()` to find all `BasePageBuilder` implementations across modules and automatically register their pages -- **Conditional Loading Algorithm**: Uses `find_spec()` to detect available dependencies and conditionally load features -- **Theme Application Algorithm**: CSS injection and font loading with fallback mechanisms for consistent styling - -### 9.2 State Management and Data Flow - -- **State Type**: Stateful GUI application with session-based configuration and persistent theme settings -- **Data Persistence**: No persistent state maintained; configuration loaded from constants and environment detection -- **Session Management**: Browser session tracking for theme application and health monitoring state synchronization -- **Cache Strategy**: Static asset caching through FastAPI, one-time theme application per session - -### 9.3 Performance and Scalability Considerations - -- **Performance Characteristics**: Fast theme application with cached asset serving, efficient health update cycles -- **Scalability Patterns**: Modular PageBuilder pattern supports dynamic module registration, asynchronous health monitoring -- **Resource Management**: Memory-efficient static asset serving, configurable update intervals for monitoring overhead -- **Concurrency Model**: Timer-based async operations for health updates, thread-safe GUI component operations - ---- - -## Documentation Maintenance - -### Verification and Updates - -**Last Verified**: September 15, 2025 -**Verification Method**: Code review against implementation in `src/aignostics/gui/` and template compliance check -**Next Review Date**: December 15, 2025 - -### Change Management - -**Interface Changes**: Changes to BasePageBuilder APIs require spec updates and version bumps -**Implementation Changes**: Theme and styling changes don't require spec updates unless affecting public contracts -**Dependency Changes**: NiceGUI version changes should be reflected in constraints section - -### References - -**Implementation**: See `src/aignostics/gui/` for current implementation -**Tests**: See `tests/aignostics/gui/` for usage examples and verification -**API Documentation**: Auto-generated from frame and theme function docstrings diff --git a/specifications/SPEC_NOTEBOOK_SERVICE.md b/specifications/SPEC_NOTEBOOK_SERVICE.md deleted file mode 100644 index 0379fbf4..00000000 --- a/specifications/SPEC_NOTEBOOK_SERVICE.md +++ /dev/null @@ -1,324 +0,0 @@ ---- -itemId: SPEC-NOTEBOOK-SERVICE -itemTitle: Notebook Module Specification -itemType: Software Item Spec -itemFulfills: SHR-NOTEBOOK-1, SWR-NOTEBOOK-1-1 -Module: Notebook _(Interactive Data Analysis)_ -Layer: Presentation Interface -Version: 1.0.0 -Date: 2025-09-11 ---- - -## 1. Description - -### 1.1 Purpose - -The Notebook Module provides interactive data analysis capabilities through Marimo notebook integration for the Aignostics Python SDK. It enables users to perform exploratory data analysis, visualization, and computation within the Aignostics platform ecosystem and serves as the presentation interface for data analysis workflows. 
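A minimal sketch of the lifecycle this purpose statement implies, assuming the `Service` interface listed in Section 2.2 (`start()`, `stop()`, `health()`); the import path and instantiation style are assumptions based on the module layout in Section 2.1.

```python
from aignostics.notebook import Service  # import path assumed from the module layout

service = Service()
try:
    url = service.start()  # waits up to the configured startup timeout; RuntimeError on failure
    print(f"Marimo notebook server ready at {url}")
    print(service.health())  # server and monitor-thread status, see Section 3.2
finally:
    service.stop()  # per the reliability requirements, stop handles concurrent/failed starts
```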
- -### 1.2 Functional Requirements - -The Notebook Module shall: - -- **[FR-01]** Manage Marimo server lifecycle with start/stop/health monitoring capabilities -- **[FR-02]** Provide web-based notebook interface through iframe integration -- **[FR-03]** Support application run data integration for analysis workflows -- **[FR-04]** Register GUI pages and maintain navigation controls -- **[FR-05]** Handle asset management for notebook interface resources -- **[FR-06]** Implement comprehensive error handling and recovery mechanisms - -### 1.3 Non-Functional Requirements - -- **Performance**: Server startup within 60 seconds, responsive GUI page loading, minimal impact from output monitoring -- **Security**: User-level process permissions, URL parameter validation, iframe same-origin policy compliance -- **Reliability**: Graceful server startup failure recovery, safe concurrent start/stop requests, stable iframe integration -- **Usability**: Clear feedback for server operations, actionable error messages, consistent navigation controls -- **Scalability**: Singleton pattern for resource management, configurable timeout handling - -### 1.4 Constraints and Limitations - -- Marimo and NiceGUI dependencies required for full functionality - module unavailable without these -- No support for Jupyter notebook compatibility or custom notebook runtime engines -- Limited to subprocess-based Marimo server management - no advanced configuration customization - ---- - -## 2. Architecture and Design - -### 2.1 Module Structure - -``` -notebook/ -├── _service.py # Core business logic and Marimo server management -├── _gui.py # Web-based GUI components and page registration -├── _notebook.py # Default notebook template and configuration -├── assets/ # Static assets for notebook interface -│ └── python.lottie # Animation resources -└── __init__.py # Module exports and conditional loading -``` - -### 2.2 Key Components - -| Component | Type | Purpose | Public Interface | Dependencies | -| ------------- | ---------------- | ------------------------------------ | ------------------------- | --------------------- | -| `Service` | Class | Marimo server lifecycle management | start(), stop(), health() | marimo, subprocess | -| `PageBuilder` | Class | GUI page registration and navigation | register_pages() | nicegui, Service | -| `_Runner` | Class (Internal) | Subprocess execution and monitoring | N/A - Internal | threading, subprocess | - -_Note: For detailed implementation, refer to the source code in the `src/aignostics/notebook/` directory._ - -### 2.3 Design Patterns - -- **Singleton Pattern**: Applied to \_Runner class for single server instance management -- **Facade Pattern**: Service class provides simplified interface to complex \_Runner operations -- **Page Builder Pattern**: Modular GUI page registration and asset management - ---- - -## 3. 
Inputs and Outputs - -### 3.1 Inputs - -| Input Type | Source | Data Type/Format | Validation Rules | Business Rules | -| ------------------ | ------------- | ---------------- | ---------------------------- | ------------------------------ | -| Application Run ID | GUI/URL | String | Non-empty, 'all' or valid ID | Determines notebook data scope | -| Results Folder | GUI/URL | File Path | Valid directory path | Provides data access location | -| Server Timeout | Configuration | Integer | 1-300 seconds | Controls startup wait time | - -### 3.2 Outputs - -| Output Type | Destination | Data Type/Format | Success Criteria | Error Conditions | -| ------------- | ----------- | ---------------- | ------------------------------ | ------------------------- | -| Server URL | Client/GUI | HTTP URL String | Valid URL returned | RuntimeError on failure | -| Health Status | Monitoring | Health Object | Server/thread status available | Component failure states | -| GUI Pages | Web Browser | HTML/NiceGUI | Pages load successfully | Missing dependency errors | - -### 3.3 Data Schemas - -**Server Configuration Schema:** - -```yaml -ServerConfig: - type: object - properties: - startup_timeout: - type: integer - default: 60 - description: "Maximum seconds to wait for server startup" - source: "MARIMO_SERVER_STARTUP_TIMEOUT constant in _service.py" - required: [] - note: "Default notebook path is hardcoded in constants.py, not configurable" -``` - -**Navigation Parameters Schema:** - -```yaml -NavigationParams: - type: object - properties: - application_run_id: - type: string - description: "Application run identifier or 'all' for general access" - source: "URL path parameter in GUI routing" - results_folder: - type: string - format: path - description: "Path to results directory for data access" - source: "URL query parameter in GUI routing" - required: [application_run_id, results_folder] -``` - -_Note: Actual schemas are implemented through method signatures and configuration constants._ - -### 3.4 Data Flow - -```mermaid -graph LR - A[GUI Request] --> B[Service Interface] --> C[Marimo Server] - B --> D[Health Monitoring] - E[Configuration] --> B - F[Application Data] --> C -``` - ---- - -## 4. 
Interface Definitions - -### 4.1 Public API - -#### Core Service Interface - -**Service Class**: `Service` - -- **Purpose**: Manages Marimo server lifecycle and provides health monitoring -- **Key Methods**: - - `start() -> str`: Start Marimo server and return URL - - `stop() -> None`: Stop running Marimo server - - `health() -> Health`: Get server and thread status information - -**Input/Output Contracts**: - -- **Input Types**: Timeout configuration (integer), optional parameters -- **Output Types**: Server URLs (string), Health objects, None for cleanup operations -- **Error Conditions**: RuntimeError for startup failures, graceful handling for missing dependencies - -_Note: For detailed method signatures, refer to the module's `__init__.py` and service class documentation._ - -### 4.2 GUI Interface - -**Page Registration Interface:** - -The module registers web interface pages through the PageBuilder pattern: - -- `/notebook` - Main notebook management interface -- `/notebook/{application_run_id}` - Application-specific notebook view with data integration - -**Navigation Behavior:** - -- Iframe integration for seamless notebook embedding -- Back navigation controls with context-aware routing -- Error handling with retry mechanisms for server failures - -### 4.3 HTTP/Web Interface - -**Endpoint Structure:** - -| Method | Endpoint | Purpose | Request Format | Response Format | -| ------ | ---------------- | ------------------------- | ------------------------- | ---------------- | -| `GET` | `/notebook` | Main notebook interface | Query parameters optional | HTML page | -| `GET` | `/notebook/{id}` | Application-specific view | Path + query parameters | HTML with iframe | - -**Error Responses**: Standard NiceGUI error handling with user-friendly messages and retry options - ---- - -## 5. Dependencies and Integration - -### 5.1 Internal Dependencies - -| Dependency Module | Usage Purpose | Interface/Contract Used | Criticality | -| ----------------- | ------------------------- | ---------------------------- | ----------- | -| GUI Module | Frame and theme support | frame(), theme() functions | Required | -| Utils Module | Base services and logging | BaseService, BasePageBuilder | Required | -| Constants Module | Default notebook path | NOTEBOOK_DEFAULT constant | Required | - -### 5.2 External Dependencies - -| Dependency | Min Version | Purpose | Optional/Required | Fallback Behavior | -| ---------- | ----------- | ----------------------- | ----------------- | ------------------ | -| marimo | Latest | Notebook server runtime | Required | Module unavailable | -| nicegui | Latest | Web UI framework | Required | Module unavailable | - -_Note: For exact version requirements, refer to `pyproject.toml` and dependency lock files._ - -### 5.3 Integration Points - -- **Aignostics Platform**: Application run data integration through URL parameters -- **File System**: Results folder access for data analysis workflows -- **Web Browser**: Iframe integration for notebook interface embedding - ---- - -## 6. 
Configuration and Settings - -### 6.1 Configuration Parameters - -| Parameter | Type | Default | Description | Required | -| ------------------ | ------- | ------------- | ------------------------------------------ | -------- | -| `startup_timeout` | Integer | 60 | Maximum seconds to wait for server startup | No | -| `default_notebook` | Path | \_notebook.py | Path to default notebook template | No | - -### 6.2 Environment Variables - -| Variable | Purpose | Example Value | -| -------- | --------------------------------------- | ------------- | -| N/A | No environment variables currently used | N/A | - ---- - -## 7. Error Handling and Validation - -### 7.1 Error Categories - -| Error Type | Cause | Handling Strategy | User Impact | -| -------------- | ---------------------- | ---------------------------------------- | -------------------------------------------- | -| `RuntimeError` | Server startup failure | Clear error messages with retry options | User sees actionable error with retry button | -| `ImportError` | Missing dependencies | Graceful degradation, module unavailable | Module not loaded, no functionality | -| `TimeoutError` | Server startup timeout | Process cleanup and error reporting | User notified of timeout with retry option | - -### 7.2 Input Validation - -- **Application Run ID**: Validated as non-empty string, supports 'all' for general access -- **Results Folder Path**: URL encoded, validated for path traversal prevention -- **Timeout Values**: Integer validation with reasonable bounds (1-300 seconds) - -### 7.3 Graceful Degradation - -- **When marimo is unavailable**: Module not loaded, conditional import prevents errors -- **When nicegui is unavailable**: Module not loaded, no GUI functionality available -- **When server startup fails**: Clear error display with retry mechanisms - ---- - -## 8. Security Considerations - -### 8.1 Data Protection - -- **Authentication**: Relies on platform-level authentication mechanisms -- **Data Encryption**: No sensitive data stored, relies on HTTPS for transport security -- **Access Control**: Results folder access controlled through application permissions - -### 8.2 Security Measures - -- **Input Sanitization**: URL parameter validation prevents path traversal attacks -- **Process Management**: Server processes run with user-level permissions only -- **Iframe Security**: Follows same-origin policy where applicable for browser security - ---- - -## 9. 
Implementation Details - -### 9.1 Key Algorithms and Business Logic - -- **Server Lifecycle Management**: Singleton pattern ensures single server instance with proper cleanup -- **URL Detection**: Character-by-character output monitoring with regex pattern matching -- **Health Monitoring**: Thread-based status checking for server and monitoring components - -### 9.2 State Management and Data Flow - -- **State Type**: Stateful with singleton server instance management -- **Data Persistence**: Temporary server state only, no persistent data storage -- **Session Management**: Browser session-based through iframe integration -- **Cache Strategy**: No caching implemented, direct server communication - -### 9.3 Performance and Scalability Considerations - -- **Performance Characteristics**: Server startup bounded by the 60-second timeout, character-level output monitoring -- **Scalability Patterns**: Single server instance per process, thread-safe operations -- **Resource Management**: Automatic process cleanup, memory-efficient output capture -- **Concurrency Model**: Thread-safe singleton with proper synchronization - ---- - -## Documentation Maintenance - -### Verification and Updates - -**Last Verified**: 2025-09-11 -**Verification Method**: Source code analysis and senior engineer review -**Next Review Date**: 2025-12-11 - -### Change Management - -**Interface Changes**: Changes to Service API require spec updates and version bumps -**Implementation Changes**: Internal \_Runner changes don't require spec updates unless contracts change -**Dependency Changes**: marimo/nicegui version changes should be reflected in requirements section - -### References - -**Implementation**: See `src/aignostics/notebook/` for current implementation -**Tests**: See `tests/aignostics/notebook/` for usage examples and verification -**Feature Tests**: See `tests/aignostics/notebook/TC-NOTEBOOK-GUI-01.feature` for behavior verification - diff --git a/specifications/SPEC_PLATFORM_SERVICE.md b/specifications/SPEC_PLATFORM_SERVICE.md deleted file mode 100644 index d62a002b..00000000 --- a/specifications/SPEC_PLATFORM_SERVICE.md +++ /dev/null @@ -1,505 +0,0 @@ ---- -itemId: SPEC-PLATFORM-SERVICE -itemTitle: Platform Module Specification -itemType: Software Item Spec -itemFulfills: SWR-APPLICATION-1-1, SWR-APPLICATION-1-2, SWR-APPLICATION-2-1, SWR-APPLICATION-2-5, SWR-APPLICATION-2-6, SWR-APPLICATION-2-7, SWR-APPLICATION-2-9, SWR-APPLICATION-2-14, SWR-APPLICATION-2-15, SWR-APPLICATION-2-16, SWR-APPLICATION-3-1, SWR-APPLICATION-3-2, SWR-APPLICATION-3-3 -Module: Platform -Layer: Platform Service -Version: 1.0.0 -Date: 2025-09-09 ---- - -## 1. Description - -### 1.1 Purpose - -The Platform Module provides the foundational authentication, API client management, and core service infrastructure for the Aignostics Python SDK. It enables secure communication with the Aignostics Platform API and serves as the primary entry point for all biomedical data analysis workflows. This module handles OAuth 2.0 authentication flows, token management, API client configuration, and provides the base infrastructure for higher-level application modules. 
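Before the formal requirements, a hedged usage sketch of the `Client` entry point specified in section 4.1 below (the import path and identifiers are illustrative):

```python
from aignostics.platform import Client  # assumed export path

client = Client()                  # authenticates, reusing a cached token when present
me = client.me()                   # current user and organization information
run = client.run("run-id")         # attach to an existing run; "run-id" is a placeholder
print(run.details())               # current run status data
run.download_to_folder("results")  # artifact download with CRC32C verification
```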
 - -### 1.2 Functional Requirements - -The Platform Module shall: - -- **[FR-01]** Provide secure OAuth 2.0 authentication with support for both Authorization Code with PKCE and Device Authorization flows -- **[FR-02]** Manage JWT token lifecycle including acquisition, caching, validation, and refresh operations -- **[FR-03]** Configure and provide authenticated API clients for interaction with Aignostics Platform services -- **[FR-04]** Support multiple deployment environments (production, staging, development) with automatic endpoint configuration -- **[FR-05]** Provide CLI commands for user authentication operations (login, logout, whoami) -- **[FR-06]** Handle authentication errors with retry mechanisms and fallback flows -- **[FR-07]** Support proxy configurations and SSL certificate handling for enterprise environments -- **[FR-08]** Provide health monitoring for both public and authenticated API endpoints -- **[FR-09]** Manage application and application version resources with listing and filtering capabilities -- **[FR-10]** Create and manage application runs with status monitoring and result retrieval -- **[FR-11]** Download and verify file integrity using CRC32C checksums for run artifacts (sketched after this section) -- **[FR-12]** Generate signed URLs for secure Google Cloud Storage access -- **[FR-13]** Provide user and organization information retrieval with sensitive data masking options - -### 1.3 Non-Functional Requirements - -- **Performance**: 30s API timeout, 3s retry backoff, file-based token caching, 10-retry port availability checking -- **Security**: HTTPS-only communication, JWT validation (signature/audience/expiration), `SecretStr` protection, data masking, proxy/SSL support -- **Reliability**: Auto token refresh, browser-to-device flow fallback, re-authentication on failures, port conflict handling, socket reuse -- **Usability**: Predefined error messages, auto browser launch, device flow instructions, CLI commands (`login`/`logout`/`whoami`), environment auto-config -- **Scalability**: Token caching, configurable timeouts/retries, concurrent auth support, lazy API client initialization - -### 1.4 Constraints and Limitations - -- OAuth 2.0 dependency: Requires the external Auth0 service for authentication -- Browser dependency: Interactive flow requires web browser availability, limiting headless deployment options -- Network dependency: Requires internet connectivity for initial authentication and token validation -- Platform-specific: Designed specifically for Aignostics Platform API integration - ---- - 
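A hedged sketch of the FR-11 integrity check, assuming the `checksum_base64_crc32c` attribute format that `ApplicationRun.download_to_folder()` references in section 4.1 (helper name and chunk size are illustrative):

```python
import base64
from pathlib import Path

import google_crc32c  # listed as a required dependency in section 5.2

def verify_crc32c(path: Path, expected_base64: str) -> bool:
    """Illustrative check of a downloaded artifact against its CRC32C checksum."""
    checksum = google_crc32c.Checksum()
    with path.open("rb") as file:
        for chunk in iter(lambda: file.read(1 << 20), b""):  # 1 MiB chunks
            checksum.update(chunk)
    return base64.b64encode(checksum.digest()).decode("ascii") == expected_base64
```

-## 2. 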
Architecture and Design - -### 2.1 Module Structure - -``` -platform/ -├── _service.py # Core service implementation with health monitoring -├── _client.py # API client factory and configuration management -├── _authentication.py # OAuth flows and token management -├── _cli.py # Command-line interface for user operations -├── _settings.py # Environment-specific configuration management -├── _constants.py # Environment constants and endpoint definitions -├── _messages.py # User-facing messages and error constants -├── _utils.py # File operations, checksums, and GCS utilities -├── resources/ # Resource-specific implementations -│ ├── applications.py # Application and version management -│ ├── runs.py # Application run lifecycle management -│ └── utils.py # Shared resource utilities -└── __init__.py # Public API exports and module interface -``` - -### 2.2 Key Components - -| Component | Type | Purpose | Public API | -| ---------------- | ------ | ------------------------------------------------------- | ----------------------------------------------------------------------------- | -| `Client` | Class | Main entry point for authenticated API operations | `__init__()`, `me()`, `application()`, `run()`
Static: `get_api_client()` | -| `Service` | Class | Core service with health monitoring and user operations | `login()`, `logout()`, `get_user_info()`, `health()`, `info()` | -| `Settings` | Class | Environment-aware configuration management | Property accessors for all auth endpoints | -| `Applications` | Class | Application resource management | `list()`, `versions` accessor | -| `ApplicationRun` | Class | Run lifecycle and result management | `details()`, `cancel()`, `results()`, `download_to_folder()` | -| `Versions` | Class | Application version management | `list()`, `list_sorted()`, `latest()`, `details()` | -| `Runs` | Class | Application run management and creation | `create()`, `list()`, `list_data()`, `__call__()` | -| `utils` | Module | Resource utility functions and pagination helpers | `paginate()` | - -### 2.3 Design Patterns - -- **Factory Pattern**: `Client.get_api_client()` creates configured API clients based on environment settings -- **Service Layer Pattern**: Business logic encapsulated in service classes with clean separation from API details -- **Strategy Pattern**: Multiple authentication flows (Authorization Code vs Device Flow) selected based on environment capabilities -- **Template Method Pattern**: Base authentication flow with specific implementations for different OAuth grant types - ---- - -## 3. Inputs and Outputs - -### 3.1 Inputs - -| Input Type | Source | Format/Type | Validation Rules | Code Location | -| --------------------- | ----------------------- | ---------------- | ----------------------------------------------------------- | ---------------------------------------------------------- | -| Environment Variables | OS Environment | String/SecretStr | Must match expected format for URLs and client IDs | `_settings.py::Settings` class fields | -| Configuration Files | .env files | Key-value pairs | Validated against Pydantic schema | `_settings.py::Settings` with pydantic-settings | -| User Credentials | Interactive/Device Flow | OAuth responses | Validated against JWT specification | `_authentication.py::get_token()` OAuth flow handlers | -| API Root URL | Configuration | URL string | Must be valid HTTPS URL matching known endpoints | `_settings.py::Settings.api_root` property | -| File Paths | CLI/API | Path objects | Must be valid filesystem paths with appropriate permissions | `_utils.py` file operations and `ApplicationRun` downloads | - -### 3.2 Outputs - -| Output Type | Destination | Format/Type | Success Criteria | Code Location | -| ---------------- | ------------------- | ---------------------- | --------------------------------------------------- | ---------------------------------------------------- | -| JWT Access Token | Token cache/memory | String | Valid JWT with required claims and unexpired | `_authentication.py::get_token()` return value | -| API Client | Client applications | PublicApi object | Authenticated and configured for target environment | `_client.py::Client.get_api_client()` factory method | -| User Information | CLI/Application | UserInfo/Me objects | Complete user and organization data | `_service.py::Service.get_user_info()` method | -| Health Status | Monitoring systems | Health object | Accurate service and dependency status | `_service.py::Service.health()` method | -| Downloaded Files | Local filesystem | Binary/structured data | Verified checksums and complete downloads | `_utils.py` download functions and `ApplicationRun` | - -### 3.3 Data Schemas - -**Authentication Token Schema:** - -```yaml -JWTToken: - 
type: object - properties: - access_token: - type: string - description: JWT access token for API authentication - token_type: - type: string - enum: ["Bearer"] - description: Token type for authorization header - expires_in: - type: integer - description: Token expiration time in seconds - refresh_token: - type: string - description: Long-lived token for access token renewal - required: [access_token, token_type, expires_in] -``` - -**User Information Schema:** - -```yaml -UserInfo: - type: object - properties: - user: - type: object - properties: - id: { type: string } - name: { type: string } - email: { type: string } - picture: { type: string, nullable: true } - organization: - type: object - properties: - id: { type: string } - name: { type: string, nullable: true } - role: - type: string - description: User role within organization - token: - type: object - properties: - expires_in: { type: integer } - required: [user, organization, role, token] -``` - -### 3.4 Data Flow - -```mermaid -graph TD - A[User Request] --> B{Token Cached?} - B -->|Yes| C[Use Cached Token] - B -->|No| D[OAuth Authentication] - - D --> E{Browser Available?} - E -->|Yes| F[Browser Login] - E -->|No| G[Device Code Login] - - F --> H[Get JWT Token] - G --> H - C --> H - - H --> I[Create API Client] - I --> J[Make API Request] - J --> K[Return Results] - - L[Health Check] --> M[Check API Status] -``` - ---- - -## 4. Interface Definitions - -### 4.1 Public API - -#### Core Client Interface - -```python -class Client: - """Main client for interacting with the Aignostics Platform API.""" - - def __init__(self, cache_token: bool = True) -> None: - """Initializes authenticated API client with resource accessors.""" - - def me(self) -> Me: - """Retrieves current user and organization information.""" - - def application(self, application_id: str) -> Application: - """Finds specific application by ID.""" - - def run(self, application_run_id: str) -> ApplicationRun: - """Creates ApplicationRun instance for existing run.""" - - @staticmethod - def get_api_client(cache_token: bool = True) -> PublicApi: - """Creates authenticated API client with proper configuration.""" -``` - -#### Service Interface - -```python -class Service(BaseService): - """Core service for authentication and system operations.""" - - def login(self, relogin: bool = False) -> bool: - """Authenticates user and caches token.""" - - def logout(self) -> bool: - """Removes cached authentication token.""" - - def get_user_info(self, relogin: bool = False) -> UserInfo: - """Retrieves authenticated user information.""" - - def health(self) -> Health: - """Determines service and API health status.""" - - def info(self, mask_secrets: bool = True) -> dict[str, Any]: - """Returns service information including settings and user info.""" -``` - -```python -class Applications: - """Resource class for managing applications.""" - - def __init__(self, api: PublicApi) -> None: - """Initializes the Applications resource with the API client.""" - - def list(self) -> Iterator[Application]: - """Find all available applications.""" - - @property - def versions(self) -> Versions: - """Access to application versions management.""" -``` - -```python -class Versions: - """Resource class for managing application versions.""" - - def __init__(self, api: PublicApi) -> None: - """Initializes the Versions resource with the API client.""" - - def list(self, application: Application | str) -> Iterator[ApplicationVersion]: - """Find all versions for a specific application.""" - - def 
list_sorted(self, application: Application | str) -> list[ApplicationVersion]: - """Get application versions sorted by semver, descending.""" - - def latest(self, application: Application | str) -> ApplicationVersion | None: - """Get latest version.""" - - def details(self, application_version: ApplicationVersion | str) -> ApplicationVersion: - """Retrieves details for a specific application version.""" -``` - -```python -class Runs: - """Resource class for managing application runs.""" - - def __init__(self, api: PublicApi) -> None: - """Initializes the Runs resource with the API client.""" - - def create(self, application_version: str, items: list[ItemCreationRequest]) -> ApplicationRun: - """Creates a new application run.""" - - def list(self, for_application_version: str | None = None) -> Generator[ApplicationRun, Any, None]: - """Find application runs, optionally filtered by application version.""" - - def list_data(self, for_application_version: str | None = None, sort: str | None = None, page_size: int = 100) -> Iterator[ApplicationRunData]: - """Fetch application runs data with optional filtering and sorting.""" - - def __call__(self, application_run_id: str) -> ApplicationRun: - """Retrieves an ApplicationRun instance for an existing run.""" -``` - -```python -class ApplicationRun: - """Represents a single application run.""" - - def __init__(self, api: PublicApi, application_run_id: str) -> None: - """Initializes an ApplicationRun instance.""" - - @classmethod - def for_application_run_id(cls, application_run_id: str) -> "ApplicationRun": - """Creates an ApplicationRun instance for an existing run.""" - - def details(self) -> ApplicationRunData: - """Retrieves the current status of the application run.""" - - def cancel(self) -> None: - """Cancels the application run.""" - - def results(self) -> Iterator[ItemResultData]: - """Retrieves the results of all items in the run.""" - - def item_status(self) -> dict[str, ItemStatus]: - """Retrieves the status of all items in the run.""" - - def download_to_folder(self, download_base: Path | str, checksum_attribute_key: str = "checksum_base64_crc32c") -> None: - """Downloads all result artifacts to a folder.""" -``` - -```python -class Settings(OpaqueSettings): - """Configuration settings for the Aignostics SDK.""" - - # Core API settings - api_root: str - audience: str - authorization_base_url: str - token_url: str - device_url: str - jws_json_url: str - client_id_interactive: str - - # Optional settings - client_id_device: SecretStr | None - refresh_token: SecretStr | None - cache_dir: str - request_timeout_seconds: int - authorization_backoff_seconds: int -``` - -### 4.2 CLI Interface - -**Command Structure:** - -```bash -uvx aignostics platform [subcommand] [options] -``` - -**Available Commands:** - -- `login [--relogin]`: Authenticate user with platform -- `logout`: Remove authentication and clear cached tokens -- `whoami [--mask-secrets] [--relogin]`: Display current user information - -### 4.3 GUI Interface - -The Platform module provides foundational services but does not directly expose GUI components. It supports GUI applications by providing: - -- **Authentication State**: Token validation and user information for GUI session management -- **API Client Factory**: Configured clients for GUI data operations -- **Health Monitoring**: Service status for GUI health indicators - ---- - -## 5. 
Dependencies and Integration - -### 5.1 Internal Dependencies - -| Dependency Module | Usage Purpose | Interface Used | -| ----------------- | ---------------------------------------------------- | ----------------------------------------------- | -| Utils Module | Logging, configuration loading, base service classes | `get_logger()`, `BaseService`, `OpaqueSettings` | - -### 5.2 External Dependencies - -| Dependency | Version | Purpose | Optional/Required | -| -------------------- | ------- | ------------------------------------ | ----------------- | -| aignx.codegen | ^X.X.X | Auto-generated API client and models | Required | -| requests | ^2.31.0 | HTTP client for authentication flows | Required | -| requests-oauthlib | ^1.3.0 | OAuth 2.0 flow implementation | Required | -| PyJWT | ^2.8.0 | JWT token validation and decoding | Required | -| google-crc32c | ^1.5.0 | File integrity verification | Required | -| pydantic | ^2.5.0 | Settings validation and data models | Required | -| pydantic-settings | ^2.1.0 | Environment-based configuration | Required | -| google-cloud-storage | ^2.10.0 | Signed URL generation for downloads | Optional | -| typer | ^0.9.0 | CLI framework | Required | -| appdirs | ^1.4.4 | Platform-appropriate cache directory | Required | - -### 5.3 Integration Points - -- **Aignostics Platform API**: Primary integration via authenticated HTTP requests to platform endpoints -- **Auth0 Identity Service**: OAuth 2.0 flows for user authentication and token management -- **Google Cloud Storage**: Signed URL generation and secure file download capabilities -- **System Proxy Services**: Automatic proxy detection and configuration for enterprise environments - ---- - -## 6. Configuration and Settings - -### 6.1 Configuration Parameters - -| Parameter | Type | Default | Description | Required | -| ------------------------------- | ---- | --------------------------------- | --------------------------- | -------- | -| `api_root` | str | `https://platform.aignostics.com` | Base URL of Aignostics API | Yes | -| `audience` | str | Environment-specific | OAuth audience claim | Yes | -| `scope` | str | `offline_access` | OAuth scopes required | Yes | -| `cache_dir` | str | User cache directory | Directory for token storage | No | -| `request_timeout_seconds` | int | 30 | API request timeout | No | -| `authorization_backoff_seconds` | int | 3 | Retry backoff time | No | - -### 6.2 Environment Variables - -| Variable | Purpose | Example Value | -| ------------------------------------------ | -------------------------------- | ------------------------------------------------ | -| `AIGNOSTICS_API_ROOT` | Override default API endpoint | `https://platform-dev.aignostics.com` | -| `AIGNOSTICS_CLIENT_ID_DEVICE` | Device flow client ID | `device_client_123` | -| `AIGNOSTICS_SCOPE` | OAuth scopes required | `offline_access,read:data` | -| `AIGNOSTICS_AUDIENCE` | OAuth audience claim | `https://custom-audience` | -| `AIGNOSTICS_AUTHORIZATION_BASE_URL` | Custom authorization endpoint | `https://custom.auth0.com/authorize` | -| `AIGNOSTICS_TOKEN_URL` | Custom token endpoint | `https://custom.auth0.com/oauth/token` | -| `AIGNOSTICS_REDIRECT_URI` | Custom redirect URI | `http://localhost:9000/` | -| `AIGNOSTICS_DEVICE_URL` | Custom device authorization URL | `https://custom.auth0.com/oauth/device/code` | -| `AIGNOSTICS_JWS_JSON_URL` | Custom JWS key set URL | `https://custom.auth0.com/.well-known/jwks.json` | -| `AIGNOSTICS_CLIENT_ID_INTERACTIVE` | Interactive flow client ID | `interactive_client_123` | 
-| `AIGNOSTICS_REFRESH_TOKEN` | Long-lived refresh token | `refresh_token_value` | -| `AIGNOSTICS_CACHE_DIR` | Custom cache directory | `/custom/cache/path` | -| `AIGNOSTICS_REQUEST_TIMEOUT_SECONDS` | API request timeout | `60` | -| `AIGNOSTICS_AUTHORIZATION_BACKOFF_SECONDS` | Authorization retry backoff time | `5` | -| `AIGNOSTICS_ENV_FILE` | Custom .env file location | `/path/to/custom/.env` | -| `REQUESTS_CA_BUNDLE` | SSL certificate bundle | `/path/to/ca-bundle.crt` | - ---- - -## 7. Error Handling and Validation - -### 7.1 Error Categories - -| Error Type | Cause | Handling Strategy | User Impact | -| --------------------- | ---------------------------------------- | --------------------------------------------------- | -------------------------------------------- | -| `AuthenticationError` | Invalid credentials or network issues | Retry with exponential backoff; clear cached tokens | User prompted to re-authenticate | -| `ConfigurationError` | Invalid settings or missing endpoints | Validate on startup; provide clear error messages | Application fails fast with actionable error | -| `NetworkError` | Connection timeouts or proxy issues | Retry with backoff; fallback to device flow | Automatic retry or alternative auth flow | -| `TokenExpiredError` | JWT token past expiration | Automatic refresh using refresh token | Transparent token renewal | -| `ValidationError` | Invalid input parameters or file formats | Input sanitization and validation | Clear validation error messages | - -### 7.2 Input Validation - -- **JWT Tokens**: Signature verification, expiration checking, audience validation, issuer verification -- **File Paths**: Existence validation, permission checking, path traversal protection -- **URLs**: Format validation, HTTPS requirement, endpoint whitelist checking -- **Configuration Values**: Type validation, range checking, required field validation - -### 7.3 Graceful Degradation - -- **When browser unavailable**: Automatically fallback to device authorization flow for headless environments -- **When authentication service unreachable**: Use cached tokens if available and not expired (with 5-minute buffer) -- **When callback port unavailable**: Retry port availability check with backoff, then fallback to device flow if port remains occupied -- **When proxy configuration fails**: Attempt direct connection using system proxy settings with fallback to no proxy -- **When token refresh fails**: Force new authentication flow while preserving user session context -- **When API health check fails**: Continue with cached authentication but report degraded service status - ---- - -## 8. Security Considerations - -### 8.1 Data Protection - -- **Authentication**: OAuth 2.0 with PKCE for enhanced security; JWT tokens with short expiration times -- **Data Encryption**: HTTPS for all API communications; encrypted token storage with appropriate file permissions -- **Access Control**: Token-based API access; organization and role-based authorization claims - -### 8.2 Security Measures - -- **Input Sanitization**: URL validation, path traversal protection, parameter type checking -- **Secret Management**: SecretStr types for sensitive data; environment variable isolation; masked logging -- **Audit Logging**: Authentication events, token lifecycle, API access patterns -- **Token Security**: Automatic expiration, secure storage, validation on each use - ---- - -## 9. 
Implementation Details - -### 9.1 Key Algorithms and Business Logic - -- **PKCE Flow**: OAuth 2.0 Authorization Code flow with Proof Key for Code Exchange for enhanced security in public clients -- **Token Caching**: File-based token persistence with expiration tracking and automatic cleanup -- **Health Monitoring**: Multi-layer health checks including public endpoint availability and authenticated API access - -### 9.2 State Management and Data Flow - -- **Configuration State**: Pydantic-based settings with environment variable override hierarchy -- **Runtime State**: In-memory API client instances with lazy initialization -- **Cache Management**: File-based token cache with automatic expiration and cleanup - -### 9.3 Performance and Scalability Considerations - -- **Concurrency**: Synchronous design with thread-safe token operations -- **Thread Safety**: File-based locking for token cache operations; atomic file writes for token storage - ---- diff --git a/specifications/SPEC_SYSTEM_SERVICE.md b/specifications/SPEC_SYSTEM_SERVICE.md index 5d8f8a1a..2e2be417 100644 --- a/specifications/SPEC_SYSTEM_SERVICE.md +++ b/specifications/SPEC_SYSTEM_SERVICE.md @@ -1,421 +1,11 @@ --- itemId: SPEC-SYSTEM-SERVICE -itemTitle: System Module Specification itemType: Software Item Spec itemFulfills: SWR-SYSTEM-CLI-HEALTH-1, SWR-SYSTEM-GUI-HEALTH-1, SWR-SYSTEM-GUI-SETTINGS-1 -itemIsRelatedTo: SPEC-UTILS-SERVICE, SPEC-GUI-SERVICE Module: System -Layer: Platform Service -Version: 1.0.0 -Date: 2025-10-13 +Layer: Service --- -## 1. Description +The system module provides a service to check the health of the system, accessible via both the CLI and the GUI. The service returns the current operational status of the system. -### 1.1 Purpose - -The System Module provides core platform services and system management capabilities for the Aignostics Python SDK. It enables system health monitoring, configuration management, diagnostics, proxy configuration, and serves as the foundational service layer for other modules in the platform ecosystem. 
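For orientation, a hypothetical usage sketch of the `Service` interface specified in section 4.1 below (the import path is assumed from the module exports):

```python
from aignostics.system import Service  # assumed export, see section 2.1

print(Service.health_static().status)      # overall status, UP or DOWN
info = Service.info(include_environ=True)  # secrets in environ are masked by default
Service.dotenv_set("HTTP_PROXY", "http://proxy.example.com:8080")
removed = Service.dotenv_unset("HTTP_PROXY")  # int return value, per section 4.1
```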
 - -### 1.2 Functional Requirements - -The System Module shall: - -- **[FR-01]** Provide comprehensive system health monitoring with network connectivity checks (sketched at the end of this section) -- **[FR-02]** Aggregate and report system information including runtime, hardware, and process details -- **[FR-03]** Manage environment variable configuration through .env file operations -- **[FR-04]** Support remote diagnostics control via Sentry and Logfire integration -- **[FR-05]** Enable HTTP proxy configuration with SSL certificate and verification options -- **[FR-06]** Provide token-based authentication validation for sensitive operations -- **[FR-07]** Offer CLI commands for system management and configuration -- **[FR-08]** Support web-based GUI interface for system administration -- **[FR-09]** Generate and serve OpenAPI schema for API documentation - -### 1.3 Non-Functional Requirements - -- **Performance**: Network health checks timeout at 5 seconds, system info gathering uses 2-second intervals for CPU measurements, minimal CPU overhead during monitoring -- **Security**: Secret detection and masking in environment variables, token-based authentication, secure proxy configuration -- **Reliability**: Graceful degradation when network unavailable, robust error handling, consistent state management -- **Usability**: Clear CLI output formats (JSON/YAML), intuitive web interface, comprehensive help documentation -- **Scalability**: Efficient service discovery, minimal memory footprint, thread-safe operations - -### 1.4 Constraints and Limitations - -- Network health checks depend on external connectivity to api.ipify.org -- Remote diagnostics require valid Sentry and Logfire configuration -- OpenAPI schema loading requires accessible schema file in codegen directory -- GUI functionality requires nicegui dependency availability - ---- - -## 2. Architecture and Design - -### 2.1 Module Structure - -``` -system/ -├── _service.py # Core business logic and system operations -├── _cli.py # Command-line interface commands -├── _gui.py # Web-based GUI components and pages -├── _settings.py # Module-specific configuration settings -├── _exceptions.py # Custom exception definitions -├── assets/ # Static assets for GUI interface -│ └── system.lottie # Animation resources -└── __init__.py # Module exports and conditional loading -``` - -### 2.2 Key Components - -| Component | Type | Purpose | Public Interface | Dependencies | -| ------------- | ----- | ---------------------------------------- | ------------------------------------ | ----------------- | -| `Service` | Class | Core system operations and health checks | health(), info(), token validation | requests, psutil | -| `Settings` | Class | Configuration management | Token storage and validation | pydantic_settings | -| `PageBuilder` | Class | GUI page registration and interface | register_pages() | nicegui, Service | -| `cli` | Typer | Command-line interface | health, info, config, serve commands | typer, yaml | - -_Note: For detailed implementation, refer to the source code in the `src/aignostics/system/` directory._ - -### 2.3 Design Patterns - -- **Service Layer Pattern**: Service class encapsulates all business logic and system operations -- **Facade Pattern**: Simplified interface to complex system information gathering and configuration -- **Strategy Pattern**: Multiple output formats (JSON/YAML) for CLI commands -- **Template Method Pattern**: BaseService inheritance for consistent service behavior - ---- - 
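Before turning to inputs and outputs, a hedged sketch of FR-01's network connectivity check, assuming the 5-second timeout and api.ipify.org probe documented above and in section 9.1 (the function name is illustrative):

```python
import requests

IPIFY_URL = "https://api.ipify.org"  # external connectivity probe, per section 5.3
NETWORK_TIMEOUT = 5                  # seconds, per the key constants in section 9.1

def network_is_up() -> bool:
    """Sketch of the connectivity check; the real logic lives in _service.py."""
    try:
        requests.get(IPIFY_URL, timeout=NETWORK_TIMEOUT).raise_for_status()
        return True
    except requests.RequestException:
        return False
```

-## 3. 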
Inputs and Outputs - -### 3.1 Inputs - -| Input Type | Source | Data Type/Format | Validation Rules | Business Rules | -| -------------------- | ----------- | ---------------- | ----------------------------------- | --------------------------------- | -| Authentication Token | CLI/API/GUI | String | Non-empty, matches configured token | Required for sensitive operations | -| Configuration Key | CLI | String | Valid environment variable name | Converted to uppercase | -| Configuration Value | CLI | String | Any string value | Stored in primary .env file | -| Proxy Settings | CLI | Host/Port/Scheme | Valid URL components | SSL options mutually exclusive | -| Output Format | CLI | Enum | 'json' or 'yaml' | Default to JSON | - -### 3.2 Outputs - -| Output Type | Destination | Data Type/Format | Success Criteria | Error Conditions | -| -------------- | ----------- | ---------------- | ---------------------------- | ------------------------- | -| Health Status | CLI/API/GUI | Health Object | Status and component details | Network/service failures | -| System Info | CLI/API/GUI | JSON/Dict | Complete system information | Permission/access errors | -| Configuration | Environment | .env File | Key-value pairs written | File access errors | -| OpenAPI Schema | CLI/API | JSON Schema | Valid OpenAPI specification | Schema file not found | -| GUI Pages | Web Browser | HTML/NiceGUI | Pages render successfully | Missing dependency errors | - -### 3.3 Data Schemas - -**Health Status Schema:** - -```yaml -Health: - type: object - properties: - status: - type: string - enum: [UP, DOWN] - description: "Overall system health status" - components: - type: object - description: "Health status of individual components" - additionalProperties: - $ref: "#/definitions/Health" - reason: - type: string - description: "Reason for DOWN status, null for UP" - required: [status] -``` - -**System Info Schema:** - -```yaml -SystemInfo: - type: object - properties: - package: - type: object - description: "Package metadata (version, name, repository)" - runtime: - type: object - description: "Runtime environment information" - properties: - environment: - type: string - username: - type: string - process: - type: object - host: - type: object - python: - type: object - environ: - type: object - description: "Environment variables (optional, masked by default)" - settings: - type: object - description: "Aggregated settings from all modules" - required: [package, runtime, settings] -``` - -**Configuration Schema:** - -```yaml -Configuration: - type: object - properties: - key: - type: string - description: "Environment variable key (uppercase)" - value: - type: string - description: "Environment variable value" - required: [key, value] -``` - -### 3.4 Data Flow - -```mermaid -graph LR - A[CLI/API Request] --> B[Service Layer] --> C[System Operations] - B --> D[Health Monitoring] - B --> E[Configuration Management] - F[External Services] --> D - G[Environment Files] --> E - H[System Resources] --> C -``` - ---- - -## 4. 
Interface Definitions - -### 4.1 Public API - -#### Core Service Interface - -**Service Class**: `Service` - -- **Purpose**: Provides core system management, health monitoring, and configuration services -- **Key Methods**: - - `health() -> Health`: Get aggregate system health including component status (instance method) - - `health_static() -> Health`: Static method to get system health without instance - - `info(include_environ: bool = False, mask_secrets: bool = True) -> dict[str, Any]`: Static method to get comprehensive system information - - `is_token_valid(token: str) -> bool`: Validate authentication token (instance method) - - `dotenv_set(key: str, value: str) -> None`: Static method to set environment variable in .env files - - `dotenv_get(key: str) -> str | None`: Static method to get environment variable value - - `dotenv_unset(key: str) -> int`: Static method to remove environment variable from .env files - - `remote_diagnostics_enable() -> None`: Static method to enable remote diagnostics - - `remote_diagnostics_disable() -> None`: Static method to disable remote diagnostics - - `http_proxy_enable(host: str, port: int, scheme: str, ssl_cert_file: str | None = None, no_ssl_verify: bool = False) -> None`: Static method to configure HTTP proxy - - `http_proxy_disable() -> None`: Static method to disable HTTP proxy - - `openapi_schema() -> JsonType`: Static method to get OpenAPI specification - -**Input/Output Contracts**: - -- **Input Types**: Strings for tokens/keys/values, booleans for flags, timeout integers, optional SSL certificate paths -- **Output Types**: Health objects, dictionaries for info, strings for configuration values, JSON for OpenAPI schema -- **Error Conditions**: RuntimeError for network failures, ValueError for configuration errors, OpenAPISchemaError for schema issues - -### 4.2 CLI Interface - -**Command Structure:** - -```bash -uvx aignostics system [subcommand] [options] -``` - -**Available Commands:** - -| Command | Purpose | Input Requirements | Output Format | -| --------- | ------------------------------ | ------------------------------ | --------------- | -| `health` | Display system health status | Optional output format | JSON/YAML | -| `info` | Show comprehensive system info | Optional environ/masking flags | JSON/YAML | -| `serve` | Start web GUI server | Host, port, browser options | Server startup | -| `openapi` | Display OpenAPI schema | API version, output format | JSON/YAML | -| `install` | Complete installation | None | Success message | - -**Configuration Subcommands:** - -| Command | Purpose | Input Requirements | Output | -| ----------------------------------- | -------------------------- | ----------------------- | --------------- | -| `config get <key>` | Get configuration value | Configuration key name | Key value | -| `config set <key> <value>` | Set configuration value | Key name and value | Success message | -| `config unset <key>` | Remove configuration value | Configuration key name | Success message | -| `config remote-diagnostics-enable` | Enable remote diagnostics | None | Success message | -| `config remote-diagnostics-disable` | Disable remote diagnostics | None | Success message | -| `config http-proxy-enable` | Configure HTTP proxy | Host, port, SSL options | Success message | -| `config http-proxy-disable` | Disable HTTP proxy | None | Success message | - -### 4.3 HTTP/Web Interface - -**GUI Pages:** - -| Route | Purpose | Request Format | Response Format | -| --------- | ------------------------ | ---------------- | --------------- | -| 
`/system` | System administration UI | Query parameters | HTML interface | - -**GUI Features:** - -- Health monitoring with JSON tree display -- System info with secret masking controls -- Configuration management interface -- Remote diagnostics toggle - ---- - -## 5. Dependencies and Integration - -### 5.1 Internal Dependencies - -| Dependency Module | Usage Purpose | Interface/Contract Used | Criticality | -| ----------------- | ------------------------- | ---------------------------- | ----------- | -| Utils Module | Base services and logging | BaseService, Health, logging | Required | -| Constants Module | API version information | API_VERSIONS constant | Required | -| GUI Module | Frame and theme support | frame() function | Optional | - -### 5.2 External Dependencies - -| Dependency | Min Version | Purpose | Optional/Required | Fallback Behavior | -| ----------------- | ----------- | -------------------------- | ----------------- | -------------------- | -| requests | Latest | HTTP requests for health | Required | Network health fails | -| psutil | Latest | System resource monitoring | Required | Info gathering fails | -| typer | Latest | CLI framework | Required | No CLI functionality | -| pydantic-settings | Latest | Configuration management | Required | No settings support | -| nicegui | Latest | Web GUI framework | Optional | No GUI functionality | -| python-dotenv | Latest | .env file management | Required | No config management | - -_Note: For exact version requirements, refer to `pyproject.toml` and dependency lock files._ - -### 5.3 Integration Points - -- **External Health Check**: api.ipify.org for network connectivity validation -- **File System**: .env files for configuration persistence -- **Process System**: System resource monitoring and process information -- **Other SDK Modules**: Service discovery and health aggregation - ---- - -## 6. Configuration and Settings - -### 6.1 Configuration Parameters - -| Parameter | Type | Default | Description | Required | -| --------- | --------- | ------- | --------------------------------------------- | -------- | -| `token` | SecretStr | None | Authentication token for sensitive operations | No | - -### 6.2 Environment Variables - -| Variable | Purpose | Example Value | -| ---------------------------- | -------------------------- | ------------------------------- | -| `AIGNOSTICS_SYSTEM_TOKEN` | Authentication token | `secret-token-value` | -| `AIGNOSTICS_SENTRY_ENABLED` | Enable Sentry diagnostics | `1` | -| `AIGNOSTICS_LOGFIRE_ENABLED` | Enable Logfire diagnostics | `1` | -| `HTTP_PROXY` | HTTP proxy URL | `http://proxy.example.com:8080` | -| `HTTPS_PROXY` | HTTPS proxy URL | `http://proxy.example.com:8080` | -| `SSL_CERT_FILE` | SSL certificate file path | `/path/to/certificate.pem` | -| `SSL_NO_VERIFY` | Disable SSL verification | `1` | - ---- - -## 7. 
Error Handling and Validation - -### 7.1 Error Categories - -| Error Type | Cause | Handling Strategy | User Impact | -| -------------------- | ----------------------- | ------------------------------- | ----------------------------- | -| `OpenAPISchemaError` | Schema file issues | Clear error message with path | Schema operations fail | -| `ValueError` | Configuration conflicts | Validation with helpful message | Configuration rejected | -| `RuntimeError` | Network/system failures | Graceful degradation | Reduced functionality | -| `FileNotFoundError` | Missing .env files | Clear error with file path | Configuration operations fail | - -### 7.2 Input Validation - -- **Authentication Tokens**: Non-empty string validation, secure comparison -- **Configuration Keys**: Converted to uppercase, validated as environment variable names -- **Proxy Settings**: URL format validation, SSL option mutual exclusion -- **File Paths**: Existence verification for SSL certificates - -### 7.3 Graceful Degradation - -- **When network is unavailable**: Health status shows DOWN but system continues -- **When .env files missing**: Clear error messages with file paths -- **When dependencies unavailable**: Conditional loading prevents import errors -- **When external services fail**: Local operations continue, remote features disabled - ---- - -## 8. Security Considerations - -### 8.1 Data Protection - -- **Authentication**: Token-based validation for sensitive operations -- **Data Encryption**: No sensitive data persistence, secure token handling -- **Access Control**: Environment variable access controlled through service layer - -### 8.2 Security Measures - -- **Secret Detection**: Advanced pattern matching for environment variable masking -- **Input Sanitization**: Configuration key validation and path traversal prevention -- **Token Security**: SecretStr usage for secure token storage and comparison -- **Audit Logging**: Comprehensive logging of configuration changes and access - ---- - 
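The secret detection noted in 8.2 is spelled out in section 9.1 below; a minimal sketch of the dual-strategy matching it describes (the term sets are illustrative, not the implementation's):

```python
import re

# Illustrative term sets; the real lists live in src/aignostics/system/.
SUBSTRING_TERMS = ("token", "key", "secret", "password")  # unambiguous: substring match
BOUNDARY_TERMS = ("id",)                                  # ambiguous: word-boundary match

def is_secret_key(key: str) -> bool:
    lowered = key.lower()
    if any(term in lowered for term in SUBSTRING_TERMS):
        return True
    # \b treats "_" as a word character, so normalize underscores to spaces first
    normalized = lowered.replace("_", " ")
    return any(re.search(rf"\b{re.escape(term)}\b", normalized) for term in BOUNDARY_TERMS)

assert is_secret_key("AIGNOSTICS_SYSTEM_TOKEN")  # substring match on "token"
assert not is_secret_key("RAPIDITY")             # "id" inside a word is ignored
```

-## 9. 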
Implementation Details - -### 9.1 Key Algorithms and Business Logic - -- **Health Aggregation**: Automatic discovery and aggregation of BaseService implementations -- **System Information Gathering**: Comprehensive runtime, hardware, and process data collection with configurable intervals -- **Secret Detection**: Dual-strategy pattern matching for environment variable classification: - - Word Boundary Matching: Terms like "id" use regex word boundaries to avoid false positives - - String Matching: Unambiguous terms like "token", "key", "secret", "password" use substring matching - - Case Insensitive: All detection is case-insensitive for robustness - - Real-world Patterns: Handles common environment variable naming conventions -- **Configuration Management**: Atomic .env file operations with rollback capability - -**Key Constants:** - -- `NETWORK_TIMEOUT = 5`: Network health check timeout in seconds -- `MEASURE_INTERVAL_SECONDS = 2`: CPU measurement interval for system info gathering -- `IPIFY_URL`: External service URL for network connectivity validation - -### 9.2 State Management and Data Flow - -- **State Type**: Mostly stateless with singleton service instances -- **Data Persistence**: Configuration persisted to .env files, runtime state in memory -- **Session Management**: Token-based authentication for web operations -- **Cache Strategy**: No caching for real-time system information - -### 9.3 Performance and Scalability Considerations - -- **Performance Characteristics**: Sub-second health checks, 2-second info gathering -- **Scalability Patterns**: Service discovery pattern for module integration -- **Resource Management**: Efficient system monitoring with configurable intervals -- **Concurrency Model**: Thread-safe operations, no shared mutable state - ---- - -## Documentation Maintenance - -### Verification and Updates - -**Last Verified**: 2025-09-11 -**Verification Method**: Source code analysis, test examination, and implementation review -**Next Review Date**: 2025-12-11 - -### Change Management - -**Interface Changes**: Changes to Service API require spec updates and version bumps -**Implementation Changes**: Internal algorithm changes don't require spec updates unless behavior changes -**Dependency Changes**: Major dependency changes should be reflected in requirements section - -### References - -**Implementation**: See `src/aignostics/system/` for current implementation -**Tests**: See `tests/aignostics/system/` for usage examples and verification -**API Documentation**: Auto-generated from service class docstrings +The service auto-masks secrets in its output; masking can also be switched off. A set of patterns is used to identify keys that contain secrets. diff --git a/specifications/SPEC_WSI_SERVICE.md b/specifications/SPEC_WSI_SERVICE.md deleted file mode 100644 index 192b38d9..00000000 --- a/specifications/SPEC_WSI_SERVICE.md +++ /dev/null @@ -1,519 +0,0 @@ ---- -itemId: SPEC-WSI-SERVICE -itemTitle: WSI Module Specification -itemType: Software Item Spec -itemFulfills: SWR-VISUALIZATION-1, SWR-APPLICATION-2-2 -Module: WSI _(Whole Slide Image Processing)_ -Layer: Domain Service -Version: 1.0.0 -Date: 2025-09-11 ---- - -## 1. Description - -### 1.1 Purpose - -The WSI (Whole Slide Image) Module provides comprehensive support for digital pathology image processing within the Aignostics Python SDK. 
It enables users to work with high-resolution microscopy images across multiple formats and provides essential infrastructure for computational pathology workflows. - -### 1.2 Functional Requirements - -The WSI Module shall: - -- **[FR-01]** Support multi-format WSI processing including DICOM (.dcm), TIFF/BigTIFF (.tiff, .tif), and Aperio SVS (.svs) formats -- **[FR-02]** Generate standardized PNG thumbnails (256x256 pixels) from any supported WSI format -- **[FR-03]** Extract comprehensive metadata from WSI files including resolution, dimensions, and format-specific properties -- **[FR-04]** Provide DICOM-specific hierarchical organization by study → container (slide) → series -- **[FR-05]** Handle DICOM Structured Report (SR) annotations with GeoJSON import capability -- **[FR-06]** Serve thumbnails and converted images through HTTP endpoints for web integration -- **[FR-07]** Support real-time TIFF-to-JPEG conversion for web display -- **[FR-08]** Provide command-line interface for WSI inspection and DICOM analysis - -### 1.3 Non-Functional Requirements - -- **Performance**: Efficient processing of large WSI files (multi-gigabyte images) without loading entire images into memory for metadata operations -- **Security**: Input validation for file paths and URLs, secure HTTP endpoint handling with fallback responses -- **Reliability**: Graceful error handling for corrupted files, missing dependencies, and network issues -- **Usability**: Clear CLI output formatting, standardized thumbnail generation, comprehensive metadata extraction -- **Scalability**: Streaming operations for large files, memory-efficient thumbnail generation, support for pyramidal image structures - -### 1.4 Constraints and Limitations - -- **File Format Support**: Limited to extensions defined in `WSI_SUPPORTED_FILE_EXTENSIONS = {".dcm", ".tiff", ".tif", ".svs"}` with format-specific capabilities: - - DICOM (.dcm): Complete metadata, pyramidal support, annotation support via PydicomHandler - - TIFF/BigTIFF (.tiff, .tif): Standard metadata, pyramidal support, limited annotations via OpenSlideHandler - - Aperio SVS (.svs): Vendor-specific metadata, pyramidal support, no annotation support via OpenSlideHandler -- **OpenSlide Dependency**: Non-DICOM formats require OpenSlide library installation for processing -- **Memory Constraints**: Large WSI files require careful memory management and streaming operations -- **Platform Dependencies**: Requires platform-specific compilation of OpenSlide and image processing libraries -- **Coordinate Systems**: Supports DICOM image coordinates (pixel-based), QuPath coordinates (micron-based), and GeoJSON standard adaptation for pathology - ---- - -## 2. 
Architecture and Design - -### 2.1 Module Structure - -The WSI module follows the standard Aignostics SDK module structure: - -``` -wsi/ -├── _service.py # Core business logic and WSI processing service -├── _cli.py # Command-line interface for WSI inspection -├── _gui.py # Web-based GUI components and HTTP endpoints -├── _openslide_handler.py # Handler for TIFF/SVS formats using OpenSlide -├── _pydicom_handler.py # Handler for DICOM formats using PyDICOM/HighDICOM -├── _utils.py # Helper functions for output formatting -├── __init__.py # Module exports and public API -└── assets/ # Static assets (fallback images) - └── fallback.png -``` - -### 2.2 Key Components - -| Component | Type | Purpose | Public Interface | Dependencies | -| ------------------ | ----- | ------------------------------------- | ------------------------------------------------------------ | -------------------------------- | -| `Service` | Class | Core WSI processing service | Thumbnail generation, metadata extraction, format conversion | OpenSlideHandler, PydicomHandler | -| `OpenSlideHandler` | Class | Handles TIFF/SVS formats | Format detection, thumbnail creation, metadata parsing | OpenSlide, PIL | -| `PydicomHandler` | Class | Handles DICOM formats and annotations | DICOM parsing, hierarchy organization, annotation import | PyDICOM, HighDICOM | -| `PageBuilder` | Class | Web interface registration | HTTP endpoint registration | NiceGUI (optional) | -| `cli` | Typer | Command-line interface | WSI inspection, DICOM analysis | Typer, Rich | - -_Note: For detailed implementation, refer to the source code in the `src/aignostics/wsi/` directory._ - -### 2.3 Design Patterns - -- **Handler Pattern**: Separate handlers (`OpenSlideHandler`, `PydicomHandler`) for different image formats provide format-specific processing while maintaining a unified interface -- **Service Layer Pattern**: `Service` class encapsulates business logic and coordinates between handlers -- **Factory Pattern**: `from_file()` class methods create appropriate handler instances based on file type -- **Strategy Pattern**: Different processing strategies for DICOM vs non-DICOM formats - ---- - -## 3. 
Inputs and Outputs - -### 3.1 Inputs - -| Input Type | Source | Data Type/Format | Validation Rules | Business Rules | -| --------------- | ----------- | ---------------- | ------------------------------------------------ | -------------------------------------------------- | -| WSI File Path | CLI/Service | `Path` object | Must exist, extension in supported formats | File must be readable, format must be supported | -| TIFF URL | HTTP API | `str` URL | Must start with 'http://localhost' or 'https://' | URL must be accessible, content must be valid TIFF | -| DICOM Directory | CLI | `Path` object | Must exist and be accessible | Directory must contain valid DICOM files | -| GeoJSON File | CLI | `Path` object | Must exist, valid JSON format | Must contain valid geometric annotations | -| CLI Options | CLI | Boolean flags | Standard CLI validation | Options control output verbosity and format | - -### 3.2 Outputs - -| Output Type | Destination | Data Type/Format | Success Criteria | Error Conditions | -| --------------- | ----------------- | --------------------- | ----------------------------------------------------- | ------------------------------------------- | -| Thumbnail Image | HTTP Response/PIL | PNG bytes/PIL.Image | 256x256 pixel PNG image | File processing failure, format unsupported | -| WSI Metadata | CLI/Service | Structured dictionary | Complete metadata with dimensions, resolution, levels | File corruption, missing metadata | -| JPEG Image | HTTP Response | JPEG bytes | Successfully converted TIFF to JPEG | Network failure, invalid TIFF format | -| CLI Output | Terminal | Formatted text | Human-readable metadata display | Processing errors, missing files | -| DICOM Hierarchy | Terminal | Formatted text | Study/slide/series organization | Invalid DICOM structure | -| Fallback Image | HTTP Response | PNG redirect | Default image served on errors | No error conditions | - -### 3.3 Data Schemas - -**WSI Metadata Schema:** - -```yaml -metadata: - type: object - description: Based on OpenSlideHandler.get_metadata() output structure - properties: - format: - type: string - description: Detected WSI format (e.g., 'generic-tiff', 'aperio', 'hamamatsu-vms') - level_count: - type: integer - description: Number of pyramid levels - dimensions: - type: array - description: Width and height for each level - items: - type: array - items: [{ type: integer }, { type: integer }] - description: [width, height] tuple for each level - level_downsamples: - type: array - description: Downsample factor for each level - items: - type: number - mpp_x: - type: number - nullable: true - description: Microns per pixel in X direction - mpp_y: - type: number - nullable: true - description: Microns per pixel in Y direction - vendor: - type: string - nullable: true - description: Scanner vendor information - background_color: - type: array - nullable: true - description: Background color as RGB tuple - items: - type: integer - associated_images: - type: object - description: Dictionary of associated image names - additionalProperties: - type: object - properties: - type: object - description: Raw slide properties from OpenSlide - additionalProperties: true -``` - -**DICOM Hierarchy Schema:** - -```yaml -hierarchy: - type: object - description: Based on PydicomHandler._organize_by_hierarchy() output structure - properties: - studies: - type: object - description: Study instances organized by StudyInstanceUID - patternProperties: - "^[0-9.]+$": - type: object - description: Study instance data - properties: - 
study_uid: - type: string - description: Study Instance UID - study_date: - type: string - description: Study Date (YYYYMMDD format) - study_time: - type: string - description: Study Time (HHMMSS format) - patient_name: - type: string - description: Patient's Name - patient_id: - type: string - description: Patient ID - accession_number: - type: string - description: Accession Number - study_description: - type: string - description: Study Description - slides: - type: object - description: Slide instances organized by unique slide identifier - patternProperties: - ".*": - type: object - description: Slide instance data - properties: - metadata: - type: object - description: Slide-level metadata - properties: - slide_id: - type: string - description: Unique slide identifier - container_identifier: - type: string - description: Container Identifier - specimen_label_in_image: - type: string - description: Specimen Label in Image - specimen_short_description: - type: string - description: Specimen Short Description - specimen_detailed_description: - type: string - description: Specimen Detailed Description - series: - type: array - description: Series instances for this slide - items: - type: object - properties: - series_uid: - type: string - description: Series Instance UID - series_number: - type: string - description: Series Number - modality: - type: string - description: Modality (typically 'SM' for Slide Microscopy) - series_description: - type: string - description: Series Description - instances: - type: array - description: Instance files in this series - items: - type: object - properties: - sop_instance_uid: - type: string - description: SOP Instance UID - instance_number: - type: string - description: Instance Number - file_path: - type: string - description: Path to DICOM file -``` - -_Note: Complete schemas are maintained in the implementation and auto-generated documentation._ - -### 3.4 Data Flow - -```mermaid -graph TB - subgraph "Input Layer" - A[WSI File Path] - B[TIFF URL] - C[DICOM Directory] - D[GeoJSON File] - end - - subgraph "Processing Layer" - E[Service Class] - F[OpenSlideHandler] - G[PydicomHandler] - end - - subgraph "Output Layer" - H[PNG Thumbnail] - I[JPEG Image] - J[Metadata JSON] - K[CLI Display] - L[HTTP Response] - end - - A --> E - B --> E - C --> G - D --> G - - E --> F - E --> G - - F --> H - F --> J - G --> J - G --> K - E --> I - E --> L - - style E fill:#e1f5fe - style F fill:#f3e5f5 - style G fill:#f3e5f5 -``` - ---- - -## 4. 
Interface Definitions - -### 4.1 Public API - -#### Core Service Interface - -**Service Class**: `WSIService` - -- **Purpose**: Central service for WSI processing and format conversion -- **Key Methods**: -  - `get_thumbnail(path: Path) -> PIL.Image`: Generate 256x256 pixel thumbnail from WSI file -  - `get_thumbnail_bytes(path: Path) -> bytes`: Return thumbnail as PNG bytes for HTTP responses -  - `get_metadata(path: Path) -> dict`: Extract comprehensive metadata including dimensions and resolution -  - `get_tiff_as_jpg(url: str) -> bytes`: Convert TIFF from URL to JPEG format - -**Input/Output Contracts**: - -- **Input Types**: Path objects for local files, URL strings for remote TIFF processing -- **Output Types**: PIL Image objects, PNG/JPEG bytes, structured metadata dictionaries -- **Error Conditions**: `ValueError` for invalid inputs, `RuntimeError` for processing failures - -_Note: For detailed method signatures, refer to the module's `_service.py` and auto-generated API documentation._ - -### 4.2 CLI Interface - -**Command Structure:** - -```bash -uvx aignostics wsi [subcommand] [options] -``` - -**Available Commands:** - -| Command | Purpose | Input Requirements | Output Format | | -------------------------------------------------- | ----------------------- | -------------------------------------- | ------------------------------------- | -| `inspect <path>` | Display WSI metadata | Path to WSI file | Formatted metadata display | -| `dicom inspect <path>` | Analyze DICOM hierarchy | Path to DICOM file/directory | Hierarchical study/slide organization | -| `dicom geojson_import <dicom_path> <geojson_path>` | Import annotations | DICOM file and GeoJSON annotation file | Import status and validation results | - -**Common Options:** - -- `--help`: Display command help -- `--verbose`: Enable detailed output for DICOM commands -- `--summary`: Show only summary information for DICOM hierarchy - -### 4.3 HTTP/Web Interface - -**Endpoint Structure:** - -| Method | Endpoint | Purpose | Request Format | Response Format | | ------ | -------------------------- | -------------------- | -------------------------- | ------------------------------- | -| `GET` | `/thumbnail?source=<path>` | Serve WSI thumbnail | Query parameter: file path | PNG image or fallback redirect | -| `GET` | `/tiff?url=<url>` | Convert TIFF to JPEG | Query parameter: TIFF URL | JPEG image or fallback redirect | -| `GET` | `/wsi_assets/fallback.png` | Fallback image | None | PNG image | - -**Authentication**: No authentication required (local service) -**Error Responses**: HTTP redirects to fallback image on processing errors - ---- - -## 5. 
Dependencies and Integration - -### 5.1 Internal Dependencies - -| Dependency Module | Usage Purpose | Interface/Contract Used | Criticality | -| ----------------- | ------------------------------------------- | ---------------------------------------- | ----------- | -| Utils Module | Base service class, logging, console output | `BaseService`, `get_logger()`, `console` | Required | -| Constants Module | Supported file extensions definition | `WSI_SUPPORTED_FILE_EXTENSIONS` constant | Required | -| Utils GUI | Base page builder for web interface | `BasePageBuilder` class | Optional | - -### 5.2 External Dependencies - -| Dependency | Min Version | Purpose | Optional/Required | Fallback Behavior | -| ------------------ | ----------- | -------------------------------------- | ---------------------- | ----------------------------------- | -| `openslide-python` | Latest | Reading TIFF/SVS formats | Required for non-DICOM | Clear error with installation guide | -| `pydicom` | Latest | DICOM file parsing | Required for DICOM | Error message for DICOM operations | -| `highdicom` | Latest | DICOM annotation handling | Required for DICOM | Annotation features unavailable | -| `Pillow (PIL)` | Latest | Image processing and format conversion | Required | Core functionality fails | -| `shapely` | Latest | Geometry processing for annotations | Required for GeoJSON | GeoJSON import fails | -| `requests` | Latest | HTTP requests for TIFF URL processing | Required | URL-based TIFF conversion fails | -| `typer` | Latest | CLI framework | Required | CLI unavailable | -| `nicegui` | Latest | Web interface framework | Optional | Web endpoints unavailable | - -_Note: For exact version requirements, refer to `pyproject.toml` and dependency lock files._ - -### 5.3 Integration Points - -- **Aignostics Platform API**: Provides WSI processing capabilities for the broader platform -- **QuPath Integration**: Metadata extraction supports QuPath project creation workflows -- **Web Applications**: HTTP endpoints enable thumbnail serving for browser-based viewers -- **File System**: Direct access to local WSI files for processing - ---- - -## 6. Configuration and Settings - -### 6.1 Configuration Parameters - -| Parameter | Type | Default | Description | Required | -| ------------------------------- | ----------------- | ----------------------------------- | ------------------------------- | -------- | -| `WSI_SUPPORTED_FILE_EXTENSIONS` | `set[str]` | `{".dcm", ".tiff", ".tif", ".svs"}` | Supported WSI file extensions | Yes | -| `TIMEOUT` | `int` | `60` | HTTP request timeout in seconds | No | -| Thumbnail size | `tuple[int, int]` | `(256, 256)` | Standard thumbnail dimensions | No | - -### 6.2 Environment Variables - -| Variable | Purpose | Example Value | -| ---------------------- | ----------------------------------------- | ------------------------ | -| `NICEGUI_STORAGE_PATH` | Storage path for NiceGUI web interface | `~/.aignostics/.nicegui` | -| `MATPLOTLIB` | Disable matplotlib for headless operation | `"false"` | - ---- - -## 7. 
Error Handling and Validation - -### 7.1 Error Categories - -| Error Type | Cause | Handling Strategy | User Impact | -| ------------------------ | --------------------------------------------------- | -------------------------------------------------- | --------------------------------------------- | -| `ValueError` | File doesn't exist, unsupported format, invalid URL | Log warning, raise with descriptive message | Clear error message indicating specific issue | -| `RuntimeError` | Processing failure, conversion error | Log exception with stack trace, raise with context | Error message with troubleshooting guidance | -| `OpenSlideError` | Corrupted WSI file, missing OpenSlide | Graceful degradation, fallback image serving | Fallback image displayed, error logged | -| `HTTPError` | Network issues with TIFF URL | Log warning, return appropriate HTTP status | HTTP error response with descriptive message | -| `UnidentifiedImageError` | Invalid image format from URL | Log warning, validate input format | Clear format validation error | - -### 7.2 Input Validation - -- **File Path Validation**: Check file existence, extension in `WSI_SUPPORTED_FILE_EXTENSIONS`, readable permissions -- **URL Validation**: Must start with 'http://localhost' or 'https://', proper URL format validation -- **DICOM File Validation**: Verify DICOM tags, proper metadata structure, hierarchy validation -- **GeoJSON Validation**: Valid JSON format, proper geometry structure, coordinate bounds checking - -### 7.3 Graceful Degradation - -- **When OpenSlide is unavailable**: Clear error message with installation instructions for non-DICOM formats -- **When DICOM dependencies are missing**: Error message indicating PyDICOM/HighDICOM installation needed -- **When image processing fails**: Fallback to default thumbnail image via HTTP redirect -- **When network requests timeout**: Configurable timeout with appropriate error response - ---- - -## 8. Security Considerations - -### 8.1 Data Protection - -- **File System Access**: Direct file system access limited to readable files, no write operations -- **URL Validation**: Strict validation requiring localhost or HTTPS protocols to prevent SSRF attacks -- **Input Sanitization**: Path validation, file extension verification, format validation -- **Error Information**: Error messages avoid exposing sensitive file system paths or internal details - -### 8.2 Security Measures - -- **Input Validation**: All file paths and URLs validated before processing -- **Resource Limits**: Configurable timeouts prevent resource exhaustion from long-running requests -- **Fallback Mechanisms**: Secure fallback to default images prevents information disclosure -- **Logging**: Security-relevant events logged without exposing sensitive data - ---- - -## 9. 
Implementation Details - -### 9.1 Key Algorithms and Business Logic - -- **Format Detection**: Multi-stage detection using file extensions, OpenSlide format detection, and DICOM tag analysis to determine appropriate processing strategy -- **Thumbnail Generation**: Efficient downsampling using library-specific thumbnail generation with standardized 256x256 output for consistent web display -- **Metadata Extraction**: Format-specific parsers that extract resolution, dimensions, and pyramidal level information while preserving coordinate precision -- **Coordinate Transformation**: Conversion between pixel coordinates, micron measurements, and geographic coordinate systems for annotation compatibility - -### 9.2 State Management and Data Flow - -- **State Type**: Stateless service design ensures thread safety and prevents memory leaks from large image references -- **Data Persistence**: No persistent state maintained; configuration loaded from constants and environment variables -- **Cache Strategy**: No caching implemented; each request processes fresh data to ensure accuracy - -### 9.3 Performance and Scalability Considerations - -- **Performance Characteristics**: Memory-efficient processing of multi-gigabyte files through streaming operations and metadata-only parsing -- **Scalability Patterns**: Stateless design enables horizontal scaling; concurrent requests handled safely through independent processing -- **Resource Management**: Careful memory management with automatic cleanup, configurable timeouts, and fallback mechanisms for resource exhaustion -- **Concurrency Model**: Thread-safe operations with per-request resource isolation and no shared state between concurrent processes - ---- - -## Documentation Maintenance - -### Verification and Updates - -**Last Verified**: September 10, 2025 -**Verification Method**: Code review against implementation in `src/aignostics/wsi/` and test verification in `tests/aignostics/wsi/` -**Next Review Date**: December 10, 2025 (quarterly review) - -### Change Management - -**Interface Changes**: Changes to public APIs require spec updates and version bumps -**Implementation Changes**: Internal changes don't require spec updates unless behavior changes -**Dependency Changes**: Major dependency changes should be reflected in constraints section - -### References - -**Implementation**: See `src/aignostics/wsi/` for current implementation -**Tests**: See `tests/aignostics/wsi/` for usage examples and verification -**API Documentation**: Auto-generated from docstrings in service classes diff --git a/src/aignostics/CLAUDE.md b/src/aignostics/CLAUDE.md index cb797405..a6d673bb 100644 --- a/src/aignostics/CLAUDE.md +++ b/src/aignostics/CLAUDE.md @@ -21,18 +21,12 @@ This file provides a comprehensive overview of all modules in the Aignostics SDK ### 🔐 platform -**Foundation module providing authentication, API access, and SDK metadata tracking** - -- **Core Features**: - - OAuth 2.0 authentication, JWT token management, API client wrapper - - **SDK Metadata System** (NEW): Automatic tracking of execution context, user info, CI/CD environment - - JSON Schema validation for metadata with versioning (v0.0.1) - - Operation caching for non-mutating API calls -- **CLI**: - - `user login`, `user logout`, `user whoami` for authentication - - `sdk metadata-schema` for JSON Schema export -- **Dependencies**: `utils` (logging, user_agent generation) -- **Used By**: All modules requiring API access; application module for automatic metadata attachment +**Foundation module 
providing authentication and API access** + +- **Core Features**: OAuth 2.0 authentication, JWT token management, API client wrapper +- **CLI**: `login`, `logout`, `whoami` commands for authentication +- **Dependencies**: None (foundation layer) +- **Used By**: All modules requiring API access ### 🚀 application @@ -78,13 +72,10 @@ This file provides a comprehensive overview of all modules in the Aignostics SDK **Core infrastructure and shared utilities** -- **Core Features**: - - Dependency injection, logging, settings, health checks - - **Enhanced User Agent** (NEW): Context-aware user agent with CI/CD tracking +- **Core Features**: Dependency injection, logging, settings, health checks - **Service Discovery**: `locate_implementations()`, `locate_subclasses()` -- **User Agent**: Generates `{name}/{version} ({platform}; {test}; {github_run_url})` - **No CLI/GUI**: Infrastructure module -- **Used By**: All modules; platform module for SDK metadata +- **Used By**: All modules ### 🖥️ gui @@ -266,7 +257,7 @@ aignostics gui For detailed information about each module, see: -- [platform/CLAUDE.md](platform/CLAUDE.md) - Authentication, API client, and SDK metadata system +- [platform/CLAUDE.md](platform/CLAUDE.md) - Authentication and API details - [application/CLAUDE.md](application/CLAUDE.md) - Application orchestration - [wsi/CLAUDE.md](wsi/CLAUDE.md) - Image processing - [dataset/CLAUDE.md](dataset/CLAUDE.md) - Dataset operations diff --git a/src/aignostics/__init__.py b/src/aignostics/__init__.py index 68e00eb8..b1d21de5 100644 --- a/src/aignostics/__init__.py +++ b/src/aignostics/__init__.py @@ -2,13 +2,7 @@ import os -from .constants import ( - HETA_APPLICATION_ID, - MODULES_TO_INSTRUMENT, - TEST_APP_APPLICATION_ID, - WSI_SUPPORTED_FILE_EXTENSIONS, - WSI_SUPPORTED_FILE_EXTENSIONS_TEST_APP, -) +from .constants import MODULES_TO_INSTRUMENT, WSI_SUPPORTED_FILE_EXTENSIONS from .utils.boot import boot # Add scheme to HTTP proxy environment variables if missing @@ -19,9 +13,4 @@ boot(modules_to_instrument=MODULES_TO_INSTRUMENT) -__all__ = [ - "HETA_APPLICATION_ID", - "TEST_APP_APPLICATION_ID", - "WSI_SUPPORTED_FILE_EXTENSIONS", - "WSI_SUPPORTED_FILE_EXTENSIONS_TEST_APP", -] +__all__ = ["WSI_SUPPORTED_FILE_EXTENSIONS"] diff --git a/src/aignostics/application/CLAUDE.md b/src/aignostics/application/CLAUDE.md index cc9c9231..69562904 100644 --- a/src/aignostics/application/CLAUDE.md +++ b/src/aignostics/application/CLAUDE.md @@ -13,7 +13,6 @@ The application module provides high-level orchestration for AI/ML applications - **Progress Tracking**: Multi-stage progress monitoring with real-time updates and QuPath integration - **File Processing**: WSI validation, chunked uploads, CRC32C integrity verification - **State Management**: Complex state machines for run lifecycle with error recovery -- **SDK Metadata Integration**: Automatic attachment of SDK context metadata to all submitted runs - **Integration Hub**: Bridges platform, WSI, bucket, and QuPath services seamlessly ### User Interfaces @@ -38,7 +37,6 @@ The application module provides high-level orchestration for AI/ML applications **Service Layer (`_service.py`):** Core application operations: - - Application listing and version management (semver validation) - Run lifecycle management (submit, monitor, complete) - File upload with chunking (1MB chunks) and CRC32C verification @@ -48,25 +46,6 @@ Core application operations: ## Architecture & Design Patterns -### Module Structure (NEW in v1.0.0-beta.7) - -The application module is organized into 
focused submodules: - -``` -application/ -├── _service.py # High-level orchestration and API integration -├── _models.py # Data models (DownloadProgress, DownloadProgressState) [NEW] -├── _download.py # Download helpers with progress tracking [NEW] -├── _utils.py # Shared utilities -├── _cli.py # CLI commands -└── _gui/ # GUI components -``` - -**Key Separation:** -- **_models.py**: Pydantic models for progress tracking with computed fields -- **_download.py**: Pure download logic (URLs, artifacts, progress callbacks) -- **_service.py**: High-level business logic and module integration - ### Service Layer Architecture ``` @@ -75,7 +54,6 @@ application/ │ (High-Level Orchestration) │ ├────────────────────────────────────────────┤ │ Progress Tracking & State Management │ -│ (_models.py - NEW) │ ├────────────────────────────────────────────┤ │ Integration Layer │ │ ┌──────────┬───────────┬──────────┐ │ @@ -85,19 +63,18 @@ application/ ├────────────────────────────────────────────┤ │ File Processing Layer │ │ (Upload, Download, Verification) │ -│ (_download.py - NEW) │ └────────────────────────────────────────────┘ ``` ### State Machine Design ```python -RunState: +ApplicationRunStatus: QUEUED → RUNNING → COMPLETED ↓ FAILED / CANCELLED -ItemState: +ItemStatus: PENDING → PROCESSING → COMPLETED ↓ FAILED @@ -124,33 +101,41 @@ DownloadProgress: **Actual Semantic Version Validation:** ```python -def application_version(self, application_id: str, - version_number: str | None = None) -> ApplicationVersion: - """Validate and retrieve application version. - - Args: - application_id: The ID of the application (e.g., 'heta') - version_number: The semantic version number (e.g., '1.0.0') - If None, returns the latest version - - Returns: - ApplicationVersion with application_id and version_number attributes - """ - # Delegates to platform client which validates semver format - # Platform client uses Versions resource internally - return self.platform_client.application_version( - application_id=application_id, - version_number=version_number - ) +def application_version(self, application_version_id: str, + use_latest_if_no_version_given: bool = True): + """Validate and retrieve application version.""" + + # Pattern: application_id:vX.Y.Z + match = re.match(r"^([^:]+):v(.+)$", application_version_id) + + # Uses semver library for validation + if not match or not semver.Version.is_valid(match.group(2)): + if use_latest_if_no_version_given: + # Try to find latest version + application_id = match.group(1) if match else application_version_id + latest_version = self.application_version_latest(self.application(application_id)) + if latest_version: + return latest_version + raise ValueError(f"No valid version found, no latest version available") + + raise ValueError(f"Invalid application version id format: {application_version_id}. 
" + "Expected format: application_id:vX.Y.Z") + + # Lookup version in application + application_id = match.group(1) + application = self.application(application_id) + for version in self.application_versions(application): + if version.application_version_id == application_version_id: + return version + raise NotFoundException(f"Version {application_version_id} not found") ``` **Key Points:** -- Application ID and version number are now **separate parameters** -- Version format: semantic version string without 'v' prefix (e.g., `"1.0.0"`, not `"v1.0.0"`) -- Uses `semver.Version.is_valid()` for validation in the platform layer -- Falls back to latest version if `version_number` is `None` -- Returns `ApplicationVersion` object with `application_id` and `version_number` attributes +- Uses `semver.Version.is_valid()` for validation (NOT custom regex) +- Version MUST have 'v' prefix: `app-id:v1.2.3` +- Falls back to latest version if configured +- Iterates through versions to find match ### File Processing Constants (Actual Values) @@ -164,21 +149,19 @@ APPLICATION_RUN_DOWNLOAD_SLEEP_SECONDS = 5 # Wait between status checks ### Progress State Management -**Actual DownloadProgress Model (`_models.py`):** +**Actual DownloadProgress Model:** ```python class DownloadProgress(BaseModel): - """Model for tracking download progress with computed progress metrics.""" - # Core state status: DownloadProgressState = DownloadProgressState.INITIALIZING # Run and item tracking - run: RunData | None = None + run: ApplicationRunData | None = None item: ItemResult | None = None item_count: int | None = None item_index: int | None = None - item_external_id: str | None = None + item_reference: str | None = None # Artifact tracking artifact: OutputArtifactElement | None = None @@ -190,13 +173,6 @@ class DownloadProgress(BaseModel): artifact_downloaded_chunk_size: int = 0 # Last chunk size artifact_downloaded_size: int = 0 # Total downloaded - # Input slide tracking (NEW in v1.0.0-beta.7) - input_slide_path: Path | None = None - input_slide_url: str | None = None - input_slide_size: int | None = None - input_slide_downloaded_chunk_size: int = 0 - input_slide_downloaded_size: int = 0 - # QuPath integration (conditional) if has_qupath_extra: qupath_add_input_progress: QuPathAddProgress | None = None @@ -206,42 +182,15 @@ class DownloadProgress(BaseModel): @computed_field @property def total_artifact_count(self) -> int | None: - """Calculate total number of artifacts across all items.""" if self.item_count and self.artifact_count: return self.item_count * self.artifact_count return None - @computed_field - @property - def total_artifact_index(self) -> int | None: - """Calculate the current artifact index across all items.""" - if self.item_count and self.artifact_count and self.item_index is not None and self.artifact_index is not None: - return self.item_index * self.artifact_count + self.artifact_index - return None - @computed_field @property def item_progress_normalized(self) -> float: - """Normalized progress 0..1 across all items. - - Handles different progress states: - - DOWNLOADING_INPUT: Progress through items being downloaded - - DOWNLOADING: Progress through artifacts being downloaded - - QUPATH_*: QuPath-specific progress tracking - """ - # Implementation varies by state... - - @computed_field - @property - def artifact_progress_normalized(self) -> float: - """Normalized progress 0..1 for current artifact/input download. 
- - Handles different download types: - - DOWNLOADING_INPUT: Input slide download progress - - DOWNLOADING: Artifact download progress - - QUPATH_ANNOTATE: QuPath annotation progress - """ - # Implementation varies by state... + """Normalized progress 0..1 across all items.""" + # Implementation details... ``` ### QuPath Integration (Conditional Loading) @@ -266,13 +215,11 @@ def process_with_qupath(self, ...): # QuPath processing... ``` -**Download Progress States (`_models.py`):** +**Download Progress States:** ```python class DownloadProgressState(StrEnum): - """Enum for download progress states.""" INITIALIZING = "Initializing ..." - DOWNLOADING_INPUT = "Downloading input slide ..." # NEW in v1.0.0-beta.7 QUPATH_ADD_INPUT = "Adding input slides to QuPath project ..." CHECKING = "Checking run status ..." WAITING = "Waiting for item completing ..." @@ -282,146 +229,6 @@ class DownloadProgressState(StrEnum): COMPLETED = "Completed." ``` -### Download Module (`_download.py` - NEW in v1.0.0-beta.7) - -The download module provides reusable download helper functions with comprehensive progress tracking. - -**Key Functions:** - -```python -def extract_filename_from_url(url: str) -> str: - """Extract a filename from a URL robustly. - - Supports: - - gs:// (Google Cloud Storage) - - http:// and https:// URLs - - Handles query parameters and trailing slashes - - Sanitizes filenames for filesystem use - - Examples: - >>> extract_filename_from_url("gs://bucket/path/to/file.tiff") - 'file.tiff' - >>> extract_filename_from_url("https://example.com/slides/sample.svs?token=abc") - 'sample.svs' - """ - -def download_url_to_file_with_progress( - progress: DownloadProgress, - url: str, - destination_path: Path, - download_progress_queue: Any | None = None, - download_progress_callable: Callable | None = None, -) -> Path: - """Download a file from a URL (gs://, http://, or https://) with progress tracking. - - Features: - - Converts gs:// URLs to signed URLs automatically - - Streams downloads with 1MB chunks (APPLICATION_RUN_DOWNLOAD_CHUNK_SIZE) - - Updates progress on every chunk - - Supports both queue and callback progress updates - - Creates parent directories automatically - - Args: - progress: Progress tracking object (updated in place) - url: URL to download (gs://, http://, https://) - destination_path: Local file path to save to - download_progress_queue: Optional queue for GUI progress updates - download_progress_callable: Optional callback for CLI progress updates - - Returns: - Path: The path to the downloaded file - - Raises: - ValueError: If URL scheme is unsupported - RuntimeError: If download fails - """ - -def download_available_items( - progress: DownloadProgress, - application_run: Run, - destination_directory: Path, - downloaded_items: set[str], - create_subdirectory_per_item: bool = False, - download_progress_queue: Any | None = None, - download_progress_callable: Callable | None = None, -) -> None: - """Download items that are available and not yet downloaded. 
- - Features: - - Only downloads TERMINATED items with FULL output - - Skips already downloaded items (tracked via external_id) - - Optional subdirectory per item - - Progress tracking for each item and artifact - - Args: - progress: Progress tracking object - application_run: Run object with results - destination_directory: Directory to save files - downloaded_items: Set of already downloaded external_ids - create_subdirectory_per_item: Create item subdirectories - download_progress_queue: Optional queue for GUI updates - download_progress_callable: Optional callback for CLI updates - """ - -def download_item_artifact( - progress: DownloadProgress, - artifact: Any, - destination_directory: Path, - prefix: str = "", - download_progress_queue: Any | None = None, - download_progress_callable: Callable | None = None, -) -> None: - """Download an artifact of a result item with progress tracking. - - Features: - - CRC32C checksum verification - - Skips download if file exists with correct checksum - - Automatic file extension detection - - Chunked downloads with progress updates - - Raises: - ValueError: If no checksum metadata found or checksum mismatch - requests.HTTPError: If download fails - """ -``` - -**Constants:** - -```python -# From _download.py -APPLICATION_RUN_FILE_READ_CHUNK_SIZE = 1024 * 1024 * 1024 # 1GB (for checksum calculation) -APPLICATION_RUN_DOWNLOAD_CHUNK_SIZE = 1024 * 1024 # 1MB (for streaming downloads) -``` - -**URL Support:** - -The download module supports three URL schemes: -1. **gs://** - Google Cloud Storage (converted to signed URLs via `platform.generate_signed_url()`) -2. **http://** - HTTP URLs (used directly) -3. **https://** - HTTPS URLs (used directly) - -**Progress Update Pattern:** - -```python -def update_progress( - progress: DownloadProgress, - download_progress_callable: Callable | None = None, - download_progress_queue: Any | None = None, -) -> None: - """Update download progress via callback or queue. - - Dual update mechanism: - - Callback: Synchronous update (CLI, blocking) - - Queue: Asynchronous update (GUI, non-blocking) - - Both can be used simultaneously. 
- """ - if download_progress_callable: - download_progress_callable(progress) - if download_progress_queue: - download_progress_queue.put_nowait(progress) -``` - ## Usage Patterns & Best Practices ### Basic Application Execution @@ -436,31 +243,21 @@ apps = service.list_applications() # Get specific version (actual pattern) try: - # Application ID and version are separate parameters + # Requires 'v' prefix app_version = service.application_version( - application_id="heta", - version_number="2.1.0" # Semantic version without 'v' prefix - ) - # Access attributes - print(f"Application: {app_version.application_id}") - print(f"Version: {app_version.version_number}") - - # Get latest version - latest = service.application_version( - application_id="heta", - version_number=None # Returns latest version + "heta:v2.1.0", # Must be app-id:vX.Y.Z format + use_latest_if_no_version_given=True ) except ValueError as e: - # Handle invalid version format + # Handle invalid format or missing version logger.error(f"Version error: {e}") except NotFoundException as e: - # Handle missing application or version + # Handle missing application logger.error(f"Application not found: {e}") # Run application (simplified - actual has more parameters) run = service.run_application( application_id="heta", - application_version="2.1.0", # Optional, uses latest if omitted files=["slide1.svs", "slide2.tiff"] ) ``` @@ -491,73 +288,29 @@ def upload_file(self, file_path: Path, signed_url: str): ### Download with Progress (Actual Pattern) -**Basic download with progress callback:** - -```python -from aignostics.application._download import download_url_to_file_with_progress -from aignostics.application._models import DownloadProgress -from pathlib import Path - -# Create progress object -progress = DownloadProgress() - -# Define progress callback -def on_progress(p: DownloadProgress): - if p.input_slide_size: - percent = (p.input_slide_downloaded_size / p.input_slide_size) * 100 - print(f"Downloaded: {percent:.1f}%") - -# Download from gs://, http://, or https:// -downloaded_file = download_url_to_file_with_progress( - progress=progress, - url="gs://my-bucket/slides/sample.svs", - destination_path=Path("./downloads/sample.svs"), - download_progress_callable=on_progress -) - -print(f"Downloaded to: {downloaded_file}") -``` - -**Download with GUI queue (non-blocking):** - ```python -from queue import Queue -from aignostics.application._download import download_url_to_file_with_progress - -# Create queue for GUI updates -progress_queue = Queue() - -# Download in background thread -download_url_to_file_with_progress( - progress=DownloadProgress(), - url="https://example.com/slide.tiff", - destination_path=Path("./slide.tiff"), - download_progress_queue=progress_queue # Non-blocking updates -) - -# In GUI thread, poll queue -while not progress_queue.empty(): - progress = progress_queue.get() - ui.update(progress.artifact_progress_normalized) -``` - -**Download application run results:** +def download_artifact(self, url: str, output_path: Path, progress_callback): + """Download with progress tracking.""" + + response = requests.get(url, stream=True) + total_size = int(response.headers.get("Content-Length", 0)) + + downloaded = 0 + with output_path.open("wb") as f: + for chunk in response.iter_content(chunk_size=APPLICATION_RUN_DOWNLOAD_CHUNK_SIZE): + f.write(chunk) + downloaded += len(chunk) + + # Update progress + progress = DownloadProgress( + status=DownloadProgressState.DOWNLOADING, + artifact_downloaded_chunk_size=len(chunk), 
+ artifact_downloaded_size=downloaded, + artifact_size=total_size + ) -```python -from aignostics.application._download import download_available_items - -# Track downloaded items to avoid re-downloading -downloaded_items = set() - -# Download all available items -download_available_items( - progress=DownloadProgress(), - application_run=run, - destination_directory=Path("./results"), - downloaded_items=downloaded_items, - create_subdirectory_per_item=True, # Create dirs per item - download_progress_callable=lambda p: print(f"Item {p.item_index}/{p.item_count}") -) + if progress_callback: + progress_callback(progress) ``` ## Testing Strategies (Actual Test Patterns) @@ -567,47 +320,39 @@ download_available_items( ```python def test_application_version_valid_semver_formats(): """Test valid semver formats.""" - valid_versions = [ - "1.0.0", - "1.2.3", - "10.20.30", - "1.1.2-prerelease+meta", - "1.0.0-alpha", - "1.0.0-beta", - "1.0.0-alpha.beta", - "1.0.0-rc.1+meta", + valid_formats = [ + "test-app:v1.0.0", + "test-app:v1.2.3", + "test-app:v10.20.30", + "test-app:v1.1.2-prerelease+meta", + "test-app:v1.0.0-alpha", + "test-app:v1.0.0-beta", + "test-app:v1.0.0-alpha.beta", + "test-app:v1.0.0-rc.1+meta", ] - for version in valid_versions: + for version_id in valid_formats: try: - result = service.application_version( - application_id="test-app", - version_number=version - ) - assert result.application_id == "test-app" - assert result.version_number == version + service.application_version(version_id) except ValueError as e: - pytest.fail(f"Valid format '{version}' rejected: {e}") + pytest.fail(f"Valid format '{version_id}' rejected: {e}") except NotFoundException: # Application doesn't exist, but format is valid - pytest.skip(f"Application not found for test-app") + pytest.skip(f"Application not found for {version_id}") def test_application_version_invalid_semver_formats(): """Test invalid formats are rejected.""" - invalid_versions = [ - "v1.0.0", # 'v' prefix not allowed - "1.0", # Incomplete version - "1.0.0-", # Trailing dash - "", # Empty string - "not-semver", # Not a valid semver + invalid_formats = [ + "test-app:1.0.0", # Missing 'v' prefix + "test-app:v1.0", # Incomplete version + "test-app:v1.0.0-", # Trailing dash + ":v1.0.0", # Missing application ID + "no-colon-v1.0.0", # Missing colon separator ] - for version in invalid_versions: - with pytest.raises(ValueError, match="Invalid version format"): - service.application_version( - application_id="test-app", - version_number=version - ) + for version_id in invalid_formats: + with pytest.raises(ValueError, match="Invalid application version id format"): + service.application_version(version_id) ``` ### Use Latest Fallback Test @@ -618,20 +363,17 @@ def test_application_version_use_latest_fallback(): service = ApplicationService() try: - # Get latest version by passing None + # Try with just application ID (no version) result = service.application_version( - application_id=HETA_APPLICATION_ID, - version_number=None # Falls back to latest + HETA_APPLICATION_ID, + use_latest_if_no_version_given=True ) assert result is not None - assert result.application_id == HETA_APPLICATION_ID - assert result.version_number is not None - # version_number should be valid semver - assert semver.Version.is_valid(result.version_number) - except NotFoundException as e: - if "No versions found" in str(e): + assert result.application_version_id.startswith(f"{HETA_APPLICATION_ID}:v") + except ValueError as e: + if "no latest version available" in str(e): # 
Expected if no versions exist - pytest.skip(f"No versions available for {HETA_APPLICATION_ID}") + pass else: pytest.fail(f"Unexpected error: {e}") ``` @@ -677,22 +419,13 @@ logger.error("Application version validation failed", extra={ ### Semver Format Issues -**Problem:** Using incorrect version format or combining application ID with version +**Problem:** Missing 'v' prefix in version **Solution:** - ```python -# Correct: Separate application_id and version_number -app_version = service.application_version( - application_id="heta", - version_number="1.2.3" # No 'v' prefix -) - -# Wrong: Old combined format -# app_version = service.application_version("heta:v1.2.3") # No longer supported - -# Wrong: Version with 'v' prefix -# version_number="v1.2.3" # Will fail validation +# Always include 'v' prefix +version_id = "app-id:v1.2.3" # Correct +# NOT: "app-id:1.2.3" # Wrong ``` ### QuPath Availability @@ -700,7 +433,6 @@ app_version = service.application_version( **Problem:** QuPath features not working **Solution:** - ```python # Check if ijson is installed if not has_qupath_extra: @@ -712,7 +444,6 @@ if not has_qupath_extra: **Problem:** Memory issues with large files **Solution:** - ```python # Use streaming with appropriate chunk size chunk_size = APPLICATION_RUN_FILE_READ_CHUNK_SIZE # 1GB @@ -723,44 +454,20 @@ with open(file_path, 'rb') as f: ## Module Dependencies -### Internal Module Organization (NEW in v1.0.0-beta.7) +### Internal Dependencies -The application module is organized into focused submodules: - -- **`_service.py`** - High-level orchestration, API integration, run lifecycle management -- **`_models.py`** - Data models (DownloadProgress, DownloadProgressState) -- **`_download.py`** - Download helpers with progress tracking and checksum verification -- **`_utils.py`** - Shared utilities -- **`_cli.py`** - CLI commands -- **`_gui/`** - GUI components (page builders, reactive components) - -### Internal SDK Dependencies - -- `platform` - Client, API operations, **SDK metadata system** (automatic attachment to all runs), signed URLs +- `platform` - Client and API operations - `wsi` - WSI file validation - `bucket` - Cloud storage operations - `qupath` - Analysis integration (optional, requires ijson) -- `utils` - Logging, sanitization, and base utilities - -**SDK Metadata Integration:** - -Every run submitted through the application module automatically includes SDK metadata: -- **Run metadata** (v0.0.4): Execution context, user info, CI/CD details, tags, timestamps -- **Item metadata** (v0.0.3): Platform bucket location, tags, timestamps -- Automatic attachment via `platform._sdk_metadata.build_run_sdk_metadata()` - -See `platform/CLAUDE.md` for detailed SDK metadata documentation and schema. - -**Signed URL Generation:** - -The `_download.py` module uses `platform.generate_signed_url()` to convert `gs://` URLs to time-limited signed URLs for downloads. 
+- `utils` - Logging and utilities ### External Dependencies - `semver` - Semantic version validation (using `Version.is_valid()`) -- `google-crc32c` - File integrity checking (CRC32C checksums) -- `requests` - HTTP operations (streaming downloads) -- `pydantic` - Data models with validation and computed fields +- `google-crc32c` - File integrity checking +- `requests` - HTTP operations +- `pydantic` - Data models with validation - `ijson` - Required for QuPath features (optional) ## Performance Notes diff --git a/src/aignostics/application/__init__.py b/src/aignostics/application/__init__.py index 77589880..3a043970 100644 --- a/src/aignostics/application/__init__.py +++ b/src/aignostics/application/__init__.py @@ -1,11 +1,12 @@ """Application module.""" from ._cli import cli -from ._models import DownloadProgress, DownloadProgressState from ._service import Service -from ._settings import Settings -__all__ = ["DownloadProgress", "DownloadProgressState", "Service", "Settings", "cli"] +__all__ = [ + "Service", + "cli", +] from importlib.util import find_spec diff --git a/src/aignostics/application/_cli.py b/src/aignostics/application/_cli.py index e56af780..2a5d6e96 100644 --- a/src/aignostics/application/_cli.py +++ b/src/aignostics/application/_cli.py @@ -10,11 +10,10 @@ import typer from aignostics.bucket import Service as BucketService -from aignostics.platform import NotFoundException, RunState +from aignostics.platform import NotFoundException from aignostics.utils import console, get_logger, get_user_data_directory, sanitize_path -from ._models import DownloadProgress, DownloadProgressState -from ._service import Service +from ._service import DownloadProgress, DownloadProgressState, Service from ._utils import ( application_run_status_to_str, get_mime_type_for_artifact, @@ -44,7 +43,7 @@ def application_list( ) -> None: """List available applications.""" try: - apps = Service().applications() + applications = Service().applications() except Exception as e: logger.exception("Could not load applications") console.print(f"[error]Error:[/error] Could not load applications: {e}") @@ -56,31 +55,29 @@ def application_list( console.print("[bold]Available Applications:[/bold]") console.print("=" * 80) - for app in apps: + for app in applications: app_count += 1 console.print(f"[bold]Application ID:[/bold] {app.application_id}") console.print(f"[bold]Name:[/bold] {app.name}") console.print(f"[bold]Regulatory Classes:[/bold] {', '.join(app.regulatory_classes)}") try: - details = Service().application(app.application_id) + versions = Service().application_versions(app) except Exception as e: - logger.exception("Failed to get application details for application '%s'", app.application_id) + logger.exception("Failed to list versions for application '%s'", app.application_id) console.print( - f"[error]Error:[/error] Failed to get application details for application " - f"'{app.application_id}': {e}" + f"[error]Error:[/error] Failed to list versions for application '{app.application_id}': {e}" ) continue - console.print("[bold]Available Versions:[/bold]") - for version in details.versions: - console.print(f" - {version.number} ({version.released_at})") + if versions: + console.print("[bold]Available Versions:[/bold]") + for version in versions: + console.print(f" - {version.version} ({version.application_version_id})") + console.print(f" Changelog: {version.changelog}") - app_version = Service().application_version(app.application_id, version.number) - console.print(f" Changelog: 
{app_version.changelog}") - - num_inputs = len(app_version.input_artifacts) - num_outputs = len(app_version.output_artifacts) - console.print(f" Artifacts: {num_inputs} input(s), {num_outputs} output(s)") + num_inputs = len(version.input_artifacts) + num_outputs = len(version.output_artifacts) + console.print(f" Artifacts: {num_inputs} input(s), {num_outputs} output(s)") console.print("[bold]Description:[/bold]") for line in app.description.strip().split("\n"): @@ -89,10 +86,12 @@ def application_list( console.print("-" * 80) else: console.print("[bold]Available Aignostics Applications:[/bold]") - for app in apps: + for app in applications: app_count += 1 + latest_version = Service().application_version_latest(app) console.print( - f"- [bold]{app.application_id}[/bold] - latest application version: `{app.latest_version or 'None'}`" + f"- [bold]{app.application_id}[/bold] - latest application version id: " + f"`{latest_version.application_version_id if latest_version else 'None'}`" ) if app_count == 0: @@ -102,16 +101,13 @@ def application_list( @cli.command("dump-schemata") def application_dump_schemata( # noqa: C901 - application_id: Annotated[ + id: Annotated[ # noqa: A002 str, - typer.Argument(help="Id of the application or application_version to dump the output schema for."), - ], - application_version: Annotated[ - str | None, - typer.Option( - help="Version of the application. If not provided, the latest version will be used.", + typer.Argument( + help="Id of the application or application_version to dump the output schema for. " + "If application id is given the latest version of the application will be used." ), - ] = None, + ], destination: Annotated[ Path, typer.Option( @@ -133,8 +129,8 @@ def application_dump_schemata( # noqa: C901 ) -> None: """Output the input schema of the application in JSON format.""" try: - app = Service().application(application_id) - app_version = Service().application_version(application_id, application_version) + application_version = Service().application_version(id, True) + application = Service().application(application_version.application_id) except (NotFoundException, ValueError) as e: message = f"Failed to load application version with ID '{id}', check your input: : {e!s}." 
logger.warning(message) @@ -153,52 +149,44 @@ def application_dump_schemata( # noqa: C901 created_files: list[Path] = [] - for input_artifact in app_version.input_artifacts: + for input_artifact in application_version.input_artifacts: if input_artifact.metadata_schema: file_path: Path = sanitize_path( - Path( - destination / f"{app.application_id}_{app_version.version_number}_input_{input_artifact.name}.json" - ) + Path(destination / f"{application_version.application_version_id}_input_{input_artifact.name}.json") ) # type: ignore file_path.write_text(data=json.dumps(input_artifact.metadata_schema, indent=2), encoding="utf-8") created_files.append(file_path) - for output_artifact in app_version.output_artifacts: + for output_artifact in application_version.output_artifacts: if output_artifact.metadata_schema: file_path = sanitize_path( - Path( - destination - / f"{app.application_id}_{app_version.version_number}_output_{output_artifact.name}.json" - ) + Path(destination / f"{application_version.application_version_id}_output_{output_artifact.name}.json") ) # type: ignore file_path.write_text(data=json.dumps(output_artifact.metadata_schema, indent=2), encoding="utf-8") created_files.append(file_path) - md_file_path: Path = sanitize_path( - Path(destination / f"{app.application_id}_{app_version.version_number}_schemata.md") - ) # type: ignore + md_file_path: Path = sanitize_path(Path(destination / f"{application_version.application_version_id}_schemata.md")) # type: ignore with md_file_path.open("w", encoding="utf-8") as md_file: - md_file.write(f"# Schemata for Aignostics Application {app.name}\n") - md_file.write(f"* ID: {app.application_id}\n") - md_file.write(f"\n## Description: \n{app.description}\n\n") + md_file.write(f"# Schemata for Aignostics Application {application.name}\n") + md_file.write(f"* ID: {application.application_id}\n") + md_file.write(f"* Version ID: {application_version.application_version_id}\n") + md_file.write(f"\n## Description: \n{application.description}\n\n") md_file.write("\n## Input Artifacts\n") - for input_artifact in app_version.input_artifacts: + for input_artifact in application_version.input_artifacts: md_file.write( f"- {input_artifact.name}: " - f"{app.application_id}_{app_version.version_number}_input_{input_artifact.name}.json\n" + f"{application_version.application_version_id}_input_{input_artifact.name}.json\n" ) md_file.write("\n## Output Artifacts\n") - for output_artifact in app_version.output_artifacts: + for output_artifact in application_version.output_artifacts: md_file.write( f"- {output_artifact.name}: " - f"{app.application_id}_{app_version.version_number}_output_{output_artifact.name}.json\n" + f"{application_version.application_version_id}_output_{output_artifact.name}.json\n" ) created_files.append(md_file_path) if zip: - zip_filename = sanitize_path( - Path(destination / f"{app.application_id}_{app_version.version_number}_schemata.zip") - ) + zip_filename = sanitize_path(Path(destination / f"{application_version.application_version_id}_schemata.zip")) with zipfile.ZipFile(zip_filename, "w", zipfile.ZIP_DEFLATED) as zipf: for file_path in created_files: zipf.write(file_path, arcname=file_path.name) @@ -210,11 +198,10 @@ def application_dump_schemata( # noqa: C901 @cli.command("describe") def application_describe( application_id: Annotated[str, typer.Argument(help="Id of the application to describe")], - verbose: Annotated[bool, typer.Option(help="Show application details")] = False, ) -> None: """Describe application.""" try: - app = 
Service().application(application_id) +        application = Service().application(application_id) except NotFoundException: logger.warning("Application with ID '%s' not found.", application_id) console.print(f"[warning]Warning:[/warning] Application with ID '{application_id}' not found.") @@ -224,41 +211,32 @@ def application_describe( console.print(f"[error]Error:[/error] Failed to describe application: {e}") sys.exit(1) -    console.print(f"[bold]Application Details for {app.application_id}[/bold]") +    console.print(f"[bold]Application Details for {application.application_id}[/bold]") console.print("=" * 80) -    console.print(f"[bold]Name:[/bold] {app.name}") -    console.print(f"[bold]Regulatory Classes:[/bold] {', '.join(app.regulatory_classes)}") +    console.print(f"[bold]Name:[/bold] {application.name}") +    console.print(f"[bold]Regulatory Classes:[/bold] {', '.join(application.regulatory_classes)}") console.print("[bold]Description:[/bold]") -    for line in app.description.strip().split("\n"): +    for line in application.description.strip().split("\n"): console.print(f"  {line}") -    if app.versions: +    versions = Service().application_versions(application) +    if versions: console.print() console.print("[bold]Available Versions:[/bold]") -        for version in app.versions: -            console.print(f"  [bold]Version:[/bold] {version.number} ({version.released_at})") -            if not verbose: -                continue -            try: -                app_version = Service().application_version(app.application_id, version.number) -            except Exception as e: -                logger.exception("Failed to get application version for '%s', '%s'", application_id, version.number) -                console.print( -                    f"[error]Error:[/error] Failed to get application version for " -                    f"'{application_id}', '{version.number}': {e}" -                ) -                sys.exit(1) +        for version in versions: +            console.print(f"  [bold]Version ID:[/bold] {version.application_version_id}") +            console.print(f"    [bold]Version:[/bold] {version.version}") +            console.print(f"    [bold]Changelog:[/bold] {version.changelog}") -            console.print(f"    [bold]Changelog:[/bold] {app_version.changelog}") console.print("    [bold]Input Artifacts:[/bold]") -            for artifact in app_version.input_artifacts: +            for artifact in version.input_artifacts: console.print(f"      - Name: {artifact.name}") console.print(f"        MIME Type: {get_mime_type_for_artifact(artifact)}") console.print(f"        Schema: {artifact.metadata_schema}") console.print("    [bold]Output Artifacts:[/bold]") -            for artifact in app_version.output_artifacts: +            for artifact in version.output_artifacts: console.print(f"      - Name: {artifact.name}") console.print(f"        MIME Type: {get_mime_type_for_artifact(artifact)}") console.print(f"        Scope: {artifact.scope}") @@ -269,14 +247,17 @@ def application_describe( @run_app.command(name="execute") def run_execute(  # noqa: PLR0913, PLR0917 -    application_id: Annotated[ +    application_version_id: Annotated[ str, -        typer.Argument(help="Id of application version to execute."), +        typer.Argument( +            help="Id of application version to execute. " +            "If application id is given, the latest version of that application is used." +        ), ], metadata_csv_file: Annotated[ Path, typer.Argument( -            help="Filename of the .csv file containing the metadata and external ids.", +            help="Filename of the .csv file containing the metadata and references.", exists=False, file_okay=True, dir_okay=False, @@ -302,18 +283,12 @@ def run_execute(  # noqa: PLR0913, PLR0917 typer.Argument( help="Mapping to use for amending metadata CSV file. " "Each mapping is of the form '<regex>:<key>:<value>,<key>:<value>,...'."
- "The regular expression is matched against the external_id attribute of the entry. " + "The regular expression is matched against the reference attribute of the entry. " "The key/value pairs are applied to the entry if the pattern matches. " "You can use the mapping option multiple times to set values for multiple files. " 'Example: ".*:staining_method:H&E,tissue:LIVER,disease:LIVER_CANCER"', ), ], - application_version: Annotated[ - str | None, - typer.Option( - help="Version of the application. If not provided, the latest version will be used.", - ), - ] = None, create_subdirectory_for_run: Annotated[ bool, typer.Option( @@ -338,34 +313,6 @@ def run_execute( # noqa: PLR0913, PLR0917 help="Wait for run completion and download results incrementally", ), ] = True, - note: Annotated[ - str | None, - typer.Option(help="Optional note to include with the run submission via custom metadata."), - ] = None, - due_date: Annotated[ - str | None, - typer.Option( - help="Optional soft due date to include with the run submission, ISO8601 format. " - "The scheduler will try to complete the run by this date, taking the subscription tier" - "and available GPU resources into account." - ), - ] = None, - deadline: Annotated[ - str | None, - typer.Option( - help=( - "Optional hard deadline to include with the run submission, ISO8601 format. " - "If processing exceeds this deadline, the run can be aborted." - ), - ), - ] = None, - onboard_to_aignostics_portal: Annotated[ - bool, - typer.Option(help="If True, onboard the run to the Aignostics Portal."), - ] = False, - validate_only: Annotated[ - bool, typer.Option(help="If True, cancel the run post validation, before analysis.") - ] = False, ) -> None: """Prepare metadata, upload data to platform, and submit an application run, then incrementally download results. @@ -383,31 +330,22 @@ def run_execute( # noqa: PLR0913, PLR0917 and downloading results incrementally. """ run_prepare( - application_id=application_id, + application_version_id=application_version_id, metadata_csv=metadata_csv_file, source_directory=source_directory, - application_version=application_version, mapping=mapping, ) run_upload( - application_id=application_id, + application_version_id=application_version_id, metadata_csv_file=metadata_csv_file, - application_version=application_version, upload_prefix=upload_prefix, - onboard_to_aignostics_portal=onboard_to_aignostics_portal, ) - run_id = run_submit( - application_id=application_id, + application_run_id = run_submit( + application_version_id=application_version_id, metadata_csv_file=metadata_csv_file, - application_version=application_version, - note=note, - due_date=due_date, - deadline=deadline, - onboard_to_aignostics_portal=onboard_to_aignostics_portal, - validate_only=validate_only, ) result_download( - run_id=run_id, + run_id=application_run_id, destination_directory=metadata_csv_file.parent, create_subdirectory_for_run=create_subdirectory_for_run, create_subdirectory_per_item=create_subdirectory_per_item, @@ -417,9 +355,12 @@ def run_execute( # noqa: PLR0913, PLR0917 @run_app.command(name="prepare") def run_prepare( - application_id: Annotated[ + application_version_id: Annotated[ str, - typer.Argument(help="Id of the application to generate the metadata for. "), + typer.Argument( + help="Id of the application version to generate the metadata for. " + "If application id is given, the latest version of that application is used." 
+    ),
    ],
    metadata_csv: Annotated[
        Path,
@@ -445,18 +386,12 @@ def run_prepare(
            resolve_path=True,
        ),
    ],
-    application_version: Annotated[
-        str | None,
-        typer.Option(
-            help="Version of the application. If not provided, the latest version will be used.",
-        ),
-    ] = None,
    mapping: Annotated[
        list[str] | None,
        typer.Option(
            help="Mapping to use for amending metadata CSV file. "
            "Each mapping is of the form '<regex>:<key>:<value>,<key>:<value>,...'. "
-            "The regular expression is matched against the external_id attribute of the entry. "
+            "The regular expression is matched against the reference attribute of the entry. "
            "The key/value pairs are applied to the entry if the pattern matches. "
            "You can use the mapping option multiple times to set values for multiple files. "
        ),
    ] = None,
@@ -477,9 +412,8 @@ def run_prepare(
    write_metadata_dict_to_csv(
        metadata_csv=metadata_csv,
        metadata_dict=Service().generate_metadata_from_source_directory(
+            application_version_id=application_version_id,
            source_directory=source_directory,
-            application_id=application_id,
-            application_version=application_version,
            mappings=mapping or [],
        ),
    )
@@ -489,14 +423,17 @@ def run_prepare(
@run_app.command(name="upload")
def run_upload(
-    application_id: Annotated[
+    application_version_id: Annotated[
        str,
-        typer.Argument(help="Id of the application to upload data for. "),
+        typer.Argument(
+            help="Id of the application version to upload data for. "
+            "If application id is given, the latest version of that application is used."
+        ),
    ],
    metadata_csv_file: Annotated[
        Path,
        typer.Argument(
-            help="Filename of the .csv file containing the metadata and external ids.",
+            help="Filename of the .csv file containing the metadata and references.",
            exists=True,
            file_okay=True,
            dir_okay=False,
@@ -505,12 +442,6 @@ def run_upload(
            resolve_path=True,
        ),
    ],
-    application_version: Annotated[
-        str | None,
-        typer.Option(
-            help="Version of the application. If not provided, the latest version will be used.",
-        ),
-    ] = None,
    upload_prefix: Annotated[
        str,
        typer.Option(
@@ -547,7 +478,7 @@ def run_upload(
    total_bytes = 0
    for i, entry in enumerate(metadata_dict):
-        source = entry["external_id"]
+        source = entry["reference"]
        source_file_path = Path(source)
        if not source_file_path.is_file():
            logger.warning("Source file '%s' (row %d) does not exist", source_file_path, i)
@@ -574,7 +505,7 @@ def run_upload(
        def update_progress(bytes_uploaded: int, source: Path, platform_bucket_url: str) -> None:
            progress.update(task, advance=bytes_uploaded, description=f"{source.name}")
            for entry in metadata_dict:
-                if entry["external_id"] == str(source):
+                if entry["reference"] == str(source):
                    entry["platform_bucket_url"] = platform_bucket_url
                    break
            write_metadata_dict_to_csv(
@@ -583,8 +514,7 @@ def update_progress(bytes_uploaded: int, source: Path, platform_bucket_url: str)
            )
        Service().application_run_upload(
-            application_id=application_id,
-            application_version=application_version,
+            application_version_id=application_version_id,
            metadata=metadata_dict,
            onboard_to_aignostics_portal=onboard_to_aignostics_portal,
            upload_prefix=upload_prefix,
@@ -596,15 +526,18 @@ def update_progress(bytes_uploaded: int, source: Path, platform_bucket_url: str)
@run_app.command("submit")
-def run_submit(  # noqa: PLR0913, PLR0917
-    application_id: Annotated[
+def run_submit(
+    application_version_id: Annotated[
        str,
-        typer.Argument(help="Id of the application to submit run for."),
+        typer.Argument(
+            help="Id of the application version to submit run for. 
" + "If application id is given, the latest version of that application is used." + ), ], metadata_csv_file: Annotated[ Path, typer.Argument( - help="Filename of the .csv file containing the metadata and external ids.", + help="Filename of the .csv file containing the metadata and references.", exists=False, file_okay=True, dir_okay=False, @@ -613,45 +546,10 @@ def run_submit( # noqa: PLR0913, PLR0917 resolve_path=True, ), ], - application_version: Annotated[ - str | None, - typer.Option( - help="Version of the application to generate the metadata for. " - "If not provided, the latest version will be used.", - ), - ] = None, note: Annotated[ str | None, typer.Option(help="Optional note to include with the run submission via custom metadata."), ] = None, - tags: Annotated[ - str | None, - typer.Option(help="Optional comma-separated list of tags to attach to the run for filtering."), - ] = None, - due_date: Annotated[ - str | None, - typer.Option( - help="Optional soft due date to include with the run submission, ISO8601 format. " - "The scheduler will try to complete the run by this date, taking the subscription tier" - "and available GPU resources into account." - ), - ] = None, - deadline: Annotated[ - str | None, - typer.Option( - help=( - "Optional hard deadline to include with the run submission, ISO8601 format. " - "If processing exceeds this deadline, the run can be aborted." - ), - ), - ] = None, - onboard_to_aignostics_portal: Annotated[ - bool, - typer.Option(help="If True, onboard the run to the Aignostics Portal."), - ] = False, - validate_only: Annotated[ - bool, typer.Option(help="If True, cancel the run post validation, before analysis.") - ] = False, ) -> str: """Submit run by referencing the metadata CSV file. @@ -660,118 +558,46 @@ def run_submit( # noqa: PLR0913, PLR0917 Returns: The ID of the submitted application run. """ - try: - app_version = Service().application_version( - application_id=application_id, application_version=application_version - ) - except ValueError as e: - logger.warning( - "Bad input to create run for application '%s' (version: '%s'): %s", application_id, application_version, e - ) - console.print( - f"[warning]Warning:[/warning] Bad input to create run for application " - f"'{application_id} (version: {application_version})': {e}" - ) - sys.exit(2) - except NotFoundException as e: - logger.warning( - "Could not find application version '%s' (version: '%s'): %s", application_id, application_version, e - ) - console.print( - f"[warning]Warning:[/warning] Could not find application '{application_id} " - f"(version: {application_version})': {e}" - ) - sys.exit(2) - except (Exception, RuntimeError) as e: - message = ( - f"Failed to load application version '{application_version}' for application '{application_id}': {e!s}." 
-        )
-        logger.exception(message)
-        console.print(f"[error]Error:[/error] {message}")
-        sys.exit(1)
-
    try:
        metadata_dict = read_metadata_csv_to_dict(metadata_csv_file=metadata_csv_file)
        if not metadata_dict:
            console.print(f"Could not read metadata file '{metadata_csv_file}'")
            sys.exit(2)
        logger.debug(
-            "Submitting run for application '%s' (version: '%s') with metadata: %s",
-            application_id,
-            app_version.version_number,
-            metadata_dict,
+            "Submitting run for application version '%s' with metadata: %s", application_version_id, metadata_dict
        )
        application_run = Service().application_run_submit_from_metadata(
-            application_id=application_id,
+            application_version_id=application_version_id,
            metadata=metadata_dict,
-            application_version=application_version,
-            custom_metadata=None,  # TODO(Helmut): Add support for custom metadata
-            note=note,
-            tags={tag.strip() for tag in tags.split(",") if tag.strip()} if tags else None,
-            due_date=due_date,
-            deadline=deadline,
-            onboard_to_aignostics_portal=onboard_to_aignostics_portal,
-            validate_only=validate_only,
+            custom_metadata={"sdk": {"note": note}} if note else None,
        )
-        console.print(
-            f"Submitted run with id '{application_run.run_id}' for "
-            f"'{application_id} (version: {app_version.version_number})'."
-        )
-        return application_run.run_id
+        console.print(f"Submitted run with id '{application_run.application_run_id}' for '{application_version_id}'.")
+        return application_run.application_run_id
    except ValueError as e:
-        logger.warning(
-            "Bad input to create run for application '%s' (version: %s): %s",
-            application_id,
-            app_version.version_number,
-            e,
-        )
+        logger.warning("Bad input to create run for application version '%s': %s", application_version_id, e)
        console.print(
-            f"[warning]Warning:[/warning] Bad input to create run for application "
-            f"'{application_id} (version: {app_version.version_number})': {e}"
+            f"[warning]Warning:[/warning] Bad input to create run for application version "
+            f"'{application_version_id}': {e}"
        )
        sys.exit(2)
    except Exception as e:
-        logger.exception(
-            "Failed to create run for application '%s' (version: %s)", application_id, app_version.version_number
-        )
+        logger.exception("Failed to create run for application version '%s'", application_version_id)
        console.print(
-            f"[error]Error:[/error] Failed to create run for application "
-            f"'{application_id} (version: {app_version.version_number})': {e}"
+            f"[error]Error:[/error] Failed to create run for application version '{application_version_id}': {e}"
        )
        sys.exit(1)
@run_app.command("list")
-def run_list(  # noqa: PLR0913, PLR0917
+def run_list(
    verbose: Annotated[bool, typer.Option(help="Show application details")] = False,
    limit: Annotated[int | None, typer.Option(help="Maximum number of runs to display")] = None,
-    tags: Annotated[
-        str | None,
-        typer.Option(help="Optional comma-separated list of tags to filter runs. 
All tags must match."), - ] = None, - note_regex: Annotated[ - str | None, - typer.Option(help="Optional regex pattern to filter runs by note metadata."), - ] = None, - query: Annotated[str | None, typer.Option(help="Optional query string to filter runs by note OR tags.")] = None, - note_case_insensitive: Annotated[bool, typer.Option(help="Make note regex search case-insensitive.")] = True, ) -> None: """List runs.""" try: - runs = Service().application_runs( - limit=limit, - tags={tag.strip() for tag in tags.split(",") if tag.strip()} if tags else None, - note_regex=note_regex, - note_query_case_insensitive=note_case_insensitive, - query=query, - ) + runs = Service().application_runs(limit=limit) if len(runs) == 0: - if tags: - message = f"You did not yet create a run matching tags: {tags!r}." - elif note_regex: - message = f"You did not yet create a run matching note pattern: {note_regex!r}." - else: - message = "You did not yet create a run." + message = "You did not yet create a run." logger.warning(message) console.print(message, style="warning") else: @@ -802,84 +628,6 @@ def run_describe(run_id: Annotated[str, typer.Argument(help="Id of the run to de sys.exit(1) -@run_app.command("dump-metadata") -def run_dump_metadata( - run_id: Annotated[str, typer.Argument(help="Id of the run to dump custom metadata for")], - pretty: Annotated[bool, typer.Option(help="Pretty print JSON output with indentation")] = False, -) -> None: - """Dump custom metadata of a run as JSON to stdout.""" - logger.debug("Dumping custom metadata for run with ID '%s'", run_id) - - try: - run = Service().application_run(run_id).details() - custom_metadata = run.custom_metadata if hasattr(run, "custom_metadata") else {} - - # Output JSON to stdout - if pretty: - print(json.dumps(custom_metadata, indent=2)) - else: - print(json.dumps(custom_metadata)) - - logger.info("Dumped custom metadata for run with ID '%s'", run_id) - except NotFoundException: - logger.warning("Run with ID '%s' not found.", run_id) - console.print(f"[warning]Warning:[/warning] Run with ID '{run_id}' not found.") - sys.exit(2) - except Exception as e: - logger.exception("Failed to dump custom metadata for run with ID '%s'", run_id) - console.print(f"[error]Error:[/error] Failed to dump custom metadata for run with ID '{run_id}': {e}") - sys.exit(1) - - -@run_app.command("dump-item-metadata") -def run_dump_item_metadata( - run_id: Annotated[str, typer.Argument(help="Id of the run containing the item")], - external_id: Annotated[str, typer.Argument(help="External ID of the item to dump custom metadata for")], - pretty: Annotated[bool, typer.Option(help="Pretty print JSON output with indentation")] = False, -) -> None: - """Dump custom metadata of an item as JSON to stdout.""" - logger.debug("Dumping custom metadata for item '%s' in run with ID '%s'", external_id, run_id) - - try: - run = Service().application_run(run_id) - - # Find the item with the matching external_id in the results - item = None - for result_item in run.results(): - if result_item.external_id == external_id: - item = result_item - break - - if item is None: - logger.warning("Item with external ID '%s' not found in run '%s'.", external_id, run_id) - print( - f"Warning: Item with external ID '{external_id}' not found in run '{run_id}'.", - file=sys.stderr, - ) - sys.exit(2) - - custom_metadata = item.custom_metadata if hasattr(item, "custom_metadata") else {} - - # Output JSON to stdout - if pretty: - print(json.dumps(custom_metadata, indent=2)) - else: - 
print(json.dumps(custom_metadata)) - - logger.info("Dumped custom metadata for item '%s' in run with ID '%s'", external_id, run_id) - except NotFoundException: - logger.warning("Run with ID '%s' not found.", run_id) - print(f"Warning: Run with ID '{run_id}' not found.", file=sys.stderr) - sys.exit(2) - except Exception as e: - logger.exception("Failed to dump custom metadata for item '%s' in run with ID '%s'", external_id, run_id) - print( - f"Error: Failed to dump custom metadata for item '{external_id}' in run with ID '{run_id}': {e}", - file=sys.stderr, - ) - sys.exit(1) - - @run_app.command("cancel") def run_cancel( run_id: Annotated[str, typer.Argument(..., help="Id of the run to cancel")], @@ -895,114 +643,14 @@ def run_cancel( logger.warning("Run with ID '%s' not found.", run_id) console.print(f"[warning]Warning:[/warning] Run with ID '{run_id}' not found.") sys.exit(2) - except ValueError: - logger.warning("Run ID '%s' invalid", run_id) - console.print(f"[warning]Warning:[/warning] Run ID '{run_id}' invalid.") - sys.exit(2) except Exception as e: logger.exception("Failed to cancel run with ID '%s'", run_id) console.print(f"[bold red]Error:[/bold red] Failed to cancel run with ID '{run_id}': {e}") sys.exit(1) -@run_app.command("update-metadata") -def run_update_metadata( - run_id: Annotated[str, typer.Argument(..., help="Id of the run to update")], - metadata_json: Annotated[ - str, typer.Argument(..., help='Custom metadata as JSON string (e.g., \'{"key": "value"}\')') - ], -) -> None: - """Update custom metadata for a run.""" - import json # noqa: PLC0415 - - logger.debug("Updating custom metadata for run with ID '%s'", run_id) - - try: - # Parse JSON metadata - try: - custom_metadata = json.loads(metadata_json) - if not isinstance(custom_metadata, dict): - console.print("[error]Error:[/error] Metadata must be a JSON object (dictionary).") - sys.exit(1) - except json.JSONDecodeError as e: - console.print(f"[error]Error:[/error] Invalid JSON: {e}") - sys.exit(1) - - Service().application_run_update_custom_metadata(run_id, custom_metadata) - logger.info("Updated custom metadata for run with ID '%s'.", run_id) - console.print(f"Successfully updated custom metadata for run with ID '{run_id}'.") - except NotFoundException: - logger.warning("Run with ID '%s' not found.", run_id) - console.print(f"[warning]Warning:[/warning] Run with ID '{run_id}' not found.") - sys.exit(2) - except ValueError as e: - logger.warning("Run ID '%s' invalid or metadata invalid: %s", run_id, e) - console.print(f"[warning]Warning:[/warning] Run ID '{run_id}' invalid or metadata invalid: {e}") - sys.exit(2) - except Exception as e: - logger.exception("Failed to update custom metadata for run with ID '%s'", run_id) - console.print(f"[bold red]Error:[/bold red] Failed to update custom metadata for run with ID '{run_id}': {e}") - sys.exit(1) - - -@run_app.command("update-item-metadata") -def run_update_item_metadata( - run_id: Annotated[str, typer.Argument(..., help="Id of the run containing the item")], - external_id: Annotated[str, typer.Argument(..., help="External ID of the item to update")], - metadata_json: Annotated[ - str, typer.Argument(..., help='Custom metadata as JSON string (e.g., \'{"key": "value"}\')') - ], -) -> None: - """Update custom metadata for an item in a run.""" - import json # noqa: PLC0415 - - logger.debug("Updating custom metadata for item '%s' in run with ID '%s'", external_id, run_id) - - try: - # Parse JSON metadata - try: - custom_metadata = json.loads(metadata_json) - if not 
isinstance(custom_metadata, dict): - console.print("[error]Error:[/error] Metadata must be a JSON object (dictionary).") - sys.exit(1) - except json.JSONDecodeError as e: - console.print(f"[error]Error:[/error] Invalid JSON: {e}") - sys.exit(1) - - Service().application_run_update_item_custom_metadata(run_id, external_id, custom_metadata) - logger.info("Updated custom metadata for item '%s' in run with ID '%s'.", external_id, run_id) - console.print(f"Successfully updated custom metadata for item '{external_id}' in run with ID '{run_id}'.") - except NotFoundException: - logger.warning("Run with ID '%s' or item '%s' not found.", run_id, external_id) - console.print(f"[warning]Warning:[/warning] Run with ID '{run_id}' or item '{external_id}' not found.") - sys.exit(2) - except ValueError as e: - logger.warning( - "Run ID '%s' or item external ID '%s' invalid or metadata invalid: %s", - run_id, - external_id, - e, - ) - console.print( - f"[warning]Warning:[/warning] Run ID '{run_id}' or item external ID '{external_id}' " - f"invalid or metadata invalid: {e}" - ) - sys.exit(2) - except Exception as e: - logger.exception( - "Failed to update custom metadata for item '%s' in run with ID '%s'", - external_id, - run_id, - ) - console.print( - f"[bold red]Error:[/bold red] Failed to update custom metadata for item '{external_id}' " - f"in run with ID '{run_id}': {e}" - ) - sys.exit(1) - - @result_app.command("download") -def result_download( # noqa: C901, PLR0913, PLR0915, PLR0917 +def result_download( # noqa: PLR0913, PLR0917 run_id: Annotated[str, typer.Argument(..., help="Id of the run to download results for")], destination_directory: Annotated[ Path, @@ -1104,66 +752,34 @@ def result_download( # noqa: C901, PLR0913, PLR0915, PLR0917 with Live(panel): main_task = main_download_progress_ui.add_task(description="", total=None, extra_description="") - def update_progress(progress: DownloadProgress) -> None: # noqa: C901 + def update_progress(progress: DownloadProgress) -> None: """Update progress bar for file downloads.""" if progress.run: - panel.title = ( - f"Run {progress.run.run_id} of {progress.run.application_id} " - f"(version: {progress.run.version_number})" - ) - panel.subtitle = f"Triggered at {progress.run.submitted_at.strftime('%a, %x %X')}" + panel.title = f"Run {progress.run.application_run_id} of {progress.run.application_version_id}" + panel.subtitle = f"Triggered at {progress.run.triggered_at.strftime('%a, %x %X')}" if progress.item_count: panel.subtitle += f" with {progress.item_count} " + ( "item" if progress.item_count == 1 else "items" ) - if progress.run.state is RunState.TERMINATED: - status_text = application_run_status_to_str(progress.run.state) - panel.subtitle += f", status: {status_text} ({progress.run.termination_reason})." - else: - panel.subtitle += f", status: {application_run_status_to_str(progress.run.state)}." - # Determine the status message based on progress state - if progress.status is DownloadProgressState.DOWNLOADING_INPUT: - status_message = ( - f"Downloading input slide {progress.item_index + 1} of {progress.item_count}" - if progress.item_index is not None and progress.item_count - else "Downloading input slide ..." 
- ) - elif progress.status is DownloadProgressState.DOWNLOADING and progress.total_artifact_index is not None: - status_message = ( - f"Downloading artifact {progress.total_artifact_index + 1} of {progress.total_artifact_count}" - ) - else: - status_message = progress.status - + panel.subtitle += f", status: {application_run_status_to_str(progress.run.status)}." main_download_progress_ui.update( main_task, - description=status_message.ljust(50), - ) - # Handle input slide download progress - if progress.status is DownloadProgressState.DOWNLOADING_INPUT and progress.input_slide_path: - task_key = str(progress.input_slide_path.absolute()) - if task_key not in download_tasks: - download_tasks[task_key] = artifact_download_progress_ui.add_task( - f"{progress.input_slide_path.name}".ljust(50), - total=progress.input_slide_size, - extra_description=f"Input from {progress.input_slide_url or 'gs://'}" - if progress.input_slide_url - else "Input slide", + description=( + progress.status + if progress.status is not DownloadProgressState.DOWNLOADING or not progress.total_artifact_index + else ( + f"Downloading artifact {progress.total_artifact_index + 1} " + f"of {progress.total_artifact_count}" ) - - artifact_download_progress_ui.update( - download_tasks[task_key], - total=progress.input_slide_size, - advance=progress.input_slide_downloaded_chunk_size, - ) - # Handle artifact download progress - elif progress.artifact_path: + ).ljust(50), + ) + if progress.artifact_path: task_key = str(progress.artifact_path.absolute()) if task_key not in download_tasks: download_tasks[task_key] = artifact_download_progress_ui.add_task( f"{progress.artifact_path.name}".ljust(50), total=progress.artifact_size, - extra_description=f"Item {progress.item.external_id if progress.item else 'unknown'}", + extra_description=f"Item {progress.item.reference if progress.item else 'unknown'}", ) artifact_download_progress_ui.update( diff --git a/src/aignostics/application/_download.py b/src/aignostics/application/_download.py deleted file mode 100644 index 67def43a..00000000 --- a/src/aignostics/application/_download.py +++ /dev/null @@ -1,329 +0,0 @@ -"""Download helper functions for application run results.""" - -import base64 -from collections.abc import Callable -from pathlib import Path, PurePosixPath -from typing import Any -from urllib.parse import urlparse - -import google_crc32c -import requests - -from aignostics.platform import ItemOutput, ItemState, Run, generate_signed_url -from aignostics.utils import get_logger, sanitize_path_component - -from ._models import DownloadProgress, DownloadProgressState -from ._utils import get_file_extension_for_artifact - -logger = get_logger(__name__) - -# Download chunk sizes -APPLICATION_RUN_FILE_READ_CHUNK_SIZE = 1024 * 1024 * 1024 # 1GB -APPLICATION_RUN_DOWNLOAD_CHUNK_SIZE = 1024 * 1024 # 1MB - - -def extract_filename_from_url(url: str) -> str: - """Extract a filename from a URL robustly. - - Args: - url (str): The URL to extract filename from. - - Returns: - str: The extracted filename, sanitized for use as a path component. 
- - Examples: - >>> extract_filename_from_url("gs://bucket/path/to/file.tiff") - 'file.tiff' - >>> extract_filename_from_url("https://example.com/slides/sample.svs?token=abc") - 'sample.svs' - >>> extract_filename_from_url("https://example.com/download/") - 'download' - """ - # Parse the URL and extract the path component - parsed = urlparse(url) - # Use PurePosixPath since URLs always use forward slashes - path = PurePosixPath(parsed.path) - - # Get the last component (name) of the path - # If path ends with /, .name will be empty, so use the parent's name - filename = path.name or path.parent.name - - # If still empty (e.g., root path), use a default - if not filename: - filename = "download" - - # Sanitize the filename to ensure it's safe for filesystem use - return sanitize_path_component(filename) - - -def download_url_to_file_with_progress( - progress: DownloadProgress, - url: str, - destination_path: Path, - download_progress_queue: Any | None = None, # noqa: ANN401 - download_progress_callable: Callable | None = None, # type: ignore[type-arg] -) -> Path: - """Download a file from a URL (gs://, http://, or https://) with progress tracking. - - Args: - progress (DownloadProgress): Progress tracking object for GUI or CLI updates. - url (str): The URL to download from (supports gs://, http://, https://). - destination_path (Path): The local file path to save to. - download_progress_queue (Any | None): Queue for GUI progress updates. - download_progress_callable (Callable | None): Callback for CLI progress updates. - - Returns: - Path: The path to the downloaded file. - - Raises: - ValueError: If the URL is invalid. - RuntimeError: If the download fails. - """ - logger.debug("Downloading URL '%s' to '%s' with progress tracking", url, destination_path) - - # Initialize progress tracking - progress.status = DownloadProgressState.DOWNLOADING_INPUT - progress.input_slide_url = url - progress.input_slide_path = destination_path - progress.input_slide_downloaded_size = 0 - progress.input_slide_downloaded_chunk_size = 0 - progress.input_slide_size = None - update_progress(progress, download_progress_callable, download_progress_queue) - - # Generate download URL (convert gs:// to signed URL, use http(s):// directly) - if url.startswith("gs://"): - download_url = generate_signed_url(url) - elif url.startswith(("http://", "https://")): - download_url = url - else: - msg = f"Unsupported URL scheme: {url}. Only gs://, http://, and https:// are supported." 
- raise ValueError(msg) - - destination_path.parent.mkdir(parents=True, exist_ok=True) - - # Download with progress tracking - try: - response = requests.get(download_url, stream=True, timeout=60) - response.raise_for_status() - - progress.input_slide_size = int(response.headers.get("content-length", 0)) - update_progress(progress, download_progress_callable, download_progress_queue) - - with destination_path.open("wb") as f: - for chunk in response.iter_content(chunk_size=APPLICATION_RUN_DOWNLOAD_CHUNK_SIZE): - if chunk: - f.write(chunk) - progress.input_slide_downloaded_chunk_size = len(chunk) - progress.input_slide_downloaded_size += progress.input_slide_downloaded_chunk_size - update_progress(progress, download_progress_callable, download_progress_queue) - - logger.info("Downloaded URL '%s' to '%s'", url, destination_path) - return destination_path - except requests.HTTPError as e: - msg = f"HTTP error downloading '{url}': {e}" - logger.warning(msg) - raise RuntimeError(msg) from e - except requests.RequestException as e: - msg = f"Network error downloading '{url}': {e}" - logger.warning(msg) - raise RuntimeError(msg) from e - - -def update_progress( - progress: DownloadProgress, - download_progress_callable: Callable | None = None, # type: ignore[type-arg] - download_progress_queue: Any | None = None, # noqa: ANN401 -) -> None: - """Update download progress via callback or queue. - - Args: - progress (DownloadProgress): Progress tracking object to send. - download_progress_callable (Callable | None): Optional callback function. - download_progress_queue (Any | None): Optional queue for progress updates. - """ - if download_progress_callable: - download_progress_callable(progress) - if download_progress_queue: - download_progress_queue.put_nowait(progress) - - -def download_available_items( # noqa: PLR0913, PLR0917 - progress: DownloadProgress, - application_run: Run, - destination_directory: Path, - downloaded_items: set[str], - create_subdirectory_per_item: bool = False, - download_progress_queue: Any | None = None, # noqa: ANN401 - download_progress_callable: Callable | None = None, # type: ignore[type-arg] -) -> None: - """Download items that are available and not yet downloaded. - - Args: - progress (DownloadProgress): Progress tracking object for GUI or CLI updates. - application_run (Run): The application run object. - destination_directory (Path): Directory to save files. - downloaded_items (set): Set of already downloaded item external ids. - create_subdirectory_per_item (bool): Whether to create a subdirectory for each item. - download_progress_queue (Queue | None): Queue for GUI progress updates. - download_progress_callable (Callable | None): Callback for CLI progress updates. 
- """ - items = list(application_run.results()) - progress.item_count = len(items) - for item_index, item in enumerate(items): - if item.external_id in downloaded_items: - continue - - if item.state == ItemState.TERMINATED and item.output == ItemOutput.FULL: - progress.status = DownloadProgressState.DOWNLOADING - progress.item_index = item_index - progress.item = item - progress.item_external_id = item.external_id - - progress.artifact_count = len(item.output_artifacts) - update_progress(progress, download_progress_callable, download_progress_queue) - - if create_subdirectory_per_item: - path = Path(item.external_id) - stem_name = path.stem - try: - # Handle case where path might be relative to destination - rel_path = path.relative_to(destination_directory) - stem_name = rel_path.stem - except ValueError: - # Not a subfolder - just use the stem - pass - item_directory = destination_directory / stem_name - else: - item_directory = destination_directory - item_directory.mkdir(exist_ok=True) - - for artifact_index, artifact in enumerate(item.output_artifacts): - progress.artifact_index = artifact_index - progress.artifact = artifact - update_progress(progress, download_progress_callable, download_progress_queue) - - download_item_artifact( - progress, - artifact, - item_directory, - item.external_id if not create_subdirectory_per_item else "", - download_progress_queue, - download_progress_callable, - ) - - downloaded_items.add(item.external_id) - - -def download_item_artifact( # noqa: PLR0913, PLR0917 - progress: DownloadProgress, - artifact: Any, # noqa: ANN401 - destination_directory: Path, - prefix: str = "", - download_progress_queue: Any | None = None, # noqa: ANN401 - download_progress_callable: Callable | None = None, # type: ignore[type-arg] -) -> None: - """Download an artifact of a result item with progress tracking. - - Args: - progress (DownloadProgress): Progress tracking object for GUI or CLI updates. - artifact (Any): The artifact to download. - destination_directory (Path): Directory to save the file. - prefix (str): Prefix for the file name, if needed. - download_progress_queue (Queue | None): Queue for GUI progress updates. - download_progress_callable (Callable | None): Callback for CLI progress updates. - - Raises: - ValueError: If - no checksum metadata is found for the artifact. - requests.HTTPError: If the download fails. 
- """ - metadata = artifact.metadata or {} - metadata_checksum = metadata.get("checksum_base64_crc32c", "") or metadata.get("checksum_crc32c", "") - if not metadata_checksum: - message = f"No checksum metadata found for artifact {artifact.name}" - logger.error(message) - raise ValueError(message) - - artifact_path = ( - destination_directory - / f"{prefix}{sanitize_path_component(artifact.name)}{get_file_extension_for_artifact(artifact)}" - ) - - if artifact_path.exists(): - checksum = google_crc32c.Checksum() # type: ignore[no-untyped-call] - with open(artifact_path, "rb") as f: - while chunk := f.read(APPLICATION_RUN_FILE_READ_CHUNK_SIZE): - checksum.update(chunk) # type: ignore[no-untyped-call] - existing_checksum = base64.b64encode(checksum.digest()).decode("ascii") # type: ignore[no-untyped-call] - if existing_checksum == metadata_checksum: - logger.debug("File %s already exists with correct checksum", artifact_path) - return - - download_file_with_progress( - progress, - artifact.download_url, - artifact_path, - metadata_checksum, - download_progress_queue, - download_progress_callable, - ) - - -def download_file_with_progress( # noqa: PLR0913, PLR0917 - progress: DownloadProgress, - signed_url: str, - artifact_path: Path, - metadata_checksum: str, - download_progress_queue: Any | None = None, # noqa: ANN401 - download_progress_callable: Callable | None = None, # type: ignore[type-arg] -) -> None: - """Download a file with progress tracking support. - - Args: - progress (DownloadProgress): Progress tracking object for GUI or CLI updates. - signed_url (str): The signed URL to download from. - artifact_path (Path): Path to save the file. - metadata_checksum (str): Expected CRC32C checksum in base64. - download_progress_queue (Any | None): Queue for GUI progress updates. - download_progress_callable (Callable | None): Callback for CLI progress updates. - - Raises: - ValueError: If - checksum verification fails. - requests.HTTPError: If download fails. 
-    """
-    logger.debug(
-        "Downloading artifact '%s' to '%s' with expected checksum '%s' for item with external id '%s'",
-        progress.artifact.name if progress.artifact else "unknown",
-        artifact_path,
-        metadata_checksum,
-        progress.item_external_id or "unknown",
-    )
-    progress.artifact_download_url = signed_url
-    progress.artifact_path = artifact_path
-    progress.artifact_downloaded_size = 0
-    progress.artifact_downloaded_chunk_size = 0
-    progress.artifact_size = None
-    update_progress(progress, download_progress_callable, download_progress_queue)
-
-    checksum = google_crc32c.Checksum()  # type: ignore[no-untyped-call]
-
-    with requests.get(signed_url, stream=True, timeout=60) as stream:
-        stream.raise_for_status()
-        progress.artifact_size = int(stream.headers.get("content-length", 0))
-        update_progress(progress, download_progress_callable, download_progress_queue)
-        with open(artifact_path, mode="wb") as file:
-            for chunk in stream.iter_content(chunk_size=APPLICATION_RUN_DOWNLOAD_CHUNK_SIZE):
-                if chunk:
-                    file.write(chunk)
-                    checksum.update(chunk)  # type: ignore[no-untyped-call]
-                    progress.artifact_downloaded_chunk_size = len(chunk)
-                    progress.artifact_downloaded_size += progress.artifact_downloaded_chunk_size
-                    update_progress(progress, download_progress_callable, download_progress_queue)
-
-    downloaded_checksum = base64.b64encode(checksum.digest()).decode("ascii")  # type: ignore[no-untyped-call]
-    if downloaded_checksum != metadata_checksum:
-        artifact_path.unlink()  # Remove corrupted file
-        msg = f"Checksum mismatch for {artifact_path}: {downloaded_checksum} != {metadata_checksum}"
-        logger.error(msg)
-        raise ValueError(msg)
diff --git a/src/aignostics/application/_gui/_frame.py b/src/aignostics/application/_gui/_frame.py
index 08501010..c75cbe04 100644
--- a/src/aignostics/application/_gui/_frame.py
+++ b/src/aignostics/application/_gui/_frame.py
@@ -1,4 +1,3 @@
-from datetime import UTC, datetime
from typing import Any
from nicegui import app, background_tasks, context, ui  # noqa
@@ -14,8 +13,7 @@
BORDERED_SEPARATOR = "bordered separator"
RUNS_LIMIT = 100
-RUNS_REFRESH_INTERVAL = 60 * 15  # 15 minutes
-STORAGE_TAB_RUNS_HAS_OUTPUT = "runs_has_output"
+STORAGE_TAB_RUNS_COMPLETED_ONLY = "runs_completed_only"
service = Service()
@@ -26,9 +24,6 @@ class SearchInput:
search_input = SearchInput()
-# Module-level state for auto-refresh and notifications (reset on page reload)
-_runs_last_refresh_time: datetime | None = None
-
async def _frame(  # noqa: C901, PLR0913, PLR0915, PLR0917
    navigation_title: str,
@@ -40,12 +35,6 @@ async def _frame(  # noqa: C901, PLR0913, PLR0915, PLR0917
) -> None:
    if args is None:
        args = {}
-
-    if args.get("query"):
-        search_input.query = args["query"]
-    else:
-        search_input.query = ""
-
    with frame(  # noqa: PLR1702
        navigation_title=navigation_title,
        navigation_icon=navigation_icon,
@@ -75,9 +64,7 @@ async def _frame(  # noqa: C901, PLR0913, PLR0915, PLR0917
            ):
                with (
                    ui.item_section().props("avatar"),
-                    ui.icon(application_id_to_icon(application.application_id), color="primary").classes(
-                        "text-4xl"
-                    ),
+                    ui.icon(application_id_to_icon(application.application_id), color="primary"),
                ):
                    ui.tooltip(application.application_id)
                with ui.item_section():
@@ -91,26 +78,22 @@ async def _frame(  # noqa: C901, PLR0913, PLR0915, PLR0917
    except Exception as e:
        with ui.item():
            with ui.item_section().props("avatar"):
-                ui.icon("error", color="red").classes("text-4xl")
+                ui.icon("error", color="red")
            with ui.item_section():
                ui.label(f"Could not load applications: 
{e!s}").mark("LABEL_ERROR") logger.exception("Could not load applications") - async def application_runs_load_and_render( # noqa: C901 - runs_column: ui.column, has_output: bool = False, query: str | None = None + async def application_runs_load_and_render( + runs_column: ui.column, completed_only: bool = False, note_query: str | None = None ) -> None: - global _runs_last_refresh_time # noqa: PLW0603 - with runs_column: try: - # Store previous refresh time for detecting newly terminated runs - previous_refresh_time = _runs_last_refresh_time - runs = await nicegui_run.io_bound( Service.application_runs_static, limit=RUNS_LIMIT, - has_output=has_output, - query=query, + completed_only=completed_only, + note_regex=f".*{note_query}.*" if note_query else None, + note_query_case_insensitive=True, ) if runs is None: message = ( # type: ignore[unreachable] @@ -119,95 +102,42 @@ async def application_runs_load_and_render( # noqa: C901 ) logger.error(message) raise RuntimeError(message) # noqa: TRY301 - - # Update refresh timestamp before checking for new terminations - _runs_last_refresh_time = datetime.now(UTC) - - # Check for newly terminated runs and show notifications - if previous_refresh_time is not None and runs: - newly_terminated = [ - r - for r in runs - if r.get("terminated_at") is not None and r["terminated_at"] > previous_refresh_time - ] - - # Show notifications for newly completed runs (limit to 3 to avoid spam) - for run in newly_terminated[:3]: - ui.notify( - f"🎉 Run {run['application_id']} completed!", - type="positive", - position="top", - timeout=60 * 60 * 24 * 7, - progress=True, - close_button=True, - ) - runs_column.clear() for index, run_data in enumerate(runs): with ( ui.item( - on_click=lambda run_id=run_data["run_id"]: ui.navigate.to(f"/application/run/{run_id}") + on_click=lambda run_id=run_data["application_run_id"]: ui.navigate.to( + f"/application/run/{run_id}" + ) ) .props("clickable") .classes("w-full") - .style("padding-left: 0; padding-right: 0;") - .mark(f"SIDEBAR_RUN_ITEM:{index}:{run_data['run_id']}") + .mark(f"SIDEBAR_RUN_ITEM:{index}") ): with ui.item_section().props("avatar"): - icon, color = run_status_to_icon_and_color( - run_data["state"], - run_data["termination_reason"], - run_data["item_count"], - run_data["item_succeeded_count"], - ) - with ( - ui.circular_progress( - min=0, - max=run_data["item_count"] if run_data["item_count"] > 0 else 1, - value=run_data["item_succeeded_count"], - color=color, - show_value=False, - ), - ui.icon(icon, color=color).classes("text-4xl"), - ): - tooltip_text = ( - f"{run_data['item_succeeded_count']} of {run_data['item_count']} succeeded, " - f"status {run_data['state'].value.upper()}, " + icon, color = run_status_to_icon_and_color(run_data["status"]) + with ui.icon(icon, color=color): + ui.tooltip( + f"Run {run_data['application_run_id']}, " + f"status {run_data['status'].value.upper()}" ) - if run_data["termination_reason"]: - tooltip_text += f"{run_data['termination_reason']}, " - tooltip_text += f"run id {run_data['run_id']}" - ui.tooltip(tooltip_text) with ui.item_section(): - ui.label(f"{run_data['application_id']} ({run_data['version_number']})").classes( + ui.label(f"{run_data['application_version_id']}").classes( "font-bold" - if context.client.page.path == "/application/run/{run_id}" + if context.client.page.path == "/application/run/{application_run_id}" and args - and args.get("run_id") == run_data["run_id"] + and args.get("application_run_id") == run_data["application_run_id"] else "font-normal" - 
).mark(f"LABEL_RUN_APPLICATION:{index}") - ui.label(f"submitted {run_data['submitted_at'].astimezone().strftime('%m-%d %H:%M')}") - if run_data.get("tags") and len(run_data["tags"]): - with ui.row().classes("gap-1 mt-1"): - for tag in run_data["tags"][:3]: - - def _on_tag_click(t: str = tag) -> None: - search_input.query = t - _runs_list.refresh() - - ui.chip( - tag, - on_click=_on_tag_click, - ).props("clickable").classes("bg-white text-black text-xs") - if len(run_data["tags"]) > 3: # noqa: PLR2004 - ui.tooltip(f"Tags: {', '.join(run_data['tags'][3:])}") + ) + ui.label( + f"triggered on {run_data['triggered_at'].astimezone().strftime('%m-%d %H:%M')}" + ) if not runs: with ui.item(): with ui.item_section().props("avatar"): ui.icon("info") with ui.item_section(): ui.label("You did not yet create a run.") - except Exception as e: runs_column.clear() with ui.item(): @@ -219,46 +149,39 @@ def _on_tag_click(t: str = tag) -> None: @ui.refreshable async def _runs_list() -> None: - with ( - ui.scroll_area() - .props('id="runs-list-container"') - .classes("w-full") - .style("height: calc(100vh - 250px);") - .props("content-style='padding-right: 0;'"), - ui.column().classes("full-width justify-center") as runs_column, - ): + with ui.column().classes("full-width justify-center") as runs_column: with ui.row().classes("w-full justify-center"): ui.spinner(size="lg").classes("m-5") await ui.context.client.connected() background_tasks.create_lazy( coroutine=application_runs_load_and_render( runs_column=runs_column, - has_output=app.storage.tab.get(STORAGE_TAB_RUNS_HAS_OUTPUT, False), - query=search_input.query, + completed_only=app.storage.tab.get(STORAGE_TAB_RUNS_COMPLETED_ONLY, False), + note_query=search_input.query, ), name="_runs_list", ) class RunFilterButton(ui.icon): - _has_output: bool = False + _state: bool = False def __init__(self, *args, **kwargs) -> None: # type: ignore[no-untyped-def] super().__init__(*args, **kwargs) - self._has_output = app.storage.tab.get(STORAGE_TAB_RUNS_HAS_OUTPUT, False) + self._state = app.storage.tab.get(STORAGE_TAB_RUNS_COMPLETED_ONLY, False) self.on("click", self.toggle) def toggle(self) -> None: - self._has_output = not self._has_output - app.storage.tab[STORAGE_TAB_RUNS_HAS_OUTPUT] = self._has_output + self._state = not self._state + app.storage.tab[STORAGE_TAB_RUNS_COMPLETED_ONLY] = self._state self.update() _runs_list.refresh() def update(self) -> None: - self.props(f"color={'positive' if self._has_output else 'grey'}") + self.props(f"color={'positive' if self._state else 'grey'}") super().update() def is_active(self) -> bool: - return bool(self._has_output) + return bool(self._state) try: with ui.list().props(BORDERED_SEPARATOR).classes("full-width"): @@ -266,17 +189,14 @@ def is_active(self) -> bool: ui.item_label("Runs").props("header") await ui.context.client.connected() ui.input( - placeholder="Filter by note or tags", + placeholder="Filter by note", on_change=_runs_list.refresh, ).bind_value(search_input, "query").props("rounded outlined dense clearable").style( - "max-width: 16ch;" - ).classes("text-xs").mark("INPUT_RUNS_FILTER_NOTE_OR_TAGS") + "max-width: 15ch;" + ).classes("text-xs").mark("INPUT_RUNS_FILTER_NOTE") with RunFilterButton("done_all", size="sm").classes("mr-3").mark("BUTTON_RUNS_FILTER_COMPLETED"): ui.tooltip("Show completed runs only") ui.separator() await _runs_list() - - # Auto-refresh runs list - ui.timer(interval=RUNS_REFRESH_INTERVAL, callback=_runs_list.refresh) except Exception as e: # noqa: BLE001 ui.label(f"Failed to list 
application runs: {e!s}").mark("LABEL_ERROR")
diff --git a/src/aignostics/application/_gui/_page_application_describe.py b/src/aignostics/application/_gui/_page_application_describe.py
index 654e2c75..6a75ad91 100644
--- a/src/aignostics/application/_gui/_page_application_describe.py
+++ b/src/aignostics/application/_gui/_page_application_describe.py
@@ -2,10 +2,9 @@
import sys
import time
-from datetime import UTC, datetime, timedelta
from multiprocessing import Manager
from pathlib import Path
-from typing import TYPE_CHECKING, Any
+from typing import Any
from aiopath import AsyncPath
from nicegui import app, binding, ui  # noqa
@@ -13,9 +12,6 @@
from aignostics.utils import GUILocalFilePicker, get_logger, get_user_data_directory
-if TYPE_CHECKING:
-    from aignostics.platform import UserInfo
-
from .._service import Service  # noqa: TID252
from .._utils import get_mime_type_for_artifact  # noqa: TID252
from ._frame import _frame
@@ -34,8 +30,7 @@ class SubmitForm:
    """Submit form."""
-    application_id: str | None = None
-    application_version: str | None = None
+    application_version_id: str | None = None
    source: Path | None = None
    wsi_step_label: ui.label | None = None
    wsi_next_button: ui.button | None = None
@@ -46,10 +41,6 @@ class SubmitForm:
    metadata_next_button: ui.button | None = None
    upload_and_submit_button: ui.button | None = None
    note: str | None = None
-    tags: list[str] | None = None
-    due_date: str = (datetime.now().astimezone() + timedelta(hours=6)).strftime("%Y-%m-%d %H:%M")
-    deadline: str = (datetime.now().astimezone() + timedelta(hours=24)).strftime("%Y-%m-%d %H:%M")
-    validate_only: bool = False
    onboard_to_aignostics_portal: bool = False
@@ -60,32 +51,14 @@ class SubmitForm:
service = Service()
-async def _page_application_describe(application_id: str) -> None:  # noqa: C901, PLR0915
+async def _page_application_describe(application_id: str) -> None:  # noqa: C901, PLR0912, PLR0915
    """Describe Application.

    Args:
        application_id (str): The application ID.
    """
-    ui.add_head_html("""
-
-    """)
-
    spinner = ui.spinner(size="xl").classes("fixed inset-0 m-auto")
-    ui.notify(f"Loading application details for {application_id}...", type="info")
    application = await nicegui_run.io_bound(service.application, application_id)
-    application_versions = await nicegui_run.io_bound(service.application_versions, application_id)
-    ui.notify(
-        (
-            f"Loaded {application.name if application else ''} with "
-            f"{len(application_versions) if application_versions else 0} versions."
- ), - type="positive", - ) spinner.set_visibility(False) if application is None: @@ -109,16 +82,17 @@ async def _page_application_describe(application_id: str) -> None: # noqa: C901 args={"application_id": application_id}, ) - submit_form.application_id = application.application_id - latest_application_version = application.versions[0] if application.versions else None - submit_form.application_version = latest_application_version.number if latest_application_version else None + application_versions = service.application_versions(application) + latest_application_version = application_versions[0] + latest_application_version_id = latest_application_version.application_version_id + submit_form.application_version_id = latest_application_version_id with ui.dialog() as release_notes_dialog, ui.card().style(WIDTH_1200px): ui.label(f"Release notes of {application.name}").classes("text-h5") with ui.scroll_area().classes("w-full h-100"): for application_version in application_versions: - ui.label(f"Version {application_version.version_number}").classes("text-h6") - ui.markdown(application_version.changelog.replace("\n", "\n\n")) + ui.label(f"Version {application_version.version}").classes("text-h6") + ui.markdown(application_version.changelog) with ui.row(align_items="end").classes("w-full"), ui.column(align_items="end").classes("w-full"): ui.button("Close", on_click=release_notes_dialog.close) @@ -222,9 +196,8 @@ async def _on_wsi_next_click() -> None: submit_form.wsi_next_button.set_visibility(False) submit_form.metadata_grid.options["rowData"] = await nicegui_run.cpu_bound( Service.generate_metadata_from_source_directory, + str(submit_form.application_version_id), submit_form.source, - str(submit_form.application_id), - str(submit_form.application_version), True, [".*:staining_method=H&E"], True, @@ -246,29 +219,19 @@ async def _on_wsi_next_click() -> None: stepper.next() except Exception as e: logger.exception("Error generating metadata from source directory") - ui.notify( - f"Error generating metadata: {e!s}", - type="negative", - progress=True, - timeout=1000 * 60 * 5, - close_button=True, - ) + ui.notify(f"Error generating metadata: {e!s}", type="warning") raise else: ui.notify("No source directory selected", type="warning") - @ui.refreshable - def _info_dialog_content() -> None: - """Refreshable content for the info dialog.""" - if submit_form.application_version is None: - ui.label("No version selected").classes("text-h6") + with ui.dialog() as info_dialog, ui.card().style("width: 1200px; max-width: none; height: 1000px"): # noqa: PLR1702 + if submit_form.application_version_id is None: return - with ui.scroll_area().classes("w-full h-[calc(100vh-2rem)]"): for application_version in application_versions: - if application_version.version_number == submit_form.application_version: - ui.label(f"Latest changes in v{application_version.version_number}").classes("text-h5") - ui.markdown(application_version.changelog.replace("\n", "\n\n")) + if application_version.application_version_id == submit_form.application_version_id: + ui.label(f"Latest changes in v{application_version.version}").classes("text-h5") + ui.markdown(application_version.changelog) ui.label("Expected Input Artifacts:").classes("text-h5") for artifact in application_version.input_artifacts: with ui.expansion( @@ -300,11 +263,9 @@ def _info_dialog_content() -> None: "statusBar": False, }).classes("full-width") break - - with ui.dialog() as info_dialog, ui.card().style("width: 1200px; max-width: none; height: 1000px"): - 
_info_dialog_content()
        with ui.row(align_items="end").classes("w-full"), ui.column(align_items="end").classes("w-full"):
            ui.button("Close", on_click=info_dialog.close)
+
    with ui.stepper().props("vertical").classes("w-full") as stepper:  # noqa: PLR1702
        with ui.step("Select Application Version"):
            with ui.row().classes("w-full justify-center"):
@@ -313,16 +274,10 @@ def _info_dialog_content() -> None:
                    f"Select the version of {application.name} you want to run. Not sure? "
                    "Click “Next” to auto-select the latest version"
                )
-                unique_versions = list(
-                    dict.fromkeys(
-                        str(version.number) for version in application.versions if version.number is not None
-                    )
-                )
                ui.select(
-                    options={version: version for version in unique_versions},
-                    value=latest_application_version.number if latest_application_version else None,
-                    on_change=lambda _: _info_dialog_content.refresh(),
-                ).bind_value_to(submit_form, "application_version")
+                    {version.application_version_id: version.version for version in application_versions},
+                    value=latest_application_version_id,
+                ).bind_value(submit_form, "application_version_id")
                ui.space()
                with ui.column(), ui.button(icon="info", on_click=info_dialog.open):
                    ui.tooltip("Show changes and input/output schema of this application version.")
@@ -331,7 +286,7 @@ def _info_dialog_content() -> None:
                    "BUTTON_APPLICATION_VERSION_NEXT"
                )
-        with ui.step("Find Whole Slide Images"):
+        with ui.step("Select Whole Slide Images"):
            submit_form.wsi_step_label = ui.label(
                "Select the folder with the whole slide images you want to analyze then click Next."
            )
@@ -350,7 +305,7 @@ def _info_dialog_content() -> None:
                submit_form.wsi_spinner.set_visibility(False)
            ui.button("Back", on_click=stepper.previous).props("flat")
-        with ui.step("Prepare Whole Slide Images"):
+        with ui.step("Choose Images and Edit Metadata"):
            ui.markdown(
                """
                The Launchpad has found all compatible slide files in your selected folder.
@@ -421,7 +376,7 @@ async def _validate() -> None:
                submit_form.metadata_grid.run_grid_method("autoSizeAllColumns")
            async def _metadata_next() -> None:
-                if submit_form.metadata_grid is None:
+                if submit_form.metadata_grid is None or submit_form.upload_and_submit_button is None:
                    logger.error(MESSAGE_METADATA_GRID_IS_NOT_INITIALIZED)
                    return
                if "pytest" in sys.modules:
@@ -432,10 +387,12 @@ async def _metadata_next() -> None:
                    submit_form.metadata = rows
                else:
                    submit_form.metadata = await submit_form.metadata_grid.get_client_data()
+                _upload_ui.refresh(submit_form.metadata)
+                submit_form.upload_and_submit_button.enable()
                if "pytest" in sys.modules:
-                    message = f"Captured metadata '{submit_form.metadata}' for pytest."
+                    message = f"Prepared upload UI with metadata '{submit_form.metadata}' for pytest."
logger.debug(message) - ui.notify("Metadata captured.", type="info") + ui.notify("Prepared upload UI.", type="info") stepper.next() async def _delete_selected() -> None: @@ -471,7 +428,7 @@ class ThumbnailRenderer { this.eGui = document.createElement('img'); this.eGui.setAttribute('src', `/thumbnail?source=${encodeURIComponent(params.data.source)}`); this.eGui.setAttribute('style', 'height:70px; width: 70px'); - this.eGui.setAttribute('alt', `${params.data.external_id}`); + this.eGui.setAttribute('alt', `${params.data.reference}`); } getGui() { return this.eGui; @@ -482,7 +439,7 @@ class ThumbnailRenderer { submit_form.metadata_grid = ( ui.aggrid({ "columnDefs": [ - {"headerName": "Reference", "field": "path_short", "checkboxSelection": True}, + {"headerName": "Reference", "field": "reference_short", "checkboxSelection": True}, { "headerName": "Thumbnail", "field": "thumbnail", @@ -592,149 +549,24 @@ class ThumbnailRenderer { submit_form.metadata_exclude_button.disable() ui.button("Back", on_click=stepper.previous).props("flat") - with ui.step("Notes and Tags"): - with ui.column(align_items="start").classes("w-full"): - ui.textarea( - label="Note (optional)", - placeholder=( - "Enter a note for this run. " - "Tip: You can later use the search box in the left sidebar " - "(see magnifying glass icon) to find runs by searching for text in this note." - ), - ).bind_value(submit_form, "note").mark("TEXTAREA_NOTE").classes("full-width") - - ui.input_chips( - "Tags (optional, press Enter to add)", - value=submit_form.tags, - new_value_mode="add-unique", - clearable=True, - ).bind_value(submit_form, "tags").classes("full-width").mark("INPUT_TAGS") - - with ui.stepper_navigation(): - ui.button("Next", on_click=stepper.next).mark("BUTTON_NOTES_AND_TAGS_NEXT") - ui.button("Back", on_click=stepper.previous).props("flat") - - with ui.step("Schedule"): - with ui.column(align_items="start").classes("w-full"): - now = datetime.now().astimezone() - today = now.strftime("%Y/%m/%d") - min_hour = (now + timedelta(hours=1)).hour - min_minute = (now + timedelta(hours=1)).minute - ui.label("Soft Due Date").classes("text-h6 mb-0 pb-0") - ui.label( - "The platform will try to complete the run before this time, " - "given your subscription tier and available GPU resources." 
- ).classes("text-sm mt-0 pt-0") - with ui.row().classes("full-width"): - ui.label("") - due_date_date_picker = ( - ui.date(mask="YYYY-MM-DD HH:mm") - .bind_value(submit_form, "due_date") - .props(f":options=\"(date) => date >= '{today}'\"") - .mark("DATE_DUE_DATE") - ) - due_date_time_picker = ( - ui.time(mask="YYYY-MM-DD HH:mm") - .bind_value(submit_form, "due_date") - .props("format24h now-btn") - .mark("TIME_DUE_DATE") - ) - # Add dynamic time restriction based on selected date - ui.run_javascript( - f""" - const datePicker = getElement({due_date_date_picker.id}); - const timePicker = getElement({due_date_time_picker.id}); - const today = '{today}'; - const minHour = {min_hour}; - const minMinute = {min_minute}; - - function updateTimeOptions() {{ - const selectedDate = datePicker?.$refs?.qDateProxy?.modelValue?.split(' ')[0]; - if (!selectedDate) return; - - const selectedDateStr = selectedDate.replace(/-/g, '/'); - const isToday = selectedDateStr === today; - - if (isToday) {{ - timePicker.$refs.qTimeProxy.options = (hr, min) => {{ - if (hr < minHour) return false; - if (hr === minHour && min < minMinute) return false; - return true; - }}; - }} else {{ - timePicker.$refs.qTimeProxy.options = null; - }} - }} - - // Watch for date changes - if (datePicker?.$refs?.qDateProxy) {{ - datePicker.$refs.qDateProxy.$watch('modelValue', updateTimeOptions); - updateTimeOptions(); - }} - """ - ) - ui.label("Hard Deadline").classes("text-h6 mb-0 pb-0") - ui.label("The platform might cancel the run if not completed by this time.").classes( - "text-sm mt-0 pt-0" - ) - with ui.row().classes("full-width"): - ui.date(mask="YYYY-MM-DD HH:mm").bind_value(submit_form, "deadline").props( - f":options=\"(date) => date >= '{today}'\"" - ).mark("DATE_DEADLINE") - ui.time(mask="YYYY-MM-DD HH:mm").bind_value(submit_form, "deadline").props( - "format24h now-btn" - ).mark("TIME_DEADLINE") - - def _scheduling_next() -> None: - if submit_form.upload_and_submit_button is None: - logger.error("Submission submit button is not initialized.") - return - _upload_ui.refresh(submit_form.metadata or []) - submit_form.upload_and_submit_button.enable() - if "pytest" in sys.modules: - ui.notify("Prepared upload UI.", type="info") - stepper.next() - - with ui.stepper_navigation(): - ui.button("Next", on_click=_scheduling_next).mark("BUTTON_SCHEDULING_NEXT") - ui.button("Back", on_click=stepper.previous).props("flat") - def _submit() -> None: """Submit the application run.""" ui.notify("Submitting application run ...", type="info") try: run = service.application_run_submit_from_metadata( - application_id=str(submit_form.application_id), - metadata=submit_form.metadata or [], - application_version=str(submit_form.application_version), - custom_metadata=None, # TODO(Helmut): Allow user to edit custom metadata - note=submit_form.note, - tags=set(submit_form.tags) if submit_form.tags else None, - due_date=datetime.strptime(submit_form.due_date, "%Y-%m-%d %H:%M") - .astimezone() - .astimezone(UTC) - .isoformat(), - deadline=datetime.strptime(submit_form.deadline, "%Y-%m-%d %H:%M") - .astimezone() - .astimezone(UTC) - .isoformat(), - validate_only=submit_form.validate_only, - onboard_to_aignostics_portal=submit_form.onboard_to_aignostics_portal, + str(submit_form.application_version_id), + submit_form.metadata or [], + {"sdk": {"note": submit_form.note}} if submit_form.note else None, + submit_form.onboard_to_aignostics_portal, ) except Exception as e: # noqa: BLE001 - ui.notify( - f"Failed to submit application run: {e}.", - 
type="negative", - progress=True, - timeout=1000 * 60 * 5, - close_button=True, - ) + ui.notify(f"Failed to submit application run: {e}.", type="warning") return ui.notify( - f"Application run submitted with id '{run.run_id}'. Navigating to application run ...", + f"Application run submitted with id '{run.application_run_id}'. Navigating to application run ...", type="positive", ) - ui.navigate.to(f"/application/run/{run.run_id}") + ui.navigate.to(f"/application/run/{run.application_run_id}") async def _upload() -> None: """Upload prepared slides.""" @@ -742,55 +574,36 @@ async def _upload() -> None: if submit_form.upload_and_submit_button is None: logger.error("Submission submit button is not initialized.") return - message = "Uploading whole slide images to Aignostics Platform ..." - logger.debug(message) - ui.notify(message, type="info") + ui.notify("Uploading whole slide images to Aignostics Platform ...", type="info") submit_form.upload_and_submit_button.disable() - await nicegui_run.io_bound( + + await nicegui_run.cpu_bound( Service.application_run_upload, - str(submit_form.application_id), + str(submit_form.application_version_id), submit_form.metadata or [], - str(submit_form.application_version), submit_form.onboard_to_aignostics_portal, str(time.time() * 1000), upload_message_queue, ) - message = "Upload to Aignostics Platform completed." - logger.debug(message) - ui.notify(message, type="positive") + ui.notify("Upload to Aignostics Platform completed.", type="positive") _submit() @ui.refreshable def _upload_ui(metadata: list[dict[str, Any]]) -> None: """Upload UI.""" - with ui.column(align_items="start").classes("w-full"): + with ui.column(align_items="start"): ui.label(f"Upload and submit your {len(metadata)} slide(s) for analysis.") - - # Allow users of some organisations to request onboarding slides to Portal - user_info: UserInfo | None = app.storage.tab.get("user_info", None) - with ui.row().classes("full-width mt-4 mb-4"): - if ( - user_info - and user_info.organization - and user_info.organization.name - and user_info.organization.name.lower() in {"aignostics", "pre-alpha-org", "lmu", "charite"} - ): - ui.checkbox( - text="Onboard Slides and Output to Aignostics Portal", - ).bind_value(submit_form, "onboard_to_aignostics_portal").mark( - "CHECKBOX_ONBOARD_TO_AIGNOSTICS_PORTAL" - ) - # Allow users in aignostics' organisations to do validate only runs - if ( - user_info - and user_info.organization - and user_info.organization.name - and user_info.organization.name.lower() in {"aignostics", "pre-alpha-org"} - ): - ui.checkbox( - text="Validate only", - ).bind_value(submit_form, "validate_only").mark("CHECKBOX_VALIDATE_ONLY") - + ui.textarea( + label="Note (optional)", + placeholder=( + "Enter a note for this run. " + "Tip: You can later use the search box in the left sidebar " + "(see magnifying glass icon) to find runs by searching for text in this note." 
+ ), + ).bind_value(submit_form, "note").mark("TEXTAREA_NOTE").classes("full-width") + ui.checkbox( + text="Onboard to Aignostics Portal (optional)", + ).bind_value(submit_form, "onboard_to_aignostics_portal").mark("CHECKBOX_ONBOARD_TO_AIGNOSTICS_PORTAL") upload_complete = True for row in metadata or []: upload_complete = upload_complete and row["file_upload_progress"] == 1 @@ -806,9 +619,9 @@ def _update_upload_progress() -> None: return while not upload_message_queue.empty(): message = upload_message_queue.get() - if message and isinstance(message, dict) and "external_id" in message: + if message and isinstance(message, dict) and "reference" in message: for row in submit_form.metadata: - if row["external_id"] == message["external_id"]: + if row["reference"] == message["reference"]: if "file_upload_progress" in message: row["file_upload_progress"] = message["file_upload_progress"] break @@ -817,7 +630,7 @@ def _update_upload_progress() -> None: break _upload_ui.refresh(submit_form.metadata) - with ui.step("Submit"): + with ui.step("Slide Submission"): _upload_ui([]) ui.timer(0.1, callback=_update_upload_progress) diff --git a/src/aignostics/application/_gui/_page_application_run_describe.py b/src/aignostics/application/_gui/_page_application_run_describe.py index 241ce93b..5c7d5e7b 100644 --- a/src/aignostics/application/_gui/_page_application_run_describe.py +++ b/src/aignostics/application/_gui/_page_application_run_describe.py @@ -3,31 +3,24 @@ from importlib.util import find_spec from multiprocessing import Manager from pathlib import Path -from typing import TYPE_CHECKING, Any +from typing import Any from urllib.parse import quote import humanize from aiopath import AsyncPath -from nicegui import ( - app, - ui, # noq -) from nicegui import run as nicegui_run +from nicegui import ui # noq -from aignostics.platform import ItemOutput, ItemState, RunState +from aignostics.platform import ApplicationRunStatus, ItemStatus from aignostics.third_party.showinfm.showinfm import show_in_file_manager from aignostics.utils import GUILocalFilePicker, get_logger, get_user_data_directory -if TYPE_CHECKING: - from aignostics.platform import UserInfo - -from .._models import DownloadProgressState # noqa: TID252 -from .._service import Service # noqa: TID252 +from .._service import DownloadProgressState, Service # noqa: TID252 from .._utils import get_mime_type_for_artifact # noqa: TID252 from ._frame import _frame from ._utils import ( mime_type_to_icon, - run_item_status_and_termination_reason_to_icon_and_color, + run_item_status_to_icon_and_color, run_status_to_icon_and_color, ) @@ -38,77 +31,49 @@ service = Service() -async def _page_application_run_describe(run_id: str) -> None: # noqa: C901, PLR0912, PLR0914, PLR0915 +async def _page_application_run_describe(application_run_id: str) -> None: # noqa: C901, PLR0912, PLR0914, PLR0915 """Describe Application. Args: - run_id (str): The ID of the application run to describe. + application_run_id (str): The ID of the application run to describe. 
""" import pandas as pd # noqa: PLC0415 if find_spec("ijson"): from aignostics.qupath import Service as QuPathService # noqa: PLC0415 - ui.add_head_html(""" - - """) - spinner = ui.spinner(size="xl").classes("fixed inset-0 m-auto") - run = await nicegui_run.io_bound(service.application_run, run_id) + run = await nicegui_run.io_bound(service.application_run, application_run_id) spinner.set_visibility(False) run_data = run.details() if run else None if run and run_data: - icon, color = run_status_to_icon_and_color( - run_data.state.value, - run_data.termination_reason, - run_data.statistics.item_count, - run_data.statistics.item_succeeded_count, - ) + icon, color = run_status_to_icon_and_color(run_data.status.value) await _frame( navigation_title=( - f"Run of {run_data.application_id} ({run_data.version_number}) on " - f"{run_data.submitted_at.astimezone().strftime('%m-%d %H:%M')}" + f"Run of {run_data.application_version_id} " + f"on {run_data.triggered_at.astimezone().strftime('%m-%d %H:%M')}" ), navigation_icon=icon, navigation_icon_color=color, - navigation_icon_tooltip=f"Run {run_data.run_id}, status {run_data.state.value.upper()}", + navigation_icon_tooltip=f"Run {run_data.application_run_id}, status {run_data.status.value.upper()}", left_sidebar=True, - args={"run_id": run_id}, + args={"application_run_id": application_run_id}, ) else: await _frame( - navigation_title=f"Run {run_id}", + navigation_title=f"Run {application_run_id}", navigation_icon="bug_report", navigation_icon_color="negative", navigation_icon_tooltip="Could not load run data", left_sidebar=True, - args={"run_id": run_id}, + args={"application_run_id": application_run_id}, ) if run is None: - ui.label(f"Failed to get run '{run_id}'").mark("LABEL_ERROR") # type: ignore[unreachable] + ui.label(f"Failed to get run '{application_run_id}'").mark("LABEL_ERROR") # type: ignore[unreachable] return - # Forward declaration of UI buttons that will be defined later - cancel_button: ui.button - delete_button: ui.button - async def _cancel(run_id: str) -> bool: """Cancel the application run. 
@@ -194,7 +159,6 @@ async def _select_download_destination() -> None: selected_folder.value = str(folder_path) else: selected_folder.value = str(folder_path.parent) - ui.notify(f"Using custom directory: {selected_folder.value}", type="info") download_button.enable() else: ui.notify("No folder selected", type="warning") @@ -202,7 +166,6 @@ async def _select_download_destination() -> None: async def _select_data() -> None: # noqa: RUF029 """Open a file picker dialog and show notifier when closed again.""" selected_folder.value = str(get_user_data_directory("results")) - ui.notify("Using Launchpad results directory", type="info") download_button.enable() with ui.row().classes("w-full"): @@ -230,32 +193,22 @@ async def start_download() -> None: # noqa: C901, PLR0915 ui.notify("Please select a folder first", type="warning") return - ui.notify("Downloading ...", type="info") + ui.notify("Downloading ...", type="info") progress_queue = Manager().Queue() - def update_download_progress() -> None: # noqa: C901, PLR0912 + def update_download_progress() -> None: """Update the progress indicator with values from the queue.""" while not progress_queue.empty(): progress = progress_queue.get() - # Determine status text based on progress state - if progress.status is DownloadProgressState.DOWNLOADING_INPUT: - status_text = ( - f"Downloading input slide {progress.item_index + 1} of {progress.item_count}" - if progress.item_index is not None and progress.item_count - else "Downloading input slide ..." - ) - elif ( - progress.status is DownloadProgressState.DOWNLOADING - and progress.total_artifact_index is not None - ): - status_text = ( + download_item_status.set_text( + progress.status + if progress.status is not DownloadProgressState.DOWNLOADING + or progress.total_artifact_index is None + else ( f"Downloading artifact {progress.total_artifact_index + 1} " f"of {progress.total_artifact_count}" ) - else: - status_text = progress.status - - download_item_status.set_text(status_text) + ) download_item_status.set_visibility(True) download_item_progress.set_value(progress.item_progress_normalized) download_artifact_progress.set_value(progress.artifact_progress_normalized) @@ -263,13 +216,7 @@ def update_download_progress() -> None: # noqa: C901, PLR0912 download_artifact_status.set_visibility(False) download_item_progress.set_visibility(False) download_artifact_progress.set_visibility(False) - elif progress.status is DownloadProgressState.DOWNLOADING_INPUT: - if progress.input_slide_path: - download_artifact_status.set_text(f"Input: {progress.input_slide_path.name}") - download_artifact_status.set_visibility(True) - download_item_progress.set_visibility(True) - download_artifact_progress.set_visibility(True) - elif progress.status is DownloadProgressState.DOWNLOADING: + if progress.status is DownloadProgressState.DOWNLOADING: if progress.artifact_path: download_artifact_status.set_text(str(progress.artifact_path)) download_artifact_status.set_visibility(True) @@ -309,7 +256,7 @@ def update_download_progress() -> None: # noqa: C901, PLR0912 download_button.props(add="loading") results_folder = await nicegui_run.cpu_bound( Service.application_run_download_static, - run_id=run.run_id, + run_id=run.application_run_id, destination_directory=Path(selected_folder.value), wait_for_completion=True, qupath_project=qupath_project, @@ -449,33 +396,6 @@ def tiff_dialog_open(title: str, url: str) -> None: tiff_view_dialog_content.refresh(title=title, url=url) tiff_view_dialog.open() - @ui.refreshable - def 
custom_metadata_dialog_content(title: str | None, custom_metadata: str | None) -> None: - if title: - ui.label(title).classes("text-h5") - if custom_metadata: - try: - ui.json_editor({ - "content": {"json": custom_metadata}, - "mode": "tree", - "readOnly": True, - "mainMenuBar": False, - "navigationBar": True, - "statusBar": False, - }).classes("full-width") - except Exception as e: # noqa: BLE001 - ui.notify(f"Failed to render metadata: {e!s}", type="negative") - - with ui.dialog() as custom_metadata_dialog, ui.card().style(WIDTH_1200px): - custom_metadata_dialog_content(title=None, custom_metadata=None) - with ui.row(align_items="end").classes("w-full"), ui.column(align_items="end").classes("w-full"): - ui.button("Close", on_click=custom_metadata_dialog.close) - - def custom_metadata_dialog_open(title: str, custom_metadata: dict[str, Any]) -> None: - """Open the Custom Metadata dialog.""" - custom_metadata_dialog_content.refresh(title=title, custom_metadata=custom_metadata) - custom_metadata_dialog.open() - async def open_qupath( project: Path | None = None, image: Path | str | None = None, button: ui.button | None = None ) -> None: @@ -506,87 +426,48 @@ def open_marimo(results_folder: Path, button: ui.button | None = None) -> None: if button: button.disable() button.props(add="loading") - ui.navigate.to(f"/notebook/{run.run_id}?results_folder={quote(results_folder.as_posix())}") + ui.navigate.to(f"/notebook/{run.application_run_id}?results_folder={quote(results_folder.as_posix())}") ui.navigate.reload() # TODO(Helmut): Find out why this workaround works. Was just a hunch ... if run_data: # noqa: PLR1702 with ui.row().classes("w-full justify-center"): - expansion = ui.expansion(text=f"Run {run.run_id}", icon="info") - expansion.on_value_change( - lambda e: expansion.classes(add="w-full" if e.value else "", remove="w-full" if not e.value else "") - ) - with expansion: + with ui.expansion(text=f"Run {run.application_run_id}"): # Display run metadata, including duration if possible, using humanize - submitted_at = run_data.submitted_at.astimezone() + triggered_at = run_data.triggered_at.astimezone() terminated_at = run_data.terminated_at.astimezone() if run_data.terminated_at else None - if submitted_at and terminated_at: - duration_seconds = (terminated_at - submitted_at).total_seconds() + if triggered_at and terminated_at: + duration_seconds = (terminated_at - triggered_at).total_seconds() duration_str = humanize.precisedelta(duration_seconds, format="%0.0f") else: duration_str = "N/A" - if run_data.state is RunState.TERMINATED and run_data.termination_reason: - status_str = f"{run_data.state.value} ({run_data.termination_reason.name})" - else: - status_str = f"{run_data.state.value}" - ui.code( f""" - * Run ID: {run_data.run_id} - * Application: {run_data.application_id} ({run_data.version_number}) - * Status: {status_str} - * Output: {run_data.output.name} - - {run_data.statistics.item_count} items - - {run_data.statistics.item_pending_count} pending - - {run_data.statistics.item_processing_count} processing - - {run_data.statistics.item_skipped_count} skipped - - {run_data.statistics.item_succeeded_count} succeeded - - {run_data.statistics.item_user_error_count} user errors - - {run_data.statistics.item_system_error_count} system errors - * Submitted: {submitted_at.strftime("%m-%d %H:%M")} ({run_data.submitted_by}) - * Terminated: {terminated_at.strftime("%m-%d %H:%M") if terminated_at else "N/A"} ({duration_str}) - * Error: {run_data.error_message or "N/A"} ({run_data.error_code or 
"N/A"}) + * Run ID: {run_data.application_run_id} + * Application Version: {run_data.application_version_id} + * Message: {run_data.message} + * Duration: {duration_str} + * Triggered On: {triggered_at.strftime("%m-%d %H:%M")} + * Terminated At: {terminated_at.strftime("%m-%d %H:%M") if terminated_at else "N/A"} + * Triggered By: {run_data.triggered_by} + * Organization: {run_data.organization_id} """, language="markdown", - ).classes("full-width").mark("CODE_RUN_METADATA") - user_info: UserInfo | None = app.storage.tab.get("user_info", None) - if run_data.custom_metadata: - is_editable = user_info and user_info.role in {"admin", "super_admin"} + ).classes("full-width") + if run_data.metadata: properties = { - "content": {"json": run_data.custom_metadata}, + "content": {"json": run_data.metadata}, "mode": "tree", - "readOnly": not is_editable, + "readOnly": True, "mainMenuBar": True, "navigationBar": False, "statusBar": False, } - - async def handle_metadata_change(e: Any) -> None: # noqa: ANN401 - """Handle changes to the custom metadata and update the run.""" - if not is_editable: - return - try: - # Extract the new metadata from the event's content attribute - new_metadata = e.content.get("json") if hasattr(e, "content") else None - if new_metadata: - ui.notify("Updating custom metadata...", type="info") - await nicegui_run.io_bound( - Service.application_run_update_custom_metadata_static, - run_id=run_id, - custom_metadata=new_metadata, - ) - ui.notify("Custom metadata updated successfully!", type="positive") - ui.navigate.reload() - except Exception as ex: # noqa: BLE001 - ui.notify(f"Failed to update custom metadata: {ex!s}", type="negative") - - ui.json_editor(properties, on_change=handle_metadata_change).classes("full-width").mark( - "JSON_EDITOR_CUSTOM_METADATA" - ) + ui.json_editor(properties).classes("full-width").mark("JSON_EDITOR_HEALTH") ui.space() with ui.row().classes("justify-end"): - if run_data.state.value == RunState.TERMINATED and run_data.statistics.item_succeeded_count > 0: + if run_data.status.value == ApplicationRunStatus.COMPLETED: with ui.button_group().props("push"): with ( ui.button("Download", icon="cloud_download", on_click=lambda _: download_run_dialog_open()) @@ -617,36 +498,34 @@ async def handle_metadata_change(e: Any) -> None: # noqa: ANN401 ): ui.tooltip("Open results in Python Notebook served by Marimo") - if run_data.state.value in {RunState.PENDING, RunState.PROCESSING}: + if run_data.status.value == ApplicationRunStatus.RUNNING: cancel_button = ui.button( "Cancel", color="red", - on_click=lambda: _cancel(run.run_id), + on_click=lambda: _cancel(run.application_run_id), icon="cancel", ).mark("BUTTON_APPLICATION_RUN_CANCEL") - if run_data: + if run_data.status.value in { + ApplicationRunStatus.CANCELED_USER, + ApplicationRunStatus.CANCELED_SYSTEM, + ApplicationRunStatus.COMPLETED, + ApplicationRunStatus.COMPLETED_WITH_ERROR, + ApplicationRunStatus.REJECTED, + ApplicationRunStatus.RUNNING, + ApplicationRunStatus.SCHEDULED, + }: delete_button = ui.button( "Delete", color="red", - on_click=lambda: _delete(run.run_id), + on_click=lambda: _delete(run.application_run_id), icon="delete", ).mark("BUTTON_APPLICATION_RUN_RESULT_DELETE") - note = run_data.custom_metadata.get("sdk", {}).get("note") if run_data.custom_metadata else None + note = run_data.metadata.get("sdk", {}).get("note") if run_data.metadata else None if note: - with ui.card().classes("full-width bg-aignostics-light"): - ui.label("Note:").classes("text-italic text-sm text-gray-500") - 
ui.label(str(note)).classes("-mt-4") - - tags = run_data.custom_metadata.get("sdk", {}).get("tags") if run_data.custom_metadata else None - if tags and len(tags): - with ui.row().classes("gap-1 -mt-2 full-width"): - for tag in tags[:20]: - ui.chip( - tag, - on_click=lambda t=tag: ui.navigate.to(f"/?query={quote(str(t))}"), - ).props("small outlined clickable").classes("bg-white text-black") + with ui.card().classes("full-width"): + ui.markdown(str(note)) with ui.list().classes("full-width"): results = list(run.results()) @@ -662,30 +541,25 @@ async def handle_metadata_change(e: Any) -> None: # noqa: ANN401 ui.space() return for item in results: - with ui.item().classes("h-96 px-0").props("clickable"): + with ui.item().classes("h-96").props("clickable"): with ( ui.item_section().classes("h-full"), ui.card().tight().classes("h-full"), ui.row().classes("w-full"), ): - image_file: AsyncPath | None = await AsyncPath(item.external_id).resolve() + image_file: AsyncPath | None = await AsyncPath(item.reference).resolve() if image_file and await image_file.is_file(): image_url = "/thumbnail?source=" + quote(image_file.as_posix()) else: image_file = None image_url = "/application_assets/image-not-found.png" ui.image(image_url).classes("object-contain absolute-center max-h-full") - icon, color = run_item_status_and_termination_reason_to_icon_and_color( - item.state.value, item.termination_reason - ) + icon, color = run_item_status_to_icon_and_color(item.status.value) with ui.row().classes("justify-center w-full"): with ui.icon(icon, color=color).classes("text-4xl pl-2 pt-1").props("floating"): - tooltip = f"Item {item.item_id}, status {item.state.value.upper()}" - if item.termination_reason: - tooltip += f" ({item.termination_reason})" - ui.tooltip(tooltip) + ui.tooltip(f"Item {item.item_id}, status {item.status.value.upper()}") ui.space() - with ui.button_group(): + with ui.button_group().props(): if find_spec("ijson") and QuPathService.is_qupath_installed(): with ui.button( icon="zoom_in", @@ -697,20 +571,10 @@ async def handle_metadata_change(e: Any) -> None: # noqa: ANN401 ) ) ui.tooltip("Open in QuPath") - if item.custom_metadata: - with ui.button( - icon="info", - on_click=lambda _, - custom_metadata=item.custom_metadata, - external_id=item.external_id: custom_metadata_dialog_open( - title=f"Custom Metadata of item {external_id} ", - custom_metadata=custom_metadata, - ), - ).props("floating"): - ui.tooltip("Show custom metadata") if image_file: with ui.button( icon="folder_open", + color="primary", on_click=lambda _, image_file=image_file: show_in_file_manager( str(image_file.parent) ), @@ -719,85 +583,84 @@ async def handle_metadata_change(e: Any) -> None: # noqa: ANN401 with ui.row().classes( "absolute-bottom h-32 bg-indigo-700 bg-opacity-80 content-center w-full p-4" ): - ui.label(item.external_id).classes( + ui.label(item.reference).classes( "text-center break-all text-white font-semibold text-shadow-lg/30" ) - if item.output is ItemOutput.FULL: - with ui.item_section().classes("w-full"), ui.scroll_area().classes("h-full p-0"): - for artifact in sorted(item.output_artifacts, key=lambda a: str(a.name)): - mime_type = get_mime_type_for_artifact(artifact) - with ui.expansion( - str(artifact.name), - icon=mime_type_to_icon(mime_type), - group="artifacts", - ).classes("w-full"): - if artifact.download_url: - url = artifact.download_url - title = artifact.name - metadata = artifact.metadata - with ui.button_group(): - if mime_type == "image/tiff": - ui.button( - "Preview", - 
icon=mime_type_to_icon(mime_type), - on_click=lambda _, url=url, title=title: tiff_dialog_open( - title, url - ), - ) - if mime_type == "text/csv": - ui.button( - "Preview", - icon=mime_type_to_icon(mime_type), - on_click=lambda _, url=url, title=title: csv_dialog_open( - title, url - ), - ) - if url: - ui.button( - text="Download", - icon="cloud_download", - on_click=lambda _, url=url: ui.navigate.to(url, new_tab=True), - ) - if metadata: - ui.button( - text="Schema", - icon="schema", - on_click=lambda _, - title=title, - metadata=metadata: metadata_dialog_open(title, metadata), - ) - elif item.state is ItemState.TERMINATED: - if item.error_message: - with ( - ui.row() - .classes("w-1/2 justify-start items-start content-start ml-4") - .style("max-width: 50%;") - ): + if item.status is ItemStatus.SUCCEEDED: + with ui.item_section().classes("w-full"): + if item.status is ItemStatus.SUCCEEDED and item.output_artifacts: + with ui.scroll_area().classes("h-full").style("padding: 0"): + for artifact in sorted(item.output_artifacts, key=lambda a: str(a.name)): + mime_type = get_mime_type_for_artifact(artifact) + with ui.expansion( + str(artifact.name), + icon=mime_type_to_icon(mime_type), + group="artifacts", + ).classes("w-full"): + if artifact.download_url: + url = artifact.download_url + title = artifact.name + metadata = artifact.metadata + with ui.button_group(): + if mime_type == "image/tiff": + ui.button( + "Preview", + icon=mime_type_to_icon(mime_type), + on_click=lambda _, url=url, title=title: tiff_dialog_open( + title, url + ), + ) + if mime_type == "text/csv": + ui.button( + "Preview", + icon=mime_type_to_icon(mime_type), + on_click=lambda _, url=url, title=title: csv_dialog_open( + title, url + ), + ) + if url: + ui.button( + text="Download", + icon="cloud_download", + on_click=lambda _, url=url: ui.navigate.to( + url, new_tab=True + ), + ) + if metadata: + ui.button( + text="Schema", + icon="schema", + on_click=lambda _, + title=title, + metadata=metadata: metadata_dialog_open(title, metadata), + ) + elif item.status in { + ItemStatus.PENDING, + ItemStatus.ERROR_USER, + ItemStatus.ERROR_SYSTEM, + ItemStatus.CANCELED_USER, + ItemStatus.CANCELED_SYSTEM, + }: + if item.message: + with ui.row().classes("w-1/2 justify-start items-start content-start ml-4"): ui.code( - f"Error: {item.error_message}, code: {item.error_code or 'N/A'}", + item.message, language="markdown", - ).classes("ml-8").style("width: 100%; max-width: 100%;") + ).classes("full-width break-all ml-8") else: with ui.row().classes("w-1/2 justify-center content-center"): ui.space() + animation_file = { + ItemStatus.PENDING: "pending.lottie", + ItemStatus.ERROR_USER: "error.lottie", + ItemStatus.ERROR_SYSTEM: "error.lottie", + ItemStatus.CANCELED_USER: "canceled.lottie", + ItemStatus.CANCELED_SYSTEM: "canceled.lottie", + }[item.status] ui.html( - '', sanitize=False, ) ui.space() - else: - with ui.row().classes("w-1/2 justify-center content-center"): - ui.space() - animation_file = { - ItemState.PENDING: "pending.lottie", - ItemState.PROCESSING: "processing.lottie", # TODO(Helmut): Different icon - }[item.state] - ui.html( - f'', - sanitize=False, - ) - ui.space() diff --git a/src/aignostics/application/_gui/_page_builder.py b/src/aignostics/application/_gui/_page_builder.py index 6169985e..271e16f7 100644 --- a/src/aignostics/application/_gui/_page_builder.py +++ b/src/aignostics/application/_gui/_page_builder.py @@ -13,11 +13,11 @@ def register_pages() -> None: app.add_static_files("/application_assets", 
Path(__file__).parent / "assets")

    @ui.page("/")
-    async def page_index(client: Client, query: str | None = None) -> None:
+    async def page_index(client: Client) -> None:
        """Index page of application module, serving as the homepage of Aignostics Launchpad."""
        from ._page_index import _page_index  # noqa: PLC0415

-        await _page_index(client, query=query)
+        await _page_index(client)

    @ui.page("/application/{application_id}")
    async def page_application_describe(application_id: str) -> None:
@@ -30,13 +30,13 @@ async def page_application_describe(application_id: str) -> None:

        await _page_application_describe(application_id)

-    @ui.page("/application/run/{run_id}")
-    async def page_application_run_describe(run_id: str) -> None:
+    @ui.page("/application/run/{application_run_id}")
+    async def page_application_run_describe(application_run_id: str) -> None:
        """Describe Application Run.

        Args:
-            run_id (str): The application run id
+            application_run_id (str): The application run id
        """
        from ._page_application_run_describe import _page_application_run_describe  # noqa: PLC0415

-        await _page_application_run_describe(run_id)
+        await _page_application_run_describe(application_run_id)
diff --git a/src/aignostics/application/_gui/_page_index.py b/src/aignostics/application/_gui/_page_index.py
index 9b25feac..59b167c7 100644
--- a/src/aignostics/application/_gui/_page_index.py
+++ b/src/aignostics/application/_gui/_page_index.py
@@ -20,20 +20,15 @@
pyi_splash = None


-async def _page_index(client: Client, query: str | None = None) -> None:
-    """Homepage of Applications.
-
-    Args:
-        client: The NiceGUI client.
-        query: Optional query parameter for filtering runs.
-    """
+async def _page_index(client: Client) -> None:
+    """Homepage of Applications."""
    client.content.classes(remove="nicegui-content")
    client.content.classes(add="pl-5 pt-5")

    if pyi_splash and pyi_splash.is_alive():
        pyi_splash.update_text("Connecting with API ...")

-    await _frame("Analyze your Whole Slide Images with AI", left_sidebar=True, args={"query": query})
+    await _frame("Analyze your Whole Slide Images with AI", left_sidebar=True)

    if pyi_splash and pyi_splash.is_alive():
        pyi_splash.close()
diff --git a/src/aignostics/application/_gui/_utils.py b/src/aignostics/application/_gui/_utils.py
index cb59100f..11802305 100644
--- a/src/aignostics/application/_gui/_utils.py
+++ b/src/aignostics/application/_gui/_utils.py
@@ -1,6 +1,6 @@
"""Utility functions for the application GUI."""

-from aignostics.platform import ItemState, ItemTerminationReason, RunState, RunTerminationReason
+from aignostics.platform import ApplicationRunStatus, ItemStatus


def application_id_to_icon(application_id: str) -> str:
@@ -20,70 +20,59 @@ def application_id_to_icon(application_id: str) -> str:
    return "bug_report"


-def run_status_to_icon_and_color(
-    run_status: str, termination_reason: str | None, item_count: int, item_succeeded_count: int
-) -> tuple[str, str]:
-    """Convert run status and termination reason to icon and color.
+def run_status_to_icon_and_color(run_status: str) -> tuple[str, str]:  # noqa: PLR0911
+    """Convert run status to icon and color.

    Args:
        run_status (str): The run status.
-        termination_reason (str): The termination reason.
-        item_count (int): The total number of items in the run.
-        item_succeeded_count (int): The number of items that succeeded in the run.

    Returns:
        tuple[str, str]: The icon name and color.
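+
+        Example:
+            A minimal sketch of the intended mapping (mirroring the cases below):
+
+                run_status_to_icon_and_color(ApplicationRunStatus.COMPLETED.value)  # ("done_all", "positive")
+                run_status_to_icon_and_color("unknown")  # falls back to ("bug_report", "negative")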
""" match run_status: - case RunState.PENDING: - return "schedule", "info" - case RunState.PROCESSING: + case ApplicationRunStatus.RUNNING: + return "directions_run", "info" + case ApplicationRunStatus.CANCELED_USER: + return "cancel", "warning" + case ApplicationRunStatus.CANCELED_SYSTEM: + return "sync_problem", "negative" + case ApplicationRunStatus.COMPLETED: + return "done_all", "positive" + case ApplicationRunStatus.COMPLETED_WITH_ERROR: + return "error", "negative" + case ApplicationRunStatus.RECEIVED: + return "call_received", "info" + case ApplicationRunStatus.REJECTED: + return "hand_gesture_off", "negative" + case ApplicationRunStatus.RUNNING: return "directions_run", "info" - case RunState.TERMINATED: - icon = "bug_report" - if termination_reason == RunTerminationReason.CANCELED_BY_USER: - icon = "cancel" - if termination_reason == RunTerminationReason.CANCELED_BY_SYSTEM: - icon = "error" - if termination_reason == RunTerminationReason.ALL_ITEMS_PROCESSED: - icon = "sports_score" - color = "negative" - if item_succeeded_count <= 0: - color = "negative" - elif item_succeeded_count < item_count: - color = "warning" - elif item_succeeded_count == item_count: - color = "positive" - return (icon, color) + case ApplicationRunStatus.SCHEDULED: + return "schedule", "info" return "bug_report", "negative" -def run_item_status_and_termination_reason_to_icon_and_color( # noqa: PLR0911 - item_status: str, termination_reason: str | None -) -> tuple[str, str]: - """Convert item status and termination reason to icon and color. +def run_item_status_to_icon_and_color(run_status: str) -> tuple[str, str]: # noqa: PLR0911 + """Convert run item status to icon. Args: - item_status (str): The item status. - termination_reason (str | None): The termination reason. + run_status (str): The run item status. Returns: tuple[str, str]: The icon name and color. 
""" - match item_status: - case ItemState.PENDING: - return "schedule", "info" - case ItemState.PROCESSING: - return "directions_run", "info" - case ItemState.TERMINATED: - if termination_reason == ItemTerminationReason.SKIPPED: - return "next_plan", "warning" - if termination_reason == ItemTerminationReason.SUCCEEDED: - return "check_circle", "positive" - if termination_reason == ItemTerminationReason.SYSTEM_ERROR: - return "error", "negative" - if termination_reason == ItemTerminationReason.USER_ERROR: - return "warning", "negative" + match run_status: + case ItemStatus.PENDING: + return "pending", "info" + case ItemStatus.CANCELED_USER: + return "cancel", "warning" + case ItemStatus.CANCELED_SYSTEM: + return "sync_problem", "negative" + case ItemStatus.ERROR_USER: + return "hand_gesture_off", "negative" + case ItemStatus.ERROR_SYSTEM: + return "error", "negative" + case ItemStatus.SUCCEEDED: + return "check", "positive" return "bug_report", "negative" diff --git a/src/aignostics/application/_gui/assets/processing.lottie b/src/aignostics/application/_gui/assets/pending_alt.lottie similarity index 100% rename from src/aignostics/application/_gui/assets/processing.lottie rename to src/aignostics/application/_gui/assets/pending_alt.lottie diff --git a/src/aignostics/application/_models.py b/src/aignostics/application/_models.py deleted file mode 100644 index a95365ab..00000000 --- a/src/aignostics/application/_models.py +++ /dev/null @@ -1,131 +0,0 @@ -"""Data models for the application module.""" - -from enum import StrEnum -from importlib.util import find_spec -from pathlib import Path - -from pydantic import BaseModel, computed_field - -from aignostics.platform import ItemResult, OutputArtifactElement, RunData - -has_qupath_extra = find_spec("ijson") -if has_qupath_extra: - from aignostics.qupath import AddProgress as QuPathAddProgress - from aignostics.qupath import AnnotateProgress as QuPathAnnotateProgress - - -class DownloadProgressState(StrEnum): - """Enum for download progress states.""" - - INITIALIZING = "Initializing ..." - DOWNLOADING_INPUT = "Downloading input slide ..." - QUPATH_ADD_INPUT = "Adding input slides to QuPath project ..." - CHECKING = "Checking run status ..." - WAITING = "Waiting for item completing ..." - DOWNLOADING = "Downloading artifact ..." - QUPATH_ADD_RESULTS = "Adding result images to QuPath project ..." - QUPATH_ANNOTATE_INPUT_WITH_RESULTS = "Annotating input slides in QuPath project with results ..." - COMPLETED = "Completed." 
- - -class DownloadProgress(BaseModel): - """Model for tracking download progress with computed progress metrics.""" - - status: DownloadProgressState = DownloadProgressState.INITIALIZING - run: RunData | None = None - item: ItemResult | None = None - item_count: int | None = None - item_index: int | None = None - item_external_id: str | None = None - artifact: OutputArtifactElement | None = None - artifact_count: int | None = None - artifact_index: int | None = None - artifact_path: Path | None = None - artifact_download_url: str | None = None - artifact_size: int | None = None - artifact_downloaded_chunk_size: int = 0 - artifact_downloaded_size: int = 0 - input_slide_path: Path | None = None - input_slide_url: str | None = None - input_slide_size: int | None = None - input_slide_downloaded_chunk_size: int = 0 - input_slide_downloaded_size: int = 0 - if has_qupath_extra: - qupath_add_input_progress: QuPathAddProgress | None = None - qupath_add_results_progress: QuPathAddProgress | None = None - qupath_annotate_input_with_results_progress: QuPathAnnotateProgress | None = None - - @computed_field # type: ignore - @property - def total_artifact_count(self) -> int | None: - """Calculate total number of artifacts across all items. - - Returns: - int | None: Total artifact count or None if counts not available. - """ - if self.item_count and self.artifact_count: - return self.item_count * self.artifact_count - return None - - @computed_field # type: ignore - @property - def total_artifact_index(self) -> int | None: - """Calculate the current artifact index across all items. - - Returns: - int | None: Total artifact index or None if indices not available. - """ - if self.item_count and self.artifact_count and self.item_index is not None and self.artifact_index is not None: - return self.item_index * self.artifact_count + self.artifact_index - return None - - @computed_field # type: ignore - @property - def item_progress_normalized(self) -> float: # noqa: PLR0911 - """Compute normalized item progress in range 0..1. - - Returns: - float: The normalized item progress in range 0..1. - """ - if self.status == DownloadProgressState.DOWNLOADING_INPUT: - if (not self.item_count) or self.item_index is None: - return 0.0 - return min(1, float(self.item_index + 1) / float(self.item_count)) - if self.status == DownloadProgressState.DOWNLOADING: - if (not self.total_artifact_count) or self.total_artifact_index is None: - return 0.0 - return min(1, float(self.total_artifact_index + 1) / float(self.total_artifact_count)) - if has_qupath_extra: - if self.status == DownloadProgressState.QUPATH_ADD_INPUT and self.qupath_add_input_progress: - return self.qupath_add_input_progress.progress_normalized - if self.status == DownloadProgressState.QUPATH_ADD_RESULTS and self.qupath_add_results_progress: - return self.qupath_add_results_progress.progress_normalized - if self.status == DownloadProgressState.QUPATH_ANNOTATE_INPUT_WITH_RESULTS: - if (not self.item_count) or (not self.item_index): - return 0.0 - return min(1, float(self.item_index + 1) / float(self.item_count)) - return 0.0 - - @computed_field # type: ignore - @property - def artifact_progress_normalized(self) -> float: - """Compute normalized artifact progress in range 0..1. - - Returns: - float: The normalized artifact progress in range 0..1. 
- """ - if self.status == DownloadProgressState.DOWNLOADING_INPUT: - if not self.input_slide_size: - return 0.0 - return min(1, float(self.input_slide_downloaded_size) / float(self.input_slide_size)) - if self.status == DownloadProgressState.DOWNLOADING: - if not self.artifact_size: - return 0.0 - return min(1, float(self.artifact_downloaded_size) / float(self.artifact_size)) - if ( - has_qupath_extra - and self.status == DownloadProgressState.QUPATH_ANNOTATE_INPUT_WITH_RESULTS - and self.qupath_annotate_input_with_results_progress - ): - return self.qupath_annotate_input_with_results_progress.progress_normalized - return 0.0 diff --git a/src/aignostics/application/_service.py b/src/aignostics/application/_service.py index 3e1ed796..7d9b1872 100644 --- a/src/aignostics/application/_service.py +++ b/src/aignostics/application/_service.py @@ -4,6 +4,7 @@ import re import time from collections.abc import Callable, Generator +from enum import StrEnum from http import HTTPStatus from importlib.util import find_spec from pathlib import Path @@ -11,45 +12,35 @@ import google_crc32c import requests +import semver +from pydantic import BaseModel, computed_field from aignostics.bucket import Service as BucketService -from aignostics.constants import ( - TEST_APP_APPLICATION_ID, -) +from aignostics.constants import WSI_SUPPORTED_FILE_EXTENSIONS from aignostics.platform import ( LIST_APPLICATION_RUNS_MAX_PAGE_SIZE, ApiException, Application, - ApplicationSummary, + ApplicationRun, + ApplicationRunData, + ApplicationRunStatus, ApplicationVersion, Client, InputArtifact, InputItem, + ItemResult, + ItemStatus, NotFoundException, - Run, - RunData, - RunOutput, - RunState, + OutputArtifactElement, ) from aignostics.platform import ( Service as PlatformService, ) -from aignostics.utils import BaseService, Health, get_logger, sanitize_path_component +from aignostics.utils import UNHIDE_SENSITIVE_INFO, BaseService, Health, get_logger, sanitize_path_component from aignostics.wsi import Service as WSIService -from ._download import ( - download_available_items, - download_url_to_file_with_progress, - extract_filename_from_url, - update_progress, -) -from ._models import DownloadProgress, DownloadProgressState from ._settings import Settings -from ._utils import ( - get_mime_type_for_artifact, - get_supported_extensions_for_application, - validate_due_date, -) +from ._utils import get_file_extension_for_artifact, get_mime_type_for_artifact has_qupath_extra = find_spec("ijson") if has_qupath_extra: @@ -66,7 +57,98 @@ APPLICATION_RUN_UPLOAD_CHUNK_SIZE = 1024 * 1024 # 1MB -class Service(BaseService): # noqa: PLR0904 +class DownloadProgressState(StrEnum): + """Enum for download progress states.""" + + INITIALIZING = "Initializing ..." + QUPATH_ADD_INPUT = "Adding input slides to QuPath project ..." + CHECKING = "Checking run status ..." + WAITING = "Waiting for item completing ..." + DOWNLOADING = "Downloading artifact ..." + QUPATH_ADD_RESULTS = "Adding result images to QuPath project ..." + QUPATH_ANNOTATE_INPUT_WITH_RESULTS = "Annotating input slides in QuPath project with results ..." + COMPLETED = "Completed." 
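+
+# A minimal usage sketch (not exhaustive): the download worker pushes DownloadProgress
+# snapshots through a multiprocessing Manager().Queue(), and the GUI timer callback
+# (update_download_progress in _page_application_run_describe.py) renders them, e.g.
+#
+#   progress_queue.put(DownloadProgress(
+#       status=DownloadProgressState.DOWNLOADING,
+#       item_count=2, item_index=0, artifact_count=4, artifact_index=1,
+#   ))  # rendered as "Downloading artifact 2 of 8"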
+ + +class DownloadProgress(BaseModel): + status: DownloadProgressState = DownloadProgressState.INITIALIZING + run: ApplicationRunData | None = None + item: ItemResult | None = None + item_count: int | None = None + item_index: int | None = None + item_reference: str | None = None + artifact: OutputArtifactElement | None = None + artifact_count: int | None = None + artifact_index: int | None = None + artifact_path: Path | None = None + artifact_download_url: str | None = None + artifact_size: int | None = None + artifact_downloaded_chunk_size: int = 0 + artifact_downloaded_size: int = 0 + if has_qupath_extra: + qupath_add_input_progress: QuPathAddProgress | None = None + qupath_add_results_progress: QuPathAddProgress | None = None + qupath_annotate_input_with_results_progress: QuPathAnnotateProgress | None = None + + @computed_field # type: ignore + @property + def total_artifact_count(self) -> int | None: + if self.item_count and self.artifact_count: + return self.item_count * self.artifact_count + return None + + @computed_field # type: ignore + @property + def total_artifact_index(self) -> int | None: + if self.item_count and self.artifact_count and self.item_index is not None and self.artifact_index is not None: + return self.item_index * self.artifact_count + self.artifact_index + return None + + @computed_field # type: ignore + @property + def item_progress_normalized(self) -> float: # noqa: PLR0911 + """Compute normalized item progress in range 0..1. + + Returns: + float: The normalized item progress in range 0..1. + """ + if self.status == DownloadProgressState.DOWNLOADING: + if (not self.total_artifact_count) or self.total_artifact_index is None: + return 0.0 + return min(1, float(self.total_artifact_index + 1) / float(self.total_artifact_count)) + if has_qupath_extra: + if self.status == DownloadProgressState.QUPATH_ADD_INPUT and self.qupath_add_input_progress: + return self.qupath_add_input_progress.progress_normalized + if self.status == DownloadProgressState.QUPATH_ADD_RESULTS and self.qupath_add_results_progress: + return self.qupath_add_results_progress.progress_normalized + if self.status == DownloadProgressState.QUPATH_ANNOTATE_INPUT_WITH_RESULTS: + if (not self.item_count) or (not self.item_index): + return 0.0 + return min(1, float(self.item_index + 1) / float(self.item_count)) + return 0.0 + + @computed_field # type: ignore + @property + def artifact_progress_normalized(self) -> float: + """Compute normalized artifact progress in range 0..1. + + Returns: + float: The normalized artifact progress in range 0..1. + """ + if self.status == DownloadProgressState.DOWNLOADING: + if not self.artifact_size: + return 0.0 + return min(1, float(self.artifact_downloaded_size) / float(self.artifact_size)) + if ( + has_qupath_extra + and self.status == DownloadProgressState.QUPATH_ANNOTATE_INPUT_WITH_RESULTS + and self.qupath_annotate_input_with_results_progress + ): + return self.qupath_annotate_input_with_results_progress.progress_normalized + return 0.0 + + +class Service(BaseService): """Service of the application module.""" _settings: Settings @@ -77,7 +159,7 @@ def __init__(self) -> None: """Initialize service.""" super().__init__(Settings) # automatically loads and validates the settings - def info(self, mask_secrets: bool = True) -> dict[str, Any]: # noqa: ARG002, PLR6301 + def info(self, mask_secrets: bool = True) -> dict[str, Any]: """Determine info of this service. 
        Args:
@@ -86,7 +168,7 @@ def info(self, mask_secrets: bool = True) -> dict[str, Any]:  # noqa: ARG002, PL
        Returns:
            dict[str,Any]: The info of this service.
        """
-        return {}
+        return {"settings": self._settings.model_dump(context={UNHIDE_SENSITIVE_INFO: not mask_secrets})}

    def health(self) -> Health:  # noqa: PLR6301
        """Determine health of this service.
@@ -131,7 +213,7 @@ def _get_platform_service(self) -> PlatformService:
        return self._platform_service

    @staticmethod
-    def applications_static() -> list[ApplicationSummary]:
+    def applications_static() -> list[Application]:
        """Get a list of all applications, static variant.

        Returns:
@@ -145,7 +227,7 @@ def applications_static() -> list[ApplicationSummary]:
        """
        return Service().applications()

-    def applications(self) -> list[ApplicationSummary]:
+    def applications(self) -> list[Application]:
        """Get a list of all applications.

        Returns:
@@ -164,13 +246,13 @@ def applications(self) -> list[ApplicationSummary]:
        ]

    def application(self, application_id: str) -> Application:
-        """Get application.
+        """Get a specific application.

        Args:
            application_id (str): The ID of the application.

        Returns:
            Application: The application.

        Raises:
            NotFoundException: If the application with the given ID is not found.
@@ -187,91 +269,98 @@ def application(self, application_id: str) -> Application:
            logger.exception(message)
            raise RuntimeError(message) from e

-    def application_version(self, application_id: str, application_version: str | None = None) -> ApplicationVersion:
+    def application_version(
+        self, application_version_id: str, use_latest_if_no_version_given: bool = False
+    ) -> ApplicationVersion:
        """Get a specific application version.

        Args:
-            application_id (str): The ID of the application
-            application_version (str|None): The version of the application (semver).
-                If not given latest version is used.
+            application_version_id (str): The ID of the application version
+            use_latest_if_no_version_given (bool): If True, use the latest version if no specific version is given.

        Returns:
            ApplicationVersion: The application version

        Raises:
-            ValueError: If
-                the application version number is invalid.
-            NotFoundException: If the application version with the given ID and number is not found.
+            ValueError: If the application version ID is invalid.
+            NotFoundException: If the application with the given ID is not found.
            RuntimeError: If the application cannot be retrieved unexpectedly.
        """
-        try:
-            return self._get_platform_client().application_version(application_id, application_version)
-        except ValueError:
-            raise
-        except NotFoundException as e:
-            message = f"Application with ID '{application_id}' not found: {e}"
-            logger.warning(message)
-            raise NotFoundException(message) from e
-        except Exception as e:
-            message = f"Failed to retrieve application with ID '{application_id}': {e}"
-            logger.exception(message)
-            raise RuntimeError(message) from e
-
-    @staticmethod
-    def application_versions_static(application_id: str) -> list[ApplicationVersion]:
-        """Get a list of all versions for a specific application, static variant.
+        # Validate format: application_id:vX.Y.Z (where X.Y.Z is a semver)
+        # This checks for proper format like "he-tme:v0.50.0" where "he-tme" is the application id
+        # and "v0.50.0" is the version with proper semver format
+        match = re.match(r"^([^:]+):v(.+)$", application_version_id)
+        if not match or not semver.Version.is_valid(match.group(2)):
+            if use_latest_if_no_version_given:
+                application_id = match.group(1) if match else application_version_id
+                latest_version = self.application_version_latest(self.application(application_id))
+                if latest_version:
+                    return latest_version
+                message = (
+                    f"No valid application version found for '{application_version_id}' and no latest version available."
+                )
+                logger.warning(message)
+                raise ValueError(message)
+            message = f"Invalid application version id format: {application_version_id}. "
+            message += "Expected format: application_id:vX.Y.Z"
+            raise ValueError(message)
-        Args:
-            application_id (str): The ID of the application.

+        application_id = match.group(1)
+        application = self.application(application_id)
+        for version in self.application_versions(application):
+            if version.application_version_id == application_version_id:
+                return version
+        message = f"Application version with ID {application_version_id} not found in application {application_id}"
+        raise NotFoundException(message)
-        Returns:
-            list[ApplicationVersion]: A list of all versions for the application.
-
-        Raises:
-            Exception: If the application versions cannot be retrieved.
-        """
-        return Service().application_versions(application_id)
-
-    def application_versions(self, application_id: str) -> list[ApplicationVersion]:
-        """Get a list of all versions for a specific application.
+    def application_versions(self, application: Application) -> list[ApplicationVersion]:
+        """Get a list of all versions of the given application.

        Args:
-            application_id (str): The ID of the application.
+            application (Application): The application to check for versions.

        Returns:
-            list[ApplicationVersion]: A list of all versions for the application.
+            list[ApplicationVersion]: A list of all application versions sorted by semantic versioning (latest first).

        Raises:
-            RuntimeError: If the versions cannot be retrieved unexpectedly.
            NotFoundException: If the application with the given ID is not found.
+            RuntimeError: If version list cannot be retrieved unexpectedly.
        """
-        # TODO(Andreas): Have to make calls for all application versions to construct
-        # Changelog dialog on run describe page.
-        # Can be optimized to one call if API would support it.
-        # Let's discuss if we should re-add the endpoint that existed.
        try:
-            client = self._get_platform_client()
-            return [
-                client.application_version(application_id, version.number)
-                for version in client.versions.list(application_id)
-            ]
+            return self._get_platform_client().applications.versions.list_sorted(application=application)
        except NotFoundException as e:
-            message = f"Application with ID '{application_id}' not found: {e}"
+            message = f"Application with ID '{application.application_id}' not found: {e}"
            logger.warning(message)
            raise NotFoundException(message) from e
        except Exception as e:
-            message = f"Failed to retrieve versions for application with ID '{application_id}': {e}"
+            message = f"Failed to retrieve application versions for application '{application.application_id}': {e}"
            logger.exception(message)
            raise RuntimeError(message) from e

+    def application_version_latest(self, application: Application) -> ApplicationVersion | None:
+        """Get the latest application version.
+
+        Args:
+            application (Application): The application to check for versions.
+
+        Returns:
+            ApplicationVersion | None: The latest application version, or None if the application has no versions.
+
+        Raises:
+            NotFoundException: If the application with the given ID is not found.
+            RuntimeError: If version list cannot be retrieved unexpectedly.
+        """
+        versions = self.application_versions(application)
+        return versions[0] if versions else None

    @staticmethod
-    def _process_key_value_pair(entry: dict[str, Any], key_value: str, external_id: str) -> None:
+    def _process_key_value_pair(entry: dict[str, Any], key_value: str, reference: str) -> None:
        """Process a single key-value pair from a mapping.

        Args:
-            entry (dict[str, Any]): The entry dictionary to update
-            key_value (str): String in the format "key=value"
-            external_id (str): The external_id value for logging
+            entry: The entry dictionary to update
+            key_value: String in the format "key=value"
+            reference: The reference value for logging
        """
        key, value = key_value.split("=", 1)
        key = key.strip()
@@ -280,43 +369,42 @@ def _process_key_value_pair(entry: dict[str, Any], key_value: str, external_id:
            return

        if key not in entry:
-            logger.warning("key '%s' not found in entry, ignoring mapping for '%s'", key, external_id)
+            logger.warning("key '%s' not found in entry, ignoring mapping for '%s'", key, reference)
            return

-        logger.debug("Updating key '%s' with value '%s' for external_id '%s'.", key, value, external_id)
+        logger.debug("Updating key '%s' with value '%s' for reference '%s'.", key, value, reference)
        entry[key.strip()] = value.strip()

    @staticmethod
    def _apply_mappings_to_entry(entry: dict[str, Any], mappings: list[str]) -> None:
        """Apply key/value mappings to an entry.

-        If the external_id attribute of the entry matches the regex pattern in the mapping,
+        If the reference attribute of the entry matches the regex pattern in the mapping,
        the key/value pairs are applied.

        Args:
-            entry (dict[str, Any]): The entry dictionary to update with mapped values
-            mappings (list[str]): List of strings with format 'regex:key=value,...'
-                where regex ismatched against the external_id attribute in the entry
+            entry: The entry dictionary to update with mapped values
+            mappings: List of strings with format 'regex:key=value,...'
+                where regex is matched against the reference attribute in the entry
        """
-        external_id = entry["external_id"]
+        reference = entry["reference"]
        for mapping in mappings:
            parts = mapping.split(":", 1)
            if len(parts) != 2:  # noqa: PLR2004
                continue
            pattern = parts[0].strip()
-            if not re.search(pattern, external_id):
+            if not re.search(pattern, reference):
                continue
            key_value_pairs = parts[1].split(",")
            for key_value in key_value_pairs:
-                Service._process_key_value_pair(entry, key_value, external_id)
+                Service._process_key_value_pair(entry, key_value, reference)

    @staticmethod
-    def generate_metadata_from_source_directory(  # noqa: PLR0913, PLR0917
+    def generate_metadata_from_source_directory(
+        application_version_id: str,
        source_directory: Path,
-        application_id: str,
-        application_version: str | None = None,
        with_gui_metadata: bool = False,
        mappings: list[str] | None = None,
        with_extra_metadata: bool = False,
@@ -326,7 +414,7 @@

        Steps:
        1. Recursively files ending with supported extensions in the source directory
        2. Creates a dict with the following columns
-            - external_id (str): The external_id of the file, by default equivalent to the absolute file name
+            - reference (str): The reference of the file, by default equivalent to the absolute file name
            - source (str): The absolute filename
            - checksum_base64_crc32c (str): The CRC32C checksum of the file constructed, base64 encoded
            - resolution_mpp (float): The microns per pixel, inspecting the base layer
        3. Applies the optional mappings to fill in additional metadata fields in the dict.

        Args:
+            application_version_id (str): The ID of the application version.
+                If application id is given, the latest version of that application is used.
            source_directory (Path): The source directory to generate metadata from.
-            application_id (str): The ID of the application.
-            application_version (str|None): The version of the application (semver).
-                If not given latest version is used.
            with_gui_metadata (bool): If True, include additional metadata for GUI.
            mappings (list[str]): Mappings of the form '<regex>:<key>=<value>,<key>=<value>,...'.
-                The regular expression is matched against the external_id attribute of the entry.
+                The regular expression is matched against the reference attribute of the entry.
                The key/value pairs are applied to the entry if the pattern matches.
            with_extra_metadata (bool): If True, include extra metadata from the WSIService.

        Raises:
            NotFoundError: If the application version with the given ID is not found.
-            ValueError: If
-                the source directory does not exist
-                or is not a directory.
+            ValueError: If the source directory does not exist or is not a directory.
            RuntimeError: If the metadata generation fails unexpectedly.
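+
+        Example:
+            A sketch with a hypothetical source directory (note that mapped keys
+            must already exist in the generated entry, otherwise they are ignored):
+
+                Service.generate_metadata_from_source_directory(
+                    application_version_id="he-tme:v0.50.0",
+                    source_directory=Path("/data/slides"),
+                    mappings=[r"\.tiff$:resolution_mpp=0.25"],
+                )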
""" logger.debug("Generating metadata from source directory: %s", source_directory) # TODO(Helmut): Use it - _ = Service().application_version(application_id, application_version) + _ = Service().application_version(application_version_id, use_latest_if_no_version_given=True) metadata = [] try: - extensions = get_supported_extensions_for_application(application_id) - for extension in extensions: + for extension in list(WSI_SUPPORTED_FILE_EXTENSIONS): for file_path in source_directory.glob(f"**/*{extension}"): # Generate CRC32C checksum with google_crc32c and encode as base64 hash_sum = google_crc32c.Checksum() # type: ignore[no-untyped-call] @@ -382,10 +466,10 @@ def generate_metadata_from_source_directory( # noqa: PLR0913, PLR0917 height = image_metadata["dimensions"]["height"] mpp = image_metadata["resolution"]["mpp_x"] file_size_human = image_metadata["file"]["size_human"] - path = file_path.absolute() + reference = file_path.absolute() entry = { - "external_id": str(path), - "path_name": str(path.name), + "reference": str(reference), + "reference_short": str(reference.name), "source": str(file_path), "checksum_base64_crc32c": checksum, "resolution_mpp": mpp, @@ -402,7 +486,7 @@ def generate_metadata_from_source_directory( # noqa: PLR0913, PLR0917 entry["extra"] = image_metadata.get("extra", {}) if not with_gui_metadata: - entry.pop("path_name", None) + entry.pop("reference_short", None) entry.pop("source", None) entry.pop("file_size_human", None) entry.pop("file_upload_progress", None) @@ -426,9 +510,8 @@ def generate_metadata_from_source_directory( # noqa: PLR0913, PLR0917 @staticmethod def application_run_upload( # noqa: PLR0913, PLR0917 - application_id: str, + application_version_id: str, metadata: list[dict[str, Any]], - application_version: str | None = None, onboard_to_aignostics_portal: bool = False, upload_prefix: str = str(time.time() * 1000), upload_progress_queue: Any | None = None, # noqa: ANN401 @@ -437,10 +520,9 @@ def application_run_upload( # noqa: PLR0913, PLR0917 """Upload files with a progress queue. Args: - application_id (str): The ID of the application. + application_version_id (str): The ID of the application version. + If application id is given, the latest version of that application is used. metadata (list[dict[str, Any]]): The metadata to upload. - application_version (str|None): The version ID of the application. - If not given latest version is used. onboard_to_aignostics_portal (bool): True if the run should be onboarded to the Aignostics Portal. upload_prefix (str): The prefix for the upload, defaults to current milliseconds. upload_progress_queue (Queue | None): The queue to send progress updates to. 
@@ -457,16 +539,16 @@ def application_run_upload( # noqa: PLR0913, PLR0917 import psutil # noqa: PLC0415 logger.debug("Uploading files with upload ID '%s'", upload_prefix) - app_version = Service().application_version(application_id, application_version=application_version) + application_version = Service().application_version(application_version_id, use_latest_if_no_version_given=True) for row in metadata: - external_id = row["external_id"] - source_file_path = Path(row["external_id"]) + reference = row["reference"] + source_file_path = Path(row["reference"]) if not source_file_path.is_file(): - logger.warning("Source file '%s' does not exist.", row["external_id"]) + logger.warning("Source file '%s' does not exist.", row["reference"]) return False username = psutil.Process().username().replace("\\", "_") object_key = ( - f"{username}/{upload_prefix}/{application_id}/{app_version.version_number}/{source_file_path.name}" + f"{username}/{upload_prefix}/{application_version.application_version_id}/{source_file_path.name}" ) if onboard_to_aignostics_portal: object_key = f"onboard/{object_key}" @@ -477,7 +559,7 @@ def application_run_upload( # noqa: PLR0913, PLR0917 logger.debug("Generated signed upload URL '%s' for object '%s'", signed_upload_url, platform_bucket_url) if upload_progress_queue: upload_progress_queue.put_nowait({ - "external_id": external_id, + "reference": reference, "platform_bucket_url": platform_bucket_url, }) file_size = source_file_path.stat().st_size @@ -493,7 +575,7 @@ def application_run_upload( # noqa: PLR0913, PLR0917 ): def read_in_chunks( # noqa: PLR0913, PLR0917 - external_id: str, + reference: str, file_size: int, upload_progress_queue: Any | None = None, # noqa: ANN401 upload_progress_callable: Callable[[int, Path, str], None] | None = None, @@ -506,7 +588,7 @@ def read_in_chunks( # noqa: PLR0913, PLR0917 break if upload_progress_queue: upload_progress_queue.put_nowait({ - "external_id": external_id, + "reference": reference, "file_upload_progress": min(100.0, f.tell() / file_size), }) if upload_progress_callable: @@ -515,7 +597,7 @@ def read_in_chunks( # noqa: PLR0913, PLR0917 response = requests.put( signed_upload_url, - data=read_in_chunks(external_id, file_size, upload_progress_queue, upload_progress_callable), + data=read_in_chunks(reference, file_size, upload_progress_queue, upload_progress_callable), headers={"Content-Type": "application/octet-stream"}, timeout=60, ) @@ -524,241 +606,79 @@ def read_in_chunks( # noqa: PLR0913, PLR0917 return True @staticmethod - def application_runs_static( # noqa: PLR0913, PLR0917 - application_id: str | None = None, - application_version: str | None = None, - external_id: str | None = None, - has_output: bool = False, + def application_runs_static( + limit: int | None = None, + completed_only: bool = False, note_regex: str | None = None, note_query_case_insensitive: bool = True, - tags: set[str] | None = None, - query: str | None = None, - limit: int | None = None, ) -> list[dict[str, Any]]: """Get a list of all application runs, static variant. Args: - application_id (str | None): The ID of the application to filter runs. If None, no filtering is applied. - application_version (str | None): The version of the application to filter runs. - If None, no filtering is applied. - external_id (str | None): The external ID to filter runs. If None, no filtering is applied. - has_output (bool): If True, only runs with partial or full output are retrieved. + limit (int | None): The maximum number of runs to retrieve. 
If None, all runs are retrieved. + completed_only (bool): If True, only completed runs are retrieved. note_regex (str | None): Optional regex to filter runs by note metadata. If None, no filtering is applied. - Cannot be used together with query parameter. note_query_case_insensitive (bool): If True, the note_regex is case insensitive. Default is True. - tags (set[str] | None): Optional set of tags to filter runs. All tags must match. - Cannot be used together with query parameter. - query (str | None): Optional string to filter runs by note OR tags (case insensitive partial match). - If None, no filtering is applied. Cannot be used together with custom_metadata, note_regex, or tags. - Performs a union search: matches runs where the query appears in the note OR matches any tag. - limit (int | None): The maximum number of runs to retrieve. If None, all runs are retrieved. Returns: - list[RunData]: A list of all application runs. + list[ApplicationRunData]: A list of all application runs. Raises: - ValueError: If query is used together with custom_metadata, note_regex, or tags. RuntimeError: If the application run list cannot be retrieved. """ return [ { - "run_id": run.run_id, - "application_id": run.application_id, - "version_number": run.version_number, - "submitted_at": run.submitted_at, - "terminated_at": run.terminated_at, - "state": run.state, - "termination_reason": run.termination_reason, - "item_count": run.statistics.item_count, - "item_succeeded_count": run.statistics.item_succeeded_count, - "tags": run.custom_metadata.get("sdk", {}).get("tags", []) if run.custom_metadata else [], + "application_run_id": run.application_run_id, + "application_version_id": run.application_version_id, + "triggered_at": run.triggered_at, + "status": run.status, } for run in Service().application_runs( - application_id=application_id, - application_version=application_version, - external_id=external_id, - has_output=has_output, + limit=limit, + status=ApplicationRunStatus.COMPLETED if completed_only else None, note_regex=note_regex, note_query_case_insensitive=note_query_case_insensitive, - tags=tags, - query=query, - limit=limit, ) ] - def application_runs( # noqa: C901, PLR0912, PLR0913, PLR0914, PLR0915, PLR0917 + def application_runs( self, - application_id: str | None = None, - application_version: str | None = None, - external_id: str | None = None, - has_output: bool = False, + limit: int | None = None, + status: ApplicationRunStatus | None = None, note_regex: str | None = None, note_query_case_insensitive: bool = True, - tags: set[str] | None = None, - query: str | None = None, - limit: int | None = None, - ) -> list[RunData]: + ) -> list[ApplicationRunData]: """Get a list of all application runs. Args: - application_id (str | None): The ID of the application to filter runs. If None, no filtering is applied. - application_version (str | None): The version of the application to filter runs. - If None, no filtering is applied. - external_id (str | None): The external ID to filter runs. If None, no filtering is applied. - has_output (bool): If True, only runs with partial or full output are retrieved. + limit (int | None): The maximum number of runs to retrieve. If None, all runs are retrieved. + status (ApplicationRunStatus | None): Filter runs by status. If None, all runs are retrieved. note_regex (str | None): Optional regex to filter runs by note metadata. If None, no filtering is applied. - Cannot be used together with query parameter. 
note_query_case_insensitive (bool): If True, the note_regex is case insensitive. Default is True. - tags (set[str] | None): Optional set of tags to filter runs. All tags must match. - If None, no filtering is applied. Cannot be used together with query parameter. - query (str | None): Optional string to filter runs by note OR tags (case insensitive partial match). - If None, no filtering is applied. Cannot be used together with custom_metadata, note_regex, or tags. - Performs a union search: matches runs where the query appears in the note OR matches any tag. - limit (int | None): The maximum number of runs to retrieve. If None, all runs are retrieved. Returns: - list[RunData]: A list of all application runs. + list[ApplicationRunData]: A list of all application runs. Raises: - ValueError: If query is used together with custom_metadata, note_regex, or tags. RuntimeError: If the application run list cannot be retrieved. """ - # Validate that query is not used with other metadata filters - if query is not None: - if note_regex is not None: - message = "Cannot use 'query' parameter together with 'note_regex' parameter." - logger.warning(message) - raise ValueError(message) - if tags is not None: - message = "Cannot use 'query' parameter together with 'tags' parameter." - logger.warning(message) - raise ValueError(message) - if limit is not None and limit <= 0: return [] runs = [] page_size = LIST_APPLICATION_RUNS_MAX_PAGE_SIZE try: - # Handle query parameter with union semantics (note OR tags) - if query: - # Search for runs matching query in notes - note_runs_dict: dict[str, RunData] = {} - flag_case_insensitive = ' flag "i"' - escaped_query = query.replace("\\", "\\\\").replace('"', '\\"') - custom_metadata_note = f'$.sdk.note ? (@ like_regex "{escaped_query}"{flag_case_insensitive})' - - note_run_iterator = self._get_platform_client().runs.list_data( - application_id=application_id, - application_version=application_version, - external_id=external_id, - custom_metadata=custom_metadata_note, - sort="-submitted_at", - page_size=page_size, - ) - for run in note_run_iterator: - if has_output and run.output == RunOutput.NONE: - continue - note_runs_dict[run.run_id] = run - if limit is not None and len(note_runs_dict) >= limit: - break - - # Search for runs matching query in tags - tag_runs_dict: dict[str, RunData] = {} - custom_metadata_tags = f'$.sdk.tags ? 
(@ like_regex "{escaped_query}"{flag_case_insensitive})' - - tag_run_iterator = self._get_platform_client().runs.list_data( - application_id=application_id, - application_version=application_version, - external_id=external_id, - custom_metadata=custom_metadata_tags, - sort="-submitted_at", - page_size=page_size, - ) - for run in tag_run_iterator: - if has_output and run.output == RunOutput.NONE: - continue - # Add to dict if not already present from note search - if run.run_id not in note_runs_dict: - tag_runs_dict[run.run_id] = run - if limit is not None and len(note_runs_dict) + len(tag_runs_dict) >= limit: - break - - # Union of results from both searches - runs = list(note_runs_dict.values()) + list(tag_runs_dict.values()) - - # Apply limit after union - if limit is not None and len(runs) > limit: - runs = runs[:limit] - - return runs - - custom_metadata = None - client_side_note_filter = None - - # Handle note_regex filter if note_regex: flag_case_insensitive = ' flag "i"' if note_query_case_insensitive else "" - # If we also have tags, we'll need to do note filtering client-side - if tags: - # Store for client-side filtering - client_side_note_filter = (note_regex, note_query_case_insensitive) - else: - # No tags, so we can filter note on backend - custom_metadata = f'$.sdk.note ? (@ like_regex "{note_regex}"{flag_case_insensitive})' - - # Handle tags filter - if tags: - # JSONPath filter to match all of the provided tags in the sdk.tags array - # PostgreSQL limitation: Cannot use && between separate path expressions as backend crashes with 500 - # Workaround: Filter on backend for ANY tag match, then filter client-side for ALL - # Use regex alternation to match any of the tags - escaped_tags = [tag.replace('"', '\\"').replace("\\", "\\\\") for tag in tags] - # Create regex pattern: ^(tag1|tag2|tag3)$ - regex_pattern = "^(" + "|".join(escaped_tags) + ")$" - custom_metadata = f'$.sdk.tags ? (@ like_regex "{regex_pattern}")' + metadata = f'$.sdk.note ? (@ like_regex "{note_regex}"{flag_case_insensitive})' + else: + metadata = None run_iterator = self._get_platform_client().runs.list_data( - application_id=application_id, - application_version=application_version, - external_id=external_id, - custom_metadata=custom_metadata, - sort="-submitted_at", - page_size=page_size, + sort="-triggered_at", page_size=page_size, metadata=metadata ) for run in run_iterator: - if has_output and run.output == RunOutput.NONE: + if status is not None and run.status != status: continue - # Client-side filtering when combining multiple criteria - # 1. If multiple tags specified, ensure ALL are present - if tags and len(tags) > 1: - # Backend filter with regex alternation matches ANY tag - # Now verify ALL tags are present in run metadata - run_tags = set() - if run.custom_metadata and "sdk" in run.custom_metadata: - sdk_metadata = run.custom_metadata.get("sdk", {}) - if "tags" in sdk_metadata: - run_tags = set(sdk_metadata.get("tags", [])) - - # Check if all required tags are present - if not tags.issubset(run_tags): - continue # Skip this run, not all tags match - - # 2. 
If note filter is applied client-side (when combined with tags) - if client_side_note_filter: - note_pattern, case_insensitive = client_side_note_filter - run_note = None - if run.custom_metadata and "sdk" in run.custom_metadata: - sdk_metadata = run.custom_metadata.get("sdk", {}) - run_note = sdk_metadata.get("note") - - # Check if note matches the regex pattern - if run_note: - flags = re.IGNORECASE if case_insensitive else 0 - if not re.search(note_pattern, run_note, flags): - continue # Skip this run, note doesn't match - else: - continue # Skip this run, no note present - runs.append(run) if limit is not None and len(runs) >= limit: break @@ -768,14 +688,14 @@ def application_runs( # noqa: C901, PLR0912, PLR0913, PLR0914, PLR0915, PLR0917 logger.exception(message) raise RuntimeError(message) from e - def application_run(self, run_id: str) -> Run: - """Select a run by its ID. + def application_run(self, run_id: str) -> ApplicationRun: + """Find a run by its ID. Args: - run_id (str): The ID of the run to find + run_id: The ID of the run to find Returns: - Run: The run that can be fetched using the .details() call. + ApplicationRun: The run that can be fetched using the .details() call. Raises: RuntimeError: If initializing the client fails or the run cannot be retrieved. @@ -787,62 +707,46 @@ def application_run(self, run_id: str) -> Run: logger.exception(message) raise RuntimeError(message) from e - def application_run_submit_from_metadata( # noqa: PLR0913, PLR0917 + def application_run_submit_from_metadata( self, - application_id: str, + application_version_id: str, metadata: list[dict[str, Any]], - application_version: str | None = None, custom_metadata: dict[str, Any] | None = None, - note: str | None = None, - tags: set[str] | None = None, - due_date: str | None = None, - deadline: str | None = None, onboard_to_aignostics_portal: bool = False, - validate_only: bool = False, - ) -> Run: + ) -> ApplicationRun: """Submit a run for the given application. Args: - application_id (str): The ID of the application to run. - metadata (list[dict[str, Any]]): The metadata for the run. - custom_metadata (dict[str, Any] | None): Optional custom metadata to attach to the run. - note (str | None): An optional note for the run. - tags (set[str] | None): Optional set of tags to attach to the run for filtering. - due_date (str | None): An optional requested completion time for the run, ISO8601 format. - The scheduler will try to complete the run before this time, taking - the subscription tier and available GPU resources into account. - deadline (str | None): An optional hard deadline for the run, ISO8601 format. - If processing exceeds this deadline, the run can be aborted. - application_version (str | None): The version of the application. - If not given latest version is used. - onboard_to_aignostics_portal (bool): True if the run should be onboarded to the Aignostics Portal. - validate_only (bool): If True, cancel the run post validation, before analysis. + application_version_id: The ID of the application version to run. + If only an application ID is given, the latest version of that application is used. + metadata: The metadata for the run. + custom_metadata: Optional custom metadata to attach to the run. + onboard_to_aignostics_portal: True if the run should be onboarded to the Aignostics Portal. Returns: - Run: The submitted run. + ApplicationRun: The submitted run. Raises: NotFoundException: If the application version with the given ID is not found.
- ValueError: If - platform bucket URL is missing - or has unsupported protocol, - or if the application version ID is invalid, - or if due_date is not ISO 8601 - or if due_date not in the future. + ValueError: If platform bucket URL is missing or has unsupported protocol, + or if the application version ID is invalid. RuntimeError: If submitting the run failed unexpectedly. """ - validate_due_date(due_date) logger.debug("Submitting application run with metadata: %s", metadata) - app_version = self.application_version(application_id, application_version=application_version) - if len(app_version.input_artifacts) != 1: + if onboard_to_aignostics_portal: + custom_metadata = custom_metadata or {} + custom_metadata.setdefault("sdk", {}) + custom_metadata["sdk"]["onboard_to_aignostics_portal"] = onboard_to_aignostics_portal + application_version = self.application_version(application_version_id, use_latest_if_no_version_given=True) + if len(application_version.input_artifacts) != 1: message = ( - f"Application version '{app_version.version_number}' has " - f"{len(app_version.input_artifacts)} input artifacts, " + f"Application version '{application_version_id}' has " + f"{len(application_version.input_artifacts)} input artifacts, " "but only 1 is supported." ) logger.warning(message) raise RuntimeError(message) - input_artifact_name = app_version.input_artifacts[0].name + input_artifact_name = application_version.input_artifacts[0].name items = [] for row in metadata: @@ -857,317 +761,98 @@ def application_run_submit_from_metadata( # noqa: PLR0913, PLR0917 logger.warning(message) raise ValueError(message) - item_metadata = { - "checksum_base64_crc32c": row["checksum_base64_crc32c"], - "height_px": int(row["height_px"]), - "width_px": int(row["width_px"]), - "media_type": ( - "image/tiff" - if row["external_id"].lower().endswith((".tif", ".tiff")) - else "application/dicom" - if row["external_id"].lower().endswith(".dcm") - else "application/octet-stream" - ), - "resolution_mpp": float(row["resolution_mpp"]), - } - - # Only add specimen and staining_method metadata if not test-app - # TODO(Helmut): Remove condition when test-app reached input parity with heta - if application_id != TEST_APP_APPLICATION_ID: - item_metadata["specimen"] = { - "disease": row["disease"], - "tissue": row["tissue"], - } - item_metadata["staining_method"] = row["staining_method"] - items.append( InputItem( - external_id=row["external_id"], + reference=row["reference"], input_artifacts=[ InputArtifact( name=input_artifact_name, download_url=download_url, - metadata=item_metadata, + metadata={ + "checksum_base64_crc32c": row["checksum_base64_crc32c"], + "height_px": int(row["height_px"]), + "width_px": int(row["width_px"]), + "media_type": ( + "image/tiff" + if row["reference"].lower().endswith((".tif", ".tiff")) + else "application/dicom" + if row["reference"].lower().endswith(".dcm") + else "application/octet-stream" + ), + "resolution_mpp": float(row["resolution_mpp"]), + "specimen": { + "disease": row["disease"], + "tissue": row["tissue"], + }, + "staining_method": row["staining_method"], + }, ) ], - custom_metadata={ - "sdk": { - "platform_bucket": { - "bucket_name": bucket_name, - "object_key": object_key, - "signed_download_url": download_url, - } - } - }, ) ) logger.debug("Items for application run submission: %s", items) try: - run = self.application_run_submit( - application_id=application_id, - items=items, - application_version=app_version.version_number, - custom_metadata=custom_metadata, - note=note, - 
tags=tags, - due_date=due_date, - deadline=deadline, - onboard_to_aignostics_portal=onboard_to_aignostics_portal, - validate_only=validate_only, - ) + run = self.application_run_submit(application_version.application_version_id, items, custom_metadata) logger.info( "Submitted application run with items: %s, application run id %s, custom metadata: %s", items, - run.run_id, + run.application_run_id, custom_metadata, ) return run except ValueError as e: - message = ( - f"Failed to submit application run for application '{application_id}' " - f"(version: {app_version.version_number}): {e}" - ) + message = f"Failed to submit application run for version '{application_version_id}': {e}" logger.warning(message) raise ValueError(message) from e except Exception as e: - message = ( - f"Failed to submit application run for application '{application_id}' " - f"(version: {app_version.version_number}): {e}" - ) + message = f"Failed to submit application run for version '{application_version_id}': {e}" logger.exception(message) raise RuntimeError(message) from e - def application_run_submit( # noqa: PLR0913, PLR0917 - self, - application_id: str, - items: list[InputItem], - application_version: str | None = None, - custom_metadata: dict[str, Any] | None = None, - note: str | None = None, - tags: set[str] | None = None, - due_date: str | None = None, - deadline: str | None = None, - onboard_to_aignostics_portal: bool = False, - validate_only: bool = False, - ) -> Run: + def application_run_submit( + self, application_version_id: str, items: list[InputItem], custom_metadata: dict[str, Any] | None = None + ) -> ApplicationRun: """Submit a run for the given application. Args: - application_id (str): The ID of the application to run. - items (list[InputItem]): The input items for the run. - application_version (str | None): The version of the application to run. - custom_metadata (dict[str, Any] | None): Optional custom metadata to attach to the run. - note (str | None): An optional note for the run. - tags (set[str] | None): Optional set of tags to attach to the run for filtering. - due_date (str | None): An optional requested completion time for the run, ISO8601 format. - The scheduler will try to complete the run before this time, taking - the subscription tier and available GPU resources into account. - deadline (str | None): An optional hard deadline for the run, ISO8601 format. - If processing exceeds this deadline, the run can be aborted. - onboard_to_aignostics_portal (bool): True if the run should be onboarded to the Aignostics Portal. - validate_only (bool): If True, cancel the run post validation, before analysis. + application_version_id: The ID of the application version to run. + items: The input items for the run. + custom_metadata: Optional custom metadata to attach to the run. Returns: - Run: The submitted run. + ApplicationRun: The submitted run. Raises: NotFoundException: If the application version with the given ID is not found. - ValueError: If - the application version ID is invalid - or items invalid - or due_date not ISO 8601 - or due_date not in the future. + ValueError: If the application version ID is invalid or the items are invalid. RuntimeError: If submitting the run failed unexpectedly.
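A hedged sketch of submitting a run through the slimmed-down `application_run_submit` above; the import paths, artifact name, and all metadata values are illustrative assumptions (in practice the checksum, dimensions, and download URL come from `generate_metadata_from_source_directory` and the upload step):

```python
from aignostics.application import Service  # assumed import path
from aignostics.platform import InputArtifact, InputItem

items = [
    InputItem(
        reference="/data/slides/case_001.tiff",
        input_artifacts=[
            InputArtifact(
                name="whole_slide_image",  # must equal the version's single input artifact name
                download_url="https://storage.example.com/signed-url",  # placeholder signed URL
                metadata={
                    "checksum_base64_crc32c": "yZRlqg==",  # placeholder checksum
                    "height_px": 86016,
                    "width_px": 102400,
                    "media_type": "image/tiff",
                    "resolution_mpp": 0.25,
                    "specimen": {"disease": "LUNG_CANCER", "tissue": "LUNG"},
                    "staining_method": "H&E",
                },
            )
        ],
    )
]

run = Service().application_run_submit(
    "he-tme:v1.0.0",  # hypothetical application version ID
    items,
    custom_metadata={"sdk": {"note": "smoke test"}},
)
print(run.application_run_id)
```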
""" - validate_due_date(due_date) - try: - if custom_metadata is None: - custom_metadata = {} - - sdk_metadata: dict[str, Any] = {} - if note: - sdk_metadata["note"] = note - if tags: - sdk_metadata["tags"] = tags - if onboard_to_aignostics_portal or validate_only: - sdk_metadata["workflow"] = { - "onboard_to_aignostics_portal": onboard_to_aignostics_portal, - "validate_only": validate_only, - } - if due_date or deadline: - sdk_metadata["scheduling"] = {} - if due_date: - sdk_metadata["scheduling"]["due_date"] = due_date - if deadline: - sdk_metadata["scheduling"]["deadline"] = deadline - - custom_metadata["sdk"] = sdk_metadata - - return self._get_platform_client().runs.submit( - application_id=application_id, - items=items, - application_version=application_version, - custom_metadata=custom_metadata, - ) - except ValueError as e: - message = f"Failed to submit application run for '{application_id}' (version: {application_version}): {e}" - logger.warning(message) - raise ValueError(message) from e - except Exception as e: - message = f"Failed to submit application run for '{application_id}' (version: {application_version}): {e}" - logger.exception(message) - raise RuntimeError(message) from e - - def application_run_update_custom_metadata( - self, - run_id: str, - custom_metadata: dict[str, Any], - ) -> None: - """Update custom metadata for an existing application run. - - Args: - run_id (str): The ID of the run to update - custom_metadata (dict[str, Any]): The new custom metadata to attach to the run. - - Raises: - NotFoundException: If the application run with the given ID is not found. - ValueError: If the run ID is invalid. - RuntimeError: If updating the run metadata fails unexpectedly. - """ - try: - logger.debug("Updating custom metadata for run with ID '%s'", run_id) - self._get_platform_client().run(run_id).update_custom_metadata(custom_metadata) - logger.debug("Updated custom metadata for run with ID '%s'", run_id) - except ValueError as e: - message = f"Failed to update custom metadata for run with ID '{run_id}': ValueError {e}" - logger.warning(message) - raise ValueError(message) from e - except NotFoundException as e: - message = f"Application run with ID '{run_id}' not found: {e}" - logger.warning(message) - raise NotFoundException(message) from e - except ApiException as e: - if e.status == HTTPStatus.UNPROCESSABLE_ENTITY: - message = f"Run ID '{run_id}' invalid: {e!s}." - logger.warning(message) - raise ValueError(message) from e - message = f"Failed to update custom metadata for run with ID '{run_id}': {e}" - logger.exception(message) - raise RuntimeError(message) from e - except Exception as e: - message = f"Failed to update custom metadata for run with ID '{run_id}': {e}" - logger.exception(message) - raise RuntimeError(message) from e - - @staticmethod - def application_run_update_custom_metadata_static( - run_id: str, - custom_metadata: dict[str, Any], - ) -> None: - """Static wrapper for updating custom metadata for an application run. - - Args: - run_id (str): The ID of the run to update - custom_metadata (dict[str, Any]): The new custom metadata to attach to the run. - - Raises: - NotFoundException: If the application run with the given ID is not found. - ValueError: If the run ID is invalid. - RuntimeError: If updating the run metadata fails unexpectedly. 
- """ - Service().application_run_update_custom_metadata(run_id, custom_metadata) - - def application_run_update_item_custom_metadata( - self, - run_id: str, - external_id: str, - custom_metadata: dict[str, Any], - ) -> None: - """Update custom metadata for an existing item in an application run. - - Args: - run_id (str): The ID of the run containing the item - external_id (str): The external ID of the item to update - custom_metadata (dict[str, Any]): The new custom metadata to attach to the item. - - Raises: - NotFoundException: If the application run or item with the given IDs is not found. - ValueError: If the run ID or item external ID is invalid. - RuntimeError: If updating the item metadata fails unexpectedly. - """ try: - logger.debug( - "Updating custom metadata for item '%s' in run with ID '%s'", - external_id, - run_id, - ) - self._get_platform_client().run(run_id).update_item_custom_metadata( - external_id, - custom_metadata, - ) - logger.debug( - "Updated custom metadata for item '%s' in run with ID '%s'", - external_id, - run_id, + return self._get_platform_client().runs.create( + application_version=application_version_id, items=items, custom_metadata=custom_metadata ) except ValueError as e: - message = ( - f"Failed to update custom metadata for item '{external_id}' in run with ID '{run_id}': ValueError {e}" - ) + message = f"Failed to submit application run for version '{application_version_id}': {e}" logger.warning(message) raise ValueError(message) from e - except NotFoundException as e: - message = f"Application run with ID '{run_id}' or item '{external_id}' not found: {e}" - logger.warning(message) - raise NotFoundException(message) from e - except ApiException as e: - if e.status == HTTPStatus.UNPROCESSABLE_ENTITY: - message = f"Run ID '{run_id}' or item external ID '{external_id}' invalid: {e!s}." - logger.warning(message) - raise ValueError(message) from e - message = f"Failed to update custom metadata for item '{external_id}' in run with ID '{run_id}': {e}" - logger.exception(message) - raise RuntimeError(message) from e except Exception as e: - message = f"Failed to update custom metadata for item '{external_id}' in run with ID '{run_id}': {e}" + message = f"Failed to submit application run for version '{application_version_id}': {e}" logger.exception(message) raise RuntimeError(message) from e - @staticmethod - def application_run_update_item_custom_metadata_static( - run_id: str, - external_id: str, - custom_metadata: dict[str, Any], - ) -> None: - """Static wrapper for updating custom metadata for an item in an application run. - - Args: - run_id (str): The ID of the run containing the item - external_id (str): The external ID of the item to update - custom_metadata (dict[str, Any]): The new custom metadata to attach to the item. - - Raises: - NotFoundException: If the application run or item with the given IDs is not found. - ValueError: If the run ID or item external ID is invalid. - RuntimeError: If updating the item metadata fails unexpectedly. - """ - Service().application_run_update_item_custom_metadata(run_id, external_id, custom_metadata) - def application_run_cancel(self, run_id: str) -> None: """Cancel a run by its ID. Args: - run_id (str): The ID of the run to cancel + run_id: The ID of the run to cancel Raises: Exception: If the client cannot be created. Raises: NotFoundException: If the application run with the given ID is not found. - ValueError: If - the run ID is invalid - or the run cannot be canceled given its current state. 
+ ValueError: If the run ID is invalid or the run cannot be canceled given its current state. RuntimeError: If canceling the run fails unexpectedly. """ try: @@ -1180,14 +865,6 @@ def application_run_cancel(self, run_id: str) -> None: message = f"Application run with ID '{run_id}' not found: {e}" logger.warning(message) raise NotFoundException(message) from e - except ApiException as e: - if e.status == HTTPStatus.UNPROCESSABLE_ENTITY: - message = f"Run ID '{run_id}' invalid: {e!s}." - logger.warning(message) - raise ValueError(message) from e - message = f"Failed to retrieve application run with ID '{run_id}': {e}" - logger.exception(message) - raise RuntimeError(message) from e except Exception as e: message = f"Failed to cancel application run with ID '{run_id}': {e}" logger.exception(message) @@ -1197,16 +874,14 @@ def application_run_delete(self, run_id: str) -> None: """Delete a run by its ID. Args: - run_id (str): The ID of the run to delete + run_id: The ID of the run to delete Raises: Exception: If the client cannot be created. Raises: NotFoundException: If the application run with the given ID is not found. - ValueError: If - the run ID is invalid - or the run cannot be deleted given its current state. + ValueError: If the run ID is invalid or the run cannot be deleted given its current state. RuntimeError: If deleting the run fails unexpectedly. """ try: @@ -1244,7 +919,7 @@ def application_run_download_static( # noqa: PLR0913, PLR0917 create_subdirectory_for_run (bool): Whether to create a subdirectory for the run. create_subdirectory_per_item (bool): Whether to create a subdirectory for each item, if not set, all items will be downloaded to the same directory but prefixed - with the item external ID and underscore. + with the item reference and underscore. wait_for_completion (bool): Whether to wait for run completion. Defaults to True. qupath_project (bool): If True, create QuPath project referencing input slides and results. This requires QuPath to be installed. The QuPath project will be created in a subfolder @@ -1255,12 +930,10 @@ def application_run_download_static( # noqa: PLR0913, PLR0917 Path: The directory containing downloaded results. Raises: - ValueError: If - the run ID is invalid - or destination directory cannot be created - or QuPath extra is not installed when qupath_project=True. + ValueError: If the run ID is invalid or destination directory cannot be created. NotFoundException: If the application run with the given ID is not found. - RuntimeError: If run details cannot be retrieved or download fails. + RuntimeError: If run details cannot be retrieved or download fails unexpectedly. + requests.HTTPError: If the download fails with an HTTP error. """ return Service().application_run_download( run_id, @@ -1286,12 +959,13 @@ def application_run_download( # noqa: C901, PLR0912, PLR0913, PLR0914, PLR0915, """Download application run results with progress tracking. Args: + progress (DownloadProgress): Progress tracking object for GUI or CLI updates. run_id (str): The ID of the application run to download. destination_directory (Path): Directory to save downloaded files. create_subdirectory_for_run (bool): Whether to create a subdirectory for the run. create_subdirectory_per_item (bool): Whether to create a subdirectory for each item, if not set, all items will be downloaded to the same directory but prefixed - with the item external id and underscore. + with the item reference and underscore. wait_for_completion (bool): Whether to wait for run completion. 
Defaults to True. qupath_project (bool): If True, create QuPath project referencing input slides and results. This requires QuPath to be installed. The QuPath project will be created in a subfolder @@ -1303,12 +977,10 @@ def application_run_download( # noqa: C901, PLR0912, PLR0913, PLR0914, PLR0915, Path: The directory containing downloaded results. Raises: - ValueError: If - the run ID is invalid - or destination directory cannot be created - or QuPath extra is not installed when qupath_project=True. + ValueError: If the run ID is invalid or destination directory cannot be created. NotFoundException: If the application run with the given ID is not found. - RuntimeError: If run details cannot be retrieved or download fails. + RuntimeError: If run details cannot be retrieved or download fails unexpectedly. + requests.HTTPError: If the download fails with an HTTP error. """ if qupath_project and not has_qupath_extra: message = "QuPath project creation requested, but 'qupath' extra is not installed." @@ -1316,7 +988,7 @@ def application_run_download( # noqa: C901, PLR0912, PLR0913, PLR0914, PLR0915, logger.warning(message) raise ValueError(message) progress = DownloadProgress() - update_progress(progress, download_progress_callable, download_progress_queue) + Service._update_progress(progress, download_progress_callable, download_progress_queue) application_run = self.application_run(run_id) final_destination_directory = destination_directory @@ -1327,7 +999,7 @@ def application_run_download( # noqa: C901, PLR0912, PLR0913, PLR0914, PLR0915, logger.warning(message) raise NotFoundException(message) from e except ApiException as e: - if e.status == HTTPStatus.UNPROCESSABLE_ENTITY: + if e.status == HTTPStatus.UNPROCESSABLE_ENTITY: # Don't use UNPROCESSABLE_CONTENT message = f"Run ID '{run_id}' invalid: {e!s}." 
logger.warning(message) raise ValueError(message) from e @@ -1336,7 +1008,7 @@ def application_run_download( # noqa: C901, PLR0912, PLR0913, PLR0914, PLR0915, raise RuntimeError(message) from e if create_subdirectory_for_run: - final_destination_directory = destination_directory / details.run_id + final_destination_directory = destination_directory / details.application_run_id try: final_destination_directory.mkdir(parents=True, exist_ok=True) except OSError as e: @@ -1344,44 +1016,19 @@ def application_run_download( # noqa: C901, PLR0912, PLR0913, PLR0914, PLR0915, logger.warning(message) raise ValueError(message) from e - results = list(application_run.results()) - for item_index, item in enumerate(results): - if item.external_id.startswith(("gs://", "http://", "https://")): - # Download URL to local input directory and update external_id - try: - filename = extract_filename_from_url(item.external_id) - local_path = final_destination_directory / "input" / filename - if not local_path.exists(): - progress.item_index = item_index - progress.item = item - download_url_to_file_with_progress( - progress, - item.external_id, - local_path, - download_progress_queue, - download_progress_callable, - ) - item.external_id = str(local_path) # Update external_id so subsequent code uses the local path - except Exception as e: # noqa: BLE001 - logger.warning( - "Failed to download input slide from '%s' to '%s': %s", item.external_id, local_path, e - ) - if qupath_project: def update_qupath_add_input_progress(qupath_add_input_progress: QuPathAddProgress) -> None: progress.status = DownloadProgressState.QUPATH_ADD_INPUT progress.qupath_add_input_progress = qupath_add_input_progress - update_progress(progress, download_progress_callable, download_progress_queue) + Service._update_progress(progress, download_progress_callable, download_progress_queue) logger.debug("Adding input slides to QuPath project ...") image_paths = [] - for item in results: - local_path = Path(item.external_id) - if not local_path.is_file(): - logger.warning("Input slide '%s' not found, skipping QuPath addition.", local_path) - continue - image_paths.append(local_path.resolve()) + for item in application_run.results(): + image_path = Path(item.reference) + if image_path.is_file(): + image_paths.append(image_path.resolve()) added = QuPathService.add( final_destination_directory / "qupath", image_paths, update_qupath_add_input_progress ) @@ -1391,15 +1038,15 @@ def update_qupath_add_input_progress(qupath_add_input_progress: QuPathAddProgres logger.debug("Downloading results for run '%s' to '%s'", run_id, final_destination_directory) progress.status = DownloadProgressState.CHECKING - update_progress(progress, download_progress_callable, download_progress_queue) + Service._update_progress(progress, download_progress_callable, download_progress_queue) downloaded_items: set[str] = set() # Track downloaded items to avoid re-downloading while True: run_details = application_run.details() # (Re)load current run details progress.run = run_details - update_progress(progress, download_progress_callable, download_progress_queue) + Service._update_progress(progress, download_progress_callable, download_progress_queue) - download_available_items( + self._download_available_items( progress, application_run, final_destination_directory, @@ -1409,32 +1056,29 @@ def update_qupath_add_input_progress(qupath_add_input_progress: QuPathAddProgres download_progress_callable, ) - if run_details.state == RunState.TERMINATED: - logger.debug( - "Run '%s' 
reached final status '%s' with message '%s' (%s).", - run_id, - run_details.state, - run_details.error_message, - run_details.error_code, - ) + if run_details.status in { + ApplicationRunStatus.CANCELED_SYSTEM, + ApplicationRunStatus.CANCELED_USER, + ApplicationRunStatus.COMPLETED, + ApplicationRunStatus.COMPLETED_WITH_ERROR, + ApplicationRunStatus.REJECTED, + }: + logger.debug("Run '%s' reached final status '%s'.", run_id, run_details.status) break if not wait_for_completion: logger.debug( - "Run '%s' is in progress with status '%s' and message '%s' (%s), " - "but not requested to wait for completion.", + "Run '%s' is in progress with status '%s', but not requested to wait for completion.", run_id, - run_details.state, - run_details.error_message, - run_details.error_code, + run_details.status, ) break logger.debug( - "Run '%s' is in progress with status '%s', waiting for completion ...", run_id, run_details.state + "Run '%s' is in progress with status '%s', waiting for completion ...", run_id, run_details.status ) progress.status = DownloadProgressState.WAITING - update_progress(progress, download_progress_callable, download_progress_queue) + Service._update_progress(progress, download_progress_callable, download_progress_queue) time.sleep(APPLICATION_RUN_DOWNLOAD_SLEEP_SECONDS) if qupath_project: @@ -1443,7 +1087,7 @@ def update_qupath_add_input_progress(qupath_add_input_progress: QuPathAddProgres def update_qupath_add_results_progress(qupath_add_results_progress: QuPathAddProgress) -> None: progress.status = DownloadProgressState.QUPATH_ADD_RESULTS progress.qupath_add_results_progress = qupath_add_results_progress - update_progress(progress, download_progress_callable, download_progress_queue) + Service._update_progress(progress, download_progress_callable, download_progress_queue) added = QuPathService.add( final_destination_directory / "qupath", @@ -1459,17 +1103,17 @@ def update_qupath_annotate_input_with_results_progress( ) -> None: progress.status = DownloadProgressState.QUPATH_ANNOTATE_INPUT_WITH_RESULTS progress.qupath_annotate_input_with_results_progress = qupath_annotate_input_with_results_progress - update_progress(progress, download_progress_callable, download_progress_queue) + Service._update_progress(progress, download_progress_callable, download_progress_queue) total_annotations = 0 + results = list(application_run.results()) progress.item_count = len(results) - for item_index, item in enumerate(results): + for item_index, item in enumerate(application_run.results()): progress.item_index = item_index - update_progress(progress, download_progress_callable, download_progress_queue) + Service._update_progress(progress, download_progress_callable, download_progress_queue) - image_path = Path(item.external_id) + image_path = Path(item.reference) if not image_path.is_file(): - logger.warning("Input slide '%s' not found, skipping QuPath annotation.", image_path) continue for artifact in item.output_artifacts: if ( @@ -1478,8 +1122,8 @@ def update_qupath_annotate_input_with_results_progress( ): artifact_name = artifact.name if create_subdirectory_per_item: - path = Path(item.external_id) - stem_name = path.stem + reference_path = Path(item.reference) + stem_name = reference_path.stem artifact_path = ( final_destination_directory / stem_name @@ -1504,6 +1148,197 @@ def update_qupath_annotate_input_with_results_progress( logger.info(message) progress.status = DownloadProgressState.COMPLETED - update_progress(progress, download_progress_callable, download_progress_queue) + 
Service._update_progress(progress, download_progress_callable, download_progress_queue) return final_destination_directory + + @staticmethod + def _update_progress( + progress: DownloadProgress, + download_progress_callable: Callable | None = None, # type: ignore[type-arg] + download_progress_queue: Any | None = None, # noqa: ANN401 + ) -> None: + if download_progress_callable: + download_progress_callable(progress) + if download_progress_queue: + download_progress_queue.put_nowait(progress) + + def _download_available_items( # noqa: PLR0913, PLR0917 + self, + progress: DownloadProgress, + application_run: ApplicationRun, + destination_directory: Path, + downloaded_items: set[str], + create_subdirectory_per_item: bool = False, + download_progress_queue: Any | None = None, # noqa: ANN401 + download_progress_callable: Callable | None = None, # type: ignore[type-arg] + ) -> None: + """Download items that are available and not yet downloaded. + + Args: + progress (DownloadProgress): Progress tracking object for GUI or CLI updates. + application_run (ApplicationRun): The application run object. + destination_directory (Path): Directory to save files. + downloaded_items (set): Set of already downloaded item references. + create_subdirectory_per_item (bool): Whether to create a subdirectory for each item. + download_progress_queue (Queue | None): Queue for GUI progress updates. + download_progress_callable (Callable | None): Callback for CLI progress updates. + """ + items = list(application_run.results()) + progress.item_count = len(items) + for item_index, item in enumerate(items): + if item.reference in downloaded_items: + continue + + if item.status == ItemStatus.SUCCEEDED: + progress.status = DownloadProgressState.DOWNLOADING + progress.item_index = item_index + progress.item = item + progress.item_reference = item.reference + + progress.artifact_count = len(item.output_artifacts) + Service._update_progress(progress, download_progress_callable, download_progress_queue) + + if create_subdirectory_per_item: + reference_path = Path(item.reference) + stem_name = reference_path.stem + try: + # Handle case where reference might be relative to destination + rel_path = reference_path.relative_to(destination_directory) + stem_name = rel_path.stem + except ValueError: + # Not a subfolder - just use the stem + pass + item_directory = destination_directory / stem_name + else: + item_directory = destination_directory + item_directory.mkdir(exist_ok=True) + + for artifact_index, artifact in enumerate(item.output_artifacts): + progress.artifact_index = artifact_index + progress.artifact = artifact + Service._update_progress(progress, download_progress_callable, download_progress_queue) + + self._download_item_artifact( + progress, + artifact, + item_directory, + item.reference if not create_subdirectory_per_item else "", + download_progress_queue, + download_progress_callable, + ) + + downloaded_items.add(item.reference) + + def _download_item_artifact( # noqa: PLR0913, PLR0917 + self, + progress: DownloadProgress, + artifact: Any, # noqa: ANN401 + destination_directory: Path, + prefix: str = "", + download_progress_queue: Any | None = None, # noqa: ANN401 + download_progress_callable: Callable | None = None, # type: ignore[type-arg] + ) -> None: + """Download an artifact of a result item with progress tracking. + + Args: + progress (DownloadProgress): Progress tracking object for GUI or CLI updates. + artifact (Any): The artifact to download. + destination_directory (Path): Directory to save the file.
+ prefix (str): Prefix for the file name, if needed. + download_progress_queue (Queue | None): Queue for GUI progress updates. + download_progress_callable (Callable | None): Callback for CLI progress updates. + + Raises: + ValueError: If no checksum metadata is found for the artifact. + requests.HTTPError: If the download fails. + """ + metadata = artifact.metadata or {} + metadata_checksum = metadata.get("checksum_base64_crc32c", "") or metadata.get("checksum_crc32c", "") + if not metadata_checksum: + message = f"No checksum metadata found for artifact {artifact.name}" + logger.error(message) + raise ValueError(message) + + artifact_path = ( + destination_directory + / f"{prefix}{sanitize_path_component(artifact.name)}{get_file_extension_for_artifact(artifact)}" + ) + + if artifact_path.exists(): + checksum = google_crc32c.Checksum() # type: ignore[no-untyped-call] + with open(artifact_path, "rb") as f: + while chunk := f.read(APPLICATION_RUN_FILE_READ_CHUNK_SIZE): + checksum.update(chunk) # type: ignore[no-untyped-call] + existing_checksum = base64.b64encode(checksum.digest()).decode("ascii") # type: ignore[no-untyped-call] + if existing_checksum == metadata_checksum: + logger.debug("File %s already exists with correct checksum", artifact_path) + return + + self._download_file_with_progress( + progress, + artifact.download_url, + artifact_path, + metadata_checksum, + download_progress_queue, + download_progress_callable, + ) + + @staticmethod + def _download_file_with_progress( # noqa: PLR0913, PLR0917 + progress: DownloadProgress, + signed_url: str, + artifact_path: Path, + metadata_checksum: str, + download_progress_queue: Any | None = None, # noqa: ANN401 + download_progress_callable: Callable | None = None, # type: ignore[type-arg] + ) -> None: + """Download a file with progress tracking support. + + Args: + progress (DownloadProgress): Progress tracking object for GUI or CLI updates. + signed_url (str): The signed URL to download from. + artifact_path (Path): Path to save the file. + metadata_checksum (str): Expected CRC32C checksum in base64. + download_progress_queue (Any | None): Queue for GUI progress updates. + download_progress_callable (Callable | None): Callback for CLI progress updates. + + Raises: + ValueError: If checksum verification fails. + requests.HTTPError: If download fails. 
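Condensed into a standalone sketch, the stream-and-verify pattern implemented by `_download_file_with_progress` looks roughly like this (the URL, target path, and chunk size are placeholders; progress reporting is omitted):

```python
import base64
from pathlib import Path

import google_crc32c
import requests


def download_verified(url: str, target: Path, expected_b64_crc32c: str) -> None:
    """Stream a file to disk, computing CRC32C on the fly, and verify it afterwards."""
    checksum = google_crc32c.Checksum()
    with requests.get(url, stream=True, timeout=60) as stream:
        stream.raise_for_status()
        with open(target, "wb") as file:
            for chunk in stream.iter_content(chunk_size=1024 * 1024):
                if chunk:
                    file.write(chunk)
                    checksum.update(chunk)
    if base64.b64encode(checksum.digest()).decode("ascii") != expected_b64_crc32c:
        target.unlink()  # remove the corrupted download
        raise ValueError(f"Checksum mismatch for {target}")
```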
+ """ + logger.debug( + "Downloading artifact '%s' to '%s' with expected checksum '%s' for item reference '%s'", + progress.artifact.name if progress.artifact else "unknown", + artifact_path, + metadata_checksum, + progress.item_reference or "unknown", + ) + progress.artifact_download_url = signed_url + progress.artifact_path = artifact_path + progress.artifact_downloaded_size = 0 + progress.artifact_downloaded_chunk_size = 0 + progress.artifact_size = None + Service._update_progress(progress, download_progress_callable, download_progress_queue) + + checksum = google_crc32c.Checksum() # type: ignore[no-untyped-call] + + with requests.get(signed_url, stream=True, timeout=60) as stream: + stream.raise_for_status() + progress.artifact_size = int(stream.headers.get("content-length", 0)) + Service._update_progress(progress, download_progress_callable, download_progress_queue) + with open(artifact_path, mode="wb") as file: + for chunk in stream.iter_content(chunk_size=APPLICATION_RUN_DOWNLOAD_CHUNK_SIZE): + if chunk: + file.write(chunk) + checksum.update(chunk) # type: ignore[no-untyped-call] + progress.artifact_downloaded_chunk_size = len(chunk) + progress.artifact_downloaded_size += progress.artifact_downloaded_chunk_size + Service._update_progress(progress, download_progress_callable, download_progress_queue) + + downloaded_checksum = base64.b64encode(checksum.digest()).decode("ascii") # type: ignore[no-untyped-call] + if downloaded_checksum != metadata_checksum: + artifact_path.unlink() # Remove corrupted file + msg = f"Checksum mismatch for {artifact_path}: {downloaded_checksum} != {metadata_checksum}" + logger.error(msg) + raise ValueError(msg) diff --git a/src/aignostics/application/_utils.py b/src/aignostics/application/_utils.py index 6538a84a..a2e85c63 100644 --- a/src/aignostics/application/_utils.py +++ b/src/aignostics/application/_utils.py @@ -3,32 +3,23 @@ 1. Printing of application resources 2. Reading/writing metadata CSV files 3. Mime type handling. -4. Date/time validation. """ import csv import mimetypes -from datetime import UTC, datetime from enum import StrEnum from pathlib import Path -from typing import Any +from typing import Any, Literal import humanize -from aignostics.constants import ( - HETA_APPLICATION_ID, - TEST_APP_APPLICATION_ID, - WSI_SUPPORTED_FILE_EXTENSIONS, - WSI_SUPPORTED_FILE_EXTENSIONS_TEST_APP, -) from aignostics.platform import ( + ApplicationRun, + ApplicationRunData, + ApplicationRunStatus, InputArtifactData, OutputArtifactData, OutputArtifactElement, - Run, - RunData, - RunItemStatistics, - RunState, ) from aignostics.utils import console, get_logger @@ -37,53 +28,6 @@ RUN_FAILED_MESSAGE = "Failed to get status for run with ID '%s'" -def validate_due_date(due_date: str | None) -> None: - """Validate that due_date is in ISO 8601 format and in the future. - - Args: - due_date (str | None): The datetime string to validate. - - Raises: - ValueError: If - the format is invalid - or the due_date is not in the future. - """ - if due_date is None: - return - - # Try parsing with fromisoformat (handles most ISO 8601 formats) - try: - # Handle 'Z' suffix by replacing with '+00:00' - normalized = due_date.replace("Z", "+00:00") - parsed_dt = datetime.fromisoformat(normalized) - except (ValueError, TypeError) as e: - message = ( - f"Invalid ISO 8601 format for due_date. 
" - f"Expected format like '2025-10-19T19:53:00+00:00' or '2025-10-19T19:53:00Z', " - f"but got: '{due_date}' (error: {e})" - ) - raise ValueError(message) from e - - # Ensure the datetime is timezone-aware (reject naive datetimes) - if parsed_dt.tzinfo is None: - message = ( - f"Invalid ISO 8601 format for due_date. " - f"Expected format with timezone like '2025-10-19T19:53:00+00:00' or '2025-10-19T19:53:00Z', " - f"but got: '{due_date}' (missing timezone information)" - ) - raise ValueError(message) - - # Check that the datetime is in the future - now = datetime.now(UTC) - if parsed_dt <= now: - message = ( - f"due_date must be in the future. " - f"Got '{due_date}' ({parsed_dt.isoformat()}), " - f"but current UTC time is {now.isoformat()}" - ) - raise ValueError(message) - - class OutputFormat(StrEnum): """ Enum representing the supported output formats. @@ -97,179 +41,173 @@ class OutputFormat(StrEnum): JSON = "json" -def _format_status_string(state: RunState, termination_reason: str | None = None) -> str: - """Format status string with optional termination reason. +def retrieve_and_print_run_details(run: ApplicationRun) -> None: + """Retrieve and print detailed information about a run. Args: - state (RunState): The run state - termination_reason (str | None): Optional termination reason + run (ApplicationRun): The ApplicationRun object - Returns: - str: Formatted status string """ - if state is RunState.TERMINATED and termination_reason: - return f"{state.value} ({termination_reason})" - return f"{state.value}" - - -def _format_duration_string(submitted_at: datetime | None, terminated_at: datetime | None) -> str: - """Format duration string for a run. + run_data = run.details() + console.print(f"[bold]Run Details for {run.application_run_id}[/bold]") + console.print("=" * 80) + console.print(f"[bold]App Version:[/bold] {run_data.application_version_id}") + console.print(f"[bold]Status:[/bold] {run_data.status.value}") + console.print(f"[bold]Message:[/bold] {run_data.message}") + if run_data.terminated_at and run_data.triggered_at: + duration = run_data.terminated_at - run_data.triggered_at + duration_str = humanize.precisedelta(duration) + console.print(f"[bold]Duration:[/bold] {duration_str}") + console.print(f"[bold]Triggered at:[/bold] {run_data.triggered_at}") + console.print(f"[bold]Terminated at:[/bold] {run_data.terminated_at}") + console.print(f"[bold]Triggered by:[/bold] {run_data.triggered_by}") + console.print(f"[bold]Organization:[/bold] {run_data.organization_id}") + + console.print(f"[bold]Custom Metadata:[/bold] {run_data.metadata or 'None'}") + + # Get and display detailed item status + console.print() + console.print("[bold]Items:[/bold]") + + _retrieve_and_print_run_items(run) + _print_run_status_summary(run) + + +def _retrieve_and_print_run_items(run: ApplicationRun) -> None: + """Retrieve and print information about items in a run. 
Args: - submitted_at: Submission timestamp - terminated_at: Termination timestamp - - Returns: - str: Formatted duration string + run (ApplicationRun): The ApplicationRun object """ - if terminated_at and submitted_at: - duration = terminated_at - submitted_at - return humanize.precisedelta(duration) - return "still processing" + # Get results with detailed information + results = run.results() + if not results: + console.print(" No item results available.") + return + for item in results: + console.print(f" [bold]Item Reference:[/bold] {item.reference}") + console.print(f" [bold]Item ID:[/bold] {item.item_id}") + console.print(f" [bold]Status:[/bold] {item.status.value}") + console.print(f" [bold]Message:[/bold] {item.message}") -def _format_run_statistics(statistics: RunItemStatistics) -> str: - """Format run statistics as a multi-line string. + if item.error: + console.print(f" [error]Error:[/error] {item.error}") - Args: - statistics: Run statistics object + if item.output_artifacts: + console.print(" [bold]Output Artifacts:[/bold]") + for artifact in item.output_artifacts: + console.print(f" - Name: {artifact.name}") + console.print(f" MIME Type: {get_mime_type_for_artifact(artifact)}") + console.print(f" Artifact ID: {artifact.output_artifact_id}") + console.print(f" Download URL: {artifact.download_url}") - Returns: - str: Formatted statistics string - """ - return ( - f" - {statistics.item_count} items\n" - f" - {statistics.item_pending_count} pending\n" - f" - {statistics.item_processing_count} processing\n" - f" - {statistics.item_skipped_count} skipped\n" - f" - {statistics.item_succeeded_count} succeeded\n" - f" - {statistics.item_user_error_count} user errors\n" - f" - {statistics.item_system_error_count} system errors" - ) + console.print() -def _format_run_details(run: RunData) -> str: - """Format detailed run information as a single string. +def _print_run_status_summary(run: ApplicationRun) -> None: + """Print summary of item statuses in a run. 
Args: - run (RunData): Run data to format - - Returns: - str: Formatted run details + run (ApplicationRun): The ApplicationRun object """ - status_str = _format_status_string(run.state, run.termination_reason) - duration_str = _format_duration_string(run.submitted_at, run.terminated_at) - - output = ( - f"[bold]Run ID:[/bold] {run.run_id}\n" - f"[bold]Application (Version):[/bold] {run.application_id} ({run.version_number})\n" - f"[bold]Status (Termination Reason):[/bold] {status_str}\n" - f"[bold]Output:[/bold] {run.output.value}\n" - ) - - if run.error_message or run.error_code: - output += f"[bold]Error Message (Code):[/bold] {run.error_message or 'N/A'} ({run.error_code or 'N/A'})\n" + # Get and display item status counts + item_statuses = run.item_status() + if not item_statuses: + return - output += ( - f"[bold]Statistics:[/bold]\n" - f"{_format_run_statistics(run.statistics)}\n" - f"[bold]Submitted (by):[/bold] {run.submitted_at} ({run.submitted_by})\n" - f"[bold]Terminated (duration):[/bold] {run.terminated_at} ({duration_str})\n" - f"[bold]Custom Metadata:[/bold] {run.custom_metadata or 'None'}" - ) + status_counts: dict[ + Literal["PENDING", "CANCELED_USER", "CANCELED_SYSTEM", "ERROR_USER", "ERROR_SYSTEM", "SUCCEEDED"], int + ] = {} + for status in item_statuses.values(): + status_counts[status.value] = status_counts.get(status.value, 0) + 1 - return output + console.print("[bold]Item Status Summary:[/bold]") + for status, count in status_counts.items(): + console.print(f" {status}: {count}") -def retrieve_and_print_run_details(run_handle: Run) -> None: - """Retrieve and print detailed information about a run. +def _retrieve_and_print_item_status_counts(run: ApplicationRun) -> bool: + """Retrieve and print item status counts for a run. Args: - run_handle (Run): The Run handle - - """ - run = run_handle.details() - - output = f"[bold]Run Details for {run.run_id}[/bold]\n{'=' * 80}\n{_format_run_details(run)}\n\n[bold]Items:[/bold]" - - console.print(output) - _retrieve_and_print_run_items(run_handle) - - -def _retrieve_and_print_run_items(run_handle: Run) -> None: - """Retrieve and print information about items in a run. 
 
     Args:
-        run_handle (Run): The Run handle
+        run (ApplicationRun): The run object
 
+    Returns:
+        bool: True if successful, False otherwise
     """
-    results = run_handle.results()
-    if not results:
-        console.print("  No item results available.")
-        return
-
-    for item in results:
-        item_output = (
-            f"  [bold]Item ID:[/bold] {item.item_id}\n"
-            f"  [bold]Item External ID:[/bold] {item.external_id}\n"
-            f"  [bold]Status (Termination Reason):[/bold] {item.state.value} ({item.termination_reason})\n"
-            f"  [bold]Error Message (Code):[/bold] {item.error_message} ({item.error_code})\n"
-            f"  [bold]Custom Metadata:[/bold] {item.custom_metadata or 'None'}"
+    try:
+        item_statuses = run.item_status()
+    except Exception as e:
+        logger.exception("Failed to get item status for run with ID '%s'", run.application_run_id)
+        console.print(
+            f"[error]Error:[/error] Failed to get item statuses for run with ID '{run.application_run_id}': {e}"
         )
+        return False
 
-        if item.output_artifacts:
-            artifacts_output = "\n  [bold]Output Artifacts:[/bold]"
-            for artifact in item.output_artifacts:
-                artifacts_output += (
-                    f"\n    - Name: {artifact.name}"
-                    f"\n      MIME Type: {get_mime_type_for_artifact(artifact)}"
-                    f"\n      Artifact ID: {artifact.output_artifact_id}"
-                    f"\n      Download URL: {artifact.download_url}"
-                )
-            item_output += artifacts_output
+    status_counts: dict[
+        Literal["PENDING", "CANCELED_USER", "CANCELED_SYSTEM", "ERROR_USER", "ERROR_SYSTEM", "SUCCEEDED"], int
+    ] = {}
+    for status in item_statuses.values():
+        status_counts[status.value] = status_counts.get(status.value, 0) + 1
+
+    if status_counts:
+        console.print("[bold]Item Status Counts:[/bold]")
+        for status, count in status_counts.items():
+            console.print(f"  {status}: {count}")
 
-        console.print(f"{item_output}\n")
+    return True
 
 
-def print_runs_verbose(runs: list[RunData]) -> None:
-    """Print detailed information about runs, sorted by submitted_at in descending order.
+def print_runs_verbose(runs: list[ApplicationRunData]) -> None:
+    """Print detailed information about runs, sorted by triggered_at in descending order.
 
     Args:
-        runs (list[RunData]): List of run data
+        runs (list[ApplicationRunData]): List of run data
     """
-    output = f"[bold]Application Runs:[/bold]\n{'=' * 80}"
-
-    for run in runs:
-        output += f"\n{_format_run_details(run)}\n{'-' * 80}"
+    from ._service import Service  # noqa: PLC0415
 
-    console.print(output)
+    console.print("[bold]Application Runs:[/bold]")
+    console.print("=" * 80)
 
-
-def print_runs_non_verbose(runs: list[RunData]) -> None:
-    """Print simplified information about runs, sorted by submitted_at in descending order.
+ for run in runs: + console.print(f"[bold]Run ID:[/bold] {run.application_run_id}") + console.print(f"[bold]App Version:[/bold] {run.application_version_id}") + console.print(f"[bold]Status:[/bold] {run.status.value}") + console.print(f"[bold]Triggered at:[/bold] {run.triggered_at.astimezone().strftime('%Y-%m-%d %H:%M:%S %Z')}") + console.print(f"[bold]Organization:[/bold] {run.organization_id}") + + try: + _retrieve_and_print_item_status_counts(Service().application_run(run.application_run_id)) + except Exception as e: + logger.exception("Failed to retrieve item status counts for run with ID '%s'", run.application_run_id) + console.print( + f"[error]Error:[/error] Failed to retrieve item status counts for run with ID " + f"'{run.application_run_id}': {e}" + ) + continue + console.print("-" * 80) + + +def print_runs_non_verbose(runs: list[ApplicationRunData]) -> None: + """Print simplified information about runs, sorted by triggered_at in descending order. Args: - runs (list[RunData]): List of runs + runs (list[ApplicationRunData]): List of runs """ - output = "[bold]Application Run IDs:[/bold]" - - for run in runs: - status_str = _format_status_string(run.state, run.termination_reason) - - if run.error_message or run.error_code: - status_str += f" | error: {run.error_message or 'N/A'} ({run.error_code or 'N/A'})" - - output += ( - f"\n- [bold]{run.run_id}[/bold] of " - f"[bold]{run.application_id} ({run.version_number})[/bold] " - f"(submitted: {run.submitted_at.astimezone().strftime('%Y-%m-%d %H:%M:%S %Z')}, " - f"status: {status_str}, " - f"output: {run.output.value})" + console.print("[bold]Application Run IDs:[/bold]") + + for run_status in runs: + console.print( + f"- [bold]{run_status.application_run_id}[/bold] of " + f"[bold]{run_status.application_version_id}[/bold] " + f"(triggered: {run_status.triggered_at.astimezone().strftime('%Y-%m-%d %H:%M:%S %Z')}, " + f"status: {run_status.status.value})" ) - console.print(output) - def write_metadata_dict_to_csv( metadata_csv: Path, @@ -316,12 +254,12 @@ def read_metadata_csv_to_dict( def application_run_status_to_str( - status: RunState, + status: ApplicationRunStatus, ) -> str: """Convert application status to a human-readable string. Args: - status (RunState): The application status + status (ApplicationRunStatus): The application status Raises: RuntimeError: If the status is invalid or unknown @@ -330,9 +268,14 @@ def application_run_status_to_str( str: Human-readable string representation of the status """ status_mapping = { - RunState.PENDING: "pending", - RunState.PROCESSING: "processing", - RunState.TERMINATED: "terminated", + ApplicationRunStatus.CANCELED_SYSTEM: "canceled by platform", + ApplicationRunStatus.CANCELED_USER: "canceled by user", + ApplicationRunStatus.COMPLETED: "completed", + ApplicationRunStatus.COMPLETED_WITH_ERROR: "completed with error", + ApplicationRunStatus.RECEIVED: "received by platform", + ApplicationRunStatus.REJECTED: "rejected by platform", + ApplicationRunStatus.RUNNING: "running on platform", + ApplicationRunStatus.SCHEDULED: "scheduled for processing", } if status in status_mapping: @@ -382,25 +325,3 @@ def get_file_extension_for_artifact(artifact: OutputArtifactData) -> str: file_extension = ".bin" logger.debug("Guessed file extension: '%s' for artifact '%s'", file_extension, artifact.name) return file_extension - - -def get_supported_extensions_for_application(application_id: str) -> set[str]: - """Get the list of supported file extensions for a given application. 
- - Args: - application_id (str): The application ID - - Returns: - set[str]: List of supported file extensions - - Raises: - RuntimeError: If the application ID is not supported - """ - if application_id == HETA_APPLICATION_ID: - return WSI_SUPPORTED_FILE_EXTENSIONS - if application_id == TEST_APP_APPLICATION_ID: - return WSI_SUPPORTED_FILE_EXTENSIONS_TEST_APP - - message = f"Unsupported application {application_id}" - logger.critical(message) - raise RuntimeError(message) diff --git a/src/aignostics/bucket/__init__.py b/src/aignostics/bucket/__init__.py index 346731a5..405fd402 100644 --- a/src/aignostics/bucket/__init__.py +++ b/src/aignostics/bucket/__init__.py @@ -6,9 +6,11 @@ from ._cli import cli from ._service import Service -from ._settings import Settings -__all__ += ["Service", "Settings", "cli"] +__all__ += [ + "Service", + "cli", +] # advertise PageBuilder to enable auto-discovery if find_spec("nicegui"): diff --git a/src/aignostics/bucket/_service.py b/src/aignostics/bucket/_service.py index efe73e71..cc7a05ce 100644 --- a/src/aignostics/bucket/_service.py +++ b/src/aignostics/bucket/_service.py @@ -14,7 +14,7 @@ from botocore.client import BaseClient from aignostics.platform import Service as PlatformService -from aignostics.utils import BaseService, Health, get_logger, get_user_data_directory +from aignostics.utils import UNHIDE_SENSITIVE_INFO, BaseService, Health, get_logger, get_user_data_directory from ._settings import Settings @@ -93,7 +93,7 @@ def __init__(self) -> None: super().__init__(Settings) self._platform_service = PlatformService() - def info(self, mask_secrets: bool = True) -> dict[str, Any]: # noqa: ARG002, PLR6301 + def info(self, mask_secrets: bool = True) -> dict[str, Any]: """Determine info of this service. Args: @@ -102,7 +102,7 @@ def info(self, mask_secrets: bool = True) -> dict[str, Any]: # noqa: ARG002, PL Returns: dict[str,Any]: The info of this service. """ - return {} + return {"settings": self._settings.model_dump(context={UNHIDE_SENSITIVE_INFO: not mask_secrets})} def health(self) -> Health: # noqa: PLR6301 """Determine health of this service. 
diff --git a/src/aignostics/constants.py b/src/aignostics/constants.py
index ecab8366..7a4b49cc 100644
--- a/src/aignostics/constants.py
+++ b/src/aignostics/constants.py
@@ -11,8 +11,4 @@
 # Project specific configuration
 os.environ["MATPLOTLIB"] = "false"
 os.environ["NICEGUI_STORAGE_PATH"] = str(Path.home().resolve() / ".aignostics" / ".nicegui")
-
-HETA_APPLICATION_ID = "he-tme"
-TEST_APP_APPLICATION_ID = "test-app"
 WSI_SUPPORTED_FILE_EXTENSIONS = {".dcm", ".tiff", ".tif", ".svs"}
-WSI_SUPPORTED_FILE_EXTENSIONS_TEST_APP = {".tiff"}
diff --git a/src/aignostics/dataset/_cli.py b/src/aignostics/dataset/_cli.py
index 8e5ae825..3f29d835 100644
--- a/src/aignostics/dataset/_cli.py
+++ b/src/aignostics/dataset/_cli.py
@@ -1,17 +1,18 @@
 """CLI of dataset module."""
 
-import sys
 import webbrowser
 from pathlib import Path
 from typing import Annotated
 
+import requests
 import typer
 
+from aignostics.platform import generate_signed_url as platform_generate_signed_url
 from aignostics.utils import console, get_logger, get_user_data_directory
 
 logger = get_logger(__name__)
 
 PATH_LENGTH_MAX = 260
 TARGET_LAYOUT_DEFAULT = "%collection_id/%PatientID/%StudyInstanceUID/%Modality_%SeriesInstanceUID/"
 
 cli = typer.Typer(
@@ -41,14 +42,8 @@ def indices() -> None:
     """List available indices of the IDC Portal."""
     from aignostics.third_party.idc_index import IDCClient  # noqa: PLC0415
 
-    try:
-        client = IDCClient.client()
-        console.print(list(client.indices_overview.keys()))
-    except Exception as e:
-        message = f"Error fetching indices overview: {e!s}"
-        logger.exception(message)
-        console.print(f"[red]{message}[/red]")
-        sys.exit(1)
+    client = IDCClient.client()
+    console.print(list(client.indices_overview.keys()))
 
 
 @idc_app.command()
@@ -64,15 +59,9 @@ def columns(
     """List available columns in given index of the IDC Portal."""
     from aignostics.third_party.idc_index import IDCClient  # noqa: PLC0415
 
-    try:
-        client = IDCClient.client()
-        client.fetch_index(index)
-        console.print(list(getattr(client, index).columns))
-    except Exception as e:
-        message = f"Error fetching columns for index '{index}': {e!s}"
-        logger.exception(message)
-        console.print(f"[red]{message}[/red]")
-        sys.exit(1)
+    client = IDCClient.client()
+    client.fetch_index(index)
+    console.print(list(getattr(client, index).columns))
 
 
 @idc_app.command()
@@ -107,19 +96,13 @@ def query(
 
     from aignostics.third_party.idc_index import IDCClient  # noqa: PLC0415
 
-    try:
-        client = IDCClient.client()
-        for idx in [idx.strip() for idx in indices.split(",") if idx.strip()]:
-            logger.info("Fetching index: '%s'", idx)
-            client.fetch_index(idx)
+    client = IDCClient.client()
+    for idx in [idx.strip() for idx in indices.split(",") if idx.strip()]:
+        logger.info("Fetching index: '%s'", idx)
+        client.fetch_index(idx)
 
-        pd.set_option("display.max_colwidth", None)
-        console.print(client.sql_query(sql_query=query))  # type: ignore[no-untyped-call]
-    except Exception as e:
-        message = f"Error executing query '{query}': {e!s}"
-        logger.exception(message)
-        console.print(f"[red]{message}[/red]")
-        sys.exit(1)
+    pd.set_option("display.max_colwidth", None)
+    console.print(client.sql_query(sql_query=query))  # type: ignore[no-untyped-call]
 
 
 @idc_app.command(name="download")
@@ -129,7 +112,6 @@ def idc_download(
    source: Annotated[
        str,
        typer.Argument(
            help="Identifier or comma-separated set of identifiers."
            " IDs matched against collection_id, PatientId, StudyInstanceUID, SeriesInstanceUID or SOPInstanceUID."
-            " Example: 1.3.6.1.4.1.5962.99.1.1069745200.1645485340.1637452317744.2.0"
        ),
    ],
    target: Annotated[
@@ -149,26 +131,66 @@ def idc_download(
    ] = TARGET_LAYOUT_DEFAULT,
    dry_run: Annotated[bool, typer.Option(help="dry run")] = False,
 ) -> None:
-    """Download from manifest file, identifier, or comma-separate set of identifiers."""
-    from ._service import Service  # noqa: PLC0415
-
-    try:
-        matches_found = Service.download_idc(
-            source=source,
-            target=target,
-            target_layout=target_layout,
-            dry_run=dry_run,
+    """Download from manifest file, identifier, or comma-separated set of identifiers.
+
+    Raises:
+        typer.Exit: If the target directory does not exist.
+    """
+    from aignostics.third_party.idc_index import IDCClient  # noqa: PLC0415
+
+    client = IDCClient.client()
+    logger.info("Downloading instance index from IDC version: %s", client.get_idc_version())  # type: ignore[no-untyped-call]
+
+    target_directory = Path(target)
+    if not target_directory.is_dir():
+        logger.error("Target directory does not exist: %s", target_directory)
+        raise typer.Exit(code=1)
+
+    item_ids = [item for item in source.split(",") if item]
+
+    if not item_ids:
+        logger.error("No valid IDs provided.")
+
+    index_df = client.index
+    client.fetch_index("sm_instance_index")
+    logger.info("Downloaded instance index")
+    sm_instance_index_df = client.sm_instance_index
+
+    def check_and_download(column_name: str, item_ids: list[str], target_directory: Path, kwarg_name: str) -> bool:
+        if column_name != "SOPInstanceUID":
+            matches = index_df[column_name].isin(item_ids)
+            matched_ids = index_df[column_name][matches].unique().tolist()
+        else:
+            matches = sm_instance_index_df[column_name].isin(item_ids)  # type: ignore
+            matched_ids = sm_instance_index_df[column_name][matches].unique().tolist()  # type: ignore
+        if not matched_ids:
+            return False
+        unmatched_ids = list(set(item_ids) - set(matched_ids))
+        if unmatched_ids:
+            logger.debug("Partial match for %s: matched %s, unmatched %s", column_name, matched_ids, unmatched_ids)
+        logger.info("Identified matching %s: %s", column_name, matched_ids)
+        client.download_from_selection(**{  # type: ignore[no-untyped-call]
+            kwarg_name: matched_ids,
+            "downloadDir": target_directory,
+            "dirTemplate": target_layout,
+            "quiet": False,
+            "show_progress_bar": True,
+            "use_s5cmd_sync": True,
+            "dry_run": dry_run,
+        })
+        return True
+
+    matches_found = 0
+    matches_found += check_and_download("collection_id", item_ids, target_directory, "collection_id")
+    matches_found += check_and_download("PatientID", item_ids, target_directory, "patientId")
+    matches_found += check_and_download("StudyInstanceUID", item_ids, target_directory, "studyInstanceUID")
+    matches_found += check_and_download("SeriesInstanceUID", item_ids, target_directory, "seriesInstanceUID")
+    matches_found += check_and_download("SOPInstanceUID", item_ids, target_directory, "sopInstanceUID")
+    if not matches_found:
+        logger.error(
+            "None of the values passed matched any of the identifiers: "
+            "collection_id, PatientID, StudyInstanceUID, SeriesInstanceUID, SOPInstanceUID."
) - console.print(f"[green]Successfully downloaded {matches_found} identifier type(s) to {target}[/green]") - except ValueError as e: - logger.warning("Bad input to download from IDC for IDs '%s': %s", source, e) - console.print(f"[warning]Warning:[/warning] {e}") - sys.exit(2) - except Exception as e: - message = f"Error downloading data for IDs '{source}': {e!s}" - logger.exception(message) - console.print(f"[red]{message}[/red]") - sys.exit(1) @aignostics_app.command("download") @@ -176,8 +198,7 @@ def aignostics_download( source_url: Annotated[ str, typer.Argument( - help="URL to download." - " Example: gs://aignx-storage-service-dev/sample_data_formatted/9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff" + help="URL to download, e.g. gs://aignx-storage-service-dev/sample_data_formatted/9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff" ), ], destination_directory: Annotated[ @@ -205,40 +226,41 @@ def aignostics_download( TransferSpeedColumn, ) - from ._service import Service # noqa: PLC0415 - - try: - # Get filename for progress display - filename = source_url.split("/")[-1] - - with Progress( - TextColumn("[progress.description]Downloading"), - BarColumn(), - TaskProgressColumn(), - TimeRemainingColumn(), - FileSizeColumn(), - TotalFileSizeColumn(), - TransferSpeedColumn(), - TextColumn("[progress.description]{task.description}"), - ) as progress: - task = progress.add_task(f"Downloading {filename}", total=0) - - def update_progress(bytes_downloaded: int, total_size: int, _filename: str) -> None: - progress.update(task, advance=bytes_downloaded, total=total_size) - - output_path = Service.download_aignostics( - source_url=source_url, - destination_directory=destination_directory, - download_progress_callable=update_progress, - ) - - console.print(f"[green]Successfully downloaded to {output_path}[/green]") - except ValueError as e: - logger.warning("Bad input to download from '%s': %s", source_url, e) - console.print(f"[warning]Warning:[/warning] Bad input: {e}") - sys.exit(2) - except Exception as e: - message = f"Error downloading data from '{source_url}': {e!s}" - logger.exception(message) - console.print(f"[red]{message}[/red]") - sys.exit(1) + # Get filename from URL + filename = source_url.split("/")[-1] + + # Generate a signed URL + source_url_signed = platform_generate_signed_url(source_url) + + output_path = Path(destination_directory) / filename + + console.print(f"Downloading from {source_url} to {output_path}") + + # Make sure the destination directory exists + Path(destination_directory).mkdir(parents=True, exist_ok=True) + + # Start the request to get content length + response = requests.get(source_url_signed, stream=True, timeout=60) + total_size = int(response.headers.get("content-length", 0)) + + with Progress( + TextColumn("[progress.description]Downloading"), + BarColumn(), + TaskProgressColumn(), + TimeRemainingColumn(), + FileSizeColumn(), + TotalFileSizeColumn(), + TransferSpeedColumn(), + TextColumn("[progress.description]{task.description}"), + ) as progress: + # Create a task for overall progress + task = progress.add_task(f"Downloading {filename}", total=total_size) + + # Write the file + with open(output_path, "wb") as f: + for chunk in response.iter_content(chunk_size=8192): + if chunk: # filter out keep-alive new chunks + f.write(chunk) + progress.update(task, advance=len(chunk)) + + console.print(f"[green]Successfully downloaded to {output_path}[/green]") diff --git a/src/aignostics/dataset/_service.py b/src/aignostics/dataset/_service.py index bc87fe0b..7b7c9c4b 100644 
--- a/src/aignostics/dataset/_service.py +++ b/src/aignostics/dataset/_service.py @@ -6,14 +6,10 @@ import sys import threading import time -from collections.abc import Callable from multiprocessing import Queue from pathlib import Path from typing import Any -import requests - -from aignostics.platform import generate_signed_url as platform_generate_signed_url from aignostics.utils import SUBPROCESS_CREATION_FLAGS, BaseService, Health, get_logger logger = get_logger(__name__) @@ -207,10 +203,10 @@ def download_with_queue( # noqa: PLR0915, C901 def check_and_download(column_name: str, item_ids: list[str], target_directory: Path, kwarg_name: str) -> bool: if column_name != "SOPInstanceUID": matches = index_df[column_name].isin(item_ids) - matched_ids = index_df[column_name][matches].unique().tolist() # pyright: ignore[reportAttributeAccessIssue] + matched_ids = index_df[column_name][matches].unique().tolist() else: - matches = sm_instance_index_df[column_name].isin(item_ids) # type: ignore[index] - matched_ids = sm_instance_index_df[column_name][matches].unique().tolist() # type: ignore[index] # pyright: ignore[reportAttributeAccessIssue] + matches = sm_instance_index_df[column_name].isin(item_ids) # type: ignore + matched_ids = sm_instance_index_df[column_name][matches].unique().tolist() # type: ignore if not matched_ids: return False unmatched_ids = list(set(item_ids) - set(matched_ids)) @@ -311,141 +307,3 @@ def check_and_download(column_name: str, item_ids: list[str], target_directory: message += "collection_id, PatientID, StudyInstanceUID, SeriesInstanceUID, SOPInstanceUID." logger.warning(message) raise ValueError(message) - - @staticmethod - def download_idc( - source: str, - target: Path, - target_layout: str = TARGET_LAYOUT_DEFAULT, - dry_run: bool = False, - ) -> int: - """Download from IDC using identifier or comma-separated set of identifiers. - - Args: - source (str): Identifier or comma-separated set of identifiers. - IDs matched against collection_id, PatientId, StudyInstanceUID, SeriesInstanceUID or SOPInstanceUID. - target (Path): Target directory for download. - target_layout (str): Layout of the target directory. - dry_run (bool): If True, perform a dry run. - - Returns: - int: Number of matched identifier types. - - Raises: - ValueError: If target directory does not exist or no valid IDs provided. - RuntimeError: If download fails. - """ - from aignostics.third_party.idc_index import IDCClient # noqa: PLC0415 - - client = IDCClient.client() - logger.info("Downloading instance index from IDC version: %s", client.get_idc_version()) # type: ignore[no-untyped-call] - - target_directory = Path(target) - if not target_directory.is_dir(): - message = f"Target directory does not exist: {target_directory}" - logger.error(message) - raise ValueError(message) - - item_ids = [item for item in source.split(",") if item] - - if not item_ids: - message = "No valid IDs provided." 
- logger.error(message) - raise ValueError(message) - - index_df = client.index - client.fetch_index("sm_instance_index") - logger.info("Downloaded instance index") - sm_instance_index_df = client.sm_instance_index - - def check_and_download(column_name: str, item_ids: list[str], target_directory: Path, kwarg_name: str) -> bool: - if column_name != "SOPInstanceUID": - matches = index_df[column_name].isin(item_ids) - matched_ids = index_df[column_name][matches].unique().tolist() # pyright: ignore[reportAttributeAccessIssue] - else: - matches = sm_instance_index_df[column_name].isin(item_ids) # type: ignore[index] - matched_ids = sm_instance_index_df[column_name][matches].unique().tolist() # type: ignore[index] # pyright: ignore[reportAttributeAccessIssue] - if not matched_ids: - return False - unmatched_ids = list(set(item_ids) - set(matched_ids)) - if unmatched_ids: - logger.debug("Partial match for %s: matched %s, unmatched %s", column_name, matched_ids, unmatched_ids) - logger.info("Identified matching %s: %s", column_name, matched_ids) - client.download_from_selection(**{ # type: ignore[no-untyped-call] - kwarg_name: matched_ids, - "downloadDir": target_directory, - "dirTemplate": target_layout, - "quiet": False, - "show_progress_bar": True, - "use_s5cmd_sync": True, - "dry_run": dry_run, - }) - return True - - matches_found = 0 - matches_found += check_and_download("collection_id", item_ids, target_directory, "collection_id") - matches_found += check_and_download("PatientID", item_ids, target_directory, "patientId") - matches_found += check_and_download("StudyInstanceUID", item_ids, target_directory, "studyInstanceUID") - matches_found += check_and_download("SeriesInstanceUID", item_ids, target_directory, "seriesInstanceUID") - matches_found += check_and_download("SOPInstanceUID", item_ids, target_directory, "sopInstanceUID") - - if not matches_found: - message = ( - "None of the values passed matched any of the identifiers: " - "collection_id, PatientID, StudyInstanceUID, SeriesInstanceUID, SOPInstanceUID." - ) - logger.error(message) - raise ValueError(message) - - return matches_found - - @staticmethod - def download_aignostics( - source_url: str, - destination_directory: Path, - download_progress_callable: Callable[[int, int, str], None] | None = None, - ) -> Path: - """Download from bucket to folder via a bucket URL. - - Args: - source_url (str): URL to download, e.g. gs://aignx-storage-service-dev/sample_data_formatted/... - destination_directory (Path): Destination directory to download to. - download_progress_callable (Callable[[int, int, str], None] | None): Optional callback for progress updates. - Called with (bytes_downloaded, total_size, filename). - - Returns: - Path: The path to the downloaded file. - - Raises: - ValueError: If the source URL is invalid. - RuntimeError: If the download fails. 
-        """
-        try:
-            # Get filename from URL
-            filename = source_url.rsplit("/", maxsplit=1)[-1]
-
-            source_url_signed = platform_generate_signed_url(source_url)
-
-            output_path = Path(destination_directory) / filename
-
-            logger.info("Downloading from %s to %s", source_url, output_path)
-
-            Path(destination_directory).mkdir(parents=True, exist_ok=True)
-
-            # Start the request to get content length
-            response = requests.get(source_url_signed, stream=True, timeout=60)
-            total_size = int(response.headers.get("content-length", 0))
-
-            with open(output_path, "wb") as f:
-                for chunk in response.iter_content(chunk_size=8192):
-                    if chunk:  # filter out keep-alive new chunks
-                        f.write(chunk)
-                        if download_progress_callable:
-                            download_progress_callable(len(chunk), total_size, filename)
-
-            logger.info("Successfully downloaded to %s", output_path)
-            return output_path
-        except Exception as e:
-            message = f"Failed to download data from '{source_url}': {e}"
-            logger.exception(message)
-            raise RuntimeError(message) from e
diff --git a/src/aignostics/gui/_error.py b/src/aignostics/gui/_error.py
index 77eda572..952146d2 100644
--- a/src/aignostics/gui/_error.py
+++ b/src/aignostics/gui/_error.py
@@ -42,4 +42,4 @@ async def page_error_force() -> None:  # noqa: RUF029
     """
     with frame("Forced Error", left_sidebar=False):
         pass
-    raise Exception("forced")  # noqa: EM101, TRY002
+    raise Exception("bla")  # noqa: EM101, TRY002
diff --git a/src/aignostics/gui/_theme.py b/src/aignostics/gui/_theme.py
index e7daad8f..6dbd38a0 100644
--- a/src/aignostics/gui/_theme.py
+++ b/src/aignostics/gui/_theme.py
@@ -84,12 +84,6 @@ def theme() -> None:
             height: 100%;
             min-height: 900px;
         }
-        :global(.jse-modal-window.jse-modal-window-jsoneditor)
-        {
-            width: 100%;
-            height: 100%;
-            min-height: 900px;
-        }
     """)
diff --git a/src/aignostics/notebook/_gui.py b/src/aignostics/notebook/_gui.py
index 1a9e9195..3971b903 100644
--- a/src/aignostics/notebook/_gui.py
+++ b/src/aignostics/notebook/_gui.py
@@ -80,24 +80,24 @@ def page_index() -> None:
             )
             ui.space()
 
-    @ui.page("/notebook/{run_id}")
-    def page_application_run_marimo(run_id: str, results_folder: str) -> None:
+    @ui.page("/notebook/{application_run_id}")
+    def page_application_run_marimo(application_run_id: str, results_folder: str) -> None:
         """Inspect Application Run in Marimo."""
         theme()
         with ui.row().classes("w-full justify-end"):
             ui.button(
-                "Back to Application Run" if run_id != "all" else "Back to Marimo Extension",
+                "Back to Application Run" if application_run_id != "all" else "Back to Marimo Extension",
                 icon="arrow_back",
-                on_click=lambda _, run_id=run_id: ui.navigate.to(
-                    f"/application/run/{run_id}" if run_id != "all" else "/notebook"
+                on_click=lambda _, application_run_id=application_run_id: ui.navigate.to(
+                    f"/application/run/{application_run_id}" if application_run_id != "all" else "/notebook"
                ),
            ).mark("BUTTON_NOTEBOOK_BACK")
 
        try:
            server_url = Service().start()
            ui.html(
-                f'<iframe src="{server_url}?run_id={run_id}&results_folder={results_folder}"></iframe>',
+                f'<iframe src="{server_url}?application_run_id={application_run_id}&results_folder={results_folder}"></iframe>',
                sanitize=False,
diff --git a/src/aignostics/notebook/_notebook.py b/src/aignostics/notebook/_notebook.py
index 0ab8bd6e..6f5b64d8 100644
--- a/src/aignostics/notebook/_notebook.py
+++ b/src/aignostics/notebook/_notebook.py
@@ -50,8 +50,8 @@ def _():
 @app.cell
 def _(Path, mo):
     query_params = mo.query_params()
-    run_id = query_params.get("run_id","Not set")
-    print(f"Application run id '{run_id}'")
+    application_run_id = query_params.get("application_run_id","Not set")
+    print(f"Application run id '{application_run_id}'")
     mo.vstack([
         mo.ui.file_browser(
             Path(query_params.get("results_folder",
get_user_data_directory("results"))), diff --git a/src/aignostics/platform/CLAUDE.md b/src/aignostics/platform/CLAUDE.md index 7d59f934..bca9ee54 100644 --- a/src/aignostics/platform/CLAUDE.md +++ b/src/aignostics/platform/CLAUDE.md @@ -8,45 +8,20 @@ The platform module serves as the foundational API client interface for the Aign ### Core Responsibilities -**Authentication & API Access:** - -- **OAuth 2.0 Authentication**: Device flow, JWT validation, token lifecycle management with 5-minute refresh buffer -- **Environment Management**: Multi-environment support (dev/staging/production) with automatic endpoint detection -- **Resource Abstraction**: Type-safe wrappers for applications, versions, runs with memory-efficient pagination - -**Performance & Reliability (NEW in v1.0.0-beta.7):** - -- **Operation Caching**: Token-aware caching for read operations with configurable TTLs (5-15 min) -- **Retry Logic**: Exponential backoff with jitter for transient failures (4 attempts default) -- **Timeout Management**: Per-operation timeouts (30s default, configurable 0.1s-300s) -- **Cache Invalidation**: Automatic global cache clearing on mutations for consistency - -**Observability & Tracking (NEW in v1.0.0-beta.7):** - -- **SDK Metadata System**: Automatic tracking of execution context, user, CI/CD environment for all runs -- **JSON Schema Validation**: Pydantic-based validation with versioned schemas (v0.0.1) -- **Enhanced User Agent**: Context-aware user agent with pytest and GitHub Actions integration -- **Structured Logging**: Retry warnings, cache hits/misses, performance metrics - -**API v1.0.0-beta.7 Support:** - -- **State Models**: Enum-based RunState, ItemState, ArtifactState with termination reasons -- **Statistics Tracking**: Aggregate RunItemStatistics for progress monitoring -- **Error Handling**: Comprehensive error recovery with user guidance +- **Authentication & Security**: OAuth 2.0 device flow, JWT validation, token lifecycle management +- **API Client Management**: Resource-oriented client with automatic retries, connection pooling, proxy support +- **Resource Abstraction**: Type-safe wrappers for applications, versions, runs with pagination +- **Environment Management**: Multi-environment support (dev/staging/production) with automatic detection +- **Error Recovery**: Comprehensive error handling with user guidance and automatic recovery strategies +- **Retry Handling**: Retry handling on auth requests ### User Interfaces **CLI Commands (`_cli.py`):** -User authentication commands: - -- `user login` - Authenticate with Aignostics Platform (device flow or browser) -- `user logout` - Remove cached authentication token -- `user whoami` - Display current user information and organization details - -SDK metadata commands: - -- `sdk metadata-schema` - Display or export the JSON Schema for SDK metadata (supports `--pretty` flag) +- `login` - Authenticate with Aignostics Platform (device flow or browser) +- `logout` - Remove cached authentication token +- `whoami` - Display current user information and organization details **Service Layer (`_service.py`):** @@ -105,9 +80,9 @@ class Client: """Get current user info.""" return self._api.get_me_v1_me_get() - def run(self, run_id: str) -> Run: + def run(self, application_run_id: str) -> ApplicationRun: """Get specific run by ID.""" - return Run(self._api, run_id) + return ApplicationRun(self._api, application_run_id) def application(self, application_id: str) -> Application: """Find application by ID (iterates through list).""" @@ 
-116,23 +91,6 @@ class Client: if app.application_id == application_id: return app raise NotFoundException - - def application_version(self, application_id: str, - version_number: str | None = None) -> ApplicationVersion: - """Get application version details. - - Args: - application_id: The ID of the application (e.g., 'heta') - version_number: The semantic version number (e.g., '1.0.0') - If None, returns the latest version - - Returns: - ApplicationVersion with application_id and version_number attributes - """ - return Versions(self._api).details( - application_id=application_id, - application_version=version_number - ) ``` ### Authentication Flow (`_authentication.py`) @@ -207,1038 +165,17 @@ def paginate(func, *args, page_size=PAGE_SIZE, **kwargs): class Runs: def list( self, - application_id: str | None = None, - application_version: str | None = None, + application_version_id: str | None = None, page_size: int = LIST_APPLICATION_RUNS_MAX_PAGE_SIZE ): - """List runs with pagination. - - Args: - application_id: Optional filter by application ID - application_version: Optional filter by version number (not version_id) - page_size: Number of results per page (max 100) - - Returns: - Iterator[Run] Iterator of Run instances - """ + """List runs with pagination.""" if page_size > LIST_APPLICATION_RUNS_MAX_PAGE_SIZE: raise ValueError(f"page_size must be <= {LIST_APPLICATION_RUNS_MAX_PAGE_SIZE}") # Uses paginate helper internally - # Returns iterator of run instances - # Each run has application_id and version_number attributes -``` - -### SDK Metadata System (`_sdk_metadata.py`) - -**ENHANCED FEATURE:** The SDK now automatically attaches structured metadata to every application run and item, providing comprehensive tracking of execution context, user information, CI/CD environment details, tags, and timestamps. 
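-
-For orientation, here is a minimal usage sketch (the module path `aignostics.platform._sdk_metadata` and the `note` value are illustrative assumptions; the helpers themselves are documented below):
-
-```python
-# Hypothetical sketch: build, annotate, and validate run SDK metadata by hand.
-from aignostics.platform._sdk_metadata import (  # assumed module path
-    build_run_sdk_metadata,
-    validate_run_sdk_metadata,
-)
-
-metadata = build_run_sdk_metadata()  # auto-detects interface, initiator, user, and CI context
-metadata["note"] = "smoke test"  # optional user note, illustrative value
-
-validate_run_sdk_metadata(metadata)  # strict Pydantic validation; raises on violations
-print(metadata["schema_version"], metadata["submission"]["interface"])
-```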
- -**Architecture:** - -``` -┌────────────────────────────────────────────────────┐ -│ SDK Metadata System │ -├────────────────────────────────────────────────────┤ -│ Pydantic Models (Validation + Schema Generation) │ -│ ├─ RunSdkMetadata (run-level metadata) │ -│ │ ├─ SubmissionMetadata (how/when submitted) │ -│ │ ├─ UserMetadata (organization/user info) │ -│ │ ├─ CIMetadata (GitHub Actions + pytest) │ -│ │ ├─ WorkflowMetadata (control flags) │ -│ │ ├─ SchedulingMetadata (due dates/deadlines) │ -│ │ ├─ tags (set[str]) - NEW │ -│ │ ├─ created_at (timestamp) - NEW │ -│ │ └─ updated_at (timestamp) - NEW │ -│ └─ ItemSdkMetadata (item-level metadata) - NEW │ -│ ├─ PlatformBucketMetadata (storage info) │ -│ ├─ tags (set[str]) │ -│ ├─ created_at (timestamp) │ -│ └─ updated_at (timestamp) │ -├────────────────────────────────────────────────────┤ -│ Runtime Functions │ -│ ├─ build_run_sdk_metadata() → dict │ -│ ├─ validate_run_sdk_metadata() → bool │ -│ ├─ get_run_sdk_metadata_json_schema() → dict │ -│ ├─ build_item_sdk_metadata() → dict - NEW │ -│ ├─ validate_item_sdk_metadata() → bool - NEW │ -│ └─ get_item_sdk_metadata_json_schema() → dict │ -├────────────────────────────────────────────────────┤ -│ JSON Schema (Versioned) │ -│ ├─ Run schema version: 0.0.4 │ -│ └─ Item schema version: 0.0.3 │ -│ Published at: docs/source/_static/ │ -│ URLs: sdk_{run|item}_custom_metadata_schema_* │ -└────────────────────────────────────────────────────┘ -``` - -**Schema Versions:** Run `0.0.4`, Item `0.0.3` - -**Core Pydantic Models:** - -```python -# From _sdk_metadata.py (actual implementation) - -class SubmissionMetadata(BaseModel): - """Metadata about how the SDK was invoked.""" - date: str # ISO 8601 timestamp - interface: Literal["script", "cli", "launchpad"] # How SDK was accessed - source: Literal["user", "test", "bridge"] # Who initiated the run - -class UserMetadata(BaseModel): - """User information metadata.""" - organization_id: str - organization_name: str - user_email: str - user_id: str - -class GitHubCIMetadata(BaseModel): - """GitHub Actions CI metadata.""" - action: str | None - job: str | None - ref: str | None - ref_name: str | None - ref_type: str | None # branch or tag - repository: str # owner/repo - run_attempt: str | None - run_id: str - run_number: str | None - run_url: str # Full URL to workflow run - runner_arch: str | None # x64, ARM64, etc. 
- runner_os: str | None # Linux, Windows, macOS - sha: str | None # Git commit SHA - workflow: str | None - workflow_ref: str | None - -class PytestCIMetadata(BaseModel): - """Pytest test execution metadata.""" - current_test: str # Test name being executed - markers: list[str] | None # Pytest markers applied - -class CIMetadata(BaseModel): - """CI/CD environment metadata.""" - github: GitHubCIMetadata | None - pytest: PytestCIMetadata | None - -class WorkflowMetadata(BaseModel): - """Workflow control metadata.""" - onboard_to_aignostics_portal: bool = False - validate_only: bool = False - -class SchedulingMetadata(BaseModel): - """Scheduling metadata for run execution.""" - due_date: str | None # ISO 8601, requested completion time - deadline: str | None # ISO 8601, hard deadline - -class RunSdkMetadata(BaseModel): - """Complete Run SDK metadata schema.""" - schema_version: str # Currently "0.0.4" - created_at: str # ISO 8601 timestamp - NEW - updated_at: str # ISO 8601 timestamp - NEW - tags: set[str] | None # Optional tags - NEW - submission: SubmissionMetadata - user_agent: str # Enhanced user agent from utils module - user: UserMetadata | None # Present if authenticated - ci: CIMetadata | None # Present if running in CI - note: str | None # Optional user note - workflow: WorkflowMetadata | None # Optional workflow control - scheduling: SchedulingMetadata | None # Optional scheduling info - - model_config = {"extra": "forbid"} # Strict validation - -class PlatformBucketMetadata(BaseModel): - """Platform bucket storage metadata for items - NEW""" - bucket_name: str # Name of the cloud storage bucket - object_key: str # Object key/path within the bucket - signed_download_url: str # Signed URL for downloading - -class ItemSdkMetadata(BaseModel): - """Complete Item SDK metadata schema - NEW""" - schema_version: str # Currently "0.0.3" - created_at: str # ISO 8601 timestamp - updated_at: str # ISO 8601 timestamp - tags: set[str] | None # Optional item-level tags - platform_bucket: PlatformBucketMetadata | None # Storage location - - model_config = {"extra": "forbid"} # Strict validation -``` - -**Automatic Metadata Generation:** - -```python -def build_run_sdk_metadata(existing_metadata: dict[str, Any] | None = None) -> dict[str, Any]: - """Build SDK metadata automatically attached to runs. 
- - Detection Logic: - - Interface: Detects script vs CLI vs launchpad (NiceGUI) - - Source: Detects user vs test (pytest) vs bridge - - User info: Fetches from Client().me() if authenticated - - GitHub CI: Reads GITHUB_* environment variables - - Pytest: Reads PYTEST_CURRENT_TEST environment variable - - Preserves created_at and submission.date from existing metadata - - Args: - existing_metadata: Existing SDK metadata to preserve timestamps - - Returns: - dict with complete metadata structure including timestamps - """ - # Interface detection - if "typer" in sys.argv[0] or "aignostics" in sys.argv[0]: - interface = "cli" - elif os.getenv("NICEGUI_HOST"): - interface = "launchpad" - else: - interface = "script" - - # Source detection (initiator) - if os.environ.get("AIGNOSTICS_BRIDGE_VERSION"): - initiator = "bridge" - elif os.environ.get("PYTEST_CURRENT_TEST"): - initiator = "test" - else: - initiator = "user" - - # Handle timestamps - preserve created_at, always update updated_at - now = datetime.now(UTC).isoformat(timespec="seconds") - existing_sdk = existing_metadata or {} - created_at = existing_sdk.get("created_at", now) - - # Preserve submission.date from existing metadata - existing_submission = existing_sdk.get("submission", {}) - submission_date = existing_submission.get("date", now) - - # Build metadata structure - metadata = { - "schema_version": "0.0.4", - "created_at": created_at, # NEW - "updated_at": now, # NEW - "submission": { - "date": submission_date, # Preserved from existing - "interface": interface, - "initiator": initiator, # Changed from "source" - }, - "user_agent": user_agent(), # From utils module - } - - # Add user info if authenticated - try: - me = Client().me() - metadata["user"] = { - "organization_id": me.organization.id, - "organization_name": me.organization.name, - "user_email": me.user.email, - "user_id": me.user.id, - } - except Exception: - pass # User info optional - - # Add GitHub CI metadata if present - if os.environ.get("GITHUB_RUN_ID"): - metadata["ci"] = {"github": {...}} # Populated from env vars - - # Add pytest metadata if running in test - if os.environ.get("PYTEST_CURRENT_TEST"): - metadata["ci"] = metadata.get("ci", {}) - metadata["ci"]["pytest"] = { - "current_test": os.environ["PYTEST_CURRENT_TEST"], - "markers": os.environ.get("PYTEST_MARKERS", "").split(",") - } - - return metadata -``` - -**Integration with Run Submission:** - -```python -# From resources/runs.py (actual implementation) - -def submit(self, application_id: str, items: list, custom_metadata: dict = None): - """Submit run with automatic SDK metadata attachment.""" - - # Build SDK metadata automatically - sdk_metadata = build_sdk_metadata() - - # Validate SDK metadata - validate_sdk_metadata(sdk_metadata) - - # Merge with custom metadata under 'sdk' key - if custom_metadata is None: - custom_metadata = {} - - custom_metadata.setdefault("sdk", {}) - custom_metadata["sdk"].update(sdk_metadata) - - # Submit run with merged metadata - return self._api.create_run( - application_id=application_id, - items=items, - custom_metadata=custom_metadata - ) -``` - -**JSON Schema Generation:** - -The SDK provides versioned JSON Schemas for metadata validation: - -```bash -# Via CLI -aignostics sdk metadata-schema --pretty > schema.json - -# Schema location (in repository) -docs/source/_static/sdk_metadata_schema_v0.0.1.json -docs/source/_static/sdk_metadata_schema_latest.json - -# Public URL 
-https://raw.githubusercontent.com/aignostics/python-sdk/main/docs/source/_static/sdk_metadata_schema_latest.json -``` - -**Schema Generation (Noxfile Task):** - -```python -# From noxfile.py -def _generate_sdk_metadata_schema(session: nox.Session) -> None: - """Generate SDK metadata JSON schema with versioned filename.""" - - # Generate schema by calling CLI - session.run( - "aignostics", - "sdk", - "metadata-schema", - "--no-pretty", - stdout=output_file, - external=True, - ) - - # Extract version from schema $id - schema = json.load(output_file) - version = extract_version_from_id(schema["$id"]) - - # Write to both versioned and latest files - Path(f"docs/source/_static/sdk_metadata_schema_{version}.json").write(schema) - Path("docs/source/_static/sdk_metadata_schema_latest.json").write(schema) -``` - -**Validation Functions:** - -```python -def validate_run_sdk_metadata(metadata: dict[str, Any]) -> bool: - """Validate Run SDK metadata and raise ValidationError if invalid.""" - try: - RunSdkMetadata.model_validate(metadata) - return True - except ValidationError: - logger.exception("SDK metadata validation failed") - raise - -def validate_run_sdk_metadata_silent(metadata: dict[str, Any]) -> bool: - """Validate Run SDK metadata without raising exceptions.""" - try: - RunSdkMetadata.model_validate(metadata) - return True - except ValidationError: - return False - -def get_run_sdk_metadata_json_schema() -> dict[str, Any]: - """Get JSON Schema for Run SDK metadata with $schema and $id fields.""" - schema = RunSdkMetadata.model_json_schema() - schema["$schema"] = "https://json-schema.org/draft/2020-12/schema" - schema["$id"] = ( - f"https://raw.githubusercontent.com/aignostics/python-sdk/main/" - f"docs/source/_static/sdk_run_custom_metadata_schema_v{SDK_METADATA_SCHEMA_VERSION}.json" - ) - return schema - -def build_item_sdk_metadata(existing_metadata: dict[str, Any] | None = None) -> dict[str, Any]: - """Build SDK metadata to attach to individual items - NEW""" - now = datetime.now(UTC).isoformat(timespec="seconds") - existing_sdk = existing_metadata or {} - created_at = existing_sdk.get("created_at", now) - - return { - "schema_version": ITEM_SDK_METADATA_SCHEMA_VERSION, - "created_at": created_at, - "updated_at": now, - } - -def validate_item_sdk_metadata(metadata: dict[str, Any]) -> bool: - """Validate Item SDK metadata - NEW""" - try: - ItemSdkMetadata.model_validate(metadata) - return True - except ValidationError: - logger.exception("Item SDK metadata validation failed") - raise - -def get_item_sdk_metadata_json_schema() -> dict[str, Any]: - """Get JSON Schema for Item SDK metadata - NEW""" - schema = ItemSdkMetadata.model_json_schema() - schema["$schema"] = "https://json-schema.org/draft/2020-12/schema" - schema["$id"] = ( - f"https://raw.githubusercontent.com/aignostics/python-sdk/main/" - f"docs/source/_static/sdk_item_custom_metadata_schema_v{ITEM_SDK_METADATA_SCHEMA_VERSION}.json" - ) - return schema -``` - -**Key Features:** - -1. **Automatic Attachment** - SDK metadata added to every run and item submission without user action -2. **Environment Detection** - Automatically detects script/CLI/GUI, user/test/bridge contexts -3. **CI/CD Integration** - Captures GitHub Actions workflow details and pytest test context -4. **User Agent Integration** - Uses enhanced user_agent() from utils module -5. **Strict Validation** - Pydantic models with `extra="forbid"` ensure data quality -6. **Versioned Schema** - JSON Schema published with semantic versioning (Run: v0.0.4, Item: v0.0.3) -7. 
**Silent Fallback** - User info and CI data are optional, won't fail if unavailable -8. **Custom Metadata Support** - Users can add custom fields alongside SDK metadata -9. **Tags Support** (NEW) - Associate runs and items with searchable tags (`set[str]`) -10. **Timestamps** (NEW) - Track `created_at` (first submission) and `updated_at` (last modification) -11. **Item Metadata** (NEW) - Separate schema for item-level metadata with platform bucket information -12. **Metadata Updates** (NEW) - Update metadata via CLI (`aignostics application run custom-metadata update`) - -**Testing:** - -Comprehensive test suite in `tests/aignostics/platform/sdk_metadata_test.py`: - -- Metadata building in various environments -- Schema validation (valid and invalid cases) -- GitHub CI metadata extraction -- Pytest metadata extraction -- Interface and source detection -- User agent integration -- JSON Schema generation - -### Operation Caching System (`_operation_cache.py`) - -**NEW FEATURE (as of v1.0.0-beta.7):** The platform client now implements intelligent operation caching to reduce redundant API calls and improve performance. - -**Architecture:** - -``` -┌────────────────────────────────────────────────────┐ -│ Operation Caching System │ -├────────────────────────────────────────────────────┤ -│ Cache Storage: dict[cache_key, (result, expiry)] │ -│ ├─ Token-aware caching (per-user isolation) │ -│ ├─ TTL-based expiration │ -│ └─ Automatic invalidation on mutations │ -├────────────────────────────────────────────────────┤ -│ Decorator: @cached_operation │ -│ ├─ ttl: Time-to-live in seconds │ -│ ├─ use_token: Include auth token in key │ -│ └─ instance_attrs: Per-instance caching │ -├────────────────────────────────────────────────────┤ -│ Cache Key Generation │ -│ ├─ cache_key(): func_name:args:kwargs │ -│ └─ cache_key_with_token(): token_hash:... │ -├────────────────────────────────────────────────────┤ -│ Cache Invalidation │ -│ └─ operation_cache_clear(): Clear on mutations │ -└────────────────────────────────────────────────────┘ -``` - -**Core Implementation:** - -```python -# From _operation_cache.py (actual implementation) - -# Global cache storage -_operation_cache: dict[str, tuple[Any, float]] = {} - -def cached_operation( - ttl: int, *, use_token: bool = True, instance_attrs: tuple[str, ...] | None = None -) -> Callable: - """Decorator for caching function results with TTL. 
- - Args: - ttl: Time-to-live for cache in seconds - use_token: Include authentication token in cache key for per-user isolation - instance_attrs: Instance attributes to include in key (e.g., 'run_id') - - Behavior: - - Generates unique cache key from function name, args, kwargs, and optional token - - Returns cached result if present and not expired - - Deletes expired entries automatically - - Stores new results with expiry timestamp - """ - def decorator(func): - def wrapper(*args, **kwargs): - # Build cache key - func_qualified_name = func.__qualname__ # e.g., "Client.me" - - if use_token: - token_hash = hashlib.sha256(get_token().encode()).hexdigest()[:16] - key = f"{token_hash}:{func_qualified_name}:{args}:{sorted(kwargs.items())}" - else: - key = f"{func_qualified_name}:{args}:{sorted(kwargs.items())}" - - # Check cache - if key in _operation_cache: - result, expiry = _operation_cache[key] - if time.time() < expiry: - return result - del _operation_cache[key] - - # Call function and cache result - result = func(*args, **kwargs) - _operation_cache[key] = (result, time.time() + ttl) - return result - return wrapper - return decorator - -def operation_cache_clear(func: Callable | list[Callable] | None = None) -> int: - """Clear operation cache, optionally filtering by function(s). - - Args: - func: Function(s) to clear, or None to clear all entries - - Returns: - Number of cache entries removed - - Usage: - operation_cache_clear() # Clear all - operation_cache_clear(Client.me) # Clear specific function - operation_cache_clear([Client.me, Client.application]) # Clear multiple - """ - if func is None: - removed_count = len(_operation_cache) - _operation_cache.clear() - return removed_count - - # Filter by function qualified name(s) - func_list = func if isinstance(func, list) else [func] - func_qualified_names = [f.__qualname__ for f in func_list] - - keys_to_remove = [ - key for key in _operation_cache - if any(name in key for name in func_qualified_names) - ] - - for key in keys_to_remove: - del _operation_cache[key] - - return len(keys_to_remove) -``` - -**Cache TTL Configuration (from Settings):** - -```python -# Default cache TTLs (from _settings.py) -CACHE_TTL_DEFAULT = 60 * 5 # 5 minutes (most operations) -RUN_CACHE_TTL_DEFAULT = 15 # 15 seconds (runs change frequently) -AUTH_JWK_SET_CACHE_TTL_DEFAULT = 60 * 60 * 24 # 1 day (JWK sets rarely change) - -# Configurable per operation type -me_cache_ttl: int = 300 # 5 minutes -application_cache_ttl: int = 300 # 5 minutes -application_version_cache_ttl: int = 300 # 5 minutes -run_cache_ttl: int = 15 # 15 seconds -auth_jwk_set_cache_ttl: int = 86400 # 1 day -``` - -**Usage in Client Methods:** - -```python -# From _client.py -@cached_operation(ttl=settings().me_cache_ttl, use_token=True) -def me_with_retry() -> Me: - return Retrying(...)( - lambda: self._api.get_me_v1_me_get(...) - ) - -# From resources/runs.py -@cached_operation(ttl=settings().run_cache_ttl, use_token=True) -def details_with_retry(run_id: str) -> RunData: - return Retrying(...)( - lambda: self._api.get_run_v1_runs_run_id_get(run_id, ...) - ) -``` - -**Cache Invalidation Strategy:** - -**Automatic Invalidation on Mutations:** - -```python -# From resources/runs.py - Submit operation -def submit(...) -> Run: - # Clear ALL caches before mutation - operation_cache_clear() - - # Perform mutation - res = self._api.create_run_v1_runs_post(...) 
- return Run(self._api, res.run_id) - -# Cancel operation -def cancel(self) -> None: - operation_cache_clear() # Clear all caches - self._api.cancel_run_v1_runs_run_id_cancel_post(...) - -# Delete operation -def delete(self) -> None: - operation_cache_clear() # Clear all caches - self._api.delete_run_items_v1_runs_run_id_artifacts_delete(...) -``` - -**Key Design Decisions:** - -1. **Global Cache Clearing**: All caches are cleared on ANY mutation to ensure consistency -2. **Token-Aware**: Caching is per-user by default (use_token=True), preventing data leakage -3. **No Partial Invalidation**: Simplicity over optimization - clear everything on write -4. **TTL-Based Expiration**: Stale data automatically expires after configured TTL -5. **Token Changes**: Cache keys include token hash, so token refresh creates new cache namespace - -**Operations That Are Cached:** - -- ✅ `Client.me()` - User information (5 min TTL) -- ✅ `Client.application()` - Application details (5 min TTL) -- ✅ `Client.application_version()` - Version details (5 min TTL) -- ✅ `Applications.list()` - Application list (5 min TTL) -- ✅ `Applications.details()` - Application details (5 min TTL) -- ✅ `Runs.details()` - Run details (15 sec TTL) -- ✅ `Runs.results()` - Run results (15 sec TTL) -- ✅ `Runs.list()` - Run list (15 sec TTL) - -**Cache Bypass (NEW):** - -All cached operations now support a `nocache=True` parameter to force fresh API calls: - -```python -# Bypass cache for specific operations -run = client.runs.details(run_id, nocache=True) # Force API call -applications = client.applications.list(nocache=True) # Bypass cache -me = client.me(nocache=True) # Fresh user info - -# Useful in tests to avoid race conditions -def test_run_update(): - run = client.runs.details(run_id, nocache=True) # Always fresh - assert run.output.state == RunState.PROCESSING -``` - -The `nocache` parameter is particularly useful in: -- **Testing**: Avoid race conditions from stale cached data -- **Real-time monitoring**: Ensure latest status in dashboards -- **After mutations**: Get fresh data immediately after updates - -**Operations That Clear Cache:** - -- ❌ `Runs.submit()` - Creates new run -- ❌ `Run.cancel()` - Changes run state -- ❌ `Run.delete()` - Removes run data - -**Performance Impact:** - -- **Cache Hit**: ~0.1ms (dictionary lookup + expiry check) -- **Cache Miss**: Full API roundtrip (~50-500ms depending on operation) -- **Typical Benefit**: 100-1000x speedup for repeated reads within TTL -- **Memory Usage**: Minimal (~1KB per cached operation result) - -**Configuration:** - -All cache TTLs are configurable via environment variables or `.env` file: - -```bash -# Example .env configuration -AIGNOSTICS_ME_CACHE_TTL=300 # 5 minutes -AIGNOSTICS_APPLICATION_CACHE_TTL=300 # 5 minutes -AIGNOSTICS_RUN_CACHE_TTL=15 # 15 seconds -AIGNOSTICS_AUTH_JWK_SET_CACHE_TTL=86400 # 1 day -``` - -**Testing:** - -Comprehensive test suite in `tests/aignostics/platform/client_cache_test.py`: - -- Cache hit/miss scenarios -- TTL expiration -- Token-aware caching -- Cache invalidation on mutations -- Concurrent access patterns - -### Retry Logic and Timeout System - -**NEW FEATURE (as of v1.0.0-beta.7):** All read operations now include intelligent retry logic with exponential backoff and configurable timeouts. 
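-
-As a standalone illustration of that call shape (the exception type and the fetch function below are stand-ins, not SDK symbols), a Tenacity retry sketch looks like this:
-
-```python
-# Minimal sketch of retry with exponential backoff and jitter via Tenacity.
-import logging
-
-from tenacity import (
-    Retrying,
-    before_sleep_log,
-    retry_if_exception_type,
-    stop_after_attempt,
-    wait_exponential_jitter,
-)
-
-logger = logging.getLogger(__name__)
-
-
-class TransientError(Exception):
-    """Stand-in for the SDK's retryable exception types (e.g. ServiceException)."""
-
-
-def fetch_me() -> str:
-    """Stand-in for a read operation such as Client.me()."""
-    return "me"
-
-
-result = Retrying(
-    retry=retry_if_exception_type(TransientError),
-    stop=stop_after_attempt(4),  # RETRY_ATTEMPTS_DEFAULT
-    wait=wait_exponential_jitter(initial=0.1, max=60.0),  # retry wait min/max defaults
-    before_sleep=before_sleep_log(logger, logging.WARNING),
-    reraise=True,  # re-raise the original exception once attempts are exhausted
-)(fetch_me)
-```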
- -**Architecture:** - -``` -┌────────────────────────────────────────────────────┐ -│ Retry and Timeout System (Tenacity) │ -├────────────────────────────────────────────────────┤ -│ Retry Policy │ -│ ├─ Exponential backoff with jitter │ -│ ├─ Configurable max attempts (default: 4) │ -│ ├─ Configurable wait times (0.1s - 60s) │ -│ └─ Logs warnings before sleep │ -├────────────────────────────────────────────────────┤ -│ Retryable Exceptions │ -│ ├─ ServiceException (5xx errors) │ -│ ├─ Urllib3TimeoutError │ -│ ├─ PoolError │ -│ ├─ IncompleteRead │ -│ ├─ ProtocolError │ -│ └─ ProxyError │ -├────────────────────────────────────────────────────┤ -│ Timeout Configuration │ -│ ├─ Per-operation timeouts (default: 30s) │ -│ ├─ Range: 0.1s - 300s │ -│ └─ Separate timeouts for mutating ops │ -└────────────────────────────────────────────────────┘ -``` - -**Retryable Exceptions:** - -```python -# From _client.py and resources/*.py -RETRYABLE_EXCEPTIONS = ( - ServiceException, # 5xx server errors - Urllib3TimeoutError, # Connection timeout - PoolError, # Connection pool exhausted - IncompleteRead, # Partial response received - ProtocolError, # Protocol violation - ProxyError, # Proxy connection failed -) -``` - -**Retry Implementation Pattern:** - -```python -# Standard retry pattern used throughout the codebase -@cached_operation(ttl=settings().me_cache_ttl, use_token=True) -def me_with_retry() -> Me: - return Retrying( - retry=retry_if_exception_type(exception_types=RETRYABLE_EXCEPTIONS), - stop=stop_after_attempt(settings().me_retry_attempts), # Max 4 attempts - wait=wait_exponential_jitter( - initial=settings().me_retry_wait_min, # 0.1s - max=settings().me_retry_wait_max # 60s - ), - before_sleep=before_sleep_log(logger, logging.WARNING), - reraise=True, # Re-raise after all attempts exhausted - )( - lambda: self._api.get_me_v1_me_get( - _request_timeout=settings().me_timeout, # 30s - _headers={"User-Agent": user_agent()} - ) - ) -``` - -**Retry Configuration (from Settings):** - -```python -# Defaults (from _settings.py) -RETRY_ATTEMPTS_DEFAULT = 4 -RETRY_WAIT_MIN_DEFAULT = 0.1 # seconds -RETRY_WAIT_MAX_DEFAULT = 60.0 # seconds -TIMEOUT_DEFAULT = 30.0 # seconds - -# Per-operation configuration -auth_retry_attempts: int = 4 -auth_retry_wait_min: float = 0.1 -auth_retry_wait_max: float = 60.0 -auth_timeout: float = 30.0 - -me_retry_attempts: int = 4 -me_retry_wait_min: float = 0.1 -me_retry_wait_max: float = 60.0 -me_timeout: float = 30.0 - -application_retry_attempts: int = 4 -application_retry_wait_min: float = 0.1 -application_retry_wait_max: float = 60.0 -application_timeout: float = 30.0 - -run_retry_attempts: int = 4 -run_retry_wait_min: float = 0.1 -run_retry_wait_max: float = 60.0 -run_timeout: float = 30.0 - -# Special timeouts for mutating operations -run_submit_timeout: float = 30.0 -run_cancel_timeout: float = 30.0 -run_delete_timeout: float = 30.0 -``` - -**Exponential Backoff with Jitter:** - -``` -Attempt 1: 0ms wait (first attempt) -Attempt 2: ~100ms wait (initial) -Attempt 3: ~200-400ms wait (exponential + jitter) -Attempt 4: ~400-800ms wait (exponential + jitter) -Max wait capped at: 60s (retry_wait_max) -``` - -**Environment Variable Configuration:** - -```bash -# Example .env configuration -AIGNOSTICS_ME_RETRY_ATTEMPTS=4 -AIGNOSTICS_ME_RETRY_WAIT_MIN=0.1 -AIGNOSTICS_ME_RETRY_WAIT_MAX=60.0 -AIGNOSTICS_ME_TIMEOUT=30.0 - -AIGNOSTICS_RUN_RETRY_ATTEMPTS=4 -AIGNOSTICS_RUN_RETRY_WAIT_MIN=0.1 -AIGNOSTICS_RUN_RETRY_WAIT_MAX=60.0 -AIGNOSTICS_RUN_TIMEOUT=30.0 -``` - -**Operations with 
Retry Logic:** - -**Read Operations (All have retry + cache):** - -- ✅ `Client.me()` - 4 retries, 30s timeout -- ✅ `Client.application()` - 4 retries, 30s timeout -- ✅ `Client.application_version()` - 4 retries, 30s timeout -- ✅ `Applications.list()` - 4 retries, 30s timeout -- ✅ `Runs.details()` - 4 retries, 30s timeout -- ✅ `Runs.results()` - 4 retries, 30s timeout -- ✅ `Runs.list()` - 4 retries, 30s timeout - -**Write Operations (No retry, no cache):** - -- ❌ `Runs.submit()` - No retry (idempotency concerns), 30s timeout -- ❌ `Run.cancel()` - No retry, 30s timeout -- ❌ `Run.delete()` - No retry, 30s timeout - -**Key Design Decisions:** - -1. **Read-Only Retries**: Only read operations retry (mutations could have side effects) -2. **Exponential Backoff**: Reduces load on failing servers -3. **Jitter**: Prevents thundering herd problem -4. **Logging**: Warnings logged before retry sleeps for observability -5. **Re-raise**: After exhausting retries, original exception is re-raised - -**Logging Output:** - -``` -WARNING - Retrying aignostics.platform._client.Client.me in 0.123 seconds - (attempt 1/4, ServiceException: 503 Service Unavailable) -WARNING - Retrying aignostics.platform._client.Client.me in 0.456 seconds - (attempt 2/4, Urllib3TimeoutError) -WARNING - Retrying aignostics.platform._client.Client.me in 1.234 seconds - (attempt 3/4, PoolError) -ERROR - Failed after 4 attempts: ServiceException: 503 Service Unavailable -``` - -**Testing:** - -Comprehensive test suite in `tests/aignostics/platform/client_me_retry_test.py`: - -- Retry on transient errors -- Exponential backoff timing -- Max attempts enforcement -- Timeout behavior -- Exception re-raising - -### API v1.0.0-beta.7 State Models - -**MAJOR CHANGE (as of v1.0.0-beta.7):** Complete refactoring of run, item, and artifact state management with new enum-based state models. 
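-
-As a quick preview of how these models fit together, the sketch below polls a run until it terminates and then
-branches on the termination reason. `wait_until_terminated` is an illustrative helper, not an SDK API; it
-reuses the `client.run(...)`/`details()` calls and the `nocache` bypass documented earlier in this guide.
-
-```python
-import time
-
-from aignostics.platform import Client, RunState, RunTerminationReason
-
-
-def wait_until_terminated(client: Client, run_id: str, interval: float = 5.0):
-    """Polls run details until the run reaches TERMINATED, then returns the details."""
-    while True:
-        details = client.run(run_id).details(nocache=True)  # bypass the 15s run cache
-        if details.output.state == RunState.TERMINATED:
-            return details
-        time.sleep(interval)
-
-
-details = wait_until_terminated(Client(), "run-123")
-if details.output.termination_reason == RunTerminationReason.ALL_ITEMS_PROCESSED:
-    print(f"Succeeded: {details.output.statistics.succeeded} of {details.output.statistics.total}")
-else:
-    print(f"Run ended early: {details.output.termination_reason}")
-```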
- -**New State Enums:** - -```python -# From codegen/out/aignx/codegen/models/ - -class RunState(str, Enum): - """Run lifecycle states.""" - PENDING = 'PENDING' # Run created, waiting to start - PROCESSING = 'PROCESSING' # Run actively processing items - TERMINATED = 'TERMINATED' # Run completed (check termination_reason) - -class ItemState(str, Enum): - """Item (slide) processing states.""" - PENDING = 'PENDING' # Item queued for processing - PROCESSING = 'PROCESSING' # Item being analyzed - TERMINATED = 'TERMINATED' # Item processing done (check termination_reason) - -class ArtifactState(str, Enum): - """Individual artifact processing states.""" - PENDING = 'PENDING' # Artifact generation pending - PROCESSING = 'PROCESSING' # Artifact being created - TERMINATED = 'TERMINATED' # Artifact ready or failed -``` - -**New Termination Reason Enums:** - -```python -class RunTerminationReason(str, Enum): - """Why a run terminated.""" - ALL_ITEMS_PROCESSED = 'ALL_ITEMS_PROCESSED' # Normal completion - CANCELED_BY_SYSTEM = 'CANCELED_BY_SYSTEM' # System initiated cancellation - CANCELED_BY_USER = 'CANCELED_BY_USER' # User canceled the run - -class ItemTerminationReason(str, Enum): - """Why an item terminated.""" - SUCCEEDED = 'SUCCEEDED' # Item processed successfully - USER_ERROR = 'USER_ERROR' # Input validation or user-caused error - SYSTEM_ERROR = 'SYSTEM_ERROR' # Infrastructure or application error - SKIPPED = 'SKIPPED' # Item skipped (e.g., duplicate) - -class ArtifactTerminationReason(str, Enum): - """Why an artifact terminated.""" - SUCCEEDED = 'SUCCEEDED' # Artifact created successfully - USER_ERROR = 'USER_ERROR' # Input validation error - SYSTEM_ERROR = 'SYSTEM_ERROR' # Generation failed due to system issue -``` - -**State Machine Architecture:** - -``` -Run State Machine: -PENDING → PROCESSING → TERMINATED - ↓ - [termination_reason] - ├─ ALL_ITEMS_PROCESSED (success) - ├─ CANCELED_BY_USER - └─ CANCELED_BY_SYSTEM - -Item State Machine (per slide): -PENDING → PROCESSING → TERMINATED - ↓ - [termination_reason] - ├─ SUCCEEDED (normal) - ├─ USER_ERROR (bad input) - ├─ SYSTEM_ERROR (internal) - └─ SKIPPED (duplicate, etc) - -Artifact State Machine (per output file): -PENDING → PROCESSING → TERMINATED - ↓ - [termination_reason] - ├─ SUCCEEDED - ├─ USER_ERROR - └─ SYSTEM_ERROR -``` - -**New Output Models:** - -```python -class RunOutput(BaseModel): - """Run execution results summary.""" - state: RunState - termination_reason: RunTerminationReason | None - statistics: RunItemStatistics # NEW: Aggregate item counts - # ... other fields - -class ItemOutput(BaseModel): - """Individual item processing results.""" - state: ItemState - termination_reason: ItemTerminationReason | None - artifacts: list[ArtifactOutput] # List of output artifacts - # ... other fields - -class ArtifactOutput(BaseModel): - """Individual artifact details.""" - state: ArtifactState - termination_reason: ArtifactTerminationReason | None - download_url: str | None # Available when SUCCEEDED - # ... 
other fields - -class RunItemStatistics(BaseModel): - """NEW: Aggregate statistics for run.""" - total: int # Total items in run - succeeded: int # Successfully processed - user_error: int # Failed due to user errors - system_error: int # Failed due to system errors - skipped: int # Skipped items - pending: int # Not yet started - processing: int # Currently processing -``` - -**Model Migrations (Deleted Models):** - -**Deleted in v1.0.0-beta.7:** - -- ❌ `UserPayload` - Replaced with structured user/organization models -- ❌ `PayloadItem` - Replaced with `ItemOutput` -- ❌ `ApplicationVersionReadResponse` - Renamed to `ApplicationVersion` -- ❌ `InputArtifactReadResponse` - Simplified artifact handling -- ❌ `TransferUrls` - Merged into artifact models - -**New Models in v1.0.0-beta.7:** - -- ✅ `Auth0User` - Structured user information -- ✅ `Auth0Organization` - Structured organization information -- ✅ `ApplicationReadShortResponse` - Lightweight application summary -- ✅ `ApplicationVersion` - Complete version details with metadata -- ✅ `RunItemStatistics` - Aggregate item statistics -- ✅ `CustomMetadataUpdateRequest` - Metadata update payload - -**Usage Patterns:** - -**Checking Run Status:** - -```python -run = client.run("run-123") -details = run.details() - -# Check run state -if details.output.state == RunState.TERMINATED: - # Check how it terminated - if details.output.termination_reason == RunTerminationReason.ALL_ITEMS_PROCESSED: - print("Run completed successfully!") - print(f"Items succeeded: {details.output.statistics.succeeded}") - print(f"Items failed: {details.output.statistics.user_error + details.output.statistics.system_error}") - elif details.output.termination_reason == RunTerminationReason.CANCELED_BY_USER: - print("Run was canceled") -elif details.output.state == RunState.PROCESSING: - print(f"Run in progress: {details.output.statistics.processing} items processing") -``` - -**Checking Item Status:** - -```python -for item in run.results(): - if item.output.state == ItemState.TERMINATED: - if item.output.termination_reason == ItemTerminationReason.SUCCEEDED: - print(f"Item {item.item_id} succeeded") - # Access artifacts - for artifact in item.output.artifacts: - if artifact.state == ArtifactState.TERMINATED: - if artifact.termination_reason == ArtifactTerminationReason.SUCCEEDED: - print(f" - Artifact ready: {artifact.download_url}") - elif item.output.termination_reason == ItemTerminationReason.USER_ERROR: - print(f"Item {item.item_id} failed: user error") - elif item.output.termination_reason == ItemTerminationReason.SYSTEM_ERROR: - print(f"Item {item.item_id} failed: system error") -``` - -**Migration Guide (v1.0.0-beta.6 → v1.0.0-beta.7):** - -**Before (v1.0.0-beta.6):** - -```python -# Old status checking (hypothetical old API) -if run.status == "COMPLETED": - ... + # Returns generator of ApplicationRun instances ``` -**After (v1.0.0-beta.7):** - -```python -# New state + termination reason pattern -if run.output.state == RunState.TERMINATED: - if run.output.termination_reason == RunTerminationReason.ALL_ITEMS_PROCESSED: - ... -``` - -**Key Benefits of New State Models:** - -1. **Type Safety**: Enum-based states prevent typos and invalid states -2. **Clear Semantics**: Separate state and termination_reason clarifies "what" vs "why" -3. **Granular Error Handling**: Distinguish user errors from system errors -4. **Consistent Pattern**: Same state machine pattern across runs, items, and artifacts -5. 
**Better Observability**: RunItemStatistics provides aggregate view of run progress - -**Testing:** - -Updated test suite in `tests/aignostics/platform/e2e_test.py`: - -- State transitions -- Termination reason validation -- Statistics accuracy -- Error scenarios (user_error vs system_error) - ## Usage Patterns & Best Practices ### Basic Client Usage @@ -1257,71 +194,13 @@ print(f"User: {me.email}, Organization: {me.organization.name}") for app in client.applications.list(): print(f"App: {app.application_id}") -# Get application version -app_version = client.application_version( - application_id="heta", - version_number="1.0.0" # Omit for latest version -) -print(f"Application: {app_version.application_id}") -print(f"Version: {app_version.version_number}") - -# Get latest version -latest = client.application_version( - application_id="heta", - version_number=None -) - # Get specific run run = client.run("run-id-123") -# Access application info from run -print(f"Run application: {run.payload.application_id}") -print(f"Run version: {run.payload.version_number}") # List runs with custom page size runs = client.runs.list(page_size=50) # Max 100 for run in runs: - print(f"Run: {run.run_id}") -``` - -### SDK Metadata Usage - -```python -from aignostics.platform import Client -from aignostics.platform._sdk_metadata import ( - build_sdk_metadata, - validate_sdk_metadata, - get_sdk_metadata_json_schema -) - -# SDK metadata is AUTOMATICALLY attached to every run submission -client = Client() - -# Submit run - SDK metadata added automatically -run = client.runs.submit( - application_id="heta", - items=[...], - custom_metadata={ - "experiment_id": "exp-123", - "dataset_version": "v2.1", - # SDK metadata will be added under "sdk" key automatically - } -) - -# Access SDK metadata from run -sdk_metadata = run.payload.custom_metadata.get("sdk", {}) -print(f"Submitted via: {sdk_metadata['submission']['interface']}") # cli, script, or launchpad -print(f"Submitted by: {sdk_metadata['submission']['initiator']}") # user, test, or bridge -print(f"User: {sdk_metadata['user']['user_email']}") # if authenticated -if "ci" in sdk_metadata: - print(f"GitHub Run: {sdk_metadata['ci']['github']['run_url']}") # if in CI - -# Manually build and validate metadata (for testing or inspection) -metadata = build_sdk_metadata() -assert validate_sdk_metadata(metadata) - -# Get JSON Schema for documentation or external validation -schema = get_sdk_metadata_json_schema() -print(f"Schema version: {schema['$id']}") + print(f"Run: {run.application_run_id}") ``` ### Error Handling @@ -1388,9 +267,9 @@ def expired_token() -> str: ```python def test_runs_list_with_pagination(runs, mock_api): # Setup pages - page1 = [Mock(spec=RunReadResponse, run_id=f"run-{i}") + page1 = [Mock(spec=RunReadResponse, application_run_id=f"run-{i}") for i in range(PAGE_SIZE)] - page2 = [Mock(spec=RunReadResponse, run_id=f"run-{i + PAGE_SIZE}") + page2 = [Mock(spec=RunReadResponse, application_run_id=f"run-{i + PAGE_SIZE}") for i in range(5)] mock_api.list_application_runs_v1_runs_get.side_effect = [page1, page2] @@ -1398,7 +277,7 @@ def test_runs_list_with_pagination(runs, mock_api): # Test pagination result = list(runs.list()) assert len(result) == PAGE_SIZE + 5 - assert all(isinstance(run, Run) for run in result) + assert all(isinstance(run, ApplicationRun) for run in result) ``` ## Operational Requirements @@ -1484,23 +363,14 @@ all_apps = list(client.applications.list()) app_dict = {app.application_id: app for app in all_apps} # Now lookups are O(1) app 
= app_dict.get("app-id")
-
-# For version lookups, use direct API call
-version = client.application_version(
-    application_id="heta",
-    version_number="1.0.0"  # or None for latest
-)
-# Access version attributes
-print(f"App: {version.application_id}, Version: {version.version_number}")
 ```
 
 ## Module Dependencies
 
 ### Internal Dependencies
 
-- `utils` - Logging via `get_logger()`, user agent generation via `user_agent()`
-- `utils._constants` - Project metadata and environment detection
-- `constants` - API versioning (not directly used in main client)
+- `utils` - Logging via `get_logger()`
+- `constants` - Not directly used in main client
 
 ### External Dependencies
 
diff --git a/src/aignostics/platform/__init__.py b/src/aignostics/platform/__init__.py
index a58dcb7b..a8d19e9c 100644
--- a/src/aignostics/platform/__init__.py
+++ b/src/aignostics/platform/__init__.py
@@ -12,27 +12,20 @@
 from aignx.codegen.exceptions import ApiException, NotFoundException
 from aignx.codegen.models import ApplicationReadResponse as Application
-from aignx.codegen.models import ApplicationReadShortResponse as ApplicationSummary
-from aignx.codegen.models import InputArtifact as InputArtifactData
+from aignx.codegen.models import ApplicationRunStatus, ItemStatus
+from aignx.codegen.models import ApplicationVersionReadResponse as ApplicationVersion
 from aignx.codegen.models import InputArtifactCreationRequest as InputArtifact
+from aignx.codegen.models import InputArtifactReadResponse as InputArtifactData
 from aignx.codegen.models import ItemCreationRequest as InputItem
-from aignx.codegen.models import ItemOutput as ItemOutput
 from aignx.codegen.models import ItemResultReadResponse as ItemResult
-from aignx.codegen.models import ItemState as ItemState
-from aignx.codegen.models import ItemTerminationReason as ItemTerminationReason
 from aignx.codegen.models import MeReadResponse as Me
 from aignx.codegen.models import OrganizationReadResponse as Organization
-from aignx.codegen.models import OutputArtifact as OutputArtifactData
+from aignx.codegen.models import OutputArtifactReadResponse as OutputArtifactData
 from aignx.codegen.models import OutputArtifactResultReadResponse as OutputArtifactElement
-from aignx.codegen.models import RunItemStatistics as RunItemStatistics
-from aignx.codegen.models import RunOutput as RunOutput
-from aignx.codegen.models import RunReadResponse as RunData
-from aignx.codegen.models import RunState as RunState  # TODO(Helmut): Refactor
-from aignx.codegen.models import RunTerminationReason as RunTerminationReason
+from aignx.codegen.models import RunReadResponse as ApplicationRunData
 from aignx.codegen.models import UserReadResponse as User
-from aignx.codegen.models import VersionReadResponse as ApplicationVersion
 
-from ._cli import cli_sdk, cli_user
+from ._cli import cli
 from ._client import Client
 from ._constants import (
     API_ROOT_DEV,
@@ -70,7 +63,7 @@
     get_mime_type_for_artifact,
     mime_type_to_file_ending,
 )
-from .resources.runs import LIST_APPLICATION_RUNS_MAX_PAGE_SIZE, LIST_APPLICATION_RUNS_MIN_PAGE_SIZE, Run
+from .resources.runs import LIST_APPLICATION_RUNS_MAX_PAGE_SIZE, LIST_APPLICATION_RUNS_MIN_PAGE_SIZE, ApplicationRun
 
 __all__ = [
     "API_ROOT_DEV",
@@ -105,36 +98,28 @@
     "UNKNOWN_ENDPOINT_URL",
     "ApiException",
     "Application",
-    "ApplicationSummary",
+    "ApplicationRun",
+    "ApplicationRunData",
+    "ApplicationRunStatus",
     "ApplicationVersion",
     "Client",
     "InputArtifact",
     "InputArtifactData",
     "InputItem",
-    "ItemOutput",
     "ItemResult",
-    "ItemState",
-
"ItemTerminationReason", + "ItemStatus", "Me", "NotFoundException", "Organization", "OutputArtifactData", "OutputArtifactElement", - "Run", - "RunData", - "RunItemStatistics", - "RunOutput", - "RunState", - "RunState", - "RunTerminationReason", "Service", "Settings", "TokenInfo", "User", "UserInfo", "calculate_file_crc32c", - "cli_sdk", - "cli_user", + "cli", "download_file", "generate_signed_url", "get_mime_type_for_artifact", diff --git a/src/aignostics/platform/_authentication.py b/src/aignostics/platform/_authentication.py index d0eee41e..96ebb58d 100644 --- a/src/aignostics/platform/_authentication.py +++ b/src/aignostics/platform/_authentication.py @@ -35,17 +35,43 @@ logger = get_logger(__name__) -CALLBACK_PORT_RETRY_COUNT = 20 -CALLBACK_PORT_BACKOFF_DELAY = 1 +CALLBACK_PORT_RETRY_COUNT = 10 JWK_CLIENT_CACHE_SIZE = 4 # Multiple entries exist in the rare case of settings changing at runtime only - try: import sentry_sdk except ImportError: sentry_sdk = None # type: ignore[assignment] +@functools.lru_cache(maxsize=JWK_CLIENT_CACHE_SIZE) +def _get_jwk_client(url: str, timeout: int, lifespan: int) -> jwt.PyJWKClient: + """Returns a cached PyJWKClient instance for JWT verification. + + Creates a client lazily on first access for each unique combination of URL, timeout, + and lifespan, and reuses it for subsequent calls with the same parameters. The LRU cache + is thread-safe and ensures that only one client is created per unique parameter set. + + We intentionally have one cache entry per combination of url, timeout and lifespan, so that if any of these + settings change at runtime, we get a new client with the updated settings. This is useful for handling + different JWK sets for different environments or configurations, and not a cache invalidation gap. It's + considered safe if different threads briefly use different jwt clients while settings change. + + Args: + url: The JWS JSON URL to fetch the JWK set from. + timeout: The timeout in seconds for HTTP requests to fetch the JWK set. + lifespan: The lifespan in seconds for caching the JWK set. + + Returns: + jwt.PyJWKClient: The cached PyJWKClient instance for the given parameters. + + Raises: + PyJWKClientError: If the JWS endpoint did not return a JSON, nor key matches kid etc. + PyJWKClientConnectionError: If there are connection issues fetching the JWK set. + """ + return jwt.PyJWKClient(url, timeout=timeout, lifespan=lifespan) + + class AuthenticationResult(BaseModel): """Represents the result of an OAuth authentication flow.""" @@ -201,41 +227,13 @@ def verify_and_decode_token(token: str) -> dict[str, str]: retry=retry_if_exception( # Have to unpack wrapped exception lambda e: isinstance(e, RuntimeError) and isinstance(e.__cause__, jwt.PyJWKClientConnectionError) ), - stop=stop_after_attempt(settings().auth_retry_attempts), + stop=stop_after_attempt(settings().auth_retry_attempts_max), wait=wait_exponential_jitter(initial=settings().auth_retry_wait_min, max=settings().auth_retry_wait_max), before_sleep=before_sleep_log(logger, logging.WARNING), reraise=True, )(_do_verify_and_decode_token, token) # Retryer will pass down arguments -@functools.lru_cache(maxsize=JWK_CLIENT_CACHE_SIZE) -def _get_jwk_client(url: str, timeout: int, lifespan: int) -> jwt.PyJWKClient: - """Returns a cached PyJWKClient instance for JWT verification. - - Creates a client lazily on first access for each unique combination of URL, timeout, - and lifespan, and reuses it for subsequent calls with the same parameters. 
The LRU cache
-    is thread-safe and ensures that only one client is created per unique parameter set.
-
-    We intentionally have one cache entry per combination of url, timeout and lifespan, so that if any of these
-    settings change at runtime, we get a new client with the updated settings. This is useful for handling
-    different JWK sets for different environments or configurations, and not a cache invalidation gap. It's
-    considered safe if different threads briefly use different jwt clients while settings change.
-
-    Args:
-        url: The JWS JSON URL to fetch the JWK set from.
-        timeout: The timeout in seconds for HTTP requests to fetch the JWK set.
-        lifespan: The lifespan in seconds for caching the JWK set.
-
-    Returns:
-        jwt.PyJWKClient: The cached PyJWKClient instance for the given parameters.
-
-    Raises:
-        PyJWKClientError: If the JWS endpoint did not return a JSON, nor key matches kid etc.
-        PyJWKClientConnectionError: If there are connection issues fetching the JWK set.
-    """
-    return jwt.PyJWKClient(url, timeout=timeout, lifespan=lifespan)
-
-
 def _do_verify_and_decode_token(token: str) -> dict[str, str]:
     """
     Verifies and decodes the JWT token using the public key from JWS JSON URL.
@@ -298,7 +296,7 @@ def _can_open_browser() -> bool:
     return launch_browser
 
 
-def _perform_authorization_code_with_pkce_flow() -> str:  # noqa: C901
+def _perform_authorization_code_with_pkce_flow() -> str:
     """Performs the OAuth 2.0 Authorization Code flow with PKCE.
 
     Opens a browser for user authentication and uses a local redirect
@@ -377,33 +375,26 @@ def log_message(self, _format: str, *_args) -> None:  # type: ignore[no-untyped-
     host, port = parsed_redirect.hostname, parsed_redirect.port
     if not host or not port:
         raise RuntimeError(INVALID_REDIRECT_URI)
-
-    for retry_count in range(CALLBACK_PORT_RETRY_COUNT):
-        try:
-            with HTTPServer((host, port), OAuthCallbackHandler) as server:
-                # Enable socket reuse to prevent "Address already in use" errors
-                # This allows the socket to be reused immediately after the server closes,
-                # even if the previous connection is in TIME_WAIT state
-                server.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
-
-                webbrowser.open_new(authorization_url)
-                server.handle_request()
-                # If we get here, authentication was successful
-                break
-        except OSError as e:
-            if e.errno == errno.EADDRINUSE:
-                if retry_count < CALLBACK_PORT_RETRY_COUNT - 1:
-                    # Port is in use, wait with exponential backoff before retrying
-                    time.sleep(CALLBACK_PORT_BACKOFF_DELAY)
-                    continue
-                # Max retries reached
-                port_unavailable_msg = (
-                    f"Port {port} is already in use after {CALLBACK_PORT_RETRY_COUNT} retries. "
-                    "Please wait a moment and try again, or use device flow authentication."
-                )
-                raise RuntimeError(port_unavailable_msg) from e
-            # Different OS error, not related to port being in use
-            raise RuntimeError(AUTHENTICATION_FAILED) from e
+    # check if the callback port is available
+    port_unavailable_msg = f"Port {port} is already in use. Free the port, or use the device flow."
+ if not _ensure_local_port_is_available(port): + raise RuntimeError(port_unavailable_msg) + # start the server + try: + with HTTPServer((host, port), OAuthCallbackHandler) as server: + # Enable socket reuse to prevent "Address already in use" errors + # This allows the socket to be reused immediately after the server closes, + # even if the previous connection is in TIME_WAIT state + server.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) + + # Call Auth0 with challenge and redirect to localhost with code after successful authN + webbrowser.open_new(authorization_url) + # Extract authorization_code from redirected request, see: OAuthCallbackHandler + server.handle_request() + except OSError as e: + if e.errno == errno.EADDRINUSE: + raise RuntimeError(port_unavailable_msg) from e + raise RuntimeError(AUTHENTICATION_FAILED) from e if authentication_result.error or not authentication_result.token: raise RuntimeError(AUTHENTICATION_FAILED) @@ -523,7 +514,7 @@ def _access_token_from_refresh_token(refresh_token: SecretStr) -> str: """ return Retrying( # We are not using Tenacity annotations as settings can change at runtime retry=retry_if_exception(_is_not_client_or_key_error), - stop=stop_after_attempt(settings().auth_retry_attempts), + stop=stop_after_attempt(settings().auth_retry_attempts_max), wait=wait_exponential_jitter(initial=settings().auth_retry_wait_min, max=settings().auth_retry_wait_max), before_sleep=before_sleep_log(logger, logging.WARNING), reraise=True, diff --git a/src/aignostics/platform/_cli.py b/src/aignostics/platform/_cli.py index 1517aeb7..ff23c682 100644 --- a/src/aignostics/platform/_cli.py +++ b/src/aignostics/platform/_cli.py @@ -1,6 +1,5 @@ """CLI of platform module.""" -import json import sys from typing import Annotated @@ -8,12 +7,11 @@ from aignostics.utils import console, get_logger -from ._sdk_metadata import get_item_sdk_metadata_json_schema, get_run_sdk_metadata_json_schema from ._service import Service logger = get_logger(__name__) -cli_user = typer.Typer(name="user", help="User operations such as login, logout and whoami.") +cli = typer.Typer(name="user", help="User operations such as login, logout and whoami.") service: Service | None = None @@ -30,7 +28,7 @@ def _get_service() -> Service: return service -@cli_user.command("logout") +@cli.command("logout") def logout() -> None: """Logout if authenticated. @@ -50,7 +48,7 @@ def logout() -> None: sys.exit(1) -@cli_user.command("login") +@cli.command("login") def login( relogin: Annotated[bool, typer.Option(help="Re-login")] = False, ) -> None: @@ -69,7 +67,7 @@ def login( sys.exit(1) -@cli_user.command("whoami") +@cli.command("whoami") def whoami( mask_secrets: Annotated[bool, typer.Option(help="Mask secrets")] = True, relogin: Annotated[bool, typer.Option(help="Re-login")] = False, @@ -87,52 +85,3 @@ def whoami( console.print(message, style="error") sys.exit(1) sys.exit(1) - - -cli_sdk = typer.Typer(name="sdk", help="Platform operations such as dumping the SDK metadata schema.") - - -@cli_sdk.command("run-metadata-schema") -def run_sdk_metadata_schema( - pretty: Annotated[bool, typer.Option(help="Pretty print JSON output")] = True, -) -> None: - """Print the JSON Schema for Run SDK metadata. - - This schema defines the structure and validation rules for metadata - that the SDK attaches to application runs. Use this to understand - what fields are expected and their types. 
- """ - try: - schema = get_run_sdk_metadata_json_schema() - if pretty: - console.print_json(data=schema) - else: - print(json.dumps(schema)) - except Exception as e: - message = f"Error getting run SDK metadata schema: {e!s}" - logger.exception(message) - console.print(message, style="error") - sys.exit(1) - - -@cli_sdk.command("item-metadata-schema") -def item_sdk_metadata_schema( - pretty: Annotated[bool, typer.Option(help="Pretty print JSON output")] = True, -) -> None: - """Print the JSON Schema for Item SDK metadata. - - This schema defines the structure and validation rules for metadata - that the SDK attaches to individual items within application runs. - Use this to understand what fields are expected and their types. - """ - try: - schema = get_item_sdk_metadata_json_schema() - if pretty: - console.print_json(data=schema) - else: - print(json.dumps(schema)) - except Exception as e: - message = f"Error getting item SDK metadata schema: {e!s}" - logger.exception(message) - console.print(message, style="error") - sys.exit(1) diff --git a/src/aignostics/platform/_client.py b/src/aignostics/platform/_client.py index 7aca7a60..51b656a5 100644 --- a/src/aignostics/platform/_client.py +++ b/src/aignostics/platform/_client.py @@ -1,17 +1,18 @@ +import hashlib import logging import os +import time from collections.abc import Callable -from typing import ClassVar +from functools import wraps +from typing import Any, ClassVar from urllib.request import getproxies -import semver from aignx.codegen.api.public_api import PublicApi from aignx.codegen.api_client import ApiClient from aignx.codegen.configuration import AuthSettings, Configuration from aignx.codegen.exceptions import NotFoundException, ServiceException from aignx.codegen.models import ApplicationReadResponse as Application from aignx.codegen.models import MeReadResponse as Me -from aignx.codegen.models import VersionReadResponse as ApplicationVersion from tenacity import ( Retrying, before_sleep_log, @@ -23,9 +24,8 @@ from urllib3.exceptions import TimeoutError as Urllib3TimeoutError from aignostics.platform._authentication import get_token -from aignostics.platform._operation_cache import cached_operation from aignostics.platform.resources.applications import Applications, Versions -from aignostics.platform.resources.runs import Run, Runs +from aignostics.platform.resources.runs import ApplicationRun, Runs from aignostics.utils import get_logger, user_agent from ._settings import settings @@ -79,12 +79,13 @@ class Client: - Caches operation results for specific operations. """ + _operation_cache: ClassVar[dict[str, tuple[Any, float]]] = {} _api_client_cached: ClassVar[PublicApi | None] = None _api_client_uncached: ClassVar[PublicApi | None] = None applications: Applications - versions: Versions runs: Runs + versions: Versions def __init__(self, cache_token: bool = True) -> None: """Initializes a client instance with authenticated API access. @@ -100,155 +101,117 @@ def __init__(self, cache_token: bool = True) -> None: self._api = Client.get_api_client(cache_token=cache_token) self.applications: Applications = Applications(self._api) self.runs: Runs = Runs(self._api) - self.versions: Versions = Versions(self._api) logger.debug("Client initialized successfully.") except Exception: logger.exception("Failed to initialize client.") raise - def me(self, nocache: bool = False) -> Me: - """Retrieves info about the current user and their organisation. - - Retries on network and server errors. 
- - Note: - - We are not using urllib3s retry class as it does not support fine grained definition when to retry, - exponential backoff with jitter, logging before retry, and is difficult to configure. + @staticmethod + def _cache_key(token: str, method_name: str, *args: object, **kwargs: object) -> str: + """Generates a cache key based on the token, method name, and parameters. Args: - nocache (bool): If True, skip reading from cache and fetch fresh data from the API. - The fresh result will still be cached for subsequent calls. Defaults to False. + token (str): The authentication token. + method_name (str): The name of the method being cached. + *args: Positional arguments to the method. + **kwargs: Keyword arguments to the method. Returns: - Me: User and organization information. - - Raises: - aignx.codegen.exceptions.ApiException: If the API call fails. + str: A unique cache key. """ + token_hash = hashlib.sha256((token or "").encode()).hexdigest()[:16] + params = f"{args}:{sorted(kwargs.items())}" + return f"{token_hash}:{method_name}:{params}" - @cached_operation(ttl=settings().me_cache_ttl, use_token=True) - def me_with_retry() -> Me: - return Retrying( # We are not using Tenacity annotations as settings can change at runtime - retry=retry_if_exception_type(exception_types=RETRYABLE_EXCEPTIONS), - stop=stop_after_attempt(settings().me_retry_attempts), - wait=wait_exponential_jitter(initial=settings().me_retry_wait_min, max=settings().me_retry_wait_max), - before_sleep=before_sleep_log(logger, logging.WARNING), - reraise=True, - )( - lambda: self._api.get_me_v1_me_get( - _request_timeout=settings().me_timeout, _headers={"User-Agent": user_agent()} - ) - ) # Retryer will pass down arguments - - return me_with_retry(nocache=nocache) # type: ignore[call-arg] - - def application(self, application_id: str, nocache: bool = False) -> Application: - """Find application by id. - - Retries on network and server errors. + @staticmethod + def cached_operation(ttl: int) -> Callable[[Callable[..., object]], Callable[..., object]]: + """Caches the result of a method call for a specified time-to-live (TTL). Args: - application_id (str): The ID of the application. - nocache (bool): If True, skip reading from cache and fetch fresh data from the API. - The fresh result will still be cached for subsequent calls. Defaults to False. - - Raises: - NotFoundException: If the application with the given ID is not found. - aignx.codegen.exceptions.ApiException: If the API call fails. + ttl (int): Time-to-live for the cache in seconds. Returns: - Application: The application object. + Callable: A decorator that caches the method result. 
""" - @cached_operation(ttl=settings().application_cache_ttl, use_token=True) - def application_with_retry(application_id: str) -> Application: - return Retrying( - retry=retry_if_exception_type(exception_types=RETRYABLE_EXCEPTIONS), - stop=stop_after_attempt(settings().application_retry_attempts), - wait=wait_exponential_jitter( - initial=settings().application_retry_wait_min, max=settings().application_retry_wait_max - ), - before_sleep=before_sleep_log(logger, logging.WARNING), - reraise=True, - )( - lambda: self._api.read_application_by_id_v1_applications_application_id_get( - application_id=application_id, - _request_timeout=settings().application_timeout, - _headers={"User-Agent": user_agent()}, - ) - ) + def decorator(func: Callable[..., object]) -> Callable[..., object]: + @wraps(func) + def wrapper(self: "Client", *args: object, **kwargs: object) -> object: + token = get_token(True) + cache_key = Client._cache_key(token, func.__name__, *args, **kwargs) + + if cache_key in Client._operation_cache: + value, expiry = Client._operation_cache[cache_key] + if time.time() < expiry: + return value + del Client._operation_cache[cache_key] - return application_with_retry(application_id, nocache=nocache) # type: ignore[call-arg] + result = func(self, *args, **kwargs) + Client._operation_cache[cache_key] = (result, time.time() + ttl) + return result - def application_version( - self, application_id: str, version_number: str | None = None, nocache: bool = False - ) -> ApplicationVersion: - """Find application version by id. + return wrapper + + return decorator + + @cached_operation(ttl=60) + def me(self) -> Me: + """Retrieves info about the current user and their organisation. Retries on network and server errors. - Args: - application_id (str): The ID of the application. - version_number (str | None): The version number of the application. - If None, the latest version will be retrieved. - nocache (bool): If True, skip reading from cache and fetch fresh data from the API. - The fresh result will still be cached for subsequent calls. Defaults to False. + Note: + - We are not using urllib3s retry class as it does not support fine grained definition when to retry, + exponential backoff with jitter, logging before retry, and is difficult to configure. + + Returns: + Me: User and organization information. Raises: - NotFoundException: If the application with the given ID and version number is not found. - ValueError: If the version is not valid semver. aignx.codegen.exceptions.ApiException: If the API call fails. - - Returns: - ApplicationVersion: The application version object. """ - # Handle version resolution and validation first (not retried) - if version_number is None: - # Get the latest version - this call already has its own retry logic in Versions - version_tuple = Versions(self._api).latest(application=application_id) - if version_tuple is None: - message = f"No versions found for application '{application_id}'." - raise NotFoundException(message) - version_number = version_tuple.number - - # Validate semver format - if version_number and not semver.Version.is_valid(version_number): - message = f"Invalid version format: '{version_number}' not compliant with semantic versioning." 
- raise ValueError(message) - - # Make the API call with retry logic and caching - @cached_operation(ttl=settings().application_version_cache_ttl, use_token=True) - def application_version_with_retry(application_id: str, version: str) -> ApplicationVersion: - return Retrying( - retry=retry_if_exception_type(exception_types=RETRYABLE_EXCEPTIONS), - stop=stop_after_attempt(settings().application_version_retry_attempts), - wait=wait_exponential_jitter( - initial=settings().application_version_retry_wait_min, - max=settings().application_version_retry_wait_max, - ), - before_sleep=before_sleep_log(logger, logging.WARNING), - reraise=True, - )( - lambda: self._api.application_version_details_v1_applications_application_id_versions_version_get( - application_id=application_id, - version=version, - _request_timeout=settings().application_version_timeout, - _headers={"User-Agent": user_agent()}, - ) + return Retrying( # We are not using Tenacity annotations as settings can change at runtime + retry=retry_if_exception_type(exception_types=RETRYABLE_EXCEPTIONS), + stop=stop_after_attempt(settings().me_retry_attempts_max), + wait=wait_exponential_jitter(initial=settings().me_retry_wait_min, max=settings().me_retry_wait_max), + before_sleep=before_sleep_log(logger, logging.WARNING), + reraise=True, + )( + lambda: self._api.get_me_v1_me_get( + _request_timeout=settings().me_timeout, _headers={"User-Agent": user_agent()} ) + ) # Retryer will pass down arguments - return application_version_with_retry(application_id, version_number, nocache=nocache) # type: ignore[call-arg] - - def run(self, run_id: str) -> Run: - """Finds run by id. + def run(self, application_run_id: str) -> ApplicationRun: + """Finds a specific run by id. Args: - run_id (str): The ID of the application run. + application_run_id (str): The ID of the application run. Returns: Run: The run object. """ - return Run(self._api, run_id) + return ApplicationRun(self._api, application_run_id) + + # TODO(Andreas): Provide a /v1/applications/{application_id} endpoint and use that + def application(self, application_id: str) -> Application: + """Finds a specific application by id. + + Args: + application_id (str): The ID of the application. + + Raises: + NotFoundException: If the application with the given ID is not found. + + Returns: + Application: The application object. 
+ """ + applications = self.applications.list() + for application in applications: + if application.application_id == application_id: + return application + logger.warning("Application with ID '%s' not found.", application_id) + raise NotFoundException @staticmethod def get_api_client(cache_token: bool = True) -> PublicApi: diff --git a/src/aignostics/platform/_constants.py b/src/aignostics/platform/_constants.py index a4c1d3f6..b5ab422c 100644 --- a/src/aignostics/platform/_constants.py +++ b/src/aignostics/platform/_constants.py @@ -9,7 +9,7 @@ DEVICE_URL_PRODUCTION = "https://aignostics-platform.eu.auth0.com/oauth/device/code" JWS_JSON_URL_PRODUCTION = "https://aignostics-platform.eu.auth0.com/.well-known/jwks.json" -API_ROOT_STAGING = "https://platform-staging.aignostics.com" +API_ROOT_STAGING = "https://platform-staging.aignostics.ai" CLIENT_ID_INTERACTIVE_STAGING = "fQkbvYzQPPVwLxc3uque5JsyFW00rJ7b" # not a secret, but a public client ID AUDIENCE_STAGING = "https://aignostics-platform-staging-samia" AUTHORIZATION_BASE_URL_STAGING = "https://aignostics-platform-staging.eu.auth0.com/authorize" @@ -18,7 +18,7 @@ DEVICE_URL_STAGING = "https://aignostics-platform-staging.eu.auth0.com/oauth/device/code" JWS_JSON_URL_STAGING = "https://aignostics-platform-staging.eu.auth0.com/.well-known/jwks.json" -API_ROOT_DEV = "https://platform-dev.aignostics.ai" +API_ROOT_DEV = "https://platform-dev.aignostics.com" CLIENT_ID_INTERACTIVE_DEV = "gqduveFvx7LX90drQPGzr4JGUYdh24gA" # not a secret, but a public client ID AUDIENCE_DEV = "https://dev-8ouohmmrbuh2h4vu-samia" AUTHORIZATION_BASE_URL_DEV = "https://dev-8ouohmmrbuh2h4vu.eu.auth0.com/authorize" diff --git a/src/aignostics/platform/_operation_cache.py b/src/aignostics/platform/_operation_cache.py deleted file mode 100644 index 90b9cc8d..00000000 --- a/src/aignostics/platform/_operation_cache.py +++ /dev/null @@ -1,153 +0,0 @@ -"""Operation caching utilities for the Aignostics Platform client. - -This module provides caching functionality for API operations to reduce redundant calls -and improve performance. It includes cache management, key generation, and a decorator -for automatic caching of function results with configurable time-to-live (TTL). - -The caching mechanism: -- Caches operation results based on authentication tokens and function parameters -- Respects TTL (time-to-live) for cached values -- Automatically invalidates cache when tokens change -- Supports selective cache clearing by function -""" - -import hashlib -import time -import typing as t -from collections.abc import Callable -from typing import Any, ParamSpec, TypeVar - -from ._authentication import get_token - -# Cache storage for operation results -_operation_cache: dict[str, tuple[Any, float]] = {} - -# Type variables for the cached_operation decorator -P = ParamSpec("P") -T = TypeVar("T") - - -def operation_cache_clear(func: Callable[..., Any] | list[Callable[..., Any]] | None = None) -> int: - """Clears the operation cache, optionally filtering by function(s). - - Args: - func (Callable | list[Callable] | None): If provided, only clear cache entries - for the specified function(s). Can be: - - A callable (function/method) - - A list of callables - - None to clear all cache entries - - Returns: - int: Number of cache entries removed. 
- """ - removed_count = 0 - - if func is None: - # Remove all cache entries - removed_count = len(_operation_cache) - _operation_cache.clear() - else: - # Normalize input to a list of function qualified names - func_list = func if isinstance(func, list) else [func] - func_qualified_names = [f.__qualname__ for f in func_list] - - # Remove entries matching any of the function qualified names - keys_to_remove = [key for key in _operation_cache if any(name in key for name in func_qualified_names)] - - for key in keys_to_remove: - del _operation_cache[key] - removed_count += 1 - - return removed_count - - -def cache_key(func_qualified_name: str, *args: object, **kwargs: object) -> str: - """Generates a cache key based on the function name and parameters. - - Args: - func_qualified_name (str): The qualified name of the function being cached (e.g., 'ClassName.func1'). - *args: Positional arguments to the function. - **kwargs: Keyword arguments to the function. - - Returns: - str: A unique cache key. - """ - return f"{func_qualified_name}:{args}:{sorted(kwargs.items())}" - - -def cache_key_with_token(token: str, func_qualified_name: str, *args: object, **kwargs: object) -> str: - """Generates a cache key based on the token, function name, and parameters. - - Args: - token (str): The authentication token. - func_qualified_name (str): The qualified name of the function being cached (e.g., 'ClassName.func1'). - *args: Positional arguments to the function. - **kwargs: Keyword arguments to the function. - - Returns: - str: A unique cache key. - """ - token_hash = hashlib.sha256((token or "").encode()).hexdigest()[:16] - return f"{token_hash}:{func_qualified_name}:{args}:{sorted(kwargs.items())}" - - -def cached_operation( - ttl: int, *, use_token: bool = True, instance_attrs: tuple[str, ...] | None = None -) -> Callable[[Callable[P, T]], Callable[P, T]]: - """Caches the result of a function call for a specified time-to-live (TTL). - - Args: - ttl (int): Time-to-live for the cache in seconds. - use_token (bool): If True, includes the authentication token in the cache key. - This is useful for Client methods that should cache per-user. - When use_token is True and no instance_attrs are specified, the 'self' - argument is excluded from the cache key to enable cache sharing across instances. - instance_attrs (tuple[str, ...] | None): Instance attributes to include in the cache key. - This is useful for instance methods where caching should be per-instance based on - specific attributes (e.g., 'run_id' for Run.details()). - - Returns: - Callable: A decorator that caches the function result. - - Note: - The decorated function can accept a 'nocache' keyword argument (bool) to bypass - reading from the cache. When nocache=True, the function is executed directly - and the result is still cached for subsequent calls. - """ - - def decorator(func: Callable[P, T]) -> Callable[P, T]: - def wrapper(*args: P.args, **kwargs: P.kwargs) -> T: - # Check if nocache is requested and remove it from kwargs before passing to func - nocache = kwargs.pop("nocache", False) - - # Build cache key components - cache_args: tuple[object, ...] 
= args - - # Get qualified name (including class name if it's a method) - func_qualified_name = func.__qualname__ - - # If instance_attrs specified, extract them from self (args[0]) - if instance_attrs and args: - instance = args[0] - instance_values = tuple(getattr(instance, attr) for attr in instance_attrs) - cache_args = instance_values + args[1:] - - if use_token: - key = cache_key_with_token(get_token(True), func_qualified_name, *cache_args, **kwargs) - else: - key = cache_key(func_qualified_name, *cache_args, **kwargs) - - # If nocache=True, skip cache lookup but still cache the result - if not nocache and key in _operation_cache: - result, expiry = _operation_cache[key] - if time.time() < expiry: - return t.cast("T", result) - del _operation_cache[key] - - result = func(*args, **kwargs) - _operation_cache[key] = (result, time.time() + ttl) - return result - - return wrapper - - return decorator diff --git a/src/aignostics/platform/_sdk_metadata.py b/src/aignostics/platform/_sdk_metadata.py deleted file mode 100644 index 8e3a0be4..00000000 --- a/src/aignostics/platform/_sdk_metadata.py +++ /dev/null @@ -1,382 +0,0 @@ -"""SDK metadata generation for application runs. - -This module provides functionality to build structured metadata about the SDK execution context, -including user information, CI/CD environment details, and test execution context. -""" - -import os -import sys -from datetime import UTC, datetime -from typing import Any, Literal - -from pydantic import BaseModel, Field, ValidationError - -from aignostics.utils import get_logger, user_agent - -logger = get_logger(__name__) - -SDK_METADATA_SCHEMA_VERSION = "0.0.4" -ITEM_SDK_METADATA_SCHEMA_VERSION = "0.0.3" - - -class SubmissionMetadata(BaseModel): - """Metadata about how the SDK was invoked.""" - - date: str = Field(..., description="ISO 8601 timestamp of submission") - interface: Literal["script", "cli", "launchpad"] = Field( - ..., description="How the SDK was accessed (script, cli, launchpad)" - ) - initiator: Literal["user", "test", "bridge"] = Field( - ..., description="Who/what initiated the run (user, test, bridge)" - ) - - -class UserMetadata(BaseModel): - """User information metadata.""" - - organization_id: str = Field(..., description="User's organization ID") - organization_name: str = Field(..., description="User's organization name") - user_email: str = Field(..., description="User's email address") - user_id: str = Field(..., description="User's unique ID") - - -class GitHubCIMetadata(BaseModel): - """GitHub Actions CI metadata.""" - - action: str | None = Field(None, description="GitHub Action name") - job: str | None = Field(None, description="GitHub job name") - ref: str | None = Field(None, description="Git reference") - ref_name: str | None = Field(None, description="Git reference name") - ref_type: str | None = Field(None, description="Git reference type (branch, tag)") - repository: str = Field(..., description="Repository name (owner/repo)") - run_attempt: str | None = Field(None, description="Attempt number for this run") - run_id: str = Field(..., description="Unique ID for this workflow run") - run_number: str | None = Field(None, description="Run number for this workflow") - run_url: str = Field(..., description="URL to the workflow run") - runner_arch: str | None = Field(None, description="Runner architecture (x64, ARM64, etc.)") - runner_os: str | None = Field(None, description="Runner operating system") - sha: str | None = Field(None, description="Git commit SHA") - workflow: str | None = 
Field(None, description="Workflow name") - workflow_ref: str | None = Field(None, description="Reference to the workflow file") - - -class PytestCIMetadata(BaseModel): - """Pytest test execution metadata.""" - - current_test: str = Field(..., description="Current test being executed") - markers: list[str] | None = Field(None, description="Pytest markers applied to the test") - - -class CIMetadata(BaseModel): - """CI/CD environment metadata.""" - - github: GitHubCIMetadata | None = Field(None, description="GitHub Actions metadata") - pytest: PytestCIMetadata | None = Field(None, description="Pytest test metadata") - - -class WorkflowMetadata(BaseModel): - """Workflow control metadata.""" - - onboard_to_aignostics_portal: bool = Field( - default=False, description="Whether to onboard results to the Aignostics Portal" - ) - validate_only: bool = Field(default=False, description="Whether to only validate without running analysis") - - -class SchedulingMetadata(BaseModel): - """Scheduling metadata for run execution.""" - - due_date: str | None = Field( - None, - description="Requested completion time (ISO 8601). Scheduler will try to complete before this time.", - ) - deadline: str | None = Field( - None, description="Hard deadline (ISO 8601). Run may be aborted if processing exceeds this time." - ) - - -class RunSdkMetadata(BaseModel): - """Complete Run SDK metadata schema. - - This model defines the structure and validation rules for SDK metadata - that is attached to application runs. It includes information about: - - SDK version and timestamps - - User information (when available) - - CI/CD environment context (GitHub Actions, pytest) - - Workflow control flags - - Scheduling information - - Optional user note - """ - - schema_version: str = Field( - ..., description="Schema version for this metadata format", pattern=r"^\d+\.\d+\.\d+-?.*$" - ) - - created_at: str = Field(..., description="ISO 8601 timestamp when the metadata was first created") - updated_at: str = Field(..., description="ISO 8601 timestamp when the metadata was last updated") - tags: set[str] | None = Field(None, description="Optional list of tags associated with the run") - submission: SubmissionMetadata = Field(..., description="Submission context metadata") - user_agent: str = Field(..., description="User agent string for the SDK client") - user: UserMetadata | None = Field(None, description="User information (when authenticated)") - ci: CIMetadata | None = Field(None, description="CI/CD environment metadata") - note: str | None = Field(None, description="Optional user note for the run") - workflow: WorkflowMetadata | None = Field(None, description="Workflow control flags") - scheduling: SchedulingMetadata | None = Field(None, description="Scheduling information") - - model_config = {"extra": "forbid"} # Reject unknown fields - - -class PlatformBucketMetadata(BaseModel): - """Platform bucket storage metadata for items.""" - - bucket_name: str = Field(..., description="Name of the cloud storage bucket") - object_key: str = Field(..., description="Object key/path within the bucket") - signed_download_url: str = Field(..., description="Signed URL for downloading the object") - - -class ItemSdkMetadata(BaseModel): - """Complete Item SDK metadata schema. - - This model defines the structure and validation rules for SDK metadata - that is attached to individual items within application runs. It includes - information about where the item is stored in the platform's cloud storage. 
- """ - - schema_version: str = Field( - ..., description="Schema version for this metadata format", pattern=r"^\d+\.\d+\.\d+-?.*$" - ) - - created_at: str = Field(..., description="ISO 8601 timestamp when the metadata was first created") - updated_at: str = Field(..., description="ISO 8601 timestamp when the metadata was last updated") - tags: set[str] | None = Field(None, description="Optional list of tags associated with the item") - platform_bucket: PlatformBucketMetadata | None = Field(None, description="Platform bucket storage information") - - model_config = {"extra": "forbid"} # Reject unknown fields - - -def build_run_sdk_metadata(existing_metadata: dict[str, Any] | None = None) -> dict[str, Any]: # noqa: PLR0914 - """Build SDK metadata to attach to runs. - - Includes user agent, user information, GitHub CI/CD context when running in GitHub Actions, - and test context when running in pytest. - - Args: - existing_metadata (dict[str, Any] | None): Existing SDK metadata to preserve created_at and submission.date. - - Returns: - dict[str, Any]: Dictionary containing SDK metadata including user agent, - user information, and optionally CI information (GitHub workflow and pytest test context). - """ - from aignostics.platform._client import Client # noqa: PLC0415 - - submission_initiator = "user" # who/what initiated the run (user, test, bridge) - submission_interface = "script" # how the SDK was accessed (script, cli, launchpad) - - if os.environ.get("AIGNOSTICS_BRIDGE_VERSION"): - submission_initiator = "bridge" - elif os.environ.get("PYTEST_CURRENT_TEST"): - submission_initiator = "test" - - if "typer" in sys.argv[0] or "aignostics" in sys.argv[0]: - submission_interface = "cli" - elif os.getenv("NICEGUI_HOST"): - submission_interface = "launchpad" - - now = datetime.now(UTC).isoformat(timespec="seconds") - existing_sdk = existing_metadata or {} - - # Preserve created_at if it exists, otherwise use current time - created_at = existing_sdk.get("created_at", now) - - # Preserve submission.date if it exists, otherwise use current time - existing_submission = existing_sdk.get("submission", {}) - submission_date = existing_submission.get("date", now) - - metadata: dict[str, Any] = { - "schema_version": SDK_METADATA_SCHEMA_VERSION, - "created_at": created_at, - "updated_at": now, - "submission": { - "date": submission_date, - "interface": submission_interface, - "initiator": submission_initiator, - }, - "user_agent": user_agent(), - } - - try: - me = Client().me() - metadata["user"] = { - "organization_id": me.organization.id, - "organization_name": me.organization.name, - "user_email": me.user.email, - "user_id": me.user.id, - } - except Exception: # noqa: BLE001 - logger.warning("Failed to fetch user information for SDK metadata") - - ci_metadata: dict[str, Any] = {} - - github_run_id = os.environ.get("GITHUB_RUN_ID") - if github_run_id: - github_server_url = os.environ.get("GITHUB_SERVER_URL", "https://github.com") - github_repository = os.environ.get("GITHUB_REPOSITORY", "") - - ci_metadata["github"] = { - "action": os.environ.get("GITHUB_ACTION"), - "job": os.environ.get("GITHUB_JOB"), - "ref": os.environ.get("GITHUB_REF"), - "ref_name": os.environ.get("GITHUB_REF_NAME"), - "ref_type": os.environ.get("GITHUB_REF_TYPE"), - "repository": github_repository, - "run_attempt": os.environ.get("GITHUB_RUN_ATTEMPT"), - "run_id": github_run_id, - "run_number": os.environ.get("GITHUB_RUN_NUMBER"), - "run_url": f"{github_server_url}/{github_repository}/actions/runs/{github_run_id}", - 
"runner_arch": os.environ.get("RUNNER_ARCH"), - "runner_os": os.environ.get("RUNNER_OS"), - "sha": os.environ.get("GITHUB_SHA"), - "workflow": os.environ.get("GITHUB_WORKFLOW"), - "workflow_ref": os.environ.get("GITHUB_WORKFLOW_REF"), - } - - pytest_current_test = os.environ.get("PYTEST_CURRENT_TEST") - if pytest_current_test: - pytest_metadata: dict[str, Any] = { - "current_test": pytest_current_test, - } - - pytest_markers = os.environ.get("PYTEST_MARKERS") - if pytest_markers: - pytest_metadata["markers"] = pytest_markers.split(",") - - ci_metadata["pytest"] = pytest_metadata - - if ci_metadata: - metadata["ci"] = ci_metadata - - return metadata - - -def validate_run_sdk_metadata(metadata: dict[str, Any]) -> bool: - """Validate the Run SDK metadata structure against the schema. - - Args: - metadata (dict[str, Any]): The Run SDK metadata to validate. - - Returns: - bool: True if the metadata is valid, False otherwise. - - Raises: - ValidationError: If the metadata does not conform to the schema. - """ - try: - RunSdkMetadata.model_validate(metadata) - return True - except ValidationError: - logger.exception("SDK metadata validation failed") - raise - - -def get_run_sdk_metadata_json_schema() -> dict[str, Any]: - """Get the JSON Schema for Run SDK metadata. - - Returns: - dict[str, Any]: JSON Schema definition for Run SDK metadata with $schema and $id fields. - """ - schema = RunSdkMetadata.model_json_schema() - schema["$schema"] = "https://json-schema.org/draft/2020-12/schema" - schema["$id"] = ( - f"https://raw.githubusercontent.com/aignostics/python-sdk/main/docs/source/_static/sdk_metadata_schema_v{SDK_METADATA_SCHEMA_VERSION}.json" - ) - return schema - - -def validate_run_sdk_metadata_silent(metadata: dict[str, Any]) -> bool: - """Validate Run SDK metadata without raising exceptions. - - Args: - metadata (dict[str, Any]): The Run SDK metadata to validate. - - Returns: - bool: True if valid, False if invalid. - """ - try: - RunSdkMetadata.model_validate(metadata) - return True - except ValidationError: - return False - - -def build_item_sdk_metadata(existing_metadata: dict[str, Any] | None = None) -> dict[str, Any]: - """Build SDK metadata to attach to individual items. - - Args: - existing_metadata (dict[str, Any] | None): Existing SDK metadata to preserve created_at. - - Returns: - dict[str, Any]: Dictionary containing item SDK metadata including platform bucket information. - """ - now = datetime.now(UTC).isoformat(timespec="seconds") - existing_sdk = existing_metadata or {} - - # Preserve created_at if it exists, otherwise use current time - created_at = existing_sdk.get("created_at", now) - - metadata: dict[str, Any] = { - "schema_version": ITEM_SDK_METADATA_SCHEMA_VERSION, - "created_at": created_at, - "updated_at": now, - } - - return metadata - - -def validate_item_sdk_metadata(metadata: dict[str, Any]) -> bool: - """Validate the Item SDK metadata structure against the schema. - - Args: - metadata (dict[str, Any]): The Item SDK metadata to validate. - - Returns: - bool: True if the metadata is valid, False otherwise. - - Raises: - ValidationError: If the metadata does not conform to the schema. - """ - try: - ItemSdkMetadata.model_validate(metadata) - return True - except ValidationError: - logger.exception("Item SDK metadata validation failed") - raise - - -def get_item_sdk_metadata_json_schema() -> dict[str, Any]: - """Get the JSON Schema for Item SDK metadata. - - Returns: - dict[str, Any]: JSON Schema definition for Item SDK metadata with $schema and $id fields. 
- """ - schema = ItemSdkMetadata.model_json_schema() - schema["$schema"] = "https://json-schema.org/draft/2020-12/schema" - schema["$id"] = ( - f"https://raw.githubusercontent.com/aignostics/python-sdk/main/docs/source/_static/item_sdk_metadata_schema_v{ITEM_SDK_METADATA_SCHEMA_VERSION}.json" - ) - return schema - - -def validate_item_sdk_metadata_silent(metadata: dict[str, Any]) -> bool: - """Validate Item SDK metadata without raising exceptions. - - Args: - metadata (dict[str, Any]): The Item SDK metadata to validate. - - Returns: - bool: True if valid, False if invalid. - """ - try: - ItemSdkMetadata.model_validate(metadata) - return True - except ValidationError: - return False diff --git a/src/aignostics/platform/_service.py b/src/aignostics/platform/_service.py index 269382a7..c090759b 100644 --- a/src/aignostics/platform/_service.py +++ b/src/aignostics/platform/_service.py @@ -11,7 +11,7 @@ from aignx.codegen.models import UserReadResponse as User from pydantic import BaseModel, computed_field -from aignostics.utils import BaseService, Health, get_logger, user_agent +from aignostics.utils import UNHIDE_SENSITIVE_INFO, BaseService, Health, get_logger, user_agent from ._authentication import get_token, remove_cached_token, verify_and_decode_token from ._client import Client @@ -170,13 +170,9 @@ def info(self, mask_secrets: bool = True) -> dict[str, Any]: Returns: dict[str,Any]: The info of this service. """ - user_info = None - try: - user_info = self.get_user_info() - except RuntimeError: - message = "Failed to retrieve user info for system info." - logger.warning(message) + user_info = self.get_user_info(relogin=mask_secrets) return { + "settings": self._settings.model_dump(context={UNHIDE_SENSITIVE_INFO: not mask_secrets}), "userinfo": (user_info.model_dump_secrets_masked() if mask_secrets else user_info.model_dump(mode="json")) if user_info else None, diff --git a/src/aignostics/platform/_settings.py b/src/aignostics/platform/_settings.py index e36ab1d9..172a6edc 100644 --- a/src/aignostics/platform/_settings.py +++ b/src/aignostics/platform/_settings.py @@ -53,28 +53,6 @@ T = TypeVar("T", bound=BaseSettings) -TIMEOUT_MIN_DEFAULT = 0.1 # seconds -TIMEOUT_MAX_DEFAULT = 300.0 # seconds -TIMEOUT_DEFAULT = 30.0 # seconds - -RETRY_ATTEMPTS_MIN_DEFAULT = 0 -RETRY_ATTEMPTS_MAX_DEFAULT = 10 -RETRY_ATTEMPTS_DEFAULT = 4 - -RETRY_WAIT_MIN_MIN_DEFAULT = 0.0 # seconds -RETRY_WAIT_MIN_MAX_DEFAULT = 600.0 # seconds -RETRY_WAIT_MIN_DEFAULT = 0.1 # seconds - -RETRY_WAIT_MAX_MIN_DEFAULT = 0.0 # seconds -RETRY_WAIT_MAX_MAX_DEFAULT = 600.0 # seconds -RETRY_WAIT_MAX_DEFAULT = 60.0 # seconds - -CACHE_TTL_MIN_DEFAULT = 0 # seconds -CACHE_TTL_MAX_DEFAULT = 60 * 60 * 24 * 7 # 1 week -CACHE_TTL_DEFAULT = 60 * 5 # 5 minutes -AUTH_JWK_SET_CACHE_TTL_DEFAULT = 60 * 60 * 24 # 1 day -RUN_CACHE_TTL_DEFAULT = 15 # 15 seconds - def _validate_url(value: str) -> str: """Validate that a string is a valid URL. @@ -121,34 +99,14 @@ class Settings(OpaqueSettings): health_timeout (float): Timeout for health checks in seconds. auth_jwk_set_cache_ttl (int): Time-to-live for JWK set cache in seconds. auth_timeout (float): Authentication request timeout in seconds. - auth_retry_attempts (int): Number of retry attempts for authentication requests. + auth_retry_attempts_max (int): Maximum number of retry attempts for authentication requests. auth_retry_wait_min (float): Minimum wait time between authentication request retries in seconds. 
auth_retry_wait_max (float): Maximum wait time between authentication request retries in seconds. me_timeout (float): Timeout for "me" requests in seconds. - me_retry_attempts (int): Number of retry attempts for "me" requests. + me_retry_attempts_max (int): Maximum number of retry attempts for "me" requests. me_retry_wait_min (float): Minimum wait time between "me" request retries in seconds. me_retry_wait_max (float): Maximum wait time between "me" request retries in seconds. me_cache_ttl (int): Time-to-live for "me" cache in seconds. - application_timeout (float): Timeout for application requests in seconds. - application_retry_attempts (int): Number of retry attempts for application requests. - application_retry_wait_min (float): Minimum wait time between application request retries in seconds. - application_retry_wait_max (float): Maximum wait time between application request retries in seconds. - application_cache_ttl (int): Time-to-live for application cache in seconds. - application_version_timeout (float): Timeout for application version requests in seconds. - application_version_retry_attempts (int): Number of retry attempts for application version requests. - application_version_retry_wait_min (float): Minimum wait time between application version request - retries in seconds. - application_version_retry_wait_max (float): Maximum wait time between application version request - retries in seconds. - application_version_cache_ttl (int): Time-to-live for application version cache in seconds. - run_timeout (float): Timeout for run requests in seconds. - run_retry_attempts (int): Number of retry attempts for run requests. - run_retry_wait_min (float): Minimum wait time between run request retries in seconds. - run_retry_wait_max (float): Maximum wait time between run request retries in seconds. - run_cache_ttl (int): Time-to-live for run cache in seconds. - run_cancel_timeout (float): Timeout for run cancel requests in seconds. - run_delete_timeout (float): Timeout for run delete requests in seconds. - run_submit_timeout (float): Timeout for run submit requests in seconds. scope (str): OAuth scopes required by the SDK. scope_elements (list[str]): OAuth scopes split into individual elements. token_file (Path): Path to the token storage file. 
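For orientation between the two settings hunks: the change below replaces the named `*_DEFAULT` bound constants with inline literal bounds on each `Annotated` field. A minimal, self-contained pydantic v2 sketch of the same pattern, including the wait-min/wait-max invariant enforced by a validator later in this file (illustrative field names, not the SDK's actual `Settings` class):

```python
# Hedged sketch only: mirrors the Annotated/Field bound pattern and the
# wait-min <= wait-max invariant from this diff; names are illustrative.
from typing import Annotated

from pydantic import BaseModel, Field, model_validator


class RetrySettingsSketch(BaseModel):
    auth_timeout: Annotated[float, Field(ge=0.1, le=300.0)] = 30.0
    auth_retry_attempts_max: Annotated[int, Field(ge=0, le=10)] = 4
    auth_retry_wait_min: Annotated[float, Field(ge=0.0, le=600.0)] = 0.1
    auth_retry_wait_max: Annotated[float, Field(ge=0.0, le=600.0)] = 60.0

    @model_validator(mode="after")
    def check_wait_bounds(self) -> "RetrySettingsSketch":
        # Raise only on strict violation; equal min and max is allowed.
        if self.auth_retry_wait_min > self.auth_retry_wait_max:
            msg = "auth_retry_wait_min must be less than or equal to auth_retry_wait_max"
            raise ValueError(msg)
        return self


# RetrySettingsSketch(auth_retry_wait_min=5.0, auth_retry_wait_max=1.0) raises ValidationError
```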
@@ -283,243 +241,93 @@ def serialize_token_file(self, token_file: Path, _info: FieldSerializationInfo) float, Field( description="Timeout for health checks", - ge=TIMEOUT_MIN_DEFAULT, - le=TIMEOUT_MAX_DEFAULT, + ge=0.1, + le=300.0, ), - ] = TIMEOUT_DEFAULT + ] = 30.0 auth_jwk_set_cache_ttl: Annotated[ int, Field( description="Time-to-live for JWK set cache (in seconds)", - ge=CACHE_TTL_MIN_DEFAULT, - le=CACHE_TTL_MAX_DEFAULT, + ge=0, + le=604800, ), - ] = AUTH_JWK_SET_CACHE_TTL_DEFAULT + ] = 60 * 60 * 24 auth_timeout: Annotated[ float, Field( description="Timeout for authentication requests", - ge=TIMEOUT_MIN_DEFAULT, - le=TIMEOUT_MAX_DEFAULT, + ge=0.1, + le=300.0, ), - ] = TIMEOUT_DEFAULT - auth_retry_attempts: Annotated[ + ] = 30.0 + auth_retry_attempts_max: Annotated[ int, Field( - description="Number of retry attempts for authentication requests", - ge=RETRY_ATTEMPTS_MIN_DEFAULT, - le=RETRY_ATTEMPTS_MAX_DEFAULT, + description="Maximum number of retry attempts for authentication requests", + ge=0, + le=10, ), - ] = RETRY_ATTEMPTS_DEFAULT + ] = 4 auth_retry_wait_min: Annotated[ float, Field( description="Minimum wait time between retry attempts (in seconds)", - ge=RETRY_WAIT_MIN_MIN_DEFAULT, - le=RETRY_WAIT_MIN_MAX_DEFAULT, + ge=0.0, + le=600.0, ), - ] = RETRY_WAIT_MIN_DEFAULT + ] = 0.1 auth_retry_wait_max: Annotated[ float, Field( description="Maximum wait time between retry attempts (in seconds)", - ge=RETRY_WAIT_MAX_MIN_DEFAULT, - le=RETRY_WAIT_MAX_MAX_DEFAULT, + ge=0.0, + le=600.0, ), - ] = RETRY_WAIT_MAX_DEFAULT + ] = 60.0 me_timeout: Annotated[ float, Field( description="Timeout for me requests", - ge=TIMEOUT_MIN_DEFAULT, - le=TIMEOUT_MAX_DEFAULT, + ge=0.1, + le=300.0, ), - ] = TIMEOUT_DEFAULT - me_retry_attempts: Annotated[ + ] = 30.0 + me_retry_attempts_max: Annotated[ int, Field( - description="Number of retry attempts for me requests", - ge=RETRY_ATTEMPTS_MIN_DEFAULT, - le=RETRY_ATTEMPTS_MAX_DEFAULT, + description="Maximum number of retry attempts for me requests", + ge=0, + le=10, ), - ] = RETRY_ATTEMPTS_DEFAULT + ] = 4 me_retry_wait_min: Annotated[ float, Field( description="Minimum wait time between retry attempts (in seconds)", - ge=RETRY_WAIT_MIN_MIN_DEFAULT, - le=RETRY_WAIT_MIN_MAX_DEFAULT, + ge=0.0, + le=600.0, ), - ] = RETRY_WAIT_MIN_DEFAULT + ] = 0.1 me_retry_wait_max: Annotated[ float, Field( description="Maximum wait time between retry attempts (in seconds)", - ge=RETRY_WAIT_MAX_MIN_DEFAULT, - le=RETRY_WAIT_MAX_MAX_DEFAULT, + ge=0.0, + le=600.0, ), - ] = RETRY_WAIT_MAX_DEFAULT + ] = 60.0 me_cache_ttl: Annotated[ int, Field( description="Time-to-live for me cache (in seconds)", - ge=CACHE_TTL_MIN_DEFAULT, - le=CACHE_TTL_MAX_DEFAULT, - ), - ] = CACHE_TTL_DEFAULT - - application_timeout: Annotated[ - float, - Field( - description="Timeout for application requests", - ge=TIMEOUT_MIN_DEFAULT, - le=TIMEOUT_MAX_DEFAULT, - ), - ] = TIMEOUT_DEFAULT - application_retry_attempts: Annotated[ - int, - Field( - description="Number of retry attempts for application requests", - ge=RETRY_ATTEMPTS_MIN_DEFAULT, - le=RETRY_ATTEMPTS_MAX_DEFAULT, - ), - ] = RETRY_ATTEMPTS_DEFAULT - application_retry_wait_min: Annotated[ - float, - Field( - description="Minimum wait time between retry attempts (in seconds)", - ge=RETRY_WAIT_MIN_MIN_DEFAULT, - le=RETRY_WAIT_MIN_MAX_DEFAULT, - ), - ] = RETRY_WAIT_MIN_DEFAULT - application_retry_wait_max: Annotated[ - float, - Field( - description="Maximum wait time between retry attempts (in seconds)", - ge=RETRY_WAIT_MAX_MIN_DEFAULT, - 
le=RETRY_WAIT_MAX_MAX_DEFAULT,
-        ),
-    ] = RETRY_WAIT_MAX_DEFAULT
-    application_cache_ttl: Annotated[
-        int,
-        Field(
-            description="Time-to-live for application cache (in seconds)",
-            ge=CACHE_TTL_MIN_DEFAULT,
-            le=CACHE_TTL_MAX_DEFAULT,
-        ),
-    ] = CACHE_TTL_DEFAULT
-
-    application_version_timeout: Annotated[
-        float,
-        Field(
-            description="Timeout for application version requests",
-            ge=TIMEOUT_MIN_DEFAULT,
-            le=TIMEOUT_MAX_DEFAULT,
-        ),
-    ] = TIMEOUT_DEFAULT
-    application_version_retry_attempts: Annotated[
-        int,
-        Field(
-            description="Number of retry attempts for application version requests",
-            ge=RETRY_ATTEMPTS_MIN_DEFAULT,
-            le=RETRY_ATTEMPTS_MAX_DEFAULT,
-        ),
-    ] = RETRY_ATTEMPTS_DEFAULT
-    application_version_retry_wait_min: Annotated[
-        float,
-        Field(
-            description="Minimum wait time between retry attempts (in seconds)",
-            ge=RETRY_WAIT_MIN_MIN_DEFAULT,
-            le=RETRY_WAIT_MIN_MAX_DEFAULT,
-        ),
-    ] = RETRY_WAIT_MIN_DEFAULT
-    application_version_retry_wait_max: Annotated[
-        float,
-        Field(
-            description="Maximum wait time between retry attempts (in seconds)",
-            ge=RETRY_WAIT_MAX_MIN_DEFAULT,
-            le=RETRY_WAIT_MAX_MAX_DEFAULT,
-        ),
-    ] = RETRY_WAIT_MAX_DEFAULT
-    application_version_cache_ttl: Annotated[
-        int,
-        Field(
-            description="Time-to-live for application version cache (in seconds)",
-            ge=CACHE_TTL_MIN_DEFAULT,
-            le=CACHE_TTL_MAX_DEFAULT,
-        ),
-    ] = CACHE_TTL_DEFAULT
-
-    run_timeout: Annotated[
-        float,
-        Field(
-            description="Timeout for run requests",
-            ge=TIMEOUT_MIN_DEFAULT,
-            le=TIMEOUT_MAX_DEFAULT,
-        ),
-    ] = TIMEOUT_DEFAULT
-    run_retry_attempts: Annotated[
-        int,
-        Field(
-            description="Number of retry attempts for run requests",
-            ge=RETRY_ATTEMPTS_MIN_DEFAULT,
-            le=RETRY_ATTEMPTS_MAX_DEFAULT,
-        ),
-    ] = RETRY_ATTEMPTS_DEFAULT
-    run_retry_wait_min: Annotated[
-        float,
-        Field(
-            description="Minimum wait time between retry attempts (in seconds)",
-            ge=RETRY_WAIT_MIN_MIN_DEFAULT,
-            le=RETRY_WAIT_MIN_MAX_DEFAULT,
-        ),
-    ] = RETRY_WAIT_MIN_DEFAULT
-    run_retry_wait_max: Annotated[
-        float,
-        Field(
-            description="Maximum wait time between retry attempts (in seconds)",
-            ge=RETRY_WAIT_MAX_MIN_DEFAULT,
-            le=RETRY_WAIT_MAX_MAX_DEFAULT,
-        ),
-    ] = RETRY_WAIT_MAX_DEFAULT
-    run_cache_ttl: Annotated[
-        int,
-        Field(
-            description="Time-to-live for run cache (in seconds)",
-            ge=CACHE_TTL_MIN_DEFAULT,
-            le=CACHE_TTL_MAX_DEFAULT,
-        ),
-    ] = RUN_CACHE_TTL_DEFAULT
-
-    run_cancel_timeout: Annotated[
-        float,
-        Field(
-            description="Timeout for run cancel requests",
-            ge=TIMEOUT_MIN_DEFAULT,
-            le=TIMEOUT_MAX_DEFAULT,
-        ),
-    ] = TIMEOUT_DEFAULT
-
-    run_delete_timeout: Annotated[
-        float,
-        Field(
-            description="Timeout for run delete requests",
-            ge=TIMEOUT_MIN_DEFAULT,
-            le=TIMEOUT_MAX_DEFAULT,
-        ),
-    ] = TIMEOUT_DEFAULT
-
-    run_submit_timeout: Annotated[
-        float,
-        Field(
-            description="Timeout for run submit requests",
-            ge=TIMEOUT_MIN_DEFAULT,
-            le=TIMEOUT_MAX_DEFAULT,
+            ge=0,
+            le=3600,
         ),
-    ] = TIMEOUT_DEFAULT
+    ] = 60

     @model_validator(mode="before")
     def pre_init(cls, values: dict) -> dict:  # type: ignore[type-arg] # noqa: N805
@@ -589,13 +397,14 @@ def pre_init(cls, values: dict) -> dict:  # type: ignore[type-arg] # noqa: N805

     @model_validator(mode="after")
     def validate_retry_wait_times(self) -> "Settings":
-        """Validate that retry wait min is less or equal than retry wait max for all operations.
+        """Validate that retry wait min is less than or equal to retry wait max for both auth and me requests.

         Returns:
             Settings: The validated settings instance.
        Raises:
-            ValueError: If any operation's retry_wait_min is greater than retry_wait_max.
+            ValueError: If auth_retry_wait_min is greater than auth_retry_wait_max,
+                or if me_retry_wait_min is greater than me_retry_wait_max.
         """
         if self.auth_retry_wait_min > self.auth_retry_wait_max:
             msg = (
@@ -609,25 +418,6 @@ def validate_retry_wait_times(self) -> "Settings":
                 f"me_retry_wait_max ({self.me_retry_wait_max})"
             )
             raise ValueError(msg)
-        if self.application_retry_wait_min > self.application_retry_wait_max:
-            msg = (
-                f"application_retry_wait_min ({self.application_retry_wait_min}) must be less or equal than "
-                f"application_retry_wait_max ({self.application_retry_wait_max})"
-            )
-            raise ValueError(msg)
-        if self.application_version_retry_wait_min > self.application_version_retry_wait_max:
-            msg = (
-                f"application_version_retry_wait_min ({self.application_version_retry_wait_min}) "
-                f"must be less or equal than application_version_retry_wait_max "
-                f"({self.application_version_retry_wait_max})"
-            )
-            raise ValueError(msg)
-        if self.run_retry_wait_min > self.run_retry_wait_max:
-            msg = (
-                f"run_retry_wait_min ({self.run_retry_wait_min}) must be less or equal than "
-                f"run_retry_wait_max ({self.run_retry_wait_max})"
-            )
-            raise ValueError(msg)
         return self
diff --git a/src/aignostics/platform/_utils.py b/src/aignostics/platform/_utils.py
index 1b3ae238..7fdcf950 100644
--- a/src/aignostics/platform/_utils.py
+++ b/src/aignostics/platform/_utils.py
@@ -20,48 +20,15 @@
 import google_crc32c
 import requests
-from aignx.codegen.models import InputArtifact as InputArtifactData
-from aignx.codegen.models import OutputArtifact as OutputArtifactData
+from aignx.codegen.models import InputArtifactReadResponse as InputArtifactData
+from aignx.codegen.models import OutputArtifactReadResponse as OutputArtifactData
 from aignx.codegen.models import OutputArtifactResultReadResponse as OutputArtifactElement
 from tqdm.auto import tqdm

-from aignostics.utils import get_logger
-
-logger = get_logger(__name__)
-
 EIGHT_MB = 8_388_608

 SIGNED_DOWNLOAD_URL_EXPIRES_SECONDS_DEFAULT = 6 * 60 * 60  # 6 hours


-def convert_to_json_serializable(obj: object) -> object:
-    """Recursively convert non-JSON-serializable types to serializable equivalents.
-
-    Handles common Python types that are not directly JSON-serializable:
-    - set → sorted list (for consistency and deterministic output)
-    - Recursively processes nested dict, list, and tuple structures
-
-    Args:
-        obj: The object to convert.
-
-    Returns:
-        The converted object with all non-serializable types replaced.
-
-    Examples:
-        >>> convert_to_json_serializable({"tags": {"a", "c", "b"}})
-        {"tags": ["a", "b", "c"]}
-
-        >>> convert_to_json_serializable({"nested": {"items": {1, 2, 3}}})
-        {"nested": {"items": [1, 2, 3]}}
-    """
-    if isinstance(obj, set):
-        return sorted(obj)  # Convert set to sorted list for consistency
-    if isinstance(obj, dict):
-        return {key: convert_to_json_serializable(value) for key, value in obj.items()}
-    if isinstance(obj, (list, tuple)):
-        return [convert_to_json_serializable(item) for item in obj]
-    return obj
-
-
 def mime_type_to_file_ending(mime_type: str) -> str:
     """Converts a MIME type to an appropriate file extension.
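The next hunk touches `calculate_file_crc32c`. As a hedged, standalone sketch of the pattern (mirroring only calls visible in this diff, not actual SDK code): stream a file through google_crc32c in 8 MiB chunks, then compare the base64-encoded digest against an expected checksum, as the download-resume logic later in this diff does:

```python
# Hedged sketch: chunked CRC32C verification as used in this diff.
import base64
from pathlib import Path

import google_crc32c

EIGHT_MB = 8_388_608


def file_matches_crc32c(file: Path, expected_b64: str) -> bool:
    checksum = google_crc32c.Checksum()
    with open(file, mode="rb") as f:
        # consume() yields the chunks it reads; the checksum accumulates
        # internally as a side effect, so the loop body can stay empty.
        for _ in checksum.consume(f, EIGHT_MB):
            pass
    return base64.b64encode(checksum.digest()).decode("ascii") == expected_b64
```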
@@ -182,9 +149,8 @@ def calculate_file_crc32c(file: Path) -> str:
     """
     checksum = google_crc32c.Checksum()  # type: ignore[no-untyped-call]
     with open(file, mode="rb") as f:
-        # Iterate through file chunks - checksum is calculated as side effect of consume()
         for _ in checksum.consume(f, EIGHT_MB):  # type: ignore[no-untyped-call]
-            continue  # Consume all chunks; checksum accumulates internally
+            pass
     return base64.b64encode(checksum.digest()).decode("ascii")  # type: ignore[no-untyped-call]
diff --git a/src/aignostics/platform/resources/applications.py b/src/aignostics/platform/resources/applications.py
index e9704323..10566dc0 100644
--- a/src/aignostics/platform/resources/applications.py
+++ b/src/aignostics/platform/resources/applications.py
@@ -5,42 +5,16 @@
 """
 import builtins
-import logging
+import re
 import typing as t
 from operator import itemgetter

 import semver
 from aignx.codegen.api.public_api import PublicApi
-from aignx.codegen.exceptions import NotFoundException, ServiceException
 from aignx.codegen.models import ApplicationReadResponse as Application
-from aignx.codegen.models import ApplicationReadShortResponse as ApplicationSummary
-from aignx.codegen.models import ApplicationVersion as VersionTuple
-from aignx.codegen.models import VersionReadResponse as ApplicationVersion
-from tenacity import (
-    Retrying,
-    before_sleep_log,
-    retry_if_exception_type,
-    stop_after_attempt,
-    wait_exponential_jitter,
-)
-from urllib3.exceptions import IncompleteRead, PoolError, ProtocolError, ProxyError
-from urllib3.exceptions import TimeoutError as Urllib3TimeoutError
-
-from aignostics.platform._operation_cache import cached_operation
-from aignostics.platform._settings import settings
-from aignostics.platform.resources.utils import paginate
-from aignostics.utils import get_logger, user_agent
-
-logger = get_logger(__name__)
+from aignx.codegen.models import ApplicationVersionReadResponse as ApplicationVersion

-RETRYABLE_EXCEPTIONS = (
-    ServiceException,
-    Urllib3TimeoutError,
-    PoolError,
-    IncompleteRead,
-    ProtocolError,
-    ProxyError,
-)
+from aignostics.platform.resources.utils import paginate


 class Versions:
@@ -49,6 +23,8 @@ class Versions:
     Provides operations to list and retrieve application versions.
     """

+    APPLICATION_VERSION_REGEX = re.compile(r"^(?P<application_id>[^:]+):v(?P<version>[^:].+)$")
+
     def __init__(self, api: PublicApi) -> None:
         """Initializes the Versions resource with the API platform.

@@ -57,117 +33,73 @@ def __init__(self, api: PublicApi) -> None:
         """
         self._api = api

-    def list(self, application: Application | str, nocache: bool = False) -> list[VersionTuple]:
+    def list(self, application: Application | str) -> t.Iterator[ApplicationVersion]:
         """Find all versions for a specific application.

-        Retries on network and server errors.
-
         Args:
             application (Application | str): The application to find versions for, either object or id
-            nocache (bool): If True, skip reading from cache and fetch fresh data from the API.
-                The fresh result will still be cached for subsequent calls. Defaults to False.

         Returns:
-            list[VersionTuple]: List of the available application versions.
+            Iterator[ApplicationVersion]: An iterator over the available application versions.

         Raises:
-            aignx.codegen.exceptions.ApiException: If the API request fails.
+            Exception: If the API request fails.
""" application_id = application.application_id if isinstance(application, Application) else application - @cached_operation(ttl=settings().application_cache_ttl, use_token=True) - def list_with_retry(app_id: str) -> Application: - return Retrying( - retry=retry_if_exception_type(exception_types=RETRYABLE_EXCEPTIONS), - stop=stop_after_attempt(settings().application_retry_attempts), - wait=wait_exponential_jitter( - initial=settings().application_retry_wait_min, max=settings().application_retry_wait_max - ), - before_sleep=before_sleep_log(logger, logging.WARNING), - reraise=True, - )( - lambda: self._api.read_application_by_id_v1_applications_application_id_get( - application_id=app_id, - _request_timeout=settings().application_timeout, - _headers={"User-Agent": user_agent()}, - ) - ) - - app = list_with_retry(application_id, nocache=nocache) # type: ignore[call-arg] - return app.versions if app.versions is not None else [] - - def details( - self, application_id: str, application_version: VersionTuple | str | None = None, nocache: bool = False - ) -> ApplicationVersion: - """Retrieves details for a specific application version. + return paginate( + self._api.list_versions_by_application_id_v1_applications_application_id_versions_get, + application_id=application_id, + ) - Retries on network and server errors. + def details(self, application_version: ApplicationVersion | str) -> ApplicationVersion: + """Retrieves details for a specific application version. Args: - application_id (str): The ID of the application. - application_version (VersionTuple | str | None): The version of the application. - If None, the latest version will be retrieved. - nocache (bool): If True, skip reading from cache and fetch fresh data from the API. - The fresh result will still be cached for subsequent calls. Defaults to False. + application_version (ApplicationVersion | str): The ID of the application version. Returns: ApplicationVersion: The version details. Raises: - ValueError: If the version is not valid semver. - NotFoundException: If the application or version is not found. - aignx.codegen.exceptions.ApiException: If the API request fails. + RuntimeError: If the application version ID is invalid or if the API request fails. + Exception: If the API request fails. """ - # Handle version resolution and validation first (not retried) - if application_version is None: - application_version = self.latest(application=application_id) - if application_version is None: - message = f"No versions found for application '{application_id}'." - raise NotFoundException(message) - application_version = application_version.number - elif isinstance(application_version, VersionTuple): - application_version = application_version.number - elif application_version and not semver.Version.is_valid(application_version): - message = f"Invalid version format: '{application_version}' not compliant with semantic versioning." 
-            raise ValueError(message)
-
-        # Make the API call with retry logic and caching
-        @cached_operation(ttl=settings().application_version_cache_ttl, use_token=True)
-        def details_with_retry(app_id: str, app_version: str) -> ApplicationVersion:
-            return Retrying(
-                retry=retry_if_exception_type(exception_types=RETRYABLE_EXCEPTIONS),
-                stop=stop_after_attempt(settings().application_version_retry_attempts),
-                wait=wait_exponential_jitter(
-                    initial=settings().application_version_retry_wait_min,
-                    max=settings().application_version_retry_wait_max,
-                ),
-                before_sleep=before_sleep_log(logger, logging.WARNING),
-                reraise=True,
-            )(
-                lambda: self._api.application_version_details_v1_applications_application_id_versions_version_get(
-                    application_id=app_id,
-                    version=app_version,
-                    _request_timeout=settings().application_version_timeout,
-                    _headers={"User-Agent": user_agent()},
-                )
-            )
-
-        return details_with_retry(application_id, application_version, nocache=nocache)  # type: ignore[call-arg]
-
-    # TODO(Helmut): Refactor given new API capabilities
-    def list_sorted(self, application: Application | str, nocache: bool = False) -> builtins.list[VersionTuple]:
+        if isinstance(application_version, ApplicationVersion):
+            application_id = application_version.application_id
+            version = application_version.version
+        else:
+            # Parse and validate the application version ID
+            match = self.APPLICATION_VERSION_REGEX.match(application_version)
+            if not match:
+                msg = f"Invalid application_version_id: {application_version}"
+                raise RuntimeError(msg)
+
+            application_id = match.group("application_id")
+            version = match.group("version")
+
+        application_versions = self._api.list_versions_by_application_id_v1_applications_application_id_versions_get(
+            application_id=application_id,
+            version=version,
+        )
+        if len(application_versions) != 1:
+            # this invariant is enforced by the system. If this error occurs, we have an internal error
+            msg = "Internal server error. Please contact Aignostics support."
+            raise RuntimeError(msg)
+        return application_versions[0]
+
+    # TODO(Andreas): Remove when supported in backend
+    def list_sorted(self, application: Application | str) -> builtins.list[ApplicationVersion]:
         """Get application versions sorted by semver, descending.

         Args:
             application (Application | str): The application to find versions for, either object or id
-            nocache (bool): If True, skip reading from cache and fetch fresh data from the API.
-                The fresh result will still be cached for subsequent calls. Defaults to False.
Returns: - list[VersionTuple]: List of version objects sorted by semantic versioning (latest first), + list[ApplicationVersion]: List of version objects sorted by semantic versioning (latest first), or empty list if no versions are found """ - versions = builtins.list(self.list(application=application, nocache=nocache)) + versions = builtins.list(self.list(application=application)) # If no versions available if not versions: @@ -177,7 +109,7 @@ def list_sorted(self, application: Application | str, nocache: bool = False) -> versions_with_semver = [] for v in versions: try: - parsed_version = semver.Version.parse(v.number) + parsed_version = semver.Version.parse(v.version) versions_with_semver.append((v, parsed_version)) except (ValueError, AttributeError): # If we can't parse the version or version attribute doesn't exist, skip it @@ -192,18 +124,16 @@ def list_sorted(self, application: Application | str, nocache: bool = False) -> # If we couldn't parse any versions, return all versions as is return versions - def latest(self, application: Application | str, nocache: bool = False) -> VersionTuple | None: + def latest(self, application: Application | str) -> ApplicationVersion | None: """Get latest version. Args: application (Application | str): The application to find versions for, either object or id - nocache (bool): If True, skip reading from cache and fetch fresh data from the API. - The fresh result will still be cached for subsequent calls. Defaults to False. Returns: - VersionTuple | None: The latest version, or None if no versions found. + ApplicationVersion | None: The latest version id, or None if no versions found. """ - sorted_versions = self.list_sorted(application=application, nocache=nocache) + sorted_versions = self.list_sorted(application=application) return sorted_versions[0] if sorted_versions else None @@ -222,81 +152,13 @@ def __init__(self, api: PublicApi) -> None: self._api = api self.versions: Versions = Versions(self._api) - def details(self, application_id: str, nocache: bool = False) -> Application: - """Find application by id. - - Retries on network and server errors. - - Args: - application_id (str): The ID of the application. - nocache (bool): If True, skip reading from cache and fetch fresh data from the API. - The fresh result will still be cached for subsequent calls. Defaults to False. - - Returns: - Application: The application object - - Raises: - NotFoundException: If the application with the given ID is not found. - aignx.codegen.exceptions.ApiException: If the API call fails. - """ - - @cached_operation(ttl=settings().application_cache_ttl, use_token=True) - def details_with_retry(application_id: str) -> Application: - return Retrying( - retry=retry_if_exception_type(exception_types=RETRYABLE_EXCEPTIONS), - stop=stop_after_attempt(settings().application_retry_attempts), - wait=wait_exponential_jitter( - initial=settings().application_retry_wait_min, max=settings().application_retry_wait_max - ), - before_sleep=before_sleep_log(logger, logging.WARNING), - reraise=True, - )( - lambda: self._api.read_application_by_id_v1_applications_application_id_get( - application_id=application_id, - _request_timeout=settings().application_timeout, - _headers={"User-Agent": user_agent()}, - ) - ) - - return details_with_retry(application_id, nocache=nocache) # type: ignore[call-arg] - - def list(self, nocache: bool = False) -> t.Iterator[ApplicationSummary]: + def list(self) -> t.Iterator[Application]: """Find all available applications. 
-        Retries on network and server errors for each page.
-
         Returns:
-            Iterator[ApplicationSummary]: An iterator over the available applications.
-            notcache (bool): If True, skip reading from cache and fetch fresh data from the API.
-                The fresh result will still be cached for subsequent calls. Defaults to False.
+            Iterator[Application]: An iterator over the available applications.

         Raises:
-            aignx.codegen.exceptions.ApiException: If the API request fails.
+            Exception: If the API request fails.
         """
-
-        # Create a wrapper function that applies retry logic and caching to each API call
-        # Caching at this level ensures having a fresh iterator on cache hits
-        @cached_operation(ttl=settings().application_cache_ttl, use_token=True)
-        def list_with_retry(**kwargs: object) -> list[ApplicationSummary]:
-            return Retrying(
-                retry=retry_if_exception_type(exception_types=RETRYABLE_EXCEPTIONS),
-                stop=stop_after_attempt(settings().application_retry_attempts),
-                wait=wait_exponential_jitter(
-                    initial=settings().application_retry_wait_min, max=settings().application_retry_wait_max
-                ),
-                before_sleep=before_sleep_log(logger, logging.WARNING),
-                reraise=True,
-            )(
-                lambda: self._api.list_applications_v1_applications_get(
-                    _request_timeout=settings().application_timeout,
-                    _headers={"User-Agent": user_agent()},
-                    **kwargs,  # pyright: ignore[reportArgumentType]
-                )
-            )
-
-        return paginate(
-            lambda **kwargs: list_with_retry(
-                nocache=nocache,
-                **kwargs,
-            )
-        )
+        return paginate(self._api.list_applications_v1_applications_get)
diff --git a/src/aignostics/platform/resources/runs.py b/src/aignostics/platform/resources/runs.py
index b814d35c..c9afeaa2 100644
--- a/src/aignostics/platform/resources/runs.py
+++ b/src/aignostics/platform/resources/runs.py
@@ -4,165 +4,116 @@
 It includes functionality for starting runs, monitoring status, and downloading results.
""" -import builtins -import logging -import time import typing as t -from collections.abc import Iterator +from collections.abc import Generator from pathlib import Path from time import sleep -from typing import Any, cast +from typing import Any from aignx.codegen.api.public_api import PublicApi -from aignx.codegen.exceptions import ServiceException from aignx.codegen.models import ( - CustomMetadataUpdateRequest, + ApplicationRunStatus, ItemCreationRequest, - ItemOutput, ItemResultReadResponse, - ItemState, + ItemStatus, RunCreationRequest, RunCreationResponse, - RunState, ) from aignx.codegen.models import ( ItemResultReadResponse as ItemResultData, ) from aignx.codegen.models import ( - RunReadResponse as RunData, -) -from aignx.codegen.models import ( - VersionReadResponse as ApplicationVersion, + RunReadResponse as ApplicationRunData, ) from jsonschema.exceptions import ValidationError from jsonschema.validators import validate -from tenacity import ( - Retrying, - before_sleep_log, - retry_if_exception_type, - stop_after_attempt, - wait_exponential_jitter, -) -from urllib3.exceptions import IncompleteRead, PoolError, ProtocolError, ProxyError -from urllib3.exceptions import TimeoutError as Urllib3TimeoutError - -from aignostics.platform._operation_cache import cached_operation, operation_cache_clear -from aignostics.platform._sdk_metadata import ( - build_item_sdk_metadata, - build_run_sdk_metadata, - validate_item_sdk_metadata, - validate_run_sdk_metadata, -) -from aignostics.platform._settings import settings + from aignostics.platform._utils import ( calculate_file_crc32c, - convert_to_json_serializable, download_file, get_mime_type_for_artifact, mime_type_to_file_ending, ) from aignostics.platform.resources.applications import Versions from aignostics.platform.resources.utils import paginate -from aignostics.utils import get_logger, user_agent - -logger = get_logger(__name__) - -RETRYABLE_EXCEPTIONS = ( - ServiceException, # TODO(Helmut): Do we want this down the road? - Urllib3TimeoutError, - PoolError, - IncompleteRead, - ProtocolError, - ProxyError, -) +from aignostics.utils import user_agent LIST_APPLICATION_RUNS_MAX_PAGE_SIZE = 100 LIST_APPLICATION_RUNS_MIN_PAGE_SIZE = 5 -class DownloadTimeoutError(RuntimeError): - """Exception raised when the download operation exceeds its timeout.""" - - -class Run: +class ApplicationRun: """Represents a single application run. Provides operations to check status, retrieve results, and download artifacts. """ - def __init__(self, api: PublicApi, run_id: str) -> None: - """Initializes an Run instance. + def __init__(self, api: PublicApi, application_run_id: str) -> None: + """Initializes an ApplicationRun instance. Args: api (PublicApi): The configured API client. - run_id (str): The ID of the application run. + application_run_id (str): The ID of the application run. """ self._api = api - self.run_id = run_id + self.application_run_id = application_run_id @classmethod - def for_run_id(cls, run_id: str, cache_token: bool = True) -> "Run": - """Creates an Run instance for an existing run. + def for_application_run_id(cls, application_run_id: str) -> "ApplicationRun": + """Creates an ApplicationRun instance for an existing run. Args: - run_id (str): The ID of the application run. - cache_token (bool): Whether to cache the API token. + application_run_id (str): The ID of the application run. Returns: - Run: The initialized Run instance. + ApplicationRun: The initialized ApplicationRun instance. 
""" from aignostics.platform._client import Client # noqa: PLC0415 - return cls(Client.get_api_client(cache_token=cache_token), run_id) + return cls(Client.get_api_client(cache_token=False), application_run_id) - def details(self, nocache: bool = False) -> RunData: + # TODO(Andreas): Deprecated, please remove when you updated your integration code + def status(self) -> ApplicationRunData: """Retrieves the current status of the application run. - Retries on network and server errors. + Returns: + ApplicationRunData: The run data. - Args: - nocache (bool): If True, skip reading from cache and fetch fresh data from the API. - The fresh result will still be cached for subsequent calls. Defaults to False. + Raises: + Exception: If the API request fails. + """ + return self.details() + + def details(self) -> ApplicationRunData: + """Retrieves the current status of the application run. Returns: - RunData: The run data. + ApplicationRunData: The run data. Raises: Exception: If the API request fails. """ + return self._api.get_run_v1_runs_application_run_id_get(self.application_run_id) - @cached_operation(ttl=settings().run_cache_ttl, use_token=True) - def details_with_retry(run_id: str) -> RunData: - return Retrying( - retry=retry_if_exception_type(exception_types=RETRYABLE_EXCEPTIONS), - stop=stop_after_attempt(settings().run_retry_attempts), - wait=wait_exponential_jitter(initial=settings().run_retry_wait_min, max=settings().run_retry_wait_max), - before_sleep=before_sleep_log(logger, logging.WARNING), - reraise=True, - )( - lambda: self._api.get_run_v1_runs_run_id_get( - run_id, - _request_timeout=settings().run_timeout, - _headers={"User-Agent": user_agent()}, - ) - ) + def item_status(self) -> dict[str, ItemStatus]: + """Retrieves the status of all items in the run. - return details_with_retry(self.run_id, nocache=nocache) # type: ignore[call-arg] + Returns: + dict[str, ItemStatus]: A dictionary mapping item references to their status. + + Raises: + Exception: If the API request fails. + """ + return {item.reference: item.status for item in self.results()} - # TODO(Andreas): Low Prio / existed prior to API migration: Please check if this still fails with - # Internal Server Error if run was already canceled, should rather fail with 400 bad request in that state. + # TODO(Andreas): Fails with Internal Server Error if run canceled; don't throw generic exceptions def cancel(self) -> None: """Cancels the application run. Raises: Exception: If the API request fails. """ - self._api.cancel_run_v1_runs_run_id_cancel_post( - self.run_id, - _request_timeout=settings().run_cancel_timeout, - _headers={"User-Agent": user_agent()}, - ) - operation_cache_clear() # Clear all caches since we added a new run + self._api.cancel_application_run_v1_runs_application_run_id_cancel_post(self.application_run_id) def delete(self) -> None: """Delete the application run. @@ -170,57 +121,24 @@ def delete(self) -> None: Raises: Exception: If the API request fails. """ - self._api.delete_run_items_v1_runs_run_id_artifacts_delete( - self.run_id, - _request_timeout=settings().run_delete_timeout, - _headers={"User-Agent": user_agent()}, - ) - operation_cache_clear() # Clear all caches since we added a new run + self._api.delete_application_run_results_v1_runs_application_run_id_results_delete(self.application_run_id) - def results(self, nocache: bool = False) -> t.Iterator[ItemResultData]: + def results(self) -> t.Iterator[ItemResultData]: """Retrieves the results of all items in the run. 
- Retries on network and server errors. - - Args: - nocache (bool): If True, skip reading from cache and fetch fresh data from the API. - The fresh result will still be cached for subsequent calls. Defaults to False. - Returns: list[ItemResultData]: A list of item results. Raises: Exception: If the API request fails. """ + return paginate( + self._api.list_run_results_v1_runs_application_run_id_results_get, + application_run_id=self.application_run_id, + ) - # Create a wrapper function that applies retry logic and caching to each API call - # Caching at this level ensures having a fresh iterator on cache hits - @cached_operation(ttl=settings().run_cache_ttl, use_token=True) - def results_with_retry(run_id: str, **kwargs: object) -> list[ItemResultData]: - return Retrying( - retry=retry_if_exception_type(exception_types=RETRYABLE_EXCEPTIONS), - stop=stop_after_attempt(settings().run_retry_attempts), - wait=wait_exponential_jitter(initial=settings().run_retry_wait_min, max=settings().run_retry_wait_max), - before_sleep=before_sleep_log(logger, logging.WARNING), - reraise=True, - )( - lambda: self._api.list_run_items_v1_runs_run_id_items_get( - run_id=run_id, - _request_timeout=settings().run_timeout, - _headers={"User-Agent": user_agent()}, - **kwargs, # pyright: ignore[reportArgumentType] - ) - ) - - return paginate(lambda **kwargs: results_with_retry(self.run_id, nocache=nocache, **kwargs)) - - def download_to_folder( # noqa: C901 - self, - download_base: Path | str, - checksum_attribute_key: str = "checksum_base64_crc32c", - sleep_interval: int = 5, - timeout_seconds: int | None = None, - print_status: bool = True, + def download_to_folder( + self, download_base: Path | str, checksum_attribute_key: str = "checksum_base64_crc32c" ) -> None: """Downloads all result artifacts to a folder. @@ -229,73 +147,39 @@ def download_to_folder( # noqa: C901 Args: download_base (Path | str): Base directory to download results to. checksum_attribute_key (str): The key used to validate the checksum of the output artifacts. - sleep_interval (int): Time in seconds to wait between checks for new results. - timeout_seconds (int | None): Optional timeout in seconds for the entire download operation. - print_status (bool): If True, prints status updates to the console, otherwise just logs. Raises: ValueError: If the provided path is not a directory. - DownloadTimeoutError: If the timeout is exceeded while waiting for the run to terminate. - RuntimeError: If downloads or API requests fail. + Exception: If downloads or API requests fail. """ - try: - # create application run base folder - download_base = Path(download_base) - if not download_base.is_dir(): - msg = f"{download_base} is not a directory" - raise ValueError(msg) # noqa: TRY301 - application_run_dir = Path(download_base) / self.run_id - - # track timeout if specified - start_time = time.time() if timeout_seconds is not None else None - - # incrementally check for available results - application_run_state = self.details(nocache=True).state # no cache to get fresh results - while application_run_state in {RunState.PROCESSING, RunState.PENDING}: - # check timeout - if start_time is not None and timeout_seconds is not None: - elapsed = time.time() - start_time - if elapsed >= timeout_seconds: - msg = ( - f"Timeout of {timeout_seconds} seconds exceeded while waiting for run {self.run_id} " - f"to terminate. 
Run state: {application_run_state.value}" - ) - raise DownloadTimeoutError(msg) # noqa: TRY301 - for item in self.results(nocache=True): - if item.state == ItemState.TERMINATED and item.output == ItemOutput.FULL: - self.ensure_artifacts_downloaded(application_run_dir, item, checksum_attribute_key) - sleep(sleep_interval) - application_run_state = self.details(nocache=True).state - logger.debug("Continuing to wait for run %s, current state: %r", self.run_id, self) - print(self) if print_status else None - - # check if last results have been downloaded yet and report on errors - for item in self.results(nocache=True): - match item.output: - case ItemOutput.FULL: - self.ensure_artifacts_downloaded(application_run_dir, item, checksum_attribute_key) - case ItemOutput.NONE: - message = ( - f"{item.external_id} failed with `{item.state.value}`.\n" - f"Termination reason `{item.termination_reason}`, " - f"error_code:`{item.error_code}`, message `{item.error_message}`." - ) - logger.error(message) - print(message) if print_status else None - except (ValueError, DownloadTimeoutError): - # Re-raise ValueError and DownloadTimeoutError as-is - raise - except Exception as e: - # Wrap all other exceptions in RuntimeError - msg = f"Download operation failed for run {self.run_id}: {e}" - raise RuntimeError(msg) from e + # create application run base folder + download_base = Path(download_base) + if not download_base.is_dir(): + msg = f"{download_base} is not a directory" + raise ValueError(msg) + application_run_dir = Path(download_base) / self.application_run_id + + # incrementally check for available results + application_run_status = self.details().status + while application_run_status == ApplicationRunStatus.RUNNING: + for item in self.results(): + if item.status == ItemStatus.SUCCEEDED: + self.ensure_artifacts_downloaded(application_run_dir, item, checksum_attribute_key) + sleep(5) + application_run_status = self.details().status + print(self) + + # check if last results have been downloaded yet and report on errors + for item in self.results(): + match item.status: + case ItemStatus.SUCCEEDED: + self.ensure_artifacts_downloaded(application_run_dir, item, checksum_attribute_key) + case ItemStatus.ERROR_SYSTEM | ItemStatus.ERROR_USER: + print(f"{item.reference} failed with {item.status.value}: {item.error}") @staticmethod def ensure_artifacts_downloaded( - base_folder: Path, - item: ItemResultReadResponse, - checksum_attribute_key: str = "checksum_base64_crc32c", - print_status: bool = True, + base_folder: Path, item: ItemResultReadResponse, checksum_attribute_key: str = "checksum_base64_crc32c" ) -> None: """Ensures all artifacts for an item are downloaded. @@ -305,13 +189,12 @@ def ensure_artifacts_downloaded( base_folder (Path): Base directory to download artifacts to. item (ItemResultReadResponse): The item result containing the artifacts to download. checksum_attribute_key (str): The key used to validate the checksum of the output artifacts. - print_status (bool): If True, prints status updates to the console, otherwise just logs. Raises: ValueError: If checksums don't match. Exception: If downloads fail. 
""" - item_dir = base_folder / item.external_id + item_dir = base_folder / item.reference downloaded_at_least_one_artifact = False for artifact in item.output_artifacts: @@ -319,100 +202,25 @@ def ensure_artifacts_downloaded( item_dir.mkdir(exist_ok=True, parents=True) file_ending = mime_type_to_file_ending(get_mime_type_for_artifact(artifact)) file_path = item_dir / f"{artifact.name}{file_ending}" - if not artifact.metadata: - logger.error( - "Skipping artifact %s for item %s, no metadata present", artifact.name, item.external_id - ) - print( - f"> Skipping artifact {artifact.name} for item {item.external_id}, no metadata present" - ) if print_status else None - continue checksum = artifact.metadata[checksum_attribute_key] if file_path.exists(): file_checksum = calculate_file_crc32c(file_path) if file_checksum != checksum: - logger.debug("Resume download for %s to %s", artifact.name, file_path) - print(f"> Resume download for {artifact.name} to {file_path}") if print_status else None + print(f"> Resume download for {artifact.name} to {file_path}") else: continue else: downloaded_at_least_one_artifact = True - logger.debug("Download for %s to %s", artifact.name, file_path) - print(f"> Download for {artifact.name} to {file_path}") if print_status else None + print(f"> Download for {artifact.name} to {file_path}") # if file is not there at all or only partially downloaded yet download_file(artifact.download_url, str(file_path), checksum) if downloaded_at_least_one_artifact: - logger.debug("Downloaded results for item: %s to %s", item.external_id, item_dir) - print(f"Downloaded results for item: {item.external_id} to {item_dir}") if print_status else None + print(f"Downloaded results for item: {item.reference} to {item_dir}") else: - logger.debug("Results for item: %s already present in %s", item.external_id, item_dir) - print(f"Results for item: {item.external_id} already present in {item_dir}") if print_status else None - - def update_custom_metadata( - self, - custom_metadata: dict[str, Any], - ) -> None: - """Update custom metadata for this application run. - - Args: - custom_metadata (dict[str, Any]): The new custom metadata to attach to the run. - - Raises: - Exception: If the API request fails. - """ - custom_metadata = custom_metadata or {} - custom_metadata.setdefault("sdk", {}) - existing_sdk_metadata = custom_metadata.get("sdk", {}) - sdk_metadata = build_run_sdk_metadata(existing_sdk_metadata) - custom_metadata["sdk"].update(sdk_metadata) - validate_run_sdk_metadata(custom_metadata["sdk"]) - - self._api.put_run_custom_metadata_v1_runs_run_id_custom_metadata_put( - self.run_id, - custom_metadata_update_request=CustomMetadataUpdateRequest( - custom_metadata=cast("dict[str, Any]", convert_to_json_serializable(custom_metadata)) - ), - _request_timeout=settings().run_submit_timeout, - _headers={"User-Agent": user_agent()}, - ) - operation_cache_clear() # Clear all caches since we updated a run - - # TODO(Andreas): Always returns 404, likely connected with external_id encoding, - # see test test_cli_run_dump_and_update_item_custom_metadata - def update_item_custom_metadata( - self, - external_id: str, - custom_metadata: dict[str, Any], - ) -> None: - """Update custom metadata for an item in this application run. - - Args: - external_id (str): The external ID of the item. - custom_metadata (dict[str, Any]): The new custom metadata to attach to the item. - - Raises: - Exception: If the API request fails. 
- """ - custom_metadata = custom_metadata or {} - custom_metadata.setdefault("sdk", {}) - existing_sdk_metadata = custom_metadata.get("sdk", {}) - sdk_metadata = build_item_sdk_metadata(existing_sdk_metadata) - custom_metadata["sdk"].update(sdk_metadata) - validate_item_sdk_metadata(custom_metadata["sdk"]) - - self._api.put_item_custom_metadata_by_run_v1_runs_run_id_items_external_id_custom_metadata_put( - self.run_id, - external_id, - custom_metadata_update_request=CustomMetadataUpdateRequest( - custom_metadata=cast("dict[str, Any]", convert_to_json_serializable(custom_metadata)) - ), - _request_timeout=settings().run_submit_timeout, - _headers={"User-Agent": user_agent()}, - ) - operation_cache_clear() # Clear all caches since we updated a run + print(f"Results for item: {item.reference} already present in {item_dir}") def __str__(self) -> str: """Returns a string representation of the application run. @@ -422,25 +230,26 @@ def __str__(self) -> str: Returns: str: String representation of the application run. """ - details = cast("RunData", self.details()) - app_status = details.state.value - items = ( - f"{details.statistics.item_count} items: " - f"{details.statistics.item_pending_count}/" - f"{details.statistics.item_processing_count}/" - f"{details.statistics.item_user_error_count}/" - f"{details.statistics.item_system_error_count}/" - f"{details.statistics.item_skipped_count}/" - f"{details.statistics.item_succeeded_count}" - " [pending/processing/user-error/system-error/skipped/succeeded]" - ) - return f"Run `{self.run_id}`: {app_status}, {items}" + app_status = self.details().status.value + item_status = self.item_status() + pending, succeeded, error = 0, 0, 0 + for item in item_status.values(): + match item: + case ItemStatus.PENDING: + pending += 1 + case ItemStatus.SUCCEEDED: + succeeded += 1 + case ItemStatus.ERROR_USER | ItemStatus.ERROR_SYSTEM: + error += 1 + + items = f"{len(item_status)} items - ({pending}/{succeeded}/{error}) [pending/succeeded/error]" + return f"Application run `{self.application_run_id}`: {app_status}, {items}" class Runs: """Resource class for managing application runs. - Provides operations to submit, find, and retrieve runs. + Provides operations to create, find, and retrieve runs. """ def __init__(self, api: PublicApi) -> None: @@ -451,35 +260,29 @@ def __init__(self, api: PublicApi) -> None: """ self._api = api - def __call__(self, run_id: str) -> Run: - """Retrieves an Run instance for an existing run. + def __call__(self, application_run_id: str) -> ApplicationRun: + """Retrieves an ApplicationRun instance for an existing run. Args: - run_id (str): The ID of the application run. + application_run_id (str): The ID of the application run. Returns: - Run: The initialized Run instance. + ApplicationRun: The initialized ApplicationRun instance. """ - return Run(self._api, run_id) + return ApplicationRun(self._api, application_run_id) - def submit( - self, - application_id: str, - items: list[ItemCreationRequest], - application_version: str | None = None, - custom_metadata: dict[str, Any] | None = None, - ) -> Run: - """Submit a new application run. + def create( + self, application_version: str, items: list[ItemCreationRequest], custom_metadata: dict[str, Any] | None = None + ) -> ApplicationRun: + """Creates a new application run. Args: - application_id (str): The ID of the application. + application_version (str): The ID of the application version. items (list[ItemCreationRequest]): The run creation request payload. 
- application_version (str|None): The version of the application to use. - If None, the latest version is used. custom_metadata (dict[str, Any] | None): Optional metadata to attach to the run. Returns: - Run: The submitted application run. + ApplicationRun: The created application run. Raises: ValueError: If the payload is invalid. @@ -487,96 +290,48 @@ def submit( """ custom_metadata = custom_metadata or {} custom_metadata.setdefault("sdk", {}) - existing_sdk_metadata = custom_metadata.get("sdk", {}) - sdk_metadata = build_run_sdk_metadata(existing_sdk_metadata) - custom_metadata["sdk"].update(sdk_metadata) - validate_run_sdk_metadata(custom_metadata["sdk"]) - self._amend_input_items_with_sdk_metadata(items) - payload = RunCreationRequest( - application_id=application_id, - version_number=application_version, - custom_metadata=cast("dict[str, Any]", convert_to_json_serializable(custom_metadata)), - items=items, - ) + custom_metadata["sdk"]["user_agent"] = user_agent() + payload = RunCreationRequest(application_version_id=application_version, items=items, metadata=custom_metadata) self._validate_input_items(payload) - res: RunCreationResponse = self._api.create_run_v1_runs_post( - payload, - _request_timeout=settings().run_submit_timeout, - _headers={"User-Agent": user_agent()}, - ) - operation_cache_clear() # Clear all caches since we added a new run - return Run(self._api, str(res.run_id)) + res: RunCreationResponse = self._api.create_application_run_v1_runs_post(payload) + return ApplicationRun(self._api, str(res.application_run_id)) - def list( # noqa: PLR0913, PLR0917 - self, - application_id: str | None = None, - application_version: str | None = None, - external_id: str | None = None, - custom_metadata: str | None = None, - sort: str | None = None, - page_size: int = LIST_APPLICATION_RUNS_MAX_PAGE_SIZE, - nocache: bool = False, - ) -> Iterator[Run]: - """Find application runs, optionally filtered by application id and/or version. - - Retries on network and server errors. + def list(self, for_application_version: str | None = None) -> Generator[ApplicationRun, Any, None]: + """Find application runs, optionally filtered by application version. Args: - application_id (str | None): Optional application ID to filter by. - application_version (str | None): Optional application version to filter by. - external_id (str | None): The external ID to filter runs. If None, no filtering is applied. - custom_metadata (str | None): Optional metadata filter in JSONPath format. - sort (str | None): Optional field to sort by. Prefix with '-' for descending order. - page_size (int): Number of items per page, defaults to max - nocache (bool): If True, skip reading from cache and fetch fresh data from the API. - The fresh result will still be cached for subsequent calls. Defaults to False. + for_application_version (str | None): Optional application version ID to filter by. Returns: - Iterator[Run]: An iterator yielding application run handles. + Generator[ApplicationRun, Any, None]: A generator yielding application runs. Raises: - ValueError: If page_size is greater than 100. Exception: If the API request fails. 
""" - return ( - Run(self._api, response.run_id) - for response in self.list_data( - application_id=application_id, - application_version=application_version, - external_id=external_id, - custom_metadata=custom_metadata, - sort=sort, - page_size=page_size, - nocache=nocache, - ) - ) + if not for_application_version: + res = paginate(self._api.list_application_runs_v1_runs_get) + else: + res = paginate(self._api.list_application_runs_v1_runs_get, application_version_id=for_application_version) + return (ApplicationRun(self._api, response.application_run_id) for response in res) - def list_data( # noqa: PLR0913, PLR0917 + # TODO(Andreas): Think about merging by having list(...) above return active records that as well hold data + def list_data( self, - application_id: str | None = None, - application_version: str | None = None, - external_id: str | None = None, - custom_metadata: str | None = None, + for_application_version: str | None = None, + metadata: str | None = None, sort: str | None = None, page_size: int = LIST_APPLICATION_RUNS_MAX_PAGE_SIZE, - nocache: bool = False, - ) -> t.Iterator[RunData]: + ) -> t.Iterator[ApplicationRunData]: """Fetch application runs, optionally filtered by application version. - Retries on network and server errors. - Args: - application_id (str | None): Optional application ID to filter by. - application_version (str | None): Optional application version ID to filter by. - external_id (str | None): The external ID to filter runs. If None, no filtering is applied. - custom_metadata (str | None): Optional metadata filter in JSONPath format. + for_application_version (str | None): Optional application version ID to filter by. + metadata (str | None): Optional metadata filter in JSONPath format. sort (str | None): Optional field to sort by. Prefix with '-' for descending order. page_size (int): Number of items per page, defaults to max - nocache (bool): If True, skip reading from cache and fetch fresh data from the API. - The fresh result will still be cached for subsequent calls. Defaults to False. Returns: - Iterator[RunData]: Iterator yielding application run data. + Iterator[ApplicationRunData]: Iterator yielding application run data. Raises: ValueError: If page_size is greater than 100. 
@@ -587,56 +342,27 @@ def list_data(  # noqa: PLR0913, PLR0917
             f"page_size must be less than or equal to {LIST_APPLICATION_RUNS_MAX_PAGE_SIZE}, but got {page_size}"
         )
         raise ValueError(message)
-
-        @cached_operation(ttl=settings().run_cache_ttl, use_token=True)
-        def list_data_with_retry(**kwargs: object) -> list[RunData]:
-            return Retrying(
-                retry=retry_if_exception_type(exception_types=RETRYABLE_EXCEPTIONS),
-                stop=stop_after_attempt(settings().run_retry_attempts),
-                wait=wait_exponential_jitter(initial=settings().run_retry_wait_min, max=settings().run_retry_wait_max),
-                before_sleep=before_sleep_log(logger, logging.WARNING),
-                reraise=True,
-            )(
-                lambda: self._api.list_runs_v1_runs_get(
-                    _request_timeout=settings().run_timeout,
-                    _headers={"User-Agent": user_agent()},
-                    **kwargs,  # pyright: ignore[reportArgumentType]
-                )
+        if not for_application_version:
+            res = paginate(
+                self._api.list_application_runs_v1_runs_get,
+                page_size=page_size,
+                metadata=metadata,
+                sort=[sort] if sort else None,
             )
-
-        return paginate(
-            lambda **kwargs: list_data_with_retry(
-                application_id=application_id,
-                application_version=application_version,
-                external_id=external_id,
-                custom_metadata=custom_metadata,
+        else:
+            res = paginate(
+                self._api.list_application_runs_v1_runs_get,
+                page_size=page_size,
+                application_version_id=for_application_version,
+                metadata=metadata,
                 sort=[sort] if sort else None,
-                nocache=nocache,
-                **kwargs,
-            ),
-            page_size=page_size,
-        )
-
-    @staticmethod
-    def _amend_input_items_with_sdk_metadata(items: builtins.list[ItemCreationRequest]) -> None:
-        """Amends input items with SDK metadata.
-
-        Args:
-            items (builtins.list[ItemCreationRequest]): The list of item creation requests to amend.
-        """
-        for item in items:
-            item_custom_metadata = item.custom_metadata or {}
-            item_custom_metadata.setdefault("sdk", {})
-            existing_item_sdk_metadata = item_custom_metadata.get("sdk")
-            item_sdk_metadata = build_item_sdk_metadata(existing_item_sdk_metadata)
-            item_custom_metadata["sdk"].update(item_sdk_metadata)
-            validate_item_sdk_metadata(item_custom_metadata["sdk"])
-            item.custom_metadata = cast("dict[str, Any]", convert_to_json_serializable(item_custom_metadata))
+            )
+        return res

     def _validate_input_items(self, payload: RunCreationRequest) -> None:
         """Validates the input items in a run creation request.

-        Checks that external ids are unique, all required artifacts are provided,
+        Checks that references are unique, all required artifacts are provided,
         and artifact metadata matches the expected schema.

         Args:
@@ -647,22 +373,17 @@ def _validate_input_items(self, payload: RunCreationRequest) -> None:
             Exception: If the API request fails.
         """
         # validate metadata based on schema of application version
-        app_version = cast(
-            "ApplicationVersion",
-            Versions(self._api).details(
-                application_id=payload.application_id, application_version=payload.version_number
-            ),
-        )
+        app_version = Versions(self._api).details(application_version=payload.application_version_id)
         schema_idx = {
             input_artifact.name: input_artifact.metadata_schema for input_artifact in app_version.input_artifacts
         }
-        external_ids = set()
+        references = set()
        for item in payload.items:
-            # verify external IDs are unique
-            if item.external_id in external_ids:
-                msg = f"Duplicate external ID `{item.external_id}` in items."
+            # verify references are unique
+            if item.reference in references:
+                msg = f"Duplicate reference `{item.reference}` in items."
                raise ValueError(msg)
-            external_ids.add(item.external_id)
+            references.add(item.reference)

             schema_check = set(schema_idx.keys())
             for artifact in item.input_artifacts:
diff --git a/src/aignostics/qupath/__init__.py b/src/aignostics/qupath/__init__.py
index f449548e..e15ef05b 100644
--- a/src/aignostics/qupath/__init__.py
+++ b/src/aignostics/qupath/__init__.py
@@ -2,8 +2,6 @@

 from importlib.util import find_spec

-from ._settings import Settings
-
 __all__ = []

 # advertise PageBuilder to enable auto-discovery
@@ -19,6 +17,5 @@
     "AnnotateProgress",
     "PageBuilder",
     "Service",
-    "Settings",
     "cli",
 ]
diff --git a/src/aignostics/qupath/_cli.py b/src/aignostics/qupath/_cli.py
index 82935ccf..04213f60 100644
--- a/src/aignostics/qupath/_cli.py
+++ b/src/aignostics/qupath/_cli.py
@@ -180,8 +180,9 @@ def processes(
             "--json",
             "-j",
             help="Output the running QuPath processes as JSON.",
+            is_flag=True,
         ),
-    ] = False,
+    ],
 ) -> None:
     """List running QuPath processes.

@@ -376,7 +377,7 @@ def annotate(
     try:
         annotation_count = Service().annotate(project=project, image=image, annotations=annotations)
         console.print(
-            f"Added '{annotation_count}' annotations to '{image}' in '{project}'.",
+            f"Added {annotation_count} annotations to {image} in {project}.",
             style="success",
         )
     except Exception as e:
diff --git a/src/aignostics/qupath/_service.py b/src/aignostics/qupath/_service.py
index f6218513..65ed7d9e 100644
--- a/src/aignostics/qupath/_service.py
+++ b/src/aignostics/qupath/_service.py
@@ -27,6 +27,7 @@
 from aignostics.utils import (
     SUBPROCESS_CREATION_FLAGS,
+    UNHIDE_SENSITIVE_INFO,
     BaseService,
     Health,
     __project_name__,
@@ -210,7 +211,7 @@ def __init__(self) -> None:
         """Initialize service."""
         super().__init__(Settings)

-    def info(self, mask_secrets: bool = True) -> dict[str, Any]:  # noqa: ARG002, PLR6301
+    def info(self, mask_secrets: bool = True) -> dict[str, Any]:
         """Determine info of this service.

         Args:
@@ -219,6 +220,7 @@ def info(self, mask_secrets: bool = True) -> dict[str, Any]:  # noqa: ARG002, PL
         Returns:
             dict[str,Any]: The info of this service.
         """
+        settings = self._settings.model_dump(context={UNHIDE_SENSITIVE_INFO: not mask_secrets})
         executable = Service.find_qupath_executable()
         version = Service.get_version()
         return {
@@ -226,10 +228,11 @@ def info(self, mask_secrets: bool = True) -> dict[str, Any]:  # noqa: ARG002, PL
                 "path": str(executable) if executable else None,
                 "version": dict(version) if version else None,
                 "expected_version": Service.get_expected_version(),
-            }
+            },
+            "settings": settings,
         }

-    def health(self) -> Health:  # noqa: PLR6301
+    def health(self) -> Health:
         """Determine health of this service.

         Returns:
@@ -237,8 +240,36 @@ def health(self) -> Health:  # noqa: PLR6301
         """
         return Health(
             status=Health.Code.UP,
+            components={
+                "application": self._determine_application_health(),
+            },
         )

+    @staticmethod
+    def _determine_application_health() -> Health:
+        """Determine whether the QuPath application is installed in the expected version.
+
+        - Checks that a QuPath installation is found and its version can be determined
+        - Compares the detected version against the expected version
+
+        Returns:
+            Health: The healthiness of the QuPath application installation.
+        """
+        try:
+            version = Service.get_version()
+            if not version:
+                message = "QuPath not installed."
+                return Health(status=Health.Code.DOWN, reason=message)
+            if version.version != Service.get_expected_version():
+                message = f"QuPath version mismatch: expected {QUPATH_VERSION}, got {version.version}"
+                logger.warning(message)
+                return Health(status=Health.Code.DOWN, reason=message)
+        except Exception as e:
+            message = f"Exception while checking health of QuPath application: {e!s}"
+            logger.exception(message)
+            return Health(status=Health.Code.DOWN, reason=message)
+        return Health(status=Health.Code.UP)
+
     @staticmethod
     def _get_app_dir_from_qupath_dir(qupath_dir: Path, platform_system: str | None) -> Path:
         """Get the QuPath application directory based on the platform system.
@@ -707,7 +738,7 @@ def _extract_qupath(  # noqa: C901, PLR0912, PLR0915
                 f"cat '{payload_path.resolve()!s}' | gunzip -dc | cpio -i",
             ]
             if platform.system() == "Darwin"
-            else ["7z", "x", str(payload_path.resolve()), f"-o{payload_extract_dir.resolve()!s}"]
+            else ["7z", "x", str(payload_path.resolve()), "-o" + str(payload_extract_dir.resolve())]
         )
         subprocess.run(  # noqa: S603
             command,
diff --git a/src/aignostics/system/_cli.py b/src/aignostics/system/_cli.py
index 4049e635..1a3b177b 100644
--- a/src/aignostics/system/_cli.py
+++ b/src/aignostics/system/_cli.py
@@ -4,14 +4,13 @@
 import sys
 from enum import StrEnum
 from importlib.util import find_spec
-from pathlib import Path
 from typing import Annotated

 import typer
 import yaml

 from ..constants import API_VERSIONS  # noqa: TID252
-from ..utils import Health, console, get_logger  # noqa: TID252
+from ..utils import console, get_logger  # noqa: TID252
 from ._service import Service

 logger = get_logger(__name__)
@@ -53,17 +52,14 @@ def health(
     Args:
         output_format (OutputFormat): Output format (JSON or YAML).
     """
-    health = _service.health()
     match output_format:
         case OutputFormat.JSON:
-            console.print_json(data=health.model_dump())
+            console.print_json(data=_service.health().model_dump())
         case OutputFormat.YAML:
            console.print(
-                yaml.dump(data=json.loads(health.model_dump_json()), width=80, default_flow_style=False),
+                yaml.dump(data=json.loads(_service.health().model_dump_json()), width=80, default_flow_style=False),
                 end="",
             )
-    if health.status is not Health.Code.UP:
-        sys.exit(1)


 @cli.command()
@@ -89,30 +85,6 @@ def info(
     console.print(yaml.dump(info, width=80, default_flow_style=False), end="")


-@cli.command()
-def dump_dot_env_file(
-    destination: Annotated[
-        Path,
-        typer.Option(
-            help="Path pointing to .env file to generate, defaults to .env.current in current working directory.",
-            exists=False,
-            file_okay=True,
-            dir_okay=False,
-            writable=True,
-            readable=True,
-            resolve_path=True,
-        ),
-    ] = Path(".env.current"),
-) -> None:
-    """Dump settings to .env file.
-
-    Args:
-        destination (Path): Path pointing to .env file to generate.
-    """
-    _service.dump_dot_env_file(destination=destination)
-    console.print(f"Settings dumped to {destination}", style="success")
-
-
 if find_spec("nicegui"):
     from ..utils import gui_run  # noqa: TID252

@@ -127,6 +99,7 @@ def serve(
     Args:
         host (str): Host to bind the server to.
         port (int): Port to bind the server to.
+        watch (bool): Enable auto-reload on source code changes.
         open_browser (bool): Open app in browser after starting the server.
""" console.print(f"Starting web application server at http://{host}:{port}") diff --git a/src/aignostics/system/_gui.py b/src/aignostics/system/_gui.py index 84a83526..8529f4a0 100644 --- a/src/aignostics/system/_gui.py +++ b/src/aignostics/system/_gui.py @@ -17,10 +17,18 @@ def register_pages() -> None: # noqa: PLR0915 locate_subclasses(BaseService) # Ensure settings are loaded app.add_static_files("/system_assets", Path(__file__).parent / "assets") - @ui.page("/alive") - def alive() -> None: - """Simple page to check the GUI is alive.""" - ui.label("Yes") + # TODO(Helmut): Remove when working with nicegui 3 + def deprecated() -> None: + ui.add_head_html(""" + + """) @ui.page("/system") async def page_system() -> None: # noqa: PLR0915 @@ -54,9 +62,8 @@ async def page_system() -> None: # noqa: PLR0915 properties["content"] = {"json": "Health check failed."} # type: ignore[unreachable] else: properties["content"] = {"json": health.model_dump()} - # Note: editor.update(...) broken in NiceGUI 3.0.4 - editor.run_editor_method("update", properties["content"]) - editor.run_editor_method(":expand", "[]", "path => true") + editor.update() + editor.run_editor_method(":expand", "path => true") spinner.set_visibility(False) editor.set_visibility(True) with ui.tab_panel(tab_info).classes("min-h-[calc(100vh-12rem)]"): @@ -91,9 +98,8 @@ async def load_info(mask_secrets: bool = True) -> None: properties["content"] = {"json": "Info retrieval failed."} # type: ignore[unreachable] else: properties["content"] = {"json": info} - # Note: editor.update(...) broken in NiceGUI 3.0.4 - editor.run_editor_method("update", properties["content"]) - editor.run_editor_method(":expand", "[]", "path => true") + editor.update() + editor.run_editor_method(":expand", "path => true") spinner.set_visibility(False) editor.set_visibility(True) mask_secrets_switch.set_visibility(True) diff --git a/src/aignostics/system/_service.py b/src/aignostics/system/_service.py index 9c65aa24..dea80bad 100644 --- a/src/aignostics/system/_service.py +++ b/src/aignostics/system/_service.py @@ -174,6 +174,7 @@ def is_token_valid(self, token: str) -> bool: Returns: bool: True if the token is valid, False otherwise. """ + logger.info(token) if not self._settings.token: logger.warning("Token is not set in settings.") return False @@ -270,28 +271,6 @@ def _is_secret_key(key: str) -> bool: return True # Check simple string match terms return any(term in key_lower for term in string_match_terms) - @staticmethod - def _collect_all_settings(mask_secrets: bool = True) -> dict[str, Any]: - """Collect settings from all BaseSettings subclasses. - - Args: - mask_secrets (bool): Whether to mask sensitive information in the output. - - Returns: - dict[str, Any]: Flattened settings dictionary with env_prefix + key as the key. 
- """ - settings: dict[str, Any] = {} - for settings_class in locate_subclasses(BaseSettings): - settings_instance = load_settings(settings_class) - env_prefix = settings_instance.model_config.get("env_prefix", "") - settings_dict = json.loads( - settings_instance.model_dump_json(context={UNHIDE_SENSITIVE_INFO: not mask_secrets}) - ) - for key, value in settings_dict.items(): - flat_key = f"{env_prefix}{key}".upper() - settings[flat_key] = value - return {k: settings[k] for k in sorted(settings)} - @staticmethod def info(include_environ: bool = False, mask_secrets: bool = True) -> dict[str, Any]: # type: ignore[override] """ @@ -411,7 +390,17 @@ def info(include_environ: bool = False, mask_secrets: bool = True) -> dict[str, else: runtime["environ"] = dict(sorted(os.environ.items())) - rtn["settings"] = Service._collect_all_settings(mask_secrets=mask_secrets) + settings: dict[str, Any] = {} + for settings_class in locate_subclasses(BaseSettings): + settings_instance = load_settings(settings_class) + env_prefix = settings_instance.model_config.get("env_prefix", "") + settings_dict = json.loads( + settings_instance.model_dump_json(context={UNHIDE_SENSITIVE_INFO: not mask_secrets}) + ) + for key, value in settings_dict.items(): + flat_key = f"{env_prefix}{key}".upper() + settings[flat_key] = value + rtn["settings"] = {k: settings[k] for k in sorted(settings)} # Convert the TypedDict to a regular dict before adding dynamic service keys result_dict: dict[str, Any] = dict(rtn) @@ -424,21 +413,6 @@ def info(include_environ: bool = False, mask_secrets: bool = True) -> dict[str, logger.info("Service info: %s", result_dict) return result_dict - @staticmethod - def dump_dot_env_file(destination: Path) -> None: - """Dump settings to .env file. - - Args: - destination (Path): Path pointing to .env file to generate. - - Raises: - ValueError: If the primary .env file does not exist. 
- """ - dump = Service._collect_all_settings(mask_secrets=False) - with destination.open("w", encoding="utf-8") as f: - for key, value in dump.items(): - f.write(f"{key}={value}\n") - @staticmethod def openapi_schema() -> JsonType: """ diff --git a/src/aignostics/third_party/showinfm/showinfm.py b/src/aignostics/third_party/showinfm/showinfm.py index e4d5e6ce..dee5582a 100644 --- a/src/aignostics/third_party/showinfm/showinfm.py +++ b/src/aignostics/third_party/showinfm/showinfm.py @@ -343,12 +343,7 @@ def show_in_file_manager( for d in directories: if verbose: print("Executing Windows shell to open", d) - # Validate path exists and is a directory before opening - path_obj = Path(d) - if path_obj.exists() and path_obj.is_dir(): - os.startfile(d) # noqa: S606 - elif verbose: - print(f"Skipping invalid or non-directory path: {d}", file=sys.stderr) + os.startfile(d) else: if uris_and_paths: # Some file managers must be passed only one or zero paths / URIs diff --git a/src/aignostics/utils/CLAUDE.md b/src/aignostics/utils/CLAUDE.md index 309c56ef..4da87956 100644 --- a/src/aignostics/utils/CLAUDE.md +++ b/src/aignostics/utils/CLAUDE.md @@ -21,7 +21,6 @@ The utils module provides core infrastructure and shared utilities used across a - `_settings.py` - Settings management with Pydantic validation - `_log.py` - Structured logging configuration - `_health.py` - Health check framework -- `_user_agent.py` - **Enhanced user agent generation with CI/CD context** (NEW) - `boot.py` - Application bootstrap and initialization **System Utilities:** @@ -61,27 +60,6 @@ class MyService(BaseService): return {"version": "1.0.0"} ``` -**User Agent Generation:** - -```python -from aignostics.utils import user_agent - -# Generate enhanced user agent with CI/CD context -ua = user_agent() -# Format: {project_name}/{version} ({platform}; {pytest_test}; {github_run_url}) - -# Examples: -# "aignostics/1.0.0-beta.7 (darwin)" -# "aignostics/1.0.0-beta.7 (linux; tests/platform/test_auth.py::test_login)" -# "aignostics/1.0.0-beta.7 (linux; +https://github.com/org/repo/actions/runs/123)" -# "aignostics/1.0.0-beta.7 (linux; tests/.../test_e2e.py; +https://github.com/org/repo/actions/runs/456)" - -# Used automatically by: -# - SDK metadata system (platform._sdk_metadata) -# - API client HTTP headers -# - Logging context -``` - **Logging:** ```python @@ -118,61 +96,6 @@ class MyService(BaseService): ## Technical Implementation -**User Agent System (`_user_agent.py`):** - -**NEW FEATURE**: Enhanced user agent generation with automatic CI/CD context detection. - -```python -def user_agent() -> str: - """Generate user agent string for HTTP requests. 
-
-    Format: {project_name}/{version} ({platform}; {current_test}; {github_run})
-
-    Detection:
-    - Platform: sys.platform (darwin, linux, win32)
-    - Pytest: PYTEST_CURRENT_TEST environment variable
-    - GitHub Actions: GITHUB_RUN_ID, GITHUB_REPOSITORY environment variables
-
-    Returns:
-        str: User agent string with contextual information
-    """
-    current_test = os.getenv("PYTEST_CURRENT_TEST")  # e.g., "tests/test_foo.py::test_bar"
-    github_run_id = os.getenv("GITHUB_RUN_ID")  # GitHub Actions workflow run ID
-    github_repository = os.getenv("GITHUB_REPOSITORY")  # e.g., "owner/repo"
-
-    optional_parts = []
-
-    # Add test context if running under pytest
-    if current_test:
-        optional_parts.append(current_test)
-
-    # Add GitHub Actions context if available
-    if github_run_id and github_repository:
-        github_run_url = f"+https://github.com/{github_repository}/actions/runs/{github_run_id}"
-        optional_parts.append(github_run_url)
-
-    # Build user agent
-    base = f"{PROJECT_NAME}/{VERSION} ({sys.platform})"
-    if optional_parts:
-        return f"{base}; {'; '.join(optional_parts)}"
-    return base
-```
-
-**Usage in SDK:**
-
-1. **SDK Metadata**: Included in every run's metadata (`platform._sdk_metadata.build_sdk_metadata()`)
-2. **HTTP Headers**: Set in API client configuration for all HTTP requests
-3. **Logging Context**: Available for structured logging and observability
-4. **Debugging**: Provides traceability from API requests back to specific tests or workflow runs
-
-**Key Features:**
-
-- **Automatic Context Detection**: No manual configuration required
-- **CI/CD Integration**: Captures GitHub Actions workflow context with direct links to runs
-- **Test Traceability**: Links API requests to specific pytest tests
-- **Platform Identification**: Operating system detection for debugging platform-specific issues
-- **Lightweight**: Minimal performance overhead, simple environment variable reads
-
 **Service Discovery System:**

 - Dynamic discovery of implementations and subclasses
diff --git a/src/aignostics/utils/_fs.py b/src/aignostics/utils/_fs.py
index 4793bac8..3bad7539 100644
--- a/src/aignostics/utils/_fs.py
+++ b/src/aignostics/utils/_fs.py
@@ -46,6 +46,7 @@ def sanitize_path(path: str | Path) -> str | Path:

     Args:
         path (str | Path): The path to sanitize.
+        replace_colon_with_underscore (bool): If True (the default), rule #2 is applied.

     Returns:
         str | Path: The sanitized path.
diff --git a/src/aignostics/utils/_gui.py b/src/aignostics/utils/_gui.py
index b5e653bc..659d0eae 100644
--- a/src/aignostics/utils/_gui.py
+++ b/src/aignostics/utils/_gui.py
@@ -90,7 +90,6 @@ def gui_run(  # noqa: PLR0913, PLR0917
         show_welcome_message=native is False,
         show=show,
         window_size=WINDOW_SIZE if native else None,
-        reconnect_timeout=60 * 60 * 24 * 7,
     )
diff --git a/src/aignostics/utils/_user_agent.py b/src/aignostics/utils/_user_agent.py
index a9d4e1ea..1455ba7c 100644
--- a/src/aignostics/utils/_user_agent.py
+++ b/src/aignostics/utils/_user_agent.py
@@ -3,35 +3,20 @@
 import os
 import platform

-from ._constants import __project_name__, __repository_url__, __version_full__
+from ._constants import __project_name__, __version_full__


 def user_agent() -> str:
     """Generate a user agent string for HTTP requests.

-    Format: {project_name}/{version} ({platform}; {current_test}; {github_run})
+    Format: {project_name}/{version} ({platform}; {current_test})

     Returns:
         str: The user agent string.
""" - current_test = os.getenv("PYTEST_CURRENT_TEST") # Set if running under pytest - github_run_id = os.getenv("GITHUB_RUN_ID") # Set if running in GitHub Actions - github_repository = os.getenv("GITHUB_REPOSITORY") # Set if running in GitHub Actions - - optional_parts = [] - - if current_test: - optional_parts.append(current_test) - - if github_run_id and github_repository: - github_run_url = f"+https://github.com/{github_repository}/actions/runs/{github_run_id}" - optional_parts.append(github_run_url) - - optional_suffix = "; " + "; ".join(optional_parts) if optional_parts else "" - - # TODO(Helmut): Find a way to not hard code python-sdk here. - # Format: {project}/{version} ({platform}; {repository}; {optional_parts}) - base_info = f"{__project_name__}-python-sdk/{__version_full__}" - system_info = f"{platform.platform()}; +{__repository_url__}{optional_suffix}" - - return f"{base_info} ({system_info})" + current_test = os.getenv("PYTEST_CURRENT_TEST") + return ( + f"{__project_name__}-python-sdk/{__version_full__} " + f"({platform.platform()}" + f"{'; ' + current_test if current_test else ''})" + ) diff --git a/src/aignostics/utils/boot.py b/src/aignostics/utils/boot.py index 55080ff0..651e8710 100644 --- a/src/aignostics/utils/boot.py +++ b/src/aignostics/utils/boot.py @@ -31,6 +31,8 @@ def boot(modules_to_instrument: list[str]) -> None: Args: modules_to_instrument (list): List of modules to be instrumented. + repository_url (str): URL of the repository. + repository_root_path (str): The root path of the repository. Default is the root path. """ global _boot_called # noqa: PLW0603 if _boot_called: @@ -59,9 +61,6 @@ def _parse_env_args() -> None: - Last but not least removes those args so typer does not complain about them. """ - logger = get_logger(__name__) - logger.debug("_parse_env_args called with sys.argv: %s", sys.argv) - i = 1 # Start after script name to_remove = [] prefix = f"{__project_name__.upper()}_" diff --git a/src/aignostics/wsi/_cli.py b/src/aignostics/wsi/_cli.py index 2a9bafec..6d8d26bb 100644 --- a/src/aignostics/wsi/_cli.py +++ b/src/aignostics/wsi/_cli.py @@ -1,6 +1,5 @@ """CLI for operations on wsi files.""" -import sys from pathlib import Path from typing import Annotated @@ -18,81 +17,73 @@ @cli.command() -def inspect( # noqa: PLR0915 +def inspect( path: Annotated[Path, typer.Argument(help="Path to the wsi file", exists=True)], ) -> None: """Inspect a wsi file and display its metadata.""" - try: - metadata = Service().get_metadata(path) - - # Basics - console.print("Format:", style="blue", end=" ") - console.print(metadata["format"], style="green") - console.print("Path:", style="blue", end=" ") - console.print(metadata["file"]["path"], style="green") - console.print("Size (human):", style="blue", end=" ") - console.print(metadata["file"]["size_human"], style="green") - console.print("Width:", style="blue", end=" ") - console.print(metadata["dimensions"]["width"], style="green") - console.print("Height:", style="blue", end=" ") - console.print(metadata["dimensions"]["height"], style="green") - console.print("MPP (x):", style="blue", end=" ") - console.print(metadata["resolution"]["mpp_x"], style="green") - console.print("MPP (y):", style="blue", end=" ") - console.print(metadata["resolution"]["mpp_y"], style="green") - - # Image Properties - if "properties" in metadata and "image" in metadata["properties"]: - img = metadata["properties"]["image"] - created = f"{img['date']} (libvips {img['version']})" - console.print("Created:", style="blue", end=" ") - 
console.print(created, style="green") - - if "properties" in img and "bands" in img["properties"]: - console.print("Color channels:", style="blue", end=" ") - console.print(str(img["properties"]["bands"]), style="green") - - if "properties" in img and "aix-original-format" in img["properties"]: - console.print("aix-original-format:", style="blue", end=" ") - console.print(str(img["properties"]["aix-original-format"]), style="green") - - # Level Structure - console.print("\nLevel Structure:", style="bold blue") - for level in metadata["levels"]["data"]: - console.print(f"\nLevel {level['index']}", style="blue") - - dimensions = f"{level['dimensions']['width']} x {level['dimensions']['height']} pixels" - console.print(" Dimensions:", style="blue", end=" ") - console.print(dimensions, style="green") - - downsample = f"{level['downsample']:.1f}x" - console.print(" Downsample factor:", style="blue", end=" ") - console.print(downsample, style="green") - - pixel_size = f"{metadata['resolution']['mpp_x'] * level['downsample']:.3f} μm/pixel" - console.print(" Pixel size:", style="blue", end=" ") - console.print(pixel_size, style="green") - - tile_size = f"{level['tile']['width']} x {level['tile']['height']} pixels" - console.print(" Tile size:", style="blue", end=" ") - console.print(tile_size, style="green") - - tiles = ( - f"{level['tile']['grid']['x']} x {level['tile']['grid']['y']} ({level['tile']['grid']['total']} total)" - ) - console.print(" Tiles:", style="blue", end=" ") - console.print(tiles, style="green") - - # Associated Images - if metadata.get("associated_images"): - console.print("\nAssociated Images:", style="bold blue") - for img in metadata["associated_images"]: - console.print(f" - {img}", style="green") - except Exception as e: - message = f"Failed to inspect path '{path}': {e!s}" - logger.exception(message) - console.print(f"[red]{message}[/red]") - sys.exit(1) + metadata = Service().get_metadata(path) + + # Basics + console.print("Format:", style="blue", end=" ") + console.print(metadata["format"], style="green") + console.print("Path:", style="blue", end=" ") + console.print(metadata["file"]["path"], style="green") + console.print("Size (human):", style="blue", end=" ") + console.print(metadata["file"]["size_human"], style="green") + console.print("Width:", style="blue", end=" ") + console.print(metadata["dimensions"]["width"], style="green") + console.print("Height:", style="blue", end=" ") + console.print(metadata["dimensions"]["height"], style="green") + console.print("MPP (x):", style="blue", end=" ") + console.print(metadata["resolution"]["mpp_x"], style="green") + console.print("MPP (y):", style="blue", end=" ") + console.print(metadata["resolution"]["mpp_y"], style="green") + + # Image Properties + if "properties" in metadata and "image" in metadata["properties"]: + img = metadata["properties"]["image"] + created = f"{img['date']} (libvips {img['version']})" + console.print("Created:", style="blue", end=" ") + console.print(created, style="green") + + if "properties" in img and "bands" in img["properties"]: + console.print("Color channels:", style="blue", end=" ") + console.print(str(img["properties"]["bands"]), style="green") + + if "properties" in img and "aix-original-format" in img["properties"]: + console.print("aix-original-format:", style="blue", end=" ") + console.print(str(img["properties"]["aix-original-format"]), style="green") + + # Level Structure + console.print("\nLevel Structure:", style="bold blue") + for level in metadata["levels"]["data"]: + 
console.print(f"\nLevel {level['index']}", style="blue") + + dimensions = f"{level['dimensions']['width']} x {level['dimensions']['height']} pixels" + console.print(" Dimensions:", style="blue", end=" ") + console.print(dimensions, style="green") + + downsample = f"{level['downsample']:.1f}x" + console.print(" Downsample factor:", style="blue", end=" ") + console.print(downsample, style="green") + + pixel_size = f"{metadata['resolution']['mpp_x'] * level['downsample']:.3f} μm/pixel" + console.print(" Pixel size:", style="blue", end=" ") + console.print(pixel_size, style="green") + + tile_size = f"{level['tile']['width']} x {level['tile']['height']} pixels" + console.print(" Tile size:", style="blue", end=" ") + console.print(tile_size, style="green") + + tiles = f"{level['tile']['grid']['x']} x {level['tile']['grid']['y']} ({level['tile']['grid']['total']} total)" + console.print(" Tiles:", style="blue", end=" ") + console.print(tiles, style="green") + + # Associated Images + if metadata.get("associated_images"): + console.print("\nAssociated Images:", style="bold blue") + for img in metadata["associated_images"]: + console.print(f" - {img}", style="green") cli_dicom = typer.Typer(no_args_is_help=True) @@ -111,28 +102,22 @@ def dicom_inspect( """Inspect DICOM files at any hierarchy level.""" from ._pydicom_handler import PydicomHandler # noqa: PLC0415 - try: - with PydicomHandler.from_file(str(path)) as handler: - metadata = handler.get_metadata(verbose) + with PydicomHandler.from_file(str(path)) as handler: + metadata = handler.get_metadata(verbose) - if metadata["type"] == "empty": - console.print("[bold red]No DICOM files found in the specified path.[/bold red]") - return + if metadata["type"] == "empty": + console.print("[bold red]No DICOM files found in the specified path.[/bold red]") + return - # Print hierarchy - for study_uid, study_data in metadata["studies"].items(): - console.print(f"\n[bold]Study:[/bold] {study_uid}") - print_study_info(study_data) + # Print hierarchy + for study_uid, study_data in metadata["studies"].items(): + console.print(f"\n[bold]Study:[/bold] {study_uid}") + print_study_info(study_data) - if not summary: - for container_id, slide_data in study_data["slides"].items(): - console.print(f"\n[bold]Slide (Container ID):[/bold] {container_id}") - print_slide_info(slide_data, indent=1, verbose=verbose) - except Exception as e: - message = f"Failed to inspect DICOM path '{path}': {e!s}" - logger.exception(message) - console.print(f"[red]{message}[/red]") - sys.exit(1) + if not summary: + for container_id, slide_data in study_data["slides"].items(): + console.print(f"\n[bold]Slide (Container ID):[/bold] {container_id}") + print_slide_info(slide_data, indent=1, verbose=verbose) @cli_dicom.command(name="geojson_import") @@ -143,11 +128,5 @@ def dicom_geojson_import( """Import GeoJSON annotations into DICOM ANN instance.""" from ._pydicom_handler import PydicomHandler # noqa: PLC0415 - try: - console.print("\nImporting GeoJSON annotations into DICOM ANN instance...", style="blue") - PydicomHandler.geojson_import(dicom_path, geojson_path) - except Exception as e: - message = f"Failed to import GeoJSON '{geojson_path}' into DICOM '{dicom_path}': {e!s}" - logger.exception(message) - console.print(f"[red]{message}[/red]") - sys.exit(1) + console.print("\nImporting GeoJSON annotations into DICOM ANN instance...", style="blue") + PydicomHandler.geojson_import(dicom_path, geojson_path) diff --git a/src/aignostics/wsi/_openslide_handler.py 
b/src/aignostics/wsi/_openslide_handler.py
index 9a22ed6f..28e83efe 100644
--- a/src/aignostics/wsi/_openslide_handler.py
+++ b/src/aignostics/wsi/_openslide_handler.py
@@ -1,9 +1,9 @@
 """Handler for wsi files using OpenSlide."""

+import xml.etree.ElementTree as ET  # noqa: S405
 from pathlib import Path
 from typing import Any

-import defusedxml.ElementTree as ET  # noqa: N817
 import openslide
 from openslide import ImageSlide, OpenSlide, open_slide
 from PIL.Image import Image
@@ -44,10 +44,11 @@ def _detect_format(self) -> str | None:
             str: The detected format of the TIFF file.
         """
         props = dict(self.slide.properties)
+        # Check for libvips signature in XML metadata
         if TIFF_IMAGE_DESCRIPTION in props:
             try:
-                root = ET.fromstring(props[TIFF_IMAGE_DESCRIPTION])
+                root = ET.fromstring(props[TIFF_IMAGE_DESCRIPTION])  # noqa: S314
                 if root.get("xmlns") == "http://www.vips.ecs.soton.ac.uk//dzsave":
                     return "pyramidal-tiff (libvips)"
             except ET.ParseError:
@@ -86,7 +87,7 @@ def _parse_xml_image_description(self, xml_string: str) -> dict[str, Any]:  # no
             dict[str, Any]: Parsed image description as a dictionary with metadata properties.
         """
         try:
-            root = ET.fromstring(xml_string)
+            root = ET.fromstring(xml_string)  # noqa: S314
             namespace = {"ns": "http://www.vips.ecs.soton.ac.uk//dzsave"}
             image_desc: dict[str, Any] = {
                 "date": root.get("date"),
@@ -109,9 +110,9 @@ def _parse_xml_image_description(self, xml_string: str) -> dict[str, Any]:  # no
                     value_type = value_elem.get("type", "")

                     if value_type == "gint":
-                        value = int(value)
+                        value = int(value)  # type: ignore[assignment]
                     elif value_type == "gdouble":
-                        value = float(value)
+                        value = float(value)  # type: ignore[assignment]
                     elif value_type == "VipsRefString":
                         # Handle special libvips string properties
                         if name == "aix-libVips-version":
diff --git a/src/aignostics/wsi/_utils.py b/src/aignostics/wsi/_utils.py
index 9cc92e45..d9507ee8 100644
--- a/src/aignostics/wsi/_utils.py
+++ b/src/aignostics/wsi/_utils.py
@@ -8,6 +8,26 @@
 from aignostics.utils import console


+def get_tag_info(tag_str: str) -> str:
+    """Convert a DICOM tag to its human-readable name.
+
+    Args:
+        tag_str (str): DICOM tag string in format '00100010'.
+
+    Returns:
+        str: Human-readable name of the DICOM tag, or the original tag string if not found.
+    """
+    from pydicom.datadict import dictionary_description  # noqa: PLC0415
+
+    try:
+        # Convert string tag like '00100010' to tuple format (0010,0010)
+        tag_tuple = (int(tag_str[0:4], 16), int(tag_str[4:8], 16))
+        description = dictionary_description(tag_tuple)
+        return description if description else tag_str
+    except KeyError:
+        return tag_str
+
+
 def print_file_info(file_info: dict[str, Any], indent: int = 0) -> None:  # noqa: C901, PLR0912, PLR0915
     """Print formatted file information.
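As a quick illustration of the `get_tag_info` fallback behavior added above (module path as in the hunk; the tag values are chosen for illustration and the resolved name depends on pydicom's data dictionary):

```python
from aignostics.wsi._utils import get_tag_info

print(get_tag_info("00100010"))  # known tag resolves via pydicom, e.g. "Patient's Name"
print(get_tag_info("eeee0010"))  # unknown tag: the KeyError is caught, input echoed back
```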
diff --git a/tests/CLAUDE.md b/tests/CLAUDE.md index 8c98802a..b2d27132 100644 --- a/tests/CLAUDE.md +++ b/tests/CLAUDE.md @@ -14,8 +14,6 @@ tests/ ├── aignostics/ │ ├── platform/ # Platform module tests │ │ ├── authentication_test.py # OAuth flow testing -│ │ ├── sdk_metadata_test.py # SDK metadata system tests (NEW) -│ │ ├── cli_test.py # CLI command testing (includes metadata schema) │ │ ├── resources/ # Resource-specific tests │ │ └── scheduled_test.py # Periodic validation │ ├── application/ # Application orchestration tests @@ -79,525 +77,26 @@ def test_token_refresh_timing(): def test_application_version_formats(): """Test all valid and invalid semver formats.""" valid = [ - "1.0.0", - "1.0.0-alpha", - "1.0.0+meta", - "1.0.0-rc.1+meta" + "app:v1.0.0", + "app:v1.0.0-alpha", + "app:v1.0.0+meta", + "app:v1.0.0-rc.1+meta" ] invalid = [ - "v1.0.0", # 'v' prefix not allowed - "1.0", # Incomplete - "", # Empty string + "app:1.0.0", # Missing v + "app:v1.0", # Incomplete + ":v1.0.0", # Missing app ] for v in valid: - assert service.application_version("test-app", v) + assert service.application_version(v) for v in invalid: with pytest.raises(ValueError): - service.application_version("test-app", v) + service.application_version(v) ``` -### SDK Metadata Testing (`platform/sdk_metadata_test.py`) - -**ENHANCED FEATURE TESTS (Run v0.0.4, Item v0.0.3):** Comprehensive testing of the SDK metadata system with separate Run and Item metadata schemas, tags support, and timestamps. - -**Test Coverage:** - -1. **Metadata Building Tests** - Verify automatic metadata generation in various environments -2. **Schema Validation Tests** - Ensure strict Pydantic validation catches invalid data -3. **CI/CD Integration Tests** - Test GitHub Actions and pytest context capture -4. **Environment Detection Tests** - Verify interface and source detection logic -5. 
**JSON Schema Generation Tests** - Validate schema structure and versioning - -**Clean Environment Fixture:** - -```python -@pytest.fixture -def clean_env(): - """Clean environment for SDK metadata tests.""" - # Save original environment - original_env = os.environ.copy() - - # Clear SDK-related variables - for key in list(os.environ.keys()): - if key.startswith(("GITHUB_", "PYTEST_", "NICEGUI_", "AIGNOSTICS_")): - del os.environ[key] - - yield - - # Restore original environment - os.environ.clear() - os.environ.update(original_env) -``` - -**Metadata Building Tests:** - -```python -class TestBuildSdkMetadata: - """Test cases for build_sdk_metadata function.""" - - def test_build_metadata_minimal(clean_env: None) -> None: - """Test metadata building with minimal environment.""" - metadata = build_sdk_metadata() - - # Required fields always present - assert "schema_version" in metadata - assert metadata["schema_version"] == "0.0.1" - assert "submission" in metadata - assert "user_agent" in metadata - assert metadata["submission"]["interface"] in ["script", "cli", "launchpad"] - assert metadata["submission"]["initiator"] in ["user", "test", "bridge"] - assert "date" in metadata["submission"] - - # Optional fields may be absent - # user, ci, note, workflow, scheduling are optional - - def test_build_metadata_with_github_ci(clean_env: None) -> None: - """Test metadata with GitHub Actions environment.""" - # Set GitHub Actions environment variables - os.environ["GITHUB_RUN_ID"] = "12345" - os.environ["GITHUB_REPOSITORY"] = "aignostics/python-sdk" - os.environ["GITHUB_SHA"] = "abc123def456" # pragma: allowlist secret - os.environ["GITHUB_REF"] = "refs/heads/main" - os.environ["GITHUB_WORKFLOW"] = "CI/CD" - - metadata = build_sdk_metadata() - - # GitHub CI metadata should be present - assert "ci" in metadata - assert "github" in metadata["ci"] - assert metadata["ci"]["github"]["run_id"] == "12345" - assert metadata["ci"]["github"]["repository"] == "aignostics/python-sdk" - assert metadata["ci"]["github"]["sha"] == "abc123def456" # pragma: allowlist secret - assert metadata["ci"]["github"]["run_url"] == ( - "https://github.com/aignostics/python-sdk/actions/runs/12345" - ) - - def test_build_metadata_with_pytest(clean_env: None) -> None: - """Test metadata with pytest environment.""" - os.environ["PYTEST_CURRENT_TEST"] = "tests/platform/sdk_metadata_test.py::test_foo" - os.environ["PYTEST_MARKERS"] = "unit,sequential" - - metadata = build_sdk_metadata() - - # Pytest CI metadata should be present - assert "ci" in metadata - assert "pytest" in metadata["ci"] - assert metadata["ci"]["pytest"]["current_test"] == ( - "tests/platform/sdk_metadata_test.py::test_foo" - ) - assert metadata["ci"]["pytest"]["markers"] == ["unit", "sequential"] - - def test_interface_detection_cli(clean_env: None) -> None: - """Test CLI interface detection.""" - with patch("sys.argv", ["aignostics", "user", "login"]): - metadata = build_sdk_metadata() - assert metadata["submission"]["interface"] == "cli" - - def test_interface_detection_launchpad(clean_env: None) -> None: - """Test launchpad (GUI) interface detection.""" - os.environ["NICEGUI_HOST"] = "localhost" - metadata = build_sdk_metadata() - assert metadata["submission"]["interface"] == "launchpad" - - def test_source_detection_test(clean_env: None) -> None: - """Test source detection for pytest.""" - os.environ["PYTEST_CURRENT_TEST"] = "test.py::test_foo" - metadata = build_sdk_metadata() - assert metadata["submission"]["initiator"] == "test" - - def 
test_source_detection_bridge(clean_env: None) -> None: - """Test source detection for bridge.""" - os.environ["AIGNOSTICS_BRIDGE_VERSION"] = "1.0.0" - metadata = build_sdk_metadata() - assert metadata["submission"]["initiator"] == "bridge" -``` - -**Validation Tests:** - -```python -class TestValidateSdkMetadata: - """Test SDK metadata validation.""" - - def test_validate_valid_metadata(clean_env: None) -> None: - """Test validation of valid metadata.""" - metadata = build_sdk_metadata() - assert validate_sdk_metadata(metadata) is True - assert validate_sdk_metadata_silent(metadata) is True - - def test_validate_missing_required_field() -> None: - """Test validation fails for missing required fields.""" - metadata = { - # Missing schema_version - "submission": { - "date": "2025-10-19T12:00:00Z", - "interface": "script", - "source": "user", - }, - "user_agent": "test/1.0.0" - } - - with pytest.raises(ValidationError): - validate_sdk_metadata(metadata) - - assert validate_sdk_metadata_silent(metadata) is False - - def test_validate_invalid_enum_value() -> None: - """Test validation fails for invalid enum values.""" - metadata = { - "schema_version": "0.0.1", - "submission": { - "date": "2025-10-19T12:00:00Z", - "interface": "invalid_interface", # Invalid enum value - "source": "user", - }, - "user_agent": "test/1.0.0" - } - - with pytest.raises(ValidationError): - validate_sdk_metadata(metadata) - - def test_validate_extra_fields_forbidden() -> None: - """Test validation fails when extra fields are present.""" - metadata = build_sdk_metadata() - metadata["unknown_field"] = "value" # Extra field - - with pytest.raises(ValidationError, match="extra fields not permitted"): - validate_sdk_metadata(metadata) -``` - -**JSON Schema Tests:** - -```python -class TestGetSdkMetadataJsonSchema: - """Test JSON schema generation.""" - - def test_schema_structure() -> None: - """Test JSON schema has required fields.""" - schema = get_sdk_metadata_json_schema() - - assert "$schema" in schema - assert schema["$schema"] == "https://json-schema.org/draft/2020-12/schema" - - assert "$id" in schema - assert ( - schema["$id"] - == f"https://raw.githubusercontent.com/aignostics/python-sdk/main/" - f"docs/source/_static/sdk_metadata_schema_v{SDK_METADATA_SCHEMA_VERSION}.json" - ) - - assert "properties" in schema - assert "required" in schema - - def test_schema_validates_built_metadata(clean_env: None) -> None: - """Test that generated schema validates built metadata.""" - import jsonschema - - schema = get_sdk_metadata_json_schema() - metadata = build_sdk_metadata() - - # Should not raise ValidationError - jsonschema.validate(instance=metadata, schema=schema) -``` - -**CLI Tests (`platform/cli_test.py`):** - -```python -class TestSdkMetadataSchemaCommand: - """Test SDK metadata schema CLI command.""" - - def test_sdk_metadata_schema_pretty(runner: CliRunner) -> None: - """Test schema output with pretty printing.""" - result = runner.invoke(cli_sdk, ["metadata-schema", "--pretty"]) - - assert result.exit_code == 0 - assert "$schema" in result.output - assert "$id" in result.output - assert "sdk_metadata_schema" in result.output - - # Should be valid JSON - schema = json.loads(result.output) - assert schema["$schema"] == "https://json-schema.org/draft/2020-12/schema" - - def test_sdk_metadata_schema_no_pretty(runner: CliRunner) -> None: - """Test schema output without pretty printing (compact).""" - result = runner.invoke(cli_sdk, ["metadata-schema", "--no-pretty"]) - - assert result.exit_code == 0 - # Compact JSON 
(no indentation) - assert "\n " not in result.output or result.output.count("\n") < 10 - - # Should still be valid JSON - schema = json.loads(result.output) - assert "$schema" in schema -``` - -**Integration with Run Submission:** - -Tested in `application/service_test.py` and `application/cli_test.py` to ensure SDK metadata is automatically attached to all run submissions. - -**Key Testing Principles:** - -1. **Clean Environment**: Use `clean_env` fixture to ensure test isolation -2. **Environment Simulation**: Mock GitHub Actions and pytest environments -3. **Validation Strictness**: Test both valid and invalid metadata structures -4. **Schema Consistency**: Verify generated schema validates built metadata -5. **CLI Integration**: Test schema export command -6. **Optional Fields**: Verify system works with missing optional fields -7. **Error Cases**: Test validation catches all invalid inputs - -### Cache Bypass Testing (`platform/nocache_test.py` - NEW in v1.0.0-beta.7) - -**Comprehensive testing of the nocache parameter** for cache bypass functionality across all cached operations. - -**Test Coverage:** - -1. **Decorator Behavior Tests** - Verify @cached_operation decorator handles nocache correctly -2. **Client Method Tests** - Test nocache on Client.me(), Client.application(), Client.application_version() -3. **Resource Method Tests** - Test nocache on Runs.list(), Run.details(), Applications.list() -4. **Edge Case Tests** - Expired cache entries, multiple consecutive nocache calls, interleaved usage -5. **Cache Clear Integration** - Test interaction between nocache and cache clearing - -**Core Testing Principles:** - -```python -class TestNocacheDecoratorBehavior: - """Test the nocache parameter handling in the cached_operation decorator.""" - - def test_decorator_without_nocache_uses_cache() -> None: - """Verify default behavior uses cache.""" - call_count = 0 - - @cached_operation(ttl=60, use_token=False) - def test_func() -> int: - nonlocal call_count - call_count += 1 - return call_count - - # First call - executes function - result1 = test_func() - assert result1 == 1 - assert call_count == 1 - - # Second call - uses cache - result2 = test_func() - assert result2 == 1 # Same value from cache - assert call_count == 1 # Function NOT called again - - def test_decorator_with_nocache_true_skips_reading_cache() -> None: - """Verify nocache=True skips cache read.""" - call_count = 0 - - @cached_operation(ttl=60, use_token=False) - def test_func() -> int: - nonlocal call_count - call_count += 1 - return call_count - - # First call - populates cache - result1 = test_func() - assert result1 == 1 - - # Second call with nocache=True - skips cache, executes function - result2 = test_func(nocache=True) - assert result2 == 2 # NEW value, not from cache - assert call_count == 2 # Function called again - - def test_decorator_with_nocache_true_still_writes_to_cache() -> None: - """Verify nocache=True still writes result to cache.""" - call_count = 0 - - @cached_operation(ttl=60, use_token=False) - def test_func() -> int: - nonlocal call_count - call_count += 1 - return call_count - - # First call - populates cache - result1 = test_func() - assert result1 == 1 - - # Second call with nocache=True - skips read, writes new value - result2 = test_func(nocache=True) - assert result2 == 2 - - # Third call without nocache - uses value cached by second call - result3 = test_func() - assert result3 == 2 # Uses value from second call - assert call_count == 2 # Function NOT called again - - def 
test_decorator_nocache_parameter_not_passed_to_function() -> None: - """Verify nocache is intercepted and not passed to decorated function.""" - received_kwargs = {} - - @cached_operation(ttl=60, use_token=False) - def test_func(**kwargs: bool) -> dict: - nonlocal received_kwargs - received_kwargs = kwargs - return {"called": True} - - # Call with nocache=True - test_func(nocache=True) - - # The decorated function should NOT receive nocache in kwargs - assert "nocache" not in received_kwargs -``` - -**Client Method Testing:** - -```python -class TestClientMeNocache: - """Test nocache parameter for Client.me() method.""" - - def test_me_default_uses_cache( - client_with_mock_api: Client, mock_api_client: MagicMock - ) -> None: - """Verify me() uses cache by default.""" - mock_me_response = {"user_id": "test-user", "org_id": "test-org"} - mock_api_client.get_me_v1_me_get.return_value = mock_me_response - - # First call - result1 = client_with_mock_api.me() - assert result1 == mock_me_response - assert mock_api_client.get_me_v1_me_get.call_count == 1 - - # Second call - should use cache - result2 = client_with_mock_api.me() - assert result2 == mock_me_response - assert mock_api_client.get_me_v1_me_get.call_count == 1 # No additional call - - def test_me_nocache_true_fetches_fresh_data( - client_with_mock_api: Client, mock_api_client: MagicMock - ) -> None: - """Verify me(nocache=True) fetches fresh data.""" - mock_me_response_1 = {"user_id": "user-1"} - mock_me_response_2 = {"user_id": "user-2"} - - # First call - populates cache - mock_api_client.get_me_v1_me_get.return_value = mock_me_response_1 - result1 = client_with_mock_api.me() - assert result1 == mock_me_response_1 - - # Change API response - mock_api_client.get_me_v1_me_get.return_value = mock_me_response_2 - - # Second call with nocache=True - fetches fresh data - result2 = client_with_mock_api.me(nocache=True) - assert result2 == mock_me_response_2 - assert mock_api_client.get_me_v1_me_get.call_count == 2 # Additional call made - - def test_me_nocache_true_updates_cache( - client_with_mock_api: Client, mock_api_client: MagicMock - ) -> None: - """Verify me(nocache=True) updates cache with fresh data.""" - mock_me_response_1 = {"user_id": "user-1"} - mock_me_response_2 = {"user_id": "user-2"} - - # First call - populates cache - mock_api_client.get_me_v1_me_get.return_value = mock_me_response_1 - result1 = client_with_mock_api.me() - - # Change API response - mock_api_client.get_me_v1_me_get.return_value = mock_me_response_2 - - # Second call with nocache=True - fetches and caches new data - result2 = client_with_mock_api.me(nocache=True) - assert result2 == mock_me_response_2 - - # Third call without nocache - uses updated cache - result3 = client_with_mock_api.me() - assert result3 == mock_me_response_2 # Uses new cached value - assert mock_api_client.get_me_v1_me_get.call_count == 2 # No additional call -``` - -**Edge Case Testing:** - -```python -class TestNocacheEdgeCases: - """Test edge cases and special scenarios.""" - - def test_nocache_with_expired_cache_entry() -> None: - """Test nocache behavior when cache entry expired.""" - @cached_operation(ttl=1, use_token=False) # 1 second TTL - def test_func() -> int: - return time.time_ns() - - # First call - populates cache - result1 = test_func() - - # Wait for cache to expire - time.sleep(1.1) - - # Call with nocache=True on expired entry - result2 = test_func(nocache=True) - assert result2 != result1 # Different value - - def test_multiple_consecutive_nocache_calls() -> None: - 
"""Test multiple consecutive calls with nocache=True.""" - call_count = 0 - - @cached_operation(ttl=60, use_token=False) - def test_func() -> int: - nonlocal call_count - call_count += 1 - return call_count - - # Multiple calls with nocache=True - assert test_func(nocache=True) == 1 - assert test_func(nocache=True) == 2 - assert test_func(nocache=True) == 3 - assert call_count == 3 - - # Last call without nocache uses cached value from third call - assert test_func() == 3 - assert call_count == 3 - - def test_nocache_interleaved_with_normal_calls() -> None: - """Test interleaving nocache=True with normal cached calls.""" - call_count = 0 - - @cached_operation(ttl=60, use_token=False) - def test_func() -> int: - nonlocal call_count - call_count += 1 - return call_count - - # Normal call - populates cache - assert test_func() == 1 - assert call_count == 1 - - # Normal call - uses cache - assert test_func() == 1 - assert call_count == 1 - - # Nocache call - skips cache, updates it - assert test_func(nocache=True) == 2 - assert call_count == 2 - - # Normal call - uses updated cache - assert test_func() == 2 - assert call_count == 2 -``` - -**Key Testing Principles:** - -1. **Cache Read Bypass**: nocache=True skips reading from cache -2. **Cache Write Preserved**: nocache=True still writes to cache -3. **Parameter Interception**: nocache parameter intercepted by decorator, not passed to function -4. **Cache Key Isolation**: nocache respects different cache keys (different function args) -5. **Edge Case Coverage**: Expired entries, multiple consecutive calls, interleaved usage -6. **Integration Testing**: Test across all cached Client and Resource methods -7. **Signature Verification**: Test method signatures include nocache parameter with correct type hints - -**Use Cases Tested:** - -* **Testing**: Avoid race conditions from stale cached data -* **Real-time Monitoring**: Ensure latest status in dashboards -* **After Mutations**: Get fresh data immediately after updates -* **Cache Refresh**: Force cache update without full cache clear - ### Process Management Testing (`dataset/service_test.py`) **Subprocess Cleanup Verification:** diff --git a/tests/aignostics/application/TC-APPLICATION-CLI-01.feature b/tests/aignostics/application/TC-APPLICATION-CLI-01.feature deleted file mode 100644 index a4706342..00000000 --- a/tests/aignostics/application/TC-APPLICATION-CLI-01.feature +++ /dev/null @@ -1,18 +0,0 @@ -Feature: Application Run Input Validation - - The system validates slide image resolution parameters during application - run submission to reject inputs that exceed application limits. 
- - @tests:SPEC-APPLICATION-SERVICE - @tests:SWR-APPLICATION-2-1 - @tests:SWR-APPLICATION-2-2 - @tests:SWR-APPLICATION-2-3 - @tests:SWR-APPLICATION-2-4 - @tests:SWR-APPLICATION-2-13 - @tests:SWR-APPLICATION-2-14 - @id:TC-APPLICATION-CLI-01 - Scenario: System rejects application run submission when slide resolution exceeds limits - Given the user provides slide metadata with resolution exceeding application limits - When the user uploads slides and submits application run - Then the system shall reject the submission with validation error - And the system shall indicate the resolution parameter exceeds allowed limits diff --git a/tests/aignostics/application/TC-APPLICATION-CLI-02.feature b/tests/aignostics/application/TC-APPLICATION-CLI-02.feature deleted file mode 100644 index c324c38b..00000000 --- a/tests/aignostics/application/TC-APPLICATION-CLI-02.feature +++ /dev/null @@ -1,28 +0,0 @@ -Feature: Application Run CLI Commands - - The system provides CLI commands for basic application run operations - including submission, status inquiry, cancellation, and result download - with proper functionality across different run states. - - - @tests:SPEC-APPLICATION-SERVICE - @tests:SWR-APPLICATION-2-5 - @tests:SWR-APPLICATION-2-6 - @tests:SWR-APPLICATION-2-7 - @tests:SWR-APPLICATION-3-1 - @id:TC-APPLICATION-CLI-02 - Scenario: System processes CLI commands for run management operations - Given the system receives a run submission request via CLI - When the system processes the submission - Then the system shall create a run and return a unique run identifier - When the system receives a describe command for the run - Then the system shall return run details and current status - When the system receives a download command for the active run - Then the system shall download results and indicate running state - When the system receives a cancel command for the run - Then the system shall cancel the run and confirm the operation - When the system receives another describe command for the canceled run - Then the system shall return updated status showing canceled state - When the system receives a download command for the canceled run - Then the system shall download results and indicate canceled state - And the system shall handle path verification for download destinations diff --git a/tests/aignostics/application/TC-APPLICATION-CLI-03.feature b/tests/aignostics/application/TC-APPLICATION-CLI-03.feature deleted file mode 100644 index 4df62b54..00000000 --- a/tests/aignostics/application/TC-APPLICATION-CLI-03.feature +++ /dev/null @@ -1,26 +0,0 @@ -Feature: Complete Application Execution Workflow - - The system supports end-to-end application execution including dataset download, - application selection, run execution, and result retrieval with automated - processing and output validation. 
- - @tests:SPEC-APPLICATION-SERVICE - @tests:SWR-APPLICATION-2-8 - @tests:SWR-APPLICATION-2-9 - @tests:SWR-APPLICATION-3-2 - @id:TC-APPLICATION-CLI-03 - Scenario: System processes complete slide analysis through sequential user actions - Given the system provides dataset download capabilities - When the user requests sample dataset download - Then the system shall download the requested slide data files - When the user selects an application for processing - Then the system shall prepare the application execution environment - When the user triggers run execution with processing parameters - Then the system shall automatically generate slide metadata - And the system shall upload slides to the processing platform - And the system shall submit slides for application processing - And the system shall monitor processing until completion - When the user requests result retrieval - Then the system shall download comprehensive analysis results - And the system shall generate multiple output artifact types - And the system shall validate all output files for integrity diff --git a/tests/aignostics/application/TC-APPLICATION-GUI-04.feature b/tests/aignostics/application/TC-APPLICATION-GUI-04.feature deleted file mode 100644 index 79a055c0..00000000 --- a/tests/aignostics/application/TC-APPLICATION-GUI-04.feature +++ /dev/null @@ -1,30 +0,0 @@ -Feature: GUI Application Workflow Management - - The system provides complete graphical interface for application workflow - management including dataset selection, metadata generation, file upload, - run submission, status monitoring, and run control operations including - user-initiated cancellation. - - @tests:SPEC-APPLICATION-SERVICE - @tests:SWR-APPLICATION-1-1 - @tests:SWR-APPLICATION-3-3 - @tests:SWR-APPLICATION-1-2 - @tests:SWR-APPLICATION-2-10 - @tests:SWR-APPLICATION-2-11 - @tests:SWR-APPLICATION-2-12 - @tests:SWR-APPLICATION-2-15 - @tests:SWR-APPLICATION-2-16 - @id:TC-APPLICATION-GUI-04 - Scenario: System processes user manual cancellation of application runs through complete GUI workflow - Given the system completes full application workflow through GUI interface - And the system downloads sample dataset files successfully - And the system navigates through application selection and file picking - And the system processes metadata generation and slide detection - And the system completes upload and submission creating a running application run - And the system displays run with running status and cancellation controls - When the user manually cancels the running application run through GUI button - Then the system shall process the manual cancellation request - And the system shall provide user feedback during cancellation process - And the system shall confirm cancellation completion to user - And the system shall update the run status to canceled state - And the system shall maintain the updated state in the interface diff --git a/tests/aignostics/application/cli_test.py b/tests/aignostics/application/cli_test.py index 3c0a6c3e..2630ce43 100644 --- a/tests/aignostics/application/cli_test.py +++ b/tests/aignostics/application/cli_test.py @@ -2,9 +2,7 @@ import platform import re -from datetime import UTC, datetime, timedelta from pathlib import Path -from time import sleep import pytest from typer.testing import CliRunner @@ -13,105 +11,56 @@ from aignostics.cli import cli from aignostics.utils import sanitize_path from tests.conftest import normalize_output, print_directory_structure -from tests.constants_test import ( - HETA_APPLICATION_ID, - 
HETA_APPLICATION_VERSION,
-    SPOT_1_EXPECTED_RESULT_FILES,
-    SPOT_1_FILENAME,
-    SPOT_1_FILESIZE,
-    SPOT_1_GS_URL,
-    TEST_APPLICATION_ID,
-    TEST_APPLICATION_VERSION,
-)
+from tests.contants_test import HETA_APPLICATION_ID, TEST_APPLICATION_ID
 
 MESSAGE_RUN_NOT_FOUND = "Warning: Run with ID '4711' not found"
 
-TEST_APPLICATION_DEADLINE_SECONDS = 60 * 45  # 45 minutes
-TEST_APPLICATION_DUE_DATE_SECONDS = 60 * 10  # 10 minutes
-HETA_APPLICATION_DUE_DATE_SECONDS = 60 * 60 * 1  # 1 hour
-HETA_APPLICATION_DEADLINE_SECONDS = 60 * 60 * 3  # 3 hours
-
-
-@pytest.mark.e2e
-@pytest.mark.timeout(timeout=60)
-def test_cli_application_list_non_verbose(runner: CliRunner, record_property) -> None:
+def test_cli_application_list_non_verbose(runner: CliRunner) -> None:
     """Check application list command runs successfully."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE")
     result = runner.invoke(cli, ["application", "list"])
     assert result.exit_code == 0
     assert HETA_APPLICATION_ID in normalize_output(result.output)
     assert TEST_APPLICATION_ID in normalize_output(result.output)
 
 
-@pytest.mark.e2e
-@pytest.mark.scheduled
-@pytest.mark.timeout(timeout=60)
-def test_cli_application_list_verbose(runner: CliRunner, record_property) -> None:
+def test_cli_application_list_verbose(runner: CliRunner) -> None:
     """Check application list command runs successfully."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE")
     result = runner.invoke(cli, ["application", "list", "--verbose"])
     assert result.exit_code == 0
     assert HETA_APPLICATION_ID in normalize_output(result.output)
-    assert HETA_APPLICATION_VERSION in normalize_output(result.output)
     assert "Artifacts: 1 input(s), 6 output(s)" in normalize_output(result.output)
     assert TEST_APPLICATION_ID in normalize_output(result.output)
-    assert TEST_APPLICATION_VERSION in normalize_output(result.output)
 
 
-@pytest.mark.e2e
-@pytest.mark.timeout(timeout=60)
-def test_cli_application_describe_success(runner: CliRunner, record_property) -> None:
+def test_cli_application_describe(runner: CliRunner) -> None:
     """Check application describe command runs successfully."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE")
     result = runner.invoke(cli, ["application", "describe", HETA_APPLICATION_ID])
     assert result.exit_code == 0
-
-
-@pytest.mark.e2e
-@pytest.mark.timeout(timeout=60)
-def test_cli_application_describe_verbose(runner: CliRunner) -> None:
-    """Check application describe command runs successfully."""
-    result = runner.invoke(cli, ["application", "describe", HETA_APPLICATION_ID, "--verbose"])
-    assert result.exit_code == 0
     assert "tissue_qc:geojson_polygons" in normalize_output(result.output)
 
 
-@pytest.mark.e2e
-@pytest.mark.timeout(timeout=60)
-def test_cli_application_describe_not_found(runner: CliRunner, record_property) -> None:
-    """Check application describe command fails as expected on unknown application."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE")
+def test_cli_application_describe_not_found(runner: CliRunner) -> None:
+    """Check application describe command fails as expected on unknown application."""
     result = runner.invoke(cli, ["application", "describe", "unknown"])
     assert result.exit_code == 2
     assert "Application with ID 'unknown' not found." in normalize_output(result.output)
 
 
-@pytest.mark.e2e
-@pytest.mark.timeout(timeout=60)
-def test_cli_application_dump_schemata(runner: CliRunner, tmp_path: Path, record_property) -> None:
+def test_cli_application_dump_schemata(runner: CliRunner, tmp_path: Path) -> None:
     """Check application dump schemata works as expected."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE")
     result = runner.invoke(
         cli, ["application", "dump-schemata", HETA_APPLICATION_ID, "--destination", str(tmp_path), "--zip"]
     )
-    application_version = ApplicationService().application_version(HETA_APPLICATION_ID)
+    application_version = ApplicationService().application_version(HETA_APPLICATION_ID, True)
     assert result.exit_code == 0
     assert "Zipped 11 files" in normalize_output(result.output)
-    zip_file = sanitize_path(
-        Path(tmp_path / f"{HETA_APPLICATION_ID}_{application_version.version_number}_schemata.zip")
-    )
+    zip_file = sanitize_path(Path(tmp_path / f"{application_version.application_version_id}_schemata.zip"))
     assert zip_file.exists(), f"Expected zip file {zip_file} not found"
 
 
-@pytest.mark.e2e
-@pytest.mark.timeout(timeout=60)
-def test_cli_application_run_prepare_upload_submit_fail_on_mpp(
-    runner: CliRunner, tmp_path: Path, record_property
-) -> None:
+def test_cli_application_run_prepare_upload_submit_fail_on_mpp(runner: CliRunner, tmp_path: Path) -> None:
     """Check application run prepare command and upload works and submit fails on mpp not supported."""
-    record_property("tested-item-id", "TC-APPLICATION-CLI-01")
     # Step 1: Prepare the file, by scanning for wsi and generating metadata
     source_directory = Path(__file__).parent.parent.parent / "resources" / "run"
     metadata_csv = tmp_path / "metadata.csv"
@@ -122,16 +71,16 @@ def test_cli_application_run_prepare_upload_submit_fail_on_mpp(
     assert metadata_csv.exists()
     assert (
         metadata_csv.read_text()
-        == "external_id;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;"
+        == "reference;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;"
         "platform_bucket_url\n"
         f"{source_directory / 'small-pyramidal.dcm'};"
         "EfIIhA==;8.065226874391001;2054;1529;;;;\n"
     )
 
-    # Step 2: Simulate user now upgrading the metadata.csv file, by setting the tissue to "LUNG"
+    # Step 2: Simulate user now updating the metadata.csv file, by setting the tissue to "LUNG"
     # and disease to "LUNG_CANCER"
     metadata_csv.write_text(
-        "external_id;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;"
+        "reference;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;"
         "platform_bucket_url\n"
         f"{source_directory / 'small-pyramidal.dcm'};"
        "EfIIhA==;8.065226874391001;2054;1529;H&E;LUNG;LUNG_CANCER;\n"
     )
@@ -149,14 +98,11 @@ def test_cli_application_run_prepare_upload_submit_fail_on_mpp(
     assert "8.065226874391001 is greater than" in normalize_output(result.stdout)
 
 
-@pytest.mark.integration
-@pytest.mark.timeout(timeout=60)
-def test_cli_application_run_upload_fails_on_missing_source(runner: CliRunner, tmp_path: Path, record_property) -> None:
+def test_cli_application_run_upload_fails_on_missing_source(runner: CliRunner, tmp_path: Path) -> None:
     """Check application run upload fails as expected when the source file is missing."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE")
     metadata_csv = tmp_path / "metadata.csv"
metadata_csv.write_text( - "external_id;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;" + "reference;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;" "platform_bucket_url\n" "missing.file;" "EfIIhA==;8.065226874391001;2054;1529;H&E;LUNG;LUNG_CANCER;\n" @@ -167,106 +113,56 @@ def test_cli_application_run_upload_fails_on_missing_source(runner: CliRunner, t assert "Warning: Source file 'missing.file' (row 0) does not exist" in normalize_output(result.stdout) -@pytest.mark.e2e -@pytest.mark.timeout(timeout=60) -def test_cli_run_submit_fails_on_application_not_found(runner: CliRunner, tmp_path: Path, record_property) -> None: +def test_cli_run_submit_fails_on_application_not_found(runner: CliRunner, tmp_path: Path) -> None: """Check run submit command fails as expected.""" - record_property("tested-item-id", "SPEC-APPLICATION-SERVICE") - csv_content = "external_id;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;" + csv_content = "reference;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;" csv_content += "platform_bucket_url\n" csv_content += ";5onqtA==;0.26268186053789266;7447;7196;H&E;LUNG;LUNG_CANCER;gs://bucket/test" csv_path = tmp_path / "dummy.csv" csv_path.write_text(csv_content) - result = runner.invoke( - cli, - [ - "application", - "run", - "submit", - "wrong", - str(csv_path), - "--deadline", - (datetime.now(tz=UTC) + timedelta(minutes=10)).isoformat(), - ], - ) + result = runner.invoke(cli, ["application", "run", "submit", "wrong:v1.2.3", str(csv_path)]) - assert result.exit_code == 2 - assert 'HTTP response body: {"detail":"application not found"}' in normalize_output(result.stdout) - assert "Warning: Could not find application" in normalize_output(result.stdout) - assert result.exit_code == 2 + assert result.exit_code == 1 + assert "Error: Failed to create run for application version" in normalize_output(result.stdout) -@pytest.mark.e2e -@pytest.mark.timeout(timeout=60) -def test_cli_run_submit_fails_on_unsupported_cloud(runner: CliRunner, tmp_path: Path, record_property) -> None: +def test_cli_run_submit_fails_on_unsupported_cloud(runner: CliRunner, tmp_path: Path) -> None: """Check run submit command fails as expected.""" - record_property("tested-item-id", "SPEC-APPLICATION-SERVICE") - csv_content = "external_id;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;" + csv_content = "reference;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;" csv_content += "platform_bucket_url\n" csv_content += ";5onqtA==;0.26268186053789266;7447;7196;H&E;LUNG;LUNG_CANCER;aws://bucket/test" csv_path = tmp_path / "dummy.csv" csv_path.write_text(csv_content) - result = runner.invoke( - cli, - [ - "application", - "run", - "submit", - HETA_APPLICATION_ID, - str(csv_path), - "--deadline", - (datetime.now(tz=UTC) + timedelta(minutes=10)).isoformat(), - ], - ) + result = runner.invoke(cli, ["application", "run", "submit", HETA_APPLICATION_ID, str(csv_path)]) assert result.exit_code == 2 assert "Invalid platform bucket URL: 'aws://bucket/test'" in normalize_output(result.stdout) -@pytest.mark.e2e -@pytest.mark.timeout(timeout=60) -def test_cli_run_submit_fails_on_missing_url(runner: CliRunner, tmp_path: Path, record_property) -> None: +def test_cli_run_submit_fails_on_missing_url(runner: CliRunner, tmp_path: Path) -> None: """Check run submit command fails as 
expected.""" - record_property("tested-item-id", "SPEC-APPLICATION-SERVICE") - csv_content = "external_id;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;" + csv_content = "reference;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;" csv_content += "platform_bucket_url\n" csv_content += ";5onqtA==;0.26268186053789266;7447;7196;H&E;LUNG;LUNG_CANCER;" csv_path = tmp_path / "dummy.csv" csv_path.write_text(csv_content) - result = runner.invoke( - cli, - [ - "application", - "run", - "submit", - HETA_APPLICATION_ID, - str(csv_path), - "--deadline", - (datetime.now(tz=UTC) + timedelta(minutes=10)).isoformat(), - ], - ) + result = runner.invoke(cli, ["application", "run", "submit", HETA_APPLICATION_ID, str(csv_path)]) assert result.exit_code == 2 assert "Invalid platform bucket URL: ''" in normalize_output(result.stdout) -@pytest.mark.e2e -@pytest.mark.long_running -@pytest.mark.flaky(retries=3, delay=5) -@pytest.mark.timeout(timeout=60 * 10) -def test_cli_run_submit_and_describe_and_cancel_and_download_and_delete( # noqa: PLR0915 - runner: CliRunner, tmp_path: Path, silent_logging, record_property -) -> None: +def test_cli_run_submit_and_describe_and_cancel_and_download_and_delete(runner: CliRunner, tmp_path: Path) -> None: """Check run submit command runs successfully.""" - record_property("tested-item-id", "TC-APPLICATION-CLI-02") - csv_content = "external_id;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;" + csv_content = "reference;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;" csv_content += "platform_bucket_url\n" csv_content += ";5onqtA==;0.26268186053789266;7447;7196;H&E;LUNG;LUNG_CANCER;gs://bucket/test" csv_path = tmp_path / "dummy.csv" csv_path.write_text(csv_content) + result = runner.invoke( cli, [ @@ -276,12 +172,7 @@ def test_cli_run_submit_and_describe_and_cancel_and_download_and_delete( # noqa HETA_APPLICATION_ID, str(csv_path), "--note", - "note_of_this_complex_test", - "--tags", - "cli-test,test_cli_run_submit_and_describe_and_cancel_and_download_and_delete,further-tag", - "--deadline", - (datetime.now(tz=UTC) + timedelta(minutes=10)).isoformat(), - "--validate-only", + "test_cli_run_submit_and_describe_and_cancel_and_download_and_delete", ], ) output = normalize_output(result.stdout) @@ -296,182 +187,11 @@ def test_cli_run_submit_and_describe_and_cancel_and_download_and_delete( # noqa assert run_id_match, f"Failed to extract run ID from output '{output}'" run_id = run_id_match.group(1) - # TODO (Andreas): Causes internal server errors on some runs - if False: - # Test that we can find this run by it's note via the query parameter - list_result = runner.invoke( - cli, - [ - "application", - "run", - "list", - "--query", - "note_of_this_complex_test", - ], - ) - assert list_result.exit_code == 0 - list_output = normalize_output(list_result.stdout) - assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by note via query" - - # Test that we can find this run by it's tag via the query parameter - list_result = runner.invoke( - cli, - [ - "application", - "run", - "list", - "--query", - "test_cli_run_submit_and_describe_and_cancel_and_download_and_delete", - ], - ) - assert list_result.exit_code == 0 - list_output = normalize_output(list_result.stdout) - assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by tag via query" - - # Test that we cannot find this run by another tag 
via the query parameter - list_result = runner.invoke( - cli, - [ - "application", - "run", - "list", - "--query", - "another_tag", - ], - ) - assert list_result.exit_code == 0 - list_output = normalize_output(list_result.stdout) - assert run_id not in list_output, f"Run ID '{run_id}' found when filtering by another tag via query" - - # Test that we can find this run by it's note - list_result = runner.invoke( - cli, - [ - "application", - "run", - "list", - "--note-regex", - "note_of_this_complex_test", - ], - ) - assert list_result.exit_code == 0 - list_output = normalize_output(list_result.stdout) - assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by note" - - # but not another note - list_result = runner.invoke( - cli, - [ - "application", - "run", - "list", - "--note-regex", - "other_note", - ], - ) - assert list_result.exit_code == 0 - list_output = normalize_output(list_result.stdout) - assert run_id not in list_output, f"Run ID '{run_id}' found when filtering by other note" - - # Test that we can find this run by one of its tags - list_result = runner.invoke( - cli, - [ - "application", - "run", - "list", - "--tags", - "test_cli_run_submit_and_describe_and_cancel_and_download_and_delete", - ], - ) - assert list_result.exit_code == 0 - list_output = normalize_output(list_result.stdout) - assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by one tag" - - # but not another tag - list_result = runner.invoke( - cli, - [ - "application", - "run", - "list", - "--tags", - "other-tag", - ], - ) - assert list_result.exit_code == 0 - list_output = normalize_output(list_result.stdout) - assert run_id not in list_output, f"Run ID '{run_id}' found when filtering by other tag" - - # Test that we can find this run by two of its tags - list_result = runner.invoke( - cli, - [ - "application", - "run", - "list", - "--tags", - "cli-test,test_cli_run_submit_and_describe_and_cancel_and_download_and_delete", - ], - ) - assert list_result.exit_code == 0 - list_output = normalize_output(list_result.stdout) - assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by two tags" - - # Test that we can find this run by all of its tags - list_result = runner.invoke( - cli, - [ - "application", - "run", - "list", - "--tags", - "cli-test,test_cli_run_submit_and_describe_and_cancel_and_download_and_delete,further-tag", - ], - ) - assert list_result.exit_code == 0 - list_output = normalize_output(list_result.stdout) - assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by all tags" - - # Test that we cannot find this run by all of its tags and a non-existent tag - list_result = runner.invoke( - cli, - [ - "application", - "run", - "list", - "--tags", - "cli-test,test_cli_run_submit_and_describe_and_cancel_and_download_and_delete,further-tag,non-existing-tag", - ], - ) - assert list_result.exit_code == 0 - list_output = normalize_output(list_result.stdout) - assert run_id not in list_output, f"Run ID '{run_id}' found when filtering by all tags" - - # Test that we can find this run by all of its tags and it's note - list_result = runner.invoke( - cli, - [ - "application", - "run", - "list", - "--note-regex", - "note_of_this_complex_test", - "--tags", - "cli-test,test_cli_run_submit_and_describe_and_cancel_and_download_and_delete,further-tag", - ], - ) - assert list_result.exit_code == 0 - list_output = normalize_output(list_result.stdout) - assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by all tags 
and note" - # Test the describe command with the extracted run ID describe_result = runner.invoke(cli, ["application", "run", "describe", run_id]) assert describe_result.exit_code == 0 assert f"Run Details for {run_id}" in normalize_output(describe_result.stdout) - assert "Status (Termination Reason): PENDING" in normalize_output( - describe_result.stdout - ) or "Status (Termination Reason): PROCESSING" in normalize_output(describe_result.stdout) + assert "Status: RUNNING" in normalize_output(describe_result.stdout) assert "test_cli_run_submit_and_describe_and_cancel_and_download_and_delete" in normalize_output( describe_result.stdout ) @@ -482,6 +202,7 @@ def test_cli_run_submit_and_describe_and_cancel_and_download_and_delete( # noqa ) assert download_result.exit_code == 0 assert f"Downloaded results of run '{run_id}'" in normalize_output(download_result.stdout) + assert "status: running on plat" in normalize_output(download_result.stdout) # Test the cancel command with the extracted run ID cancel_result = runner.invoke(cli, ["application", "run", "cancel", run_id]) @@ -492,17 +213,14 @@ def test_cli_run_submit_and_describe_and_cancel_and_download_and_delete( # noqa describe_result = runner.invoke(cli, ["application", "run", "describe", run_id]) assert describe_result.exit_code == 0 assert f"Run Details for {run_id}" in normalize_output(describe_result.stdout) - assert "Status (Termination Reason): TERMINATED (RunTerminationReason.CANCELED_BY_USER)" in normalize_output( - describe_result.stdout - ) + assert "Status: CANCELED_USER" in normalize_output(describe_result.stdout) download_result = runner.invoke(cli, ["application", "run", "result", "download", run_id, str(tmp_path)]) assert download_result.exit_code == 0 # Verify the download message and path assert f"Downloaded results of run '{run_id}'" in normalize_output(download_result.stdout) - # TODO(andreas): Would also be great to check if it is canceled by user - assert "status: terminated" in normalize_output(download_result.stdout) + assert "status: canceled by user." in normalize_output(download_result.stdout) # More robust path verification - normalize paths and check if the destination path is mentioned in the output normalized_tmp_path = str(Path(tmp_path).resolve()) @@ -529,9 +247,7 @@ def test_cli_run_submit_and_describe_and_cancel_and_download_and_delete( # noqa describe_result = runner.invoke(cli, ["application", "run", "describe", run_id]) assert describe_result.exit_code == 0 assert f"Run Details for {run_id}" in normalize_output(describe_result.stdout) - assert "Status (Termination Reason): TERMINATED (RunTerminationReason.CANCELED_BY_USER)" in normalize_output( - describe_result.stdout - ) + assert "Status: CANCELED_USER" in normalize_output(describe_result.stdout) # TODO(Helmut): Activate when PAPI fixed @@ -539,12 +255,8 @@ def test_cli_run_submit_and_describe_and_cancel_and_download_and_delete( # noqa # assert f"Run with id '{run_id}' not found." 
in normalize_output(describe_result.stdout) # noqa: ERA001 -@pytest.mark.e2e -@pytest.mark.scheduled -@pytest.mark.timeout(timeout=60) -def test_cli_run_list_limit_10(runner: CliRunner, record_property) -> None: +def test_cli_run_list_limit_10(runner: CliRunner) -> None: """Check run list command runs successfully.""" - record_property("tested-item-id", "SPEC-APPLICATION-SERVICE") result = runner.invoke(cli, ["application", "run", "list", "--limit", "10"]) assert result.exit_code == 0 output = normalize_output(result.stdout) @@ -556,76 +268,56 @@ def test_cli_run_list_limit_10(runner: CliRunner, record_property) -> None: assert displayed_count <= 10, f"Expected listed count to be <= 10, but got {displayed_count}" -@pytest.mark.e2e -@pytest.mark.timeout(timeout=60) -def test_cli_run_list_verbose_limit_1(runner: CliRunner, record_property) -> None: +def test_cli_run_list_verbose_limit_1(runner: CliRunner) -> None: """Check run list command runs successfully.""" - record_property("tested-item-id", "SPEC-APPLICATION-SERVICE") result = runner.invoke(cli, ["application", "run", "list", "--verbose", "--limit", "1"]) assert result.exit_code == 0 output = normalize_output(result.stdout) assert "Application Runs:" in output - assert "Statistics:" in output + assert "Item Status Counts:" in output match = re.search(r"Listed '(\d+)' run\(s\)\.", output) assert match, "Expected run count message not found" displayed_count = int(match.group(1)) assert displayed_count == 1, f"Expected listed count to be == 1, but got {displayed_count}" -# TODO(Andreas): This previously failed as invalid run id. Is it expected this now calls the API? -@pytest.mark.e2e -@pytest.mark.timeout(timeout=60) -def test_cli_run_describe_invalid_uuid(runner: CliRunner, record_property) -> None: +def test_cli_run_describe_invalid_uuid(runner: CliRunner) -> None: """Check run describe command fails as expected on run not found.""" - record_property("tested-item-id", "SPEC-APPLICATION-SERVICE") result = runner.invoke(cli, ["application", "run", "describe", "4711"]) assert result.exit_code == 1 assert "Error: Failed to retrieve run details for ID '4711'" in normalize_output(result.stdout) -@pytest.mark.e2e -@pytest.mark.timeout(timeout=60) -def test_cli_run_describe_not_found(runner: CliRunner, record_property) -> None: +def test_cli_run_describe_not_found(runner: CliRunner) -> None: """Check run describe command fails as expected on run not found.""" - record_property("tested-item-id", "SPEC-APPLICATION-SERVICE") result = runner.invoke(cli, ["application", "run", "describe", "00000000000000000000000000000000"]) assert result.exit_code == 2 assert "Warning: Run with ID '00000000000000000000000000000000' not found." 
in normalize_output(result.stdout)
 
 
-@pytest.mark.e2e
-def test_cli_run_cancel_invalid_run_id(runner: CliRunner, record_property) -> None:
+def test_cli_run_cancel_invalid_run_id(runner: CliRunner) -> None:
     """Check run cancel command fails as expected on invalid run ID."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE")
     result = runner.invoke(cli, ["application", "run", "cancel", "4711"])
-    assert "Run ID '4711' invalid" in normalize_output(result.stdout)
-    assert result.exit_code == 2
+    assert result.exit_code == 1
+    assert "Failed to cancel run with ID '4711'" in normalize_output(result.stdout)
 
 
-@pytest.mark.e2e
-@pytest.mark.timeout(timeout=60)
-def test_cli_run_cancel_not_found(runner: CliRunner, record_property) -> None:
+def test_cli_run_cancel_not_found(runner: CliRunner) -> None:
     """Check run cancel command fails as expected on run not found."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE")
     result = runner.invoke(cli, ["application", "run", "cancel", "00000000000000000000000000000000"])
-    assert "Warning: Run with ID '00000000000000000000000000000000' not found." in normalize_output(result.stdout)
     assert result.exit_code == 2
+    assert "Warning: Run with ID '00000000000000000000000000000000' not found." in normalize_output(result.stdout)
 
 
-@pytest.mark.e2e
-def test_cli_run_result_download_invalid_uuid(runner: CliRunner, tmp_path: Path, record_property) -> None:
+def test_cli_run_result_download_invalid_uuid(runner: CliRunner, tmp_path: Path) -> None:
     """Check run result download command fails on invalid UUID."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE")
     result = runner.invoke(cli, ["application", "run", "result", "download", "4711", str(tmp_path)])
     assert result.exit_code == 2
     assert "Run ID '4711' invalid" in normalize_output(result.stdout)
 
 
-@pytest.mark.e2e
-@pytest.mark.timeout(timeout=60)
-def test_cli_run_result_download_uuid_not_found(runner: CliRunner, tmp_path: Path, record_property) -> None:
+def test_cli_run_result_download_uuid_not_found(runner: CliRunner, tmp_path: Path) -> None:
     """Check run result download fails on ID not found."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE")
     result = runner.invoke(
         cli, ["application", "run", "result", "download", "00000000000000000000000000000000", str(tmp_path)]
     )
@@ -633,34 +325,16 @@ def test_cli_run_result_download_uuid_not_found(runner: CliRunner, tmp_path: Pat
     assert result.exit_code == 2
 
 
-# TODO(Andreas): Please check API
-@pytest.mark.skip(reason="API currently returns permission denied, not 404")
-@pytest.mark.e2e
-@pytest.mark.timeout(timeout=60)
-def test_cli_run_result_delete_not_found(runner: CliRunner, record_property) -> None:
-    """Check run result delete command runs successfully."""
-    result = runner.invoke(cli, ["application", "run", "result", "delete", "00000000000000000000000000000000"])
-    assert "Run with ID '00000000000000000000000000000000' not found." in normalize_output(result.stdout)
-    assert result.exit_code == 2
-
-
-@pytest.mark.integration
-def test_cli_run_result_delete_fails_on_no_arg(runner: CliRunner, record_property) -> None:
+def test_cli_run_result_delete_fails_on_no_arg(runner: CliRunner) -> None:
     """Check run result delete command fails as expected when the run ID argument is missing."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE")
     result = runner.invoke(cli, ["application", "run", "result", "delete"])
     assert "Missing argument 'RUN_ID'."
in normalize_output(result.stderr) assert result.exit_code == 2 -# TODO (Helmut): Run this test on a schedule when the GPU ressourcing situation and PAPI pipeline reliabilty improved -@pytest.mark.e2e -@pytest.mark.very_long_running -@pytest.mark.timeout(timeout=HETA_APPLICATION_DEADLINE_SECONDS + 60 * 30) -def test_cli_run_execute(runner: CliRunner, tmp_path: Path, record_property) -> None: +@pytest.mark.long_running +def test_cli_run_execute(runner: CliRunner, tmp_path: Path) -> None: """Check run execution runs e2e.""" - record_property("tested-item-id", "TC-APPLICATION-CLI-03") - # Step 1: Download the sample file result = runner.invoke( cli, @@ -668,25 +342,17 @@ def test_cli_run_execute(runner: CliRunner, tmp_path: Path, record_property) -> "dataset", "aignostics", "download", - SPOT_1_GS_URL, + "gs://aignx-storage-service-dev/sample_data_formatted/9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff", str(tmp_path), ], ) - - # Explore what was download print_directory_structure(tmp_path, "download") - - # Validate what was downloaded + assert result.exit_code == 0 assert "Successfully downloaded" in normalize_output(result.stdout) - assert SPOT_1_FILENAME in normalize_output(result.stdout) - expected_file = tmp_path / SPOT_1_FILENAME + assert "9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff" in normalize_output(result.stdout) + expected_file = tmp_path / "9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff" assert expected_file.exists(), f"Expected file {expected_file} not found" - assert expected_file.stat().st_size == SPOT_1_FILESIZE, ( - f"Expected file size {SPOT_1_FILESIZE}, but got {expected_file.stat().st_size}" - ) - - # Validate the download command exited successfully - assert result.exit_code == 0 + assert expected_file.stat().st_size == 14681750 # Step 2: Execute the run, i.e. 
prepare, amend, upload, submit and download the results result = runner.invoke( @@ -700,38 +366,37 @@ def test_cli_run_execute(runner: CliRunner, tmp_path: Path, record_property) -> str(tmp_path), ".*\\.tiff:staining_method=H&E,tissue=LUNG,disease=LUNG_CANCER", "--no-create-subdirectory-for-run", - "--due-date", - (datetime.now(tz=UTC) + timedelta(seconds=HETA_APPLICATION_DUE_DATE_SECONDS)).isoformat(), - "--deadline", - (datetime.now(tz=UTC) + timedelta(seconds=HETA_APPLICATION_DEADLINE_SECONDS)).isoformat(), - "--validate-only", ], ) - - # Explore what was download print_directory_structure(tmp_path, "execute") - - # Validate no input dir, given we used an external id pointing to a local file - input_dir = tmp_path / "input" - assert not input_dir.is_dir(), f"Expected input directory {input_dir} not found" - - # Validate results generated and downloaded - results_dir = tmp_path / SPOT_1_FILENAME.replace(".tiff", "") - assert results_dir.is_dir(), f"Expected directory {results_dir} not found" - files_in_dir = list(results_dir.glob("*")) + assert result.exit_code == 0 + item_out_dir = tmp_path / "9375e3ed-28d2-4cf3-9fb9-8df9d11a6627" + assert item_out_dir.is_dir(), f"Expected directory {item_out_dir} not found" + files_in_dir = list(item_out_dir.glob("*")) assert len(files_in_dir) == 9, ( - f"Expected 9 files in {results_dir}, but found {len(files_in_dir)}: {[f.name for f in files_in_dir]}" + f"Expected 9 files in {item_out_dir}, but found {len(files_in_dir)}: {[f.name for f in files_in_dir]}" ) - print(f"Found files in {results_dir}:") - for filename, expected_size, tolerance_percent in SPOT_1_EXPECTED_RESULT_FILES: - file_path = results_dir / filename + expected_files = [ + ("tissue_segmentation_csv_class_information.csv", 342, 10), + ("cell_classification_geojson_polygons.json", 16054058, 10), + ("readout_generation_cell_readouts.csv", 2228907, 10), + ("tissue_qc_csv_class_information.csv", 232, 10), + ("tissue_segmentation_geojson_polygons.json", 270931, 10), + ("tissue_qc_geojson_polygons.json", 180522, 10), + ("tissue_qc_segmentation_map_image.tiff", 464908, 10), + ("readout_generation_slide_readouts.csv", 295268, 10), + ("tissue_segmentation_segmentation_map_image.tiff", 581258, 10), + ] + print(f"Found files in {item_out_dir}:") + for filename, expected_size, tolerance_percent in expected_files: + file_path = item_out_dir / filename if file_path.exists(): actual_size = file_path.stat().st_size print(f" {filename}: {actual_size} bytes (expected: {expected_size} ±{tolerance_percent}%)") else: print(f" {filename}: NOT FOUND") - for filename, expected_size, tolerance_percent in SPOT_1_EXPECTED_RESULT_FILES: - file_path = results_dir / filename + for filename, expected_size, tolerance_percent in expected_files: + file_path = item_out_dir / filename assert file_path.exists(), f"Expected file {filename} not found" actual_size = file_path.stat().st_size min_size = expected_size * (100 - tolerance_percent) // 100 @@ -740,266 +405,3 @@ def test_cli_run_execute(runner: CliRunner, tmp_path: Path, record_property) -> f"File size for {filename} ({actual_size} bytes) is outside allowed range " f"({min_size} to {max_size} bytes, ±{tolerance_percent}% of {expected_size})" ) - - # Validate the execute command exited successfully - assert result.exit_code == 0 - - -@pytest.mark.integration -def test_cli_run_update_metadata_invalid_json(runner: CliRunner) -> None: - """Check run update-metadata command fails with invalid JSON.""" - result = runner.invoke(cli, ["application", "run", "update-metadata", 
"run-123", "{invalid json}"]) - assert result.exit_code == 1 - assert "Invalid JSON" in result.output - - -@pytest.mark.integration -def test_cli_run_update_metadata_not_dict(runner: CliRunner) -> None: - """Check run update-metadata command fails with non-dict JSON.""" - result = runner.invoke(cli, ["application", "run", "update-metadata", "run-123", '["array", "not", "dict"]']) - assert result.exit_code == 1 - assert "Metadata must be a JSON object" in result.output - - -@pytest.mark.integration -def test_cli_run_update_item_metadata_invalid_json(runner: CliRunner) -> None: - """Check run update-item-metadata command fails with invalid JSON.""" - result = runner.invoke( - cli, ["application", "run", "update-item-metadata", "run-123", "item-ext-id", "{invalid json}"] - ) - assert result.exit_code == 1 - assert "Invalid JSON" in result.output - - -@pytest.mark.integration -def test_cli_run_update_item_metadata_not_dict(runner: CliRunner) -> None: - """Check run update-item-metadata command fails with non-dict JSON.""" - result = runner.invoke( - cli, ["application", "run", "update-item-metadata", "run-123", "item-ext-id", '["array", "not", "dict"]'] - ) - assert result.exit_code == 1 - assert "Metadata must be a JSON object" in result.output - - -@pytest.mark.e2e -@pytest.mark.timeout(timeout=120) -@pytest.mark.skipif( - (platform.system() == "Linux" and platform.machine() in {"aarch64", "arm64"}) - or (platform.system() in {"Darwin", "Windows"}), - reason="No parallel runners, otherwise race condition on metadata updates", -) -@pytest.mark.sequential -def test_cli_run_dump_and_update_custom_metadata(runner: CliRunner) -> None: - """Test dumping and updating custom metadata via CLI commands.""" - import json - import random - - # Step 1: List runs, limit to 1 - result = runner.invoke(cli, ["application", "run", "list", "--limit", "1"]) - assert result.exit_code == 0 - - # Check if any runs exist - if "You did not yet create a run" in result.output: - pytest.skip("No runs available. 
Please run tests that submit runs first.") - - # Extract run ID from the output (format: "- of ...") - normalized_output = normalize_output(result.output) - run_id_match = re.search(r"-\s+([a-f0-9\-]{36})\s+of\s+", normalized_output) - assert run_id_match is not None, f"Could not extract run ID from list output:\n{normalized_output}" - run_id = run_id_match.group(1) - - # Step 2: Dump custom metadata of run - result = runner.invoke(cli, ["application", "run", "dump-metadata", run_id]) - assert result.exit_code == 0 - initial_metadata = json.loads(result.stdout) - # If metadata is None/null, start with empty dict - if initial_metadata is None: - initial_metadata = {} - assert isinstance(initial_metadata, dict), "Custom metadata should be a dictionary" - - # Store initial SDK metadata timestamps for comparison - initial_created_at = initial_metadata.get("sdk", {}).get("created_at") - initial_submission_date = initial_metadata.get("sdk", {}).get("submission", {}).get("date") - initial_updated_at = initial_metadata.get("sdk", {}).get("updated_at") - - # Ensure some time passes to see timestamp changes - sleep(1) - - # Step 3: Add "random" node with a random number - random_value = random.randint(1000, 9999) - updated_metadata = initial_metadata.copy() - updated_metadata["random"] = random_value - - # Update the custom metadata - result = runner.invoke(cli, ["application", "run", "update-metadata", run_id, json.dumps(updated_metadata)]) - assert result.exit_code == 0 - assert "Successfully updated custom metadata" in result.output - - # Step 4: Dump metadata again and verify random number appeared - result = runner.invoke(cli, ["application", "run", "dump-metadata", run_id, "--pretty"]) - assert result.exit_code == 0 - metadata_with_random = json.loads(result.stdout) - assert "random" in metadata_with_random, "Random field should be present in metadata" - assert metadata_with_random["random"] == random_value, f"Random value should be {random_value}" - - # Verify SDK metadata timestamps behavior after update - updated_created_at = metadata_with_random.get("sdk", {}).get("created_at") - updated_submission_date = metadata_with_random.get("sdk", {}).get("submission", {}).get("date") - updated_updated_at = metadata_with_random.get("sdk", {}).get("updated_at") - - # created_at and submission.date should NOT change - # Only check created_at immutability if it was set initially - if initial_created_at is not None: - assert updated_created_at == initial_created_at, ( - f"sdk.created_at should not change: {initial_created_at} -> {updated_created_at}" - ) - - if initial_submission_date is not None: - assert updated_submission_date == initial_submission_date, ( - f"sdk.submission.date should not change: {initial_submission_date} -> {updated_submission_date}" - ) - - # updated_at SHOULD change (be more recent) - assert updated_updated_at != initial_updated_at, ( - f"sdk.updated_at should change after update: {initial_updated_at} -> {updated_updated_at}" - ) - assert updated_updated_at > initial_updated_at, ( - f"sdk.updated_at should be more recent: {initial_updated_at} -> {updated_updated_at}" - ) - - # Step 5: Remove the random number - del updated_metadata["random"] - result = runner.invoke(cli, ["application", "run", "update-metadata", run_id, json.dumps(updated_metadata)]) - assert result.exit_code == 0 - assert "Successfully updated custom metadata" in result.output - - # Step 6: Dump metadata and validate random element has been removed - result = runner.invoke(cli, ["application", "run", 
"dump-metadata", run_id]) - assert result.exit_code == 0 - final_metadata = json.loads(result.stdout) - assert "random" not in final_metadata, "Random field should have been removed from metadata" - - # Note: We can't compare final_metadata == initial_metadata because the SDK - # automatically updates some fields (e.g., submission.date, ci.pytest.current_test) - # when operations are performed. Instead, verify the random field was removed - # and the structure remains consistent. - assert isinstance(final_metadata, dict), "Final metadata should be a dictionary" - - -# TODO(Andreas): Update item metadata returns 404 always -@pytest.mark.skip(reason="Waiting for platform API fix to item metadata endpoint which currently returns 404 always") -@pytest.mark.e2e -@pytest.mark.timeout(timeout=120) -@pytest.mark.skipif( - (platform.system() == "Linux" and platform.machine() in {"aarch64", "arm64"}) - or (platform.system() in {"Darwin", "Windows"}), - reason="No parallel runners, otherwise race condition on metadata updates", -) -@pytest.mark.sequential -def test_cli_run_dump_and_update_item_custom_metadata(runner: CliRunner) -> None: # noqa: PLR0914, PLR0915 # noqa: PLR0914, PLR0915 - """Test dumping and updating item custom metadata via CLI commands.""" - import json - import random - - # Step 1: List runs, limit to 1 - result = runner.invoke(cli, ["application", "run", "list", "--limit", "1"]) - assert result.exit_code == 0 - - # Check if any runs exist - if "You did not yet create a run" in result.output: - pytest.skip("No runs available. Please run tests that submit runs first.") - - # Extract run ID from the output (format: "- of ...") - normalized_output = normalize_output(result.output) - run_id_match = re.search(r"-\s+([a-f0-9\-]{36})\s+of\s+", normalized_output) - assert run_id_match is not None, f"Could not extract run ID from list output:\n{normalized_output}" - run_id = run_id_match.group(1) - - # Get run details to extract an item's external_id - result = runner.invoke(cli, ["application", "run", "describe", run_id]) - assert result.exit_code == 0 - - normalized_describe = normalize_output(result.output) - # Match the line after "Item External ID:" - external_id_match = re.search(r"Item External ID:\s*\n\s*([^\s]+)", normalized_describe) - - if not external_id_match: - # Try single line format as fallback - external_id_match = re.search(r"Item External ID:\s*([^\n\s]+)", normalized_describe) - - if not external_id_match: - pytest.skip("Could not extract item external_id from run. 
Run may not have items yet.")
-
-    external_id = external_id_match.group(1).strip()
-    print(external_id)
-
-    # Step 2: Dump custom metadata of item
-    result = runner.invoke(cli, ["application", "run", "dump-item-metadata", run_id, external_id])
-    assert result.exit_code == 0
-    initial_metadata = json.loads(result.output)
-    # If metadata is None/null, start with empty dict
-    if initial_metadata is None:
-        initial_metadata = {}
-    assert isinstance(initial_metadata, dict), "Custom metadata should be a dictionary"
-
-    # Store initial SDK metadata timestamps for comparison
-    initial_created_at = initial_metadata.get("sdk", {}).get("created_at")
-    initial_updated_at = initial_metadata.get("sdk", {}).get("updated_at")
-
-    # Ensure some time passes to see timestamp changes
-    sleep(1)
-
-    # Step 3: Add "random" node with a random number
-    random_value = random.randint(1000, 9999)
-    updated_metadata = initial_metadata.copy()
-    updated_metadata["random"] = random_value
-
-    # Update the custom metadata
-    result = runner.invoke(
-        cli, ["application", "run", "update-item-metadata", run_id, external_id, json.dumps(updated_metadata)]
-    )
-    assert result.exit_code == 0
-    assert "Successfully updated custom metadata" in result.output
-
-    # Step 4: Dump metadata again and verify random number appeared
-    result = runner.invoke(cli, ["application", "run", "dump-item-metadata", run_id, external_id, "--pretty"])
-    assert result.exit_code == 0
-    metadata_with_random = json.loads(result.output)
-    assert "random" in metadata_with_random, "Random field should be present in metadata"
-    assert metadata_with_random["random"] == random_value, f"Random value should be {random_value}"
-
-    # Verify SDK metadata timestamps behavior after update
-    updated_created_at = metadata_with_random.get("sdk", {}).get("created_at")
-    updated_updated_at = metadata_with_random.get("sdk", {}).get("updated_at")
-
-    # created_at should NOT change
-    if initial_created_at is not None:
-        assert updated_created_at == initial_created_at, (
-            f"sdk.created_at should not change: {initial_created_at} -> {updated_created_at}"
-        )
-
-    # updated_at SHOULD change (be more recent)
-    assert updated_updated_at != initial_updated_at, (
-        f"sdk.updated_at should change after update: {initial_updated_at} -> {updated_updated_at}"
-    )
-
-    # Step 5: Remove the random number
-    del updated_metadata["random"]
-    result = runner.invoke(
-        cli, ["application", "run", "update-item-metadata", run_id, external_id, json.dumps(updated_metadata)]
-    )
-    assert result.exit_code == 0
-    assert "Successfully updated custom metadata" in result.output
-
-    # Step 6: Dump metadata and validate random element has been removed
-    result = runner.invoke(cli, ["application", "run", "dump-item-metadata", run_id, external_id])
-    assert result.exit_code == 0
-    final_metadata = json.loads(result.output)
-    assert "random" not in final_metadata, "Random field should have been removed from metadata"
-
-    # Note: Similar to run metadata, we verify the structure remains consistent
-    # rather than doing exact equality comparison due to dynamic fields
-    assert isinstance(final_metadata, dict), "Final metadata should be a dictionary"
diff --git a/tests/aignostics/application/download_test.py b/tests/aignostics/application/download_test.py
deleted file mode 100644
index e5537ef6..00000000 --- a/tests/aignostics/application/download_test.py +++ /dev/null @@ -1,399 +0,0 @@ -"""Tests for download utility functions in the application module.""" - -from pathlib import Path -from unittest.mock import Mock, patch - -import pytest -import requests - -from aignostics.application._download import download_url_to_file_with_progress, extract_filename_from_url -from aignostics.application._models import DownloadProgress, DownloadProgressState - - -@pytest.mark.unit -def test_extract_filename_from_url_gs() -> None: - """Test filename extraction from gs:// URLs.""" - assert extract_filename_from_url("gs://bucket/path/to/file.tiff") == "file.tiff" - assert extract_filename_from_url("gs://bucket/file.svs") == "file.svs" - assert extract_filename_from_url("gs://bucket/path/to/folder/image.dcm") == "image.dcm" - - -@pytest.mark.unit -def test_extract_filename_from_url_https() -> None: - """Test filename extraction from https:// URLs.""" - assert extract_filename_from_url("https://example.com/slides/sample.svs") == "sample.svs" - assert extract_filename_from_url("https://example.com/path/to/image.tiff") == "image.tiff" - # URL with query parameters - assert extract_filename_from_url("https://example.com/download/file.svs?token=abc123") == "file.svs" - - -@pytest.mark.unit -def test_extract_filename_from_url_http() -> None: - """Test filename extraction from http:// URLs.""" - assert extract_filename_from_url("http://example.com/image.tiff") == "image.tiff" - assert extract_filename_from_url("http://server.com/data/slides/sample.dcm") == "sample.dcm" - - -@pytest.mark.unit -def test_extract_filename_from_url_edge_cases() -> None: - """Test filename extraction from URLs with edge cases.""" - # Trailing slash - assert extract_filename_from_url("https://example.com/folder/") == "folder" - # Root path - assert extract_filename_from_url("https://example.com/") == "download" - # Multiple extensions - assert extract_filename_from_url("gs://bucket/file.tar.gz") == "file.tar.gz" - # No extension - assert extract_filename_from_url("https://example.com/myfile") == "myfile" - - -@pytest.mark.unit -def test_download_url_to_file_with_progress_gs_url_success(tmp_path: Path) -> None: - """Test successful download from gs:// URL with progress tracking via callable.""" - gs_url = "gs://test-bucket/path/to/input.tiff" - signed_url = "https://storage.googleapis.com/signed-url" - destination = tmp_path / "input.tiff" - file_content = b"test file content for progress tracking" - - progress = DownloadProgress() - progress_updates = [] - - def progress_callback(p: DownloadProgress) -> None: - progress_updates.append({ - "status": p.status, - "input_slide_path": p.input_slide_path, - "input_slide_url": p.input_slide_url, - "input_slide_size": p.input_slide_size, - "input_slide_downloaded_size": p.input_slide_downloaded_size, - "input_slide_downloaded_chunk_size": p.input_slide_downloaded_chunk_size, - }) - - with patch("aignostics.application._download.generate_signed_url") as mock_generate_signed_url: - mock_generate_signed_url.return_value = signed_url - - with patch("aignostics.application._download.requests.get") as mock_get: - mock_response = Mock() - mock_response.raise_for_status = Mock() - mock_response.headers = {"content-length": str(len(file_content))} - mock_response.iter_content = Mock(return_value=[file_content]) - mock_get.return_value = mock_response - - # Call the function with progress tracking - result = download_url_to_file_with_progress( - progress, gs_url, destination, 
download_progress_callable=progress_callback
-            )
-
-            # Verify the result
-            assert result == destination
-            assert destination.exists()
-            assert destination.read_bytes() == file_content
-
-            # Verify progress updates
-            assert len(progress_updates) >= 3  # Initial, size update, chunk update
-
-            # Check initial update
-            assert progress_updates[0]["status"] == DownloadProgressState.DOWNLOADING_INPUT
-            assert progress_updates[0]["input_slide_url"] == gs_url
-            assert progress_updates[0]["input_slide_path"] == destination
-            assert progress_updates[0]["input_slide_size"] is None
-            assert progress_updates[0]["input_slide_downloaded_size"] == 0
-
-            # Check size update
-            assert progress_updates[1]["input_slide_size"] == len(file_content)
-
-            # Check final chunk update
-            assert progress_updates[-1]["input_slide_downloaded_size"] == len(file_content)
-            assert progress_updates[-1]["input_slide_downloaded_chunk_size"] == len(file_content)
-
-
-@pytest.mark.unit
-def test_download_url_to_file_with_progress_queue(tmp_path: Path) -> None:
-    """Test download with progress tracking via queue."""
-    gs_url = "gs://test-bucket/input.tiff"
-    signed_url = "https://storage.googleapis.com/signed-url"
-    destination = tmp_path / "input.tiff"
-    file_content = b"test content"
-
-    progress = DownloadProgress()
-    progress_queue = Mock()
-    progress_queue.put_nowait = Mock()  # Mock the put_nowait method
-
-    with patch("aignostics.application._download.generate_signed_url") as mock_generate_signed_url:
-        mock_generate_signed_url.return_value = signed_url
-
-        with patch("aignostics.application._download.requests.get") as mock_get:
-            mock_response = Mock()
-            mock_response.raise_for_status = Mock()
-            mock_response.headers = {"content-length": str(len(file_content))}
-            mock_response.iter_content = Mock(return_value=[file_content])
-            mock_get.return_value = mock_response
-
-            # Call with queue
-            download_url_to_file_with_progress(progress, gs_url, destination, download_progress_queue=progress_queue)
-
-            # Verify queue was called with progress
-            assert progress_queue.put_nowait.call_count >= 3
-
-            # Verify final state
-            assert progress.status == DownloadProgressState.DOWNLOADING_INPUT
-            assert progress.input_slide_downloaded_size == len(file_content)
-
-
-@pytest.mark.unit
-def test_download_url_to_file_with_progress_chunked(tmp_path: Path) -> None:
-    """Test progress tracking with multiple chunks."""
-    gs_url = "gs://test-bucket/large-input.tiff"
-    signed_url = "https://storage.googleapis.com/signed-url"
-    destination = tmp_path / "large-input.tiff"
-    chunks = [b"chunk1", b"chunk2", b"chunk3", b"chunk4"]
-    total_size = sum(len(c) for c in chunks)
-
-    progress = DownloadProgress()
-    progress_updates = []
-
-    def progress_callback(p: DownloadProgress) -> None:
-        progress_updates.append(p.input_slide_downloaded_size)
-
-    with patch("aignostics.application._download.generate_signed_url") as mock_generate_signed_url:
-        mock_generate_signed_url.return_value = signed_url
-
-        with patch("aignostics.application._download.requests.get") as mock_get:
-            mock_response = Mock()
-            mock_response.raise_for_status = Mock()
-            mock_response.headers = {"content-length": str(total_size)}
-            mock_response.iter_content = Mock(return_value=chunks)
-            mock_get.return_value = mock_response
-
-            # Call the function
-            download_url_to_file_with_progress(
-                progress, gs_url, destination, download_progress_callable=progress_callback
-            )
-
-            # Verify progressive size updates
-            assert progress_updates[0] == 0  # Initial
-            assert progress_updates[1] == 0  # After size header
-            # Each chunk updates the total
-            assert progress_updates[2] == len(chunks[0])
-            assert progress_updates[3] == len(chunks[0]) + len(chunks[1])
-            assert progress_updates[4] == len(chunks[0]) + len(chunks[1]) + len(chunks[2])
-            assert progress_updates[5] == total_size  # Final
-
-
-@pytest.mark.unit
-def test_download_url_to_file_with_progress_http_error(tmp_path: Path) -> None:
-    """Test that HTTP errors are wrapped in RuntimeError."""
-    gs_url = "gs://test-bucket/missing.tiff"
-    signed_url = "https://storage.googleapis.com/signed-url"
-    destination = tmp_path / "missing.tiff"
-
-    progress = DownloadProgress()
-
-    with patch("aignostics.application._download.generate_signed_url") as mock_generate_signed_url:
-        mock_generate_signed_url.return_value = signed_url
-
-        with patch("aignostics.application._download.requests.get") as mock_get:
-            mock_response = Mock()
-            mock_response.raise_for_status = Mock(side_effect=requests.HTTPError("404 Not Found"))
-            mock_get.return_value = mock_response
-
-            # Verify that RuntimeError is raised (wrapping HTTPError)
-            with pytest.raises(RuntimeError, match="HTTP error downloading"):
-                download_url_to_file_with_progress(progress, gs_url, destination)
-
-            # Verify file was not created
-            assert not destination.exists()
-
-
-@pytest.mark.unit
-def test_download_url_to_file_with_progress_normalized_values(tmp_path: Path) -> None:
-    """Test that DownloadProgress computes normalized progress correctly for input slides."""
-    gs_url = "gs://test-bucket/input.tiff"
-    signed_url = "https://storage.googleapis.com/signed-url"
-    destination = tmp_path / "input.tiff"
-    file_size = 1000
-    chunks = [b"x" * 250, b"x" * 250, b"x" * 250, b"x" * 250]  # 4 chunks of 250 bytes
-
-    progress = DownloadProgress()
-    progress.item_count = 5  # 5 items total
-    progress.item_index = 2  # Processing 3rd item
-
-    normalized_values = []
-
-    def progress_callback(p: DownloadProgress) -> None:
-        # Capture all updates
-        normalized_values.append({
-            "has_size": p.input_slide_size is not None,
-            "downloaded": p.input_slide_downloaded_size,
-            "item_progress": p.item_progress_normalized,
-            "artifact_progress": p.artifact_progress_normalized,
-        })
-
-    with patch("aignostics.application._download.generate_signed_url") as mock_generate_signed_url:
-        mock_generate_signed_url.return_value = signed_url
-
-        with patch("aignostics.application._download.requests.get") as mock_get:
-            mock_response = Mock()
-            mock_response.raise_for_status = Mock()
-            mock_response.headers = {"content-length": str(file_size)}
-            mock_response.iter_content = Mock(return_value=chunks)
-            mock_get.return_value = mock_response
-
-            # Call the function
-            download_url_to_file_with_progress(
-                progress, gs_url, destination, download_progress_callable=progress_callback
-            )
-
-            # Verify we have updates
-            assert len(normalized_values) >= 6  # Initial, size, 4 chunks
-
-            # Item progress should always be (item_index + 1) / item_count = 3/5 = 0.6
-            for val in normalized_values:
-                assert val["item_progress"] == 0.6
-
-            # Find updates with size information (after size header is read)
-            sized_updates = [v for v in normalized_values if v["has_size"]]
-            assert len(sized_updates) >= 4  # Size update + 4 chunks
-
-            # Verify artifact progress increases correctly
-            # First sized update should be at 0% (size just set, no data yet)
-            assert sized_updates[0]["artifact_progress"] == 0.0
-            assert sized_updates[0]["downloaded"] == 0
-
-            # After first chunk: 250/1000 = 0.25
-            assert sized_updates[1]["artifact_progress"] == 0.25
-            assert sized_updates[1]["downloaded"] == 250
-
-            # After second chunk: 500/1000 = 0.5
-            assert sized_updates[2]["artifact_progress"] == 0.5
-            assert sized_updates[2]["downloaded"] == 500
-
-            # After third chunk: 750/1000 = 0.75
-            assert sized_updates[3]["artifact_progress"] == 0.75
-            assert sized_updates[3]["downloaded"] == 750
-
-            # After fourth chunk: 1000/1000 = 1.0
-            assert sized_updates[4]["artifact_progress"] == 1.0
-            assert sized_updates[4]["downloaded"] == 1000
-
-
-@pytest.mark.unit
-def test_download_url_to_file_with_progress_https_url_success(tmp_path: Path) -> None:
-    """Test successful download from https:// URL (no signed URL generation needed)."""
-    https_url = "https://example.com/path/to/input.tiff"
-    destination = tmp_path / "input.tiff"
-    file_content = b"test file content from https"
-
-    progress = DownloadProgress()
-    progress_updates = []
-
-    def progress_callback(p: DownloadProgress) -> None:
-        progress_updates.append({
-            "status": p.status,
-            "input_slide_url": p.input_slide_url,
-            "input_slide_downloaded_size": p.input_slide_downloaded_size,
-        })
-
-    with patch("aignostics.application._download.requests.get") as mock_get:
-        mock_response = Mock()
-        mock_response.raise_for_status = Mock()
-        mock_response.headers = {"content-length": str(len(file_content))}
-        mock_response.iter_content = Mock(return_value=[file_content])
-        mock_get.return_value = mock_response
-
-        # Call the function (should not call generate_signed_url for https://)
-        result = download_url_to_file_with_progress(
-            progress, https_url, destination, download_progress_callable=progress_callback
-        )
-
-        # Verify the result
-        assert result == destination
-        assert destination.exists()
-        assert destination.read_bytes() == file_content
-
-        # Verify requests.get was called with the https URL directly (no signed URL conversion)
-        mock_get.assert_called_once_with(https_url, stream=True, timeout=60)
-
-        # Verify progress updates
-        assert len(progress_updates) >= 3
-        assert progress_updates[0]["status"] == DownloadProgressState.DOWNLOADING_INPUT
-        assert progress_updates[0]["input_slide_url"] == https_url
-
-
-@pytest.mark.unit
-def test_download_url_to_file_with_progress_http_url_success(tmp_path: Path) -> None:
-    """Test successful download from http:// URL (no signed URL generation needed)."""
-    http_url = "http://example.com/input.tiff"
-    destination = tmp_path / "input.tiff"
-    file_content = b"test file content from http"
-
-    progress = DownloadProgress()
-
-    with patch("aignostics.application._download.requests.get") as mock_get:
-        mock_response = Mock()
-        mock_response.raise_for_status = Mock()
-        mock_response.headers = {"content-length": str(len(file_content))}
-        mock_response.iter_content = Mock(return_value=[file_content])
-        mock_get.return_value = mock_response
-
-        # Call the function
-        result = download_url_to_file_with_progress(progress, http_url, destination)
-
-        # Verify the result
-        assert result == destination
-        assert destination.exists()
-        assert destination.read_bytes() == file_content
-
-        # Verify requests.get was called with the http URL directly
-        mock_get.assert_called_once_with(http_url, stream=True, timeout=60)
-
-
-@pytest.mark.unit
-def test_download_url_to_file_with_progress_unsupported_scheme(tmp_path: Path) -> None:
-    """Test that unsupported URL schemes raise ValueError."""
-    ftp_url = "ftp://example.com/file.tiff"
-    destination = tmp_path / "file.tiff"
-    progress = DownloadProgress()
-
-    # Verify that ValueError is raised for unsupported schemes
-    with pytest.raises(ValueError, match="Unsupported URL scheme"):
-        download_url_to_file_with_progress(progress, ftp_url, destination)
-
-    # Verify file was not created
-    assert not destination.exists()
-
-
-@pytest.mark.unit
-def test_download_url_to_file_with_progress_https_with_chunked(tmp_path: Path) -> None:
-    """Test https:// download with multiple chunks and progress tracking."""
-    https_url = "https://example.com/large-file.tiff"
-    destination = tmp_path / "large-file.tiff"
-    chunks = [b"chunk1", b"chunk2", b"chunk3"]
-    total_size = sum(len(c) for c in chunks)
-
-    progress = DownloadProgress()
-    progress_updates = []
-
-    def progress_callback(p: DownloadProgress) -> None:
-        progress_updates.append(p.input_slide_downloaded_size)
-
-    with patch("aignostics.application._download.requests.get") as mock_get:
-        mock_response = Mock()
-        mock_response.raise_for_status = Mock()
-        mock_response.headers = {"content-length": str(total_size)}
-        mock_response.iter_content = Mock(return_value=chunks)
-        mock_get.return_value = mock_response
-
-        # Call the function
-        download_url_to_file_with_progress(
-            progress, https_url, destination, download_progress_callable=progress_callback
-        )
-
-        # Verify progressive size updates
-        assert progress_updates[0] == 0  # Initial
-        assert progress_updates[1] == 0  # After size header
-        assert progress_updates[2] == len(chunks[0])
-        assert progress_updates[3] == len(chunks[0]) + len(chunks[1])
-        assert progress_updates[4] == total_size  # Final
-
-        # Verify direct URL was used (no signed URL generation)
-        mock_get.assert_called_once_with(https_url, stream=True, timeout=60)
diff --git a/tests/aignostics/application/gui_test.py b/tests/aignostics/application/gui_test.py
index 66d110ba..3adac8c1 100644
--- a/tests/aignostics/application/gui_test.py
+++ b/tests/aignostics/application/gui_test.py
@@ -3,7 +3,6 @@
 import re
 import tempfile
 from asyncio import sleep
-from datetime import UTC, datetime, timedelta
 from pathlib import Path
 from typing import TYPE_CHECKING
 from unittest.mock import patch
@@ -14,16 +13,10 @@
 from aignostics.application import Service
 from aignostics.cli import cli
+from aignostics.platform import ApplicationRunStatus
 from aignostics.utils import get_logger
 from tests.conftest import assert_notified, normalize_output, print_directory_structure
-from tests.constants_test import (
-    HETA_APPLICATION_ID,
-    HETA_APPLICATION_VERSION,
-    SPOT_0_EXPECTED_RESULT_FILES,
-    SPOT_0_FILENAME,
-    SPOT_0_FILESIZE,
-    SPOT_0_GS_URL,
-)
+from tests.constants_test import HETA_APPLICATION_ID, HETA_APPLICATION_VERSION_ID

 if TYPE_CHECKING:
     from nicegui import ui
@@ -31,20 +24,16 @@
 logger = get_logger(__name__)

-@pytest.mark.e2e
-@pytest.mark.timeout(timeout=30)
-async def test_gui_index(user: User, record_property) -> None:
+@pytest.mark.sequential
+async def test_gui_index(user: User) -> None:
     """Test that the user sees the index page, and sees the intro."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE, SPEC-GUI-SERVICE")
     # hello world
     await user.open("/")
     await user.should_see("Atlas H&E-TME", retries=100)
     await user.should_see("Download Datasets")

-@pytest.mark.e2e
-@pytest.mark.flaky(retries=2, delay=5, only_on=[AssertionError])
-@pytest.mark.timeout(timeout=60 * 2)
+@pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError])
 @pytest.mark.parametrize(
     ("application_id", "application_name", "expected_text"),
     [
@@ -55,46 +44,34 @@
         ),
         (
             "test-app",
-            "test-app",  # TODO(Helmut): Check in with Ari
+            "Test Application",
             "This is the test application with two algorithms",
         ),
     ],
 )
-async def test_gui_home_to_application(  # noqa: PLR0913, PLR0917
-    user: User, application_id: str, application_name: str, expected_text: str, silent_logging: None, record_property
+async def test_gui_home_to_application(
+    user: User, application_id: str, application_name: str, expected_text: str, silent_logging: None
 ) -> None:
     """Test that the user sees the specific application page with expected content."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE, SPEC-GUI-SERVICE")
     await user.open("/")
     await user.should_see(application_name, retries=100)
     user.find(marker=f"SIDEBAR_APPLICATION:{application_id}").click()
     await user.should_see(expected_text, retries=300)

-@pytest.mark.e2e
-@pytest.mark.long_running
-@pytest.mark.flaky(retries=2, delay=5, only_on=[AssertionError])
-@pytest.mark.timeout(timeout=60 * 5)
+@pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError])
 @pytest.mark.sequential
-async def test_gui_cli_submit_to_run_result_delete(
-    user: User,
-    runner: CliRunner,
-    silent_logging: None,
-    record_property,
-) -> None:
+async def test_gui_cli_submit_to_run_result_delete(user: User, runner: CliRunner, silent_logging: None) -> None:
     """Test that the user can submit a run via the CLI up to deleting the run results."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE, SPEC-GUI-SERVICE")
-
     with tempfile.TemporaryDirectory() as tmpdir:
         tmp_path = Path(tmpdir)
-        application = Service().application(HETA_APPLICATION_ID)
-        latest_version_number = application.versions[0].number if application.versions else None
-        assert latest_version_number is not None, f"No versions found for application {HETA_APPLICATION_ID}"
+        latest_version = Service().application_version_latest(Service().application(HETA_APPLICATION_ID))
+        latest_version_id = latest_version.application_version_id

         # Submit run
         csv_content = (
-            "external_id;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;"
+            "reference;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;"
         )
         csv_content += "platform_bucket_url\n"
         csv_content += ";5onqtA==;0.26268186053789266;7447;7196;H&E;LUNG;LUNG_CANCER;gs://bucket/test"
@@ -110,19 +87,13 @@
                 str(csv_path),
                 "--note",
                 "test_gui_cli_submit_to_run_result_delete",
-                "--deadline",
-                (datetime.now(tz=UTC) + timedelta(minutes=5)).isoformat(),
-                "--validate-only",
             ],
         )
         assert result.exit_code == 0

         # Extract the run ID from the output
         output = normalize_output(result.output)
-        # Strip ANSI escape codes before matching
-        ansi_escape = re.compile(r"\x1b\[[0-9;]*m")
-        output_clean = ansi_escape.sub("", output)
-        run_id_match = re.search(r"Submitted run with id '([0-9a-f-]+)' for '", output_clean)
+        run_id_match = re.search(r"Submitted run with id '([0-9a-f-]+)' for '", output)
         assert run_id_match is not None, f"Could not extract run ID from output: {output}"
         run_id = run_id_match.group(1)
@@ -132,23 +103,13 @@
         await user.should_see(marker="SIDEBAR_APPLICATION:he-tme", retries=100)
         await user.should_see("Atlas H&E-TME", retries=100)
         await user.should_see("Runs")
-        await user.should_see(content=HETA_APPLICATION_ID, marker="LABEL_RUN_APPLICATION:0", retries=250)
-        await user.should_see(content=HETA_APPLICATION_VERSION, marker="LABEL_RUN_APPLICATION:0", retries=100)
+        await user.should_see(HETA_APPLICATION_VERSION_ID, marker="SIDEBAR_RUN_ITEM:0", retries=100)

         # Navigate to the extracted run ID
         await user.open(f"/application/run/{run_id}")
-        await user.should_see(
-            f"Run of {application.application_id} ({latest_version_number})",
-            retries=100,
-        )
-        await user.should_see(
-            f"Application: {application.application_id} ({latest_version_number})",
-            retries=100,
-        )
-        try:
-            await user.should_see("PENDING", retries=100)
-        except AssertionError:
-            await user.should_see("PROCESSING", retries=100)
+        await user.should_see(f"Run of {latest_version_id}")
+        await user.should_see(f"Application Version: {latest_version_id}", retries=100)
+        await user.should_see("status RUNNING", retries=100)
         await user.should_see("test_gui_cli_submit_to_run_result_delete", retries=100)
         await user.should_see(marker="BUTTON_APPLICATION_RUN_CANCEL")
         user.find(marker="BUTTON_APPLICATION_RUN_CANCEL").click()
@@ -156,7 +117,7 @@
         await assert_notified(user, "Application run cancelled!")

         # Check user sees refreshed run page and run is cancelled
-        await user.should_see("CANCELED_BY_USER", retries=100)
+        await user.should_see("status CANCELED_USER", retries=100)

         # ... and user can delete run
         await user.should_see(marker="BUTTON_APPLICATION_RUN_RESULT_DELETE", retries=100)
@@ -170,279 +131,157 @@
         await user.should_see("Welcome", retries=500)

-@pytest.mark.e2e
 @pytest.mark.long_running
-@pytest.mark.flaky(retries=1, delay=5)
-@pytest.mark.timeout(timeout=60 * 10)
-@pytest.mark.sequential
-async def test_gui_download_dataset_via_application_to_run_cancel_to_find_back(  # noqa: PLR0915
-    user: User, runner: CliRunner, silent_logging: None, record_property
+@pytest.mark.skip(reason="temporarily skipped for intermediate release")
+async def test_gui_download_dataset_via_application_to_run_cancel(  # noqa: PLR0915
+    user: User, runner: CliRunner, tmp_path: Path, silent_logging: None
 ) -> None:
-    """Test that the user can download a dataset via the application page and cancel the run, then find it back."""
-    record_property("tested-item-id", "TC-APPLICATION-GUI-04, SPEC-GUI-SERVICE")
-    with tempfile.TemporaryDirectory() as tmpdir:
-        tmp_path = Path(tmpdir)
+    """Test that the user can download a dataset via the application page and cancel the run."""
+    with patch("aignostics.application._gui._page_application_describe.Path.home", return_value=tmp_path):
+        # Download example wsi
+        result = runner.invoke(
+            cli,
+            [
+                "dataset",
+                "aignostics",
+                "download",
+                "gs://aignx-storage-service-dev/sample_data_formatted/9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff",
+                str(tmp_path),
+            ],
+        )
+        assert result.exit_code == 0
+        assert "Successfully downloaded" in normalize_output(result.stdout)
+        assert "9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff" in normalize_output(result.stdout)
+        expected_file = Path(tmp_path) / "9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff"
+        assert expected_file.exists(), f"Expected file {expected_file} not found"
+        assert expected_file.stat().st_size == 14681750

-        with patch(
-            "aignostics.application._gui._page_application_describe.Path.home",
-            return_value=tmp_path,
-        ):
-            # Download example wsi
-            result = runner.invoke(
-                cli,
-                [
-                    "dataset",
-                    "aignostics",
-                    "download",
-                    "gs://aignx-storage-service-dev/sample_data_formatted/9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff",
-                    str(tmp_path),
-                ],
-            )
-            assert result.exit_code == 0
-            assert "Successfully downloaded" in normalize_output(result.stdout)
-            assert "9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff" in normalize_output(result.stdout)
-            expected_file = Path(tmp_path) / "9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff"
-            assert expected_file.exists(), f"Expected file {expected_file} not found"
-            assert expected_file.stat().st_size == 14681750
-
-            # Open the GUI and navigate to Atlas H&E-TME application
-            await user.open("/")
-            await user.should_see("Applications")
-            await user.should_see("Atlas H&E-TME", retries=100)
-            await user.should_see(marker="SIDEBAR_APPLICATION:he-tme", retries=100)
-            user.find(marker="SIDEBAR_APPLICATION:he-tme").click()
-            await sleep(5)
-            await user.should_see("The Atlas H&E TME is an AI application", retries=100)
-
-            # Check the latest application version is shown and select it
-            application = Service().application("he-tme")
-            latest_application_version = application.versions[0] if application.versions else None
-            assert latest_application_version is not None, "No application versions found for he-tme"
-            await user.should_see(latest_application_version.number)
-            user.find(marker="BUTTON_APPLICATION_VERSION_NEXT").click()
-
-            # Check the file picker opens and closes
-            await user.should_see("Select the folder with the whole slide images you want to analyze then click Next")
-            user.find(marker="BUTTON_WSI_SELECT_DATA").click()
-            await user.should_see("Ok")
-            await user.should_see("Cancel")
-            user.find(marker="BUTTON_WSI_SELECT_CUSTOM").click()
-            await user.should_see("Ok")
-            await user.should_see("Cancel")
-            user.find(marker="BUTTON_FILEPICKER_CANCEL").click()
-            await assert_notified(user, "You did not make a selection")
-
-            # Select the home directory and trigger metadata generation
-            user.find(marker="BUTTON_PYTEST_HOME").click()
-            await user.should_see(f"Selected folder {tmp_path!s} to analyze.")
-            await assert_notified(user, f"You chose directory {tmp_path!s}.")
-            user.find(marker="BUTTON_WSI_NEXT").click()
-            await assert_notified(user, "Finding WSIs and generating metadata", wait_seconds=5)
-            await assert_notified(user, "Found 1 slides for analysis", wait_seconds=120)
-            await sleep(10)
-
-            # Generate remaining metadata, going to upload UI
-            await user.should_see(
-                "The Launchpad has found all compatible slide files in your selected folder.",
-                retries=100,
-            )
-
-            user.find(marker="BUTTON_PYTEST_META").click()
-            await assert_notified(user, "Your metadata is now valid! Feel free to continue to the next step.")
-            user.find(marker="BUTTON_METADATA_NEXT").click()
-            await assert_notified(user, "Metadata captured.")
-
-            # Navigate through Notes and Tags step
-            await user.should_see("Note (optional)", retries=100)
-            user.find("TEXTAREA_NOTE").type("test_gui_download_dataset_via_application_to_run_cancel:note").trigger(
-                "keydown.enter"
-            )
-
-            await user.should_see("Tags (optional, press Enter to add)")
-            tags_input: ui.input_chips = user.find(marker="INPUT_TAGS").elements.pop()
-            tags_input.value = ["test_gui_tag1", "test_gui_tag2"]
-
-            user.find(marker="BUTTON_NOTES_AND_TAGS_NEXT").click()
-
-            # Navigate through Scheduling step
-            await user.should_see("Soft Due Date", retries=100)
-            await user.should_see("The platform will try to complete the run before this time", retries=100)
-
-            await user.should_see("Hard Deadline")
-            await user.should_see("The platform might cancel the run if not completed by this time.", retries=100)
-            time_deadline: ui.time = user.find(marker="TIME_DEADLINE").elements.pop()
-            time_deadline.value = (datetime.now().astimezone() + timedelta(minutes=10)).strftime("%Y-%m-%d %H:%M")
-
-            user.find(marker="BUTTON_SCHEDULING_NEXT").click()
-            await assert_notified(user, "Prepared upload UI.")
-
-            # Now on Submission step
-            await user.should_see("Upload and submit your 1 slide(s) for analysis.", retries=100)
-            user.find(marker="CHECKBOX_VALIDATE_ONLY").click()  # only for aignostics' orgs
-
-            # Trigger upload and submission
-            await user.should_see(marker="BUTTON_SUBMISSION_UPLOAD")
-            button_submission_upload: ui.button = user.find(marker="BUTTON_SUBMISSION_UPLOAD").elements.pop()
-            assert button_submission_upload.enabled, "Upload button should be enabled"
-            user.find(marker="BUTTON_SUBMISSION_UPLOAD").click()
-            await assert_notified(user, "Uploading whole slide images to Aignostics Platform ...", 10)
-            button_submission_upload: ui.button = user.find(marker="BUTTON_SUBMISSION_UPLOAD").elements.pop()
-            assert not button_submission_upload.enabled, "Upload button should be disabled after click"
-            await assert_notified(user, "Upload to Aignostics Platform completed.", wait_seconds=60)
-            await assert_notified(user, "Submitting application run ...")
-            await assert_notified(user, "Application run submitted with id", wait_seconds=30)
-
-            # Check user is redirected to the run page and run is running
-            await sleep(5)
-            await user.should_see(f"Run of he-tme ({latest_application_version.number})", retries=200)
-            try:
-                await user.should_see("PENDING", retries=100)
-            except AssertionError:
-                await user.should_see("PROCESSING", retries=100)
-
-            code_run_metadata: ui.code = user.find(marker="CODE_RUN_METADATA").elements.pop()
-            metadata_text = code_run_metadata.props["content"]
-            # extract run id, with metadata text containing Run ID: '{run_data.run_id}'
-            run_id_match = re.search(r"Run ID: ([0-9a-f-]+)", metadata_text)
-            assert run_id_match is not None, f"Could not extract run ID from metadata: {metadata_text}"
-            run_id = run_id_match.group(1)
-
-            # Check user can cancel run
-            await user.should_see(marker="BUTTON_APPLICATION_RUN_CANCEL", retries=100)
-            user.find(marker="BUTTON_APPLICATION_RUN_CANCEL").click()
-            await assert_notified(user, "Canceling application run with id")
-            await assert_notified(user, "Application run cancelled!", wait_seconds=20)
-
-            # Check user sees refreshed run page and run is cancelled
-            await user.should_see("CANCELED_BY_USER", retries=200)
-
-            # Check the tags were saved correctly
-            await user.should_see("test_gui_download_dataset_via_application_to_run_cancel:note", retries=100)
-            await user.should_see("test_gui_tag1", retries=100)
-            await user.should_see("test_gui_tag2", retries=100)
-
-            # Click on a tag to go to the homepage with filtered runs
-            user.find("test_gui_tag1").click()
-            await sleep(10)
-
-            # Check user is on the homepage and the run filter is set to the tag clicked
-            user.should_see("Welcome to the Aignostics Launchpad")
-            user.should_see("test_gui_tag1", marker="INPUT_RUNS_FILTER_NOTE_OR_TAGS")
-
-            # Check the first run is the one we created
-            user.should_see(marker=f"SIDEBAR_RUN_ITEM:0:{run_id}")
-
-
-@pytest.mark.e2e
-@pytest.mark.long_running
-@pytest.mark.flaky(retries=1, delay=5)
-@pytest.mark.timeout(timeout=60 * 5)
-@pytest.mark.sequential  # Helps on Linux with image analysis step otherwise timing out
-async def test_gui_run_download(  # noqa: PLR0915
-    user: User, runner: CliRunner, tmp_path: Path, silent_logging: None, record_property
-) -> None:
+        # Open the GUI and navigate to Atlas H&E-TME application
+        await user.open("/")
+        await user.should_see("Applications")
+        await user.should_see("Atlas H&E-TME", retries=100)
+        await user.should_see(marker="SIDEBAR_APPLICATION:he-tme", retries=100)
+        user.find(marker="SIDEBAR_APPLICATION:he-tme").click()
+        await user.should_see("Atlas H&E-TME", retries=100)
+        await user.should_see("The Atlas", retries=100)
+        await user.should_see("The Atlas H&E TME is an AI application")
+
+        # Check the latest application version is shown and select it
+        application_versions = Service().application_versions("he-tme")
+        latest_application_version = application_versions[0]
+        await user.should_see(latest_application_version.version)
+        user.find(marker="BUTTON_APPLICATION_VERSION_NEXT").click()
+
+        # Check the file picker opens and closes
+        await user.should_see("Select the folder with the whole slide images you want to analyze then click Next")
+        user.find(marker="BUTTON_WSI_SELECT_DATA").click()
+        await user.should_see("Ok")
+        await user.should_see("Cancel")
+        user.find(marker="BUTTON_WSI_SELECT_CUSTOM").click()
+        await user.should_see("Ok")
+        await user.should_see("Cancel")
+        user.find(marker="BUTTON_FILEPICKER_CANCEL").click()
+        await assert_notified(user, "You did not make a selection")
+
+        # Select the home directory and trigger metadata generation
+        user.find(marker="BUTTON_PYTEST_HOME").click()
+        await user.should_see(f"Selected folder {tmp_path!s} to analyze.")
+        await assert_notified(user, f"You chose directory {tmp_path!s}.")
+        user.find(marker="BUTTON_WSI_NEXT").click()
+        await assert_notified(user, "Found 1 slides for analysis", wait_seconds=20)
+        await sleep(10)
+
+        # Generate remaining metadata, going to upload UI
+        await user.should_see(
+            "The Launchpad has found all compatible slide files in your selected folder.",
+            retries=100,
+        )
+
+        user.find(marker="BUTTON_PYTEST_META").click()
+        await assert_notified(user, "Your metadata is now valid! Feel free to continue to the next step.")
+        user.find(marker="BUTTON_METADATA_NEXT").click()
+        await assert_notified(user, "Prepared upload UI.")
+        await user.should_see("Upload and submit your 1 slide(s) for analysis.", retries=100)
+
+        # Trigger upload and submission
+        await user.should_see(marker="BUTTON_SUBMISSION_UPLOAD")
+        user.find(marker="BUTTON_SUBMISSION_UPLOAD").click()
+        await assert_notified(user, "Uploading whole slide images to Aignostics Platform ...")
+        button_submission_upload: ui.button = user.find(marker="BUTTON_SUBMISSION_UPLOAD").elements.pop()
+        assert not button_submission_upload.enabled, "Upload button should be disabled after click"
+        await assert_notified(user, "Upload to Aignostics Platform completed.", wait_seconds=30)
+        await assert_notified(user, "Submitting application run ...")
+        await assert_notified(user, "Application run submitted with id", wait_seconds=10)
+
+        # Check user is redirected to the run page and run is running
+        await user.should_see(f"Run of he-tme:v{latest_application_version.version}", retries=200)
+        await user.should_see("status RUNNING")
+
+        # Check user can cancel run
+        await user.should_see(marker="BUTTON_APPLICATION_RUN_CANCEL", retries=100)
+        user.find(marker="BUTTON_APPLICATION_RUN_CANCEL").click()
+        await assert_notified(user, "Canceling application run with id")
+        await assert_notified(user, "Application run cancelled!")
+
+        # Check user sees refreshed run page and run is cancelled
+        await user.should_see("status CANCELED_USER", retries=200)
+
+
+@pytest.mark.sequential
+async def test_gui_run_download(user: User, runner: CliRunner, tmp_path: Path, silent_logging: None) -> None:
     """Test that the user can download a run result via the GUI."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE, SPEC-GUI-SERVICE")
     with patch(
-        "aignostics.application._gui._page_application_run_describe.get_user_data_directory",
-        return_value=tmp_path,
+        "aignostics.application._gui._page_application_run_describe.get_user_data_directory", return_value=tmp_path
     ):
-        # Find run
-        runs = Service().application_runs(
-            application_id=HETA_APPLICATION_ID,
-            application_version=HETA_APPLICATION_VERSION,
-            external_id=SPOT_0_GS_URL,
-            has_output=True,
-            limit=1,
-        )
+        latest_version = Service().application_version_latest(Service().application(HETA_APPLICATION_ID))
+        latest_version_id = latest_version.application_version_id
+        runs = Service().application_runs(limit=1, status=ApplicationRunStatus.COMPLETED)
+
         if not runs:
-            message = f"No matching runs found for application {HETA_APPLICATION_ID} ({HETA_APPLICATION_VERSION}). "
-            message += "This test requires the scheduled test test_application_runs_heta_version passing first."
-            pytest.skip(message)
-
-        run_id = runs[0].run_id
-
-        # Explore run
-        run = Service().application_run(run_id).details()
-        print(
-            f"Found existing run: {run.run_id}\n"
-            f"application: {run.application_id} ({run.version_number})\n"
-            f"status: {run.state}, output: {run.output}\n"
-            f"submitted at: {run.submitted_at}, terminated at: {run.terminated_at}\n"
-            f"statistics: {run.statistics!r}\n",
-            f"custom_metadata: {run.custom_metadata!r}\n",
-        )
+            pytest.fail("No completed runs found, please run the test first.")
+        # Find a completed run with the latest application version ID
+        run = None
+        for potential_run in runs:
+            if potential_run.application_version_id == latest_version_id:
+                run = potential_run
+                break
+        if not run:
+            pytest.skip(f"No completed runs found with version {latest_version_id}")
+
         # Step 1: Go to latest completed run
-        await user.open(f"/application/run/{run.run_id}")
-        await user.should_see(f"Run {run.run_id}", retries=100)
-        await user.should_see(
-            f"Run of {run.application_id} ({run.version_number})",
-            retries=100,
-        )
+        print(f"Found existing run: {run.application_run_id}, status: {run.status}")
+        await user.open(f"/application/run/{run.application_run_id}")
+        await user.should_see(f"Run {run.application_run_id}", retries=100)
+        await user.should_see(f"Run of {latest_version_id}", retries=100)

         # Step 2: Open Result Download dialog
         await user.should_see(marker="BUTTON_DOWNLOAD_RUN", retries=100)
         user.find(marker="BUTTON_DOWNLOAD_RUN").click()

         # Step 3: Select Data
-        download_run_button: ui.button = user.find(marker="DIALOG_BUTTON_DOWNLOAD_RUN").elements.pop()
-        assert not download_run_button.enabled, "Download button should be disabled before selecting target"
         await user.should_see(marker="BUTTON_DOWNLOAD_DESTINATION_DATA", retries=100)
         user.find(marker="BUTTON_DOWNLOAD_DESTINATION_DATA").click()

         # Step 3: Trigger Download
-        await sleep(2)  # Wait a bit for button state to update so we can click
-        download_run_button: ui.button = user.find(marker="DIALOG_BUTTON_DOWNLOAD_RUN").elements.pop()
-        assert download_run_button.enabled, "Download button should be enabled after selecting target"
+        await user.should_see(marker="DIALOG_BUTTON_DOWNLOAD_RUN", retries=100)
         user.find(marker="DIALOG_BUTTON_DOWNLOAD_RUN").click()
-        await assert_notified(user, "Downloading ...")

         # Check: Download completed
-        await assert_notified(user, "Download completed.", 60 * 4)
-        print_directory_structure(tmp_path, "downloaded_run")
-
-        # Check for directory layout as expected
-        run_dir = tmp_path / run.run_id
-        assert run_dir.is_dir(), f"Expected run directory {run_dir} not found"
-
-        subdirs = [d for d in run_dir.iterdir() if d.is_dir()]
-        assert len(subdirs) == 2, f"Expected two subdirectories in {run_dir}, but found {len(subdirs)}"
-
-        input_dir = run_dir / "input"
-        assert input_dir.is_dir(), f"Expected input directory {input_dir} not found"
-
-        results_dir = run_dir / SPOT_0_FILENAME.replace(".tiff", "")
-        assert results_dir.is_dir(), f"Expected run results directory {results_dir} not found"
-
-        # Check for input file having been downloaded
-        input_file = input_dir / SPOT_0_FILENAME
-        assert input_file.exists(), f"Expected input file {input_file} not found"
-        assert input_file.stat().st_size == SPOT_0_FILESIZE, (
-            f"Expected input file size {SPOT_0_FILESIZE}, but got {input_file.stat().st_size}"
+        await assert_notified(user, "Download completed.", 60)
+        print_directory_structure(tmp_path, "execute")
+        run_out_dir = tmp_path / run.application_run_id
+        assert run_out_dir.is_dir(), f"Expected run directory {run_out_dir} not found"
+        # Find any subdirectory in the run_out_dir
+        subdirs = [d for d in run_out_dir.iterdir() if d.is_dir()]
+        assert len(subdirs) > 0, f"Expected at least one subdirectory in {run_out_dir}, but found none"
+
+        # Take the first subdirectory found (item_out_dir)
+        item_out_dir = subdirs[0]
+        print(f"Found subdirectory: {item_out_dir.name}")
+
+        # Check for files in the item directory
+        files_in_item_dir = list(item_out_dir.glob("*"))
+        assert len(files_in_item_dir) == 9, (
+            f"Expected 9 files in {item_out_dir}, but found {len(files_in_item_dir)}: "
+            f"{[f.name for f in files_in_item_dir]}"
         )
-
-        # Check for files in the results directory
-        files_in_results_dir = list(results_dir.glob("*"))
-        assert len(files_in_results_dir) == 9, (
-            f"Expected 9 files in {results_dir}, but found {len(files_in_results_dir)}: "
-            f"{[f.name for f in files_in_results_dir]}"
-        )
-
-        print(f"Found files in {results_dir}:")
-        for filename, expected_size, tolerance_percent in SPOT_0_EXPECTED_RESULT_FILES:
-            file_path = results_dir / filename
-            if file_path.exists():
-                actual_size = file_path.stat().st_size
-                print(f"  {filename}: {actual_size} bytes (expected: {expected_size} ±{tolerance_percent}%)")
-            else:
-                print(f"  {filename}: NOT FOUND")
-        for filename, expected_size, tolerance_percent in SPOT_0_EXPECTED_RESULT_FILES:
-            file_path = results_dir / filename
-            assert file_path.exists(), f"Expected file {filename} not found"
-            actual_size = file_path.stat().st_size
-            min_size = expected_size * (100 - tolerance_percent) // 100
-            max_size = expected_size * (100 + tolerance_percent) // 100
-            assert min_size <= actual_size <= max_size, (
-                f"File size for {filename} ({actual_size} bytes) is outside allowed range "
-                f"({min_size} to {max_size} bytes, ±{tolerance_percent}% of {expected_size})"
-            )
diff --git a/tests/aignostics/application/service_test.py b/tests/aignostics/application/service_test.py
index 6a3bd872..ca73ac75 100644
--- a/tests/aignostics/application/service_test.py
+++ b/tests/aignostics/application/service_test.py
@@ -1,104 +1,13 @@
 """Tests to verify the service functionality of the application module."""

-from datetime import UTC, datetime, timedelta
-from unittest.mock import MagicMock, patch
-
 import pytest
 from typer.testing import CliRunner

 from aignostics.application import Service as ApplicationService
-from aignostics.application._utils import validate_due_date
-from aignostics.platform import NotFoundException, RunData, RunOutput
-from tests.constants_test import HETA_APPLICATION_ID, HETA_APPLICATION_VERSION
-
-
-@pytest.mark.unit
-def test_validate_due_date_none() -> None:
-    """Test that None is accepted (optional parameter)."""
-    # Should not raise any exception
-    validate_due_date(None)
-
-
-@pytest.mark.unit
-def test_validate_due_date_valid_formats() -> None:
-    """Test that valid ISO 8601 formats in the future are accepted."""
-    # Create a datetime 2 hours in the future
-    future_time = datetime.now(tz=UTC) + timedelta(hours=2)
-
-    valid_formats = [
-        future_time.isoformat(),  # With timezone offset like +00:00
-        future_time.strftime("%Y-%m-%dT%H:%M:%S") + "Z",  # With Z suffix
-        future_time.strftime("%Y-%m-%dT%H:%M:%S.%f") + "Z",  # With microseconds and Z
-        future_time.strftime("%Y-%m-%dT%H:%M:%S.%f%z"),  # With microseconds and timezone
-    ]
-
-    for time_str in valid_formats:
-        # Should not raise any exception
-        try:
-            validate_due_date(time_str)
-        except ValueError as e:
-            pytest.fail(f"Valid ISO 8601 format '{time_str}' was rejected: {e}")
-
-
-@pytest.mark.unit
-def test_validate_due_date_invalid_format() -> None:
-    """Test that invalid ISO 8601 formats are rejected."""
-    invalid_formats = [
-        "2025-10-19",  # Date only
-        "19:53:00",  # Time only
-        "2025/10/19 19:53:00",  # Wrong separators
-        "2025-10-19 19:53:00",  # Space instead of T
-        "not-a-date",  # Completely invalid
-        "2025-13-45T25:70:99Z",  # Invalid values
-    ]
-
-    for time_str in invalid_formats:
-        with pytest.raises(ValueError, match=r"Invalid ISO 8601 format"):
-            validate_due_date(time_str)
-
-
-@pytest.mark.unit
-def test_validate_due_date_past_datetime() -> None:
-    """Test that datetimes in the past are rejected."""
-    # Create a datetime 2 hours in the past
-    past_time = datetime.now(tz=UTC) - timedelta(hours=2)
-
-    past_formats = [
-        past_time.isoformat(),
-        past_time.strftime("%Y-%m-%dT%H:%M:%S") + "Z",
-    ]
-
-    for time_str in past_formats:
-        with pytest.raises(ValueError, match=r"due_date must be in the future"):
-            validate_due_date(time_str)
+from aignostics.platform import NotFoundException
+from tests.constants_test import HETA_APPLICATION_ID

-@pytest.mark.unit
-def test_validate_due_date_current_time() -> None:
-    """Test that current time (not future) is rejected."""
-    # Get current time - should be rejected as it's not in the future
-    current_time = datetime.now(tz=UTC)
-    current_time_str = current_time.isoformat()
-
-    with pytest.raises(ValueError, match=r"due_date must be in the future"):
-        validate_due_date(current_time_str)
-
-
-@pytest.mark.unit
-def test_validate_due_date_edge_case_one_second_future() -> None:
-    """Test that a datetime 1 second in the future is accepted."""
-    # Create a datetime 1 second in the future
-    future_time = datetime.now(tz=UTC) + timedelta(seconds=1)
-    future_time_str = future_time.isoformat()
-
-    # Should not raise any exception
-    try:
-        validate_due_date(future_time_str)
-    except ValueError as e:
-        pytest.fail(f"Future datetime '{future_time_str}' was rejected: {e}")
-
-
-@pytest.mark.e2e
 def test_application_version_valid_semver_formats(runner: CliRunner) -> None:
     """Test that valid semver formats are accepted."""
     from aignostics.application import Service as ApplicationService
@@ -132,302 +41,50 @@
         pytest.skip(f"Application '{version_id.split(':')[0]}' not found, skipping test for this version format.")

-@pytest.mark.unit
-def test_application_version_invalid_semver_formats(runner: CliRunner, record_property) -> None:
+def test_application_version_invalid_semver_formats(runner: CliRunner) -> None:
     """Test that invalid semver formats are rejected with ValueError."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE")
     from aignostics.application import Service as ApplicationService

     service = ApplicationService()
-    invalid_application_versions = [
-        "test-app:v1.0.0",  # legacy format
-        "bla",  # not semver
+    invalid_formats = [
+        "test-app:1.0.0",  # Missing 'v' prefix
+        "test-app:v1",  # Incomplete version
+        "test-app:v1.0",  # Incomplete version
+        "test-app:v1.0.0-",  # Trailing dash
+        "test-app:v1.0.0+",  # Trailing plus
+        "test-app:v1.0.0-+",  # Invalid prerelease
+        "test-app:v1.0.0-+123",  # Invalid prerelease
+        "test-app:v+invalid",  # Invalid format
+        "test-app:v-invalid",  # Invalid format
+        "test-app:v1.0.0.DEV.SNAPSHOT",  # Too many version parts
+        "test-app:v1.0-SNAPSHOT-123",  # Invalid format
+        "test-app:v",  # Just 'v'
+        "test-app:vx.y.z",  # Non-numeric
+        "test-app:v1.0.0-αα",  # Non-ASCII in prerelease  # noqa: RUF001
+        ":v1.0.0",  # Missing application ID
+        "test-app:",  # Missing version
+        "no-colon-v1.0.0",  # Missing colon separator
     ]

-    for application_version in invalid_application_versions:
-        with pytest.raises(ValueError, match=r"not compliant with semantic versioning"):
-            service.application_version("test-app", application_version)
+    for version_id in invalid_formats:
+        with pytest.raises(ValueError, match=r"Invalid application version id format"):
+            service.application_version(version_id)

-@pytest.mark.e2e
-def test_application_version_use_latest_fallback(runner: CliRunner, record_property) -> None:
-    """Test that resolving the latest version works."""
-    record_property("tested-item-id", "SPEC-APPLICATION-SERVICE")
+def test_application_version_use_latest_fallback(runner: CliRunner) -> None:
+    """Test that use_latest_if_no_version_given works correctly."""
     service = ApplicationService()
     try:
-        app_version = service.application_version(HETA_APPLICATION_ID)
-        assert app_version is not None
-        assert app_version.version_number == HETA_APPLICATION_VERSION
-    except NotFoundException as e:
-        if "No versions found for application" in str(e):
-            pass  # This is expected behavior
+        result = service.application_version(HETA_APPLICATION_ID, use_latest_if_no_version_given=True)
+        assert result is not None
+        assert result.application_version_id.startswith(f"{HETA_APPLICATION_ID}:v")
     except ValueError as e:
-        pytest.fail(f"Unexpected error: {e}")
-
-    with pytest.raises(ValueError, match=r"not compliant with semantic versioning"):
-        service.application_version(HETA_APPLICATION_ID, "invalid-format")
-
-
-@pytest.mark.e2e
-@pytest.mark.timeout(timeout=60 * 2)
-def test_application_versions_are_unique(runner: CliRunner) -> None:
-    """Check that application versions are unique (currently fails due to backend bug)."""
-    # Get all applications
-    service = ApplicationService()
-    applications = service.applications()
-
-    # Check each application for duplicate versions
-    for app in applications:
-        versions = service.application_versions(app.application_id)
-
-        # Extract version numbers
-        version_numbers = [v.version_number for v in versions]
-
-        # Check for duplicates
-        unique_versions = set(version_numbers)
-        assert len(version_numbers) == len(unique_versions), (
-            f"Application '{app.application_id}' has duplicate versions. "
-            f"Found {len(version_numbers)} versions but only {len(unique_versions)} unique: {version_numbers}"
-        )
-
-
-@pytest.mark.unit
-def test_application_runs_query_with_note_regex_raises() -> None:
-    """Test that using query with note_regex raises ValueError."""
-    service = ApplicationService()
-
-    with pytest.raises(ValueError, match=r"Cannot use 'query' parameter together with 'note_regex' parameter"):
-        service.application_runs(query="test", note_regex="test.*")
-
-
-@pytest.mark.unit
-def test_application_runs_query_with_tags_raises() -> None:
-    """Test that using query with tags raises ValueError."""
-    service = ApplicationService()
-
-    with pytest.raises(ValueError, match=r"Cannot use 'query' parameter together with 'tags' parameter"):
-        service.application_runs(query="test", tags={"tag1", "tag2"})
-
-
-@pytest.mark.unit
-@patch("aignostics.application._service.Service._get_platform_client")
-def test_application_runs_query_searches_note_and_tags(mock_get_client: MagicMock) -> None:
-    """Test that query parameter searches both note and tags with union semantics."""
-    # Create mock runs
-    run_from_note = MagicMock(spec=RunData)
-    run_from_note.run_id = "run-note-123"
-    run_from_note.output = RunOutput.FULL
-
-    run_from_tag = MagicMock(spec=RunData)
-    run_from_tag.run_id = "run-tag-456"
-    run_from_tag.output = RunOutput.FULL
-
-    run_from_both = MagicMock(spec=RunData)
-    run_from_both.run_id = "run-both-789"
-    run_from_both.output = RunOutput.FULL
-
-    # Mock the platform client to return different runs for note and tag searches
-    mock_client = MagicMock()
-    mock_runs = MagicMock()
-
-    # First call returns runs matching note, second call returns runs matching tags
-    mock_runs.list_data.side_effect = [
-        iter([run_from_note, run_from_both]),  # Note search results
-        iter([run_from_tag]),  # Tag search results (run_from_both already in note results, so not added)
-    ]
-
-    mock_client.runs = mock_runs
-    mock_get_client.return_value = mock_client
-
-    service = ApplicationService()
-    results = service.application_runs(query="test")
-
-    # Verify we got union of both searches (3 unique runs)
-    assert len(results) == 3
-    assert run_from_note in results
-    assert run_from_tag in results
-    assert run_from_both in results
-
-    # Verify that list_data was called twice (once for note, once for tags)
-    assert mock_runs.list_data.call_count == 2
-
-    # Verify the custom_metadata parameters contain the escaped query with case insensitive flag
-    calls = mock_runs.list_data.call_args_list
-    note_call_kwargs = calls[0][1]
-    tag_call_kwargs = calls[1][1]
-
-    assert "custom_metadata" in note_call_kwargs
-    assert "$.sdk.note" in note_call_kwargs["custom_metadata"]
-    assert 'like_regex "test"' in note_call_kwargs["custom_metadata"]
-    assert 'flag "i"' in note_call_kwargs["custom_metadata"]
-
-    assert "custom_metadata" in tag_call_kwargs
-    assert "$.sdk.tags" in tag_call_kwargs["custom_metadata"]
-    assert 'like_regex "test"' in tag_call_kwargs["custom_metadata"]
-    assert 'flag "i"' in tag_call_kwargs["custom_metadata"]
-
-
-@pytest.mark.unit
-@patch("aignostics.application._service.Service._get_platform_client")
-def test_application_runs_query_deduplicates_results(mock_get_client: MagicMock) -> None:
-    """Test that query parameter deduplicates runs that match both note and tags."""
-    # Create mock run that matches both searches
-    run_from_both = MagicMock(spec=RunData)
-    run_from_both.run_id = "run-both-123"
-    run_from_both.output = RunOutput.FULL
-
-    # Mock the platform client to return the same run from both searches
-    mock_client = MagicMock()
-    mock_runs = MagicMock()
-
-    # Both searches return the same run
-    mock_runs.list_data.side_effect = [
-        iter([run_from_both]),  # Note search results
-        iter([run_from_both]),  # Tag search results (should be deduplicated)
-    ]
-
-    mock_client.runs = mock_runs
-    mock_get_client.return_value = mock_client
-
-    service = ApplicationService()
-    results = service.application_runs(query="test")
-
-    # Verify we only got one run (deduplicated)
-    assert len(results) == 1
-    assert results[0].run_id == "run-both-123"
-
-
-@pytest.mark.unit
-@patch("aignostics.application._service.Service._get_platform_client")
-def test_application_runs_query_respects_limit(mock_get_client: MagicMock) -> None:
-    """Test that query parameter respects the limit parameter."""
-    # Create mock runs
-    runs = []
-    for i in range(10):
-        run = MagicMock(spec=RunData)
-        run.run_id = f"run-{i}"
-        run.output = RunOutput.FULL
-        runs.append(run)
-
-    # Mock the platform client to return many runs
-    mock_client = MagicMock()
-    mock_runs = MagicMock()
-
-    # Note search returns 5 runs, tag search returns 5 runs
-    mock_runs.list_data.side_effect = [
-        iter(runs[:5]),  # Note search results
-        iter(runs[5:]),  # Tag search results
-    ]
-
-    mock_client.runs = mock_runs
-    mock_get_client.return_value = mock_client
-
-    service = ApplicationService()
-    results = service.application_runs(query="test", limit=3)
-
-    # Verify we only got 3 runs despite having 10 total
-    assert len(results) == 3
-
-
-@pytest.mark.unit
-@patch("aignostics.application._service.Service._get_platform_client")
-def test_application_runs_query_escapes_special_characters(mock_get_client: MagicMock) -> None:
-    """Test that query parameter properly escapes special regex characters."""
-    # Mock the platform client
-    mock_client = MagicMock()
-    mock_runs = MagicMock()
-    mock_runs.list_data.side_effect = [
-        iter([]),  # Note search results
-        iter([]),  # Tag search results
-    ]
-    mock_client.runs = mock_runs
-    mock_get_client.return_value = mock_client
-
-    service = ApplicationService()
-    # Use query with special characters that need escaping
-    service.application_runs(query='test"value\\path')
-
-    # Verify the custom_metadata parameters contain properly escaped query
-    calls = mock_runs.list_data.call_args_list
-    note_call_kwargs = calls[0][1]
-    tag_call_kwargs = calls[1][1]
-
-    # Check that double quotes and backslashes are properly escaped
-    assert 'test\\"value\\\\path' in note_call_kwargs["custom_metadata"]
-    assert 'test\\"value\\\\path' in tag_call_kwargs["custom_metadata"]
-
-
-@pytest.mark.unit
-@patch("aignostics.application._service.Service._get_platform_client")
-def test_application_run_update_custom_metadata_success(mock_get_client: MagicMock) -> None:
-    """Test successful update of run custom metadata."""
-    mock_client = MagicMock()
-    mock_run = MagicMock()
-    mock_client.run.return_value = mock_run
-    mock_get_client.return_value = mock_client
-
-    service = ApplicationService()
-    custom_metadata = {"key": "value", "tags": ["tag1", "tag2"]}
-
-    # Should not raise any exception
-    service.application_run_update_custom_metadata("run-123", custom_metadata)
-
-    # Verify the run() method was called with correct run_id
-    mock_client.run.assert_called_once_with("run-123")
-    # Verify the update_custom_metadata method was called with correct arguments
-    mock_run.update_custom_metadata.assert_called_once_with(custom_metadata)
-
-
-@pytest.mark.unit
-@patch("aignostics.application._service.Service._get_platform_client")
-def test_application_run_update_custom_metadata_not_found(mock_get_client: MagicMock) -> None:
-    """Test update metadata with non-existent run."""
-    mock_client = MagicMock()
-    mock_run = MagicMock()
-    mock_run.update_custom_metadata.side_effect = NotFoundException("Run not found")
-    mock_client.run.return_value = mock_run
-    mock_get_client.return_value = mock_client
-
-    service = ApplicationService()
-
-    with pytest.raises(NotFoundException, match="not found"):
-        service.application_run_update_custom_metadata("invalid-run-id", {"key": "value"})
-
-
-@pytest.mark.unit
-@patch("aignostics.application._service.Service._get_platform_client")
-def test_application_run_update_item_custom_metadata_success(mock_get_client: MagicMock) -> None:
-    """Test successful update of item custom metadata."""
-    mock_client = MagicMock()
-    mock_run = MagicMock()
-    mock_client.run.return_value = mock_run
-    mock_get_client.return_value = mock_client
-
-    service = ApplicationService()
-    custom_metadata = {"key": "value", "note": "test note"}
-
-    # Should not raise any exception
-    service.application_run_update_item_custom_metadata("run-123", "item-ext-id", custom_metadata)
-
-    # Verify the run() method was called with correct run_id
-    mock_client.run.assert_called_once_with("run-123")
-    # Verify the update_item_custom_metadata method was called with correct arguments
-    mock_run.update_item_custom_metadata.assert_called_once_with("item-ext-id", custom_metadata)
-
-
-@pytest.mark.unit
-@patch("aignostics.application._service.Service._get_platform_client")
-def test_application_run_update_item_custom_metadata_not_found(mock_get_client: MagicMock) -> None:
-    """Test update item metadata with non-existent run or item."""
-    mock_client = MagicMock()
-    mock_run = MagicMock()
-    mock_run.update_item_custom_metadata.side_effect = NotFoundException("Item not found")
-    mock_client.run.return_value = mock_run
-    mock_get_client.return_value = mock_client
-
-    service = ApplicationService()
+        if "no latest version available" in str(e):
+            pass  # This is expected behavior
+        else:
+            pytest.fail(f"Unexpected error: {e}")

-    with pytest.raises(NotFoundException, match="not found"):
-        service.application_run_update_item_custom_metadata("run-123", "invalid-item-id", {"key": "value"})
+    with pytest.raises(ValueError, match=r"Invalid application version id format"):
+        service.application_version("invalid-format", use_latest_if_no_version_given=False)
diff --git a/tests/aignostics/application/utils_test.py b/tests/aignostics/application/utils_test.py
deleted file mode 100644
index 12b57a4f..00000000
--- a/tests/aignostics/application/utils_test.py
+++ /dev/null
@@ -1,437 +0,0 @@
-"""Tests to verify the utility functions of the application module."""
-
-from datetime import UTC, datetime
-from pathlib import Path
-from unittest.mock import MagicMock, Mock, patch
-
-import pytest
-
-from aignostics.application._utils import (
-    application_run_status_to_str,
-    get_mime_type_for_artifact,
-    get_supported_extensions_for_application,
-    print_runs_non_verbose,
-    print_runs_verbose,
-    read_metadata_csv_to_dict,
-    retrieve_and_print_run_details,
-    write_metadata_dict_to_csv,
-)
-from aignostics.constants import (
-    HETA_APPLICATION_ID,
-    TEST_APP_APPLICATION_ID,
-    WSI_SUPPORTED_FILE_EXTENSIONS,
-    WSI_SUPPORTED_FILE_EXTENSIONS_TEST_APP,
-)
-from aignostics.platform import (
-    ItemResult,
-    ItemState,
-    ItemTerminationReason,
-    OutputArtifactElement,
-    RunData,
-    RunItemStatistics,
-    RunOutput,
-    RunState,
-    RunTerminationReason,
-)
-
-
-@pytest.mark.unit
-def test_get_supported_extensions_for_heta_application() -> None:
-    """Test that HETA application returns the correct set of supported extensions."""
-    extensions = get_supported_extensions_for_application(HETA_APPLICATION_ID)
-
-    assert extensions == WSI_SUPPORTED_FILE_EXTENSIONS
-    assert isinstance(extensions, set)
-    assert len(extensions) > 0
-
-
-@pytest.mark.unit
-def test_get_supported_extensions_for_test_app() -> None:
-    """Test that test application returns the correct set of supported extensions."""
-    extensions = get_supported_extensions_for_application(TEST_APP_APPLICATION_ID)
-
-    assert extensions == WSI_SUPPORTED_FILE_EXTENSIONS_TEST_APP
-    assert isinstance(extensions, set)
-    assert len(extensions) > 0
-
-
-@pytest.mark.unit
-def test_get_supported_extensions_for_unsupported_application() -> None:
-    """Test that an unsupported application ID raises RuntimeError."""
-    unsupported_app_id = "unsupported-application-id"
-
-    with pytest.raises(RuntimeError) as exc_info:
-        get_supported_extensions_for_application(unsupported_app_id)
-
-    assert f"Unsupported application {unsupported_app_id}" in str(exc_info.value)
-
-
-@pytest.mark.unit
-def test_get_supported_extensions_for_empty_string() -> None:
-    """Test that an empty string application ID raises RuntimeError."""
-    with pytest.raises(RuntimeError) as exc_info:
-        get_supported_extensions_for_application("")
-
-    assert "Unsupported application" in str(exc_info.value)
-
-
-@pytest.mark.unit
-def test_get_supported_extensions_returns_different_sets() -> None:
-    """Test that different applications return different extension sets."""
-    heta_extensions = get_supported_extensions_for_application(HETA_APPLICATION_ID)
-    test_extensions = get_supported_extensions_for_application(TEST_APP_APPLICATION_ID)
-
-    # Verify they are separate sets (even if they might have the same contents)
-    assert heta_extensions is WSI_SUPPORTED_FILE_EXTENSIONS
-    assert test_extensions is WSI_SUPPORTED_FILE_EXTENSIONS_TEST_APP
-
-
-# Tests for application_run_status_to_str
-
-
-@pytest.mark.unit
-def test_application_run_status_to_str_pending() -> None:
-    """Test conversion of PENDING status to string."""
-    result = application_run_status_to_str(RunState.PENDING)
-    assert result == "pending"
-
-
-@pytest.mark.unit
-def test_application_run_status_to_str_processing() -> None:
-    """Test conversion of PROCESSING status to string."""
-    result = application_run_status_to_str(RunState.PROCESSING)
-    assert result == "processing"
-
-
-@pytest.mark.unit
-def test_application_run_status_to_str_terminated() -> None:
-    """Test conversion of TERMINATED status to string."""
-    result = application_run_status_to_str(RunState.TERMINATED)
-    assert result == "terminated"
-
-
-# Tests for CSV utilities
-
-
-@pytest.mark.unit
-def test_write_and_read_metadata_csv(tmp_path: Path) -> None:
-    """Test writing and reading metadata CSV files."""
-    metadata_csv = tmp_path / "metadata.csv"
-    test_data = [
-        {"name": "file1.svs", "size": "1024", "type": "image"},
-        {"name": "file2.svs", "size": "2048", "type": "image"},
-    ]
-
-    # Write CSV
-    result_path = write_metadata_dict_to_csv(metadata_csv, test_data)
-    assert result_path == metadata_csv
-    assert metadata_csv.exists()
-
-    # Read CSV back
-    read_data = read_metadata_csv_to_dict(metadata_csv)
-    assert read_data is not None
-    assert len(read_data) == 2
-    assert read_data[0]["name"] == "file1.svs"
-    assert read_data[1]["size"] == "2048"
-
-
-@pytest.mark.unit
-def test_read_metadata_csv_invalid_file(tmp_path: Path) -> None:
-    """Test reading invalid CSV file returns None."""
-    invalid_csv = tmp_path / "invalid.csv"
-    invalid_csv.write_text("not;valid;csv\ndata")
-
-    result = read_metadata_csv_to_dict(invalid_csv)
-    # Should still work but may return unexpected structure
-    assert result is not None or result is None  # Either outcome is acceptable
-
-
-@pytest.mark.unit
-def test_read_metadata_csv_nonexistent_file(tmp_path: Path) -> None:
-    """Test reading non-existent CSV file."""
-    nonexistent = tmp_path / "does_not_exist.csv"
-
-    with pytest.raises(FileNotFoundError):
-        read_metadata_csv_to_dict(nonexistent)
-
-
-# Tests for MIME type utilities
-
-
-@pytest.mark.unit
-def test_get_mime_type_for_input_artifact() -> None:
-    """Test getting MIME type from InputArtifactData."""
-    # InputArtifactData is actually the response object from the API with different fields
-    # For now, skip testing this as it requires mocking the full API response
-    # The function is tested indirectly through integration tests
-
-
-@pytest.mark.unit
-def test_get_mime_type_for_output_artifact() -> None:
-    """Test getting MIME type from OutputArtifactData."""
-    # OutputArtifactData requires additional fields we don't have access to in unit tests
-    # The function is tested indirectly through integration tests
-
-
-@pytest.mark.unit
-def test_get_mime_type_for_output_artifact_element_with_media_type() -> None:
-    """Test getting MIME type from OutputArtifactElement with media_type in metadata."""
-    from aignx.codegen.models import ArtifactOutput, ArtifactState, ArtifactTerminationReason
-
-    artifact = OutputArtifactElement(
-        output_artifact_id="artifact-456",
-        name="data.json",
-        download_url="https://example.com/download",
-        metadata={"media_type": "application/json"},
-        state=ArtifactState.TERMINATED,
-        termination_reason=ArtifactTerminationReason.SUCCEEDED,
-        output=ArtifactOutput.AVAILABLE,
-        error_code=None,
-        error_message=None,
-    )
-
-    result = get_mime_type_for_artifact(artifact)
-    assert result == "application/json"
-
-
-@pytest.mark.unit
-def test_get_mime_type_for_output_artifact_element_with_mime_type() -> None:
-    """Test getting MIME type from OutputArtifactElement with mime_type in metadata."""
-    from aignx.codegen.models import ArtifactOutput, ArtifactState, ArtifactTerminationReason
-
-    artifact = OutputArtifactElement(
-        output_artifact_id="artifact-789",
-        name="data.csv",
-        download_url="https://example.com/download",
-        metadata={"mime_type": "text/csv"},
-        state=ArtifactState.TERMINATED,
-        termination_reason=ArtifactTerminationReason.SUCCEEDED,
-        output=ArtifactOutput.AVAILABLE,
-        error_code=None,
-        error_message=None,
-    )
-
-    result = get_mime_type_for_artifact(artifact)
-    assert result == "text/csv"
-
-
-@pytest.mark.unit
-def test_get_mime_type_for_output_artifact_element_default() -> None:
-    """Test getting MIME type defaults to application/octet-stream."""
-    from aignx.codegen.models import ArtifactOutput, ArtifactState, ArtifactTerminationReason
-
-    artifact = OutputArtifactElement(
-        output_artifact_id="artifact-999",
-        name="unknown.bin",
-        download_url="https://example.com/download",
-        metadata={},
-        state=ArtifactState.TERMINATED,
-        termination_reason=ArtifactTerminationReason.SUCCEEDED,
-        output=ArtifactOutput.AVAILABLE,
-        error_code=None,
-        error_message=None,
-    )
-
-    result = get_mime_type_for_artifact(artifact)
-    assert result == "application/octet-stream"
-
-
-# Tests for print functions
-
-
-@pytest.mark.unit
-@patch("aignostics.application._utils.console")
-def test_print_runs_verbose_with_single_run(mock_console: Mock) -> None:
-    """Test verbose printing of a single run."""
-    submitted_at = datetime(2025, 1, 1, 12, 0, 0, tzinfo=UTC)
-    terminated_at = datetime(2025, 1, 1, 13, 0, 0, tzinfo=UTC)
-
-    run = RunData(
-        run_id="run-123",
-        application_id="he-tme",
-        version_number="1.0.0",
-        state=RunState.TERMINATED,
-        termination_reason=RunTerminationReason.ALL_ITEMS_PROCESSED,
-        output=RunOutput.FULL,
-        statistics=RunItemStatistics(
-            item_count=5,
-            item_pending_count=0,
-            item_processing_count=0,
-            item_skipped_count=0,
-            item_succeeded_count=5,
-            item_user_error_count=0,
-            item_system_error_count=0,
-        ),
-        submitted_at=submitted_at,
-        submitted_by="user@example.com",
-        terminated_at=terminated_at,
-        custom_metadata=None,
-        error_message=None,
-        error_code=None,
-    )
-
-    print_runs_verbose([run])
-
-    mock_console.print.assert_called_once()
-    call_args = mock_console.print.call_args[0][0]
-    assert "Application Runs:" in call_args
-    assert "run-123" in call_args
-    assert "he-tme" in call_args
-
-
-@pytest.mark.unit
-@patch("aignostics.application._utils.console")
-def test_print_runs_non_verbose_with_error(mock_console: Mock) -> None:
-    """Test non-verbose printing of runs with errors."""
-    submitted_at = datetime(2025, 1, 1, 12, 0, 0, tzinfo=UTC)
-
-    run = RunData(
-        run_id="run-456",
-        application_id="test-app",
-        version_number="0.0.1",
-        state=RunState.TERMINATED,
-        termination_reason=RunTerminationReason.CANCELED_BY_USER,
-        output=RunOutput.PARTIAL,
-        statistics=RunItemStatistics(
-            item_count=3,
-            item_pending_count=0,
-            item_processing_count=0,
-            item_skipped_count=0,
-            item_succeeded_count=1,
-            item_user_error_count=2,
-            item_system_error_count=0,
-        ),
-        submitted_at=submitted_at,
-        submitted_by="user@example.com",
-        terminated_at=None,
-        custom_metadata={"key": "value"},
-        error_message="User canceled the run",
-        error_code="USER_CANCELED",
-    )
-
-    print_runs_non_verbose([run])
-
-    mock_console.print.assert_called_once()
-    call_args = mock_console.print.call_args[0][0]
-    assert "Application Run IDs:" in call_args
-    assert "run-456" in call_args
-    assert "USER_CANCELED" in call_args
-
-
-@pytest.mark.unit
-@patch("aignostics.application._utils.console")
-def test_retrieve_and_print_run_details_with_items(mock_console: Mock) -> None:
-    """Test retrieving and printing run details with items."""
-    submitted_at = datetime(2025, 1, 1, 12, 0, 0, tzinfo=UTC)
-    terminated_at = datetime(2025, 1, 1, 13, 0, 0, tzinfo=UTC)
-
-    # Mock run data
-    run_data = RunData(
-        run_id="run-789",
-        application_id="he-tme",
-        version_number="1.0.0",
-        state=RunState.TERMINATED,
-        termination_reason=RunTerminationReason.ALL_ITEMS_PROCESSED,
-        output=RunOutput.FULL,
-        statistics=RunItemStatistics(
-            item_count=2,
-            item_pending_count=0,
-            item_processing_count=0,
-            item_skipped_count=0,
-            item_succeeded_count=2,
-            item_user_error_count=0,
-            item_system_error_count=0,
-        ),
-        submitted_at=submitted_at,
-        submitted_by="user@example.com",
-        terminated_at=terminated_at,
-        custom_metadata=None,
-        error_message=None,
-        error_code=None,
-    )
-
-    # Mock item results
-    from aignx.codegen.models import ArtifactOutput, ArtifactState, ArtifactTerminationReason, ItemOutput
-
-    item_result = ItemResult(
-        item_id="item-123",
-        external_id="slide-001",
-        state=ItemState.TERMINATED,
-        termination_reason=ItemTerminationReason.SUCCEEDED,
-        output=ItemOutput.FULL,
-        error_message=None,
-        error_code=None,
-        custom_metadata=None,
-        custom_metadata_checksum=None,
-        terminated_at=terminated_at,
- output_artifacts=[ - OutputArtifactElement( - output_artifact_id="artifact-abc", - name="result.parquet", - download_url="https://example.com/result.parquet", - metadata={"media_type": "application/vnd.apache.parquet"}, - state=ArtifactState.TERMINATED, - termination_reason=ArtifactTerminationReason.SUCCEEDED, - output=ArtifactOutput.AVAILABLE, - error_code=None, - error_message=None, - ) - ], - ) - - # Create mock run handle - mock_run = MagicMock() - mock_run.details.return_value = run_data - mock_run.results.return_value = [item_result] - - retrieve_and_print_run_details(mock_run) - - # Verify console.print was called multiple times (for run details and items) - assert mock_console.print.call_count >= 2 - - # Check that run details were printed - first_call = mock_console.print.call_args_list[0][0][0] - assert "Run Details for run-789" in first_call - assert "he-tme" in first_call - - -@pytest.mark.unit -@patch("aignostics.application._utils.console") -def test_retrieve_and_print_run_details_no_items(mock_console: Mock) -> None: - """Test retrieving and printing run details with no items.""" - submitted_at = datetime(2025, 1, 1, 12, 0, 0, tzinfo=UTC) - - run_data = RunData( - run_id="run-empty", - application_id="test-app", - version_number="0.0.1", - state=RunState.PENDING, - termination_reason=None, - output=RunOutput.NONE, - statistics=RunItemStatistics( - item_count=0, - item_pending_count=0, - item_processing_count=0, - item_skipped_count=0, - item_succeeded_count=0, - item_user_error_count=0, - item_system_error_count=0, - ), - submitted_at=submitted_at, - submitted_by="user@example.com", - terminated_at=None, - custom_metadata=None, - error_message=None, - error_code=None, - ) - - mock_run = MagicMock() - mock_run.details.return_value = run_data - mock_run.results.return_value = [] - - retrieve_and_print_run_details(mock_run) - - # Should print run details and "No item results available" - assert mock_console.print.call_count >= 2 - last_call = str(mock_console.print.call_args_list[-1]) - assert "No item results available" in last_call diff --git a/tests/aignostics/bucket/TC-BUCKET-CLI-01.feature b/tests/aignostics/bucket/TC-BUCKET-CLI-01.feature deleted file mode 100644 index f7abcbba..00000000 --- a/tests/aignostics/bucket/TC-BUCKET-CLI-01.feature +++ /dev/null @@ -1,23 +0,0 @@ -Feature: Bucket Complete Data Lifecycle Management - - The system provides complete bucket operations for file storage including - upload, discovery, download, and deletion with content validation and - cleanup verification. 
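The lifecycle this feature describes is exactly what the CLI tests below drive via Typer's `CliRunner`. A minimal sketch of that flow, assembled from the `bucket` commands visible in the deleted tests (`upload --destination-prefix`, `find --what-is-key`, `download --destination`, `delete --no-dry-run`); the prefix convention and file size are illustrative, and live bucket credentials are assumed:

```python
import uuid
from pathlib import Path

from typer.testing import CliRunner

from aignostics.cli import cli


def bucket_lifecycle_sketch(tmp_path: Path) -> None:
    """Upload, find, download, and delete a single object via the bucket CLI."""
    runner = CliRunner()
    prefix = f"test/{uuid.uuid4().hex[:8]}"  # unique prefix so runs don't collide

    # Upload one local file under the unique prefix.
    local = tmp_path / "sample.bin"
    local.write_bytes(b"\x00" * 1024)
    result = runner.invoke(cli, ["bucket", "upload", str(local), "--destination-prefix", prefix])
    assert result.exit_code == 0

    # Find it by its exact key, download it, then delete it for real.
    key = f"{prefix}/sample.bin"
    assert runner.invoke(cli, ["bucket", "find", key, "--what-is-key"]).exit_code == 0
    assert runner.invoke(
        cli, ["bucket", "download", key, "--what-is-key", "--destination", str(tmp_path)]
    ).exit_code == 0
    assert runner.invoke(cli, ["bucket", "delete", key, "--what-is-key", "--no-dry-run"]).exit_code == 0
```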
- - @tests:SPEC-BUCKET-SERVICE - @tests:SWR-BUCKET-1-1 - @tests:SWR-BUCKET-1-2 - @tests:SWR-BUCKET-1-3 - @tests:SWR-BUCKET-1-4 - @id:TC-BUCKET-CLI-01 - Scenario: System processes complete bucket data lifecycle operations - Given the user creates test files in multiple subdirectories - When the user uploads the directory structure to bucket storage - Then the system shall store all files with proper organization - When the user searches for uploaded files - Then the system shall return all uploaded files with correct paths - When the user downloads files to a new location - Then the system shall retrieve files with identical content validation - When the user deletes files individually from bucket storage - Then the system shall remove each file and confirm deletion - And the system shall report file not found for subsequent deletion attempts diff --git a/tests/aignostics/bucket/TC-BUCKET-GUI-01.feature b/tests/aignostics/bucket/TC-BUCKET-GUI-01.feature deleted file mode 100644 index 0fa26f3a..00000000 --- a/tests/aignostics/bucket/TC-BUCKET-GUI-01.feature +++ /dev/null @@ -1,25 +0,0 @@ -Feature: Bucket GUI File Management Operations - - The system provides graphical interface for bucket file operations including - file upload verification, grid display, download functionality, and deletion - with real-time UI updates and confirmation. - - @tests:SPEC-BUCKET-SERVICE - @tests:SWR-BUCKET-1-5 - @tests:SWR-BUCKET-1-6 - @tests:SWR-BUCKET-1-7 - @tests:SWR-BUCKET-1-8 - @tests:SWR-BUCKET-1-9 - @id:TC-BUCKET-GUI-01 - Scenario: System processes bucket file operations through GUI interface - Given the user creates test files and uploads them via CLI - When the user navigates to the bucket page in GUI - Then the system shall display uploaded files in the bucket grid - And the system shall show download and delete buttons in disabled state - When the user selects files in the grid interface - Then the system shall enable download and delete operation buttons - When the user triggers file download through GUI controls - Then the system shall download selected files and confirm completion - When the user triggers file deletion through GUI controls - Then the system shall remove selected files from bucket storage - And the system shall update the grid to reflect file removal diff --git a/tests/aignostics/bucket/cli_test.py b/tests/aignostics/bucket/cli_test.py index 6561f192..fa20bba1 100644 --- a/tests/aignostics/bucket/cli_test.py +++ b/tests/aignostics/bucket/cli_test.py @@ -1,10 +1,10 @@ """Tests to verify the CLI functionality of the bucket module.""" +import json import os import uuid from pathlib import Path -import pytest from typer.testing import CliRunner from aignostics.cli import cli @@ -13,10 +13,7 @@ MESSAGE_NOT_YET_IMPLEMENTED = "NOT YET IMPLEMENTED" -@pytest.mark.e2e -@pytest.mark.long_running -@pytest.mark.timeout(timeout=60 * 15) -def test_cli_bucket_flow(runner: CliRunner, tmpdir, record_property) -> None: # noqa: C901, PLR0912, PLR0915 +def test_cli_bucket_flow(runner: CliRunner, tmpdir) -> None: # noqa: C901, PLR0912, PLR0915 """E2E flow testing all bucket CLI commands. 1. Creates 9 files with 2 sub directories in tmpdir, with total file size of 1MB @@ -28,7 +25,6 @@ def test_cli_bucket_flow(runner: CliRunner, tmpdir, record_property) -> None: # 7. No longer finds any of the 9 files 8. 
Tries to delete a file that does not exist and gets "Object with key '{file}' not found message """ - record_property("tested-item-id", "TC-BUCKET-CLI-01") import re import psutil @@ -149,120 +145,22 @@ def test_cli_bucket_flow(runner: CliRunner, tmpdir, record_property) -> None: # assert f"No objects found matching pattern ['{non_existent_file}']" in normalize_output(result.stdout) -@pytest.mark.e2e -@pytest.mark.timeout(timeout=60 * 2) def test_cli_bucket_purge(runner: CliRunner) -> None: """Check bucket purge command runs successfully.""" result = runner.invoke(cli, ["bucket", "purge", "--dry-run"]) assert result.exit_code == 0 -@pytest.mark.e2e -@pytest.mark.timeout(timeout=60 * 2) -def test_cli_bucket_find_invalid_regex(runner: CliRunner) -> None: - """Test bucket find with invalid regex pattern triggers ValueError.""" - # Use an invalid regex pattern (unclosed bracket) - result = runner.invoke(cli, ["bucket", "find", "[invalid(regex"]) - assert result.exit_code == 2 - assert "Invalid regex pattern" in result.output or "Failed to find objects" in result.output - - -@pytest.mark.e2e -@pytest.mark.timeout(timeout=60 * 2) -def test_cli_bucket_find_nonexistent_key(runner: CliRunner) -> None: - """Test bucket find with non-existent key returns empty result.""" - # Use a key that definitely doesn't exist - nonexistent_key = f"nonexistent/file/{uuid.uuid4()}.txt" - result = runner.invoke(cli, ["bucket", "find", nonexistent_key, "--what-is-key"]) - assert result.exit_code == 0 - # Should return empty JSON array or object - assert "[]" in result.output or "{}" in result.output - - -@pytest.mark.e2e -@pytest.mark.timeout(timeout=60 * 2) -def test_cli_bucket_download_invalid_regex(runner: CliRunner, tmpdir) -> None: - """Test bucket download with invalid regex pattern triggers ValueError.""" - result = runner.invoke(cli, ["bucket", "download", "[invalid(regex", "--destination", str(tmpdir)]) - assert result.exit_code == 2 - assert "Invalid regex pattern" in result.output or "Failed to download" in result.output - - -@pytest.mark.e2e -@pytest.mark.timeout(timeout=60 * 2) -def test_cli_bucket_download_nonexistent_key(runner: CliRunner, tmpdir) -> None: - """Test bucket download with non-existent key completes with no files.""" - nonexistent_key = f"nonexistent/file/{uuid.uuid4()}.txt" - result = runner.invoke(cli, ["bucket", "download", nonexistent_key, "--what-is-key", "--destination", str(tmpdir)]) - assert result.exit_code == 0 - # Should complete successfully but with no objects found message - assert "No objects found" in result.output - - -@pytest.mark.e2e -@pytest.mark.timeout(timeout=60 * 2) -def test_cli_bucket_delete_invalid_regex(runner: CliRunner) -> None: - """Test bucket delete with invalid regex pattern triggers ValueError.""" - result = runner.invoke(cli, ["bucket", "delete", "[invalid(regex", "--dry-run"]) - assert result.exit_code == 2 - assert "Invalid regex pattern" in result.output or "Failed to delete" in result.output - - -@pytest.mark.e2e -@pytest.mark.timeout(timeout=60 * 2) -def test_cli_bucket_delete_nonexistent_key(runner: CliRunner) -> None: - """Test bucket delete with non-existent key returns no objects found.""" - nonexistent_key = f"nonexistent/file/{uuid.uuid4()}.txt" - result = runner.invoke(cli, ["bucket", "delete", nonexistent_key, "--what-is-key", "--dry-run"]) - assert result.exit_code == 0 - assert "No objects found" in result.output - - -@pytest.mark.e2e -@pytest.mark.timeout(timeout=60 * 5) -def test_cli_bucket_upload_single_file(runner: CliRunner, tmpdir) 
-> None: - """Test uploading a single file, then cleaning up from bucket and locally. - - 1. Creates a single random 1KB file in tmpdir - 2. Uploads the file to bucket - 3. Verifies the file was uploaded - 4. Deletes the file from the bucket - 5. Verifies the file is no longer in the bucket - 6. Deletes the local file - """ - import psutil - - # Get username for path verification - the_uuid = str(uuid.uuid4())[:8] - username = psutil.Process().username().replace("\\", "_") - test_prefix = f"{username}/test/{the_uuid}/single-file-test" - - # Step 1: Create a single random 1KB file - test_file = Path(tmpdir) / "test_file.bin" - test_file.write_bytes(os.urandom(1024)) # 1KB of random data - assert test_file.exists() - - # Step 2: Upload the single file to bucket - result = runner.invoke(cli, ["bucket", "upload", str(test_file), "--destination-prefix", test_prefix]) - assert result.exit_code == 0 - assert "Successfully uploaded 1 files" in result.output or "test_file.bin" in result.output - - # Step 3: Verify the file was uploaded - uploaded_key = f"{test_prefix}/test_file.bin" - result = runner.invoke(cli, ["bucket", "find", uploaded_key, "--what-is-key"]) +def test_cli_bucket_info_settings(runner: CliRunner) -> None: + """Check settings in system info with proper defaults.""" + result = runner.invoke(cli, ["system", "info"]) assert result.exit_code == 0 - assert uploaded_key in normalize_output(result.stdout) - # Step 4: Delete the file from the bucket - result = runner.invoke(cli, ["bucket", "delete", uploaded_key, "--what-is-key", "--no-dry-run"]) - assert result.exit_code == 0 - assert "Deleted 1 object" in normalize_output(result.stdout) - - # Step 5: Verify the file is no longer in the bucket - result = runner.invoke(cli, ["bucket", "find", uploaded_key, "--what-is-key"]) - assert result.exit_code == 0 - assert "[]" in result.output or "{}" in result.output # Empty result + # Parse the JSON output + output_data = json.loads(result.output) - # Step 6: Delete the local file - test_file.unlink() - assert not test_file.exists() + # Verify the bucket settings defaults + assert output_data["bucket"]["settings"]["protocol"] == "gs" + assert output_data["bucket"]["settings"]["region_name"] == "EUROPE-WEST3" + assert output_data["bucket"]["settings"]["upload_signed_url_expiration_seconds"] == 7200 + assert output_data["bucket"]["settings"]["download_signed_url_expiration_seconds"] == 604800 diff --git a/tests/aignostics/bucket/gui_test.py b/tests/aignostics/bucket/gui_test.py index 57efbac4..9e4cc015 100644 --- a/tests/aignostics/bucket/gui_test.py +++ b/tests/aignostics/bucket/gui_test.py @@ -15,29 +15,22 @@ from tests.conftest import assert_notified -@pytest.mark.integration async def test_gui_bucket_shows(user: User) -> None: """Test that the user sees the dataset page.""" await user.open("/bucket") await user.should_see("The bucket is securely hosted on Google Cloud in EU") -@pytest.mark.e2e -@pytest.mark.long_running @pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError]) -@pytest.mark.timeout(timeout=60 * 15) -async def test_gui_bucket_flow(user: User, runner: CliRunner, tmp_path: Path, silent_logging, record_property) -> None: # noqa: PLR0915 +async def test_gui_bucket_flow(user: User, runner: CliRunner, tmp_path: Path, silent_logging) -> None: """E2E flow testing all bucket CLI commands. - Steps: 1. Creates 1 file in a subdir of size 100kb 2. Uploads tmpdir to bucket using bucket upload command, prefix is {username}/test/ 3. Checks the file is there using find comand 5. 
Deletes the files using the GUI 6. Checks the file is no longer there using the find command """ - record_property("tested-item-id", "TC-BUCKET-GUI-01, SPEC-GUI-SERVICE") - # Step 1: Create file test_prefix = "{username}/test-gui-" + "".join(random.choices(string.ascii_letters + string.digits, k=3)) dir1 = tmp_path / "dir1" @@ -100,7 +93,7 @@ async def mocked_get_selected_rows(): # noqa: RUF029 await user.should_see(marker="BUTTON_DOWNLOAD_OBJECTS") user.find(marker="BUTTON_DOWNLOAD_OBJECTS").click() - await assert_notified(user, "Downloaded 1 objects.", wait_seconds=240) + await assert_notified(user, "Downloaded 1 objects.", wait_seconds=120) # Step 6: Delete the files using GUI assert grid_item.get_selected_rows is not None diff --git a/tests/aignostics/bucket/service_test.py b/tests/aignostics/bucket/service_test.py index 8af80cc0..2197ce7a 100644 --- a/tests/aignostics/bucket/service_test.py +++ b/tests/aignostics/bucket/service_test.py @@ -2,12 +2,9 @@ from unittest import mock -import pytest - from aignostics.bucket._service import Service -@pytest.mark.integration @mock.patch("aignostics.bucket._service.Service._get_s3_client") def test_create_signed_upload_url_expires_in_3600_seconds(mock_get_s3_client: mock.MagicMock) -> None: """Test that create_signed_upload_url calls generate_presigned_url with ExpiresIn of 3600 seconds.""" @@ -19,7 +16,6 @@ def test_create_signed_upload_url_expires_in_3600_seconds(mock_get_s3_client: mo service = Service() service._settings = mock.MagicMock() service._settings.upload_signed_url_expiration_seconds = 2 * 60 * 60 - service.get_bucket_name = mock.MagicMock(return_value="test-bucket") # Act result = service.create_signed_upload_url("test-object-key") @@ -33,7 +29,6 @@ def test_create_signed_upload_url_expires_in_3600_seconds(mock_get_s3_client: mo assert result == "https://example.com/signed-upload-url" -@pytest.mark.integration @mock.patch("aignostics.bucket._service.Service._get_s3_client") def test_create_signed_download_url_expires_in_7_days(mock_get_s3_client: mock.MagicMock) -> None: """Test that create_signed_download_url calls generate_presigned_url with ExpiresIn of 7 days (604800 seconds).""" @@ -45,7 +40,6 @@ def test_create_signed_download_url_expires_in_7_days(mock_get_s3_client: mock.M service = Service() service._settings = mock.MagicMock() service._settings.download_signed_url_expiration_seconds = 7 * 24 * 60 * 60 # 7 days in seconds - service.get_bucket_name = mock.MagicMock(return_value="test-bucket") # Act result = service.create_signed_download_url("test-object-key") diff --git a/tests/aignostics/bucket/settings_test.py b/tests/aignostics/bucket/settings_test.py index 4730d37d..5d795c32 100644 --- a/tests/aignostics/bucket/settings_test.py +++ b/tests/aignostics/bucket/settings_test.py @@ -1,16 +1,11 @@ """Tests for bucket settings module.""" -import json - import pytest from pydantic import ValidationError -from typer.testing import CliRunner from aignostics.bucket._settings import Settings -from aignostics.cli import cli -@pytest.mark.unit def test_signed_url_upload_settings() -> None: """Test upload settings, happy and not so happy path.""" # Test default works @@ -41,7 +36,6 @@ def test_signed_url_upload_settings() -> None: ) -@pytest.mark.unit def test_signed_url_download_settings() -> None: """Test download settings, happy and not so happy path.""" # Test default works (default is max: 7 days) @@ -71,20 +65,3 @@ def test_signed_url_download_settings() -> None: Settings( download_signed_url_expiration_seconds=7 * 24 * 60 * 
60 + 1, # Above max ) - - -@pytest.mark.integration -@pytest.mark.timeout(timeout=30) -def test_cli_bucket_info_settings(runner: CliRunner) -> None: - """Check settings in system info with proper defaults.""" - result = runner.invoke(cli, ["system", "info"]) - assert result.exit_code == 0 - - # Parse the JSON output - output_data = json.loads(result.output) - - # Verify the bucket settings defaults - assert output_data["settings"]["AIGNOSTICS_BUCKET_PROTOCOL"] == "gs" - assert output_data["settings"]["AIGNOSTICS_BUCKET_REGION_NAME"] == "EUROPE-WEST3" - assert output_data["settings"]["AIGNOSTICS_BUCKET_UPLOAD_SIGNED_URL_EXPIRATION_SECONDS"] == 7200 - assert output_data["settings"]["AIGNOSTICS_BUCKET_DOWNLOAD_SIGNED_URL_EXPIRATION_SECONDS"] == 604800 diff --git a/tests/aignostics/cli_test.py b/tests/aignostics/cli_test.py index df4a5b6d..048515d1 100644 --- a/tests/aignostics/cli_test.py +++ b/tests/aignostics/cli_test.py @@ -19,7 +19,6 @@ THE_VALUE = "THE_VALUE" -@pytest.mark.integration def test_cli_built_with_love(runner) -> None: """Check epilog shown.""" result = runner.invoke(cli, ["--help"]) @@ -28,8 +27,6 @@ def test_cli_built_with_love(runner) -> None: assert __version__ in result.output -@pytest.mark.integration -@pytest.mark.timeout(timeout=60) def test_cli_fails_on_invalid_setting_with_env_arg() -> None: """Check system fails on boot with invalid setting using subprocess.""" # Run the CLI as a subprocess with environment variable @@ -65,7 +62,6 @@ def test_cli_fails_on_invalid_setting_with_env_arg() -> None: assert "Input should be 'CRITICAL'" in stdout_text or "Input should be 'CRITICAL'" in stderr_text -@pytest.mark.integration @pytest.mark.sequential def test_cli_fails_on_invalid_setting_with_environ(runner) -> None: """Check system fails on boot with invalid setting using CliRunner and environment variables.""" @@ -81,7 +77,7 @@ def test_cli_fails_on_invalid_setting_with_environ(runner) -> None: # end custon # Run the CLI with the runner - result = runner.invoke(cli, ["system", "dump-dot-env-file"], env=env) + result = runner.invoke(cli, ["system", "info"], env=env) # Check the exit code (0 indicates all good) assert result.exit_code == 0 @@ -96,23 +92,21 @@ def test_cli_fails_on_invalid_setting_with_environ(runner) -> None: # end custon # Run the CLI with the runner - result = runner.invoke(cli, ["system", "dump-dot-env-file"], env=env) + result = runner.invoke(cli, ["system", "info"], env=env) - # Check that the error message is in the output - assert "Input should be 'CRITICAL'" in result.output # Check the exit code (78 indicates validation failed) assert result.exit_code == 78 + # Check that the error message is in the output + assert "Input should be 'CRITICAL'" in result.output if find_spec("nicegui"): - @pytest.mark.integration def test_cli_gui_help(runner: CliRunner) -> None: """Check gui help works.""" result = runner.invoke(cli, ["launchpad", "--help"]) assert result.exit_code == 0 - @pytest.mark.integration def test_cli_gui_run(runner: CliRunner, monkeypatch: pytest.MonkeyPatch) -> None: """Check gui component behaviors when launchpad command is executed.""" # Create mocks @@ -133,7 +127,6 @@ def mock_ui_run( # noqa: PLR0913, PLR0917 show_welcome_message=False, show=False, window_size=None, - reconnect_timeout=0, ): nonlocal mock_ui_run_called, mock_ui_run_args mock_ui_run_called = True @@ -149,7 +142,6 @@ def mock_ui_run( # noqa: PLR0913, PLR0917 "show_welcome_message": show_welcome_message, "show": show, "window_size": window_size, - "reconnect_timeout": 
reconnect_timeout, } def mock_gui_register_pages(): @@ -207,14 +199,11 @@ def mock_app_mount(path, app_instance): if find_spec("marimo") and find_spec("fastapi"): from fastapi import FastAPI - @pytest.mark.integration def test_cli_notebook_help(runner: CliRunner) -> None: """Check notebook help works.""" result = runner.invoke(cli, ["notebook", "--help"]) assert result.exit_code == 0 - @pytest.mark.integration - @pytest.mark.timeout(timeout=60) def test_cli_notebook_run(runner: CliRunner, monkeypatch: pytest.MonkeyPatch) -> None: """Check uvicorn.run is called with FastAPI app from the notebook service.""" # Create a mock for uvicorn.run to capture the app instance diff --git a/tests/aignostics/dataset/TC-DATASET-CLI-01.feature b/tests/aignostics/dataset/TC-DATASET-CLI-01.feature deleted file mode 100644 index 474686cd..00000000 --- a/tests/aignostics/dataset/TC-DATASET-CLI-01.feature +++ /dev/null @@ -1,16 +0,0 @@ -Feature: Dataset Download Management - - The system provides dataset download capabilities with file validation, - integrity verification, and completion confirmation. - - @tests:SPEC-DATASET-SERVICE - @tests:SWR-DATASET-1-1 - @tests:SWR-DATASET-1-2 - @tests:SWR-DATASET-1-3 - @id:TC-DATASET-CLI-01 - Scenario: System downloads dataset through user request - Given the user specifies a valid dataset identifier or URL - When the user initiates dataset download with destination directory - Then the system shall download the dataset successfully with proper directory structure - And the system shall verify downloaded files are complete, uncorrupted, and have valid format integrity - And the system shall provide download completion confirmation diff --git a/tests/aignostics/dataset/TC-DATASET-GUI-01.feature b/tests/aignostics/dataset/TC-DATASET-GUI-01.feature deleted file mode 100644 index ab1a40a8..00000000 --- a/tests/aignostics/dataset/TC-DATASET-GUI-01.feature +++ /dev/null @@ -1,20 +0,0 @@ -Feature: Dataset Download GUI Operations - - The system provides graphical interface for dataset download operations - including dataset selection, destination configuration, and download - execution with progress feedback and completion validation. 
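The same download path is exercised from the command line in `tests/aignostics/dataset/cli_test.py`. A minimal sketch of that invocation, reusing the `SERIES_UID` constant defined at the top of that file (network access to IDC is assumed, and the final assertion is illustrative):

```python
from pathlib import Path

from typer.testing import CliRunner

from aignostics.cli import cli

# Series identifier as defined in tests/aignostics/dataset/cli_test.py
SERIES_UID = "1.3.6.1.4.1.5962.99.1.1069745200.1645485340.1637452317744.2.0"


def download_series_sketch(destination: Path) -> None:
    """Download one IDC series into the given directory via the CLI."""
    runner = CliRunner()
    result = runner.invoke(cli, ["dataset", "idc", "download", SERIES_UID, str(destination)])
    assert result.exit_code == 0
    assert any(destination.rglob("*")), "expected downloaded files under the destination"
```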
- - @tests:SPEC-DATASET-SERVICE - @tests:SWR-DATASET-1-1 - @tests:SWR-DATASET-1-2 - @tests:SWR-DATASET-1-3 - @id:TC-DATASET-GUI-01 - Scenario: System processes dataset download through GUI interface - Given the user navigates to the dataset download page - When the user selects example dataset and configures custom dataset identifier - And the user configures download destination through GUI controls - Then the system shall initiate dataset download process - And the system shall provide download progress notifications - When the download process completes - Then the system shall confirm download completion - And the system shall validate downloaded files exist with correct structure and size diff --git a/tests/aignostics/dataset/cli_test.py b/tests/aignostics/dataset/cli_test.py index c6a36bca..28befa37 100644 --- a/tests/aignostics/dataset/cli_test.py +++ b/tests/aignostics/dataset/cli_test.py @@ -4,13 +4,11 @@ import re import tempfile from pathlib import Path -from unittest.mock import MagicMock, patch import pytest from typer.testing import CliRunner from aignostics.cli import cli -from tests.conftest import normalize_output SERIES_UID = "1.3.6.1.4.1.5962.99.1.1069745200.1645485340.1637452317744.2.0" THUMBNAIL_UID = "1.3.6.1.4.1.5962.99.1.1038911754.1238045814.1637421484298.15.0" @@ -18,9 +16,7 @@ # Don't use tmp_path with flaky, see https://github.com/str0zzapreti/pytest-retry/issues/46 -@pytest.mark.integration -@pytest.mark.flaky(retries=1, delay=5) -@pytest.mark.timeout(timeout=60 * 2) +@pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError]) def test_cli_idc_indices(runner: CliRunner) -> None: """Check expected column returned.""" result = runner.invoke(cli, ["dataset", "idc", "indices"]) @@ -31,9 +27,7 @@ def test_cli_idc_indices(runner: CliRunner) -> None: ) -@pytest.mark.e2e -@pytest.mark.flaky(retries=1, delay=5) -@pytest.mark.timeout(timeout=60 * 2) +@pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError]) def test_cli_idc_columns_default_index(runner: CliRunner) -> None: """Check expected column returned.""" result = runner.invoke(cli, ["dataset", "idc", "columns"]) @@ -41,9 +35,7 @@ def test_cli_idc_columns_default_index(runner: CliRunner) -> None: assert "SOPInstanceUID" in result.output -@pytest.mark.integration -@pytest.mark.flaky(retries=1, delay=5) -@pytest.mark.timeout(timeout=60) +@pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError]) def test_cli_columns_special_index(runner: CliRunner) -> None: """Check expected column returned.""" result = runner.invoke(cli, ["dataset", "idc", "columns", "--index", "index"]) @@ -51,9 +43,7 @@ def test_cli_columns_special_index(runner: CliRunner) -> None: assert "series_aws_url" in result.output -@pytest.mark.e2e -@pytest.mark.flaky(retries=1, delay=5) -@pytest.mark.timeout(timeout=60 * 2) +@pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError]) def test_cli_idc_query(runner: CliRunner) -> None: """Check query returns expected results.""" result = runner.invoke(cli, ["dataset", "idc", "query"]) @@ -66,9 +56,7 @@ def test_cli_idc_query(runner: CliRunner) -> None: assert num_rows >= 50421, f"Expected equal or more than 50421 rows, but got {num_rows}" -@pytest.mark.e2e -@pytest.mark.flaky(retries=1, delay=5) -@pytest.mark.timeout(timeout=60 * 2) +@pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError]) def test_cli_idc_download_series_dry(runner: CliRunner, caplog) -> None: """Check download functionality with dry-run option.""" caplog.set_level(logging.INFO) @@ -89,12 +77,9 @@ def 
test_cli_idc_download_series_dry(runner: CliRunner, caplog) -> None: assert record.levelname != "ERROR" # if id would not be found, error would be logged -@pytest.mark.e2e -@pytest.mark.flaky(retries=1, delay=5) -@pytest.mark.timeout(timeout=60 * 2) -def test_cli_idc_download_instance_thumbnail(runner: CliRunner, caplog, record_property) -> None: +@pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError]) +def test_cli_idc_download_instance_thumbnail(runner: CliRunner, caplog) -> None: """Check download functionality with dry-run option.""" - record_property("tested-item-id", "TC-DATASET-CLI-01") caplog.set_level(logging.INFO) with tempfile.TemporaryDirectory() as tmpdir: result = runner.invoke( @@ -125,12 +110,8 @@ def test_cli_idc_download_instance_thumbnail(runner: CliRunner, caplog, record_p ) -@pytest.mark.e2e -@pytest.mark.flaky(retries=1, delay=5) -@pytest.mark.timeout(timeout=60 * 2) -def test_cli_aignostics_download_sample(runner: CliRunner, tmp_path: Path, record_property) -> None: +def test_cli_aignostics_download_sample(runner: CliRunner, tmp_path: Path) -> None: """Check download functionality with dry-run option.""" - record_property("tested-item-id", "TC-DATASET-CLI-01") result = runner.invoke( cli, [ @@ -152,111 +133,3 @@ def test_cli_aignostics_download_sample(runner: CliRunner, tmp_path: Path, recor expected_file = tmp_path / "9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff" assert expected_file.exists(), f"Expected file {expected_file} not found" assert expected_file.stat().st_size == 14681750 - - -@pytest.mark.integration -def test_idc_indices_error_handling(runner: CliRunner) -> None: - """Test that idc indices command properly displays error messages.""" - error_message = "Mock error: Failed to connect to IDC" - - with patch("aignostics.third_party.idc_index.IDCClient.client") as mock_client: - mock_client.side_effect = RuntimeError(error_message) - result = runner.invoke(cli, ["dataset", "idc", "indices"]) - - assert result.exit_code == 1 - # Check that key parts of the error message appear in output - assert "Mock error" in normalize_output(result.output) - assert "Failed to connect to IDC" in normalize_output(result.output) - - -@pytest.mark.integration -def test_idc_columns_error_handling(runner: CliRunner) -> None: - """Test that idc columns command properly displays error messages.""" - error_message = "Mock error: Invalid index name" - - with patch("aignostics.third_party.idc_index.IDCClient.client") as mock_client: - mock_instance = MagicMock() - mock_instance.fetch_index.side_effect = ValueError(error_message) - mock_client.return_value = mock_instance - - result = runner.invoke(cli, ["dataset", "idc", "columns", "--index", "invalid_index"]) - - assert result.exit_code == 1 - # Check that key parts of the error message appear in output - assert "Mock error" in normalize_output(result.output) - assert "Invalid index name" in normalize_output(result.output) - assert "invalid_index" in normalize_output(result.output) - - -@pytest.mark.integration -def test_idc_query_error_handling(runner: CliRunner) -> None: - """Test that idc query command properly displays error messages.""" - error_message = "Mock error: SQL query failed" - test_query = "SELECT * FROM invalid_table" - - with patch("aignostics.third_party.idc_index.IDCClient.client") as mock_client: - mock_instance = MagicMock() - mock_instance.sql_query.side_effect = RuntimeError(error_message) - mock_client.return_value = mock_instance - - result = runner.invoke(cli, ["dataset", "idc", "query", test_query]) 
- - assert result.exit_code == 1 - # Check that key parts of the error message appear in output - assert "Mock error" in normalize_output(result.output) - # "SQL query failed" may be split across lines by rich console formatting - assert "SQL query failed" in normalize_output(result.output) - - -@pytest.mark.integration -def test_idc_download_error_handling(runner: CliRunner, tmp_path: Path) -> None: - """Test that idc download command properly displays error messages.""" - error_message = "Mock error: Download failed" - test_id = "test-series-id" - - with patch("aignostics.third_party.idc_index.IDCClient.client") as mock_client: - mock_client.side_effect = RuntimeError(error_message) - - result = runner.invoke( - cli, - [ - "dataset", - "idc", - "download", - test_id, - str(tmp_path), - ], - ) - - assert result.exit_code == 1 - # Check that key parts of the error message appear in output - assert "Mock error" in normalize_output(result.output) - assert "Download failed" in normalize_output(result.output) - assert test_id in normalize_output(result.output) - - -@pytest.mark.integration -def test_aignostics_download_error_handling(runner: CliRunner, tmp_path: Path) -> None: - """Test that aignostics download command properly displays error messages.""" - error_message = "Mock error: Failed to download from bucket" - test_url = "gs://test-bucket/test-file.tiff" - - with patch("aignostics.dataset._service.platform_generate_signed_url") as mock_generate_url: - mock_generate_url.side_effect = RuntimeError(error_message) - - result = runner.invoke( - cli, - [ - "dataset", - "aignostics", - "download", - test_url, - str(tmp_path), - ], - ) - - assert result.exit_code == 1 - # Check that key parts of the error message appear in output - assert "Mock error" in normalize_output(result.output) - assert "Failed to download from bucket" in normalize_output(result.output) - assert test_url in normalize_output(result.output) diff --git a/tests/aignostics/dataset/gui_test.py b/tests/aignostics/dataset/gui_test.py index 99e07948..e8df241b 100644 --- a/tests/aignostics/dataset/gui_test.py +++ b/tests/aignostics/dataset/gui_test.py @@ -12,20 +12,15 @@ IDC_DOWNLOAD_MAX_DURATION = 60 -@pytest.mark.integration async def test_gui_idc_shows(user: User) -> None: """Test that the user sees the dataset page.""" await user.open("/dataset/idc") await user.should_see("Explore Portal") -@pytest.mark.e2e -@pytest.mark.long_running @pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError]) -@pytest.mark.timeout(timeout=60 * 5) -async def test_gui_idc_downloads(user: User, tmp_path: Path, silent_logging: bool, record_property) -> None: +async def test_gui_idc_downloads(user: User, tmp_path: Path, silent_logging: bool) -> None: """Test that the user can download a dataset to a temporary directory.""" - record_property("tested-item-id", "TC-DATASET-GUI-01, SPEC-GUI-SERVICE") # Mock get_user_data_directory to return the tmpdir for this test with patch("aignostics.dataset._gui.get_user_data_directory", return_value=tmp_path): await user.open("/dataset/idc") @@ -74,12 +69,26 @@ async def test_gui_idc_downloads(user: User, tmp_path: Path, silent_logging: boo ) -async def _gui_idc_download_fails_with_invalid_inputs( # noqa: PLR0913, PLR0917 - user: User, tmpdir, source_input: str, expected_notification: str, silent_logging: None, record_property +@pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError]) +@pytest.mark.parametrize( + ("source_input", "expected_notification"), + [ + (" ", "Download failed: No IDs 
provided."), + ( + "4711", + "Download failed: None of the values passed matched any of the identifiers: " + "collection_id, PatientID, StudyInstanceUID, SeriesInstanceUID, SOPInstanceUID.", + ), + ( + " ", + "Download failed: No IDs provided", + ), + ], +) +async def test_gui_idc_download_fails_with_invalid_inputs( + user: User, tmpdir, source_input: str, expected_notification: str, silent_logging: None ) -> None: - """Test that the download fails with appropriate notification.""" - record_property("tested-item-id", "TC-DATASET-GUI-01, SPEC-GUI-SERVICE") - + """Test that the download fails with appropriate notification when invalid IDs are provided.""" with patch("aignostics.dataset._gui.get_user_data_directory", return_value=Path(tmpdir)): await user.open("/dataset/idc") await user.should_see(marker="SOURCE_INPUT") @@ -94,46 +103,3 @@ async def _gui_idc_download_fails_with_invalid_inputs( # noqa: PLR0913, PLR0917 user.find(marker="BUTTON_DOWNLOAD").click() await assert_notified(user, expected_notification, wait_seconds=60) - - -@pytest.mark.integration -@pytest.mark.parametrize( - ("source_input", "expected_notification"), - [ - (" ", "Download failed: No IDs provided."), - ], -) -@pytest.mark.timeout(timeout=60) -async def test_gui_idc_download_fails_with_no_inputs( # noqa: PLR0913, PLR0917 - user: User, tmpdir, source_input: str, expected_notification: str, silent_logging: None, record_property -) -> None: - """Test that the download fails with appropriate notification when no IDs are provided.""" - record_property("tested-item-id", "TC-DATASET-GUI-01, SPEC-GUI-SERVICE") - - await _gui_idc_download_fails_with_invalid_inputs( - user, tmpdir, source_input, expected_notification, silent_logging, record_property - ) - - -@pytest.mark.e2e -@pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError]) -@pytest.mark.timeout(timeout=60 * 2) -@pytest.mark.parametrize( - ("source_input", "expected_notification"), - [ - ( - "4711", - "Download failed: None of the values passed matched any of the identifiers: " - "collection_id, PatientID, StudyInstanceUID, SeriesInstanceUID, SOPInstanceUID.", - ), - ], -) -async def test_gui_idc_download_fails_with_invalid_inputs( # noqa: PLR0913, PLR0917 - user: User, tmpdir, source_input: str, expected_notification: str, silent_logging: None, record_property -) -> None: - """Test that the download fails with appropriate notification when invalid IDs are provided.""" - record_property("tested-item-id", "TC-DATASET-GUI-01, SPEC-GUI-SERVICE") - - await _gui_idc_download_fails_with_invalid_inputs( - user, tmpdir, source_input, expected_notification, silent_logging, record_property - ) diff --git a/tests/aignostics/dataset/service_test.py b/tests/aignostics/dataset/service_test.py index 8faac0e8..ef89041f 100644 --- a/tests/aignostics/dataset/service_test.py +++ b/tests/aignostics/dataset/service_test.py @@ -3,12 +3,9 @@ import subprocess from unittest import mock -import pytest - from aignostics.dataset._service import _active_processes, _cleanup_processes, _terminate_process -@pytest.mark.unit @mock.patch("aignostics.dataset._service._terminate_process") def test_cleanup_processes_terminates_running_processes(mock_terminate_process: mock.MagicMock) -> None: """Test that _cleanup_processes terminates all running processes.""" @@ -31,7 +28,6 @@ def test_cleanup_processes_terminates_running_processes(mock_terminate_process: assert mock_terminate_process.call_count == 1 -@pytest.mark.unit @mock.patch("time.sleep") def test_terminate_process(mock_sleep: 
mock.MagicMock) -> None: """Test that _terminate_process properly terminates a process.""" @@ -50,7 +46,6 @@ def test_terminate_process(mock_sleep: mock.MagicMock) -> None: assert mock_sleep.call_count == 5 -@pytest.mark.unit @mock.patch("time.sleep") def test_terminate_process_graceful_exit(mock_sleep: mock.MagicMock) -> None: """Test that _terminate_process handles graceful process termination.""" @@ -69,7 +64,6 @@ def test_terminate_process_graceful_exit(mock_sleep: mock.MagicMock) -> None: assert mock_sleep.call_count == 1 # Should have slept once before detecting termination -@pytest.mark.unit @mock.patch("aignostics.dataset._service.logger") def test_terminate_process_exception_handling(mock_logger: mock.MagicMock) -> None: """Test that _terminate_process handles exceptions properly.""" diff --git a/tests/aignostics/docker_test.py b/tests/aignostics/docker_test.py index b6414675..b35751de 100644 --- a/tests/aignostics/docker_test.py +++ b/tests/aignostics/docker_test.py @@ -8,9 +8,6 @@ BUILT_WITH_LOVE = "built with love in Berlin" -@pytest.mark.e2e # Calls container registry (ghcr.io) -@pytest.mark.long_running -@pytest.mark.docker @pytest.mark.skipif( platform.system() == "Windows", reason="Docker CLI tests are not supported on Windows due to path issues." ) @@ -20,7 +17,9 @@ ) @pytest.mark.skip_with_act @pytest.mark.xdist_group(name="docker") -@pytest.mark.timeout(timeout=60 * 5) +@pytest.mark.docker +@pytest.mark.long_running +@pytest.mark.scheduled def test_core_docker_cli_help_with_love(docker_services) -> None: """Test the CLI help command with docker services returns expected output.""" out = docker_services._docker_compose.execute("run aignostics --help") diff --git a/tests/aignostics/notebook/TC-NOTEBOOK-GUI-01.feature b/tests/aignostics/notebook/TC-NOTEBOOK-GUI-01.feature deleted file mode 100644 index a0c876b8..00000000 --- a/tests/aignostics/notebook/TC-NOTEBOOK-GUI-01.feature +++ /dev/null @@ -1,17 +0,0 @@ -Feature: Notebook Extension Management via GUI - - The system provides graphical interface for managing Marimo notebook - extension including launch capabilities, iframe integration, and - navigation controls for interactive data analysis workflows. 
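This scenario maps onto the nicegui `User` fixture used throughout the GUI tests. A minimal sketch of the first step, using the route and landing text that the remaining `test_gui_marimo_extension` below still asserts (anything beyond these two interactions is an assumption):

```python
from nicegui.testing import User


async def notebook_page_sketch(user: User) -> None:
    """Open the notebook extension page and verify its landing content."""
    await user.open("/notebook")
    await user.should_see("Manage your Marimo Extension")
```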
- - @tests:SPEC-NOTEBOOK-SERVICE - @tests:SWR-NOTEBOOK-1-1 - @id:TC-NOTEBOOK-GUI-01 - Scenario: System manages notebook extension through GUI interface - Given the user navigates to notebook extension management page - When the user launches Marimo extension through GUI controls - Then the system shall transition to notebook interface with iframe integration - And the system shall provide embedded Marimo notebook functionality - When the user navigates back to notebook management - Then the system shall return to extension management interface - And the system shall maintain proper navigation state and controls diff --git a/tests/aignostics/notebook/gui_test.py b/tests/aignostics/notebook/gui_test.py index 648a4469..dc9230df 100644 --- a/tests/aignostics/notebook/gui_test.py +++ b/tests/aignostics/notebook/gui_test.py @@ -1,16 +1,11 @@ """Tests to verify the GUI functionality of the Notebook module.""" -import pytest from nicegui.testing import User from typer.testing import CliRunner -@pytest.mark.integration -@pytest.mark.timeout(timeout=60) -async def test_gui_marimo_extension(user: User, runner: CliRunner, silent_logging: None, record_property) -> None: +async def test_gui_marimo_extension(user: User, runner: CliRunner, silent_logging: None) -> None: """Test that the user can install and launch Marimo via the GUI.""" - record_property("tested-item-id", "TC-NOTEBOOK-GUI-01, SPEC-GUI-SERVICE") - # Step 1: Check we are on the Notebook page await user.open("/notebook") await user.should_see("Manage your Marimo Extension") diff --git a/tests/aignostics/notebook/service_test.py b/tests/aignostics/notebook/service_test.py index 2db3eab7..517e23ce 100644 --- a/tests/aignostics/notebook/service_test.py +++ b/tests/aignostics/notebook/service_test.py @@ -13,11 +13,8 @@ from aignostics.notebook._service import MARIMO_SERVER_STARTUP_TIMEOUT, Service, _get_runner, _Runner -@pytest.mark.integration -@pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError]) @pytest.mark.sequential -@pytest.mark.timeout(timeout=60 * 2) -def test_notebook_start_and_stop(caplog: pytest.LogCaptureFixture) -> None: +def test_start_and_stop(caplog: pytest.LogCaptureFixture) -> None: """Test the server can be started and stopped with real process. This test actually starts and stops a real Marimo server process @@ -95,10 +92,7 @@ def test_notebook_start_and_stop(caplog: pytest.LogCaptureFixture) -> None: service.stop() -@pytest.mark.integration -@pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError]) @pytest.mark.sequential -@pytest.mark.timeout(timeout=60 * 2) def test_serve_notebook(user: User, caplog: pytest.LogCaptureFixture) -> None: """Test notebook serving. @@ -147,7 +141,7 @@ def test_serve_notebook(user: User, caplog: pytest.LogCaptureFixture) -> None: ) pytest.fail(error_msg) - if "run_id=4711" not in notebook_url: + if "application_run_id=4711" not in notebook_url: log_messages = "\n".join([f"{record.levelname}: {record.message}" for record in caplog.records]) error_msg = ( f"run_id not found in iframe src: {notebook_url}\n" @@ -163,7 +157,6 @@ def test_serve_notebook(user: User, caplog: pytest.LogCaptureFixture) -> None: raise AssertionError(error_msg) from e -@pytest.mark.unit def test_startup_timeout() -> None: """Test handling of timeout during server startup. @@ -205,7 +198,6 @@ def test_startup_timeout() -> None: mock_stop.assert_called_once() -@pytest.mark.unit def test_missing_url() -> None: """Test handling of missing URL after server ready event is triggered. 
@@ -231,7 +223,6 @@ def test_missing_url() -> None: runner.start() -@pytest.mark.unit def test_stop_nonrunning_server() -> None: """Test stopping a server that isn't running. @@ -252,7 +243,6 @@ def test_stop_nonrunning_server() -> None: mock_logger.info.assert_called_with("Service stopped.") -@pytest.mark.unit def test_capture_output_no_stdout() -> None: """Test _capture_output method with None stdout. @@ -271,7 +261,6 @@ def test_capture_output_no_stdout() -> None: mock_logger.warning.assert_called_once_with("Cannot capture stdout") -@pytest.mark.unit def test_server_url_detection() -> None: """Test server URL detection from output. @@ -292,7 +281,6 @@ def test_server_url_detection() -> None: assert match.group(1).startswith("http"), f"Extracted invalid URL from: {output}" -@pytest.mark.unit def test_singleton_runner() -> None: """Test that _get_runner returns a singleton instance.""" # Reset the singleton for testing diff --git a/tests/aignostics/platform/authentication_test.py b/tests/aignostics/platform/authentication_test.py index c57ac99f..fe840934 100644 --- a/tests/aignostics/platform/authentication_test.py +++ b/tests/aignostics/platform/authentication_test.py @@ -1,6 +1,5 @@ """Tests for the authentication module of the Aignostics Python SDK.""" -import errno import logging import socket import time @@ -58,7 +57,7 @@ def mock_settings() -> MagicMock: settings.auth_timeout = 10.0 settings.auth_retry_wait_min = 0.1 settings.auth_retry_wait_max = 5.0 - settings.auth_retry_attempts = 3 + settings.auth_retry_attempts_max = 3 settings.auth_jwk_set_cache_ttl = 300 settings.refresh_token = None mock_settings.return_value = settings @@ -66,9 +65,8 @@ def mock_settings() -> MagicMock: @pytest.fixture -def mock_token_file(tmp_path, record_property) -> Path: +def mock_token_file(tmp_path) -> Path: """Create a temporary token file for testing.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") return tmp_path / "token" # Return directly, no need for assignment @@ -127,11 +125,9 @@ def mock_webbrowser() -> MagicMock: class TestGetToken: """Test cases for the get_token function.""" - @pytest.mark.unit @staticmethod - def test_get_token_from_cache_valid(record_property, mock_settings, valid_token_with_expiry) -> None: + def test_get_token_from_cache_valid(mock_settings: MagicMock, valid_token_with_expiry: str) -> None: """Test retrieving a valid token from cache.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # Create a mock for Path that can be properly asserted on mock_write_text = MagicMock() @@ -145,7 +141,6 @@ def test_get_token_from_cache_valid(record_property, mock_settings, valid_token_ # Ensure we didn't need to authenticate mock_write_text.assert_not_called() - @pytest.mark.unit @staticmethod def test_get_token_from_cache_missing_expiry(mock_settings: MagicMock, cached_token_missing_expiry: str) -> None: """Test retrieving a valid token from cache.""" @@ -167,11 +162,9 @@ def test_get_token_from_cache_missing_expiry(mock_settings: MagicMock, cached_to # Ensure we wrote the new token mock_write_text.assert_called_once() - @pytest.mark.unit @staticmethod - def test_get_token_from_cache_expired(record_property, mock_settings, expired_token) -> None: + def test_get_token_from_cache_expired(mock_settings, expired_token) -> None: """Test retrieving an expired token from cache, which should trigger re-authentication.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # Create a mock for Path that can be properly asserted on mock_write_text = 
MagicMock() @@ -190,11 +183,9 @@ def test_get_token_from_cache_expired(record_property, mock_settings, expired_to # Ensure we wrote the new token assert mock_write_text.call_count == 1 - @pytest.mark.unit @staticmethod - def test_get_token_no_cache(record_property, mock_settings) -> None: + def test_get_token_no_cache(mock_settings) -> None: """Test retrieving a token without using cache.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # Create a mock for Path that can be properly asserted on mock_write_text = MagicMock() @@ -211,11 +202,9 @@ def test_get_token_no_cache(record_property, mock_settings) -> None: # Ensure we didn't write to cache mock_write_text.assert_not_called() - @pytest.mark.unit @staticmethod - def test_authenticate_uses_refresh_token_when_available(record_property, mock_settings) -> None: + def test_authenticate_uses_refresh_token_when_available(mock_settings) -> None: """Test that _authenticate uses refresh token flow when refresh token is available.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # Set up refresh token in settings mock_settings.return_value.refresh_token = SecretStr("test-refresh-token") @@ -226,11 +215,9 @@ def test_authenticate_uses_refresh_token_when_available(record_property, mock_se assert token == "refreshed.token" # noqa: S105 - Test credential mock_refresh.assert_called_once_with(mock_settings.return_value.refresh_token) - @pytest.mark.unit @staticmethod - def test_authenticate_uses_browser_flow_when_available(record_property, mock_settings) -> None: + def test_authenticate_uses_browser_flow_when_available(mock_settings) -> None: """Test that _authenticate uses browser flow when browser is available.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") mock_settings.return_value.refresh_token = None with ( @@ -244,11 +231,9 @@ def test_authenticate_uses_browser_flow_when_available(record_property, mock_set assert token == "browser.token" # noqa: S105 - Test credential mock_browser.assert_called_once() - @pytest.mark.unit @staticmethod - def test_authenticate_falls_back_to_device_flow(record_property, mock_settings) -> None: + def test_authenticate_falls_back_to_device_flow(mock_settings) -> None: """Test that _authenticate falls back to device flow when browser and refresh token are unavailable.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") mock_settings.return_value.refresh_token = None with ( @@ -261,11 +246,9 @@ def test_authenticate_falls_back_to_device_flow(record_property, mock_settings) assert token == "device.token" # noqa: S105 - Test credential mock_device.assert_called_once() - @pytest.mark.unit @staticmethod - def test_authenticate_raises_error_on_failure(record_property, mock_settings) -> None: + def test_authenticate_raises_error_on_failure(mock_settings) -> None: """Test that _authenticate raises an error when all authentication methods fail.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") mock_settings.return_value.refresh_token = None with ( @@ -279,11 +262,9 @@ def test_authenticate_raises_error_on_failure(record_property, mock_settings) -> class TestVerifyAndDecodeToken: """Test cases for the verify_and_decode_token function.""" - @pytest.mark.unit @staticmethod - def test_verify_and_decode_valid_token(clear_jwk_cache, record_property) -> None: + def test_verify_and_decode_valid_token(clear_jwk_cache) -> None: """Test that a valid token is properly verified and decoded.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") mock_jwt_client 
= MagicMock() mock_signing_key = MagicMock() mock_signing_key.key = "test-key" @@ -298,11 +279,9 @@ def test_verify_and_decode_valid_token(clear_jwk_cache, record_property) -> None assert "sub" in result assert "exp" in result - @pytest.mark.unit @staticmethod - def test_verify_and_decode_invalid_token(clear_jwk_cache, record_property) -> None: + def test_verify_and_decode_invalid_token(clear_jwk_cache) -> None: """Test that an invalid token raises an appropriate error.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with ( patch("jwt.PyJWKClient"), patch("jwt.get_unverified_header"), @@ -315,11 +294,9 @@ def test_verify_and_decode_invalid_token(clear_jwk_cache, record_property) -> No class TestBrowserCapabilityCheck: """Test cases for the browser capability check functionality.""" - @pytest.mark.unit @staticmethod - def test_can_open_browser_true(record_property) -> None: + def test_can_open_browser_true() -> None: """Test that _can_open_browser returns True when a browser is available.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # We need to override the autouse fixture here with ( patch("webbrowser.get", return_value=MagicMock()), @@ -327,11 +304,9 @@ def test_can_open_browser_true(record_property) -> None: ): assert _can_open_browser() is True - @pytest.mark.unit @staticmethod - def test_can_open_browser_false(record_property) -> None: + def test_can_open_browser_false() -> None: """Test that _can_open_browser returns False when no browser is available.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with patch("webbrowser.get", side_effect=webbrowser.Error): assert _can_open_browser() is False @@ -339,11 +314,9 @@ def test_can_open_browser_false(record_property) -> None: class TestAuthorizationCodeFlow: """Test cases for the authorization code flow with PKCE.""" - @pytest.mark.unit @staticmethod - def test_perform_authorization_code_flow_success(record_property, mock_settings) -> None: + def test_perform_authorization_code_flow_success(mock_settings) -> None: """Test successful authorization code flow with PKCE.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # Mock OAuth session mock_session = MagicMock(spec=OAuth2Session) mock_session.authorization_url.return_value = ("https://test.auth/authorize?code_challenge=abc", None) @@ -394,11 +367,9 @@ def handle_request_side_effect(): mock_server.handle_request.assert_called_once() mock_session.authorization_url.assert_called_once() - @pytest.mark.unit @staticmethod - def test_perform_authorization_code_flow_invalid_redirect(record_property, mock_settings) -> None: + def test_perform_authorization_code_flow_invalid_redirect(mock_settings) -> None: """Test authorization code flow fails with invalid redirect URI.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # Mock OAuth session to prevent it from being created mock_session = MagicMock(spec=OAuth2Session) mock_session.authorization_url.return_value = ("https://test.auth/authorize?code_challenge=abc", None) @@ -415,11 +386,9 @@ def test_perform_authorization_code_flow_invalid_redirect(record_property, mock_ ): _perform_authorization_code_with_pkce_flow() - @pytest.mark.unit @staticmethod - def test_perform_authorization_code_flow_failure(record_property, mock_settings) -> None: + def test_perform_authorization_code_flow_failure(mock_settings) -> None: """Test authorization code flow when authentication fails.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # Mock OAuth session mock_session = 
MagicMock(spec=OAuth2Session) mock_session.authorization_url.return_value = ("https://test.auth/authorize?code_challenge=abc", None) @@ -470,11 +439,9 @@ def handle_request_side_effect(): class TestDeviceFlow: """Test cases for the device flow authentication.""" - @pytest.mark.unit @staticmethod - def test_perform_device_flow_success(record_property, mock_settings) -> None: + def test_perform_device_flow_success(mock_settings) -> None: """Test successful device flow authentication.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") device_response = { "device_code": "device-code-123", "verification_uri_complete": "https://test.auth/device/activate", @@ -513,40 +480,31 @@ def test_perform_device_flow_success(record_property, mock_settings) -> None: class TestPortAvailability: """Test cases for checking port availability.""" - @pytest.mark.unit @staticmethod - def test_port_available(record_property) -> None: + def test_port_available() -> None: """Test that _ensure_local_port_is_available returns True when the port is available.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with patch("socket.socket.bind", return_value=None) as mock_bind: assert _ensure_local_port_is_available(8000) is True mock_bind.assert_called_once() - @pytest.mark.unit - @pytest.mark.timeout(timeout=25) # 20 retries, 1s sleep @staticmethod - def test_port_unavailable(record_property) -> None: + def test_port_unavailable() -> None: """Test that _ensure_local_port_is_available returns False when the port is unavailable.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with patch("socket.socket.bind", side_effect=socket.error) as mock_bind: assert _ensure_local_port_is_available(8000) is False mock_bind.assert_called() - @pytest.mark.unit @staticmethod - def test_port_retries(record_property) -> None: + def test_port_retries() -> None: """Test that _ensure_local_port_is_available retries the specified number of times.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with patch("socket.socket.bind", side_effect=socket.error) as mock_bind, patch("time.sleep") as mock_sleep: assert _ensure_local_port_is_available(8000, max_retries=3) is False assert mock_bind.call_count == 4 # Initial attempt + 3 retries assert mock_sleep.call_count == 3 - @pytest.mark.unit @staticmethod - def test_port_availability_uses_socket_reuse(record_property) -> None: + def test_port_availability_uses_socket_reuse() -> None: """Test that _ensure_local_port_is_available uses SO_REUSEADDR socket option.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") mock_socket = MagicMock() # Make the mock work as a context manager mock_socket.__enter__ = MagicMock(return_value=mock_socket) @@ -560,11 +518,9 @@ def test_port_availability_uses_socket_reuse(record_property) -> None: # Verify bind was attempted mock_socket.bind.assert_called_with(("localhost", 8000)) - @pytest.mark.unit @staticmethod - def test_authorization_flow_sets_socket_reuse(record_property, mock_settings) -> None: + def test_authorization_flow_sets_socket_reuse(mock_settings) -> None: """Test that the HTTPServer in authorization flow uses SO_REUSEADDR.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") mock_server = MagicMock() mock_socket = MagicMock() mock_server.socket = mock_socket @@ -572,6 +528,7 @@ def test_authorization_flow_sets_socket_reuse(record_property, mock_settings) -> # Mock the HTTPServer context manager with ( patch("aignostics.platform._authentication.HTTPServer") as mock_http_server, + 
patch("aignostics.platform._authentication._ensure_local_port_is_available", return_value=True), patch("urllib.parse.urlparse") as mock_urlparse, patch("aignostics.platform._authentication.OAuth2Session") as mock_oauth, patch("aignostics.platform._authentication.webbrowser"), @@ -589,64 +546,13 @@ def test_authorization_flow_sets_socket_reuse(record_property, mock_settings) -> # Verify SO_REUSEADDR was set mock_socket.setsockopt.assert_called_with(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - @pytest.mark.unit - @pytest.mark.sequential - @staticmethod - def test_authorization_flow_retries_on_port_in_use(mock_settings) -> None: - """Test that authorization flow retries when port is in use.""" - mock_server = MagicMock() - mock_socket = MagicMock() - mock_server.socket = mock_socket - - # Mock OAuth session - mock_session = MagicMock(spec=OAuth2Session) - mock_session.authorization_url.return_value = ("https://test.auth/authorize", None) - - # Create a side effect that fails once with EADDRINUSE, then succeeds - call_count = 0 - - def http_server_side_effect(*args, **kwargs): - nonlocal call_count - call_count += 1 - if call_count == 1: - # First call fails with port in use - error = OSError("Address already in use") - error.errno = errno.EADDRINUSE - raise error - # Second call succeeds - return MagicMock(__enter__=MagicMock(return_value=mock_server), __exit__=MagicMock(return_value=None)) - - # Mock auth result - mock_auth_result = MagicMock() - mock_auth_result.token = "retry.token" # noqa: S105 - mock_auth_result.error = None - - with ( - patch("aignostics.platform._authentication.OAuth2Session", return_value=mock_session), - patch("aignostics.platform._authentication.HTTPServer", side_effect=http_server_side_effect), - patch("urllib.parse.urlparse") as mock_urlparse, - patch("time.sleep") as mock_sleep, - patch("aignostics.platform._authentication.AuthenticationResult", return_value=mock_auth_result), - ): - mock_urlparse.return_value.hostname = "localhost" - mock_urlparse.return_value.port = 8000 - - token = _perform_authorization_code_with_pkce_flow() - - # Verify we got the token after retry - assert token == "retry.token" # noqa: S105 - # Verify we slept between retries - mock_sleep.assert_called_once() - class TestRemoveCachedToken: """Test cases for the remove_cached_token function.""" - @pytest.mark.unit @staticmethod - def test_remove_cached_token_exists(record_property, mock_settings) -> None: + def test_remove_cached_token_exists(mock_settings) -> None: """Test removing a cached token when the token file exists.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with ( patch.object(Path, "exists", return_value=True), patch.object(Path, "unlink") as mock_unlink, @@ -656,21 +562,17 @@ def test_remove_cached_token_exists(record_property, mock_settings) -> None: assert result is True mock_unlink.assert_called_once_with(missing_ok=True) - @pytest.mark.unit @staticmethod - def test_remove_cached_token_not_exists(record_property, mock_settings) -> None: + def test_remove_cached_token_not_exists(mock_settings) -> None: """Test removing a cached token when the token file does not exist.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with patch.object(Path, "exists", return_value=False): result = remove_cached_token() assert result is False - @pytest.mark.unit @staticmethod - def test_remove_cached_token_unlink_error(record_property, mock_settings) -> None: + def test_remove_cached_token_unlink_error(mock_settings) -> None: """Test that remove_cached_token 
handles unlink errors gracefully.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with ( patch.object(Path, "exists", return_value=True), patch.object(Path, "unlink", side_effect=OSError("Permission denied")) as mock_unlink, @@ -683,11 +585,9 @@ def test_remove_cached_token_unlink_error(record_property, mock_settings) -> Non class TestSentryIntegration: """Test cases for Sentry integration in the authentication module.""" - @pytest.mark.unit @staticmethod - def test_get_token_calls_sentry_set_user(record_property, mock_settings) -> None: + def test_get_token_calls_sentry_set_user(mock_settings) -> None: """Test that get_token calls sentry_sdk.set_user with correct user information extracted from token claims.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # Mock token claims with the required fields mock_claims = { "sub": "user123", @@ -718,11 +618,9 @@ def test_get_token_calls_sentry_set_user(record_property, mock_settings) -> None # Verify sentry_sdk.set_user was called with correct user information mock_sentry_sdk.set_user.assert_called_once_with({"id": "user123", "org_id": "org456"}) - @pytest.mark.integration @staticmethod - def test_get_token_sentry_unavailable(record_property, mock_settings) -> None: + def test_get_token_sentry_unavailable(mock_settings) -> None: """Test that get_token works correctly when sentry_sdk is not available.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # Mock token claims mock_claims = { "sub": "user123", @@ -747,11 +645,9 @@ def test_get_token_sentry_unavailable(record_property, mock_settings) -> None: # Verify the token was returned successfully even without Sentry assert token == "test.token" # noqa: S105 - Test credential - @pytest.mark.integration @staticmethod - def test_get_token_sentry_missing_sub_claim(record_property, mock_settings) -> None: + def test_get_token_sentry_missing_sub_claim(mock_settings) -> None: """Test that get_token handles missing 'sub' claim gracefully when informing Sentry.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # Mock token claims without 'sub' field mock_claims = { "org_id": "org456", @@ -781,11 +677,9 @@ def test_get_token_sentry_missing_sub_claim(record_property, mock_settings) -> N # Verify sentry_sdk.set_user was not called due to missing 'sub' claim mock_sentry_sdk.set_user.assert_not_called() - @pytest.mark.integration @staticmethod - def test_get_token_sentry_handles_token_verification_error(record_property, mock_settings) -> None: + def test_get_token_sentry_handles_token_verification_error(mock_settings) -> None: """Test that get_token fails when token verification fails, and Sentry is not informed.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # Create a mock for sentry_sdk mock_sentry_sdk = MagicMock() @@ -809,7 +703,6 @@ def test_get_token_sentry_handles_token_verification_error(record_property, mock class TestTokenRefreshRetryLogic: """Test cases for the retry logic in _access_token_from_refresh_token.""" - @pytest.mark.unit @staticmethod def test_successful_token_refresh_no_retry(mock_settings) -> None: """Test that successful token refresh completes without retries. @@ -827,7 +720,6 @@ def test_successful_token_refresh_no_retry(mock_settings) -> None: assert result == "fresh.token" assert mock_post.call_count == 1 # Should succeed on first try - @pytest.mark.unit @staticmethod def test_no_retry_on_client_error(mock_settings, caplog) -> None: """Test that 4xx errors do not trigger retries. 
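For reference, the refresh-retry contract these tests pin down (retry 5xx server errors and connection failures, never 4xx client errors) reduces to a predicate over the raised exception. A minimal sketch, assuming a requests-based transport; `_should_retry_refresh` is an illustrative name, not the SDK's actual helper:

```python
import requests


def _should_retry_refresh(exc: Exception) -> bool:
    """Illustrative predicate: retry only transient failures."""
    # Network-level failures (connection refused/reset, timeouts) are transient.
    if isinstance(exc, (requests.ConnectionError, requests.Timeout)):
        return True
    # HTTP errors: retry 5xx server errors, never 4xx client errors.
    if isinstance(exc, requests.HTTPError) and exc.response is not None:
        return exc.response.status_code >= 500
    return False
```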
@@ -861,7 +753,6 @@ def test_no_retry_on_client_error(mock_settings, caplog) -> None: retry_logs = [record for record in caplog.records if "retry" in record.getMessage().lower()] assert len(retry_logs) == 0, "Should not log retry attempts for 4xx errors" - @pytest.mark.unit @staticmethod def test_retry_on_server_error(mock_settings, caplog) -> None: """Test that 5xx errors trigger retries. @@ -899,7 +790,6 @@ def side_effect(*args, **kwargs): ] assert len(retry_logs) > 0, "Should log retry attempts for 5xx errors" - @pytest.mark.unit @staticmethod def test_retry_on_connection_error(mock_settings, caplog) -> None: """Test that network connection errors trigger retries. @@ -933,7 +823,6 @@ def side_effect(*args, **kwargs): class TestJWKClientCache: """Test cases for the LRU cache on _get_jwk_client.""" - @pytest.mark.unit @staticmethod def test_jwk_client_cache_returns_same_instance(clear_jwk_cache, mock_settings) -> None: """Test that _get_jwk_client returns the same cached instance for the same URL. @@ -960,7 +849,6 @@ def test_jwk_client_cache_returns_same_instance(clear_jwk_cache, mock_settings) # PyJWKClient should only be instantiated once assert mock_pyjwk_client.call_count == 1 - @pytest.mark.unit @staticmethod def test_jwk_client_cache_different_urls(clear_jwk_cache, mock_settings) -> None: """Test that _get_jwk_client creates different instances for different URLs. @@ -989,7 +877,6 @@ def test_jwk_client_cache_different_urls(clear_jwk_cache, mock_settings) -> None # PyJWKClient should be instantiated twice (once for each URL) assert mock_pyjwk_client.call_count == 2 - @pytest.mark.unit @staticmethod def test_jwk_client_cache_info(clear_jwk_cache, mock_settings) -> None: """Test that cache_info provides correct statistics about cache usage. @@ -1025,7 +912,6 @@ def test_jwk_client_cache_info(clear_jwk_cache, mock_settings) -> None: assert info.misses == 1 assert info.hits == 2 - @pytest.mark.unit @staticmethod def test_jwk_client_cache_respects_settings(clear_jwk_cache, mock_settings) -> None: """Test that _get_jwk_client passes correct settings to PyJWKClient. @@ -1048,7 +934,6 @@ def test_jwk_client_cache_respects_settings(clear_jwk_cache, mock_settings) -> N lifespan=lifespan, ) - @pytest.mark.unit @staticmethod def test_jwk_client_cache_used_in_verification(clear_jwk_cache, mock_settings) -> None: """Test that verify_and_decode_token benefits from the _get_jwk_client cache. @@ -1083,7 +968,6 @@ def test_jwk_client_cache_used_in_verification(clear_jwk_cache, mock_settings) - assert info.hits >= 2 # At least 2 cache hits from 3 calls assert info.misses == 1 # Only 1 cache miss (first call) - @pytest.mark.unit @staticmethod def test_jwk_client_cache_size_limit(clear_jwk_cache, mock_settings) -> None: """Test that the LRU cache respects the maxsize=4 limit. @@ -1124,7 +1008,6 @@ def test_jwk_client_cache_size_limit(clear_jwk_cache, mock_settings) -> None: class TestTokenVerificationRetryLogic: """Test cases for the retry logic in verify_and_decode_token.""" - @pytest.mark.unit @staticmethod def test_successful_token_verification_no_retry(mock_settings) -> None: """Test that successful token verification completes without retries. 
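The caching contract asserted by the JWK client tests (same URL returns one PyJWKClient, at most four URLs cached, hit/miss statistics via cache_info) is exactly what `functools.lru_cache` provides. A minimal sketch, assuming the factory takes the JWKS URL and key lifespan as its only, hashable, arguments:

```python
from functools import lru_cache

from jwt import PyJWKClient


@lru_cache(maxsize=4)
def _get_jwk_client(jwks_url: str, lifespan: int = 300) -> PyJWKClient:
    # Identical arguments return the cached instance; a fifth distinct URL
    # evicts the least recently used client.
    return PyJWKClient(jwks_url, lifespan=lifespan)


# _get_jwk_client.cache_info() exposes the hits/misses the tests assert on,
# e.g. CacheInfo(hits=2, misses=1, maxsize=4, currsize=1) after three same-URL calls.
```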
@@ -1147,7 +1030,6 @@ def test_successful_token_verification_no_retry(mock_settings) -> None: # _get_jwk_client is called only once (no retries) assert mock_get_jwk.call_count == 1 - @pytest.mark.unit @staticmethod def test_no_retry_on_jwt_decode_error(mock_settings, caplog) -> None: """Test that JWT decode errors (non-connection errors) do not trigger retries. @@ -1190,7 +1072,6 @@ def decode_side_effect(*args, **kwargs): retry_logs = [record for record in caplog.records if "retry" in record.getMessage().lower()] assert len(retry_logs) == 0, "Should not log retry attempts for JWT decode errors" - @pytest.mark.unit @staticmethod def test_retry_on_jwk_connection_error(mock_settings, caplog) -> None: """Test that JWK client connection errors trigger retries. @@ -1227,7 +1108,6 @@ def get_signing_key_side_effect(*args, **kwargs): ] assert len(retry_logs) > 0, "Should log retry attempts for JWK connection errors" - @pytest.mark.unit @staticmethod def test_successful_verification_after_connection_retry(mock_settings, caplog) -> None: """Test that token verification succeeds after initial JWK connection failures. @@ -1272,7 +1152,6 @@ def get_signing_key_side_effect(*args, **kwargs): ] assert len(retry_logs) == 1, "Should log exactly one retry attempt" - @pytest.mark.unit @staticmethod def test_no_retry_on_other_jwk_errors(mock_settings, caplog) -> None: """Test that non-connection JWK errors do not trigger retries. diff --git a/tests/aignostics/platform/cli_test.py b/tests/aignostics/platform/cli_test.py index 0395f598..890c9451 100644 --- a/tests/aignostics/platform/cli_test.py +++ b/tests/aignostics/platform/cli_test.py @@ -2,7 +2,6 @@ from unittest.mock import patch -import pytest from typer.testing import CliRunner from aignostics.cli import cli @@ -14,11 +13,9 @@ class TestTokenInfo: """Test cases for TokenInfo model.""" - @pytest.mark.unit @staticmethod - def test_token_info_from_claims(record_property) -> None: + def test_token_info_from_claims() -> None: """Test TokenInfo creation from JWT claims.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") claims = { "iss": "https://test.auth0.com/", "iat": 1609459200, @@ -41,11 +38,9 @@ def test_token_info_from_claims(record_property) -> None: assert token_info.org_id == "org123" assert token_info.role == "member" - @pytest.mark.unit @staticmethod - def test_token_info_from_claims_with_audience_list(record_property) -> None: + def test_token_info_from_claims_with_audience_list() -> None: """Test TokenInfo creation from JWT claims with audience as list.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") claims = { "iss": "https://test.auth0.com/", "iat": 1609459200, @@ -62,11 +57,9 @@ def test_token_info_from_claims_with_audience_list(record_property) -> None: assert token_info.audience == ["https://test-audience1", "test-audience2"] assert token_info.role == "member" - @pytest.mark.unit @staticmethod - def test_token_info_from_claims_without_role(record_property) -> None: + def test_token_info_from_claims_without_role() -> None: """Test TokenInfo creation from JWT claims with role missing.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") claims = { "iss": "https://test.auth0.com/", "iat": 1609459200, @@ -86,11 +79,9 @@ def test_token_info_from_claims_without_role(record_property) -> None: class TestUserInfo: """Test cases for UserInfo model.""" - @pytest.mark.unit @staticmethod - def test_user_info_from_claims_and_userinfo_with_profile(record_property) -> None: + def 
test_user_info_from_claims_and_userinfo_with_profile() -> None: """Test UserInfo creation with both claims and userinfo.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") claims = { "sub": "user123", "org_id": "org456", @@ -133,11 +124,9 @@ def test_user_info_from_claims_and_userinfo_with_profile(record_property) -> Non assert user_info.role == "member" assert user_info.token.issuer == "https://test.auth0.com/" - @pytest.mark.unit @staticmethod - def test_user_info_from_claims_and_userinfo_no_org_name(record_property) -> None: + def test_user_info_from_claims_and_userinfo_no_org_name() -> None: """Test UserInfo creation when org_name is not provided in claims.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") claims = { "sub": "user789", "org_id": "org999", @@ -183,11 +172,9 @@ def test_user_info_from_claims_and_userinfo_no_org_name(record_property) -> None class TestPlatformCLI: """Test cases for platform CLI commands.""" - @pytest.mark.e2e @staticmethod - def test_login_out_info_e2e(record_property, runner: CliRunner) -> None: + def test_login_out_info_e2e(runner: CliRunner) -> None: """Test successful logout command.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with ( patch("aignostics.platform._service.Service.logout", return_value=True), ): @@ -199,64 +186,47 @@ def test_login_out_info_e2e(record_property, runner: CliRunner) -> None: assert "Successfully logged out." in normalize_output(result.output) result = runner.invoke(cli, ["user", "whoami"]) assert result.exit_code == 0 - assert any( - url in normalize_output(result.output) - for url in [ - "https://aignostics-platform.eu.auth0.com/", - "https://aignostics-platform-staging.eu.auth0.com/", - "dev-8ouohmmrbuh2h4vu.eu.auth0.com", - ] - ) - - @pytest.mark.integration + assert "https://aignostics-platform.eu.auth0.com/" in normalize_output(result.output) + @staticmethod - def test_logout_success(record_property, runner: CliRunner) -> None: + def test_logout_success(runner: CliRunner) -> None: """Test successful logout command.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with patch("aignostics.platform._service.Service.logout", return_value=True): result = runner.invoke(cli, ["user", "logout"]) assert result.exit_code == 0 assert "Successfully logged out." in normalize_output(result.output) - @pytest.mark.integration @staticmethod - def test_logout_not_logged_in(record_property, runner: CliRunner) -> None: + def test_logout_not_logged_in(runner: CliRunner) -> None: """Test logout command when not logged in.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with patch("aignostics.platform._service.Service.logout", return_value=False): result = runner.invoke(cli, ["user", "logout"]) assert result.exit_code == 2 assert "Was not logged in." 
in normalize_output(result.output) - @pytest.mark.integration @staticmethod - def test_logout_error(record_property, runner: CliRunner) -> None: + def test_logout_error(runner: CliRunner) -> None: """Test logout command when an error occurs.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with patch("aignostics.platform._service.Service.logout", side_effect=RuntimeError("Test error")): result = runner.invoke(cli, ["user", "logout"]) assert result.exit_code == 1 assert "Error during logout: Test error" in normalize_output(result.output) - @pytest.mark.integration @staticmethod - def test_login_success(record_property, runner: CliRunner) -> None: + def test_login_success(runner: CliRunner) -> None: """Test successful login command.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with patch("aignostics.platform._service.Service.login", return_value=True): result = runner.invoke(cli, ["user", "login"]) assert result.exit_code == 0 assert "Successfully logged in." in normalize_output(result.output) - @pytest.mark.integration @staticmethod - def test_login_with_relogin_flag(record_property, runner: CliRunner) -> None: + def test_login_with_relogin_flag(runner: CliRunner) -> None: """Test login command with relogin flag.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with patch("aignostics.platform._service.Service.login", return_value=True) as mock_login: result = runner.invoke(cli, ["user", "login", "--relogin"]) @@ -264,33 +234,27 @@ def test_login_with_relogin_flag(record_property, runner: CliRunner) -> None: assert "Successfully logged in." in normalize_output(result.output) mock_login.assert_called_once_with(relogin=True) - @pytest.mark.integration @staticmethod - def test_login_failure(record_property, runner: CliRunner) -> None: + def test_login_failure(runner: CliRunner) -> None: """Test login command when login fails.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with patch("aignostics.platform._service.Service.login", return_value=False): result = runner.invoke(cli, ["user", "login"]) assert result.exit_code == 1 assert "Failed to log you in" in normalize_output(result.output) - @pytest.mark.integration @staticmethod - def test_login_error(record_property, runner: CliRunner) -> None: + def test_login_error(runner: CliRunner) -> None: """Test login command when an error occurs.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with patch("aignostics.platform._service.Service.login", side_effect=RuntimeError("Test error")): result = runner.invoke(cli, ["user", "login"]) assert result.exit_code == 1 assert "Error during login: Test error" in normalize_output(result.output) - @pytest.mark.integration @staticmethod - def test_whoami_success(record_property, runner: CliRunner) -> None: + def test_whoami_success(runner: CliRunner) -> None: """Test successful whoami command.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # Create mock user info mock_token_info = TokenInfo( issuer="https://test.auth0.com/", @@ -333,11 +297,9 @@ def test_whoami_success(record_property, runner: CliRunner) -> None: assert "Test Organization" in output assert "admin" in output - @pytest.mark.integration @staticmethod - def test_whoami_with_relogin_flag(record_property, runner: CliRunner) -> None: + def test_whoami_with_relogin_flag(runner: CliRunner) -> None: """Test whoami command with relogin flag.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") mock_token_info = TokenInfo( 
issuer="https://test.auth0.com/", issued_at=1609459200, @@ -376,36 +338,29 @@ def test_whoami_with_relogin_flag(record_property, runner: CliRunner) -> None: assert result.exit_code == 0 mock_get_user_info.assert_called_once_with(relogin=True) - @pytest.mark.integration @staticmethod - def test_whoami_not_logged_in(record_property, runner: CliRunner) -> None: + def test_whoami_not_logged_in(runner: CliRunner) -> None: """Test whoami command when not logged in.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with patch( - "aignostics.platform._service.Service.get_user_info", - side_effect=RuntimeError("Could not retrieve user info"), + "aignostics.platform._service.Service.get_user_info", side_effect=RuntimeError("Could not user info") ): result = runner.invoke(cli, ["user", "whoami"]) assert result.exit_code == 1 - assert "Error while getting user info: Could not retrieve user info" in normalize_output(result.output) + assert "Error while getting user info: Could not user info" in normalize_output(result.output) - @pytest.mark.integration @staticmethod - def test_whoami_error(record_property, runner: CliRunner) -> None: + def test_whoami_error(runner: CliRunner) -> None: """Test whoami command when an error occurs.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with patch("aignostics.platform._service.Service.get_user_info", side_effect=RuntimeError("Test error")): result = runner.invoke(cli, ["user", "whoami"]) assert result.exit_code == 1 assert "Error while getting user info: Test error" in normalize_output(result.output) - @pytest.mark.integration @staticmethod - def test_whoami_success_with_no_org_name(record_property, runner: CliRunner) -> None: + def test_whoami_success_with_no_org_name(runner: CliRunner) -> None: """Test successful whoami command when org_name is None.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # Create mock token info mock_token_info = TokenInfo( issuer="https://test.auth0.com/", @@ -448,11 +403,9 @@ def test_whoami_success_with_no_org_name(record_property, runner: CliRunner) -> # org_name should be null in JSON output assert '"name": null' in output or '"name":null' in output - @pytest.mark.integration @staticmethod - def test_whoami_masks_secrets_by_default(record_property, runner: CliRunner) -> None: + def test_whoami_masks_secrets_by_default(runner: CliRunner) -> None: """Test that whoami masks secrets by default.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") mock_token_info = TokenInfo( issuer="https://test.auth0.com/", issued_at=1609459200, @@ -495,11 +448,9 @@ def test_whoami_masks_secrets_by_default(record_property, runner: CliRunner) -> assert "the_logfire_token" not in output assert "very_secret_access_key_456" not in output - @pytest.mark.integration @staticmethod - def test_whoami_shows_secrets_with_no_mask_flag(record_property, runner: CliRunner) -> None: + def test_whoami_shows_secrets_with_no_mask_flag(runner: CliRunner) -> None: """Test that whoami shows secrets when --no-mask-secrets flag is used.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") mock_token_info = TokenInfo( issuer="https://test.auth0.com/", issued_at=1609459200, @@ -538,86 +489,3 @@ def test_whoami_shows_secrets_with_no_mask_flag(record_property, runner: CliRunn assert "very_secret_access_key_456" in output # Check that masked values are not in output assert "***MASKED" not in output - - @pytest.mark.integration - @staticmethod - def test_sdk_run_metadata_schema_pretty(runner: CliRunner) -> None: 
- """Test run-metadata-schema command with pretty output (default).""" - result = runner.invoke(cli, ["sdk", "run-metadata-schema"]) - - assert result.exit_code == 0 - output = normalize_output(result.output) - # Check that schema contains expected top-level properties - assert "schema_version" in output - assert "submission" in output - assert "user_agent" in output - assert "SubmissionMetadata" in output - assert "WorkflowMetadata" in output - assert "SchedulingMetadata" in output - - @pytest.mark.integration - @staticmethod - def test_sdk_run_metadata_schema_no_pretty(runner: CliRunner) -> None: - """Test run-metadata-schema command with --no-pretty flag.""" - result = runner.invoke(cli, ["sdk", "run-metadata-schema", "--no-pretty"]) - - assert result.exit_code == 0 - # Don't normalize output for JSON parsing - output = result.output - # Check that schema contains expected top-level properties - assert "schema_version" in output - assert "submission" in output - assert "user_agent" in output - # In non-pretty mode, output should still be valid JSON - import json - - # Try to parse the output as JSON (should not raise an error) - try: - # Find JSON in output (skip boot messages) - json_start = output.find("{") - if json_start >= 0: - json.loads(output[json_start:]) - else: - pytest.fail("No JSON found in output") - except json.JSONDecodeError: - pytest.fail("Output is not valid JSON") - - @pytest.mark.integration - @staticmethod - def test_sdk_item_metadata_schema_pretty(runner: CliRunner) -> None: - """Test item-metadata-schema command with pretty output (default).""" - result = runner.invoke(cli, ["sdk", "item-metadata-schema"]) - - assert result.exit_code == 0 - output = normalize_output(result.output) - # Check that schema contains expected top-level properties - assert "schema_version" in output - assert "platform_bucket" in output - assert "PlatformBucketMetadata" in output - assert "ItemSdkMetadata" in output - - @pytest.mark.integration - @staticmethod - def test_sdk_item_metadata_schema_no_pretty(runner: CliRunner) -> None: - """Test item-metadata-schema command with --no-pretty flag.""" - result = runner.invoke(cli, ["sdk", "item-metadata-schema", "--no-pretty"]) - - assert result.exit_code == 0 - # Don't normalize output for JSON parsing - output = result.output - # Check that schema contains expected top-level properties - assert "schema_version" in output - assert "platform_bucket" in output - # In non-pretty mode, output should still be valid JSON - import json - - # Try to parse the output as JSON (should not raise an error) - try: - # Find JSON in output (skip boot messages) - json_start = output.find("{") - if json_start >= 0: - json.loads(output[json_start:]) - else: - pytest.fail("No JSON found in output") - except json.JSONDecodeError: - pytest.fail("Output is not valid JSON") diff --git a/tests/aignostics/platform/client_cache_test.py b/tests/aignostics/platform/client_cache_test.py index a3f18435..f5aabfc5 100644 --- a/tests/aignostics/platform/client_cache_test.py +++ b/tests/aignostics/platform/client_cache_test.py @@ -13,96 +13,88 @@ import pytest from aignostics.platform._client import Client -from aignostics.platform._operation_cache import _operation_cache, cache_key_with_token class TestCacheKeyGeneration: """Test cases for cache key generation.""" - @pytest.mark.unit @staticmethod def test_cache_key_includes_token_hash() -> None: """Test that cache key includes a hash of the token. This ensures different tokens produce different cache keys. 
""" - key1 = cache_key_with_token("token-123", "method_name") - key2 = cache_key_with_token("token-456", "method_name") + key1 = Client._cache_key("token-123", "method_name") + key2 = Client._cache_key("token-456", "method_name") assert key1 != key2 assert ":" in key1 assert ":" in key2 - @pytest.mark.unit @staticmethod def test_cache_key_includes_method_name() -> None: """Test that cache key includes the method name. This ensures different methods produce different cache keys even with same token. """ - key1 = cache_key_with_token("token-123", "method_a") - key2 = cache_key_with_token("token-123", "method_b") + key1 = Client._cache_key("token-123", "method_a") + key2 = Client._cache_key("token-123", "method_b") assert key1 != key2 assert "method_a" in key1 assert "method_b" in key2 - @pytest.mark.unit @staticmethod def test_cache_key_includes_args() -> None: """Test that cache key includes positional arguments. This ensures different args produce different cache keys. """ - key1 = cache_key_with_token("token-123", "method", "arg1", "arg2") - key2 = cache_key_with_token("token-123", "method", "arg1", "arg3") + key1 = Client._cache_key("token-123", "method", "arg1", "arg2") + key2 = Client._cache_key("token-123", "method", "arg1", "arg3") assert key1 != key2 - @pytest.mark.unit @staticmethod def test_cache_key_includes_kwargs() -> None: """Test that cache key includes keyword arguments. This ensures different kwargs produce different cache keys. """ - key1 = cache_key_with_token("token-123", "method", param1="value1") - key2 = cache_key_with_token("token-123", "method", param1="value2") + key1 = Client._cache_key("token-123", "method", param1="value1") + key2 = Client._cache_key("token-123", "method", param1="value2") assert key1 != key2 - @pytest.mark.unit @staticmethod def test_cache_key_consistent_for_same_inputs() -> None: """Test that cache key is consistent for identical inputs. This ensures the cache can find previously stored values. """ - key1 = cache_key_with_token("token-123", "method", "arg1", param1="value1") - key2 = cache_key_with_token("token-123", "method", "arg1", param1="value1") + key1 = Client._cache_key("token-123", "method", "arg1", param1="value1") + key2 = Client._cache_key("token-123", "method", "arg1", param1="value1") assert key1 == key2 - @pytest.mark.unit @staticmethod def test_cache_key_handles_empty_token() -> None: """Test that cache key handles empty or None token gracefully.""" - key1 = cache_key_with_token("", "method") - key2 = cache_key_with_token("", "method") + key1 = Client._cache_key("", "method") + key2 = Client._cache_key("", "method") assert key1 == key2 assert isinstance(key1, str) assert len(key1) > 0 - @pytest.mark.unit @staticmethod def test_cache_key_kwargs_order_independent() -> None: """Test that cache key is independent of kwargs order. Since kwargs are sorted in the cache key generation, the order should not matter. 
""" - key1 = cache_key_with_token("token-123", "method", a=1, b=2, c=3) - key2 = cache_key_with_token("token-123", "method", c=3, a=1, b=2) + key1 = Client._cache_key("token-123", "method", a=1, b=2, c=3) + key2 = Client._cache_key("token-123", "method", c=3, a=1, b=2) assert key1 == key2 @@ -110,7 +102,6 @@ def test_cache_key_kwargs_order_independent() -> None: class TestCacheBasicFunctionality: """Test cases for basic cache functionality.""" - @pytest.mark.unit @staticmethod def test_me_caches_result_on_first_call( mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock @@ -135,7 +126,6 @@ def test_me_caches_result_on_first_call( # Results should be identical assert result1 == result2 - @pytest.mark.unit @staticmethod def test_cache_stores_value_in_operation_cache( mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock @@ -148,23 +138,22 @@ def test_cache_stores_value_in_operation_cache( mock_api_client.get_me_v1_me_get.return_value = mock_me_response # Initially cache should be empty - assert len(_operation_cache) == 0 + assert len(Client._operation_cache) == 0 # Call me() client_with_mock_api.me() # Cache should now have one entry - assert len(_operation_cache) == 1 + assert len(Client._operation_cache) == 1 # Verify cache structure: key -> (value, expiry_timestamp) - cache_entry = next(iter(_operation_cache.values())) + cache_entry = next(iter(Client._operation_cache.values())) assert isinstance(cache_entry, tuple) assert len(cache_entry) == 2 assert cache_entry[0] == mock_me_response assert isinstance(cache_entry[1], float) assert cache_entry[1] > time.time() # Expiry should be in the future - @pytest.mark.unit @staticmethod def test_cache_returns_none_when_api_returns_none( mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock @@ -189,7 +178,6 @@ def test_cache_returns_none_when_api_returns_none( class TestCacheTTL: """Test cases for cache TTL (time-to-live) functionality.""" - @pytest.mark.unit @staticmethod def test_cache_expires_after_ttl( mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock @@ -213,9 +201,9 @@ def test_cache_expires_after_ttl( assert mock_api_client.get_me_v1_me_get.call_count == 1 # Manually expire the cache by setting expiry to the past - cache_key = next(iter(_operation_cache.keys())) - value, _ = _operation_cache[cache_key] - _operation_cache[cache_key] = (value, time.time() - 1) # Set expiry to 1 second ago + cache_key = next(iter(Client._operation_cache.keys())) + value, _ = Client._operation_cache[cache_key] + Client._operation_cache[cache_key] = (value, time.time() - 1) # Set expiry to 1 second ago # Third call after expiry - should hit API again mock_api_client.get_me_v1_me_get.return_value = mock_me_response_2 @@ -223,7 +211,6 @@ def test_cache_expires_after_ttl( assert result3 == mock_me_response_2 assert mock_api_client.get_me_v1_me_get.call_count == 2 - @pytest.mark.unit @staticmethod def test_expired_cache_entry_removed( mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock @@ -237,22 +224,21 @@ def test_expired_cache_entry_removed( # First call - creates cache entry client_with_mock_api.me() - assert len(_operation_cache) == 1 + assert len(Client._operation_cache) == 1 # Expire the cache entry - cache_key = next(iter(_operation_cache.keys())) - value, _ = _operation_cache[cache_key] - _operation_cache[cache_key] = (value, time.time() - 1) + cache_key = next(iter(Client._operation_cache.keys())) + value, _ = 
Client._operation_cache[cache_key] + Client._operation_cache[cache_key] = (value, time.time() - 1) # Second call - should remove expired entry and create new one client_with_mock_api.me() - assert len(_operation_cache) == 1 + assert len(Client._operation_cache) == 1 # The new entry should not be expired - cache_entry = _operation_cache[cache_key] + cache_entry = Client._operation_cache[cache_key] assert cache_entry[1] > time.time() - @pytest.mark.unit @staticmethod def test_cache_ttl_is_60_seconds_for_me( mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock @@ -268,7 +254,7 @@ def test_cache_ttl_is_60_seconds_for_me( client_with_mock_api.me() # Get the cache entry - cache_entry = next(iter(_operation_cache.values())) + cache_entry = next(iter(Client._operation_cache.values())) expiry_time = cache_entry[1] # Expiry should be approximately 60 seconds in the future @@ -279,7 +265,6 @@ def test_cache_ttl_is_60_seconds_for_me( class TestCacheWithDifferentTokens: """Test cases for cache behavior with different authentication tokens.""" - @pytest.mark.unit @staticmethod def test_different_tokens_use_different_cache_entries(mock_settings: MagicMock, mock_api_client: MagicMock) -> None: """Test that different tokens create separate cache entries. @@ -291,7 +276,6 @@ def test_different_tokens_use_different_cache_entries(mock_settings: MagicMock, # Client with token-1 with ( - patch("aignostics.platform._operation_cache.get_token", return_value="token-1"), patch("aignostics.platform._client.get_token", return_value="token-1"), patch("aignostics.platform._client.Client.get_api_client", return_value=mock_api_client), ): @@ -305,7 +289,6 @@ def test_different_tokens_use_different_cache_entries(mock_settings: MagicMock, # Client with token-2 with ( - patch("aignostics.platform._operation_cache.get_token", return_value="token-2"), patch("aignostics.platform._client.get_token", return_value="token-2"), patch("aignostics.platform._client.Client.get_api_client", return_value=mock_api_client), ): @@ -318,9 +301,8 @@ def test_different_tokens_use_different_cache_entries(mock_settings: MagicMock, assert mock_api_client.get_me_v1_me_get.call_count == 2 # New API call # Cache should have two entries - assert len(_operation_cache) == 2 + assert len(Client._operation_cache) == 2 - @pytest.mark.unit @staticmethod def test_token_change_invalidates_cache(mock_settings: MagicMock, mock_api_client: MagicMock) -> None: """Test that changing token invalidates the cache. 
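For orientation, the `(value, expiry_timestamp)` bookkeeping the TTL tests above describe fits in a few lines. A sketch over a plain dict; the names and dict-based store are assumptions, not the SDK's actual code:

```python
import time
from typing import Any, Callable


def _cached_call(cache: dict[str, tuple[Any, float]], key: str, ttl: float, fetch: Callable[[], Any]) -> Any:
    entry = cache.get(key)
    if entry is not None:
        value, expires_at = entry
        if expires_at > time.time():
            return value  # fresh hit: no API call
        del cache[key]  # expired entry is removed before refetching
    value = fetch()
    cache[key] = (value, time.time() + ttl)  # e.g. ttl=60 for me()
    return value
```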
@@ -332,11 +314,9 @@ def test_token_change_invalidates_cache(mock_settings: MagicMock, mock_api_clien # First call with token-1 with ( - patch("aignostics.platform._operation_cache.get_token") as mock_get_token, patch("aignostics.platform._client.get_token", return_value="token-1"), patch("aignostics.platform._client.Client.get_api_client", return_value=mock_api_client), ): - mock_get_token.return_value = "token-1" client = Client(cache_token=False) client._api = mock_api_client mock_api_client.get_me_v1_me_get.return_value = mock_me_response_1 @@ -345,15 +325,14 @@ def test_token_change_invalidates_cache(mock_settings: MagicMock, mock_api_clien assert result1 == mock_me_response_1 assert mock_api_client.get_me_v1_me_get.call_count == 1 - # Second call with token-2 (simulating token refresh) - mock_get_token.return_value = "token-2" + # Second call with token-2 (simulating token refresh) + with patch("aignostics.platform._client.get_token", return_value="token-2"): mock_api_client.get_me_v1_me_get.return_value = mock_me_response_2 result2 = client.me() assert result2 == mock_me_response_2 assert mock_api_client.get_me_v1_me_get.call_count == 2 # New API call, cache not used - @pytest.mark.unit @staticmethod def test_same_token_reuses_cache(mock_settings: MagicMock, mock_api_client: MagicMock) -> None: """Test that using the same token reuses cached values. @@ -364,7 +343,6 @@ def test_same_token_reuses_cache(mock_settings: MagicMock, mock_api_client: Magi # First client with token-123 with ( - patch("aignostics.platform._operation_cache.get_token", return_value="token-123"), patch("aignostics.platform._client.get_token", return_value="token-123"), patch("aignostics.platform._client.Client.get_api_client", return_value=mock_api_client), ): @@ -378,7 +356,6 @@ def test_same_token_reuses_cache(mock_settings: MagicMock, mock_api_client: Magi # Second client with same token-123 with ( - patch("aignostics.platform._operation_cache.get_token", return_value="token-123"), patch("aignostics.platform._client.get_token", return_value="token-123"), patch("aignostics.platform._client.Client.get_api_client", return_value=mock_api_client), ): @@ -396,7 +373,6 @@ def test_same_token_reuses_cache(mock_settings: MagicMock, mock_api_client: Magi class TestCacheWithRetries: """Test cases for cache interaction with retry mechanism.""" - @pytest.mark.unit @staticmethod def test_cache_not_populated_on_failure( mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock @@ -419,9 +395,8 @@ def side_effect(*args, **kwargs): client_with_mock_api.me() # Cache should be empty after failed call - assert len(_operation_cache) == 0 + assert len(Client._operation_cache) == 0 - @pytest.mark.unit @staticmethod def test_exceptions_not_cached_subsequent_call_retries( mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock @@ -455,7 +430,7 @@ def side_effect(*args, **kwargs): client_with_mock_api.me() assert call_count == 3 # Should have tried 3 times (max attempts) - assert len(_operation_cache) == 0 # No cache entry for failures + assert len(Client._operation_cache) == 0 # No cache entry for failures # Second call should retry the API (not use cached exception) and succeed result = client_with_mock_api.me() @@ -467,7 +442,6 @@ def side_effect(*args, **kwargs): assert result2 == mock_me_response assert call_count == 4 # Still 4, used cache - @pytest.mark.unit @staticmethod def test_cache_populated_after_successful_retry( mock_settings: MagicMock, client_with_mock_api: Client, 
mock_api_client: MagicMock @@ -502,7 +476,6 @@ def side_effect(*args, **kwargs): assert result2 == mock_me_response assert call_count == 2 # Still 2, used cache - @pytest.mark.unit @staticmethod def test_cache_used_before_retry_logic( mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock @@ -536,7 +509,6 @@ def test_cache_used_before_retry_logic( class TestCacheConcurrency: """Test cases for cache behavior in concurrent scenarios.""" - @pytest.mark.unit @staticmethod def test_cache_is_class_level(mock_settings: MagicMock, mock_api_client: MagicMock) -> None: """Test that cache is shared across all Client instances. @@ -546,7 +518,6 @@ def test_cache_is_class_level(mock_settings: MagicMock, mock_api_client: MagicMo mock_me_response = {"user_id": "test-user", "org_id": "test-org"} with ( - patch("aignostics.platform._operation_cache.get_token", return_value="token-123"), patch("aignostics.platform._client.get_token", return_value="token-123"), patch("aignostics.platform._client.Client.get_api_client", return_value=mock_api_client), ): @@ -567,7 +538,6 @@ def test_cache_is_class_level(mock_settings: MagicMock, mock_api_client: MagicMo assert result == mock_me_response assert mock_api_client.get_me_v1_me_get.call_count == 1 # Still 1 - @pytest.mark.unit @staticmethod def test_cache_cleared_affects_all_clients(mock_settings: MagicMock, mock_api_client: MagicMock) -> None: """Test that clearing cache affects all Client instances. @@ -577,7 +547,6 @@ def test_cache_cleared_affects_all_clients(mock_settings: MagicMock, mock_api_cl mock_me_response = {"user_id": "test-user", "org_id": "test-org"} with ( - patch("aignostics.platform._operation_cache.get_token", return_value="token-123"), patch("aignostics.platform._client.get_token", return_value="token-123"), patch("aignostics.platform._client.Client.get_api_client", return_value=mock_api_client), ): @@ -587,11 +556,11 @@ def test_cache_cleared_affects_all_clients(mock_settings: MagicMock, mock_api_cl # Populate cache with client1 client1.me() - assert len(_operation_cache) == 1 + assert len(Client._operation_cache) == 1 # Clear cache - _operation_cache.clear() - assert len(_operation_cache) == 0 + Client._operation_cache.clear() + assert len(Client._operation_cache) == 0 # client2 should not find cached value client2 = Client(cache_token=False) @@ -604,7 +573,6 @@ def test_cache_cleared_affects_all_clients(mock_settings: MagicMock, mock_api_cl class TestCacheEdgeCases: """Test cases for edge cases and unusual scenarios.""" - @pytest.mark.unit @staticmethod def test_cache_handles_complex_response_objects( mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock @@ -630,7 +598,6 @@ def test_cache_handles_complex_response_objects( assert result2 == mock_me_response assert result1 == result2 - @pytest.mark.unit @staticmethod def test_cache_handles_empty_dict( mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock @@ -645,7 +612,6 @@ def test_cache_handles_empty_dict( assert result2 == {} assert mock_api_client.get_me_v1_me_get.call_count == 1 - @pytest.mark.unit @staticmethod def test_cache_with_rapid_successive_calls( mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock @@ -666,18 +632,16 @@ def test_cache_with_rapid_successive_calls( # API should only be called once assert mock_api_client.get_me_v1_me_get.call_count == 1 - @pytest.mark.unit @staticmethod def test_cache_key_with_unicode_args(mock_settings: MagicMock) -> None: """Test that 
cache key generation handles unicode characters correctly.""" - key1 = cache_key_with_token("token-123", "method", "arg-ü-ö-ä", param="value-é-ñ") - key2 = cache_key_with_token("token-123", "method", "arg-ü-ö-ä", param="value-é-ñ") + key1 = Client._cache_key("token-123", "method", "arg-ü-ö-ä", param="value-é-ñ") + key2 = Client._cache_key("token-123", "method", "arg-ü-ö-ä", param="value-é-ñ") # Should be consistent assert key1 == key2 assert isinstance(key1, str) - @pytest.mark.unit @staticmethod def test_cache_with_very_long_token(mock_settings: MagicMock, mock_api_client: MagicMock) -> None: """Test that cache handles very long authentication tokens. @@ -687,7 +651,6 @@ def test_cache_with_very_long_token(mock_settings: MagicMock, mock_api_client: M long_token = "x" * 10000 # Very long token with ( - patch("aignostics.platform._operation_cache.get_token", return_value=long_token), patch("aignostics.platform._client.get_token", return_value=long_token), patch("aignostics.platform._client.Client.get_api_client", return_value=mock_api_client), ): @@ -698,14 +661,13 @@ def test_cache_with_very_long_token(mock_settings: MagicMock, mock_api_client: M client.me() # Cache key should be reasonable length (token is hashed) - cache_key = next(iter(_operation_cache.keys())) + cache_key = next(iter(Client._operation_cache.keys())) assert len(cache_key) < 200 # Much shorter than the 10000 char token class TestCacheIntegrationWithAuthentication: """Test cases for cache integration with authentication system.""" - @pytest.mark.unit @staticmethod def test_cache_uses_current_token_from_get_token(mock_settings: MagicMock, mock_api_client: MagicMock) -> None: """Test that cache always uses the current token from get_token(). @@ -715,8 +677,7 @@ def test_cache_uses_current_token_from_get_token(mock_settings: MagicMock, mock_ mock_me_response = {"user_id": "test-user", "org_id": "test-org"} with ( - patch("aignostics.platform._operation_cache.get_token") as mock_get_token, - patch("aignostics.platform._client.get_token", return_value="token-1"), + patch("aignostics.platform._client.get_token") as mock_get_token, patch("aignostics.platform._client.Client.get_api_client", return_value=mock_api_client), ): mock_get_token.return_value = "token-1" @@ -739,7 +700,6 @@ def test_cache_uses_current_token_from_get_token(mock_settings: MagicMock, mock_ assert result == mock_me_response_2 assert mock_api_client.get_me_v1_me_get.call_count == 2 - @pytest.mark.unit @staticmethod def test_cache_with_token_refresh_scenario(mock_settings: MagicMock, mock_api_client: MagicMock) -> None: """Test cache behavior in a token refresh scenario. 
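Taken together, the key-generation tests (token hashed to a bounded length, method name embedded, kwargs sorted, unicode-safe) suggest a shape like the following. A sketch only; the hash choice and separators are assumptions:

```python
import hashlib


def _cache_key(token: str, method: str, *args: object, **kwargs: object) -> str:
    # Hashing keeps the key short and non-reversible even for a 10,000-char token;
    # sorting kwargs makes the key independent of call-site argument order.
    token_hash = hashlib.sha256(token.encode("utf-8")).hexdigest()
    arg_part = ",".join(repr(a) for a in args)
    kwarg_part = ",".join(f"{k}={v!r}" for k, v in sorted(kwargs.items()))
    return f"{token_hash}:{method}:{arg_part}:{kwarg_part}"
```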
@@ -750,8 +710,7 @@ def test_cache_with_token_refresh_scenario(mock_settings: MagicMock, mock_api_cl mock_me_response_2 = {"user_id": "user-2", "org_id": "org-2"} with ( - patch("aignostics.platform._operation_cache.get_token") as mock_get_token, - patch("aignostics.platform._client.get_token", return_value="token-initial"), + patch("aignostics.platform._client.get_token") as mock_get_token, patch("aignostics.platform._client.Client.get_api_client", return_value=mock_api_client), ): # Initial token @@ -786,4 +745,4 @@ def test_cache_with_token_refresh_scenario(mock_settings: MagicMock, mock_api_cl assert mock_api_client.get_me_v1_me_get.call_count == 2 # Should now have 2 cache entries (one for each token) - assert len(_operation_cache) == 2 + assert len(Client._operation_cache) == 2 diff --git a/tests/aignostics/platform/client_me_retry_test.py b/tests/aignostics/platform/client_me_retry_test.py index 22d863a2..7fb41fb4 100644 --- a/tests/aignostics/platform/client_me_retry_test.py +++ b/tests/aignostics/platform/client_me_retry_test.py @@ -16,7 +16,6 @@ class TestMeSuccess: """Test cases for successful Client.me() calls.""" - @pytest.mark.unit @staticmethod def test_me_success_no_retry(mock_settings: MagicMock, client_with_mock_api: Client) -> None: """Test that successful me() call completes without retries. @@ -32,7 +31,6 @@ def test_me_success_no_retry(mock_settings: MagicMock, client_with_mock_api: Cli # Should succeed on first try assert client_with_mock_api._api.get_me_v1_me_get.call_count == 1 - @pytest.mark.unit @staticmethod def test_me_passes_timeout_to_api(mock_settings: MagicMock, client_with_mock_api: Client) -> None: """Test that me() passes the correct timeout value to the API call.""" @@ -51,7 +49,6 @@ def test_me_passes_timeout_to_api(mock_settings: MagicMock, client_with_mock_api class TestMeRetryOnTransientErrors: """Test cases for retry behavior on transient errors.""" - @pytest.mark.unit @staticmethod def test_me_retries_on_service_exception(mock_settings: MagicMock, client_with_mock_api: Client, caplog) -> None: """Test that me() retries on ServiceException (5xx server errors). @@ -83,7 +80,6 @@ def side_effect(*args, **kwargs): retry_logs = [record for record in caplog.records if "Retrying" in record.getMessage()] assert len(retry_logs) > 0, "Should log retry attempts for ServiceException" - @pytest.mark.unit @staticmethod def test_me_retries_on_timeout_error(mock_settings: MagicMock, client_with_mock_api: Client, caplog) -> None: """Test that me() retries on Urllib3TimeoutError. @@ -114,7 +110,6 @@ def side_effect(*args, **kwargs): retry_logs = [record for record in caplog.records if "Retrying" in record.getMessage()] assert len(retry_logs) > 0, "Should log retry attempts for timeout errors" - @pytest.mark.unit @staticmethod def test_me_retries_on_pool_error(mock_settings: MagicMock, client_with_mock_api: Client) -> None: """Test that me() retries on PoolError. @@ -138,7 +133,6 @@ def side_effect(*args, **kwargs): # Should have retried multiple times assert call_count >= 3, f"Expected at least 3 attempts but got {call_count}" - @pytest.mark.unit @staticmethod def test_me_retries_on_incomplete_read(mock_settings: MagicMock, client_with_mock_api: Client) -> None: """Test that me() retries on IncompleteRead. 
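The transient-error list exercised by these me() tests (timeouts, pool, protocol, and proxy errors, incomplete reads, 5xx ServiceException) maps naturally onto a tenacity-style loop, which is also consistent with the stop/wait setting names and the WARNING-level "Retrying" records the tests capture. A sketch under that assumption; `api` stands in for the generated client, and `ServiceException` is omitted to keep the example self-contained:

```python
import logging

from tenacity import Retrying, before_sleep_log, retry_if_exception_type, stop_after_attempt, wait_exponential
from urllib3.exceptions import PoolError, ProtocolError
from urllib3.exceptions import TimeoutError as Urllib3TimeoutError

logger = logging.getLogger(__name__)

TRANSIENT_ERRORS = (Urllib3TimeoutError, PoolError, ProtocolError)  # the real client adds more, e.g. ServiceException


def me_with_retry(api, attempts: int = 3, wait_min: float = 0.1, wait_max: float = 5.0):
    for attempt in Retrying(
        stop=stop_after_attempt(attempts),  # maps to me_retry_attempts_max
        wait=wait_exponential(min=wait_min, max=wait_max),
        retry=retry_if_exception_type(TRANSIENT_ERRORS),
        before_sleep=before_sleep_log(logger, logging.WARNING),  # the "Retrying..." records caplog sees
        reraise=True,  # non-retryable or exhausted: original exception propagates
    ):
        with attempt:
            return api.get_me_v1_me_get(_request_timeout=10)
    return None  # unreachable: Retrying either returns above or reraises
```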
@@ -161,7 +155,6 @@ def side_effect(*args, **kwargs): # Should have retried multiple times assert call_count >= 3, f"Expected at least 3 attempts but got {call_count}" - @pytest.mark.unit @staticmethod def test_me_retries_on_protocol_error(mock_settings: MagicMock, client_with_mock_api: Client) -> None: """Test that me() retries on ProtocolError. @@ -185,7 +178,6 @@ def side_effect(*args, **kwargs): # Should have retried multiple times assert call_count >= 3, f"Expected at least 3 attempts but got {call_count}" - @pytest.mark.unit @staticmethod def test_me_retries_on_proxy_error(mock_settings: MagicMock, client_with_mock_api: Client) -> None: """Test that me() retries on ProxyError. @@ -213,7 +205,6 @@ def side_effect(*args, **kwargs): class TestMeRetrySuccess: """Test cases for successful retry scenarios.""" - @pytest.mark.unit @staticmethod def test_me_succeeds_after_transient_failure( mock_settings: MagicMock, client_with_mock_api: Client, caplog @@ -248,7 +239,6 @@ def side_effect(*args, **kwargs): retry_logs = [record for record in caplog.records if "Retrying" in record.getMessage()] assert len(retry_logs) == 1, "Should log exactly one retry attempt" - @pytest.mark.unit @staticmethod def test_me_succeeds_on_second_retry(mock_settings: MagicMock, client_with_mock_api: Client) -> None: """Test that me() succeeds after multiple transient failures. @@ -281,7 +271,6 @@ def side_effect(*args, **kwargs): class TestMeNoRetryOnNonRetryableErrors: """Test cases for errors that should NOT trigger retries.""" - @pytest.mark.unit @staticmethod def test_me_no_retry_on_non_retryable_exception( mock_settings: MagicMock, client_with_mock_api: Client, caplog @@ -324,15 +313,14 @@ def side_effect(*args, **kwargs): class TestMeRetryConfiguration: """Test cases for retry configuration settings.""" - @pytest.mark.unit @staticmethod def test_me_respects_max_attempts_setting(mock_settings: MagicMock, client_with_mock_api: Client) -> None: - """Test that me() respects the me_retry_attempts setting. + """Test that me() respects the me_retry_attempts_max setting. The retry logic should stop after the configured maximum number of attempts. """ # Set max attempts to 5 - mock_settings.return_value.me_retry_attempts = 5 + mock_settings.return_value.me_retry_attempts_max = 5 call_count = 0 @@ -352,15 +340,14 @@ def side_effect(*args, **kwargs): # Should have attempted exactly 5 times (respecting the configured max) assert call_count == 5, f"Expected exactly 5 attempts but got {call_count}" - @pytest.mark.unit @staticmethod def test_me_respects_zero_max_attempts(mock_settings: MagicMock, client_with_mock_api: Client) -> None: - """Test that me() with me_retry_attempts=0 does not retry. + """Test that me() with me_retry_attempts_max=0 does not retry. When max attempts is set to 0, the function should fail immediately without any retries. """ # Set max attempts to 0 (no retries) - mock_settings.return_value.me_retry_attempts = 0 + mock_settings.return_value.me_retry_attempts_max = 0 call_count = 0 @@ -379,7 +366,6 @@ def side_effect(*args, **kwargs): # Should have attempted exactly once (no retries) assert call_count == 1, f"Expected exactly 1 attempt but got {call_count}" - @pytest.mark.unit @staticmethod def test_me_wait_times_increase_with_retries(mock_settings: MagicMock, client_with_mock_api: Client) -> None: """Test that wait times between retries increase exponentially. 
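The increasing-wait assertion below corresponds to the usual capped doubling schedule; one common formula, with the multiplier being an assumption:

```python
def backoff_wait(attempt: int, wait_min: float = 0.1, wait_max: float = 10.0) -> float:
    # attempt 1 -> 0.1s, attempt 2 -> 0.2s, attempt 3 -> 0.4s, ... capped at wait_max
    return min(wait_max, wait_min * 2 ** (attempt - 1))
```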
@@ -388,7 +374,7 @@ def test_me_wait_times_increase_with_retries(mock_settings: MagicMock, client_wi """ mock_settings.return_value.me_retry_wait_min = 0.1 mock_settings.return_value.me_retry_wait_max = 10.0 - mock_settings.return_value.me_retry_attempts = 4 + mock_settings.return_value.me_retry_attempts_max = 4 call_times = [] @@ -422,7 +408,6 @@ def side_effect(*args, **kwargs): class TestMeEdgeCases: """Test cases for edge cases and unusual scenarios.""" - @pytest.mark.unit @staticmethod def test_me_with_none_response(mock_settings: MagicMock, client_with_mock_api: Client) -> None: """Test that me() handles None response from API. @@ -436,7 +421,6 @@ def test_me_with_none_response(mock_settings: MagicMock, client_with_mock_api: C assert result is None assert client_with_mock_api._api.get_me_v1_me_get.call_count == 1 - @pytest.mark.unit @staticmethod def test_me_with_empty_response(mock_settings: MagicMock, client_with_mock_api: Client) -> None: """Test that me() handles empty dict response from API.""" @@ -447,7 +431,6 @@ def test_me_with_empty_response(mock_settings: MagicMock, client_with_mock_api: assert result == {} assert client_with_mock_api._api.get_me_v1_me_get.call_count == 1 - @pytest.mark.unit @staticmethod def test_me_logs_retry_attempts(mock_settings: MagicMock, client_with_mock_api: Client, caplog) -> None: """Test that me() logs retry attempts at WARNING level. @@ -485,13 +468,12 @@ def side_effect(*args, **kwargs): assert "attempt" in log.getMessage().lower() or "retrying" in log.getMessage().lower() -class TestMeWithSettings: - """Test cases for with settings.""" +class TestMeIntegrationWithSettings: + """Test cases for integration with settings module.""" - @pytest.mark.unit @staticmethod - def test_me_uses_settings_from_settings(client_with_mock_api: Client) -> None: - """Test that me() retrieves settings on each call. + def test_me_uses_settings_from_settings_module(client_with_mock_api: Client) -> None: + """Test that me() retrieves settings from the settings module on each call. Settings should be fetched dynamically to allow for runtime configuration changes. """ @@ -500,7 +482,7 @@ def test_me_uses_settings_from_settings(client_with_mock_api: Client) -> None: with patch("aignostics.platform._client.settings") as mock_settings: settings_obj = MagicMock() - settings_obj.me_retry_attempts = 3 + settings_obj.me_retry_attempts_max = 3 settings_obj.me_retry_wait_min = 0.1 settings_obj.me_retry_wait_max = 5.0 settings_obj.me_timeout = 20.0 @@ -514,7 +496,6 @@ def test_me_uses_settings_from_settings(client_with_mock_api: Client) -> None: # Verify the timeout from settings was used client_with_mock_api._api.get_me_v1_me_get.assert_called_once_with(_request_timeout=20, _headers=ANY) - @pytest.mark.unit @staticmethod def test_me_allows_runtime_settings_changes(client_with_mock_api: Client) -> None: """Test that me() respects settings changes between calls. 
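The runtime-change behavior tested here follows from reading the settings object inside the method body instead of freezing it at import time. A self-contained sketch of that pattern (the dataclass stands in for the SDK's settings object):

```python
from dataclasses import dataclass


@dataclass
class Settings:
    me_timeout: float = 10.0
    me_retry_attempts_max: int = 3


_settings = Settings()


def settings() -> Settings:
    return _settings


class ClientSketch:
    def me(self) -> float:
        s = settings()  # read on every call, never cached on the instance or module
        return s.me_timeout


client = ClientSketch()
assert client.me() == 10.0
_settings.me_timeout = 20.0  # configuration changed at runtime...
assert client.me() == 20.0  # ...and the very next call observes it
```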
@@ -538,7 +519,7 @@ def side_effect(*args, **kwargs): with patch("aignostics.platform._client.settings") as mock_settings: # First call with max_attempts = 1 (will fail) settings_obj_1 = MagicMock() - settings_obj_1.me_retry_attempts = 1 + settings_obj_1.me_retry_attempts_max = 1 settings_obj_1.me_retry_wait_min = 0.1 settings_obj_1.me_retry_wait_max = 2.0 settings_obj_1.me_timeout = 10.0 @@ -556,7 +537,7 @@ def side_effect(*args, **kwargs): # Second call with max_attempts = 3 (will succeed after retry) settings_obj_2 = MagicMock() - settings_obj_2.me_retry_attempts = 3 + settings_obj_2.me_retry_attempts_max = 3 settings_obj_2.me_retry_wait_min = 0.1 settings_obj_2.me_retry_wait_max = 2.0 settings_obj_2.me_timeout = 10.0 diff --git a/tests/aignostics/platform/client_pooling_test.py b/tests/aignostics/platform/client_pooling_test.py index 2741d84b..caec5bd1 100644 --- a/tests/aignostics/platform/client_pooling_test.py +++ b/tests/aignostics/platform/client_pooling_test.py @@ -1,11 +1,8 @@ """Tests for API client connection pooling.""" -import pytest - from aignostics.platform._client import Client -@pytest.mark.unit def test_api_client_cached_is_shared() -> None: """Test that get_api_client with cache_token=True returns the same instance. @@ -19,7 +16,6 @@ def test_api_client_cached_is_shared() -> None: assert api1 is api2, "get_api_client(cache_token=True) should return the same instance" -@pytest.mark.unit def test_api_client_uncached_is_shared() -> None: """Test that get_api_client with cache_token=False returns the same instance. @@ -33,7 +29,6 @@ def test_api_client_uncached_is_shared() -> None: assert api1 is api2, "get_api_client(cache_token=False) should return the same instance" -@pytest.mark.unit def test_api_client_cached_vs_uncached_are_different() -> None: """Test that cached and uncached API clients are separate instances. @@ -47,7 +42,6 @@ def test_api_client_cached_vs_uncached_are_different() -> None: assert cached_api is not uncached_api, "Cached and uncached API clients should be different instances" -@pytest.mark.unit def test_client_instances_share_api_client() -> None: """Test that multiple Client instances share the same underlying API client. 
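The pooling tests above describe two lazily built, class-level singletons (one per token-caching mode) shared by every Client instance. A sketch; `_api_client_uncached` appears in the token-provider test helper below, while `_api_client_cached` and the `object()` stand-ins are assumptions:

```python
class PooledClientSketch:
    _api_client_cached = None
    _api_client_uncached = None

    @classmethod
    def get_api_client(cls, cache_token: bool = True) -> object:
        # One shared instance per mode means every Client reuses the same
        # underlying connection pool instead of opening new ones.
        if cache_token:
            if cls._api_client_cached is None:
                cls._api_client_cached = object()  # stand-in for the generated ApiClient
            return cls._api_client_cached
        if cls._api_client_uncached is None:
            cls._api_client_uncached = object()
        return cls._api_client_uncached


assert PooledClientSketch.get_api_client(cache_token=True) is PooledClientSketch.get_api_client(cache_token=True)
assert PooledClientSketch.get_api_client(cache_token=True) is not PooledClientSketch.get_api_client(cache_token=False)
```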
diff --git a/tests/aignostics/platform/client_token_provider_test.py b/tests/aignostics/platform/client_token_provider_test.py index e787f211..aa2b6d54 100644 --- a/tests/aignostics/platform/client_token_provider_test.py +++ b/tests/aignostics/platform/client_token_provider_test.py @@ -14,7 +14,6 @@ def _clear_api_client_cache() -> None: Client._api_client_uncached = None -@pytest.mark.unit def test_oauth2_token_provider_configuration_uses_token_provider() -> None: """Test that token_provider is used when provided.""" token_provider = Mock(return_value="dynamic-token") @@ -24,7 +23,6 @@ def test_oauth2_token_provider_configuration_uses_token_provider() -> None: token_provider.assert_called_once() -@pytest.mark.unit def test_oauth2_token_provider_configuration_no_token() -> None: """Test that auth_settings returns empty dict if no token_provider is set.""" config = _OAuth2TokenProviderConfiguration(host="https://dummy") @@ -32,7 +30,6 @@ def test_oauth2_token_provider_configuration_no_token() -> None: assert auth == {} -@pytest.mark.unit def test_client_passes_token_provider() -> None: """Test that the client passes the token provider to the configuration.""" with ( @@ -47,7 +44,6 @@ def test_client_passes_token_provider() -> None: public_api_mock.assert_called() -@pytest.mark.unit def test_client_me_calls_api() -> None: """Test that the client.me() method calls the API and returns the result.""" with ( diff --git a/tests/aignostics/platform/conftest.py b/tests/aignostics/platform/conftest.py index 4e47b5bd..385dcf86 100644 --- a/tests/aignostics/platform/conftest.py +++ b/tests/aignostics/platform/conftest.py @@ -6,7 +6,6 @@ import pytest from aignostics.platform._client import Client -from aignostics.platform._operation_cache import _operation_cache @pytest.fixture @@ -18,26 +17,10 @@ def mock_settings() -> MagicMock: """ with patch("aignostics.platform._client.settings") as mock_settings: settings = MagicMock() - settings.me_retry_attempts = 3 + settings.me_retry_attempts_max = 3 settings.me_retry_wait_min = 0.1 settings.me_retry_wait_max = 5.0 settings.me_timeout = 10.0 - settings.me_cache_ttl = 60 # 60 seconds for testing - settings.application_retry_attempts = 3 - settings.application_retry_wait_min = 0.1 - settings.application_retry_wait_max = 5.0 - settings.application_timeout = 10.0 - settings.application_cache_ttl = 300 # 5 minutes - settings.application_version_retry_attempts = 3 - settings.application_version_retry_wait_min = 0.1 - settings.application_version_retry_wait_max = 5.0 - settings.application_version_timeout = 10.0 - settings.application_version_cache_ttl = 300 # 5 minutes - settings.run_retry_attempts = 3 - settings.run_retry_wait_min = 0.1 - settings.run_retry_wait_max = 5.0 - settings.run_timeout = 10.0 - settings.run_cache_ttl = 15 # 15 seconds settings.api_root = "https://test.api.com" mock_settings.return_value = settings yield mock_settings @@ -59,7 +42,7 @@ def clear_cache() -> None: This ensures tests don't interfere with each other through shared cache state. """ - _operation_cache.clear() + Client._operation_cache.clear() @pytest.fixture diff --git a/tests/aignostics/platform/e2e_test.py b/tests/aignostics/platform/e2e_test.py deleted file mode 100644 index 179b19ce..00000000 --- a/tests/aignostics/platform/e2e_test.py +++ /dev/null @@ -1,516 +0,0 @@ -"""Scheduled end-to-end (e2e) tests for the Aignostics client. - -This module contains e2e tests that run real application workflows -against the Aignostics platform. 
diff --git a/tests/aignostics/platform/e2e_test.py b/tests/aignostics/platform/e2e_test.py
deleted file mode 100644
index 179b19ce..00000000
--- a/tests/aignostics/platform/e2e_test.py
+++ /dev/null
@@ -1,516 +0,0 @@
-"""Scheduled end-to-end (e2e) tests for the Aignostics client.
-
-This module contains e2e tests that run real application workflows
-against the Aignostics platform. These tests verify e2e functionality
-including creating runs, downloading results, and validating outputs.
-
-"""
-
-import tempfile
-from datetime import UTC, datetime, timedelta
-from pathlib import Path
-
-import pytest
-from aignx.codegen.models import (
-    ArtifactOutput,
-    ArtifactState,
-    ItemOutput,
-    ItemState,
-    RunOutput,
-    RunState,
-)
-
-from aignostics import platform
-from aignostics.platform.resources.runs import Run
-from tests.constants_test import (
-    HETA_APPLICATION_ID,
-    HETA_APPLICATION_VERSION,
-    SPOT_0_CRC32C,
-    SPOT_0_GS_URL,
-    SPOT_0_HEIGHT,
-    SPOT_0_RESOLUTION_MPP,
-    SPOT_0_WIDTH,
-    SPOT_1_CRC32C,
-    SPOT_1_GS_URL,
-    SPOT_1_HEIGHT,
-    SPOT_1_RESOLUTION_MPP,
-    SPOT_1_WIDTH,
-    SPOT_2_CRC32C,
-    SPOT_2_GS_URL,
-    SPOT_2_HEIGHT,
-    SPOT_2_RESOLUTION_MPP,
-    SPOT_2_WIDTH,
-    SPOT_3_CRC32C,
-    SPOT_3_GS_URL,
-    SPOT_3_HEIGHT,
-    SPOT_3_RESOLUTION_MPP,
-    SPOT_3_WIDTH,
-    TEST_APPLICATION_ID,
-    TEST_APPLICATION_VERSION,
-)
-
-TEST_APPLICATION_SUBMIT_AND_WAIT_DEADLINE_SECONDS = 60 * 45  # 45 minutes
-TEST_APPLICATION_SUBMIT_AND_WAIT_DUE_DATE_SECONDS = 60 * 10  # 10 minutes
-
-TEST_APPLICATION_SUBMIT_AND_FIND_DEADLINE_SECONDS = 60 * 60 * 24  # 24 hours
-TEST_APPLICATION_SUBMIT_AND_FIND_DUE_DATE_SECONDS = 60 * 60 * 24  # 24 hours
-
-HETA_APPLICATION_SUBMIT_AND_WAIT_DUE_DATE_SECONDS = 60 * 60 * 1  # 1 hour
-HETA_APPLICATION_SUBMIT_AND_WAIT_DEADLINE_SECONDS = 60 * 60 * 5  # 5 hours
-
-HETA_APPLICATION_SUBMIT_AND_FIND_DUE_DATE_SECONDS = 60 * 60 * 24  # 24 hours
-HETA_APPLICATION_SUBMIT_AND_FIND_DEADLINE_SECONDS = 60 * 60 * 24  # 24 hours
-
-
-def _get_single_spot_payload_for_heta(expires_seconds: int) -> list[platform.InputItem]:
-    """Generates a payload using a single spot."""
-    return [
-        platform.InputItem(
-            external_id=SPOT_0_GS_URL,
-            input_artifacts=[
-                platform.InputArtifact(
-                    name="whole_slide_image",
-                    download_url=platform.generate_signed_url(
-                        url=SPOT_0_GS_URL,
-                        expires_seconds=expires_seconds,
-                    ),
-                    metadata={
-                        "checksum_base64_crc32c": SPOT_0_CRC32C,
-                        "resolution_mpp": SPOT_0_RESOLUTION_MPP,
-                        "width_px": SPOT_0_WIDTH,
-                        "height_px": SPOT_0_HEIGHT,
-                        "media_type": "image/tiff",
-                        "staining_method": "H&E",
-                        "specimen": {
-                            "tissue": "LUNG",
-                            "disease": "LUNG_CANCER",
-                        },
-                    },
-                )
-            ],
-        ),
-    ]
-
-
-def _get_three_spots_payload_for_test(expires_seconds: int) -> list[platform.InputItem]:
-    """Generates a payload using three spots."""
-    return [
-        platform.InputItem(
-            external_id=SPOT_1_GS_URL,
-            input_artifacts=[
-                platform.InputArtifact(
-                    name="whole_slide_image",
-                    download_url=platform.generate_signed_url(
-                        url=SPOT_1_GS_URL,
-                        expires_seconds=expires_seconds,
-                    ),
-                    metadata={
-                        "checksum_base64_crc32c": SPOT_1_CRC32C,
-                        "width_px": SPOT_1_WIDTH,
-                        "height_px": SPOT_1_HEIGHT,
-                        "resolution_mpp": SPOT_1_RESOLUTION_MPP,
-                        "media_type": "image/tiff",
-                    },
-                )
-            ],
-        ),
-        platform.InputItem(
-            external_id=SPOT_2_GS_URL,
-            input_artifacts=[
-                platform.InputArtifact(
-                    name="whole_slide_image",
-                    download_url=platform.generate_signed_url(
-                        url=SPOT_2_GS_URL,
-                        expires_seconds=expires_seconds,
-                    ),
-                    metadata={
-                        "checksum_base64_crc32c": SPOT_2_CRC32C,
-                        "width_px": SPOT_2_WIDTH,
-                        "height_px": SPOT_2_HEIGHT,
-                        "resolution_mpp": SPOT_2_RESOLUTION_MPP,
-                        "media_type": "image/tiff",
-                    },
-                )
-            ],
-        ),
-        platform.InputItem(
-            external_id=SPOT_3_GS_URL,
-            input_artifacts=[
-                platform.InputArtifact(
-                    name="whole_slide_image",
-                    download_url=platform.generate_signed_url(
-                        url=SPOT_3_GS_URL,
-                        expires_seconds=expires_seconds,
-                    ),
-                    metadata={
-                        "checksum_base64_crc32c": SPOT_3_CRC32C,
-                        "width_px": SPOT_3_WIDTH,
-                        "height_px": SPOT_3_HEIGHT,
-                        "resolution_mpp": SPOT_3_RESOLUTION_MPP,
-                        "media_type": "image/tiff",
-                    },
-                )
-            ],
-        ),
-    ]
-
-
-def _submit_and_validate(  # noqa: PLR0913, PLR0917
-    application_id: str,
-    application_version: str,
-    payload: list[platform.InputItem],
-    due_date_seconds: int,
-    deadline_seconds: int,
-    tags: set[str] | None = None,
-) -> Run:
-    """Submit application run and validate its details.
-
-    Args:
-        application_id (str): The application ID to use for the test.
-        application_version (str): The application version to use for the test.
-        payload (list[platform.InputItem]): The input items for the application run.
-        due_date_seconds (int): The due date in seconds from now for the application run.
-        deadline_seconds (int): The deadline in seconds from now for the application run.
-        tags (set[str] | None): A set of tags to attach to the application run.
-
-    Raises:
-        AssertionError: If any of the validation checks fail.
-        ValueError: If more than one tag is provided.
-    """
-    client = platform.Client()
-    run = client.runs.submit(
-        application_id=application_id,
-        application_version=application_version,
-        items=payload,
-        custom_metadata={
-            "sdk": {
-                "tags": tags or set(),
-                "scheduling": {
-                    "due_date": (datetime.now(tz=UTC) + timedelta(seconds=due_date_seconds)).isoformat(),
-                    "deadline": (datetime.now(tz=UTC) + timedelta(seconds=deadline_seconds)).isoformat(),
-                },
-            }
-        },
-    )
-    details = run.details()
-    assert details.run_id == run.run_id, "Run ID mismatch after submission"
-    assert details.application_id == application_id, "Application ID mismatch after submission"
-    assert details.version_number == application_version, "Application version mismatch after submission"
-    assert details.state in {RunState.PENDING, RunState.PROCESSING}, (
-        f"Unexpected run state `{details.state}` after submission"
-    )
-
-    if tags and len(tags) > 1:
-        message = "Only single tag filtering is supported in this test code."
-        raise ValueError(message)
-    runs = client.runs.list(
-        application_id=application_id,
-        application_version=application_version,
-        custom_metadata=f'$.sdk.tags[*] ? (@ == "{tags[0]}")' if tags else None,
-    )
-
-    # Find the submitted run in the list
-    matched_runs = [r for r in runs if r.run_id == run.run_id]
-    assert len(matched_runs) == 1, f"Submitted run `{run.run_id}` not found in run listing"
-
-    return run
-
-
-def _submit_and_wait(  # noqa: PLR0913, PLR0917
-    application_id: str,
-    application_version: str,
-    payload: list[platform.InputItem],
-    record_property,
-    due_date_seconds: int,
-    deadline_seconds: int,
-    tags: set[str] | None = None,
-    checksum_attribute_key: str = "checksum_base64_crc32c",
-) -> None:
-    """Helper function to run an application test.
-
-    This function creates an application run, waits for results to become available,
-    downloads results, and validates outputs.
-
-    Args:
-        application_id (str): The application ID to use for the test.
-        application_version (str): The application version to use for the test.
-        payload (list[platform.InputItem]): The input items for the application run.
-        due_date_seconds (int): The due date in seconds from now for the application run.
-        deadline_seconds (int): The deadline in seconds from now for the application run.
-        tags (set[str] | None): A set of tags to attach to the application run.
-        checksum_attribute_key (str): The key used to validate the checksum of the output artifacts.
-        record_property: Function to record test properties.
-
-    Raises:
-        AssertionError: If any of the validation checks fail.
-    """
-    record_property("tested-item-id", "SPEC-PLATFORM-SERVICE")
-    run = _submit_and_validate(
-        application_id=application_id,
-        application_version=application_version,
-        payload=payload,
-        due_date_seconds=due_date_seconds,
-        deadline_seconds=deadline_seconds,
-        tags=tags,
-    )
-
-    with tempfile.TemporaryDirectory() as temp_dir:
-        run.download_to_folder(temp_dir, checksum_attribute_key, timeout_seconds=deadline_seconds)
-        _validate_output(run, Path(temp_dir), checksum_attribute_key)
-
-
-def _find_and_validate(
-    application_id: str,
-    application_version: str,
-    payload: list[platform.InputItem],
-    due_date_seconds: int,
-    deadline_seconds: int,
-) -> Run:
-    """Find application run submitted earlier and validate its details.
-
-    Args:
-        application_id (str): The application ID to use for the test.
-        application_version (str): The application version to use for the test.
-        payload (list[platform.InputItem]): The input items for the application run.
-        due_date_seconds (int): The due date in seconds from now for the application run.
-        deadline_seconds (int): The deadline in seconds from now for the application run.
-
-    Raises:
-        AssertionError: If any of the validation checks fail.
-    """
-    client = platform.Client()
-    assert client is not None, "Failed to create platform client"
-    # TODO(Helmut): Build logic to find the run based on metadata once supported
-
-
-@pytest.mark.skip(
-    reason="v0.0.4 on production balking on whole_slide_image input while identical version accepting on staging"
-)
-@pytest.mark.e2e
-@pytest.mark.long_running
-@pytest.mark.timeout(timeout=TEST_APPLICATION_SUBMIT_AND_WAIT_DEADLINE_SECONDS + 60 * 5)
-def test_platform_test_app_submit_and_wait(record_property) -> None:
-    """Test application runs with the test application.
-
-    This test creates an application run using the test application and three spots.
-    It then waits for results to become available, downloads the results to a temporary directory
-    and performs various checks to ensure the application run completed successfully and the results are valid.
-
-    Raises:
-        AssertionError: If any of the validation checks fail.
-    """
-    _submit_and_wait(
-        application_id=TEST_APPLICATION_ID,
-        application_version=TEST_APPLICATION_VERSION,
-        payload=_get_three_spots_payload_for_test(
-            expires_seconds=TEST_APPLICATION_SUBMIT_AND_FIND_DEADLINE_SECONDS + 60 * 5
-        ),
-        record_property=record_property,
-        deadline_seconds=TEST_APPLICATION_SUBMIT_AND_FIND_DEADLINE_SECONDS,
-        due_date_seconds=TEST_APPLICATION_SUBMIT_AND_FIND_DUE_DATE_SECONDS,
-        tags=["test_platform_test_app_submit_and_wait"],
-    )
-
-
-@pytest.mark.e2e
-@pytest.mark.very_long_running
-@pytest.mark.scheduled_only
-@pytest.mark.timeout(timeout=HETA_APPLICATION_SUBMIT_AND_WAIT_DEADLINE_SECONDS + 60 * 5)
-def test_platform_heta_app_submit_and_wait(record_property) -> None:
-    """Test application runs with the HETA application.
-
-    This test creates an application run using the HETA application and a single spot.
-    It then waits for the results to become available, downloads the results to a
-    temporary directory and performs various checks to ensure the application run completed successfully
-    and the results are valid.
-
-    Raises:
-        AssertionError: If any of the validation checks fail.
-    """
-    _submit_and_wait(
-        application_id=HETA_APPLICATION_ID,
-        application_version=HETA_APPLICATION_VERSION,
-        payload=_get_single_spot_payload_for_heta(
-            expires_seconds=HETA_APPLICATION_SUBMIT_AND_WAIT_DEADLINE_SECONDS + 60 * 5
-        ),
-        record_property=record_property,
-        deadline_seconds=HETA_APPLICATION_SUBMIT_AND_WAIT_DEADLINE_SECONDS,
-        due_date_seconds=HETA_APPLICATION_SUBMIT_AND_WAIT_DUE_DATE_SECONDS,
-        tags=["test_platform_heta_app_submit_and_wait"],
-    )
-
-
-@pytest.mark.skip(reason="Waits for change in scheduler")
-@pytest.mark.e2e
-@pytest.mark.long_running
-@pytest.mark.timeout(timeout=60 * 5)
-def test_platform_test_app_submit() -> None:
-    """Test application submission with the test application.
-
-    This test submits an application run with the test application and validates the submission.
-
-    Raises:
-        AssertionError: If any of the validation checks fail.
-    """
-    _submit_and_validate(
-        application_id=TEST_APPLICATION_ID,
-        application_version=TEST_APPLICATION_VERSION,
-        payload=_get_three_spots_payload_for_test(
-            expires_seconds=TEST_APPLICATION_SUBMIT_AND_WAIT_DEADLINE_SECONDS + 60 * 5
-        ),
-        deadline_seconds=TEST_APPLICATION_SUBMIT_AND_WAIT_DEADLINE_SECONDS,
-        due_date_seconds=TEST_APPLICATION_SUBMIT_AND_WAIT_DUE_DATE_SECONDS,
-        tags=["test_platform_heta_app_submit_and_wait"],
-    )
-
-
-@pytest.mark.skip(reason="Waits for change in scheduler")
-@pytest.mark.e2e
-@pytest.mark.very_long_running
-@pytest.mark.scheduled_only
-@pytest.mark.timeout(timeout=60 * 5)
-def test_platform_test_app_find() -> None:
-    """Test application runs with the test application.
-
-    This test finds an application run with the test application submitted earlier and
-    validates it completed successfully and in time.
-
-    Raises:
-        AssertionError: If any of the validation checks fail.
-    """
-    _find_and_validate(
-        application_id=TEST_APPLICATION_ID,
-        application_version=TEST_APPLICATION_VERSION,
-        payload=_get_three_spots_payload_for_test(
-            expires_seconds=TEST_APPLICATION_SUBMIT_AND_FIND_DEADLINE_SECONDS + 60 * 5
-        ),
-        deadline_seconds=TEST_APPLICATION_SUBMIT_AND_FIND_DEADLINE_SECONDS,
-        due_date_seconds=TEST_APPLICATION_SUBMIT_AND_FIND_DUE_DATE_SECONDS,
-    )
-
-
-@pytest.mark.skip(reason="Waits for change in scheduler")
-@pytest.mark.e2e
-@pytest.mark.very_long_running
-@pytest.mark.scheduled_only
-@pytest.mark.timeout(timeout=60 * 5)
-def test_platform_heta_app_submit() -> None:
-    """Test application runs with the HETA application.
-
-    This test submits an application run with the HETA application and validates the submission.
-
-    Raises:
-        AssertionError: If any of the validation checks fail.
-    """
-    _submit_and_validate(
-        application_id=HETA_APPLICATION_ID,
-        application_version=HETA_APPLICATION_VERSION,
-        payload=_get_single_spot_payload_for_heta(
-            expires_seconds=HETA_APPLICATION_SUBMIT_AND_FIND_DEADLINE_SECONDS + 60 * 5
-        ),
-        deadline_seconds=HETA_APPLICATION_SUBMIT_AND_FIND_DEADLINE_SECONDS,
-        due_date_seconds=HETA_APPLICATION_SUBMIT_AND_FIND_DUE_DATE_SECONDS,
-        tags=["test_platform_heta_app_submit_and_find"],
-    )
-
-
-@pytest.mark.skip(reason="Waits for change in scheduler")
-@pytest.mark.e2e
-@pytest.mark.very_long_running
-@pytest.mark.scheduled_only
-@pytest.mark.timeout(timeout=60 * 5)
-def test_platform_heta_app_find() -> None:
-    """Test application runs with the HETA application.
-
-    This test finds an application run with the HETA application submitted earlier and
-    validates it completed successfully and in time.
-
-    Raises:
-        AssertionError: If any of the validation checks fail.
-    """
-    _find_and_validate(
-        application_id=HETA_APPLICATION_ID,
-        application_version=HETA_APPLICATION_VERSION,
-        payload=_get_single_spot_payload_for_heta(
-            expires_seconds=HETA_APPLICATION_SUBMIT_AND_FIND_DEADLINE_SECONDS + 60 * 5
-        ),
-        deadline_seconds=HETA_APPLICATION_SUBMIT_AND_FIND_DEADLINE_SECONDS,
-        due_date_seconds=HETA_APPLICATION_SUBMIT_AND_FIND_DUE_DATE_SECONDS,
-    )
-
-
-def _validate_output(
-    application_run: Run,
-    output_base_folder: Path,
-    checksum_attribute_key: str = "checksum_base64_crc32c",
-) -> None:
-    """Validate the output of an application run.
-
-    This function checks if the application run has completed successfully and verifies the output artifact checksum.
-
-    Args:
-        application_run (Run): The application run to validate.
-        output_base_folder (Path): The base folder where the output is stored.
-        checksum_attribute_key (str): The key used to validate the checksum of the output artifacts.
-    """
-    # validate run state
-    run_details = application_run.details()
-    assert run_details.state == RunState.TERMINATED, (
-        f"Run `{application_run.run_id}`: "
-        f"Did not finish in state `TERMINATED`, but `{run_details.state}`.\n"
-        f"Termination reason `{run_details.termination_reason}`, "
-        f"error code `{run_details.error_code}`, message `{run_details.error_message}`."
-    )
-    assert run_details.output == RunOutput.FULL, (
-        f"Run `{application_run.run_id}`: "
-        f"Did not finish in state `FULL` for its output, but `{run_details.output}`.\n"
-        f"Termination reason `{run_details.termination_reason}`, "
-        f"error code `{run_details.error_code}`, message `{run_details.error_message}`."
-    )
-
-    run_result_folder = output_base_folder / application_run.run_id
-    assert run_result_folder.exists(), f"Application run {application_run.run_id}: result folder does not exist"
-
-    # validate item state
-    run_results = application_run.results()
-    for item in run_results:
-        assert item.state == ItemState.TERMINATED, (
-            f"Application run `{application_run.run_id}`: "
-            f"state for item `{item.external_id}` is `{item.state}`, expected `TERMINATED`.\n"
-            f"Termination reason `{item.termination_reason}`, "
-            f"error code `{item.error_code}`, message `{item.error_message}`."
-        )
-        assert item.output == ItemOutput.FULL, (
-            f"Application run `{application_run.run_id}`: "
-            f"output for item `{item.external_id}` is `{item.output}`, expected `FULL`.\n"
-            f"Termination reason `{item.termination_reason}`, "
-            f"error code `{item.error_code}`, message `{item.error_message}`."
-        )
-
-        # validate output artifact state
-        item_dir = run_result_folder / item.external_id
-        assert item_dir.exists(), (
-            f"Application run `{application_run.run_id}`: result folder for item `{item.external_id}` does not exist"
-        )
-        for artifact in item.output_artifacts:
-            assert artifact.state == ArtifactState.TERMINATED, (
-                f"Application run `{application_run.run_id}`: artifact `{artifact}` should have state `TERMINATED`"
-            )
-            assert artifact.output == ArtifactOutput.AVAILABLE, (
-                f"Application run `{application_run.run_id}`: "
-                f"artifact `{artifact}` should have output state `AVAILABLE`."
-            )
-            assert artifact.download_url is not None, (
-                f"Application run `{application_run.run_id}`: artifact `{artifact}` should provide a download url."
-            )
-            file_ending = platform.mime_type_to_file_ending(platform.get_mime_type_for_artifact(artifact))
-            file_path = item_dir / f"{artifact.name}{file_ending}"
-            assert file_path.exists(), (
-                f"Application run `{application_run.run_id}`: artifact `{artifact}` was not downloaded."
-            )
-            checksum = artifact.metadata[checksum_attribute_key]
-            file_checksum = platform.calculate_file_crc32c(file_path)
-            assert file_checksum == checksum, (
-                f"Application run `{application_run.run_id}`: "
-                f"metadata checksum != file checksum `{checksum}` <> `{file_checksum}` for artifact `{artifact}`."
-            )
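The deleted `_validate_output` above compares `artifact.metadata["checksum_base64_crc32c"]` against `platform.calculate_file_crc32c(file_path)`. A minimal sketch of what such a helper could look like follows, assuming the google-crc32c package and a base64-encoded digest (matching the metadata key); the SDK's actual implementation is not shown in this diff.

```python
# Hypothetical sketch of a calculate_file_crc32c-style helper. Assumes the
# google-crc32c package; the digest is base64-encoded to match the
# "checksum_base64_crc32c" metadata convention used above.
import base64
from pathlib import Path

import google_crc32c


def calculate_file_crc32c(path: Path, chunk_size: int = 1024 * 1024) -> str:
    checksum = google_crc32c.Checksum()
    with path.open("rb") as handle:
        # Stream in chunks so large whole-slide images never load fully into memory.
        while chunk := handle.read(chunk_size):
            checksum.update(chunk)
    return base64.b64encode(checksum.digest()).decode("ascii")
```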
diff --git a/tests/aignostics/platform/nocache_test.py b/tests/aignostics/platform/nocache_test.py
deleted file mode 100644
index 364b8260..00000000
--- a/tests/aignostics/platform/nocache_test.py
+++ /dev/null
@@ -1,687 +0,0 @@
-"""Tests for nocache parameter functionality across the platform module.
-
-This module tests that:
-1. nocache=False uses cached values (default behavior)
-2. nocache=True skips reading from cache but still writes to cache
-3. All platform methods that support caching correctly handle nocache
-4. The decorator properly intercepts and handles the nocache parameter
-"""
-
-import time
-from unittest.mock import MagicMock
-
-import pytest
-
-from aignostics.platform._client import Client
-from aignostics.platform._operation_cache import _operation_cache, cached_operation, operation_cache_clear
-
-
-class TestNocacheDecoratorBehavior:
-    """Test the nocache parameter handling in the cached_operation decorator."""
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_decorator_without_nocache_uses_cache() -> None:
-        """Test that decorated function uses cache by default (nocache=False)."""
-        call_count = 0
-
-        @cached_operation(ttl=60, use_token=False)
-        def test_func() -> int:
-            nonlocal call_count
-            call_count += 1
-            return call_count
-
-        # First call - should execute function
-        result1 = test_func()
-        assert result1 == 1
-        assert call_count == 1
-
-        # Second call - should use cache
-        result2 = test_func()
-        assert result2 == 1  # Same as first call, from cache
-        assert call_count == 1  # Function not called again
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_decorator_with_nocache_false_uses_cache() -> None:
-        """Test that nocache=False explicitly uses cache."""
-        call_count = 0
-
-        @cached_operation(ttl=60, use_token=False)
-        def test_func() -> int:
-            nonlocal call_count
-            call_count += 1
-            return call_count
-
-        # First call with nocache=False
-        result1 = test_func()  # type: ignore[call-arg]
-        assert result1 == 1
-        assert call_count == 1
-
-        # Second call with nocache=False - should use cache
-        result2 = test_func()  # type: ignore[call-arg]
-        assert result2 == 1
-        assert call_count == 1
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_decorator_with_nocache_true_skips_reading_cache() -> None:
-        """Test that nocache=True skips reading from cache."""
-        call_count = 0
-
-        @cached_operation(ttl=60, use_token=False)
-        def test_func() -> int:
-            nonlocal call_count
-            call_count += 1
-            return call_count
-
-        # First call - populates cache
-        result1 = test_func()  # type: ignore[call-arg]
-        assert result1 == 1
-        assert call_count == 1
-
-        # Second call with nocache=True - skips cache, executes function
-        result2 = test_func(nocache=True)  # type: ignore[call-arg]
-        assert result2 == 2  # New value, not from cache
-        assert call_count == 2  # Function called again
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_decorator_with_nocache_true_still_writes_to_cache() -> None:
-        """Test that nocache=True still writes the result to cache."""
-        call_count = 0
-
-        @cached_operation(ttl=60, use_token=False)
-        def test_func() -> int:
-            nonlocal call_count
-            call_count += 1
-            return call_count
-
-        # First call - populates cache
-        result1 = test_func()  # type: ignore[call-arg]
-        assert result1 == 1
-        assert call_count == 1
-
-        # Second call with nocache=True - skips cache read, writes new value
-        result2 = test_func(nocache=True)  # type: ignore[call-arg]
-        assert result2 == 2
-        assert call_count == 2
-
-        # Third call without nocache - should use the value cached by second call
-        result3 = test_func()  # type: ignore[call-arg]
-        assert result3 == 2  # Uses value from second call
-        assert call_count == 2  # Function not called again
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_decorator_nocache_parameter_not_passed_to_function() -> None:
-        """Test that nocache parameter is intercepted and not passed to the decorated function."""
-        received_kwargs = {}
-
-        @cached_operation(ttl=60, use_token=False)
-        def test_func(**kwargs: bool) -> dict:
-            nonlocal received_kwargs
-            received_kwargs = kwargs
-            return {"called": True}
-
-        # Call with nocache=True
-        test_func(nocache=True)  # type: ignore[call-arg]
-
-        # The decorated function should not receive nocache in kwargs
-        assert "nocache" not in received_kwargs
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_decorator_with_nocache_and_other_kwargs() -> None:
-        """Test that nocache works alongside other keyword arguments."""
-        call_count = 0
-
-        @cached_operation(ttl=60, use_token=False)
-        def test_func(param1: str = "default", param2: int = 0) -> tuple:
-            nonlocal call_count
-            call_count += 1
-            return (call_count, param1, param2)
-
-        # First call with params
-        result1 = test_func(param1="value1", param2=123)  # type: ignore[call-arg]
-        assert result1 == (1, "value1", 123)
-        assert call_count == 1
-
-        # Second call with same params - should use cache
-        result2 = test_func(param1="value1", param2=123)  # type: ignore[call-arg]
-        assert result2 == (1, "value1", 123)
-        assert call_count == 1
-
-        # Third call with nocache=True and same params - should skip cache
-        result3 = test_func(param1="value1", param2=123, nocache=True)  # type: ignore[call-arg]
-        assert result3 == (2, "value1", 123)
-        assert call_count == 2
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_decorator_nocache_with_different_cache_keys() -> None:
-        """Test that nocache respects different cache keys (different args)."""
-        call_count = 0
-
-        @cached_operation(ttl=60, use_token=False)
-        def test_func(key: str) -> tuple:
-            nonlocal call_count
-            call_count += 1
-            return (call_count, key)
-
-        # Call with key="A"
-        result1 = test_func("A")  # type: ignore[call-arg]
-        assert result1 == (1, "A")
-        assert call_count == 1
-
-        # Call with key="B"
-        result2 = test_func("B")  # type: ignore[call-arg]
-        assert result2 == (2, "B")
-        assert call_count == 2
-
-        # Call with key="A", nocache=True - should skip cache for key="A"
-        result3 = test_func("A", nocache=True)  # type: ignore[call-arg]
-        assert result3 == (3, "A")
-        assert call_count == 3
-
-        # Call with key="B" again - should use cache for key="B"
-        result4 = test_func("B")  # type: ignore[call-arg]
-        assert result4 == (2, "B")  # Still has cached value
-        assert call_count == 3
-
-
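The decorator tests above pin down the `cached_operation` contract: reads hit the cache by default, `nocache=True` bypasses the read but still writes the fresh result, `nocache` is stripped before the wrapped call, and the cache key incorporates the arguments. A minimal sketch consistent with that contract follows; the `ttl`/`use_token` signature comes from the tests, while the internals (key scheme, storage) are assumptions rather than the deleted module's actual code.

```python
# Minimal sketch of a cached_operation decorator matching the tested contract;
# internals are assumed, not the SDK's real implementation.
import time
from functools import wraps
from typing import Any, Callable

_operation_cache: dict[Any, tuple[float, Any]] = {}


def cached_operation(ttl: int, use_token: bool) -> Callable:
    def decorator(func: Callable) -> Callable:
        @wraps(func)
        def wrapper(*args: Any, nocache: bool = False, **kwargs: Any) -> Any:
            # use_token would mix the caller's auth token into the key so
            # different users never share cached results; omitted here.
            key = (func.__qualname__, args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            if not nocache:
                entry = _operation_cache.get(key)
                if entry is not None and now - entry[0] < ttl:
                    return entry[1]  # fresh cache hit
            result = func(*args, **kwargs)  # nocache is NOT passed through
            _operation_cache[key] = (now, result)  # always write back
            return result

        return wrapper

    return decorator
```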
default.""" - mock_me_response = {"user_id": "test-user", "org_id": "test-org"} - mock_api_client.get_me_v1_me_get.return_value = mock_me_response - - # First call - result1 = client_with_mock_api.me() - assert result1 == mock_me_response - assert mock_api_client.get_me_v1_me_get.call_count == 1 - - # Second call - should use cache - result2 = client_with_mock_api.me() - assert result2 == mock_me_response - assert mock_api_client.get_me_v1_me_get.call_count == 1 - - @pytest.mark.unit - @staticmethod - def test_me_nocache_false_uses_cache( - mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock - ) -> None: - """Test that me(nocache=False) uses cache.""" - mock_me_response = {"user_id": "test-user", "org_id": "test-org"} - mock_api_client.get_me_v1_me_get.return_value = mock_me_response - - # First call - result1 = client_with_mock_api.me(nocache=False) - assert result1 == mock_me_response - assert mock_api_client.get_me_v1_me_get.call_count == 1 - - # Second call with nocache=False - should use cache - result2 = client_with_mock_api.me(nocache=False) - assert result2 == mock_me_response - assert mock_api_client.get_me_v1_me_get.call_count == 1 - - @pytest.mark.unit - @staticmethod - def test_me_nocache_true_fetches_fresh_data( - mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock - ) -> None: - """Test that me(nocache=True) fetches fresh data from API.""" - mock_me_response_1 = {"user_id": "user-1", "org_id": "org-1"} - mock_me_response_2 = {"user_id": "user-2", "org_id": "org-2"} - - # First call - populates cache - mock_api_client.get_me_v1_me_get.return_value = mock_me_response_1 - result1 = client_with_mock_api.me() - assert result1 == mock_me_response_1 - assert mock_api_client.get_me_v1_me_get.call_count == 1 - - # Change API response - mock_api_client.get_me_v1_me_get.return_value = mock_me_response_2 - - # Second call with nocache=True - should fetch fresh data - result2 = client_with_mock_api.me(nocache=True) - assert result2 == mock_me_response_2 - assert mock_api_client.get_me_v1_me_get.call_count == 2 - - @pytest.mark.unit - @staticmethod - def test_me_nocache_true_updates_cache( - mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock - ) -> None: - """Test that me(nocache=True) updates the cache with fresh data.""" - mock_me_response_1 = {"user_id": "user-1", "org_id": "org-1"} - mock_me_response_2 = {"user_id": "user-2", "org_id": "org-2"} - - # First call - populates cache - mock_api_client.get_me_v1_me_get.return_value = mock_me_response_1 - result1 = client_with_mock_api.me() - assert result1 == mock_me_response_1 - assert mock_api_client.get_me_v1_me_get.call_count == 1 - - # Change API response - mock_api_client.get_me_v1_me_get.return_value = mock_me_response_2 - - # Second call with nocache=True - fetches and caches new data - result2 = client_with_mock_api.me(nocache=True) - assert result2 == mock_me_response_2 - assert mock_api_client.get_me_v1_me_get.call_count == 2 - - # Third call without nocache - should use updated cache - result3 = client_with_mock_api.me() - assert result3 == mock_me_response_2 # Uses new cached value - assert mock_api_client.get_me_v1_me_get.call_count == 2 # No additional API call - - -class TestClientApplicationNocache: - """Test nocache parameter for Client.application() method.""" - - @pytest.mark.unit - @staticmethod - def test_application_default_uses_cache( - mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock - ) -> 
None: - """Test that application() uses cache by default.""" - mock_app_response = {"application_id": "test-app", "name": "Test App"} - mock_api_client.read_application_by_id_v1_applications_application_id_get.return_value = mock_app_response - - # First call - result1 = client_with_mock_api.application("test-app") - assert result1 == mock_app_response - assert mock_api_client.read_application_by_id_v1_applications_application_id_get.call_count == 1 - - # Second call - should use cache - result2 = client_with_mock_api.application("test-app") - assert result2 == mock_app_response - assert mock_api_client.read_application_by_id_v1_applications_application_id_get.call_count == 1 - - @pytest.mark.unit - @staticmethod - def test_application_nocache_true_fetches_fresh_data( - mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock - ) -> None: - """Test that application(nocache=True) fetches fresh data.""" - mock_app_response_1 = {"application_id": "test-app", "name": "App v1"} - mock_app_response_2 = {"application_id": "test-app", "name": "App v2"} - - # First call - mock_api_client.read_application_by_id_v1_applications_application_id_get.return_value = mock_app_response_1 - result1 = client_with_mock_api.application("test-app") - assert result1 == mock_app_response_1 - assert mock_api_client.read_application_by_id_v1_applications_application_id_get.call_count == 1 - - # Change response - mock_api_client.read_application_by_id_v1_applications_application_id_get.return_value = mock_app_response_2 - - # Second call with nocache=True - result2 = client_with_mock_api.application("test-app", nocache=True) - assert result2 == mock_app_response_2 - assert mock_api_client.read_application_by_id_v1_applications_application_id_get.call_count == 2 - - @pytest.mark.unit - @staticmethod - def test_application_nocache_with_different_app_ids( - mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock - ) -> None: - """Test that nocache works correctly with different application IDs.""" - mock_app_response_a = {"application_id": "app-a", "name": "App A"} - mock_app_response_b = {"application_id": "app-b", "name": "App B"} - - def side_effect(*args, **kwargs): - app_id = kwargs.get("application_id") - if app_id == "app-a": - return mock_app_response_a - return mock_app_response_b - - mock_api_client.read_application_by_id_v1_applications_application_id_get.side_effect = side_effect - - # Call for app-a - result1 = client_with_mock_api.application("app-a") - assert result1 == mock_app_response_a - - # Call for app-b - result2 = client_with_mock_api.application("app-b") - assert result2 == mock_app_response_b - - # Both should be cached - assert mock_api_client.read_application_by_id_v1_applications_application_id_get.call_count == 2 - - # Call app-a with nocache=True - result3 = client_with_mock_api.application("app-a", nocache=True) - assert result3 == mock_app_response_a - assert mock_api_client.read_application_by_id_v1_applications_application_id_get.call_count == 3 - - # Call app-b without nocache - should use cache - result4 = client_with_mock_api.application("app-b") - assert result4 == mock_app_response_b - assert mock_api_client.read_application_by_id_v1_applications_application_id_get.call_count == 3 - - -class TestClientApplicationVersionNocache: - """Test nocache parameter for Client.application_version() method.""" - - @pytest.mark.unit - @staticmethod - def test_application_version_default_uses_cache( - mock_settings: MagicMock, 
client_with_mock_api: Client, mock_api_client: MagicMock - ) -> None: - """Test that application_version() uses cache by default.""" - mock_version_response = {"application_id": "test-app", "version": "1.0.0"} - mock_api_client.application_version_details_v1_applications_application_id_versions_version_get.return_value = ( - mock_version_response - ) - - # First call - result1 = client_with_mock_api.application_version("test-app", "1.0.0") - assert result1 == mock_version_response - assert ( - mock_api_client.application_version_details_v1_applications_application_id_versions_version_get.call_count - == 1 - ) - - # Second call - should use cache - result2 = client_with_mock_api.application_version("test-app", "1.0.0") - assert result2 == mock_version_response - assert ( - mock_api_client.application_version_details_v1_applications_application_id_versions_version_get.call_count - == 1 - ) - - @pytest.mark.unit - @staticmethod - def test_application_version_nocache_true_fetches_fresh_data( - mock_settings: MagicMock, client_with_mock_api: Client, mock_api_client: MagicMock - ) -> None: - """Test that application_version(nocache=True) fetches fresh data.""" - mock_version_response_1 = {"application_id": "test-app", "version": "1.0.0", "updated": "2024-01-01"} - mock_version_response_2 = {"application_id": "test-app", "version": "1.0.0", "updated": "2024-01-02"} - - # First call - mock_api_client.application_version_details_v1_applications_application_id_versions_version_get.return_value = ( - mock_version_response_1 - ) - result1 = client_with_mock_api.application_version("test-app", "1.0.0") - assert result1 == mock_version_response_1 - assert ( - mock_api_client.application_version_details_v1_applications_application_id_versions_version_get.call_count - == 1 - ) - - # Change response - mock_api_client.application_version_details_v1_applications_application_id_versions_version_get.return_value = ( - mock_version_response_2 - ) - - # Second call with nocache=True - result2 = client_with_mock_api.application_version("test-app", "1.0.0", nocache=True) - assert result2 == mock_version_response_2 - assert ( - mock_api_client.application_version_details_v1_applications_application_id_versions_version_get.call_count - == 2 - ) - - -class TestRunDetailsNocache: - """Test nocache parameter for Run.details() method - simplified tests.""" - - @pytest.mark.unit - @staticmethod - def test_run_details_supports_nocache_parameter() -> None: - """Test that Run.details() method signature supports nocache parameter.""" - from inspect import signature - - from aignostics.platform.resources.runs import Run - - # Verify the method has nocache parameter - sig = signature(Run.details) - assert "nocache" in sig.parameters - param = sig.parameters["nocache"] - assert param.default is False - assert param.annotation is bool - - -class TestRunsListNocache: - """Test nocache parameter for Runs.list() method - simplified tests.""" - - @pytest.mark.unit - @staticmethod - def test_runs_list_supports_nocache_parameter() -> None: - """Test that Runs.list() method signature supports nocache parameter.""" - from inspect import signature - - from aignostics.platform.resources.runs import Runs - - # Verify the method has nocache parameter - sig = signature(Runs.list) - assert "nocache" in sig.parameters - param = sig.parameters["nocache"] - assert param.default is False - assert param.annotation is bool - - -class TestApplicationsResourcesNocache: - """Test nocache parameter for Applications and Versions resources - simplified 
tests.""" - - @pytest.mark.unit - @staticmethod - def test_versions_list_supports_nocache_parameter() -> None: - """Test that Versions.list() method signature supports nocache parameter.""" - from inspect import signature - - from aignostics.platform.resources.applications import Versions - - # Verify the method has nocache parameter - sig = signature(Versions.list) - assert "nocache" in sig.parameters - param = sig.parameters["nocache"] - assert param.default is False - assert param.annotation is bool - - @pytest.mark.unit - @staticmethod - def test_applications_details_supports_nocache_parameter() -> None: - """Test that Applications.details() method signature supports nocache parameter.""" - from inspect import signature - - from aignostics.platform.resources.applications import Applications - - # Verify the method has nocache parameter - sig = signature(Applications.details) - assert "nocache" in sig.parameters - param = sig.parameters["nocache"] - assert param.default is False - assert param.annotation is bool - - -class TestNocacheEdgeCases: - """Test edge cases and special scenarios for nocache functionality.""" - - @pytest.mark.unit - @staticmethod - def test_nocache_with_expired_cache_entry() -> None: - """Test nocache behavior when cache entry has expired.""" - call_count = 0 - - @cached_operation(ttl=1, use_token=False) # 1 second TTL - def test_func() -> int: - nonlocal call_count - call_count += 1 - return call_count - - # First call - populates cache - result1 = test_func() # type: ignore[call-arg] - assert result1 == 1 - assert call_count == 1 - - # Wait for cache to expire - time.sleep(1.1) - - # Second call with nocache=True on expired entry - result2 = test_func(nocache=True) # type: ignore[call-arg] - assert result2 == 2 - assert call_count == 2 - - @pytest.mark.unit - @staticmethod - def test_nocache_clears_expired_entry_before_writing_new() -> None: - """Test that nocache properly handles expired entries.""" - call_count = 0 - - @cached_operation(ttl=1, use_token=False) - def test_func() -> int: - nonlocal call_count - call_count += 1 - return call_count - - # First call - result1 = test_func() # type: ignore[call-arg] - assert result1 == 1 - - # Wait for expiry - time.sleep(1.1) - - # Call with nocache=True - should write new value - result2 = test_func(nocache=True) # type: ignore[call-arg] - assert result2 == 2 - - # Subsequent call should use new cached value - result3 = test_func() # type: ignore[call-arg] - assert result3 == 2 - assert call_count == 2 - - @pytest.mark.unit - @staticmethod - def test_multiple_consecutive_nocache_calls() -> None: - """Test multiple consecutive calls with nocache=True.""" - call_count = 0 - - @cached_operation(ttl=60, use_token=False) - def test_func() -> int: - nonlocal call_count - call_count += 1 - return call_count - - # Multiple calls with nocache=True - result1 = test_func(nocache=True) # type: ignore[call-arg] - assert result1 == 1 - - result2 = test_func(nocache=True) # type: ignore[call-arg] - assert result2 == 2 - - result3 = test_func(nocache=True) # type: ignore[call-arg] - assert result3 == 3 - - assert call_count == 3 - - # Last call without nocache should use cached value from third call - result4 = test_func() # type: ignore[call-arg] - assert result4 == 3 - assert call_count == 3 - - @pytest.mark.unit - @staticmethod - def test_nocache_interleaved_with_normal_calls() -> None: - """Test interleaving nocache=True with normal cached calls.""" - call_count = 0 - - @cached_operation(ttl=60, use_token=False) - def 
test_func() -> int: - nonlocal call_count - call_count += 1 - return call_count - - # Normal call - populates cache - result1 = test_func() # type: ignore[call-arg] - assert result1 == 1 - assert call_count == 1 - - # Normal call - uses cache - result2 = test_func() # type: ignore[call-arg] - assert result2 == 1 - assert call_count == 1 - - # Nocache call - skips cache, updates it - result3 = test_func(nocache=True) # type: ignore[call-arg] - assert result3 == 2 - assert call_count == 2 - - # Normal call - uses updated cache - result4 = test_func() # type: ignore[call-arg] - assert result4 == 2 - assert call_count == 2 - - # Another nocache call - result5 = test_func(nocache=True) # type: ignore[call-arg] - assert result5 == 3 - assert call_count == 3 - - # Final normal call - uses latest cached value - result6 = test_func() # type: ignore[call-arg] - assert result6 == 3 - assert call_count == 3 - - -class TestNocacheWithClearCache: - """Test interaction between nocache and cache clearing.""" - - @pytest.mark.unit - @staticmethod - def test_nocache_after_cache_clear() -> None: - """Test that nocache works correctly after cache has been cleared.""" - call_count = 0 - - @cached_operation(ttl=60, use_token=False) - def test_func() -> int: - nonlocal call_count - call_count += 1 - return call_count - - # Populate cache - result1 = test_func() # type: ignore[call-arg] - assert result1 == 1 - - # Clear cache - operation_cache_clear() - assert len(_operation_cache) == 0 - - # Call with nocache=True - should work normally - result2 = test_func(nocache=True) # type: ignore[call-arg] - assert result2 == 2 - - # Verify cache was populated - assert len(_operation_cache) == 1 - - @pytest.mark.unit - @staticmethod - def test_cache_clear_removes_nocache_populated_entries() -> None: - """Test that cache clear removes entries populated with nocache=True.""" - call_count = 0 - - @cached_operation(ttl=60, use_token=False) - def test_func() -> int: - nonlocal call_count - call_count += 1 - return call_count - - # Populate cache with nocache=True - result1 = test_func(nocache=True) # type: ignore[call-arg] - assert result1 == 1 - assert len(_operation_cache) == 1 - - # Clear cache - operation_cache_clear() - assert len(_operation_cache) == 0 - - # Subsequent call should fetch fresh data - result2 = test_func() # type: ignore[call-arg] - assert result2 == 2 diff --git a/tests/aignostics/platform/resources/applications_test.py b/tests/aignostics/platform/resources/applications_test.py index 2cf1ab60..beb3857f 100644 --- a/tests/aignostics/platform/resources/applications_test.py +++ b/tests/aignostics/platform/resources/applications_test.py @@ -4,10 +4,11 @@ verifying their functionality for listing applications and application versions. """ -from unittest.mock import Mock +from unittest.mock import Mock, call import pytest from aignx.codegen.api.public_api import PublicApi +from aignx.codegen.models import ApplicationVersionReadResponse from aignx.codegen.models.application_read_response import ApplicationReadResponse from aignostics.platform.resources.applications import Applications, Versions @@ -39,7 +40,6 @@ def applications(mock_api) -> Applications: return Applications(mock_api) -@pytest.mark.unit def test_applications_list_with_pagination(applications, mock_api) -> None: """Test that Applications.list() correctly handles pagination. 
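The pagination tests in this and the following files all exercise the same helper: call the API function with increasing `page` numbers and stop once a page comes back shorter than `page_size`. A minimal generator sketch of that behavior follows; `PAGE_SIZE` matches the constant the tests import, but the real `aignostics.platform.resources.utils.paginate` may differ in detail.

```python
# Minimal sketch of the paginate helper the tests exercise; the real
# implementation in aignostics.platform.resources.utils is not shown here.
from typing import Any, Callable, Iterator

PAGE_SIZE = 50  # assumed value, for illustration only


def paginate(
    func: Callable[..., list[Any]], *args: Any, page_size: int = PAGE_SIZE, **kwargs: Any
) -> Iterator[Any]:
    page = 1
    while True:
        results = func(*args, page=page, page_size=page_size, **kwargs)
        yield from results
        if len(results) < page_size:
            return  # a short (or empty) page means we reached the end
        page += 1
```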
@@ -62,15 +62,44 @@ def test_applications_list_with_pagination(applications, mock_api) -> None:
     # Assert
     assert len(result) == PAGE_SIZE + 5
     assert mock_api.list_applications_v1_applications_get.call_count == 2
-    # Check that calls were made with pagination parameters (ignore timeout/headers)
-    calls = mock_api.list_applications_v1_applications_get.call_args_list
-    assert calls[0].kwargs["page"] == 1
-    assert calls[0].kwargs["page_size"] == PAGE_SIZE
-    assert calls[1].kwargs["page"] == 2
-    assert calls[1].kwargs["page_size"] == PAGE_SIZE
+    mock_api.list_applications_v1_applications_get.assert_has_calls([
+        call(page=1, page_size=PAGE_SIZE),
+        call(page=2, page_size=PAGE_SIZE),
+    ])
+
+
+def test_versions_list_with_pagination(mock_api) -> None:
+    """Test that Versions.list() correctly handles pagination.
+
+    This test verifies that the list method for application versions properly
+    aggregates results from multiple paginated API responses.
+
+    Args:
+        mock_api: Mock ExternalsApi instance.
+    """
+    # Arrange
+    versions = Versions(mock_api)
+    mock_app = Mock(spec=ApplicationReadResponse)
+    mock_app.application_id = "test-app-id"
+
+    # Create two pages of results
+    page1 = [Mock(spec=ApplicationVersionReadResponse) for _ in range(PAGE_SIZE)]
+    page2 = [Mock(spec=ApplicationVersionReadResponse) for _ in range(5)]  # Partial page
+
+    mock_api.list_versions_by_application_id_v1_applications_application_id_versions_get.side_effect = [page1, page2]
+
+    # Act
+    result = list(versions.list(application=mock_app))
+
+    # Assert
+    assert len(result) == PAGE_SIZE + 5
+    assert mock_api.list_versions_by_application_id_v1_applications_application_id_versions_get.call_count == 2
+    mock_api.list_versions_by_application_id_v1_applications_application_id_versions_get.assert_has_calls([
+        call(application_id=mock_app.application_id, page=1, page_size=PAGE_SIZE),
+        call(application_id=mock_app.application_id, page=2, page_size=PAGE_SIZE),
+    ])


-@pytest.mark.unit
 def test_applications_list_returns_empty_list_when_no_applications(applications, mock_api) -> None:
     """Test that Applications.list() returns an empty list when no applications are available.
@@ -88,13 +117,9 @@ def test_applications_list_returns_empty_list_when_no_applications(applications,
     # Assert
     assert len(result) == 0
-    mock_api.list_applications_v1_applications_get.assert_called_once()
-    call_kwargs = mock_api.list_applications_v1_applications_get.call_args.kwargs
-    assert call_kwargs["page"] == 1
-    assert call_kwargs["page_size"] == PAGE_SIZE
+    mock_api.list_applications_v1_applications_get.assert_called_once_with(page=1, page_size=PAGE_SIZE)


-@pytest.mark.unit
 def test_applications_list_returns_applications_when_available(applications, mock_api) -> None:
     """Test that Applications.list() returns a list of applications when available.
@@ -117,13 +142,9 @@ def test_applications_list_returns_applications_when_available(applications, moc
     assert len(result) == 2
     assert result[0] == mock_app1
     assert result[1] == mock_app2
-    mock_api.list_applications_v1_applications_get.assert_called_once()
-    call_kwargs = mock_api.list_applications_v1_applications_get.call_args.kwargs
-    assert call_kwargs["page"] == 1
-    assert call_kwargs["page_size"] == PAGE_SIZE
+    mock_api.list_applications_v1_applications_get.assert_called_once_with(page=1, page_size=PAGE_SIZE)


-@pytest.mark.unit
 def test_applications_list_passes_through_api_exception(applications, mock_api) -> None:
     """Test that Applications.list() passes through exceptions from the API.
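The rewritten assertions lean on `Mock.assert_has_calls`, which (with the default `any_order=False`) requires the expected calls to appear as a consecutive, ordered run within `mock_calls`, while tolerating extra calls before or after. A quick self-contained illustration:

```python
# assert_has_calls checks an ordered, consecutive run of calls; extra
# surrounding calls are allowed.
from unittest.mock import Mock, call

api = Mock()
api.list(page=1, page_size=50)
api.list(page=2, page_size=50)

api.list.assert_has_calls([
    call(page=1, page_size=50),
    call(page=2, page_size=50),
])
```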
@@ -140,13 +161,9 @@ def test_applications_list_passes_through_api_exception(applications, mock_api)
     # Act & Assert
     with pytest.raises(Exception, match=API_ERROR):
         list(applications.list())
-    mock_api.list_applications_v1_applications_get.assert_called_once()
-    call_kwargs = mock_api.list_applications_v1_applications_get.call_args.kwargs
-    assert call_kwargs["page"] == 1
-    assert call_kwargs["page_size"] == PAGE_SIZE
+    mock_api.list_applications_v1_applications_get.assert_called_once_with(page=1, page_size=PAGE_SIZE)


-@pytest.mark.unit
 def test_versions_property_returns_versions_instance(applications) -> None:
     """Test that the versions property returns a Versions instance.
@@ -162,3 +179,77 @@ def test_versions_property_returns_versions_instance(applications) -> None:
     # Assert
     assert isinstance(versions, Versions)
     assert versions._api == applications._api
+
+
+def test_versions_list_returns_versions_for_application(mock_api) -> None:
+    """Test that Versions.list() returns versions for a specified application.
+
+    This test verifies that the list method correctly returns version objects
+    for a given application from the API response.
+
+    Args:
+        mock_api: Mock ExternalsApi instance.
+    """
+    # Arrange
+    versions = Versions(mock_api)
+    mock_app = Mock(spec=ApplicationReadResponse)
+    mock_app.application_id = "test-app-id"
+    mock_version = Mock(spec=ApplicationVersionReadResponse)
+    mock_api.list_versions_by_application_id_v1_applications_application_id_versions_get.return_value = [mock_version]
+
+    # Act
+    result = list(versions.list(application=mock_app))
+
+    # Assert
+    assert len(result) == 1
+    assert result[0] == mock_version
+    mock_api.list_versions_by_application_id_v1_applications_application_id_versions_get.assert_called_once_with(
+        application_id=mock_app.application_id, page=1, page_size=PAGE_SIZE
+    )
+
+
+def test_versions_list_returns_empty_list_when_no_versions(mock_api) -> None:
+    """Test that Versions.list() returns an empty list when no versions are available.
+
+    This test verifies that the list method handles empty API responses correctly
+    when requesting application versions.
+
+    Args:
+        mock_api: Mock ExternalsApi instance.
+    """
+    # Arrange
+    versions = Versions(mock_api)
+    mock_app = Mock(spec=ApplicationReadResponse)
+    mock_app.application_id = "test-app-id"
+    mock_api.list_versions_by_application_id_v1_applications_application_id_versions_get.return_value = []
+
+    # Act
+    result = list(versions.list(application=mock_app))
+
+    # Assert
+    assert len(result) == 0
+    mock_api.list_versions_by_application_id_v1_applications_application_id_versions_get.assert_called_once_with(
+        application_id=mock_app.application_id, page=1, page_size=PAGE_SIZE
+    )
+
+
+def test_versions_list_passes_through_api_exception(mock_api) -> None:
+    """Test that Versions.list() passes through exceptions from the API.
+
+    This test verifies that exceptions raised by the API client when requesting
+    application versions are propagated to the caller without being caught or modified.
+
+    Args:
+        mock_api: Mock ExternalsApi instance.
+    """
+    # Arrange
+    versions = Versions(mock_api)
+    mock_app = Mock(spec=ApplicationReadResponse)
+    mock_app.application_id = "test-app-id"
+    mock_api.list_versions_by_application_id_v1_applications_application_id_versions_get.side_effect = Exception(
+        API_ERROR
+    )
+
+    # Act & Assert
+    with pytest.raises(Exception, match=API_ERROR):
+        list(versions.list(application=mock_app))
diff --git a/tests/aignostics/platform/resources/resource_utils_test.py b/tests/aignostics/platform/resources/resource_utils_test.py
index b1d1d265..532e884d 100644
--- a/tests/aignostics/platform/resources/resource_utils_test.py
+++ b/tests/aignostics/platform/resources/resource_utils_test.py
@@ -6,12 +6,9 @@

 from unittest.mock import Mock

-import pytest
-
 from aignostics.platform.resources.utils import PAGE_SIZE, paginate


-@pytest.mark.unit
 def test_paginate_stops_when_results_less_than_page_size() -> None:
     """Test that paginate stops yielding when a page has fewer items than the page size.
@@ -38,7 +35,6 @@ def test_paginate_stops_when_results_less_than_page_size() -> None:
     mock_func.assert_any_call(page=2, page_size=PAGE_SIZE)


-@pytest.mark.unit
 def test_paginate_handles_empty_first_page() -> None:
     """Test that paginate handles an empty first page correctly.
@@ -56,7 +52,6 @@ def test_paginate_handles_empty_first_page() -> None:
     mock_func.assert_called_once_with(page=1, page_size=PAGE_SIZE)


-@pytest.mark.unit
 def test_paginate_passes_additional_arguments() -> None:
     """Test that paginate correctly passes additional arguments to the function.
@@ -76,7 +71,6 @@ def test_paginate_passes_additional_arguments() -> None:
     mock_func.assert_called_once_with(additional_arg, keyword=additional_kwarg, page=1, page_size=PAGE_SIZE)


-@pytest.mark.unit
 def test_paginate_custom_page_size() -> None:
     """Test that paginate correctly uses custom page size.
@@ -94,7 +88,6 @@ def test_paginate_custom_page_size() -> None:
     mock_func.assert_called_once_with(page=1, page_size=custom_page_size)


-@pytest.mark.unit
 def test_paginate_multiple_pages() -> None:
     """Test that paginate correctly iterates through multiple pages.
diff --git a/tests/aignostics/platform/resources/runs_test.py b/tests/aignostics/platform/resources/runs_test.py
index 1bcf037d..56417617 100644
--- a/tests/aignostics/platform/resources/runs_test.py
+++ b/tests/aignostics/platform/resources/runs_test.py
@@ -1,10 +1,10 @@
 """Tests for the runs resource module.

-This module contains unit tests for the Runs class and Run class,
+This module contains unit tests for the Runs class and ApplicationRun class,
 verifying their functionality for listing, creating, and managing application runs.
 """

-from unittest.mock import Mock
+from unittest.mock import Mock, call

 import pytest
 from aignx.codegen.api.public_api import PublicApi
@@ -16,11 +16,7 @@
     RunReadResponse,
 )

-from aignostics.platform.resources.runs import (
-    LIST_APPLICATION_RUNS_MAX_PAGE_SIZE,
-    Run,
-    Runs,
-)
+from aignostics.platform.resources.runs import ApplicationRun, Runs
 from aignostics.platform.resources.utils import PAGE_SIZE
@@ -48,54 +44,46 @@ def runs(mock_api) -> Runs:


 @pytest.fixture
-def app_run(mock_api) -> Run:
-    """Create an Run instance with a mock API for testing.
+def app_run(mock_api) -> ApplicationRun:
+    """Create an ApplicationRun instance with a mock API for testing.

     Args:
         mock_api: A mock instance of ExternalsApi.

     Returns:
-        Run: An Run instance using the mock API.
+        ApplicationRun: An ApplicationRun instance using the mock API.
     """
-    return Run(mock_api, "test-run-id")
+    return ApplicationRun(mock_api, "test-run-id")
""" - return Run(mock_api, "test-run-id") + return ApplicationRun(mock_api, "test-run-id") -@pytest.mark.unit def test_runs_list_with_pagination(runs, mock_api) -> None: """Test that Runs.list() correctly handles pagination. This test verifies that the list method properly aggregates results from - multiple paginated API responses and converts them to Run instances. + multiple paginated API responses and converts them to ApplicationRun instances. Args: runs: Runs instance with mock API. mock_api: Mock ExternalsApi instance. """ # Arrange - # Since list() now uses LIST_APPLICATION_RUNS_MAX_PAGE_SIZE, adjust expectations - page1 = [Mock(spec=RunReadResponse, run_id=f"run-{i}") for i in range(LIST_APPLICATION_RUNS_MAX_PAGE_SIZE)] - page2 = [Mock(spec=RunReadResponse, run_id=f"run-{i + LIST_APPLICATION_RUNS_MAX_PAGE_SIZE}") for i in range(5)] - mock_api.list_runs_v1_runs_get.side_effect = [page1, page2] + page1 = [Mock(spec=RunReadResponse, application_run_id=f"run-{i}") for i in range(PAGE_SIZE)] + page2 = [Mock(spec=RunReadResponse, application_run_id=f"run-{i + PAGE_SIZE}") for i in range(5)] + mock_api.list_application_runs_v1_runs_get.side_effect = [page1, page2] # Act result = list(runs.list()) # Assert - assert len(result) == LIST_APPLICATION_RUNS_MAX_PAGE_SIZE + 5 - assert all(isinstance(run, Run) for run in result) - assert mock_api.list_runs_v1_runs_get.call_count == 2 - # Check that the calls were made with the expected parameters (ignoring _request_timeout and _headers) - assert mock_api.list_runs_v1_runs_get.call_args_list[0][1]["page"] == 1 - assert mock_api.list_runs_v1_runs_get.call_args_list[0][1]["page_size"] == LIST_APPLICATION_RUNS_MAX_PAGE_SIZE - assert mock_api.list_runs_v1_runs_get.call_args_list[0][1]["application_id"] is None - assert mock_api.list_runs_v1_runs_get.call_args_list[0][1]["application_version"] is None - assert mock_api.list_runs_v1_runs_get.call_args_list[1][1]["page"] == 2 - assert mock_api.list_runs_v1_runs_get.call_args_list[1][1]["page_size"] == LIST_APPLICATION_RUNS_MAX_PAGE_SIZE - assert mock_api.list_runs_v1_runs_get.call_args_list[1][1]["application_id"] is None - assert mock_api.list_runs_v1_runs_get.call_args_list[1][1]["application_version"] is None - - -@pytest.mark.unit + assert len(result) == PAGE_SIZE + 5 + assert all(isinstance(run, ApplicationRun) for run in result) + assert mock_api.list_application_runs_v1_runs_get.call_count == 2 + mock_api.list_application_runs_v1_runs_get.assert_has_calls([ + call(page=1, page_size=PAGE_SIZE), + call(page=2, page_size=PAGE_SIZE), + ]) + + def test_runs_list_with_application_version_filter(runs, mock_api) -> None: """Test that Runs.list() correctly filters by application version. @@ -107,59 +95,50 @@ def test_runs_list_with_application_version_filter(runs, mock_api) -> None: mock_api: Mock ExternalsApi instance. 
""" # Arrange - app_id = "test-app" - app_version = "version" - mock_api.list_runs_v1_runs_get.return_value = [] + app_version_id = "test-app-version" + mock_api.list_application_runs_v1_runs_get.return_value = [] # Act - list(runs.list(application_id=app_id, application_version=app_version)) + list(runs.list(for_application_version=app_version_id)) # Assert - mock_api.list_runs_v1_runs_get.assert_called_once() - call_kwargs = mock_api.list_runs_v1_runs_get.call_args[1] - assert call_kwargs["application_id"] == app_id - assert call_kwargs["application_version"] == app_version - assert call_kwargs["page"] == 1 - assert call_kwargs["page_size"] == LIST_APPLICATION_RUNS_MAX_PAGE_SIZE + mock_api.list_application_runs_v1_runs_get.assert_called_once_with( + application_version_id=app_version_id, page=1, page_size=PAGE_SIZE + ) -@pytest.mark.unit def test_application_run_results_with_pagination(app_run, mock_api) -> None: - """Test that Run.results() correctly handles pagination. + """Test that ApplicationRun.results() correctly handles pagination. This test verifies that the results method properly aggregates results from multiple paginated API responses when requesting run results. Args: - app_run: Run instance with mock API. + app_run: ApplicationRun instance with mock API. mock_api: Mock ExternalsApi instance. """ # Arrange page1 = [Mock(spec=ItemResultReadResponse) for _ in range(PAGE_SIZE)] page2 = [Mock(spec=ItemResultReadResponse) for _ in range(5)] - mock_api.list_run_items_v1_runs_run_id_items_get.side_effect = [page1, page2] + mock_api.list_run_results_v1_runs_application_run_id_results_get.side_effect = [page1, page2] # Act result = list(app_run.results()) # Assert assert len(result) == PAGE_SIZE + 5 - assert mock_api.list_run_items_v1_runs_run_id_items_get.call_count == 2 - # Check that the calls were made with the expected parameters (ignoring _request_timeout and _headers) - assert mock_api.list_run_items_v1_runs_run_id_items_get.call_args_list[0][1]["run_id"] == app_run.run_id - assert mock_api.list_run_items_v1_runs_run_id_items_get.call_args_list[0][1]["page"] == 1 - assert mock_api.list_run_items_v1_runs_run_id_items_get.call_args_list[0][1]["page_size"] == PAGE_SIZE - assert mock_api.list_run_items_v1_runs_run_id_items_get.call_args_list[1][1]["run_id"] == app_run.run_id - assert mock_api.list_run_items_v1_runs_run_id_items_get.call_args_list[1][1]["page"] == 2 - assert mock_api.list_run_items_v1_runs_run_id_items_get.call_args_list[1][1]["page_size"] == PAGE_SIZE + assert mock_api.list_run_results_v1_runs_application_run_id_results_get.call_count == 2 + mock_api.list_run_results_v1_runs_application_run_id_results_get.assert_has_calls([ + call(application_run_id=app_run.application_run_id, page=1, page_size=PAGE_SIZE), + call(application_run_id=app_run.application_run_id, page=2, page_size=PAGE_SIZE), + ]) -@pytest.mark.unit def test_runs_call_returns_application_run(runs) -> None: - """Test that Runs.__call__() returns an Run instance. + """Test that Runs.__call__() returns an ApplicationRun instance. This test verifies that calling the Runs instance as a function correctly - initializes and returns an Run instance with the specified run ID. + initializes and returns an ApplicationRun instance with the specified run ID. Args: runs: Runs instance with mock API. 
@@ -169,17 +148,16 @@ def test_runs_call_returns_application_run(runs) -> None:
     app_run = runs(run_id)
 
     # Assert
-    assert isinstance(app_run, Run)
-    assert app_run.run_id == run_id
+    assert isinstance(app_run, ApplicationRun)
+    assert app_run.application_run_id == run_id
     assert app_run._api == runs._api
 
 
-@pytest.mark.unit
-def test_runs_submit_returns_application_run(runs, mock_api) -> None:
-    """Test that Runs.submit() returns an Run instance.
+def test_runs_create_returns_application_run(runs, mock_api) -> None:
+    """Test that Runs.create() returns an ApplicationRun instance.
 
-    This test verifies that the submit method correctly calls the API client
-    to submit a new run and returns an Run instance for the submitted run.
+    This test verifies that the create method correctly calls the API client
+    to create a new run and returns an ApplicationRun instance for the created run.
 
     Args:
         runs: Runs instance with mock API.
@@ -189,32 +167,30 @@
     run_id = "new-run-id"
     mock_items = [
         ItemCreationRequest(
-            external_id="item-1",
+            reference="item-1",
             input_artifacts=[
                 InputArtifactCreationRequest(name="artifact-1", download_url="url", metadata={"key": "value"})
             ],
         )
     ]
-    mock_api.create_run_v1_runs_post.return_value = RunCreationResponse(run_id=run_id)
+    mock_api.create_application_run_v1_runs_post.return_value = RunCreationResponse(application_run_id=run_id)
 
     # Mock the validation method to prevent it from making actual API calls
     runs._validate_input_items = Mock()
 
     # Act
-    app_run = runs.submit(application_id="test", items=mock_items, application_version="1.0.0")
+    app_run = runs.create(application_version="mock", items=mock_items)
 
     # Assert
-    assert isinstance(app_run, Run)
-    assert app_run.run_id == run_id
-    mock_api.create_run_v1_runs_post.assert_called_once()
+    assert isinstance(app_run, ApplicationRun)
+    assert app_run.application_run_id == run_id
+    mock_api.create_application_run_v1_runs_post.assert_called_once()
     # Check that a RunCreationRequest was passed to the API call
-    call_args = mock_api.create_run_v1_runs_post.call_args[0][0]
-    assert call_args.application_id == "test"
+    call_args = mock_api.create_application_run_v1_runs_post.call_args[0][0]
+    assert call_args.application_version_id == "mock"
     assert call_args.items == mock_items
-    assert call_args.version_number == "1.0.0"
 
 
-@pytest.mark.unit
 def test_paginate_with_not_found_exception_on_first_page(runs, mock_api) -> None:
     """Test that paginate handles NotFoundException on the first page gracefully.
@@ -229,26 +205,20 @@ def test_paginate_with_not_found_exception_on_first_page(runs, mock_api) -> None
     from aignx.codegen.exceptions import NotFoundException
 
     # Make the API throw NotFoundException on the first call
-    mock_api.list_runs_v1_runs_get.side_effect = NotFoundException()
+    mock_api.list_application_runs_v1_runs_get.side_effect = NotFoundException()
 
     # Act
     result = list(runs.list())
 
     # Assert
     assert len(result) == 0
-    mock_api.list_runs_v1_runs_get.assert_called_once()
-    call_kwargs = mock_api.list_runs_v1_runs_get.call_args[1]
-    assert call_kwargs["page"] == 1
-    assert call_kwargs["page_size"] == LIST_APPLICATION_RUNS_MAX_PAGE_SIZE
-    assert call_kwargs["application_id"] is None
-    assert call_kwargs["application_version"] is None
+    mock_api.list_application_runs_v1_runs_get.assert_called_once_with(page=1, page_size=PAGE_SIZE)
 
 
-@pytest.mark.unit
 def test_paginate_with_not_found_exception_after_full_page(runs, mock_api) -> None:
     """Test that paginate handles NotFoundException after a full page.
 
-    This test verifies that when we get exactly LIST_APPLICATION_RUNS_MAX_PAGE_SIZE items on the first page
+    This test verifies that when we get exactly PAGE_SIZE items on the first page
     and then a NotFoundException on the second page, we correctly return just
     the first page's items.
 
@@ -259,319 +229,17 @@ def test_paginate_with_not_found_exception_after_full_page(runs, mock_api) -> No
     # Arrange
     from aignx.codegen.exceptions import NotFoundException
 
-    # Return exactly LIST_APPLICATION_RUNS_MAX_PAGE_SIZE items for first page, then throw NotFoundException
-    full_page = [Mock(spec=RunReadResponse, run_id=f"run-{i}") for i in range(LIST_APPLICATION_RUNS_MAX_PAGE_SIZE)]
-    mock_api.list_runs_v1_runs_get.side_effect = [full_page, NotFoundException()]
+    # Return exactly PAGE_SIZE items for first page, then throw NotFoundException
+    full_page = [Mock(spec=RunReadResponse, application_run_id=f"run-{i}") for i in range(PAGE_SIZE)]
+    mock_api.list_application_runs_v1_runs_get.side_effect = [full_page, NotFoundException()]
 
     # Act
     result = list(runs.list())
 
     # Assert
-    assert len(result) == LIST_APPLICATION_RUNS_MAX_PAGE_SIZE
-    assert mock_api.list_runs_v1_runs_get.call_count == 2
-    # Check that the calls were made with the expected parameters (ignoring _request_timeout and _headers)
-    assert mock_api.list_runs_v1_runs_get.call_args_list[0][1]["page"] == 1
-    assert mock_api.list_runs_v1_runs_get.call_args_list[0][1]["page_size"] == LIST_APPLICATION_RUNS_MAX_PAGE_SIZE
-    assert mock_api.list_runs_v1_runs_get.call_args_list[0][1]["application_id"] is None
-    assert mock_api.list_runs_v1_runs_get.call_args_list[0][1]["application_version"] is None
-    assert mock_api.list_runs_v1_runs_get.call_args_list[1][1]["page"] == 2
-    assert mock_api.list_runs_v1_runs_get.call_args_list[1][1]["page_size"] == LIST_APPLICATION_RUNS_MAX_PAGE_SIZE
-    assert mock_api.list_runs_v1_runs_get.call_args_list[1][1]["application_id"] is None
-    assert mock_api.list_runs_v1_runs_get.call_args_list[1][1]["application_version"] is None
-
-
-@pytest.mark.unit
-def test_runs_list_with_external_id_filter(runs, mock_api) -> None:
-    """Test that Runs.list() correctly filters by external ID.
-
-    This test verifies that the external_id filter parameter is
-    correctly passed to the API client.
-
-    Args:
-        runs: Runs instance with mock API.
-        mock_api: Mock ExternalsApi instance.
- """ - # Arrange - external_id = "test-external-id" - mock_api.list_runs_v1_runs_get.return_value = [] - - # Act - list(runs.list(external_id=external_id)) - - # Assert - mock_api.list_runs_v1_runs_get.assert_called_once() - call_kwargs = mock_api.list_runs_v1_runs_get.call_args[1] - assert call_kwargs["external_id"] == external_id - assert call_kwargs["page"] == 1 - assert call_kwargs["page_size"] == LIST_APPLICATION_RUNS_MAX_PAGE_SIZE - - -@pytest.mark.unit -def test_runs_list_with_custom_metadata_filter(runs, mock_api) -> None: - """Test that Runs.list() correctly filters by custom metadata. - - This test verifies that the custom_metadata filter parameter in JSONPath format - is correctly passed to the API client. - - Args: - runs: Runs instance with mock API. - mock_api: Mock ExternalsApi instance. - """ - # Arrange - custom_metadata = "$.experiment_id=='exp-123'" - mock_api.list_runs_v1_runs_get.return_value = [] - - # Act - list(runs.list(custom_metadata=custom_metadata)) - - # Assert - mock_api.list_runs_v1_runs_get.assert_called_once() - call_kwargs = mock_api.list_runs_v1_runs_get.call_args[1] - assert call_kwargs["custom_metadata"] == custom_metadata - assert call_kwargs["page"] == 1 - assert call_kwargs["page_size"] == LIST_APPLICATION_RUNS_MAX_PAGE_SIZE - - -@pytest.mark.unit -def test_runs_list_with_sort_ascending(runs, mock_api) -> None: - """Test that Runs.list() correctly applies ascending sort. - - This test verifies that the sort parameter for ascending order - is correctly passed to the API client as a list. - - Args: - runs: Runs instance with mock API. - mock_api: Mock ExternalsApi instance. - """ - # Arrange - sort_field = "created_at" - mock_api.list_runs_v1_runs_get.return_value = [] - - # Act - list(runs.list(sort=sort_field)) - - # Assert - mock_api.list_runs_v1_runs_get.assert_called_once() - call_kwargs = mock_api.list_runs_v1_runs_get.call_args[1] - assert call_kwargs["sort"] == [sort_field] - assert call_kwargs["page"] == 1 - assert call_kwargs["page_size"] == LIST_APPLICATION_RUNS_MAX_PAGE_SIZE - - -@pytest.mark.unit -def test_runs_list_with_descending_sort(runs, mock_api) -> None: - """Test that Runs.list() correctly applies descending sort. - - This test verifies that the sort parameter with '-' prefix for descending order - is correctly passed to the API client as a list. - - Args: - runs: Runs instance with mock API. - mock_api: Mock ExternalsApi instance. - """ - # Arrange - sort_field = "-created_at" - mock_api.list_runs_v1_runs_get.return_value = [] - - # Act - list(runs.list(sort=sort_field)) - - # Assert - mock_api.list_runs_v1_runs_get.assert_called_once() - call_kwargs = mock_api.list_runs_v1_runs_get.call_args[1] - assert call_kwargs["sort"] == [sort_field] - assert call_kwargs["page"] == 1 - assert call_kwargs["page_size"] == LIST_APPLICATION_RUNS_MAX_PAGE_SIZE - - -@pytest.mark.unit -def test_runs_list_with_custom_page_size(runs, mock_api) -> None: - """Test that Runs.list() correctly uses custom page size. - - This test verifies that a custom page_size parameter is correctly - passed to the paginate function and API client. - - Args: - runs: Runs instance with mock API. - mock_api: Mock ExternalsApi instance. 
- """ - # Arrange - custom_page_size = 50 - mock_api.list_runs_v1_runs_get.return_value = [] - - # Act - list(runs.list(page_size=custom_page_size)) - - # Assert - mock_api.list_runs_v1_runs_get.assert_called_once() - call_kwargs = mock_api.list_runs_v1_runs_get.call_args[1] - assert call_kwargs["page_size"] == custom_page_size - assert call_kwargs["page"] == 1 - - -@pytest.mark.unit -def test_runs_list_with_page_size_exceeding_max_raises_error(runs, mock_api) -> None: - """Test that Runs.list() raises ValueError when page_size exceeds maximum. - - This test verifies that attempting to use a page_size greater than the - maximum allowed value (100) raises a ValueError. - - Args: - runs: Runs instance with mock API. - mock_api: Mock ExternalsApi instance. - """ - # Arrange - invalid_page_size = 101 - - # Act & Assert - with pytest.raises(ValueError, match="page_size is must be less than or equal to 100"): - list(runs.list(page_size=invalid_page_size)) - - # Verify API was never called - mock_api.list_runs_v1_runs_get.assert_not_called() - - -@pytest.mark.unit -def test_runs_list_with_all_filters_combined(runs, mock_api) -> None: - """Test that Runs.list() correctly combines all filter parameters. - - This test verifies that all filter parameters (application_id, application_version, - external_id, custom_metadata, sort, page_size) work together correctly. - - Args: - runs: Runs instance with mock API. - mock_api: Mock ExternalsApi instance. - """ - # Arrange - app_id = "test-app" - app_version = "1.0.0" - external_id = "ext-123" - custom_metadata = "$.experiment=='test'" - sort_field = "-created_at" - page_size = 25 - mock_api.list_runs_v1_runs_get.return_value = [] - - # Act - list( - runs.list( - application_id=app_id, - application_version=app_version, - external_id=external_id, - custom_metadata=custom_metadata, - sort=sort_field, - page_size=page_size, - ) - ) - - # Assert - mock_api.list_runs_v1_runs_get.assert_called_once() - call_kwargs = mock_api.list_runs_v1_runs_get.call_args[1] - assert call_kwargs["application_id"] == app_id - assert call_kwargs["application_version"] == app_version - assert call_kwargs["external_id"] == external_id - assert call_kwargs["custom_metadata"] == custom_metadata - assert call_kwargs["sort"] == [sort_field] - assert call_kwargs["page_size"] == page_size - assert call_kwargs["page"] == 1 - - -@pytest.mark.unit -def test_runs_list_with_nocache_true(runs, mock_api) -> None: - """Test that Runs.list() respects the nocache parameter. - - This test verifies that the nocache parameter is correctly passed through - to list_data (nocache is handled by the caching decorator, not passed to API). - - Args: - runs: Runs instance with mock API. - mock_api: Mock ExternalsApi instance. - """ - # Arrange - mock_api.list_runs_v1_runs_get.return_value = [] - - # Act - list(runs.list(nocache=True)) - - # Assert - mock_api.list_runs_v1_runs_get.assert_called_once() - # nocache is handled by caching decorator, not passed to API - call_kwargs = mock_api.list_runs_v1_runs_get.call_args[1] - assert "nocache" not in call_kwargs - - -@pytest.mark.unit -def test_runs_list_with_none_sort_not_passed_as_list(runs, mock_api) -> None: - """Test that Runs.list() doesn't wrap None sort in a list. - - This test verifies that when sort is None, it's passed as None - rather than [None] to the API. - - Args: - runs: Runs instance with mock API. - mock_api: Mock ExternalsApi instance. 
- """ - # Arrange - mock_api.list_runs_v1_runs_get.return_value = [] - - # Act - list(runs.list(sort=None)) - - # Assert - mock_api.list_runs_v1_runs_get.assert_called_once() - call_kwargs = mock_api.list_runs_v1_runs_get.call_args[1] - assert call_kwargs["sort"] is None - - -@pytest.mark.unit -def test_runs_list_delegates_to_list_data(runs, mock_api) -> None: - """Test that Runs.list() correctly delegates to list_data() and wraps results. - - This test verifies that list() calls list_data() with all parameters - and converts RunData objects to Run objects. - - Args: - runs: Runs instance with mock API. - mock_api: Mock ExternalsApi instance. - """ - # Arrange - app_id = "test-app" - app_version = "1.0.0" - external_id = "ext-123" - custom_metadata = "$.test=='value'" - sort_field = "-created_at" - page_size = 25 - - # Create mock RunData responses - mock_responses = [Mock(spec=RunReadResponse, run_id=f"run-{i}") for i in range(3)] - mock_api.list_runs_v1_runs_get.return_value = mock_responses - - # Act - result = list( - runs.list( - application_id=app_id, - application_version=app_version, - external_id=external_id, - custom_metadata=custom_metadata, - sort=sort_field, - page_size=page_size, - nocache=True, - ) - ) - - # Assert - # Verify we got Run objects with correct run_ids - assert len(result) == 3 - assert all(isinstance(run, Run) for run in result) - assert [run.run_id for run in result] == ["run-0", "run-1", "run-2"] - - # Verify all parameters were passed to the API - mock_api.list_runs_v1_runs_get.assert_called_once() - call_kwargs = mock_api.list_runs_v1_runs_get.call_args[1] - assert call_kwargs["application_id"] == app_id - assert call_kwargs["application_version"] == app_version - assert call_kwargs["external_id"] == external_id - assert call_kwargs["custom_metadata"] == custom_metadata - assert call_kwargs["sort"] == [sort_field] - assert call_kwargs["page_size"] == page_size - # nocache is handled by caching decorator, not passed to API - assert "nocache" not in call_kwargs + assert len(result) == PAGE_SIZE + assert mock_api.list_application_runs_v1_runs_get.call_count == 2 + mock_api.list_application_runs_v1_runs_get.assert_has_calls([ + call(page=1, page_size=PAGE_SIZE), + call(page=2, page_size=PAGE_SIZE), + ]) diff --git a/tests/aignostics/platform/scheduled_test.py b/tests/aignostics/platform/scheduled_test.py new file mode 100644 index 00000000..4c404bcd --- /dev/null +++ b/tests/aignostics/platform/scheduled_test.py @@ -0,0 +1,324 @@ +"""Scheduled integration tests for the Aignostics client. + +This module contains integration tests that run real application workflows +against the Aignostics platform. These tests verify end-to-end functionality +including creating runs, downloading results, and validating outputs. 
+""" + +import tempfile +from pathlib import Path + +import pytest +from _pytest.fixtures import FixtureRequest +from aignx.codegen.models import ApplicationRunStatus, ItemStatus + +from aignostics import platform +from aignostics.platform.resources.runs import ApplicationRun +from tests.contants_test import ( + HETA_APPLICATION_TIMEOUT_SECONDS, + HETA_APPLICATION_VERSION_ID, + TEST_APPLICATION_TIMEOUT_SECONDS, + TEST_APPLICATION_VERSION_ID, +) + + +def _get_single_spot_payload_for_heta_v1_0_0() -> list[platform.InputItem]: + """Generates a payload using a single spot.""" + return [ + platform.InputItem( + reference="1", + input_artifacts=[ + platform.InputArtifact( + name="whole_slide_image", + download_url=platform.generate_signed_url( + "gs://platform-api-application-test-data/heta/slides/8fafc17d-a5cc-4e9d-a982-030b1486ca88.tiff", + HETA_APPLICATION_TIMEOUT_SECONDS, + ), + metadata={ + "checksum_base64_crc32c": "5onqtA==", + "resolution_mpp": 0.26268186053789266, + "width_px": 7447, + "height_px": 7196, + "media_type": "image/tiff", + "staining_method": "H&E", + "specimen": { + "tissue": "LUNG", + "disease": "LUNG_CANCER", + }, + }, + ) + ], + ), + ] + + +def _get_three_spots_payload_for_test_v0_0_1() -> list[platform.InputItem]: + """Generates a payload using three spots.""" + return [ + platform.InputItem( + reference="1", + input_artifacts=[ + platform.InputArtifact( + name="user_slide", + download_url=platform.generate_signed_url( + "gs://aignx-storage-service-dev/sample_data_formatted/9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff", + TEST_APPLICATION_TIMEOUT_SECONDS, + ), + metadata={ + "checksum_crc32c": "9l3NNQ==", + "base_mpp": 0.46499982, + "width": 3728, + "height": 3640, + }, + ) + ], + ), + platform.InputItem( + reference="2", + input_artifacts=[ + platform.InputArtifact( + name="user_slide", + download_url=platform.generate_signed_url( + "gs://aignx-storage-service-dev/sample_data_formatted/8c7b079e-8b8a-4036-bfde-5818352b503a.tiff", + TEST_APPLICATION_TIMEOUT_SECONDS, + ), + metadata={ + "checksum_crc32c": "w+ud3g==", + "base_mpp": 0.46499982, + "width": 3616, + "height": 3400, + }, + ) + ], + ), + platform.InputItem( + reference="3", + input_artifacts=[ + platform.InputArtifact( + name="user_slide", + download_url=platform.generate_signed_url( + "gs://aignx-storage-service-dev/sample_data_formatted/1f4f366f-a2c5-4407-9f5e-23400b22d50e.tiff", + TEST_APPLICATION_TIMEOUT_SECONDS, + ), + metadata={ + "checksum_crc32c": "Zmx0wA==", + "base_mpp": 0.46499982, + "width": 4016, + "height": 3952, + }, + ) + ], + ), + ] + + +# Test parameters without calling the payload functions at module level +TEST_PARAMETERS = [ + ( + TEST_APPLICATION_TIMEOUT_SECONDS, + TEST_APPLICATION_VERSION_ID, + "three_spots_test", + "checksum_crc32c", + ), + ( + HETA_APPLICATION_TIMEOUT_SECONDS, + HETA_APPLICATION_VERSION_ID, + "single_spot_heta", + "checksum_base64_crc32c", + ), +] + + +def single_spot_payload_for_heta_v1_0_0() -> list[platform.InputItem]: + """Generates a payload using a single spot.""" + return [ + platform.InputItem( + reference="1", + input_artifacts=[ + platform.InputArtifact( + name="whole_slide_image", + download_url=platform.generate_signed_url( + "gs://platform-api-application-test-data/heta/slides/8fafc17d-a5cc-4e9d-a982-030b1486ca88.tiff", + HETA_APPLICATION_TIMEOUT_SECONDS, + ), + metadata={ + "checksum_base64_crc32c": "5onqtA==", + "resolution_mpp": 0.26268186053789266, + "width_px": 7447, + "height_px": 7196, + "media_type": "image/tiff", + "staining_method": "H&E", + 
"specimen": { + "tissue": "LUNG", + "disease": "LUNG_CANCER", + }, + }, + ) + ], + ), + ] + + +def three_spots_payload_for_test_v0_0_1() -> list[platform.InputItem]: + """Generates a payload using three spots.""" + return [ + platform.InputItem( + reference="1", + input_artifacts=[ + platform.InputArtifact( + name="user_slide", + download_url=platform.generate_signed_url( + "gs://aignx-storage-service-dev/sample_data_formatted/9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff", + TEST_APPLICATION_TIMEOUT_SECONDS, + ), + metadata={ + "checksum_crc32c": "9l3NNQ==", + "base_mpp": 0.46499982, + "width": 3728, + "height": 3640, + }, + ) + ], + ), + platform.InputItem( + reference="2", + input_artifacts=[ + platform.InputArtifact( + name="user_slide", + download_url=platform.generate_signed_url( + "gs://aignx-storage-service-dev/sample_data_formatted/8c7b079e-8b8a-4036-bfde-5818352b503a.tiff", + TEST_APPLICATION_TIMEOUT_SECONDS, + ), + metadata={ + "checksum_crc32c": "w+ud3g==", + "base_mpp": 0.46499982, + "width": 3616, + "height": 3400, + }, + ) + ], + ), + platform.InputItem( + reference="3", + input_artifacts=[ + platform.InputArtifact( + name="user_slide", + download_url=platform.generate_signed_url( + "gs://aignx-storage-service-dev/sample_data_formatted/1f4f366f-a2c5-4407-9f5e-23400b22d50e.tiff", + TEST_APPLICATION_TIMEOUT_SECONDS, + ), + metadata={ + "checksum_crc32c": "Zmx0wA==", + "base_mpp": 0.46499982, + "width": 4016, + "height": 3952, + }, + ) + ], + ), + ] + + +@pytest.mark.scheduled +@pytest.mark.long_running +@pytest.mark.parametrize( + ("timeout", "application_version_id", "payload_type", "checksum_attribute_key"), + TEST_PARAMETERS, +) +def test_application_runs( + timeout: int, + application_version_id: str, + payload_type: str, + checksum_attribute_key: str, + request: FixtureRequest, +) -> None: + """Test application runs. + + This test creates an application run using a predefined application version and input samples. + It then downloads the results to a temporary directory and performs various checks to ensure + the application run completed successfully and the results are valid. + + Args: + timeout (int): Timeout for the test in seconds. + application_version_id (str): The application version ID to use for the test. + payload_type (str): The type of payload to generate ('three_spots_test' or 'single_spot_heta'). + checksum_attribute_key (str): The key used to validate the checksum of the output artifacts. + request (FixtureRequest): The pytest request object. + + Raises: + AssertionError: If any of the validation checks fail. + """ + request.node.add_marker(pytest.mark.timeout(timeout)) + + # Generate payload lazily during test execution + if payload_type == "three_spots_test": + payload = _get_three_spots_payload_for_test_v0_0_1() + elif payload_type == "single_spot_heta": + payload = _get_single_spot_payload_for_heta_v1_0_0() + else: + pytest.fail(f"Unknown payload type: {payload_type}") + + client = platform.Client(cache_token=False) + application_run = client.runs.create(application_version_id, items=payload) + + with tempfile.TemporaryDirectory() as temp_dir: + application_run.download_to_folder(temp_dir, checksum_attribute_key) + # validate the output + _validate_output(application_run, Path(temp_dir), checksum_attribute_key) + + +def _validate_output( + application_run: ApplicationRun, + output_base_folder: Path, + checksum_attribute_key: str = "checksum_base64_crc32c", +) -> None: + """Validate the output of an application run. 
+
+    This function checks if the application run has completed successfully and verifies the output artifact checksums.
+
+    Args:
+        application_run (ApplicationRun): The application run to validate.
+        output_base_folder (Path): The base folder where the output is stored.
+        checksum_attribute_key (str): The key used to validate the checksum of the output artifacts.
+    """
+    run_details = application_run.details()
+    assert run_details.status == ApplicationRunStatus.COMPLETED, (
+        f"Application run {application_run.application_run_id}: "
+        f"Did not finish in status COMPLETED but '{run_details.status}'."
+    )
+
+    run_result_folder = output_base_folder / application_run.application_run_id
+    assert run_result_folder.exists(), (
+        f"Application run {application_run.application_run_id}: result folder does not exist"
+    )
+
+    run_results = application_run.results()
+
+    for item in run_results:
+        # validate status
+        assert item.status == ItemStatus.SUCCEEDED, (
+            f"Application run {application_run.application_run_id}: "
+            f"item {item.reference} status is {item.status}, expected SUCCEEDED"
+        )
+        # validate results
+        item_dir = run_result_folder / item.reference
+        assert item_dir.exists(), (
+            f"Application run {application_run.application_run_id}: "
+            f"result folder for item {item.reference} does not exist"
+        )
+        for artifact in item.output_artifacts:
+            assert artifact.download_url is not None, (
+                f"Application run {application_run.application_run_id}: "
+                f"artifact {artifact} should provide a download url"
+            )
+            file_ending = platform.mime_type_to_file_ending(platform.get_mime_type_for_artifact(artifact))
+            file_path = item_dir / f"{artifact.name}{file_ending}"
+            assert file_path.exists(), (
+                f"Application run {application_run.application_run_id}: artifact {artifact} was not downloaded"
+            )
+            checksum = artifact.metadata[checksum_attribute_key]
+            file_checksum = platform.calculate_file_crc32c(file_path)
+            assert file_checksum == checksum, (
+                f"Application run {application_run.application_run_id}: "
+                f"metadata checksum != file checksum {checksum} <> {file_checksum}"
+            )
diff --git a/tests/aignostics/platform/sdk_metadata_test.py b/tests/aignostics/platform/sdk_metadata_test.py
deleted file mode 100644
index b01038ec..00000000
--- a/tests/aignostics/platform/sdk_metadata_test.py
+++ /dev/null
@@ -1,894 +0,0 @@
-"""Unit tests for SDK metadata generation."""
-
-import sys
-from datetime import datetime
-from unittest.mock import MagicMock, patch
-
-import pytest
-from pydantic import ValidationError
-
-from aignostics.platform._sdk_metadata import (
-    ITEM_SDK_METADATA_SCHEMA_VERSION,
-    SDK_METADATA_SCHEMA_VERSION,
-    build_item_sdk_metadata,
-    build_run_sdk_metadata,
-    get_item_sdk_metadata_json_schema,
-    get_run_sdk_metadata_json_schema,
-    validate_item_sdk_metadata,
-    validate_item_sdk_metadata_silent,
-    validate_run_sdk_metadata,
-    validate_run_sdk_metadata_silent,
-)
-
-
-@pytest.fixture
-def clean_env(monkeypatch: pytest.MonkeyPatch) -> None:
-    """Clean environment variables."""
-    env_vars_to_clear = [
-        "AIGNOSTICS_SUBMISSION_INITIATOR_BRIDGE",
-        "GITHUB_ACTIONS",
-        "GITHUB_SERVER_URL",
-        "GITHUB_REPOSITORY",
-        "GITHUB_SHA",
-        "GITHUB_WORKFLOW",
-        "GITHUB_WORKFLOW_REF",
-        "PYTEST_CURRENT_TEST",
-        "PYTEST_MARKERS",
-    ]
-    for var in env_vars_to_clear:
-        monkeypatch.delenv(var, raising=False)
-
-
-class TestBuildRunSdkMetadata:
-    """Test cases for build_run_sdk_metadata function."""
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_basic_metadata_structure(clean_env: None) -> None:
-        """Test that basic metadata structure is created correctly."""
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-
-            assert "schema_version" in metadata
-            assert metadata["schema_version"] == SDK_METADATA_SCHEMA_VERSION
-            assert "submission" in metadata
-            assert "user_agent" in metadata
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_submission_metadata_default(clean_env: None) -> None:
-        """Test default submission metadata when no special environment is detected."""
-        with (
-            patch("aignostics.platform._client.Client") as mock_client,
-            patch("os.environ.get", return_value=None),
-        ):
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-
-            assert metadata["submission"]["initiator"] == "user"
-            assert metadata["submission"]["interface"] == "script"
-            assert "date" in metadata["submission"]
-            # Verify date is in ISO format
-            datetime.fromisoformat(metadata["submission"]["date"])
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_submission_initiator_bridge(clean_env: None, monkeypatch: pytest.MonkeyPatch) -> None:
-        """Test that bridge initiator is detected when AIGNOSTICS_BRIDGE_VERSION is set."""
-        monkeypatch.setenv("AIGNOSTICS_BRIDGE_VERSION", "1.0.0")
-
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-
-            assert metadata["submission"]["initiator"] == "bridge"
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_submission_initiator_test(clean_env: None, monkeypatch: pytest.MonkeyPatch) -> None:
-        """Test that test initiator is detected when PYTEST_CURRENT_TEST is set."""
-        monkeypatch.setenv("PYTEST_CURRENT_TEST", "tests/test_example.py::test_func")
-
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-
-            assert metadata["submission"]["initiator"] == "test"
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_submission_initiator_bridge_takes_precedence(clean_env: None, monkeypatch: pytest.MonkeyPatch) -> None:
-        """Test that bridge initiator takes precedence over test initiator."""
-        monkeypatch.setenv("AIGNOSTICS_BRIDGE_VERSION", "1.0.0")
-        monkeypatch.setenv("PYTEST_CURRENT_TEST", "tests/test_example.py::test_func")
-
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-
-            assert metadata["submission"]["initiator"] == "bridge"
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_submission_interface_cli_typer(clean_env: None) -> None:
-        """Test that CLI interface is detected when running via typer."""
-        original_argv = sys.argv
-        try:
-            sys.argv = ["/path/to/typer", "command"]
-
-            with patch("aignostics.platform._client.Client") as mock_client:
-                mock_client.return_value.me.side_effect = Exception("No client available")
-
-                metadata = build_run_sdk_metadata()
-
-                assert metadata["submission"]["interface"] == "cli"
-        finally:
-            sys.argv = original_argv
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_submission_interface_cli_aignostics(clean_env: None) -> None:
-        """Test that CLI interface is detected when running via aignostics command."""
-        original_argv = sys.argv
-        try:
-            sys.argv = ["/path/to/aignostics", "command"]
-
-            with patch("aignostics.platform._client.Client") as mock_client:
-                mock_client.return_value.me.side_effect = Exception("No client available")
-
-                metadata = build_run_sdk_metadata()
-
-                assert metadata["submission"]["interface"] == "cli"
-        finally:
-            sys.argv = original_argv
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_submission_interface_launchpad(clean_env: None, monkeypatch: pytest.MonkeyPatch) -> None:
-        """Test that launchpad interface is detected when NICEGUI_HOST is set."""
-        monkeypatch.setenv("NICEGUI_HOST", "localhost")
-
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-
-            assert metadata["submission"]["interface"] == "launchpad"
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_user_metadata_success(clean_env: None) -> None:
-        """Test that user metadata is included when Client().me() succeeds."""
-        mock_me = MagicMock()
-        mock_me.organization.id = "org-123"
-        mock_me.organization.name = "Test Org"
-        mock_me.user.email = "test@example.com"
-        mock_me.user.id = "user-456"
-
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.return_value = mock_me
-
-            metadata = build_run_sdk_metadata()
-
-            assert "user" in metadata
-            assert metadata["user"]["organization_id"] == "org-123"
-            assert metadata["user"]["organization_name"] == "Test Org"
-            assert metadata["user"]["user_email"] == "test@example.com"
-            assert metadata["user"]["user_id"] == "user-456"
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_user_metadata_failure(clean_env: None) -> None:
-        """Test that user metadata is omitted when Client().me() fails."""
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("Auth failed")
-
-            metadata = build_run_sdk_metadata()
-
-            assert "user" not in metadata
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_github_ci_metadata(clean_env: None, monkeypatch: pytest.MonkeyPatch) -> None:
-        """Test that GitHub CI metadata is collected correctly."""
-        monkeypatch.setenv("GITHUB_RUN_ID", "12345")
-        monkeypatch.setenv("GITHUB_SERVER_URL", "https://github.com")
-        monkeypatch.setenv("GITHUB_REPOSITORY", "owner/repo")
-        monkeypatch.setenv("GITHUB_ACTION", "test-action")
-        monkeypatch.setenv("GITHUB_JOB", "test-job")
-        monkeypatch.setenv("GITHUB_REF", "refs/heads/main")
-        monkeypatch.setenv("GITHUB_REF_NAME", "main")
-        monkeypatch.setenv("GITHUB_REF_TYPE", "branch")
-        monkeypatch.setenv("GITHUB_RUN_ATTEMPT", "1")
-        monkeypatch.setenv("GITHUB_RUN_NUMBER", "42")
-        monkeypatch.setenv("RUNNER_ARCH", "X64")
-        monkeypatch.setenv("RUNNER_OS", "Linux")
-        monkeypatch.setenv("GITHUB_SHA", "abc123")
-        monkeypatch.setenv("GITHUB_WORKFLOW", "CI")
-        monkeypatch.setenv("GITHUB_WORKFLOW_REF", "owner/repo/.github/workflows/ci.yml@main")
-
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-
-            assert "ci" in metadata
-            assert "github" in metadata["ci"]
-
-            github = metadata["ci"]["github"]
-            assert github["action"] == "test-action"
-            assert github["job"] == "test-job"
-            assert github["ref"] == "refs/heads/main"
-            assert github["ref_name"] == "main"
-            assert github["ref_type"] == "branch"
-            assert github["repository"] == "owner/repo"
-            assert github["run_attempt"] == "1"
-            assert github["run_id"] == "12345"
-            assert github["run_number"] == "42"
-            assert github["run_url"] == "https://github.com/owner/repo/actions/runs/12345"
-            assert github["runner_arch"] == "X64"
-            assert github["runner_os"] == "Linux"
-            assert github["sha"] == "abc123"
-            assert github["workflow"] == "CI"
-            assert github["workflow_ref"] == "owner/repo/.github/workflows/ci.yml@main"
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_github_ci_metadata_custom_server(clean_env: None, monkeypatch: pytest.MonkeyPatch) -> None:
-        """Test GitHub CI metadata with custom server URL."""
-        monkeypatch.setenv("GITHUB_RUN_ID", "12345")
-        monkeypatch.setenv("GITHUB_SERVER_URL", "https://github.enterprise.com")
-        monkeypatch.setenv("GITHUB_REPOSITORY", "owner/repo")
-
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-
-            assert metadata["ci"]["github"]["run_url"] == (
-                "https://github.enterprise.com/owner/repo/actions/runs/12345"
-            )
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_pytest_metadata_basic(clean_env: None, monkeypatch: pytest.MonkeyPatch) -> None:
-        """Test that pytest metadata is collected correctly."""
-        monkeypatch.setenv("PYTEST_CURRENT_TEST", "tests/test_example.py::test_func (call)")
-
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-
-            assert "ci" in metadata
-            assert "pytest" in metadata["ci"]
-            assert metadata["ci"]["pytest"]["current_test"] == "tests/test_example.py::test_func (call)"
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_pytest_metadata_with_markers(clean_env: None, monkeypatch: pytest.MonkeyPatch) -> None:
-        """Test that pytest markers are parsed correctly."""
-        monkeypatch.setenv("PYTEST_CURRENT_TEST", "tests/test_example.py::test_func (call)")
-        monkeypatch.setenv("PYTEST_MARKERS", "slow,integration,unit")
-
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-
-            assert "markers" in metadata["ci"]["pytest"]
-            assert metadata["ci"]["pytest"]["markers"] == ["slow", "integration", "unit"]
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_combined_github_and_pytest_metadata(clean_env: None, monkeypatch: pytest.MonkeyPatch) -> None:
-        """Test that both GitHub and pytest metadata can coexist."""
-        monkeypatch.setenv("GITHUB_RUN_ID", "12345")
-        monkeypatch.setenv("GITHUB_REPOSITORY", "owner/repo")
-        monkeypatch.setenv("PYTEST_CURRENT_TEST", "tests/test_example.py::test_func (call)")
-
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-
-            assert "ci" in metadata
-            assert "github" in metadata["ci"]
-            assert "pytest" in metadata["ci"]
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_no_ci_metadata_when_not_in_ci(clean_env: None) -> None:
-        """Test that ci field is omitted when not in CI environment."""
-        with (
-            patch("aignostics.platform._client.Client") as mock_client,
-            patch("os.environ.get", return_value=None),
-        ):
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-
-            assert "ci" not in metadata
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_user_agent_included(clean_env: None) -> None:
-        """Test that user_agent is included in metadata."""
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-            with patch("aignostics.platform._sdk_metadata.user_agent", return_value="test-agent/1.0"):
-                metadata = build_run_sdk_metadata()
-
-        assert metadata["user_agent"] == "test-agent/1.0"
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_metadata_date_format(clean_env: None) -> None:
-        """Test that submission date is in correct ISO format with seconds precision."""
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-
-            date_str = metadata["submission"]["date"]
-            # Should be able to parse as datetime
-            parsed_date = datetime.fromisoformat(date_str)
-            # Should have timezone info
-            assert parsed_date.tzinfo is not None
-            # Should be in ISO format with seconds precision (no microseconds)
-            assert "." not in date_str or date_str.count(".") == 0 or len(date_str.split(".")[-1]) <= 3
-
-
-class TestRunSdkMetadataValidation:
-    """Test cases for Run SDK metadata validation."""
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_basic_metadata(clean_env: None) -> None:
-        """Test validation of basic metadata structure."""
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-            assert validate_run_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_metadata_with_user(clean_env: None) -> None:
-        """Test validation of metadata with user information."""
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_user = MagicMock()
-            mock_user.organization.id = "org-123"
-            mock_user.organization.name = "Test Org"
-            mock_user.user.email = "test@example.com"
-            mock_user.user.id = "user-456"
-            mock_client.return_value.me.return_value = mock_user
-
-            metadata = build_run_sdk_metadata()
-            assert validate_run_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_metadata_with_github_ci(clean_env: None, monkeypatch: pytest.MonkeyPatch) -> None:
-        """Test validation of metadata with GitHub CI information."""
-        monkeypatch.setenv("GITHUB_RUN_ID", "123456")
-        monkeypatch.setenv("GITHUB_REPOSITORY", "owner/repo")
-
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-            assert validate_run_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_metadata_with_pytest_ci(clean_env: None, monkeypatch: pytest.MonkeyPatch) -> None:
-        """Test validation of metadata with pytest information."""
-        monkeypatch.setenv("PYTEST_CURRENT_TEST", "tests/test_example.py::test_func")
-        monkeypatch.setenv("PYTEST_MARKERS", "unit,integration")
-
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-            assert validate_run_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_metadata_with_workflow(clean_env: None) -> None:
-        """Test validation of metadata with workflow fields."""
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-            metadata["note"] = "Test run note"
-            metadata["workflow"] = {
-                "onboard_to_aignostics_portal": True,
-                "validate_only": False,
-            }
-            metadata["scheduling"] = {
-                "due_date": "2025-12-31T23:59:59+00:00",
-                "deadline": "2026-01-01T00:00:00+00:00",
-            }
-
-            assert validate_run_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_invalid_schema_version() -> None:
-        """Test that invalid schema version fails validation."""
-        metadata = {
-            "schema_version": "invalid-version",
-            "submission": {
-                "date": "2025-10-19T12:00:00+00:00",
-                "interface": "script",
-                "initiator": "user",
-            },
-            "user_agent": "test-agent/1.0",
-        }
-
-        with pytest.raises(ValidationError):
-            validate_run_sdk_metadata(metadata)
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_invalid_submission_interface() -> None:
-        """Test that invalid submission interface fails validation."""
-        metadata = {
-            "schema_version": SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "submission": {
-                "date": "2025-10-19T12:00:00+00:00",
-                "interface": "invalid",
-                "initiator": "user",
-            },
-            "user_agent": "test-agent/1.0",
-        }
-
-        with pytest.raises(ValidationError):
-            validate_run_sdk_metadata(metadata)
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_invalid_submission_initiator() -> None:
-        """Test that invalid submission initiator fails validation."""
-        metadata = {
-            "schema_version": SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "submission": {
-                "date": "2025-10-19T12:00:00+00:00",
-                "interface": "script",
-                "initiator": "invalid",
-            },
-            "user_agent": "test-agent/1.0",
-        }
-
-        with pytest.raises(ValidationError):
-            validate_run_sdk_metadata(metadata)
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_missing_required_fields() -> None:
-        """Test that missing required fields fail validation."""
-        metadata = {
-            "schema_version": SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "submission": {
-                "date": "2025-10-19T12:00:00+00:00",
-                "interface": "script",
-            },
-            # Missing initiator
-            "user_agent": "test-agent/1.0",
-        }
-
-        with pytest.raises(ValidationError):
-            validate_run_sdk_metadata(metadata)
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_extra_fields_rejected() -> None:
-        """Test that extra unknown fields are rejected."""
-        metadata = {
-            "schema_version": SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "submission": {
-                "date": "2025-10-19T12:00:00+00:00",
-                "interface": "script",
-                "initiator": "user",
-            },
-            "user_agent": "test-agent/1.0",
-            "unknown_field": "should fail",
-        }
-
-        with pytest.raises(ValidationError):
-            validate_run_sdk_metadata(metadata)
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_with_tags_set() -> None:
-        """Test validation with tags as a set of strings."""
-        metadata = {
-            "schema_version": SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "submission": {
-                "date": "2025-10-19T12:00:00+00:00",
-                "interface": "script",
-                "initiator": "user",
-            },
-            "user_agent": "test-agent/1.0",
-            "tags": {"experiment", "production", "v2"},
-        }
-
-        assert validate_run_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_with_empty_tags_set() -> None:
-        """Test validation with empty tags set."""
-        metadata = {
-            "schema_version": SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "submission": {
-                "date": "2025-10-19T12:00:00+00:00",
-                "interface": "script",
-                "initiator": "user",
-            },
-            "user_agent": "test-agent/1.0",
-            "tags": set(),
-        }
-
-        assert validate_run_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_with_tags_none() -> None:
-        """Test validation with tags as None."""
-        metadata = {
-            "schema_version": SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "submission": {
-                "date": "2025-10-19T12:00:00+00:00",
-                "interface": "script",
-                "initiator": "user",
-            },
-            "user_agent": "test-agent/1.0",
-            "tags": None,
-        }
-
-        assert validate_run_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_without_tags_field() -> None:
-        """Test validation when tags field is omitted entirely."""
-        metadata = {
-            "schema_version": SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "submission": {
-                "date": "2025-10-19T12:00:00+00:00",
-                "interface": "script",
-                "initiator": "user",
-            },
-            "user_agent": "test-agent/1.0",
-        }
-
-        assert validate_run_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_with_tags_list_converted_to_set() -> None:
-        """Test that list is automatically converted to set by Pydantic."""
-        metadata = {
-            "schema_version": SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "submission": {
-                "date": "2025-10-19T12:00:00+00:00",
-                "interface": "script",
-                "initiator": "user",
-            },
-            "user_agent": "test-agent/1.0",
-            "tags": ["tag1", "tag2"],  # List gets converted to set
-        }
-
-        # Validation should succeed as Pydantic converts list to set
-        assert validate_run_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_with_tags_invalid_type_dict() -> None:
-        """Test validation fails when tags is a dict instead of set."""
-        metadata = {
-            "schema_version": SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "submission": {
-                "date": "2025-10-19T12:00:00+00:00",
-                "interface": "script",
-                "initiator": "user",
-            },
-            "user_agent": "test-agent/1.0",
-            "tags": {"key": "value"},  # Dict instead of set
-        }
-
-        with pytest.raises(ValidationError):
-            validate_run_sdk_metadata(metadata)
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_with_tags_non_string_values() -> None:
-        """Test validation fails when tags contains non-string values."""
-        metadata = {
-            "schema_version": SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "submission": {
-                "date": "2025-10-19T12:00:00+00:00",
-                "interface": "script",
-                "initiator": "user",
-            },
-            "user_agent": "test-agent/1.0",
-            "tags": {"valid", 123, None},  # Mixed types
-        }
-
-        with pytest.raises(ValidationError):
-            validate_run_sdk_metadata(metadata)
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_sdk_metadata_silent_valid(clean_env: None) -> None:
-        """Test silent validation with valid metadata."""
-        with patch("aignostics.platform._client.Client") as mock_client:
-            mock_client.return_value.me.side_effect = Exception("No client available")
-
-            metadata = build_run_sdk_metadata()
-            assert validate_run_sdk_metadata_silent(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_sdk_metadata_silent_invalid() -> None:
-        """Test silent validation with invalid metadata."""
-        metadata = {
-            "schema_version": "invalid",
-            "submission": {
-                "date": "2025-10-19T12:00:00+00:00",
-                "interface": "invalid",
-                "initiator": "user",
-            },
-            "user_agent": "test-agent/1.0",
-        }
-
-        assert validate_run_sdk_metadata_silent(metadata) is False
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_get_json_schema() -> None:
-        """Test that JSON schema can be exported."""
-        schema = get_run_sdk_metadata_json_schema()
-
-        assert isinstance(schema, dict)
-        assert "$schema" in schema
-        assert schema["$schema"] == "https://json-schema.org/draft/2020-12/schema"
-        assert "$id" in schema
-        assert (
-            schema["$id"]
-            == f"https://raw.githubusercontent.com/aignostics/python-sdk/main/docs/source/_static/sdk_metadata_schema_v{SDK_METADATA_SCHEMA_VERSION}.json"
-        )
-        assert "properties" in schema
-        assert "schema_version" in schema["properties"]
-        assert "submission" in schema["properties"]
-        assert "user_agent" in schema["properties"]
-        assert "required" in schema
-        assert "schema_version" in schema["required"]
-        assert "submission" in schema["required"]
-        assert "user_agent" in schema["required"]
-
-
-class TestItemSdkMetadata:
-    """Test cases for Item SDK metadata."""
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_build_item_metadata_basic() -> None:
-        """Test that basic item metadata structure is created correctly."""
-        metadata = build_item_sdk_metadata()
-
-        assert metadata["schema_version"] == ITEM_SDK_METADATA_SCHEMA_VERSION
-        assert "platform_bucket" not in metadata
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_item_metadata_basic() -> None:
-        """Test validation of the default item metadata."""
-        metadata = build_item_sdk_metadata()
-
-        assert validate_item_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_item_metadata_with_platform_bucket() -> None:
-        """Test validation succeeds with platform bucket metadata present."""
-        metadata = {
-            "schema_version": ITEM_SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "platform_bucket": {
-                "bucket_name": "sdk-bucket",
-                "object_key": "runs/123/items/456",
-                "signed_download_url": "https://example.com/run-item",
-            },
-        }
-
-        assert validate_item_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_item_metadata_missing_platform_bucket_fields() -> None:
-        """Test validation fails when required platform bucket fields are missing."""
-        metadata = {
-            "schema_version": ITEM_SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "platform_bucket": {
-                "bucket_name": "sdk-bucket",
-                "object_key": "runs/123/items/456",
-            },
-        }
-
-        with pytest.raises(ValidationError):
-            validate_item_sdk_metadata(metadata)
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_item_metadata_invalid_schema_version() -> None:
-        """Test that an invalid schema version fails validation."""
-        metadata = {
-            "schema_version": "invalid",
-        }
-
-        with pytest.raises(ValidationError):
-            validate_item_sdk_metadata(metadata)
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_item_metadata_extra_fields() -> None:
-        """Test that extra fields are rejected for item metadata."""
-        metadata = {
-            "schema_version": ITEM_SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "unexpected": "value",
-        }
-
-        with pytest.raises(ValidationError):
-            validate_item_sdk_metadata(metadata)
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_item_metadata_silent_valid() -> None:
-        """Test silent validation passes for valid item metadata."""
-        metadata = build_item_sdk_metadata()
-
-        assert validate_item_sdk_metadata_silent(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_item_metadata_silent_invalid() -> None:
-        """Test silent validation fails for invalid item metadata."""
-        metadata = {
-            "schema_version": "invalid",
-        }
-
-        assert validate_item_sdk_metadata_silent(metadata) is False
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_get_item_json_schema() -> None:
-        """Test that the item metadata JSON schema can be exported."""
-        schema = get_item_sdk_metadata_json_schema()
-
-        assert isinstance(schema, dict)
-        assert "$schema" in schema
-        assert schema["$schema"] == "https://json-schema.org/draft/2020-12/schema"
-        assert "$id" in schema
-        assert schema["$id"] == (
-            f"https://raw.githubusercontent.com/aignostics/python-sdk/main/docs/source/_static/"
-            f"item_sdk_metadata_schema_v{ITEM_SDK_METADATA_SCHEMA_VERSION}.json"
-        )
-        assert "properties" in schema
-        assert "schema_version" in schema["properties"]
-        assert "required" in schema
-        assert "schema_version" in schema["required"]
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_item_metadata_with_tags() -> None:
-        """Test validation of item metadata with tags as a set of strings."""
-        metadata = {
-            "schema_version": ITEM_SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "tags": {"slide", "tumor", "he-stained"},
-        }
-
-        assert validate_item_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_item_metadata_with_empty_tags() -> None:
-        """Test validation of item metadata with empty tags set."""
-        metadata = {
-            "schema_version": ITEM_SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "tags": set(),
-        }
-
-        assert validate_item_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_item_metadata_with_tags_none() -> None:
-        """Test validation of item metadata with tags as None."""
-        metadata = {
-            "schema_version": ITEM_SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "tags": None,
-        }
-
-        assert validate_item_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_item_metadata_without_tags() -> None:
-        """Test validation of item metadata when tags field is omitted."""
-        metadata = {
-            "schema_version": ITEM_SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-        }
-
-        assert validate_item_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_item_metadata_tags_list_converted() -> None:
-        """Test that list is automatically converted to set by Pydantic."""
-        metadata = {
-            "schema_version": ITEM_SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "tags": ["tag1", "tag2"],  # List gets converted to set
-        }
-
-        # Validation should succeed as Pydantic converts list to set
-        assert validate_item_sdk_metadata(metadata) is True
-
-    @pytest.mark.unit
-    @staticmethod
-    def test_validate_item_metadata_tags_non_string() -> None:
-        """Test validation fails when tags contains non-string values."""
-        metadata = {
-            "schema_version": ITEM_SDK_METADATA_SCHEMA_VERSION,
-            "created_at": "2025-10-19T12:00:00+00:00",
-            "updated_at": "2025-10-19T12:00:00+00:00",
-            "tags": {"valid", 123},  # Mixed types
-        }
-
-        with pytest.raises(ValidationError):
-            validate_item_sdk_metadata(metadata)
diff --git a/tests/aignostics/platform/service_test.py b/tests/aignostics/platform/service_test.py
index 02f3b8fa..67c61856 100644
--- a/tests/aignostics/platform/service_test.py
+++ b/tests/aignostics/platform/service_test.py
@@ -1,11 +1,8 @@
 """Tests for the platform service module."""
 
-import pytest
-
 from aignostics.platform._service import Service
 
 
-@pytest.mark.unit
 def test_http_pool_is_shared() -> None:
     """Test that Service._get_http_pool returns the same instance across multiple calls.
 
@@ -22,7 +19,6 @@ def test_http_pool_is_shared() -> None:
     assert pool1 is pool2, "Service._get_http_pool should return the same PoolManager instance"
 
 
-@pytest.mark.unit
 def test_http_pool_singleton() -> None:
     """Test that Service._http_pool maintains a singleton pattern.
 
diff --git a/tests/aignostics/platform/settings_test.py b/tests/aignostics/platform/settings_test.py
index 8877c62a..e17cf931 100644
--- a/tests/aignostics/platform/settings_test.py
+++ b/tests/aignostics/platform/settings_test.py
@@ -18,9 +18,6 @@
     AUTHORIZATION_BASE_URL_DEV,
     AUTHORIZATION_BASE_URL_PRODUCTION,
     AUTHORIZATION_BASE_URL_STAGING,
-    CLIENT_ID_INTERACTIVE_DEV,
-    CLIENT_ID_INTERACTIVE_PRODUCTION,
-    CLIENT_ID_INTERACTIVE_STAGING,
     DEVICE_URL_DEV,
     DEVICE_URL_PRODUCTION,
     DEVICE_URL_STAGING,
@@ -42,11 +39,12 @@
 
 @pytest.fixture
 def mock_env_vars():  # noqa: ANN201
-    """Mock environment variable for testing of settings."""
+    """Mock environment variables required for settings."""
     with mock.patch.dict(
         os.environ,
         {
             f"{__project_name__.upper()}_CLIENT_ID_DEVICE": "test-client-id-device",
+            f"{__project_name__.upper()}_CLIENT_ID_INTERACTIVE": "test-client-id-interactive",
         },
     ):
         yield
@@ -68,20 +66,17 @@ def reset_cached_settings():  # noqa: ANN201
         settings.__cached_settings = original
 
 
-@pytest.mark.unit
-def test_authentication_settings_production(record_property) -> None:
+def test_authentication_settings_production(mock_env_vars, reset_cached_settings) -> None:
     """Test authentication settings with production API root."""
-    record_property("tested-item-id", "SPEC-PLATFORM-SERVICE")
     # Create settings with production API root
     settings = Settings(
         client_id_device=SecretStr("test-client-id-device"),
+        client_id_interactive=SecretStr("test-client-id-interactive"),
         api_root=API_ROOT_PRODUCTION,
     )
 
     # Validate production-specific settings
     assert settings.api_root == API_ROOT_PRODUCTION
-    assert settings.client_id_interactive == CLIENT_ID_INTERACTIVE_PRODUCTION
-    assert settings.client_id_device.get_secret_value() == "test-client-id-device"
     assert settings.audience == AUDIENCE_PRODUCTION
     assert settings.authorization_base_url == AUTHORIZATION_BASE_URL_PRODUCTION
     assert settings.token_url == TOKEN_URL_PRODUCTION
@@ -90,18 +85,15 @@ def test_authentication_settings_production(record_property) -> None:
     assert settings.jws_json_url == JWS_JSON_URL_PRODUCTION
 
 
-@pytest.mark.unit
-def test_authentication_settings_staging(record_property, mock_env_vars) -> None:
+def test_authentication_settings_staging(mock_env_vars) -> None:
     """Test authentication settings with staging API root."""
-    record_property("tested-item-id", "SPEC-PLATFORM-SERVICE")
     settings = Settings(
         client_id_device=SecretStr("test-client-id-device"),
+        client_id_interactive=SecretStr("test-client-id-interactive"),
         api_root=API_ROOT_STAGING,
     )
 
     assert settings.api_root == API_ROOT_STAGING
-    assert settings.client_id_interactive == CLIENT_ID_INTERACTIVE_STAGING
-    assert settings.client_id_device.get_secret_value() == "test-client-id-device"
     assert settings.audience == AUDIENCE_STAGING
     assert settings.authorization_base_url == AUTHORIZATION_BASE_URL_STAGING
     assert settings.token_url == TOKEN_URL_STAGING
@@ -110,18 +102,15 @@ def test_authentication_settings_staging(record_property, mock_env_vars) -> None:
     assert settings.jws_json_url == JWS_JSON_URL_STAGING
 
 
-@pytest.mark.unit
-def test_authentication_settings_dev(record_property, mock_env_vars) -> None:
+def test_authentication_settings_dev(mock_env_vars) -> None:
     """Test authentication settings with dev API root."""
-    record_property("tested-item-id", "SPEC-PLATFORM-SERVICE")
     settings = Settings(
         client_id_device=SecretStr("test-client-id-device"),
+        client_id_interactive=SecretStr("test-client-id-interactive"),
         api_root=API_ROOT_DEV,
     )
 
     assert settings.api_root == API_ROOT_DEV
-    assert settings.client_id_interactive == CLIENT_ID_INTERACTIVE_DEV
-    assert settings.client_id_device.get_secret_value() == "test-client-id-device"
     assert settings.audience == AUDIENCE_DEV
     assert settings.authorization_base_url == AUTHORIZATION_BASE_URL_DEV
     assert settings.token_url == TOKEN_URL_DEV
@@ -130,53 +119,51 @@ def test_authentication_settings_dev(record_property, mock_env_vars) -> None:
     assert settings.jws_json_url == JWS_JSON_URL_DEV
 
 
-@pytest.mark.unit
-def test_authentication_settings_unknown_api_root(record_property, mock_env_vars) -> None:
+def test_authentication_settings_unknown_api_root(mock_env_vars) -> None:
     """Test authentication settings with unknown API root raises ValueError."""
-    record_property("tested-item-id", "SPEC-PLATFORM-SERVICE")
     with pytest.raises(ValueError, match=UNKNOWN_ENDPOINT_URL):
         Settings(
+            client_id_device=SecretStr("test-client-id-device"),
+            client_id_interactive=SecretStr("test-client-id-interactive"),
             api_root="https://unknown.example.com",
         )
 
 
-@pytest.mark.unit
-def test_scope_elements_empty_fails_validation(record_property) -> None:
+def test_scope_elements_empty_fails_validation() -> None:
     """Test scope_elements property with empty scope fails validation."""
-    record_property("tested-item-id", "SPEC-PLATFORM-SERVICE")
     with pytest.raises(PydanticValidationError, match="String should have at least 3 characters"):
         Settings(
+            client_id_device=SecretStr("test-client-id-device"),
+            client_id_interactive=SecretStr("test-client-id-interactive"),
             scope="",
             api_root=API_ROOT_PRODUCTION,
         )
 
 
-@pytest.mark.unit
-def test_scope_elements_multiple(record_property) -> None:
+def test_scope_elements_multiple() -> None:
     """Test scope_elements property with multiple scopes."""
-    record_property("tested-item-id", "SPEC-PLATFORM-SERVICE")
     settings = Settings(
+        client_id_device=SecretStr("test-client-id-device"),
+        client_id_interactive=SecretStr("test-client-id-interactive"),
        scope="offline_access, profile, email",
         api_root=API_ROOT_PRODUCTION,
     )
 
     assert settings.scope_elements == ["offline_access", "profile", "email"]
 
 
-@pytest.mark.unit
-def test_authentication_settings_with_refresh_token(record_property, mock_env_vars) -> None:
+def test_authentication_settings_with_refresh_token(mock_env_vars) -> None:
     """Test authentication settings with refresh token."""
-    record_property("tested-item-id", "SPEC-PLATFORM-SERVICE")
     settings = Settings(
+        client_id_device=SecretStr("test-client-id-device"),
+        client_id_interactive=SecretStr("test-client-id-interactive"),
         refresh_token=SecretStr("test-refresh-token"),
         api_root=API_ROOT_PRODUCTION,
     )
 
     assert settings.refresh_token == SecretStr("test-refresh-token")
 
 
-@pytest.mark.unit
-def test_lazy_authentication_settings(record_property, mock_env_vars, reset_cached_settings) -> None:
+def test_lazy_authentication_settings(mock_env_vars, reset_cached_settings) -> None:
     """Test lazy loading of authentication settings."""
-    record_property("tested-item-id", "SPEC-PLATFORM-SERVICE")
     # First call should create the settings
     settings1 = settings()
     assert settings1 is not None
@@ -186,87 +173,32 @@ def test_lazy_authentication_settings(record_property, mock_env_vars, reset_cached_settings) -> None:
     assert settings2 is settings1
 
 
-@pytest.mark.unit
 @pytest.mark.sequential
-def test_authentication_settings_with_env_vars(record_property, mock_env_vars, reset_cached_settings) -> None:
+# TODO(Helmut): fix race
+@pytest.mark.skip
+def test_authentication_settings_with_env_vars(mock_env_vars, reset_cached_settings) -> None:
     """Test authentication settings from environment variables."""
-    record_property("tested-item-id", "SPEC-PLATFORM-SERVICE")
     settings1 = settings()
     assert settings1.client_id_device.get_secret_value() == "test-client-id-device"
+    assert settings1.client_id_interactive.get_secret_value() == "test-client-id-interactive"
+
+
+# TODO(Helmut): fixme
+@pytest.mark.skip
+def test_custom_env_file_location(mock_env_vars) -> None:
+    """Test custom env file location."""
+    custom_env_file = "/home/dummy/test_env_file"
+    with mock.patch.dict(os.environ, {f"{__project_name__.upper()}_ENV_FILE": custom_env_file}):
+        settings = Settings.model_config
+        assert custom_env_file in settings["env_file"]
 
 
-def test_custom_env_file_location(reset_cached_settings, record_property) -> None:
-    """Test custom env file location.
-
-    This test verifies that a custom env file can be specified via the AIGNOSTICS_ENV_FILE
-    environment variable and that Settings will load from that file. The test uses a context
-    manager to ensure proper cleanup of the temporary env file.
-
-    Note: This test uses health_timeout instead of client_id_device because in CI environments,
-    the AIGNOSTICS_CLIENT_ID_DEVICE environment variable takes precedence over env file values
-    (as per pydantic-settings priority). The health_timeout field is less likely to be set
-    in CI environments.
-    """
-    import sys
-    import tempfile
-    from contextlib import contextmanager
-
-    record_property("tested-item-id", "SPEC-PLATFORM-SERVICE")
-
-    settings_module = "aignostics.platform._settings"
-
-    @contextmanager
-    def temp_env_file(content: str):  # type: ignore[misc]
-        """Context manager for creating a temporary env file that's cleaned up automatically.
-
-        Args:
-            content: The content to write to the temporary env file.
-
-        Yields:
-            str: The path to the temporary env file.
- """ - with tempfile.NamedTemporaryFile(mode="w", suffix=".env", delete=False, encoding="utf-8") as f: - f.write(content) - temp_path = f.name - try: - yield temp_path - finally: - Path(temp_path).unlink(missing_ok=True) - - # Create a temporary env file with test settings - with temp_env_file("AIGNOSTICS_HEALTH_TIMEOUT=42.5\n") as custom_env_file: - # Set the custom env file location BEFORE importing Settings - # This requires reimporting the module to pick up the new env var - # Clear ALL AIGNOSTICS_ environment variables to ensure clean state - env_patch = {k: v for k, v in os.environ.items() if not k.startswith(f"{__project_name__.upper()}_")} - - # Now set only the variables we want for this test - env_patch[f"{__project_name__.upper()}_ENV_FILE"] = custom_env_file - - try: - with mock.patch.dict(os.environ, env_patch, clear=True): - # Remove the module from sys.modules to force reimport - if settings_module in sys.modules: - del sys.modules[settings_module] - - # Now import Settings fresh - it should read from the custom env file - from aignostics.platform._settings import Settings - - assert custom_env_file in Settings.model_config["env_file"] - test_settings = Settings() - assert test_settings.health_timeout == pytest.approx(42.5) - finally: - # Restore the original module state by deleting it so it gets reimported fresh next time - if settings_module in sys.modules: - del sys.modules[settings_module] - - -@pytest.mark.unit -def test_custom_cache_dir(record_property, mock_env_vars) -> None: + +def test_custom_cache_dir(mock_env_vars) -> None: """Test custom cache directory.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") custom_cache_dir = "/home/dummy/test_cache_dir" settings = Settings( + client_id_device=SecretStr("test-client-id-device"), + client_id_interactive=SecretStr("test-client-id-interactive"), cache_dir=custom_cache_dir, api_root=API_ROOT_PRODUCTION, ) @@ -274,11 +206,11 @@ def test_custom_cache_dir(record_property, mock_env_vars) -> None: assert settings.token_file == Path(custom_cache_dir) / ".token" -@pytest.mark.unit -def test_issuer_computed_field_production(record_property, mock_env_vars) -> None: +def test_issuer_computed_field_production(mock_env_vars) -> None: """Test issuer computed field with production authorization base URL.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") settings = Settings( + client_id_device=SecretStr("test-client-id-device"), + client_id_interactive=SecretStr("test-client-id-interactive"), api_root=API_ROOT_PRODUCTION, ) # Production authorization_base_url is https://aignostics-platform.eu.auth0.com/authorize @@ -287,11 +219,11 @@ def test_issuer_computed_field_production(record_property, mock_env_vars) -> Non assert settings.issuer == expected_issuer -@pytest.mark.unit -def test_issuer_computed_field_staging(record_property, mock_env_vars) -> None: +def test_issuer_computed_field_staging(mock_env_vars) -> None: """Test issuer computed field with staging authorization base URL.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") settings = Settings( + client_id_device=SecretStr("test-client-id-device"), + client_id_interactive=SecretStr("test-client-id-interactive"), api_root=API_ROOT_STAGING, ) # Staging authorization_base_url is https://todo (placeholder) @@ -300,11 +232,11 @@ def test_issuer_computed_field_staging(record_property, mock_env_vars) -> None: assert settings.issuer == expected_issuer -@pytest.mark.unit -def test_issuer_computed_field_dev(record_property, mock_env_vars) -> 
None: +def test_issuer_computed_field_dev(mock_env_vars) -> None: """Test issuer computed field with dev authorization base URL.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") settings = Settings( + client_id_device=SecretStr("test-client-id-device"), + client_id_interactive=SecretStr("test-client-id-interactive"), api_root=API_ROOT_DEV, ) # Dev authorization_base_url is https://dev-8ouohmmrbuh2h4vu.eu.auth0.com/authorize @@ -313,15 +245,13 @@ def test_issuer_computed_field_dev(record_property, mock_env_vars) -> None: assert settings.issuer == expected_issuer -@pytest.mark.unit -def test_issuer_computed_field_custom_url(record_property, mock_env_vars) -> None: +def test_issuer_computed_field_custom_url(mock_env_vars) -> None: """Test issuer computed field with custom authorization base URL.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") # Avoid triggering api_root-based validator by setting all required fields manually settings = Settings( client_id_device=SecretStr("test-client-id-device"), - api_root="https://custom.platform.example.com", # Custom api_root that doesn't match any preset client_id_interactive="test-client-id-interactive", + api_root="https://custom.platform.example.com", # Custom api_root that doesn't match any preset authorization_base_url="https://custom.example.com/auth/oauth2/authorize", audience="test-audience", token_url="https://custom.example.com/auth/oauth2/token", # noqa: S106 @@ -333,14 +263,12 @@ def test_issuer_computed_field_custom_url(record_property, mock_env_vars) -> Non assert settings.issuer == expected_issuer -@pytest.mark.unit -def test_issuer_computed_field_malformed_url_no_scheme(record_property, mock_env_vars) -> None: +def test_issuer_computed_field_malformed_url_no_scheme(mock_env_vars) -> None: """Test issuer computed field with malformed URL (no scheme) falls back gracefully.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") settings = Settings( client_id_device=SecretStr("test-client-id-device"), - api_root="https://custom.platform.example.com", # Custom api_root that doesn't match any preset client_id_interactive="test-client-id-interactive", + api_root="https://custom.platform.example.com", # Custom api_root that doesn't match any preset authorization_base_url="example.com/oauth2/auth", audience="test-audience", token_url="https://example.com/oauth2/token", # noqa: S106 @@ -353,14 +281,12 @@ def test_issuer_computed_field_malformed_url_no_scheme(record_property, mock_env assert settings.issuer == expected_issuer -@pytest.mark.unit -def test_issuer_computed_field_malformed_url_no_domain(record_property, mock_env_vars) -> None: +def test_issuer_computed_field_malformed_url_no_domain(mock_env_vars) -> None: """Test issuer computed field with malformed URL (no domain) falls back gracefully.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") settings = Settings( client_id_device=SecretStr("test-client-id-device"), - api_root="https://custom.platform.example.com", # Custom api_root that doesn't match any preset client_id_interactive="test-client-id-interactive", + api_root="https://custom.platform.example.com", # Custom api_root that doesn't match any preset authorization_base_url="https:///oauth2/auth", audience="test-audience", token_url="https://example.com/oauth2/token", # noqa: S106 @@ -373,14 +299,12 @@ def test_issuer_computed_field_malformed_url_no_domain(record_property, mock_env assert settings.issuer == expected_issuer -@pytest.mark.unit -def 
test_issuer_computed_field_url_with_port(record_property, mock_env_vars) -> None: +def test_issuer_computed_field_url_with_port(mock_env_vars) -> None: """Test issuer computed field with URL containing port number.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") settings = Settings( client_id_device=SecretStr("test-client-id-device"), - api_root="https://custom.platform.example.com", # Custom api_root that doesn't match any preset client_id_interactive="test-client-id-interactive", + api_root="https://custom.platform.example.com", # Custom api_root that doesn't match any preset authorization_base_url="https://localhost:8080/oauth2/auth", audience="test-audience", token_url="https://localhost:8080/oauth2/token", # noqa: S106 @@ -392,14 +316,12 @@ def test_issuer_computed_field_url_with_port(record_property, mock_env_vars) -> assert settings.issuer == expected_issuer -@pytest.mark.unit -def test_issuer_computed_field_url_with_subdirectory(record_property, mock_env_vars) -> None: +def test_issuer_computed_field_url_with_subdirectory(mock_env_vars) -> None: """Test issuer computed field with URL containing multiple path segments.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") settings = Settings( client_id_device=SecretStr("test-client-id-device"), - api_root="https://custom.platform.example.com", # Custom api_root that doesn't match any preset client_id_interactive="test-client-id-interactive", + api_root="https://custom.platform.example.com", # Custom api_root that doesn't match any preset authorization_base_url="https://example.com/auth/v1/oauth2/authorize", audience="test-audience", token_url="https://example.com/auth/v1/oauth2/token", # noqa: S106 @@ -411,14 +333,12 @@ def test_issuer_computed_field_url_with_subdirectory(record_property, mock_env_v assert settings.issuer == expected_issuer -@pytest.mark.unit -def test_issuer_computed_field_url_with_query_params(record_property, mock_env_vars) -> None: +def test_issuer_computed_field_url_with_query_params(mock_env_vars) -> None: """Test issuer computed field with URL containing query parameters.""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") settings = Settings( client_id_device=SecretStr("test-client-id-device"), - api_root="https://custom.platform.example.com", # Custom api_root that doesn't match any preset client_id_interactive="test-client-id-interactive", + api_root="https://custom.platform.example.com", # Custom api_root that doesn't match any preset authorization_base_url="https://example.com/oauth2/auth?param=value", audience="test-audience", token_url="https://example.com/oauth2/token", # noqa: S106 @@ -430,10 +350,11 @@ def test_issuer_computed_field_url_with_query_params(record_property, mock_env_v assert settings.issuer == expected_issuer -@pytest.mark.unit def test_validate_retry_wait_times_valid(mock_env_vars) -> None: """Test that valid retry wait times pass validation.""" settings = Settings( + client_id_device=SecretStr("test-client-id-device"), + client_id_interactive=SecretStr("test-client-id-interactive"), api_root=API_ROOT_PRODUCTION, auth_retry_wait_min=0.1, auth_retry_wait_max=5.0, @@ -442,10 +363,11 @@ def test_validate_retry_wait_times_valid(mock_env_vars) -> None: assert settings.auth_retry_wait_max == 5.0 -@pytest.mark.unit def test_validate_retry_wait_times_min_equals_max(mock_env_vars) -> None: """Test that retry wait min equal to max passes validation.""" settings = Settings( + client_id_device=SecretStr("test-client-id-device"), + 
client_id_interactive=SecretStr("test-client-id-interactive"), api_root=API_ROOT_PRODUCTION, auth_retry_wait_min=3.0, auth_retry_wait_max=3.0, @@ -454,7 +376,6 @@ def test_validate_retry_wait_times_min_equals_max(mock_env_vars) -> None: assert settings.auth_retry_wait_max == 3.0 -@pytest.mark.unit def test_validate_retry_wait_times_min_greater_than_max(mock_env_vars) -> None: """Test that retry wait min greater than max fails validation.""" with pytest.raises( @@ -462,6 +383,8 @@ def test_validate_retry_wait_times_min_greater_than_max(mock_env_vars) -> None: match=r"auth_retry_wait_min \(10\.0\) must be less or equal than auth_retry_wait_max \(5.0\)", ): Settings( + client_id_device=SecretStr("test-client-id-device"), + client_id_interactive=SecretStr("test-client-id-interactive"), api_root=API_ROOT_PRODUCTION, auth_retry_wait_min=10.0, auth_retry_wait_max=5.0, diff --git a/tests/aignostics/platform/utils_test.py b/tests/aignostics/platform/utils_test.py index ba50eb6c..5e6a3da2 100644 --- a/tests/aignostics/platform/utils_test.py +++ b/tests/aignostics/platform/utils_test.py @@ -1,241 +1,73 @@ """Tests for the platform utility functions.""" -import math - import pytest from aignostics.platform import mime_type_to_file_ending -from aignostics.platform._utils import convert_to_json_serializable - - -class TestConvertToJsonSerializable: - """Tests for the convert_to_json_serializable function.""" - - @pytest.mark.unit - @staticmethod - def test_convert_simple_set_to_list() -> None: - """Test that a set is converted to a sorted list. - - This test verifies that the convert_to_json_serializable function correctly - converts a set to a sorted list for JSON serialization. - """ - result = convert_to_json_serializable({"a", "c", "b"}) - assert result == ["a", "b", "c"] - - @pytest.mark.unit - @staticmethod - def test_convert_numeric_set_to_list() -> None: - """Test that a numeric set is converted to a sorted list. - - This test verifies that the convert_to_json_serializable function correctly - converts a numeric set to a sorted list for JSON serialization. - """ - result = convert_to_json_serializable({3, 1, 2}) - assert result == [1, 2, 3] - - @pytest.mark.unit - @staticmethod - def test_convert_dict_with_set_values() -> None: - """Test that a dictionary with set values has them converted to lists. - - This test verifies that the convert_to_json_serializable function recursively - converts set values within dictionaries to sorted lists. - """ - result = convert_to_json_serializable({"tags": {"test", "prod", "dev"}}) - assert result == {"tags": ["dev", "prod", "test"]} - - @pytest.mark.unit - @staticmethod - def test_convert_nested_dict_with_sets() -> None: - """Test that nested dictionaries with sets are fully converted. - - This test verifies that the convert_to_json_serializable function recursively - processes nested structures containing sets. - """ - input_data = { - "outer": { - "inner": {"items": {5, 3, 1}}, - "tags": {"z", "a"}, - } - } - expected = { - "outer": { - "inner": {"items": [1, 3, 5]}, - "tags": ["a", "z"], - } - } - result = convert_to_json_serializable(input_data) - assert result == expected - - @pytest.mark.unit - @staticmethod - def test_convert_list_with_sets() -> None: - """Test that lists containing sets have them converted to lists. - - This test verifies that the convert_to_json_serializable function recursively - processes lists containing sets. 
- """ - result = convert_to_json_serializable([{"a", "b"}, {"c", "d"}]) - assert result == [["a", "b"], ["c", "d"]] - - @pytest.mark.unit - @staticmethod - def test_convert_tuple_with_sets() -> None: - """Test that tuples containing sets have them converted to lists. - - This test verifies that the convert_to_json_serializable function recursively - processes tuples containing sets and converts the tuple itself to a list. - """ - result = convert_to_json_serializable(({"x", "y"}, {"z"})) - assert result == [["x", "y"], ["z"]] - - @pytest.mark.unit - @staticmethod - def test_convert_mixed_types_unchanged() -> None: - """Test that JSON-serializable types remain unchanged. - - This test verifies that the convert_to_json_serializable function does not - modify types that are already JSON-serializable (str, int, bool, None). - """ - input_data = { - "string": "test", - "number": 42, - "float": math.pi, - "boolean": True, - "null": None, - "list": [1, 2, 3], - } - result = convert_to_json_serializable(input_data) - assert result == input_data - - @pytest.mark.unit - @staticmethod - def test_convert_empty_set() -> None: - """Test that an empty set is converted to an empty list. - - This test verifies that the convert_to_json_serializable function correctly - handles empty sets. - """ - result = convert_to_json_serializable(set()) - assert result == [] - - @pytest.mark.unit - @staticmethod - def test_convert_complex_nested_structure() -> None: - """Test conversion of a complex nested structure with multiple sets. - - This test verifies that the convert_to_json_serializable function can handle - deeply nested structures with sets at various levels. - """ - input_data = { - "sdk": { - "tags": {"test_1", "test_2"}, - "metadata": { - "nested": [ - {"items": {1, 2}}, - {"values": {"a", "b"}}, - ] - }, - }, - "user": { - "groups": {"admin", "user"}, - }, - } - expected = { - "sdk": { - "tags": ["test_1", "test_2"], - "metadata": { - "nested": [ - {"items": [1, 2]}, - {"values": ["a", "b"]}, - ] - }, - }, - "user": { - "groups": ["admin", "user"], - }, - } - result = convert_to_json_serializable(input_data) - assert result == expected class TestMimeTypeToFileEnding: """Tests for the mime_type_to_file_ending function.""" - @pytest.mark.unit @staticmethod - def test_png_mime_type(record_property) -> None: + def test_png_mime_type() -> None: """Test that image/png MIME type returns .png extension. This test verifies that the mime_type_to_file_ending function correctly maps the image/png MIME type to the .png file extension. """ - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") assert mime_type_to_file_ending("image/png") == ".png" - @pytest.mark.unit @staticmethod - def test_tiff_mime_type(record_property) -> None: + def test_tiff_mime_type() -> None: """Test that image/tiff MIME type returns .tiff extension. This test verifies that the mime_type_to_file_ending function correctly maps the image/tiff MIME type to the .tiff file extension. """ - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") assert mime_type_to_file_ending("image/tiff") == ".tiff" - @pytest.mark.unit @staticmethod - def test_parquet_mime_type(record_property) -> None: + def test_parquet_mime_type() -> None: """Test that application/vnd.apache.parquet MIME type returns .parquet extension. This test verifies that the mime_type_to_file_ending function correctly maps the application/vnd.apache.parquet MIME type to the .parquet file extension. 
""" - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") assert mime_type_to_file_ending("application/vnd.apache.parquet") == ".parquet" - @pytest.mark.unit @staticmethod - def test_json_mime_type(record_property) -> None: + def test_json_mime_type() -> None: """Test that application/json MIME type returns .json extension. This test verifies that the mime_type_to_file_ending function correctly maps the application/json MIME type to the .json file extension. """ - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") assert mime_type_to_file_ending("application/json") == ".json" - @pytest.mark.unit @staticmethod - def test_geojson_mime_type(record_property) -> None: + def test_geojson_mime_type() -> None: """Test that application/geo+json MIME type returns .json extension. This test verifies that the mime_type_to_file_ending function correctly maps the application/geo+json MIME type to the .json file extension. """ - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") assert mime_type_to_file_ending("application/geo+json") == ".json" - @pytest.mark.unit @staticmethod - def test_csv_mime_type(record_property) -> None: + def test_csv_mime_type() -> None: """Test that text/csv MIME type returns .csv extension. This test verifies that the mime_type_to_file_ending function correctly maps the text/csv MIME type to the .csv file extension. """ - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") assert mime_type_to_file_ending("text/csv") == ".csv" - @pytest.mark.unit @staticmethod - def test_unknown_mime_type_raises_error(record_property) -> None: + def test_unknown_mime_type_raises_error() -> None: """Test that an unknown MIME type raises a ValueError. This test verifies that the mime_type_to_file_ending function correctly raises a ValueError when given an unrecognized MIME type. """ - record_property("tested-item-id", "SPEC-PLATFORM-SERVICE") with pytest.raises(ValueError, match="Unknown mime type: application/unknown"): mime_type_to_file_ending("application/unknown") diff --git a/tests/aignostics/qupath/TC-QUPATH-01.feature b/tests/aignostics/qupath/TC-QUPATH-01.feature deleted file mode 100644 index 400266cd..00000000 --- a/tests/aignostics/qupath/TC-QUPATH-01.feature +++ /dev/null @@ -1,18 +0,0 @@ -Feature: QuPath Software Management - - The system provides QuPath software installation, launch capabilities, - and project creation functionality for image visualization and analysis. 
-
-    @tests:SPEC-QUPATH-SERVICE
-    @tests:SWR-VISUALIZATION-1-1
-    @tests:SWR-VISUALIZATION-1-2
-    @tests:SWR-VISUALIZATION-1-3
-    @tests:SWR-VISUALIZATION-1-4
-    @id:TC-QUPATH-01
-    Scenario: System manages QuPath software functionality
-        When the user initiates QuPath installation
-        Then the system shall install QuPath software and confirm installation completion with version information
-        When the user launches QuPath application
-        Then the system shall launch QuPath application when requested by user
-        When the user creates QuPath project from application results
-        Then the system shall create QuPath projects with annotation data from application run results
diff --git a/tests/aignostics/qupath/cli_test.py b/tests/aignostics/qupath/cli_test.py
index a94caa74..149cd43c 100644
--- a/tests/aignostics/qupath/cli_test.py
+++ b/tests/aignostics/qupath/cli_test.py
@@ -3,7 +3,6 @@
 import json
 import platform
 import re
-from pathlib import Path
 
 import psutil
 import pytest
@@ -14,13 +13,10 @@
 from tests.conftest import normalize_output
 
 
-@pytest.mark.e2e
-@pytest.mark.long_running
 @pytest.mark.skipif(
     platform.system() == "Linux" and platform.machine() in {"aarch64", "arm64"},
     reason="QuPath is not supported on ARM64 Linux",
 )
-@pytest.mark.timeout(timeout=60 * 10)
 @pytest.mark.sequential
 def test_cli_install_and_uninstall(runner: CliRunner) -> None:
     """Check (un)install works for Windows, Mac and Linux package."""
@@ -61,16 +57,13 @@ def test_cli_install_and_uninstall(runner: CliRunner) -> None:
     result = runner.invoke(cli, ["qupath", "install"])
 
 
-@pytest.mark.e2e
-@pytest.mark.long_running
 @pytest.mark.skipif(
     platform.system() == "Linux" and platform.machine() in {"aarch64", "arm64"},
     reason="QuPath is not supported on ARM64 Linux",
 )
-@pytest.mark.timeout(timeout=60 * 10)
 @pytest.mark.sequential
-def test_cli_install_launch_project_annotations_headless(runner: CliRunner, tmpdir, qupath_teardown) -> None:
-    """Check (un)install, launching headless, creating project and adding annotations works."""
+def test_cli_install_and_launch_headless(runner: CliRunner, qupath_teardown) -> None:
+    """Check (un)install and launching headless works."""
     # Uninstall QuPath if it exists to have a clean state for the test
     result = runner.invoke(cli, ["qupath", "uninstall"])
     was_installed = result.exit_code == 0
@@ -94,51 +87,33 @@ def test_cli_install_and_launch_headless(runner: CliRunner, qupath_teardown) -> None:
     assert output_data["qupath"]["app"]["version"]["version"] == QUPATH_VERSION
     assert result.exit_code == 0
 
-    # Step 4: Check repeated QuPath install succeeds (idempotent operation)
+    # Step 4: Check QuPath install succeeds (idempotent operation)
     result = runner.invoke(cli, ["qupath", "install"])
     assert f"QuPath v{QUPATH_VERSION} installed successfully" in normalize_output(result.output)
     assert result.exit_code == 0
 
-    if platform.system() in {"Linux", "Darwin"}:
-        # Step 5: Create QuPath project and add image
-        project_dir = Path(tmpdir) / "qupath_project"
-        small_pyramidal_dcm = Path(__file__).parent.parent.parent / "resources" / "run" / "small-pyramidal.dcm"
-        result = runner.invoke(cli, ["qupath", "add", str(project_dir), str(small_pyramidal_dcm)])
-        assert f"Added '1' images to project '{project_dir}'." in normalize_output(result.output)
-        assert project_dir.exists(), f"Project directory {project_dir} was not created"
-        assert project_dir.parent == Path(tmpdir), f"Project directory {project_dir} is not a subdirectory of {tmpdir}"
-
-        # Step 6: Annotate project with polygons
-        cells_json = Path(__file__).parent.parent.parent / "resources" / "cells_broken.json"
-        result = runner.invoke(cli, ["qupath", "annotate", str(project_dir), str(small_pyramidal_dcm), str(cells_json)])
-        assert "Failed to add images to project: parse error: premature EOF" in normalize_output(result.output)
-        assert result.exit_code == 1
-
-    # Step 7: Uninstall QuPath
+    # Step 5: Uninstall QuPath
     result = runner.invoke(cli, ["qupath", "uninstall"])
     assert "QuPath uninstalled successfully." in normalize_output(result.output)
     assert result.exit_code == 0
 
-    # Step 8: Check QuPath info fails if not installed
+    # Step 6: Check QuPath info fails if not installed
     result = runner.invoke(cli, ["system", "info"])
     output_data = json.loads(result.output)
     assert output_data["qupath"]["app"]["path"] is None
     assert output_data["qupath"]["app"]["version"] is None
     assert result.exit_code == 0
 
-    # Step 9: Reinstall QuPath if it was installed before
+    # Step 7: Reinstall QuPath if it was installed before
     if was_installed:
         result = runner.invoke(cli, ["qupath", "install"])
 
 
-@pytest.mark.e2e
-@pytest.mark.long_running
 @pytest.mark.skipif(
     platform.system() == "Linux" and platform.machine() in {"aarch64", "arm64"},
     reason="QuPath is not supported on ARM64 Linux",
 )
 @pytest.mark.flaky(retries=1, delay=5, only_on=[AssertionError])
-@pytest.mark.timeout(timeout=60 * 10)
 @pytest.mark.sequential
 def test_cli_install_and_launch_ui(runner: CliRunner, qupath_teardown) -> None:
     """Check (un)install and launching UI version of QuPath works."""
@@ -165,16 +140,11 @@ def test_cli_install_and_launch_ui(runner: CliRunner, qupath_teardown) -> None:
     pid_match = re.search(r"QuPath launched successfully with process id '(\d+)'.", normalize_output(result.output))
     assert pid_match is not None, "PID not found in launch output"
     pid = int(pid_match.group(1))
-    assert psutil.Process(pid).is_running(), f"QuPath process wit pid '{pid}' is not running as expected."
+    assert psutil.Process(pid).is_running(), "QuPath process is not running"
 
     # Step 4: Check we list the process via the CLI
     result = runner.invoke(cli, ["qupath", "processes", "--json"])
     assert f'"pid": {pid},' in normalize_output(result.output)
-    assert result.exit_code == 0
-    result = runner.invoke(cli, ["qupath", "processes"])
-    assert "Process ID" in normalize_output(result.output)
-    assert f"{pid}" in normalize_output(result.output)
-    assert result.exit_code == 0
 
     # Step 5: Terminate via CLI
     result = runner.invoke(cli, ["qupath", "terminate"])
diff --git a/tests/aignostics/qupath/gui_test.py b/tests/aignostics/qupath/gui_test.py
index cf5b0cb1..5f2a1a2a 100644
--- a/tests/aignostics/qupath/gui_test.py
+++ b/tests/aignostics/qupath/gui_test.py
@@ -5,7 +5,6 @@
 import re
 from asyncio import sleep
 from pathlib import Path
-from typing import TYPE_CHECKING
 from unittest.mock import patch
 
 import platformdirs
@@ -16,39 +15,23 @@
 from aignostics.application import Service
 from aignostics.cli import cli
+from aignostics.gui import HEALTH_UPDATE_INTERVAL
+from aignostics.platform import ApplicationRunStatus
 from aignostics.qupath import QUPATH_LAUNCH_MAX_WAIT_TIME, QUPATH_VERSION
 from aignostics.utils import __project_name__
 from tests.conftest import assert_notified, normalize_output, print_directory_structure
-from tests.constants_test import (
-    HETA_APPLICATION_ID,
-    HETA_APPLICATION_VERSION,
-    SPOT_0_EXPECTED_CELLS_CLASSIFIED,
-    SPOT_0_EXPECTED_RESULT_FILES,
-    SPOT_0_FILENAME,
-    SPOT_0_FILESIZE,
-    SPOT_0_GS_URL,
-    SPOT_0_HEIGHT,
-    SPOT_0_WIDTH,
-)
-
-if TYPE_CHECKING:
-    from nicegui import ui
+from tests.constants_test import HETA_APPLICATION_ID
 
 MESSAGE_NO_DOWNLOAD_FOLDER_SELECTED = "No download folder selected"
 
 
-@pytest.mark.e2e
-@pytest.mark.long_running
-@pytest.mark.flaky(retries=1, delay=5)
 @pytest.mark.skipif(
     platform.system() == "Linux" and platform.machine() in {"aarch64", "arm64"},
     reason="QuPath is not supported on ARM64 Linux",
 )
-@pytest.mark.timeout(timeout=60 * 10)
 @pytest.mark.sequential
-async def test_gui_qupath_install_only(user: User, runner: CliRunner, silent_logging: None, record_property) -> None:
+async def test_gui_qupath_install(user: User, runner: CliRunner, silent_logging: None) -> None:
     """Test that the user can install and launch QuPath via the GUI."""
-    record_property("tested-item-id", "TC-QUPATH-01, SPEC-GUI-SERVICE")
     result = runner.invoke(cli, ["qupath", "uninstall"])
     assert result.exit_code in {0, 2}, f"Uninstall command failed with exit code {result.exit_code}"
     was_installed = not result.exit_code
@@ -58,6 +41,8 @@
     await user.should_see("QuPath Extension")
 
     # Step 2: Check we indicate QuPath is not installed
+    await sleep(HEALTH_UPDATE_INTERVAL * 2)  # Health UI updated in background
+    await user.should_see("Launchpad is unhealthy")
     await user.should_see("Install QuPath to enable visualizing your Whole Slide Image and application results")
 
     # Step 3: Install QuPath
@@ -67,7 +52,7 @@
     await assert_notified(
         user,
         f"QuPath installed successfully to '{app_dir}",
-        wait_seconds=60 * 2,
+        wait_seconds=35,
     )
 
     # Step 4: Check we indicate QuPath is installed
@@ -75,24 +60,25 @@
     await user.should_see(f"QuPath {QUPATH_VERSION} is installed and ready to execute.")
     await user.should_see(marker="BUTTON_QUPATH_LAUNCH")
 
+    # Step 5: Check Launchpad turned healthy
+    # TODO(Helmut): reactivate
+    # await sleep(HEALTH_UPDATE_INTERVAL * 2)  # Health UI updated in background  # noqa: ERA001
+    # await user.should_see("Launchpad is healthy")  # noqa: ERA001
+
     if not was_installed:
         result = runner.invoke(cli, ["qupath", "uninstall"])
 
 
-@pytest.mark.e2e
-@pytest.mark.long_running
-@pytest.mark.flaky(retries=1, delay=5)
 @pytest.mark.skipif(
-    platform.system() == "Linux" and platform.machine() in {"arm64", "aarch64"},
+    platform.system() == "Linux" and platform.machine() in {"aarch64", "arm64"},
     reason="QuPath is not supported on ARM64 Linux",
 )
-@pytest.mark.timeout(timeout=60 * 10)
-@pytest.mark.sequential
+@pytest.mark.long_running
 async def test_gui_qupath_install_and_launch(
-    user: User, runner: CliRunner, silent_logging: None, qupath_teardown, record_property
+    user: User, runner: CliRunner, silent_logging: None, qupath_teardown
 ) -> None:
     """Test that the user can install and launch QuPath via the GUI."""
-    record_property("tested-item-id", "TC-QUPATH-01, SPEC-GUI-SERVICE")
+    pytest.skip("Skip interim - TODO (Helmut)")
 
     result = runner.invoke(cli, ["qupath", "uninstall"])
     assert result.exit_code in {0, 2}, f"Uninstall command failed with exit code {result.exit_code}"
@@ -115,7 +101,7 @@
     await assert_notified(
         user,
         f"QuPath installed successfully to '{app_dir}",
-        wait_seconds=60 * 8,
+        wait_seconds=35,
     )
 
     # Step 4: Check we indicate QuPath is installed
@@ -146,149 +132,79 @@
         result = runner.invoke(cli, ["qupath", "uninstall"])
 
 
-@pytest.mark.e2e
-@pytest.mark.long_running
 @pytest.mark.skipif(
-    (platform.system() == "Linux" and platform.machine() in {"aarch64", "arm64"}) or platform.system() == "Windows",
-    reason="QuPath is not supported on ARM64 Linux; Windows support is not fully tested yet",
+    platform.system() == "Linux" and platform.machine() in {"aarch64", "arm64"},
+    reason="QuPath is not supported on ARM64 Linux",
 )
-@pytest.mark.timeout(timeout=60 * 15)
-@pytest.mark.sequential
-async def test_gui_run_qupath_install_to_inspect(  # noqa: C901, PLR0912, PLR0913, PLR0914, PLR0915, PLR0917
-    user: User, runner: CliRunner, tmp_path: Path, silent_logging: None, qupath_teardown: None, record_property
+@pytest.mark.long_running
+async def test_gui_run_qupath_install_to_inspect(  # noqa: PLR0914, PLR0915
+    user: User, runner: CliRunner, tmp_path: Path, silent_logging: None
 ) -> None:
-    """Test installing QuPath, downloading run results, creating QuPath project from it, and inspecting results."""
-    record_property("tested-item-id", "TC-QUPATH-01, SPEC-GUI-SERVICE")
-
-    # Find run
-    runs = Service().application_runs(
-        application_id=HETA_APPLICATION_ID,
-        application_version=HETA_APPLICATION_VERSION,
-        external_id=SPOT_0_GS_URL,
-        has_output=True,
-        limit=1,
-    )
-    if not runs:
-        message = f"No matching runs found for application {HETA_APPLICATION_ID} ({HETA_APPLICATION_VERSION}). "
-        message += "This test requires the scheduled test test_application_runs_heta_version passing first."
-        pytest.skip(message)
-
-    run_id = runs[0].run_id
-
-    # Explore run
-    run = Service().application_run(run_id).details()
-    print(
-        f"Found existing run: {run.run_id}\n"
-        f"application: {run.application_id} ({run.version_number})\n"
-        f"status: {run.state}, output: {run.output}\n"
-        f"submitted at: {run.submitted_at}, terminated at: {run.terminated_at}\n"
-        f"statistics: {run.statistics!r}\n",
-        f"custom_metadata: {run.custom_metadata!r}\n",
-    )
+    """Test that the user can open QuPath on a run."""
+    result = runner.invoke(cli, ["qupath", "uninstall"])
+    assert result.exit_code in {0, 2}, f"Uninstall command failed with exit code {result.exit_code}"
+    was_installed = not result.exit_code
 
-    # Explore results
-    results = list(Service().application_run(run_id).results())
-    assert results, f"No results found for run {run_id}"
-    for item in results:
-        print(
-            f"Found item: {item.item_id}, status: {item.state}, output: {item.output}, "
-            f"external_id: {item.external_id}\n"
-            f"custom_metadata: {item.custom_metadata!r}\n",
-        )
+    result = runner.invoke(cli, ["qupath", "install"])
+    assert f"QuPath v{QUPATH_VERSION} installed successfully" in normalize_output(result.output)
+    assert result.exit_code == 0
 
     with patch(
         "aignostics.application._gui._page_application_run_describe.get_user_data_directory", return_value=tmp_path
     ):
-        # Step 1: (Re)Install QuPath
-        result = runner.invoke(cli, ["qupath", "uninstall"])
-        assert result.exit_code in {0, 2}, f"Uninstall command failed with exit code {result.exit_code}"
-        was_installed = not result.exit_code
-
-        result = runner.invoke(cli, ["qupath", "install"])
-        output = normalize_output(result.output, strip_ansi=True)
-        assert f"QuPath v{QUPATH_VERSION} installed successfully" in output, (
-            f"Expected 'QuPath v{QUPATH_VERSION} installed successfully' in output.\nOutput: {output}"
-        )
-        assert result.exit_code == 0
-
-        # Step 2: Go to latest completed run via GUI
-        await user.open(f"/application/run/{run.run_id}")
-        await user.should_see(f"Run {run.run_id}")
-        await user.should_see(f"Run of {HETA_APPLICATION_ID} ({HETA_APPLICATION_VERSION})")
-
-        # Step 3: Open Result Download dialog
+        latest_version = Service().application_version_latest(Service().application(HETA_APPLICATION_ID))
+        latest_version_id = latest_version.application_version_id
+        runs = Service().application_runs(limit=1, status=ApplicationRunStatus.COMPLETED)
+
+        if not runs:
+            pytest.fail("No completed runs found, please run the test first.")
+        # Find a completed run with the latest application version ID
+        run = None
+        for potential_run in runs:
+            if potential_run.application_version_id == latest_version_id:
+                run = potential_run
+                break
+        if not run:
+            pytest.skip(f"No completed runs found with version {latest_version_id}")
+
+        # Step 1: Go to latest completed run
+        print(f"Found existing run: {run.application_run_id}, status: {run.status}")
+        await user.open(f"/application/run/{run.application_run_id}")
+        await user.should_see(f"Run {run.application_run_id}")
+        await user.should_see(f"Run of {latest_version_id}")
+
+        # Step 2: Open Result Download dialog
         await user.should_see(marker="BUTTON_OPEN_QUPATH", retries=100)
         user.find(marker="BUTTON_OPEN_QUPATH").click()
 
-        # Step 4: Select Data destination
+        # Step 3: Select Data
         await user.should_see(marker="BUTTON_DOWNLOAD_DESTINATION_DATA")
-        download_destination_data_button: ui.button = user.find(
-            marker="BUTTON_DOWNLOAD_DESTINATION_DATA"
-        ).elements.pop()
-        assert download_destination_data_button.enabled, "Download destination button should be enabled"
         user.find(marker="BUTTON_DOWNLOAD_DESTINATION_DATA").click()
-        await assert_notified(user, "Using Launchpad results directory", 30)
 
-        # Step 5: Trigger Download
+        # Step 4: Trigger Download
         await user.should_see(marker="DIALOG_BUTTON_DOWNLOAD_RUN")
-        download_run_button: ui.button = user.find(marker="DIALOG_BUTTON_DOWNLOAD_RUN").elements.pop()
-        assert download_run_button.enabled, "Download button should be enabled before downloading"
         user.find(marker="DIALOG_BUTTON_DOWNLOAD_RUN").click()
-        await assert_notified(user, "Downloading ...", 30)
 
-        # Step 6: Check download completes, QuPath project created, and QuPath launched
-        await assert_notified(user, "Download and QuPath project creation completed.", 60 * 5)
+        # Check: Download completed
+        await assert_notified(user, "Download and QuPath project creation completed.", 60)
         print_directory_structure(tmp_path, "execute")
-
-        # Check for directory layout as expected
-        run_dir = tmp_path / run.run_id
-        assert run_dir.is_dir(), f"Expected run directory {run_dir} not found"
-
-        subdirs = [d for d in run_dir.iterdir() if d.is_dir()]
-        assert len(subdirs) == 3, f"Expected three subdirectories in {run_dir}, but found {len(subdirs)}"
-
-        input_dir = run_dir / "input"
-        assert input_dir.is_dir(), f"Expected input directory {input_dir} not found"
-
-        results_dir = run_dir / SPOT_0_FILENAME.replace(".tiff", "")
-        assert results_dir.is_dir(), f"Expected run results directory {results_dir} not found"
-
-        qupath_dir = run_dir / "qupath"
-        assert qupath_dir.is_dir(), f"Expected QuPath directory {qupath_dir} not found"
-
-        # Check for input file having been downloaded
-        input_file = input_dir / SPOT_0_FILENAME
-        assert input_file.exists(), f"Expected input file {input_file} not found"
-        assert input_file.stat().st_size == SPOT_0_FILESIZE, (
-            f"Expected input file size {SPOT_0_FILESIZE}, but got {input_file.stat().st_size}"
+        run_out_dir = tmp_path / run.application_run_id
+        assert run_out_dir.is_dir(), f"Expected run directory {run_out_dir} not found"
+        # Find any subdirectory in the run_out_dir that is not qupath
+        subdirs = [d for d in run_out_dir.iterdir() if d.is_dir() and d.name != "qupath"]
+        assert len(subdirs) > 0, f"Expected at least one non-qupath subdirectory in {run_out_dir}, but found none"
+
+        # Take the first subdirectory found (item_out_dir)
+        item_out_dir = subdirs[0]
+        print(f"Found subdirectory: {item_out_dir.name}")
+
+        # Check for files in the item directory
+        files_in_item_dir = list(item_out_dir.glob("*"))
+        assert len(files_in_item_dir) == 9, (
+            f"Expected 9 files in {item_out_dir}, but found {len(files_in_item_dir)}: "
+            f"{[f.name for f in files_in_item_dir]}"
         )
-        # Check for files in the results directory
-        files_in_results_dir = list(results_dir.glob("*"))
-        assert len(files_in_results_dir) == 9, (
-            f"Expected 9 files in {results_dir}, but found {len(files_in_results_dir)}: "
-            f"{[f.name for f in files_in_results_dir]}"
-        )
-
-        print(f"Found files in {results_dir}:")
-        for filename, expected_size, tolerance_percent in SPOT_0_EXPECTED_RESULT_FILES:
-            file_path = results_dir / filename
-            if file_path.exists():
-                actual_size = file_path.stat().st_size
-                print(f"  {filename}: {actual_size} bytes (expected: {expected_size} ±{tolerance_percent}%)")
-            else:
-                print(f"  {filename}: NOT FOUND")
-        for filename, expected_size, tolerance_percent in SPOT_0_EXPECTED_RESULT_FILES:
-            file_path = results_dir / filename
-            assert file_path.exists(), f"Expected file {filename} not found"
-            actual_size = file_path.stat().st_size
-            min_size = expected_size * (100 - tolerance_percent) // 100
-            max_size = expected_size * (100 + tolerance_percent) // 100
-            assert min_size <= actual_size <= max_size, (
-                f"File size for {filename} ({actual_size} bytes) is outside allowed range "
-                f"({min_size} to {max_size} bytes, ±{tolerance_percent}% of {expected_size})"
-            )
-
         # Check QuPath is running
         notification = await assert_notified(user, "QuPath opened successfully", 30)
         pid_match = re.search(r"process id '(\d+)'", notification)
@@ -302,48 +218,20 @@ async def test_gui_run_qupath_install_to_inspect(  # noqa: C901, PLR0912, PLR0913, PLR0914, PLR0915, PLR0917
         except Exception as e:  # noqa: BLE001
             pytest.fail(f"Failed to kill QuPath process: {e}")
 
-        # Step 7: Inspect QuPath results
-        result = runner.invoke(cli, ["qupath", "inspect", str(qupath_dir)])
-        output = normalize_output(result.output, strip_ansi=True)
-        print(repr(output))
+        # Step 5: Inspect QuPath results
+        result = runner.invoke(cli, ["qupath", "inspect", str(run_out_dir / "qupath")])
+        assert result.exit_code == 0
 
-        # Check for (1) spot added to QuPath project, (2) heatmaps added, (3) spot annotated
-        try:
-            project_info = json.loads(output)
-            annotations_total = 0
-            spot_found = False
-            spot_width = None
-            spot_height = None
-            qc_segmentation_map_found = False
-            tissue_segmentation_map_found = False
-            for image in project_info["images"]:
-                if image.get("name") == SPOT_0_FILENAME:
-                    spot_found = True
-                    spot_width = image.get("width")
-                    spot_height = image.get("height")
-                    hierarchy = image.get("hierarchy", {})
-                    spot_annotations = hierarchy.get("total", 0)
-                if image.get("name") == "tissue_qc_segmentation_map_image.tiff":
-                    qc_segmentation_map_found = True
-                if image.get("name") == "tissue_segmentation_segmentation_map_image.tiff":
-                    tissue_segmentation_map_found = True
-            assert spot_found, f"Spot '{SPOT_0_FILENAME}' not found in QuPath project"
-            assert spot_width == SPOT_0_WIDTH, f"Expected width of spot {SPOT_0_WIDTH}, but got {spot_width}"
-            assert spot_height == SPOT_0_HEIGHT, f"Expected height of spot {SPOT_0_HEIGHT}, but got {spot_height}"
-            assert qc_segmentation_map_found, "QC segmentation map image not found in QuPath project"
-            assert tissue_segmentation_map_found, "Tissue segmentation map image not found in QuPath project"
-            assert abs(spot_annotations - SPOT_0_EXPECTED_CELLS_CLASSIFIED[0]) <= (
-                SPOT_0_EXPECTED_CELLS_CLASSIFIED[0] * SPOT_0_EXPECTED_CELLS_CLASSIFIED[1] // 100
-            ), (
-                f"Expected approximately {SPOT_0_EXPECTED_CELLS_CLASSIFIED[0]} "
-                f"({SPOT_0_EXPECTED_CELLS_CLASSIFIED[1]}% tolerance) annotations in the QuPath results, "
-                f"but found {annotations_total}"
-            )
-        except json.JSONDecodeError as e:
-            pytest.fail(f"Failed to parse QuPath inspect output as JSON: {e}\nOutput: {output!r}\n")
-
-        # Validate the inspect command exited successfully
-        assert result.exit_code == 0, f"QuPath inspect command failed with exit code {result.exit_code}"
-
-        if not was_installed:
-            result = runner.invoke(cli, ["qupath", "uninstall"])
+        # Check images have been annotated in the QuPath project created
+        print(result.output)
+        project_info = json.loads(result.output)
+        annotations_total = 0
+        for image in project_info["images"]:
+            hierarchy = image.get("hierarchy", {})
+            total = hierarchy.get("total", 0)
+            if total > 0:
+                annotations_total += total
+        assert annotations_total > 1000, "Expected at least 1001 annotations in the QuPath results"
+
+        if not was_installed:
+            result = runner.invoke(cli, ["qupath", "uninstall"])
diff --git a/tests/aignostics/system/cli_test.py b/tests/aignostics/system/cli_test.py
index 522e4ff7..59cff067 100644
--- a/tests/aignostics/system/cli_test.py
+++ b/tests/aignostics/system/cli_test.py
@@ -12,44 +12,37 @@
 from aignostics.utils import __project_name__
 from tests.conftest import normalize_output
 
-THE_VALUE = "test_secret_value_not_real_for_testing_only"
+THE_VALUE = "THE_VALUE"
 
 
-@pytest.mark.e2e
 @pytest.mark.scheduled
-@pytest.mark.timeout(timeout=60)
 def test_cli_health_json(runner: CliRunner, record_property) -> None:
     """Check health is true."""
     record_property("tested-item-id", "TEST-SYSTEM-CLI-HEALTH-JSON")
     result = runner.invoke(cli, ["system", "health"])
-    assert normalize_output(result.stdout).startswith('{ "status": "UP"')
     assert result.exit_code == 0
+    assert '"status": "UP"' in result.output
 
 
-@pytest.mark.e2e
-@pytest.mark.timeout(timeout=30)
+@pytest.mark.scheduled
 def test_cli_health_yaml(runner: CliRunner, record_property) -> None:
     """Check health is true."""
     record_property("tested-item-id", "TEST-SYSTEM-CLI-HEALTH-YAML")
     result = runner.invoke(cli, ["system", "health", "--output-format", "yaml"])
-    assert "status: UP" in result.output
     assert result.exit_code == 0
+    assert "status: UP" in result.output
 
 
-@pytest.mark.e2e
-@pytest.mark.timeout(timeout=30)
-def test_cli_info(runner: CliRunner, record_property) -> None:
+@pytest.mark.sequential
+def test_cli_info(runner: CliRunner) -> None:
     """Check aignostics.log in output of system info."""
-    record_property("tested-item-id", "SPEC-SYSTEM-SERVICE")
     result = runner.invoke(cli, ["system", "info"])
     assert result.exit_code == 0
     assert "aignostics.log" in result.output
 
 
-@pytest.mark.e2e
-@pytest.mark.timeout(timeout=30)
 @pytest.mark.sequential
-def test_cli_info_secrets(runner: CliRunner, caplog: pytest.LogCaptureFixture, record_property) -> None:
+def test_cli_info_secrets(runner: CliRunner, caplog: pytest.LogCaptureFixture) -> None:
     """Check secrets only shown if requested.
 
     This test verifies that secrets are properly masked by default and only shown
@@ -57,7 +50,6 @@
     secret values in test failure output and disable logging to prevent secret
     exposure in logs.
""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") # Disable all logging to prevent secrets from appearing in logs with runner.isolated_filesystem(), caplog.at_level(logging.CRITICAL + 1): # Set environment variable for the test @@ -84,13 +76,10 @@ def test_cli_info_secrets(runner: CliRunner, caplog: pytest.LogCaptureFixture, r assert secret_found, "Expected secret value to be present in unmasked output, but it was not found" -@pytest.mark.integration @patch("aignostics.utils._gui.gui_register_pages") @patch("nicegui.ui.run") -def test_cli_serve_api_and_app(mock_ui_run, mock_register_pages, runner: CliRunner, record_property) -> None: +def test_cli_serve_api_and_app(mock_ui_run, mock_register_pages, runner: CliRunner) -> None: """Check serve command starts the server with API and GUI app.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") - # Create mocks for components needed in gui_run mock_app = MagicMock() @@ -99,8 +88,7 @@ def test_cli_serve_api_and_app(mock_ui_run, mock_register_pages, runner: CliRunn result = runner.invoke(cli, ["system", "serve", "--host", "127.0.0.1", "--port", "8000"]) assert result.exit_code == 0 - assert "Starting web application server" in result.output - assert "http://127.0.0.1:8000" in result.output + assert "Starting web application server at http://127.0.0.1:8000" in result.output # Check that gui_register_pages was called mock_register_pages.assert_called_once() @@ -118,14 +106,11 @@ def test_cli_serve_api_and_app(mock_ui_run, mock_register_pages, runner: CliRunn show_welcome_message=True, show=False, window_size=None, - reconnect_timeout=60 * 60 * 24 * 7, ) -@pytest.mark.integration -def test_cli_openapi_yaml(runner: CliRunner, record_property) -> None: +def test_cli_openapi_yaml(runner: CliRunner) -> None: """Check openapi command outputs YAML schema.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") result = runner.invoke(cli, ["system", "openapi", "--output-format", "yaml"]) assert result.exit_code == 0 # Check for common OpenAPI YAML elements @@ -138,10 +123,8 @@ def test_cli_openapi_yaml(runner: CliRunner, record_property) -> None: assert "Error: Invalid API version 'v3'. 
Available versions: v1" in result.output -@pytest.mark.integration -def test_cli_openapi_json(runner: CliRunner, record_property) -> None: +def test_cli_openapi_json(runner: CliRunner) -> None: """Check openapi command outputs JSON schema.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") result = runner.invoke(cli, ["system", "openapi"]) assert result.exit_code == 0 # Check for common OpenAPI JSON elements @@ -150,19 +133,15 @@ def test_cli_openapi_json(runner: CliRunner, record_property) -> None: assert '"paths":' in result.output -@pytest.mark.integration -def test_cli_install(runner: CliRunner, record_property) -> None: +def test_cli_install(runner: CliRunner) -> None: """Check install command runs successfully.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") result = runner.invoke(cli, ["system", "install"]) assert result.exit_code == 0 -@pytest.mark.integration @pytest.mark.sequential -def test_cli_set_unset_get(runner: CliRunner, silent_logging, tmp_path, record_property) -> None: +def test_cli_set_unset_get(runner: CliRunner, silent_logging, tmp_path) -> None: """Check set, unset, and get commands.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") with patch("aignostics.system.Service._get_env_files_paths", return_value=[tmp_path / ".env"]): (tmp_path / ".env").touch() result = runner.invoke(cli, ["system", "config", "unset", "test_key"]) @@ -193,11 +172,9 @@ def test_cli_set_unset_get(runner: CliRunner, silent_logging, tmp_path, record_p assert "None" in result.output -@pytest.mark.integration @pytest.mark.sequential -def test_cli_remote_diagnostics(runner: CliRunner, silent_logging, tmp_path: Path, record_property) -> None: +def test_cli_remote_diagnostics(runner: CliRunner, silent_logging, tmp_path: Path) -> None: """Check disable/enable remote diagnostics.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") with patch("aignostics.system.Service._get_env_files_paths", return_value=[tmp_path / ".env"]): (tmp_path / ".env").touch() result = runner.invoke(cli, ["system", "config", "remote-diagnostics-disable"]) @@ -236,11 +213,9 @@ def test_cli_remote_diagnostics(runner: CliRunner, silent_logging, tmp_path: Pat assert "None" in result.output -@pytest.mark.integration @pytest.mark.sequential -def test_cli_http_proxy(runner: CliRunner, silent_logging, tmp_path: Path, record_property) -> None: # noqa: PLR0915 +def test_cli_http_proxy(runner: CliRunner, silent_logging, tmp_path: Path) -> None: # noqa: PLR0915 """Check disable/enable remote diagnostics.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") with patch("aignostics.system.Service._get_env_files_paths", return_value=[tmp_path / ".env"]): # Set up a mock .env file (tmp_path / ".env").touch() @@ -383,55 +358,3 @@ def test_cli_http_proxy(runner: CliRunner, silent_logging, tmp_path: Path, recor result = runner.invoke(cli, ["system", "config", "get", "CURL_CA_BUNDLE"]) assert result.exit_code == 0 assert "None" in result.output - - -@pytest.mark.integration -def test_cli_dump_dot_env_file(runner: CliRunner, silent_logging, tmp_path: Path) -> None: - """Check dump-dot-env-file command creates a file with all settings.""" - with patch("aignostics.system.Service._get_env_files_paths", return_value=[tmp_path / ".env"]): - # Create the .env file that the system expects to exist - (tmp_path / ".env").touch() - - # Set some test environment variables to verify they appear in the dump - env = os.environ.copy() - env["AIGNOSTICS_SYSTEM_TOKEN"] = "test_token_value" # noqa: S105 # 
Test data, not a real password - env["AIGNOSTICS_PLATFORM_BASE_URL"] = "https://test.example.com" - - # Create a destination file path - destination = tmp_path / ".env.test" - - # Run the dump-dot-env-file command - result = runner.invoke(cli, ["system", "dump-dot-env-file", "--destination", str(destination)], env=env) - - # Check the command succeeded - assert result.exit_code == 0 - assert f"Settings dumped to {destination}" in normalize_output(result.output) - - # Verify the file was created - assert destination.exists() - assert destination.is_file() - - # Read and verify the content - content = destination.read_text() - - # Check that the file is not empty - assert len(content) > 0 - - # Check that it contains some expected settings keys (should be in uppercase with prefix) - lines = content.strip().split("\n") - assert len(lines) > 0 - - # Verify format: each line should be KEY=VALUE - for line in lines: - if line.strip(): # Skip empty lines - assert "=" in line, f"Line '{line}' does not have KEY=VALUE format" - - # Check for specific settings that should be present - # The settings should be in the format ENV_PREFIX + KEY in uppercase - settings_keys = [key.split("=")[0] for key in lines if "=" in key] - - # Should contain system settings - assert any("AIGNOSTICS_SYSTEM" in key for key in settings_keys), "Should contain AIGNOSTICS_SYSTEM settings" - - # Verify that the token value is present (unmasked in dump) - assert "AIGNOSTICS_SYSTEM_TOKEN=test_token_value" in content or "AIGNOSTICS_SYSTEM_TOKEN=None" in content diff --git a/tests/aignostics/system/gui_test.py b/tests/aignostics/system/gui_test.py index a4e696ad..7c63736f 100644 --- a/tests/aignostics/system/gui_test.py +++ b/tests/aignostics/system/gui_test.py @@ -13,22 +13,10 @@ from aignostics.utils import __project_name__ -@pytest.mark.integration -async def test_gui_system_alive(user: User, record_property) -> None: - """Test that the user sees the info page with the mask secrets switch on by default.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE, SPEC-GUI-SERVICE") - - await user.open("/alive") - await user.should_see("Yes") - - -@pytest.mark.e2e -@pytest.mark.flaky(retries=2, delay=5, only_on=[AssertionError]) -@pytest.mark.timeout(timeout=60 * 3) @pytest.mark.sequential async def test_gui_system_switch_right(user: User, silent_logging, record_property) -> None: """Test that the user sees the info page with the mask secrets switch on by default.""" - record_property("tested-item-id", "TEST-SYSTEM-GUI-SETTINGS-MASKING-DEFAULT, SPEC-GUI-SERVICE") + record_property("tested-item-id", "TEST-SYSTEM-GUI-SETTINGS-MASKING-DEFAULT") await user.open("/system") await user.should_see(__project_name__) await user.should_see("Health") @@ -38,29 +26,3 @@ async def test_gui_system_switch_right(user: User, silent_logging, record_proper switch_interaction: UserInteraction = user.find("Mask secrets") switch_element: switch = switch_interaction.elements.pop() assert switch_element.value is True - - -@pytest.mark.integration -@pytest.mark.timeout(timeout=60) -async def test_gui_system_health_shown_and_updated(user: User, silent_logging, record_property) -> None: - """Test that health status is always visible in the footer on any page. - - As a user, I expect the current health of Launchpad to be always visible - in the footer of the GUI so that I can ensure the system is operational. 
- """ - record_property("tested-item-id", "TEST-SYSTEM-GUI-HEALTH, SWR-SYSTEM-GUI-HEALTH-1") - - # Test that health is visible on multiple pages to verify "always visible" - pages_to_test = [ - "/", # Main page - "/system", # System page - "/dataset/idc", # Dataset page - ] - - for page in pages_to_test: - # Navigate to the page - await user.open(page) - - # Verify the health status footer component is visible - # The health_link() is rendered in the footer and has a tooltip "Check Launchpad Status" - await user.should_see("Check Launchpad Status", retries=5 * 100) diff --git a/tests/aignostics/system/service_pooling_test.py b/tests/aignostics/system/service_pooling_test.py index bff8d2e7..8b77b8d4 100644 --- a/tests/aignostics/system/service_pooling_test.py +++ b/tests/aignostics/system/service_pooling_test.py @@ -1,19 +1,15 @@ """Tests for system service connection pooling.""" -import pytest - from aignostics.system._service import Service -@pytest.mark.unit -def test_http_pool_is_shared(record_property) -> None: +def test_http_pool_is_shared() -> None: """Test that Service._get_http_pool returns the same instance across multiple calls. This ensures that all service instances share the same urllib3.PoolManager for efficient connection reuse when calling ipify. """ # Get pool instance - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") pool1 = Service._get_http_pool() # Get pool instance again (should return same instance) @@ -23,14 +19,11 @@ def test_http_pool_is_shared(record_property) -> None: assert pool1 is pool2, "Service._get_http_pool should return the same PoolManager instance" -@pytest.mark.unit -def test_http_pool_singleton(record_property) -> None: +def test_http_pool_singleton() -> None: """Test that Service._http_pool maintains a singleton pattern. Multiple service instances should share the same connection pool. 
""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") - # Create two service instances service1 = Service() service2 = Service() diff --git a/tests/aignostics/system/service_test.py b/tests/aignostics/system/service_test.py index abebfe1e..9a764a3f 100644 --- a/tests/aignostics/system/service_test.py +++ b/tests/aignostics/system/service_test.py @@ -8,11 +8,9 @@ from aignostics.system._service import Service -@pytest.mark.unit @pytest.mark.timeout(15) -def test_is_token_valid(record_property) -> None: +def test_is_token_valid() -> None: """Test that is_token_valid works correctly with environment variable.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") # Set the environment variable for the test the_value = "the_value" with mock.patch.dict(os.environ, {"AIGNOSTICS_SYSTEM_TOKEN": the_value}): @@ -29,10 +27,8 @@ def test_is_token_valid(record_property) -> None: assert service.is_token_valid("") is False -@pytest.mark.unit -def test_is_token_valid_when_not_set(record_property) -> None: +def test_is_token_valid_when_not_set() -> None: """Test that is_token_valid handles the case when no token is set.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") # Ensure the environment variable is not set with mock.patch.dict(os.environ, {"AIGNOSTICS_SYSTEM_TOKEN": ""}, clear=True): # Create a new service instance with no token set @@ -43,10 +39,8 @@ def test_is_token_valid_when_not_set(record_property) -> None: assert service.is_token_valid("") is False -@pytest.mark.unit -def test_is_secret_key_word_boundary_matching_positive_cases(record_property) -> None: +def test_is_secret_key_word_boundary_matching_positive_cases() -> None: """Test that word boundary terms are correctly identified as secrets.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") # Test cases where "id" appears as a whole word - should be detected secret_keys = [ "id", # Exact match @@ -64,10 +58,8 @@ def test_is_secret_key_word_boundary_matching_positive_cases(record_property) -> assert Service._is_secret_key(key), f"Expected '{key}' to be identified as a secret key" -@pytest.mark.unit -def test_is_secret_key_word_boundary_matching_negative_cases(record_property) -> None: +def test_is_secret_key_word_boundary_matching_negative_cases() -> None: """Test that word boundary terms do not match partial words.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") # Test cases where "id" appears as part of a larger word - should NOT be detected non_secret_keys = [ "valid", # Contains "id" but not as whole word @@ -84,7 +76,6 @@ def test_is_secret_key_word_boundary_matching_negative_cases(record_property) -> assert not Service._is_secret_key(key), f"Expected '{key}' to NOT be identified as a secret key" -@pytest.mark.unit def test_is_secret_key_string_match_terms_positive_cases(record_property) -> None: """Test that string match terms are correctly identified as secrets.""" record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") @@ -157,10 +148,8 @@ def test_is_secret_key_string_match_terms_positive_cases(record_property) -> Non assert Service._is_secret_key(key), f"Expected '{key}' to be identified as a secret key" -@pytest.mark.unit -def test_is_secret_key_string_match_terms_edge_cases(record_property) -> None: +def test_is_secret_key_string_match_terms_edge_cases() -> None: """Test edge cases for string matching.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") # Test that partial matches work correctly edge_cases = [ "keychain", # Contains "key" @@ -176,10 +165,8 @@ def 
test_is_secret_key_string_match_terms_edge_cases(record_property) -> None: assert Service._is_secret_key(key), f"Expected '{key}' to be identified as a secret key" -@pytest.mark.unit -def test_is_secret_key_non_secret_keys(record_property) -> None: +def test_is_secret_key_non_secret_keys() -> None: """Test that non-secret keys are correctly identified as non-secrets.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") non_secret_keys = [ # Regular configuration keys "database_host", @@ -230,10 +217,8 @@ def test_is_secret_key_non_secret_keys(record_property) -> None: assert not Service._is_secret_key(key), f"Expected '{key}' to NOT be identified as a secret key" -@pytest.mark.unit -def test_is_secret_key_case_insensitivity(record_property) -> None: +def test_is_secret_key_case_insensitivity() -> None: """Test that the method is case insensitive.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") test_cases = [ ("PASSWORD", True), ("password", True), @@ -254,10 +239,8 @@ def test_is_secret_key_case_insensitivity(record_property) -> None: assert result == expected, f"Expected _is_secret_key('{key}') to return {expected}, got {result}" -@pytest.mark.unit -def test_is_secret_key_special_characters_and_boundaries(record_property) -> None: +def test_is_secret_key_special_characters_and_boundaries() -> None: """Test handling of special characters and word boundaries.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") test_cases = [ # Word boundary cases for "id" ("_id_", True), # Surrounded by underscores @@ -282,10 +265,8 @@ def test_is_secret_key_special_characters_and_boundaries(record_property) -> Non assert result == expected, f"Expected _is_secret_key('{key}') to return {expected}, got {result}" -@pytest.mark.unit -def test_is_secret_key_empty_and_none_like_inputs(record_property) -> None: +def test_is_secret_key_empty_and_none_like_inputs() -> None: """Test edge cases with empty or minimal inputs.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") test_cases = [ ("", False), # Empty string (" ", False), # Whitespace only @@ -298,10 +279,8 @@ def test_is_secret_key_empty_and_none_like_inputs(record_property) -> None: assert result == expected, f"Expected _is_secret_key('{key}') to return {expected}, got {result}" -@pytest.mark.unit -def test_is_secret_key_real_world_examples(record_property) -> None: +def test_is_secret_key_real_world_examples() -> None: """Test with real-world examples of environment variable names.""" - record_property("tested-item-id", "SPEC-SYSTEM-SERVICE") # Common secret environment variables (should return True) secret_examples = [ "AWS_ACCESS_KEY_ID", diff --git a/tests/aignostics/utils/cli_test.py b/tests/aignostics/utils/cli_test.py index 01b83951..86a88e34 100644 --- a/tests/aignostics/utils/cli_test.py +++ b/tests/aignostics/utils/cli_test.py @@ -73,10 +73,8 @@ def mock_group(mock_typer: MockTyper) -> MockGroup: return MockGroup(mock_typer) -@pytest.mark.unit -def test_prepare_cli_adds_typers(mock_typer: MockTyper, record_property) -> None: +def test_prepare_cli_adds_typers(mock_typer: MockTyper) -> None: """Test that prepare_cli correctly adds discovered typers.""" - record_property("tested-item-id", "SPEC-UTILS-SERVICE") with patch(LOCATE_IMPLEMENTATIONS_PATH) as mock_locate: # Create a different typer instance to be discovered other_typer = MockTyper() @@ -91,26 +89,21 @@ def test_prepare_cli_adds_typers(mock_typer: MockTyper, record_property) -> None mock_typer.add_typer.assert_called_once_with(other_typer) 
-@pytest.mark.unit
-def test_prepare_cli_sets_epilog(mock_typer: MockTyper, record_property) -> None:
+def test_prepare_cli_sets_epilog(mock_typer: MockTyper) -> None:
     """Test that prepare_cli correctly sets the epilog."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch(LOCATE_IMPLEMENTATIONS_PATH, return_value=[]):
         prepare_cli(mock_typer, TEST_EPILOG)

     assert mock_typer.info.epilog == TEST_EPILOG


-@pytest.mark.unit
 @pytest.mark.skip(reason="https://github.com/fastapi/typer/pull/1240")
-def test_prepare_cli_sets_no_args_is_help(mock_typer: MockTyper, record_property) -> None:
+def test_prepare_cli_sets_no_args_is_help(mock_typer: MockTyper) -> None:
     """Test that prepare_cli correctly sets no_args_is_help."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch(LOCATE_IMPLEMENTATIONS_PATH, return_value=[]):
         prepare_cli(mock_typer, TEST_EPILOG)

     assert mock_typer.info.no_args_is_help is True


-@pytest.mark.unit
 @pytest.mark.parametrize(
     ("argv_parts", "expected_calls"),
     [
@@ -122,10 +115,8 @@ def test_prepare_cli_conditional_epilog_recursion(
     argv_parts: list[str],
     expected_calls: int,
     mock_typer: MockTyper,
-    record_property,
 ) -> None:
     """Test that prepare_cli conditionally calls _add_epilog_recursively based on sys.argv."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch(LOCATE_IMPLEMENTATIONS_PATH, return_value=[]),
         patch("aignostics.utils._cli.Path") as mock_path,
@@ -136,11 +127,9 @@ def test_prepare_cli_conditional_epilog_recursion(
     assert mock_add_epilog.call_count == expected_calls


-@pytest.mark.unit
-def test_add_epilog_recursively_with_cycle(mock_typer: MockTyper, record_property) -> None:
+def test_add_epilog_recursively_with_cycle(mock_typer: MockTyper) -> None:
     """Test that _add_epilog_recursively handles cycles in the typer structure."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
-    # Create a cycle by having the typer external_id itself
+    # Create a cycle by having the typer reference itself
     mock_typer.registered_groups = []
     group = Mock(spec=TyperInfo)
     group.typer_instance = mock_typer
diff --git a/tests/aignostics/utils/di_test.py b/tests/aignostics/utils/di_test.py
index f9fdf44a..2af3f7be 100644
--- a/tests/aignostics/utils/di_test.py
+++ b/tests/aignostics/utils/di_test.py
@@ -17,11 +17,9 @@
 SCRIPT_FILENAME = "script.py"


-@pytest.mark.unit
 @patch("aignostics.utils._cli.locate_implementations")
-def test_prepare_cli_registers_subcommands(mock_locate_implementations: Mock, record_property) -> None:
+def test_prepare_cli_registers_subcommands(mock_locate_implementations: Mock) -> None:
     """Test that prepare_cli registers all located implementations."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     # Setup
     cli = typer.Typer()
     mock_subcli = typer.Typer()
@@ -35,11 +33,9 @@ def test_prepare_cli_registers_subcommands(mock_locate_implementations: Mock, re
     assert mock_subcli in [group.typer_instance for group in cli.registered_groups]


-@pytest.mark.unit
 @patch("aignostics.utils._cli.locate_implementations")
-def test_prepare_cli_sets_epilog_and_no_args_help(mock_locate_implementations: Mock, record_property) -> None:
+def test_prepare_cli_sets_epilog_and_no_args_help(mock_locate_implementations: Mock) -> None:
     """Test that prepare_cli sets epilog and no_args_is_help on the cli instance."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     # Setup
     cli = typer.Typer()
     mock_locate_implementations.return_value = [cli]
@@ -53,14 +49,12 @@ def test_prepare_cli_sets_epilog_and_no_args_help(mock_locate_implementations: M
     # assert cli.info.no_args_is_help is True


-@pytest.mark.unit
 @patch("aignostics.utils._cli.Path")
 @patch("aignostics.utils._cli.locate_implementations")
 def test_prepare_cli_adds_epilog_to_commands_when_not_running_from_typer(
-    mock_locate_implementations: Mock, mock_path: Mock, record_property
+    mock_locate_implementations: Mock, mock_path: Mock
 ) -> None:
     """Test that prepare_cli adds epilog to commands when not running from typer."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     # Setup
     cli = typer.Typer()
     mock_command = MagicMock()
@@ -76,15 +70,13 @@ def test_prepare_cli_adds_epilog_to_commands_when_not_running_from_typer(
     assert mock_command.epilog == TEST_EPILOG


-@pytest.mark.unit
 @patch("aignostics.utils._cli._add_epilog_recursively")
 @patch("aignostics.utils._cli.Path")
 @patch("aignostics.utils._cli.locate_implementations")
 def test_prepare_cli_calls_add_epilog_recursively_when_not_running_from_typer(
-    mock_locate_implementations: Mock, mock_path: Mock, mock_add_epilog_recursively: Mock, record_property
+    mock_locate_implementations: Mock, mock_path: Mock, mock_add_epilog_recursively: Mock
 ) -> None:
     """Test that prepare_cli calls _add_epilog_recursively when not running from typer."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     # Setup
     cli = typer.Typer()
     mock_locate_implementations.return_value = [cli]
@@ -98,14 +90,12 @@ def test_prepare_cli_calls_add_epilog_recursively_when_not_running_from_typer(
     mock_add_epilog_recursively.assert_called_once_with(cli, TEST_EPILOG)


-@pytest.mark.unit
 @patch("aignostics.utils._cli._no_args_is_help_recursively")
 @patch("aignostics.utils._cli.locate_implementations")
 def test_prepare_cli_calls_no_args_is_help_recursively(
-    mock_locate_implementations: Mock, mock_no_args_is_help_recursively: Mock, record_property
+    mock_locate_implementations: Mock, mock_no_args_is_help_recursively: Mock
 ) -> None:
     """Test that prepare_cli calls _no_args_is_help_recursively."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     # Setup
     cli = typer.Typer()
     mock_locate_implementations.return_value = [cli]
@@ -117,10 +107,8 @@ def test_prepare_cli_calls_no_args_is_help_recursively(
     mock_no_args_is_help_recursively.assert_called_once_with(cli)


-@pytest.mark.unit
-def test_add_epilog_recursively_sets_epilog_on_cli(record_property) -> None:
+def test_add_epilog_recursively_sets_epilog_on_cli() -> None:
     """Test that _add_epilog_recursively sets epilog on the cli instance."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     # Setup
     cli = typer.Typer()

@@ -131,10 +119,8 @@ def test_add_epilog_recursively_sets_epilog_on_cli(record_property) -> None:
     assert cli.info.epilog == TEST_EPILOG


-@pytest.mark.unit
-def test_add_epilog_recursively_sets_epilog_on_nested_typers(record_property) -> None:
+def test_add_epilog_recursively_sets_epilog_on_nested_typers() -> None:
     """Test that _add_epilog_recursively sets epilog on nested typer instances."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     # Setup
     cli = typer.Typer()
     subcli = typer.Typer()
@@ -147,10 +133,8 @@ def test_add_epilog_recursively_sets_epilog_on_nested_typers(record_property) ->
     assert subcli.info.epilog == TEST_EPILOG


-@pytest.mark.unit
-def test_no_args_is_help_recursively_sets_no_args_is_help_on_groups(record_property) -> None:
+def test_no_args_is_help_recursively_sets_no_args_is_help_on_groups() -> None:
     """Test that _no_args_is_help_recursively sets no_args_is_help on groups."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     # Setup
     cli = typer.Typer()
     subcli = typer.Typer()
@@ -169,11 +153,9 @@ def test_no_args_is_help_recursively_sets_no_args_is_help_on_groups(record_prope
     mock_group.no_args_is_help = True


-@pytest.mark.unit
 @pytest.mark.skip(reason="https://github.com/fastapi/typer/pull/1240")
-def test_no_args_is_help_recursively_calls_itself_on_nested_typers(record_property) -> None:
+def test_no_args_is_help_recursively_calls_itself_on_nested_typers() -> None:
     """Test that _no_args_is_help_recursively calls itself on nested typer instances."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     # Setup
     cli = typer.Typer()
     subcli = typer.Typer()
diff --git a/tests/aignostics/utils/fs_test.py b/tests/aignostics/utils/fs_test.py
index ce41ed32..80f0b522 100644
--- a/tests/aignostics/utils/fs_test.py
+++ b/tests/aignostics/utils/fs_test.py
@@ -16,38 +16,30 @@
 log = get_logger(__name__)


-@pytest.mark.unit
-def test_string_input_returns_string(record_property) -> None:
+def test_string_input_returns_string() -> None:
     """Test that string input returns string output."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     result = sanitize_path("test/path")
     assert isinstance(result, str)
     assert result == "test/path"


-@pytest.mark.unit
-def test_path_input_returns_path(record_property) -> None:
+def test_path_input_returns_path() -> None:
     """Test that Path input returns Path output."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     input_path = Path("test/path")
     result = sanitize_path(input_path)
     assert isinstance(result, Path)
     assert str(result) == str(Path("test/path"))


-@pytest.mark.unit
-def test_colon_replacement_on_all_platforms(record_property) -> None:
+def test_colon_replacement_on_all_platforms() -> None:
     """Test that colons are replaced on all platforms."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Linux"):
         result = sanitize_path("test:path:with:colons")
         assert result == "test_path_with_colons"


-@pytest.mark.unit
-def test_windows_colon_replacement_enabled(record_property) -> None:
+def test_windows_colon_replacement_enabled() -> None:
     """Test colon replacement on Windows when enabled."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         result = sanitize_path("test:path:with:colons")
         assert result == "test_path_with_colons"
@@ -56,37 +48,29 @@ def test_windows_colon_replacement_enabled(record_property) -> None:
         assert result == "test_path_with_colons"


-@pytest.mark.unit
-def test_windows_drive_letter_preserved(record_property) -> None:
+def test_windows_drive_letter_preserved() -> None:
     """Test that Windows drive letters are preserved when replacing colons."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         result = sanitize_path("C:/test:path")
         assert result == "C:/test_path"


-@pytest.mark.unit
-def test_windows_drive_letter_with_multiple_colons(record_property) -> None:
+def test_windows_drive_letter_with_multiple_colons() -> None:
     """Test drive letter preservation with multiple colons."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         result = sanitize_path("D:/folder:name:with:colons")
         assert result == "D:/folder_name_with_colons"


-@pytest.mark.unit
-def test_windows_no_drive_letter_all_colons_replaced(record_property) -> None:
+def test_windows_no_drive_letter_all_colons_replaced() -> None:
     """Test that all colons are replaced when no drive letter is present."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         result = sanitize_path("folder:name:with:colons")
         assert result == "folder_name_with_colons"


-@pytest.mark.unit
-def test_windows_single_char_with_colon_is_drive(record_property) -> None:
+def test_windows_single_char_with_colon_is_drive() -> None:
     """Test that single character with colon IS treated as drive letter."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         # "a:test" has colon in position 1 and 'a' is alphabetic, so it IS treated as a drive letter
         # Only the part after the drive letter should have colons replaced
@@ -94,10 +78,8 @@ def test_windows_single_char_with_colon_is_drive(record_property) -> None:
         assert result == "a:test"  # Drive letter preserved, no additional colons to replace


-@pytest.mark.unit
-def test_windows_numeric_with_colon_not_drive(record_property) -> None:
+def test_windows_numeric_with_colon_not_drive() -> None:
     """Test that numeric character with colon is not treated as drive letter."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         result = sanitize_path("1:test")
         assert result == "1_test"  # All colons replaced since '1' is not alphabetic
@@ -105,10 +87,8 @@ def test_windows_numeric_with_colon_not_drive(record_property) -> None:
         assert result == "1_/test"


-@pytest.mark.unit
-def test_windows_reserved_path_raises_error(record_property) -> None:
+def test_windows_reserved_path_raises_error() -> None:
     """Test that reserved Windows paths raise ValueError."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("platform.system", return_value="Windows"),
         patch("pathlib.PureWindowsPath.is_reserved", return_value=True),
@@ -117,10 +97,8 @@ def test_windows_reserved_path_raises_error(record_property) -> None:
         sanitize_path("CON")


-@pytest.mark.unit
-def test_windows_non_reserved_path_succeeds(record_property) -> None:
+def test_windows_non_reserved_path_succeeds() -> None:
     """Test that non-reserved Windows paths succeed."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("platform.system", return_value="Windows"),
         patch("pathlib.PureWindowsPath.is_reserved", return_value=False),
@@ -129,10 +107,8 @@ def test_windows_non_reserved_path_succeeds(record_property) -> None:
         assert result == "valid_path"


-@pytest.mark.unit
-def test_windows_reserved_path_with_path_object(record_property) -> None:
+def test_windows_reserved_path_with_path_object() -> None:
     """Test that reserved Windows paths raise ValueError with Path input."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("platform.system", return_value="Windows"),
         patch("pathlib.PureWindowsPath.is_reserved", return_value=True),
@@ -141,10 +117,8 @@ def test_windows_reserved_path_with_path_object(record_property) -> None:
         sanitize_path(Path("PRN"))


-@pytest.mark.unit
-def test_windows_reserved_path_after_colon_replacement(record_property) -> None:
+def test_windows_reserved_path_after_colon_replacement() -> None:
     """Test reserved path check happens after colon replacement."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("platform.system", return_value="Windows"),
         patch("pathlib.PureWindowsPath.is_reserved", return_value=True),
@@ -153,10 +127,8 @@ def test_windows_reserved_path_after_colon_replacement(record_property) -> None:
         sanitize_path("test:AUX")


-@pytest.mark.unit
-def test_non_windows_reserved_check_skipped(record_property) -> None:
+def test_non_windows_reserved_check_skipped() -> None:
     """Test that reserved path check is skipped on non-Windows systems."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("platform.system", return_value="Linux"),
         patch("pathlib.PureWindowsPath.is_reserved", return_value=True),
@@ -166,19 +138,15 @@ def test_non_windows_reserved_check_skipped(record_property) -> None:
         assert result == "CON"


-@pytest.mark.unit
-def test_windows_empty_string(record_property) -> None:
+def test_windows_empty_string() -> None:
     """Test handling of empty string on Windows."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         result = sanitize_path("")
         assert not result


-@pytest.mark.unit
-def test_windows_path_object_preserves_type(record_property) -> None:
+def test_windows_path_object_preserves_type() -> None:
     """Test that Path object input returns Path object with colon replacement."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         input_path = Path("test:path")
         result = sanitize_path(input_path)
@@ -186,38 +154,30 @@ def test_windows_path_object_preserves_type(record_property) -> None:
         assert str(result) == "test_path"


-@pytest.mark.unit
-def test_windows_complex_path_with_drive(record_property) -> None:
+def test_windows_complex_path_with_drive() -> None:
     """Test complex Windows path with drive letter and multiple colons."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         result = sanitize_path("C:/Users/test:user/Documents/file:name.txt")
         assert result == "C:/Users/test_user/Documents/file_name.txt"


 # Tests for sanitize_path_component function
-@pytest.mark.unit
-def test_sanitize_path_component_all_platforms(record_property) -> None:
+def test_sanitize_path_component_all_platforms() -> None:
     """Test that sanitize_path_component replaces colons on all platforms."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Linux"):
         result = sanitize_path_component("test:component:with:colons")
         assert result == "test_component_with_colons"


-@pytest.mark.unit
-def test_sanitize_path_component_windows_replaces_all_colons(record_property) -> None:
+def test_sanitize_path_component_windows_replaces_all_colons() -> None:
     """Test that sanitize_path_component replaces all colons on Windows."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         result = sanitize_path_component("test:component:with:colons")
         assert result == "test_component_with_colons"


-@pytest.mark.unit
-def test_sanitize_path_component_windows_drive_like_pattern(record_property) -> None:
+def test_sanitize_path_component_windows_drive_like_pattern() -> None:
     """Test that sanitize_path_component replaces colons even for drive-like patterns."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         result = sanitize_path_component("a:whatever")
         assert result == "a_whatever"
@@ -225,58 +185,46 @@ def test_sanitize_path_component_windows_drive_like_pattern(record_property) ->
         assert result == "C_filename"


-@pytest.mark.unit
-def test_sanitize_path_component_windows_empty_string(record_property) -> None:
+def test_sanitize_path_component_windows_empty_string() -> None:
     """Test that sanitize_path_component handles empty string."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         result = sanitize_path_component("")
         assert not result


-@pytest.mark.unit
-def test_sanitize_path_component_windows_no_colons(record_property) -> None:
+def test_sanitize_path_component_windows_no_colons() -> None:
     """Test that sanitize_path_component returns unchanged when no colons."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         result = sanitize_path_component("normal_filename.txt")
         assert result == "normal_filename.txt"


-@pytest.mark.unit
-def test_sanitize_path_component_multiple_consecutive_colons(record_property) -> None:
+def test_sanitize_path_component_multiple_consecutive_colons() -> None:
     """Test that sanitize_path_component handles multiple consecutive colons."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         result = sanitize_path_component("file:::name")
         assert result == "file___name"


 # Tests for integration between sanitize_path and sanitize_path_component
-@pytest.mark.unit
-def test_sanitize_path_uses_sanitize_path_component_for_drive_path(record_property) -> None:
+def test_sanitize_path_uses_sanitize_path_component_for_drive_path() -> None:
     """Test that sanitize_path uses sanitize_path_component for the non-drive part."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         # Drive letter should be preserved, but rest should be sanitized using sanitize_path_component
         result = sanitize_path("C:/folder:name:with:colons")
         assert result == "C:/folder_name_with_colons"


-@pytest.mark.unit
-def test_sanitize_path_uses_sanitize_path_component_for_non_drive_path(record_property) -> None:
+def test_sanitize_path_uses_sanitize_path_component_for_non_drive_path() -> None:
     """Test that sanitize_path uses sanitize_path_component for paths without drive letters."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with patch("platform.system", return_value="Windows"):
         result = sanitize_path("folder:name:with:colons")
         assert result == "folder_name_with_colons"


 # Tests for get_user_data_directory function
-@pytest.mark.integration
-def test_get_user_data_directory_without_scope(record_property, tmp_path) -> None:
+def test_get_user_data_directory_without_scope(tmp_path) -> None:
     """Test get_user_data_directory returns correct path without scope."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("aignostics.utils._fs.platformdirs.user_data_dir") as mock_user_data_dir,
         patch("aignostics.utils._fs.__project_name__", "test_project"),
@@ -293,10 +241,8 @@ def test_get_user_data_directory_without_scope(record_property, tmp_path) -> Non
         mock_mkdir.assert_called_once_with(parents=True, exist_ok=True)


-@pytest.mark.integration
-def test_get_user_data_directory_with_scope(record_property, tmp_path) -> None:
+def test_get_user_data_directory_with_scope(tmp_path) -> None:
     """Test get_user_data_directory returns correct path with scope."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("aignostics.utils._fs.platformdirs.user_data_dir") as mock_user_data_dir,
         patch("aignostics.utils._fs.__project_name__", "test_project"),
@@ -313,10 +259,8 @@ def test_get_user_data_directory_with_scope(record_property, tmp_path) -> None:
         mock_mkdir.assert_called_once_with(parents=True, exist_ok=True)


-@pytest.mark.integration
-def test_get_user_data_directory_with_nested_scope(record_property, tmp_path) -> None:
+def test_get_user_data_directory_with_nested_scope(tmp_path) -> None:
     """Test get_user_data_directory returns correct path with nested scope."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("aignostics.utils._fs.platformdirs.user_data_dir") as mock_user_data_dir,
         patch("aignostics.utils._fs.__project_name__", "test_project"),
@@ -333,10 +277,8 @@ def test_get_user_data_directory_with_nested_scope(record_property, tmp_path) ->
         mock_mkdir.assert_called_once_with(parents=True, exist_ok=True)


-@pytest.mark.integration
-def test_get_user_data_directory_read_only_environment_no_mkdir(record_property, tmp_path) -> None:
+def test_get_user_data_directory_read_only_environment_no_mkdir(tmp_path) -> None:
     """Test get_user_data_directory doesn't create directory in read-only environment."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("aignostics.utils._fs.platformdirs.user_data_dir") as mock_user_data_dir,
         patch("aignostics.utils._fs.__project_name__", "test_project"),
@@ -353,10 +295,8 @@ def test_get_user_data_directory_read_only_environment_no_mkdir(record_property,
         mock_mkdir.assert_not_called()


-@pytest.mark.integration
-def test_get_user_data_directory_empty_scope(record_property, tmp_path) -> None:
+def test_get_user_data_directory_empty_scope(tmp_path) -> None:
     """Test get_user_data_directory handles empty scope string."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("aignostics.utils._fs.platformdirs.user_data_dir") as mock_user_data_dir,
         patch("aignostics.utils._fs.__project_name__", "test_project"),
@@ -373,10 +313,8 @@ def test_get_user_data_directory_empty_scope(record_property, tmp_path) -> None:
         mock_mkdir.assert_called_once_with(parents=True, exist_ok=True)


-@pytest.mark.integration
-def test_get_user_data_directory_none_scope(record_property, tmp_path) -> None:
+def test_get_user_data_directory_none_scope(tmp_path) -> None:
     """Test get_user_data_directory handles None scope."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("aignostics.utils._fs.platformdirs.user_data_dir") as mock_user_data_dir,
         patch("aignostics.utils._fs.__project_name__", "test_project"),
@@ -394,10 +332,8 @@ def test_get_user_data_directory_none_scope(record_property, tmp_path) -> None:
         mock_mkdir.assert_called_once_with(parents=True, exist_ok=True)


 # Tests for open_user_data_directory function
-@pytest.mark.integration
-def test_open_user_data_directory_without_scope(record_property, tmp_path) -> None:
+def test_open_user_data_directory_without_scope(tmp_path) -> None:
     """Test open_user_data_directory opens correct directory without scope."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("aignostics.utils._fs.platformdirs.user_data_dir") as mock_user_data_dir,
         patch("aignostics.utils._fs.__project_name__", "test_project"),
@@ -416,10 +352,8 @@ def test_open_user_data_directory_without_scope(record_property, tmp_path) -> No
         mock_show_in_file_manager.assert_called_once_with(str(tmp_path / "test_project"))


-@pytest.mark.integration
-def test_open_user_data_directory_with_scope(record_property, tmp_path) -> None:
+def test_open_user_data_directory_with_scope(tmp_path) -> None:
     """Test open_user_data_directory opens correct directory with scope."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("aignostics.utils._fs.platformdirs.user_data_dir") as mock_user_data_dir,
         patch("aignostics.utils._fs.__project_name__", "test_project"),
@@ -438,10 +372,8 @@ def test_open_user_data_directory_with_scope(record_property, tmp_path) -> None:
         mock_show_in_file_manager.assert_called_once_with(str(tmp_path / "test_project" / "logs"))


-@pytest.mark.integration
-def test_open_user_data_directory_with_nested_scope(record_property, tmp_path) -> None:
+def test_open_user_data_directory_with_nested_scope(tmp_path) -> None:
     """Test open_user_data_directory opens correct directory with nested scope."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("aignostics.utils._fs.platformdirs.user_data_dir") as mock_user_data_dir,
         patch("aignostics.utils._fs.__project_name__", "test_project"),
@@ -460,10 +392,8 @@ def test_open_user_data_directory_with_nested_scope(record_property, tmp_path) -
         mock_show_in_file_manager.assert_called_once_with(str(tmp_path / "test_project" / "cache" / "models"))


-@pytest.mark.integration
-def test_open_user_data_directory_read_only_environment(record_property, tmp_path) -> None:
+def test_open_user_data_directory_read_only_environment(tmp_path) -> None:
     """Test open_user_data_directory works in read-only environment."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("aignostics.utils._fs.platformdirs.user_data_dir") as mock_user_data_dir,
         patch("aignostics.utils._fs.__project_name__", "test_project"),
@@ -482,10 +412,8 @@ def test_open_user_data_directory_read_only_environment(record_property, tmp_pat
         mock_show_in_file_manager.assert_called_once_with(str(tmp_path / "test_project" / "data"))


-@pytest.mark.integration
-def test_open_user_data_directory_show_in_file_manager_exception(record_property, tmp_path) -> None:
+def test_open_user_data_directory_show_in_file_manager_exception(tmp_path) -> None:
     """Test open_user_data_directory handles show_in_file_manager exceptions gracefully."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         patch("aignostics.utils._fs.platformdirs.user_data_dir") as mock_user_data_dir,
         patch("aignostics.utils._fs.__project_name__", "test_project"),
diff --git a/tests/aignostics/utils/gui_test.py b/tests/aignostics/utils/gui_test.py
index 9ec2c9b9..edffdb25 100644
--- a/tests/aignostics/utils/gui_test.py
+++ b/tests/aignostics/utils/gui_test.py
@@ -14,26 +14,22 @@
 )


-@pytest.mark.unit
-def test_base_page_builder_is_abstract(record_property) -> None:
+def test_base_page_builder_is_abstract() -> None:
     """Test that BasePageBuilder is an abstract class.

     Args:
-        record_property: pytest record_property fixture
+        None
     """
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE, SPEC-GUI-SERVICE")
     with pytest.raises(TypeError):
         BasePageBuilder()  # type: ignore # Cannot instantiate abstract class


-@pytest.mark.unit
-def test_register_pages_is_abstract(record_property) -> None:
+def test_register_pages_is_abstract() -> None:
     """Test that register_pages is an abstract method.

     Args:
-        record_property: pytest record_property fixture
+        None
     """
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE, SPEC-GUI-SERVICE")

     class IncompletePageBuilder(BasePageBuilder):
         pass
@@ -42,16 +38,13 @@ class IncompletePageBuilder(BasePageBuilder):
         IncompletePageBuilder()  # type: ignore # Abstract method not implemented


-@pytest.mark.unit
 @mock.patch("aignostics.utils._gui.locate_subclasses")
-def test_register_pages_calls_all_builders(mock_locate_subclasses: mock.MagicMock, record_property) -> None:
+def test_register_pages_calls_all_builders(mock_locate_subclasses: mock.MagicMock) -> None:
     """Test that gui_register_pages calls register_pages on all builders.

     Args:
         mock_locate_subclasses: Mock for locate_subclasses function
-        record_property: pytest record_property fixture
     """
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE, SPEC-GUI-SERVICE")
     # Create mock page builders
     mock_builder1 = mock.MagicMock()
     mock_builder2 = mock.MagicMock()
@@ -65,20 +58,18 @@ def test_register_pages_calls_all_builders(mock_locate_subclasses: mock.MagicMoc
     mock_builder2.register_pages.assert_called_once()


-@pytest.mark.unit
-@pytest.mark.skip(reason="Nicegui 3 complexity.")
 @mock.patch("aignostics.utils._gui.__is_running_in_container__", False)
 @mock.patch("aignostics.utils._gui.gui_register_pages")
 @mock.patch("nicegui.ui")
-def test_gui_run_default_params(record_property, mock_ui: mock.MagicMock, mock_register_pages: mock.MagicMock) -> None:
+@pytest.mark.skip(reason="Nicegui 3 complexity.")
+def test_gui_run_default_params(mock_ui: mock.MagicMock, mock_register_pages: mock.MagicMock) -> None:
     """Test gui_run with default parameters.

     Args:
         mock_ui: Mock for nicegui UI
         mock_register_pages: Mock for gui_register_pages function
-        record_property: pytest record_property fixture
     """
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE, SPEC-GUI-SERVICE")
     with mock.patch("nicegui.native.find_open_port", return_value=8000):
         os.environ["NICEGUI_SCREEN_TEST_PORT"] = "3392"
         gui_run()
@@ -92,21 +83,17 @@ def test_gui_run_default_params(record_property, mock_ui: mock.MagicMock, mock_r
     assert call_kwargs["port"] == 8000


-@pytest.mark.unit
 @mock.patch("aignostics.utils._gui.__is_running_in_container__", False)
 @mock.patch("aignostics.utils._gui.gui_register_pages")
 @mock.patch("nicegui.ui.run")
-def test_gui_run_custom_params(
-    mock_ui_run: mock.MagicMock, mock_register_pages: mock.MagicMock, record_property
-) -> None:
+def test_gui_run_custom_params(mock_ui_run: mock.MagicMock, mock_register_pages: mock.MagicMock) -> None:
     """Test gui_run with custom parameters.

     Args:
         mock_ui_run: Mock for nicegui UI run
         mock_register_pages: Mock for gui_register_pages function
-        record_property: pytest record_property fixture
     """
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE, SPEC-GUI-SERVICE")
     os.environ["NICEGUI_SCREEN_TEST_PORT"] = "3392"
     gui_run(
         native=False,
@@ -128,17 +115,15 @@ def test_gui_run_custom_params(
     assert call_kwargs["show"] is True


-@pytest.mark.unit
 @mock.patch("aignostics.utils._gui.__is_running_in_container__", True)
 @mock.patch("nicegui.ui.run")
-def test_gui_run_in_container_with_native(mock_ui_run: mock.MagicMock, record_property) -> None:
+def test_gui_run_in_container_with_native(mock_ui_run: mock.MagicMock) -> None:
     """Test that gui_run raises ValueError when running native in container.

     Args:
         mock_ui_run: Mock for nicegui UI run
-        record_property: pytest record_property fixture
     """
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE, SPEC-GUI-SERVICE")
     with pytest.raises(ValueError) as excinfo:
         gui_run(native=True)
     assert "Native GUI cannot be run in a container" in str(excinfo.value)
diff --git a/tests/aignostics/utils/health_test.py b/tests/aignostics/utils/health_test.py
index 8dbddb7d..1a2b6781 100644
--- a/tests/aignostics/utils/health_test.py
+++ b/tests/aignostics/utils/health_test.py
@@ -10,20 +10,16 @@
 log = get_logger(__name__)


-@pytest.mark.unit
-def test_health_default_status(record_property) -> None:
+def test_health_default_status() -> None:
     """Test that health can be initialized with default UP status."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     health = Health(status=Health.Code.UP)
     assert health.status == Health.Code.UP
     assert health.reason is None
     assert health.components == {}


-@pytest.mark.unit
-def test_health_down_requires_reason(record_property) -> None:
+def test_health_down_requires_reason() -> None:
     """Test that a DOWN status requires a reason."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     # Valid case - DOWN with reason
     health = Health(status=Health.Code.DOWN, reason="Database connection failed")
     assert health.status == Health.Code.DOWN
@@ -34,18 +30,14 @@ def test_health_down_requires_reason(record_property) -> None:
         Health(status=Health.Code.DOWN)


-@pytest.mark.unit
-def test_health_up_with_reason_invalid(record_property) -> None:
+def test_health_up_with_reason_invalid() -> None:
     """Test that an UP status cannot have a reason."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with pytest.raises(ValueError, match="Health UP must not have reason"):
         Health(status=Health.Code.UP, reason="This should not be allowed")


-@pytest.mark.unit
-def test_compute_health_from_components_no_components(record_property) -> None:
+def test_compute_health_from_components_no_components() -> None:
     """Test that health status is unchanged when there are no components."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     health = Health(status=Health.Code.UP)

     result = health.compute_health_from_components()
@@ -54,10 +46,8 @@ def test_compute_health_from_components_no_components(record_property) -> None:
     assert result is health  # Should return self


-@pytest.mark.unit
-def test_compute_health_from_components_already_down(record_property) -> None:
+def test_compute_health_from_components_already_down() -> None:
     """Test that health status remains DOWN with original reason when already DOWN."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     health = Health(status=Health.Code.DOWN, reason="Original failure")
     health.components = {
         "database": Health(status=Health.Code.DOWN, reason=DB_FAILURE),
@@ -71,10 +61,8 @@ def test_compute_health_from_components_already_down(record_property) -> None:
     assert result is health  # Should return self


-@pytest.mark.unit
-def test_compute_health_from_components_single_down(record_property) -> None:
+def test_compute_health_from_components_single_down() -> None:
     """Test that health status is DOWN when a single component is DOWN."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     health = Health(status=Health.Code.UP)
     health.components = {
         "database": Health(status=Health.Code.DOWN, reason=DB_FAILURE),
@@ -88,10 +76,8 @@ def test_compute_health_from_components_single_down(record_property) -> None:
     assert result is health  # Should return self


-@pytest.mark.unit
-def test_compute_health_from_components_multiple_down(record_property) -> None:
+def test_compute_health_from_components_multiple_down() -> None:
     """Test that health status is DOWN with correct reason when multiple components are DOWN."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     health = Health(status=Health.Code.UP)
     health.components = {
         "database": Health(status=Health.Code.DOWN, reason=DB_FAILURE),
@@ -111,10 +97,8 @@ def test_compute_health_from_components_multiple_down(record_property) -> None:
     assert result is health  # Should return self


-@pytest.mark.unit
-def test_compute_health_recursive(record_property) -> None:
+def test_compute_health_recursive() -> None:
     """Test that health status is recursively computed through the component tree."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     # Create a nested health structure
     deep_component = Health(status=Health.Code.DOWN, reason="Deep failure")
     mid_component = Health(
@@ -137,26 +121,20 @@ def test_compute_health_recursive(record_property) -> None:
     assert health.components["other"].status == Health.Code.UP


-@pytest.mark.unit
-def test_str_representation_up(record_property) -> None:
+def test_str_representation_up() -> None:
     """Test string representation of UP health status."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     health = Health(status=Health.Code.UP)
     assert str(health) == "UP"


-@pytest.mark.unit
-def test_str_representation_down(record_property) -> None:
+def test_str_representation_down() -> None:
     """Test string representation of DOWN health status."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     health = Health(status=Health.Code.DOWN, reason="Service unavailable")
     assert str(health) == "DOWN: Service unavailable"


-@pytest.mark.unit
-def test_validate_health_state_integration(record_property) -> None:
+def test_validate_health_state_integration() -> None:
     """Test the complete validation process with complex health tree."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     # Create a complex health tree
     health = Health(
         status=Health.Code.UP,
@@ -188,10 +166,8 @@ def test_validate_health_state_integration(record_property) -> None:
     assert health.components["monitoring"].status == Health.Code.UP


-@pytest.mark.unit
-def test_health_manually_set_components_validated(record_property) -> None:
+def test_health_manually_set_components_validated() -> None:
     """Test that manually setting components triggers validation."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     health = Health(status=Health.Code.UP)

     # Now manually set components that would cause validation to fail
diff --git a/tests/aignostics/utils/log_test.py b/tests/aignostics/utils/log_test.py
index 7197f9e7..4e7144ed 100644
--- a/tests/aignostics/utils/log_test.py
+++ b/tests/aignostics/utils/log_test.py
@@ -14,17 +14,13 @@
 log = get_logger(__name__)


-@pytest.mark.unit
-def test_validate_file_name_none(record_property) -> None:
+def test_validate_file_name_none() -> None:
     """Test that None file name is returned unchanged."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     assert _validate_file_name(None) is None


-@pytest.mark.integration
-def test_validate_file_name_nonexistent(record_property) -> None:
+def test_validate_file_name_nonexistent() -> None:
     """Test validation of a non-existent file that can be created."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with tempfile.TemporaryDirectory() as temp_dir:
         test_file = Path(temp_dir) / "test_log.log"
         assert _validate_file_name(str(test_file)) == str(test_file)
@@ -32,10 +28,8 @@ def test_validate_file_name_nonexistent(record_property) -> None:
         assert not test_file.exists()


-@pytest.mark.integration
-def test_validate_file_name_existing(record_property) -> None:
+def test_validate_file_name_existing() -> None:
     """Test validation of an existing writable file."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with tempfile.NamedTemporaryFile(mode="w", encoding="utf-8", delete=False) as temp_file:
         temp_file_path = Path(temp_file.name)
@@ -47,10 +41,8 @@ def test_validate_file_name_existing(record_property) -> None:
         temp_file_path.unlink(missing_ok=True)


-@pytest.mark.integration
-def test_validate_file_name_existing_readonly(record_property) -> None:
+def test_validate_file_name_existing_readonly() -> None:
     """Test validation of an existing read-only file."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with tempfile.NamedTemporaryFile(mode="w", encoding="utf-8", delete=False) as temp_file:
         temp_file_path = Path(temp_file.name)
@@ -67,22 +59,18 @@ def test_validate_file_name_existing_readonly(record_property) -> None:
         temp_file_path.unlink(missing_ok=True)


-@pytest.mark.integration
-def test_validate_file_name_directory(record_property) -> None:
+def test_validate_file_name_directory() -> None:
     """Test validation of a path that points to a directory."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with tempfile.TemporaryDirectory() as temp_dir, pytest.raises(ValueError, match=r"exists but is a directory"):
         _validate_file_name(temp_dir)


-@pytest.mark.integration
 @pytest.mark.skipif(
     platform.system() == "Windows",
     reason="This test is designed for Unix-like systems where permissions can be set to non-writable.",
 )
-def test_validate_file_name_cannot_create(record_property) -> None:
+def test_validate_file_name_cannot_create() -> None:
     """Test validation of a file that cannot be created due to permissions."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with tempfile.TemporaryDirectory() as temp_dir:
         temp_dir_path = Path(temp_dir)
         temp_dir_path.chmod(0o555)
@@ -95,44 +83,34 @@ def test_validate_file_name_cannot_create(record_property) -> None:
         temp_dir_path.chmod(0o755)


-@pytest.mark.unit
-def test_validate_file_name_invalid_path(record_property) -> None:
+def test_validate_file_name_invalid_path() -> None:
     """Test validation of a file with an invalid path."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     # Testing with a path that should always be invalid
     invalid_path = Path("/nonexistent/directory/that/definitely/should/not/exist") / "file.log"
     with pytest.raises(ValueError, match=r"cannot be created"):
         _validate_file_name(str(invalid_path))


-@pytest.mark.unit
-def test_get_logger_with_name(record_property) -> None:
+def test_get_logger_with_name() -> None:
     """Test get_logger with a specific name."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     logger = get_logger("test_module")
     assert logger.name == "aignostics.test_module"


-@pytest.mark.unit
-def test_get_logger_none(record_property) -> None:
+def test_get_logger_none() -> None:
     """Test get_logger with None name."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     logger = get_logger(None)
     assert logger.name == "aignostics"


-@pytest.mark.unit
-def test_get_logger_project_name(record_property) -> None:
+def test_get_logger_project_name() -> None:
     """Test get_logger with the project name."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     logger = get_logger("aignostics")
     assert logger.name == "aignostics"


-@pytest.mark.unit
-def test_logging_initialize_with_defaults(record_property) -> None:
+def test_logging_initialize_with_defaults() -> None:
     """Test logging_initialize with default settings."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     with (
         mock.patch("aignostics.utils._log.load_settings") as mock_load_settings,
         mock.patch("logging.basicConfig") as mock_basic_config,
diff --git a/tests/aignostics/utils/sentry_test.py b/tests/aignostics/utils/sentry_test.py
index 445a8d80..f394be53 100644
--- a/tests/aignostics/utils/sentry_test.py
+++ b/tests/aignostics/utils/sentry_test.py
@@ -41,10 +41,8 @@ def mock_environment() -> Generator[None, None, None]:
     ):
         yield

-    @pytest.mark.unit
-    def test_validate_url_scheme(record_property) -> None:
+    def test_validate_url_scheme() -> None:
         """Test URL scheme validation."""
-        record_property("tested-item-id", "SPEC-UTILS-SERVICE")
         import urllib.parse

         # Valid case
@@ -56,10 +54,8 @@ def test_validate_url_scheme(record_property) -> None:
         with pytest.raises(ValueError, match=re.escape(_ERR_MSG_MISSING_SCHEME)):
             _validate_url_scheme(invalid_url)

-    @pytest.mark.unit
-    def test_validate_url_netloc(record_property) -> None:
+    def test_validate_url_netloc() -> None:
         """Test network location validation."""
-        record_property("tested-item-id", "SPEC-UTILS-SERVICE")
         import urllib.parse

         # Valid case
@@ -71,10 +67,8 @@ def test_validate_url_netloc(record_property) -> None:
         with pytest.raises(ValueError, match=re.escape(_ERR_MSG_MISSING_NETLOC)):
             _validate_url_netloc(invalid_url)

-    @pytest.mark.unit
-    def test_validate_https_scheme(record_property) -> None:
+    def test_validate_https_scheme() -> None:
         """Test HTTPS scheme validation."""
-        record_property("tested-item-id", "SPEC-UTILS-SERVICE")
         import urllib.parse

         # Valid case
@@ -86,10 +80,8 @@ def test_validate_https_scheme(record_property) -> None:
         with pytest.raises(ValueError, match=re.escape(_ERR_MSG_NON_HTTPS)):
             _validate_https_scheme(invalid_url)

-    @pytest.mark.unit
-    def test_validate_sentry_domain(record_property) -> None:
+    def test_validate_sentry_domain() -> None:
         """Test Sentry domain validation."""
-        record_property("tested-item-id", "SPEC-UTILS-SERVICE")
         import urllib.parse

         # Valid cases
@@ -109,25 +101,19 @@ def test_validate_sentry_domain(record_property) -> None:
         with pytest.raises(ValueError, match=re.escape(_ERR_MSG_INVALID_DOMAIN)):
             _validate_sentry_domain(invalid_netloc)

-    @pytest.mark.unit
-    def test_validate_https_dsn_with_valid_dsn(record_property) -> None:
+    def test_validate_https_dsn_with_valid_dsn() -> None:
         """Test DSN validation with valid DSN."""
-        record_property("tested-item-id", "SPEC-UTILS-SERVICE")
         valid_dsn = SecretStr(VALID_DSN)

         result = _validate_https_dsn(valid_dsn)
         assert result is valid_dsn  # Should return the same object

-    @pytest.mark.unit
-    def test_validate_https_dsn_with_none(record_property) -> None:
+    def test_validate_https_dsn_with_none() -> None:
         """Test DSN validation with None value."""
-        record_property("tested-item-id", "SPEC-UTILS-SERVICE")
         result = _validate_https_dsn(None)
         assert result is None  # Should return None unchanged

-    @pytest.mark.unit
-    def test_validate_https_dsn_invalid_cases(record_property) -> None:
+    def test_validate_https_dsn_invalid_cases() -> None:
         """Test DSN validation with various invalid cases."""
-        record_property("tested-item-id", "SPEC-UTILS-SERVICE")
         # Missing scheme
         with pytest.raises(ValueError, match=re.escape(_ERR_MSG_MISSING_SCHEME)):
             _validate_https_dsn(SecretStr("//invalid.com"))
@@ -144,10 +130,8 @@ def test_validate_https_dsn_invalid_cases(record_property) -> None:
         with pytest.raises(ValueError, match=re.escape(_ERR_MSG_INVALID_DOMAIN)):
             _validate_https_dsn(SecretStr("https://user@example.com"))

-    @pytest.mark.unit
-    def test_sentry_initialize_with_no_dsn(record_property, mock_environment: None) -> None:
+    def test_sentry_initialize_with_no_dsn(mock_environment: None) -> None:
         """Test sentry_initialize with no DSN."""
-        record_property("tested-item-id", "SPEC-UTILS-SERVICE")
         with mock.patch("aignostics.utils._sentry.load_settings") as mock_load_settings:
             mock_settings = mock.MagicMock()
             mock_settings.dsn = None
@@ -156,10 +140,8 @@ def test_sentry_initialize_with_no_dsn(record_property, mock_environment: None)
             result = sentry_initialize()
             assert result is False  # Should return False when no DSN is provided

-    @pytest.mark.unit
-    def test_sentry_initialize_with_valid_dsn(record_property, mock_environment: None) -> None:
+    def test_sentry_initialize_with_valid_dsn(mock_environment: None) -> None:
         """Test sentry_initialize with a valid DSN."""
-        record_property("tested-item-id", "SPEC-UTILS-SERVICE")
         with (
             mock.patch("aignostics.utils._sentry.load_settings") as mock_load_settings,
             mock.patch("sentry_sdk.init") as mock_sentry_init,
diff --git a/tests/aignostics/utils/settings_test.py b/tests/aignostics/utils/settings_test.py
index 8cbbd9a8..6edda0fb 100644
--- a/tests/aignostics/utils/settings_test.py
+++ b/tests/aignostics/utils/settings_test.py
@@ -4,7 +4,6 @@
 from typing import Any, ClassVar
 from unittest.mock import patch

-import pytest
 from pydantic import SecretStr

 from aignostics.utils._settings import (
@@ -15,31 +14,23 @@
 )


-@pytest.mark.unit
-def test_strip_to_none_before_validator_with_none(record_property) -> None:
+def test_strip_to_none_before_validator_with_none() -> None:
     """Test that None is returned when None is passed."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     assert strip_to_none_before_validator(None) is None


-@pytest.mark.unit
-def test_strip_to_none_before_validator_with_empty_string(record_property) -> None:
+def test_strip_to_none_before_validator_with_empty_string() -> None:
     """Test that None is returned when an empty string is passed."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     assert strip_to_none_before_validator("") is None


-@pytest.mark.unit
-def test_strip_to_none_before_validator_with_whitespace_string(record_property) -> None:
+def test_strip_to_none_before_validator_with_whitespace_string() -> None:
     """Test that None is returned when a whitespace string is passed."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     assert strip_to_none_before_validator(" \t\n ") is None


-@pytest.mark.unit
-def test_strip_to_none_before_validator_with_valid_string(record_property) -> None:
+def test_strip_to_none_before_validator_with_valid_string() -> None:
     """Test that a stripped string is returned when a valid string is passed."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     assert strip_to_none_before_validator(" test ") == "test"
@@ -51,10 +42,8 @@ class TheTestSettings(OpaqueSettings):
     required_value: str


-@pytest.mark.unit
-def test_opaque_settings_serialize_sensitive_info_with_unhide(record_property) -> None:
+def test_opaque_settings_serialize_sensitive_info_with_unhide() -> None:
     """Test that sensitive info is revealed when unhide_sensitive_info is True."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     secret = SecretStr("sensitive")
     context = {UNHIDE_SENSITIVE_INFO: True}

@@ -63,10 +52,8 @@ def test_opaque_settings_serialize_sensitive_info_with_unhide(record_property) -
     assert result == "sensitive"


-@pytest.mark.unit
-def test_opaque_settings_serialize_sensitive_info_without_unhide(record_property) -> None:
+def test_opaque_settings_serialize_sensitive_info_without_unhide() -> None:
     """Test that sensitive info is hidden when unhide_sensitive_info is False."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     secret = SecretStr("sensitive")
     context = {UNHIDE_SENSITIVE_INFO: False}

@@ -75,10 +62,8 @@ def test_opaque_settings_serialize_sensitive_info_without_unhide(record_property
     assert result == "**********"


-@pytest.mark.unit
-def test_opaque_settings_serialize_sensitive_info_empty(record_property) -> None:
+def test_opaque_settings_serialize_sensitive_info_empty() -> None:
     """Test that None is returned when the SecretStr is empty."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     secret = SecretStr("")
     context = {}

@@ -87,34 +72,27 @@ def test_opaque_settings_serialize_sensitive_info_empty(record_property) -> None
     assert result is None


-@pytest.mark.unit
 @patch.dict(os.environ, {"REQUIRED_VALUE": "test_value"})
-def test_load_settings_success(record_property) -> None:
+def test_load_settings_success() -> None:
     """Test successful settings loading."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     settings = load_settings(TheTestSettings)

     assert settings.test_value == "default"
     assert settings.required_value == "test_value"


-@pytest.mark.unit
 @patch("sys.exit")
 @patch("rich.console.Console.print")
-def test_load_settings_validation_error(mock_console_print, mock_exit, record_property) -> None:
+def test_load_settings_validation_error(mock_console_print, mock_exit) -> None:
     """Test that validation error is handled properly."""
-    record_property("tested-item-id", "SPEC-UTILS-SERVICE")
     # The settings class requires required_value, but we're not providing it
     # This will trigger a validation error
     load_settings(TheTestSettings)

+    # Verify that sys.exit was called with the correct code
     mock_exit.assert_called_once_with(78)
-    assert mock_console_print.call_count == 1, (
-        "Expected console.print to be called exactly once, but was called "
         f"{mock_console_print.call_count} times. If this test fails with a higher call count, "
         "you likely have AIGNOSTICS_LOG_CONSOLE_ENABLE=true in your .env file. "
         "Disable console logging to make this test pass."
- ) + # Verify that console.print was called (with a panel showing the error) + mock_console_print.assert_called_once() class TheTestSettingsWithEnvPrefix(OpaqueSettings): @@ -125,11 +103,9 @@ class TheTestSettingsWithEnvPrefix(OpaqueSettings): value: str -@pytest.mark.unit @patch.dict(os.environ, {"TEST_VALUE": "prefixed_value"}) -def test_settings_with_env_prefix(record_property) -> None: +def test_settings_with_env_prefix() -> None: """Test that settings with environment prefix work correctly.""" - record_property("tested-item-id", "SPEC-UTILS-SERVICE") settings = load_settings(TheTestSettingsWithEnvPrefix) assert settings.value == "prefixed_value" diff --git a/tests/aignostics/utils/user_agent_test.py b/tests/aignostics/utils/user_agent_test.py deleted file mode 100644 index ab182eca..00000000 --- a/tests/aignostics/utils/user_agent_test.py +++ /dev/null @@ -1,258 +0,0 @@ -"""Tests for user agent string generation.""" - -import platform -from unittest.mock import patch - -import pytest - -from aignostics.utils._user_agent import user_agent - - -@pytest.mark.unit -def test_user_agent_basic_format(monkeypatch: pytest.MonkeyPatch) -> None: - """Test basic user agent format without optional environment variables.""" - # Clear environment variables - monkeypatch.delenv("PYTEST_CURRENT_TEST", raising=False) - monkeypatch.delenv("GITHUB_RUN_ID", raising=False) - monkeypatch.delenv("GITHUB_REPOSITORY", raising=False) - - with ( - patch("aignostics.utils._user_agent.__project_name__", "aignostics"), - patch("aignostics.utils._user_agent.__version_full__", "1.0.0"), - patch("aignostics.utils._user_agent.__repository_url__", "https://github.com/aignostics/python-sdk"), - ): - result = user_agent() - - # Check basic structure - assert result.startswith("aignostics-python-sdk/1.0.0 (") - assert platform.platform() in result - assert "https://github.com/aignostics/python-sdk" in result - assert result.endswith(")") - - -@pytest.mark.unit -def test_user_agent_with_pytest_current_test(monkeypatch: pytest.MonkeyPatch) -> None: - """Test user agent includes PYTEST_CURRENT_TEST when set.""" - test_name = "tests/test_module.py::test_function" - monkeypatch.setenv("PYTEST_CURRENT_TEST", test_name) - monkeypatch.delenv("GITHUB_RUN_ID", raising=False) - monkeypatch.delenv("GITHUB_REPOSITORY", raising=False) - - with ( - patch("aignostics.utils._user_agent.__project_name__", "aignostics"), - patch("aignostics.utils._user_agent.__version_full__", "1.0.0"), - patch("aignostics.utils._user_agent.__repository_url__", "https://github.com/aignostics/python-sdk"), - ): - result = user_agent() - - assert test_name in result - assert f"; {test_name})" in result - - -@pytest.mark.unit -def test_user_agent_with_github_run_info(monkeypatch: pytest.MonkeyPatch) -> None: - """Test user agent includes GitHub run information when both variables are set.""" - run_id = "12345678" - repository = "aignostics/python-sdk" - monkeypatch.delenv("PYTEST_CURRENT_TEST", raising=False) - monkeypatch.setenv("GITHUB_RUN_ID", run_id) - monkeypatch.setenv("GITHUB_REPOSITORY", repository) - - with ( - patch("aignostics.utils._user_agent.__project_name__", "aignostics"), - patch("aignostics.utils._user_agent.__version_full__", "1.0.0"), - patch("aignostics.utils._user_agent.__repository_url__", "https://github.com/aignostics/python-sdk"), - ): - result = user_agent() - - expected_github_url = f"+https://github.com/{repository}/actions/runs/{run_id}" - assert expected_github_url in result - assert result.endswith(f"{expected_github_url})") - - 
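Note: the deleted tests in this file pin down the user-agent contract precisely: `{project}-python-sdk/{version} ({platform}; {repository_url}[; <current test>][; +<GitHub Actions run URL>])`, with empty environment variables treated as unset. For readers without access to `aignostics.utils._user_agent`, the following is a minimal sketch that would satisfy these assertions; it is inferred from the tests only, `user_agent_sketch` is a hypothetical name, and the SDK's actual implementation may differ.

```python
import os
import platform


def user_agent_sketch(project_name: str, version_full: str, repository_url: str) -> str:
    """Compose a user-agent string satisfying the assertions in the deleted tests.

    Illustrative reconstruction only -- inferred from the test suite, not the SDK source.
    """
    parts = [platform.platform(), repository_url]
    # Empty strings are falsy, so unset or empty variables are simply skipped.
    current_test = os.getenv("PYTEST_CURRENT_TEST")
    if current_test:
        parts.append(current_test)
    run_id = os.getenv("GITHUB_RUN_ID")
    repository = os.getenv("GITHUB_REPOSITORY")
    if run_id and repository:  # run URL is appended only when *both* variables are set
        parts.append(f"+https://github.com/{repository}/actions/runs/{run_id}")
    return f"{project_name}-python-sdk/{version_full} ({'; '.join(parts)})"
```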
-@pytest.mark.unit
-def test_user_agent_with_github_run_id_only(monkeypatch: pytest.MonkeyPatch) -> None:
-    """Test user agent does not include GitHub Actions run URL when only GITHUB_RUN_ID is set."""
-    monkeypatch.delenv("PYTEST_CURRENT_TEST", raising=False)
-    monkeypatch.setenv("GITHUB_RUN_ID", "12345678")
-    monkeypatch.delenv("GITHUB_REPOSITORY", raising=False)
-
-    with (
-        patch("aignostics.utils._user_agent.__project_name__", "aignostics"),
-        patch("aignostics.utils._user_agent.__version_full__", "1.0.0"),
-        patch("aignostics.utils._user_agent.__repository_url__", "https://github.com/aignostics/python-sdk"),
-    ):
-        result = user_agent()
-
-    # The GitHub Actions run URL (with +https prefix) should not be included
-    # when GITHUB_REPOSITORY is not set
-    assert "+https://github.com/aignostics/python-sdk/actions/runs/" not in result
-
-
-@pytest.mark.unit
-def test_user_agent_with_github_repository_only(monkeypatch: pytest.MonkeyPatch) -> None:
-    """Test user agent does not include GitHub Actions run URL when only GITHUB_REPOSITORY is set."""
-    monkeypatch.delenv("PYTEST_CURRENT_TEST", raising=False)
-    monkeypatch.delenv("GITHUB_RUN_ID", raising=False)
-    monkeypatch.setenv("GITHUB_REPOSITORY", "aignostics/python-sdk")
-
-    with (
-        patch("aignostics.utils._user_agent.__project_name__", "aignostics"),
-        patch("aignostics.utils._user_agent.__version_full__", "1.0.0"),
-        patch("aignostics.utils._user_agent.__repository_url__", "https://github.com/aignostics/python-sdk"),
-    ):
-        result = user_agent()
-
-    # The GitHub Actions run URL (with +https prefix) should not be included
-    # when GITHUB_RUN_ID is not set
-    assert "+https://github.com/aignostics/python-sdk/actions/runs/" not in result
-
-
-@pytest.mark.unit
-def test_user_agent_with_all_optional_variables(monkeypatch: pytest.MonkeyPatch) -> None:
-    """Test user agent includes all optional information when all variables are set."""
-    test_name = "tests/test_module.py::test_function"
-    run_id = "12345678"
-    repository = "aignostics/python-sdk"
-
-    monkeypatch.setenv("PYTEST_CURRENT_TEST", test_name)
-    monkeypatch.setenv("GITHUB_RUN_ID", run_id)
-    monkeypatch.setenv("GITHUB_REPOSITORY", repository)
-
-    with (
-        patch("aignostics.utils._user_agent.__project_name__", "aignostics"),
-        patch("aignostics.utils._user_agent.__version_full__", "1.0.0"),
-        patch("aignostics.utils._user_agent.__repository_url__", "https://github.com/aignostics/python-sdk"),
-    ):
-        result = user_agent()
-
-    # Check all components are present
-    assert "aignostics-python-sdk/1.0.0" in result
-    assert platform.platform() in result
-    assert "https://github.com/aignostics/python-sdk" in result
-    assert test_name in result
-    expected_github_url = f"+https://github.com/{repository}/actions/runs/{run_id}"
-    assert expected_github_url in result
-
-    # Check ordering - test name should come before GitHub URL
-    test_index = result.index(test_name)
-    github_index = result.index(expected_github_url)
-    assert test_index < github_index
-
-
-@pytest.mark.unit
-def test_user_agent_version_with_build_number(monkeypatch: pytest.MonkeyPatch) -> None:
-    """Test user agent properly handles version with build number."""
-    monkeypatch.delenv("PYTEST_CURRENT_TEST", raising=False)
-    monkeypatch.delenv("GITHUB_RUN_ID", raising=False)
-    monkeypatch.delenv("GITHUB_REPOSITORY", raising=False)
-
-    with (
-        patch("aignostics.utils._user_agent.__project_name__", "aignostics"),
-        patch("aignostics.utils._user_agent.__version_full__", "1.0.0+42"),
-        patch("aignostics.utils._user_agent.__repository_url__", "https://github.com/aignostics/python-sdk"),
-    ):
-        result = user_agent()
-
-    assert "aignostics-python-sdk/1.0.0+42" in result
-
-
-@pytest.mark.unit
-def test_user_agent_special_characters_in_test_name(monkeypatch: pytest.MonkeyPatch) -> None:
-    """Test user agent handles special characters in test names."""
-    test_name = "tests/test_module.py::TestClass::test_method[param-with-dashes]"
-    monkeypatch.setenv("PYTEST_CURRENT_TEST", test_name)
-    monkeypatch.delenv("GITHUB_RUN_ID", raising=False)
-    monkeypatch.delenv("GITHUB_REPOSITORY", raising=False)
-
-    with (
-        patch("aignostics.utils._user_agent.__project_name__", "aignostics"),
-        patch("aignostics.utils._user_agent.__version_full__", "1.0.0"),
-        patch("aignostics.utils._user_agent.__repository_url__", "https://github.com/aignostics/python-sdk"),
-    ):
-        result = user_agent()
-
-    assert test_name in result
-
-
-@pytest.mark.unit
-def test_user_agent_format_consistency(monkeypatch: pytest.MonkeyPatch) -> None:
-    """Test that user agent format is consistent across different scenarios."""
-    monkeypatch.delenv("PYTEST_CURRENT_TEST", raising=False)
-    monkeypatch.delenv("GITHUB_RUN_ID", raising=False)
-    monkeypatch.delenv("GITHUB_REPOSITORY", raising=False)
-
-    with (
-        patch("aignostics.utils._user_agent.__project_name__", "aignostics"),
-        patch("aignostics.utils._user_agent.__version_full__", "1.0.0"),
-        patch("aignostics.utils._user_agent.__repository_url__", "https://github.com/aignostics/python-sdk"),
-    ):
-        result = user_agent()
-
-    # Verify format: {base_info} ({system_info})
-    assert result.count("(") == 1
-    assert result.count(")") == 1
-    assert result.index("(") < result.index(")")
-    assert result.endswith(")")
-
-
-@pytest.mark.unit
-def test_user_agent_empty_environment_variables(monkeypatch: pytest.MonkeyPatch) -> None:
-    """Test user agent handles empty string environment variables correctly."""
-    # Empty strings should be treated as not set
-    monkeypatch.setenv("PYTEST_CURRENT_TEST", "")
-    monkeypatch.setenv("GITHUB_RUN_ID", "")
-    monkeypatch.setenv("GITHUB_REPOSITORY", "")
-
-    with (
-        patch("aignostics.utils._user_agent.__project_name__", "aignostics"),
-        patch("aignostics.utils._user_agent.__version_full__", "1.0.0"),
-        patch("aignostics.utils._user_agent.__repository_url__", "https://github.com/aignostics/python-sdk"),
-    ):
-        result = user_agent()
-
-    # Empty strings are falsy in Python, so they should not be included
-    # Only base info and system info should be present
-    # Should have: base_info, platform, repository_url (no optional parts)
-    # Format: base_info (platform; repository_url)
-    assert result.startswith("aignostics-python-sdk/1.0.0")
-    # The GitHub Actions run URL should not be included with empty env vars
-    assert "+https://github.com/aignostics/python-sdk/actions/runs/" not in result
-
-
-@pytest.mark.unit
-def test_user_agent_different_repository_urls(monkeypatch: pytest.MonkeyPatch) -> None:
-    """Test user agent works with different repository URL formats."""
-    monkeypatch.delenv("PYTEST_CURRENT_TEST", raising=False)
-    monkeypatch.delenv("GITHUB_RUN_ID", raising=False)
-    monkeypatch.delenv("GITHUB_REPOSITORY", raising=False)
-
-    custom_repo_url = "https://gitlab.com/custom-org/custom-project"
-
-    with (
-        patch("aignostics.utils._user_agent.__project_name__", "aignostics"),
-        patch("aignostics.utils._user_agent.__version_full__", "1.0.0"),
-        patch("aignostics.utils._user_agent.__repository_url__", custom_repo_url),
-    ):
-        result = user_agent()
-
-    assert custom_repo_url in result
-
-
-@pytest.mark.unit
-def test_user_agent_platform_info_included(monkeypatch: pytest.MonkeyPatch) -> None:
-    """Test that platform information is always included."""
-    monkeypatch.delenv("PYTEST_CURRENT_TEST", raising=False)
-    monkeypatch.delenv("GITHUB_RUN_ID", raising=False)
-    monkeypatch.delenv("GITHUB_REPOSITORY", raising=False)
-
-    with (
-        patch("aignostics.utils._user_agent.__project_name__", "aignostics"),
-        patch("aignostics.utils._user_agent.__version_full__", "1.0.0"),
-        patch("aignostics.utils._user_agent.__repository_url__", "https://github.com/aignostics/python-sdk"),
-    ):
-        result = user_agent()
-
-    # Platform info should always be present
-    platform_info = platform.platform()
-    assert platform_info in result
diff --git a/tests/aignostics/wsi/cli_test.py b/tests/aignostics/wsi/cli_test.py
index f8d3aab4..f90c10f0 100644
--- a/tests/aignostics/wsi/cli_test.py
+++ b/tests/aignostics/wsi/cli_test.py
@@ -1,25 +1,17 @@
 """Tests to verify the CLI functionality of the wsi module."""

-import platform
-import sys
 from pathlib import Path
-from unittest.mock import MagicMock, patch

-import pytest
 from typer.testing import CliRunner

 from aignostics.cli import cli
-from tests.conftest import normalize_output

 SERIES_UID = "1.3.6.1.4.1.5962.99.1.1069745200.1645485340.1637452317744.2.0"
 THUMBNAIL_UID = "1.3.6.1.4.1.5962.99.1.1038911754.1238045814.1637421484298.15.0"


-@pytest.mark.integration
-@pytest.mark.timeout(timeout=60 * 5)
-def test_inspect_openslide_dicom(runner: CliRunner, record_property) -> None:
+def test_inspect_openslide_dicom(runner: CliRunner) -> None:
     """Check expected column returned."""
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
     file_path = Path(__file__).parent.parent.parent / "resources" / "run" / "small-pyramidal.dcm"
     result = runner.invoke(cli, ["wsi", "inspect", str(file_path)])
     assert result.exit_code == 0
@@ -35,15 +27,8 @@ def test_inspect_openslide_dicom(runner: CliRunner, record_property) -> None:
     )


-@pytest.mark.skipif(
-    sys.platform == "win32" and platform.machine().lower() in {"arm64", "aarch64"} and sys.version_info[:2] == (3, 12),
-    reason="Skipping on Windows ARM with Python 3.12.x",
-)
-@pytest.mark.integration
-@pytest.mark.timeout(timeout=60 * 5)
-def test_inspect_pydicom_directory_non_verbose(runner: CliRunner, record_property) -> None:
+def test_inspect_pydicom_directory(runner: CliRunner) -> None:
     """Check expected column returned."""
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
     file_path = Path(__file__).parent.parent.parent / "resources"
     result = runner.invoke(cli, ["wsi", "dicom", "inspect", str(file_path)])
     assert result.exit_code == 0
@@ -56,17 +41,8 @@ def test_inspect_pydicom_directory_non_verbose(runner: CliRunner, record_propert
     )


-@pytest.mark.skipif(
-    platform.system() == "Windows"
-    and platform.machine().lower() in {"arm64", "aarch64"}
-    and sys.version_info[:2] == (3, 12),
-    reason="Skipping on Windows ARM with Python 3.12.x given instability of pydicom on this platform",
-)
-@pytest.mark.integration
-@pytest.mark.timeout(timeout=60 * 5)
-def test_inspect_pydicom_directory_verbose(runner: CliRunner, record_property) -> None:
+def test_inspect_pydicom_directory_verbose(runner: CliRunner) -> None:
     """Check expected column returned."""
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
     file_path = Path(__file__).parent.parent.parent / "resources"
     result = runner.invoke(cli, ["wsi", "dicom", "inspect", "--verbose", str(file_path)])
     assert result.exit_code == 0
@@ -83,17 +59,8 @@ def test_inspect_pydicom_directory_verbose(runner: CliRunner, record_property) -
     )


-@pytest.mark.skipif(
-    platform.system() == "Windows"
-    and platform.machine().lower() in {"arm64", "aarch64"}
-    and sys.version_info[:2] == (3, 12),
-    reason="Skipping on Windows ARM with Python 3.12.x given instability of pydicom on this platform",
-)
-@pytest.mark.integration
-@pytest.mark.timeout(timeout=60 * 5)
-def test_inspect_pydicom_geojson_import(runner: CliRunner, record_property) -> None:
+def test_inspect_pydicom_geojson_import(runner: CliRunner) -> None:
     """Check expected column returned."""
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
     dicom_path = Path(__file__).parent.parent.parent / "resources" / "run" / "small-pyramidal.dcm"
     geojson_path = Path(__file__).parent.parent.parent / "resources" / "cells.json"
     result = runner.invoke(cli, ["wsi", "dicom", "geojson_import", str(dicom_path), str(geojson_path)])
@@ -104,55 +71,3 @@ def test_inspect_pydicom_geojson_import(runner: CliRunner, record_property) -> N
             "Failed to import GeoJSON: Expecting value: line 1 column 1 (char 0)",
         ]
     )
-
-
-@pytest.mark.integration
-def test_wsi_inspect_error_handling(runner: CliRunner, record_property) -> None:
-    """Test that wsi inspect command properly displays error messages."""
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
-    file_path = Path(__file__).parent.parent.parent / "resources" / "run" / "small-pyramidal.dcm"
-    error_message = "Mock error: Failed to read file"
-
-    with patch("aignostics.wsi._service.Service.get_metadata") as mock_get_metadata:
-        mock_get_metadata.side_effect = RuntimeError(error_message)
-        result = runner.invoke(cli, ["wsi", "inspect", str(file_path)])
-
-    assert result.exit_code == 1
-    assert "Failed to read file" in normalize_output(result.output)
-    assert str(file_path) in normalize_output(result.output)
-
-
-@pytest.mark.integration
-def test_wsi_dicom_inspect_error_handling(runner: CliRunner, record_property) -> None:
-    """Test that wsi dicom inspect command properly displays error messages."""
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
-    file_path = Path(__file__).parent.parent.parent / "resources"
-    error_message = "Mock error: Invalid DICOM structure"
-
-    with patch("aignostics.wsi._pydicom_handler.PydicomHandler.from_file") as mock_from_file:
-        mock_handler = MagicMock()
-        mock_handler.__enter__ = MagicMock(side_effect=RuntimeError(error_message))
-        mock_from_file.return_value = mock_handler
-
-        result = runner.invoke(cli, ["wsi", "dicom", "inspect", str(file_path)])
-
-    assert result.exit_code == 1
-    assert "Invalid DICOM structure" in normalize_output(result.output)
-
-
-@pytest.mark.integration
-def test_wsi_dicom_geojson_import_error_handling(runner: CliRunner, record_property) -> None:
-    """Test that wsi dicom geojson_import command properly displays error messages."""
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
-    dicom_path = Path(__file__).parent.parent.parent / "resources" / "run" / "small-pyramidal.dcm"
-    geojson_path = Path(__file__).parent.parent.parent / "resources" / "cells.json"
-    error_message = "Mock error: Invalid GeoJSON format"
-
-    with patch("aignostics.wsi._pydicom_handler.PydicomHandler.geojson_import") as mock_geojson_import:
-        mock_geojson_import.side_effect = ValueError(error_message)
-        result = runner.invoke(cli, ["wsi", "dicom", "geojson_import", str(dicom_path), str(geojson_path)])
-
-    assert result.exit_code == 1
-    assert "Invalid GeoJSON format" in normalize_output(result.output)
-    assert str(geojson_path) in normalize_output(result.output)
-    assert str(dicom_path) in normalize_output(result.output)
diff --git a/tests/aignostics/wsi/service_test.py b/tests/aignostics/wsi/service_test.py
index b79213ca..c8b35dc6 100644
--- a/tests/aignostics/wsi/service_test.py
+++ b/tests/aignostics/wsi/service_test.py
@@ -7,7 +7,6 @@
 from io import BytesIO
 from pathlib import Path

-import pytest
 from fastapi.testclient import TestClient
 from nicegui import app
 from nicegui.testing import User
@@ -16,10 +15,8 @@
 CONTENT_LENGTH_FALLBACK = 32066  # Fallback image size in bytes


-@pytest.mark.integration
-def test_serve_thumbnail_fails_on_missing_file(user: User, record_property) -> None:
+def test_serve_thumbnail_fails_on_missing_file(user: User) -> None:
     """Test that the thumbnail fails on missing file."""
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
     client = TestClient(app)

     test_dir = Path(__file__).parent
@@ -31,10 +28,8 @@ def test_serve_thumbnail_fails_on_missing_file(user: User, record_property) -> N
     assert int(response.headers["Content-Length"]) == CONTENT_LENGTH_FALLBACK


-@pytest.mark.integration
-def test_serve_thumbnail_fails_on_unsupported_filetype(user: User, record_property) -> None:
+def test_serve_thumbnail_fails_on_unsupported_filetype(user: User) -> None:
     """Test that the thumbnail falls back on unsupported_filetype."""
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
     client = TestClient(app)

     test_dir = Path(__file__).parent
@@ -46,10 +41,8 @@ def test_serve_thumbnail_fails_on_unsupported_filetype(user: User, record_proper
     assert int(response.headers["Content-Length"]) == CONTENT_LENGTH_FALLBACK


-@pytest.mark.integration
-def test_serve_thumbnail_for_dicom_thumbnail(user: User, silent_logging, record_property) -> None:
+def test_serve_thumbnail_for_dicom_thumbnail(user: User, silent_logging) -> None:
     """Test that the thumbnail route works for non-pyramidal dicom thumbnail file."""
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
     client = TestClient(app)

     test_dir = Path(__file__).parent
@@ -67,10 +60,8 @@ def test_serve_thumbnail_for_dicom_thumbnail(user: User, silent_logging, record_
     assert image.height > 0


-@pytest.mark.integration
-def test_serve_thumbnail_for_dicom_pyramidal_small(user: User, record_property) -> None:
+def test_serve_thumbnail_for_dicom_pyramidal_small(user: User) -> None:
     """Test that the thumbnail route works for small pyramidal dicom file."""
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
     client = TestClient(app)

     test_dir = Path(__file__).parent
@@ -88,10 +79,8 @@ def test_serve_thumbnail_for_dicom_pyramidal_small(user: User, record_property)
     assert image.height > 0


-@pytest.mark.integration
-def test_serve_thumbnail_for_tiff(user: User, record_property) -> None:
+def test_serve_thumbnail_for_tiff(user: User) -> None:
     """Test that the thumbnail route works for dicom file."""
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
     client = TestClient(app)

     test_dir = Path(__file__).parent
@@ -109,16 +98,13 @@ def test_serve_thumbnail_for_tiff(user: User, record_property) -> None:
     assert image.height > 0


-@pytest.mark.integration
-@pytest.mark.timeout(timeout=60)
-def test_serve_tiff_to_jpeg_fails_on_broken_url(user: User, record_property) -> None:
+def test_serve_tiff_to_jpeg_fails_on_broken_url(user: User) -> None:
     """Test that the tiff route serves the expected jpeg.
     - Spin up local webserver serving tests/resources/single-channel-ome.tiff
     - Open the tiff and check that the response is a valid jpeg
     """
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
     client = TestClient(app)

     response = client.get("/tiff?url=bla")
@@ -163,16 +149,13 @@ def log_message(self, format_str, *args):
     print("Warning: Server thread did not terminate within timeout")


-@pytest.mark.integration
-@pytest.mark.timeout(timeout=60)
-def test_serve_tiff_to_jpeg_serves(user: User, silent_logging, record_property) -> None:
+def test_serve_tiff_to_jpeg(user: User, silent_logging) -> None:
     """Test that the tiff route serves the expected jpeg.

     - Spin up local webserver serving tests/resources/single-channel-ome.tiff
     - Open the tiff and check that the response is a valid jpeg
     """
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
     client = TestClient(app)

     test_dir = Path(__file__).parent
@@ -194,16 +177,13 @@ def test_serve_tiff_to_jpeg_serves(user: User, silent_logging, record_property)
     assert image.height > 0


-@pytest.mark.integration
-@pytest.mark.timeout(timeout=60)
-def test_serve_tiff_to_jpeg_fails_on_broken_tiff(user: User, tmpdir, record_property) -> None:
+def test_serve_tiff_to_jpeg_fails_on_broken_tiff(user: User, tmpdir, silent_logging) -> None:
     """Test that the tiff route falls back as expected on broken tiff.

     - Spin up local webserver serving 4711 random bytes
     - Open the tiff and check the response
     """
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
     client = TestClient(app)

     random_file_path = Path(tmpdir) / "broken.tiff"
@@ -218,17 +198,13 @@ def test_serve_tiff_to_jpeg_fails_on_broken_tiff(user: User, tmpdir, record_prop
     assert int(response.headers["Content-Length"]) == CONTENT_LENGTH_FALLBACK


-@pytest.mark.integration
-@pytest.mark.timeout(timeout=60)
-def test_serve_tiff_to_jpeg_fails_on_tiff_not_found(user: User, tmpdir, record_property) -> None:
+def test_serve_tiff_to_jpeg_fails_on_tiff_not_found(user: User, tmpdir) -> None:
     """Test that the tiff route falls back as expected on tiff not found.

     - Spin up local webserver
     - Open the unavailable tiff and check the response
     """
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
-
     client = TestClient(app)

     random_file_path = Path(tmpdir) / "broken.tiff"
@@ -243,16 +219,12 @@ def test_serve_tiff_to_jpeg_fails_on_tiff_not_found(user: User, tmpdir, record_p
     assert int(response.headers["Content-Length"]) == CONTENT_LENGTH_FALLBACK


-@pytest.mark.integration
-@pytest.mark.timeout(timeout=60)
-def test_serve_tiff_to_jpeg_fails_on_tiff_url_broken(user: User, record_property) -> None:
+def test_serve_tiff_to_jpeg_fails_on_tiff_url_broken(user: User) -> None:
     """Test that the tiff route falls back as expected on invalid url as arg.

     - Open the broken url and check the response
     """
-    record_property("tested-item-id", "SPEC-WSI-SERVICE")
-
     client = TestClient(app)

     response = client.get("/tiff?url=https://")
diff --git a/tests/conftest.py b/tests/conftest.py
index 0a0e8dc0..ca8a0878 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -21,61 +21,24 @@
 from nicegui.testing import User

-
-def pytest_xdist_auto_num_workers(config) -> int:
-    """Set the number of workers for xdist to a factor of the (logical) CPU cores.
-
-    If the pytest option `--numprocesses` is set to "logical" or "auto", the number of workers is calculated
-    based on the logical CPU count multiplied by the factor. If the option is set otherwise, that value is
-    used directly.
-
-    The factor (float) can be adjusted via the environment variable `XDIST_WORKER_FACTOR`, defaulting to 1.
-
-    Args:
-        config: The pytest configuration object.
-
-    Returns:
-        int: The number of workers set for xdist.
-    """
-    if config.getoption("numprocesses") in {"logical", "auto"}:
-        logical_cpu_count = psutil.cpu_count(logical=config.getoption("numprocesses") == "logical") or 1
-        factor = float(os.getenv("XDIST_WORKER_FACTOR", "1"))
-        print(f"xdist_worker_factor: {factor}")
-        num_workers = max(1, int(logical_cpu_count * factor))
-        print(f"xdist_num_workers: {num_workers}")
-        logger.info(
-            "Set number of xdist workers to '%s' based on logical CPU count of %d.", num_workers, logical_cpu_count
-        )
-        return num_workers
-    return config.getoption("numprocesses")
-
-
 # See https://nicegui.io/documentation/section_testing#project_structure
 if find_spec("nicegui"):
     pytest_plugins = ("nicegui.testing.plugin",)


-def normalize_output(output: str, strip_ansi: bool = True) -> str:
+def normalize_output(output: str) -> str:
     r"""Normalize output by removing both Windows and Unix line endings.

     This helper function ensures cross-platform compatibility when testing CLI output
-    by removing both Windows (\r\n) and Unix (\n) line endings. Optionally strips
-    ANSI escape codes (color codes and formatting) from the output.
+    by removing both Windows (\r\n) and Unix (\n) line endings.

     Args:
-        output (str): The output string to normalize.
-        strip_ansi (bool): Whether to remove ANSI escape codes. Defaults to True.
+        output: The output string to normalize.

     Returns:
-        str: The normalized output with line endings removed and optionally ANSI codes stripped.
+        str: The normalized output with line endings removed.
     """
-    normalized = output.replace("\r\n", "").replace("\n", "")
-    if strip_ansi:
-        import re
-
-        ansi_escape = re.compile(r"\x1b\[[0-9;]*m")
-        normalized = ansi_escape.sub("", normalized)
-    return normalized
+    return output.replace("\r\n", "").replace("\n", "")


 @pytest.fixture
@@ -122,7 +85,7 @@ async def assert_notified(user: User, expected_notification: str, wait_seconds:
             return matching_messages[0]
         await sleep(1)

-    recent_messages = (user.notify.messages[-10:] if len(user.notify.messages) > 10 else user.notify.messages)[::-1]
+    recent_messages = user.notify.messages[-10:] if len(user.notify.messages) > 10 else user.notify.messages
     total_count = len(user.notify.messages)
     pytest.fail(
         f"No notification containing '{expected_notification}' was found within {wait_seconds} seconds. "
@@ -130,81 +93,8 @@ async def assert_notified(user: User, expected_notification: str, wait_seconds:
     )


-@pytest.hookimpl(tryfirst=True, hookwrapper=True)
-def pytest_runtest_makereport(item, call):  # noqa: ANN201
-    """Hook to suppress expected teardown errors from NiceGUI background tasks.
-
-    This hook wraps the test report generation and modifies teardown errors
-    that are expected and benign (like NiceGUI background task cancellation).
-
-    Args:
-        item: The pytest test item.
-        call: The pytest call info.
-
-    Yields:
-        None: Control to other hooks.
-    """
-    outcome = yield
-    report = outcome.get_result()
-
-    # Only process teardown phase errors that are NiceGUI-related
-    if report.when == "teardown" and report.failed and hasattr(report, "longrepr") and report.longrepr:
-        error_msg = str(report.longrepr)
-        # Known benign NiceGUI teardown errors
-        if any(
-            pattern in error_msg
-            for pattern in [
-                "Could not cancel",
-                "tasks within timeout",
-                "nicegui_run.io_bound",
-                "returned None, likely canceled by shutdown",
-                "KeyError: <_pytest.stash.StashKey",
-            ]
-        ):
-            # Mark as passed to avoid failing the test suite
-            report.outcome = "passed"
-            logger.warning(
-                "Suppressed expected NiceGUI teardown error in test '%s': %s",
-                item.nodeid,
-                error_msg[:200],
-            )
-
-
-@pytest.hookimpl(tryfirst=True, hookwrapper=True)
-def pytest_runtest_setup(item) -> Generator[None, None, None]:
-    """Capture test markers and store them in environment variable before test execution.
-
-    This hook runs before each test and sets the PYTEST_MARKERS environment variable
-    with a comma-separated list of all markers applied to the test.
-
-    Args:
-        item: The pytest test item being executed.
-
-    Yields:
-        None: This is a hookwrapper that yields control to other hooks.
-    """
-    # Get all marker names for this test item
-    markers = [marker.name for marker in item.iter_markers()]
-    # Filter out built-in pytest markers that are not user-defined
-    filtered_markers = [
-        m
-        for m in markers
-        if m not in {"parametrize", "skip", "skipif", "xfail", "usefixtures", "filterwarnings", "tryfirst", "trylast"}
-    ]
-    # Set environment variable with comma-separated markers
-    if filtered_markers:
-        os.environ["PYTEST_MARKERS"] = ",".join(sorted(filtered_markers))
-    else:
-        os.environ.pop("PYTEST_MARKERS", None)
-
-    yield
-
-    # Clean up after test
-    os.environ.pop("PYTEST_MARKERS", None)
-
-
 def pytest_collection_modifyitems(config, items) -> None:
-    """Modify collected test items by skipping tests marked as '[very_]long_running' unless matching marker given.
+    """Modify collected test items by skipping tests marked as 'long_running' unless matching marker given.

     Args:
         config: The pytest configuration object.
@@ -215,15 +105,11 @@ def pytest_collection_modifyitems(config, items) -> None:
         for item in items:
             if "long_running" in item.keywords:
                 item.add_marker(skip_me)
-            if "very_long_running" in item.keywords:
-                item.add_marker(skip_me)
-    elif config.getoption("-m") in {"not sequential", "(not sequential)"}:
+    elif config.getoption("-m") == "not sequential":
         skip_me = pytest.mark.skip(reason="skipped as only not sequential marker given on execution using '-m'")
         for item in items:
             if "long_running" in item.keywords:
                 item.add_marker(skip_me)
-            if "very_long_running" in item.keywords:
-                item.add_marker(skip_me)


 @pytest.fixture
@@ -264,6 +150,9 @@ def docker_compose_file(pytestconfig) -> str:
 def docker_setup() -> list[str] | str:
     """Commands to run when spinning up services.

+    Args:
+        scope: The scope of the fixture.
+
     Returns:
         list[str] | str: The commands to run.
     """
diff --git a/tests/constants_test.py b/tests/constants_test.py
deleted file mode 100644
index af434666..00000000
--- a/tests/constants_test.py
+++ /dev/null
@@ -1,78 +0,0 @@
-"""Test constants used across multiple test modules.
-
-Constants for application versions to test and timeouts to use for the corresponding runs
-that are shared across different test modules to ensure consistency and easy maintenance.
-""" - -import os - -SPOT_0_GS_URL = "gs://platform-api-application-test-data/heta/slides/8fafc17d-a5cc-4e9d-a982-030b1486ca88.tiff" -SPOT_0_FILENAME = "8fafc17d-a5cc-4e9d-a982-030b1486ca88.tiff" -SPOT_0_FILESIZE = 10562338 -SPOT_0_EXPECTED_RESULT_FILES = [ - ("tissue_qc_segmentation_map_image.tiff", 1698570, 10), - ("tissue_qc_geojson_polygons.json", 315019, 10), - ("tissue_segmentation_geojson_polygons.json", 927599, 10), - ("readout_generation_slide_readouts.csv", 299865, 10), - ("readout_generation_cell_readouts.csv", 1470036, 10), - ("cell_classification_geojson_polygons.json", 9915953, 10), - ("tissue_segmentation_segmentation_map_image.tiff", 2989980, 10), - ("tissue_segmentation_csv_class_information.csv", 361, 10), - ("tissue_qc_csv_class_information.csv", 236, 10), -] -SPOT_0_EXPECTED_CELLS_CLASSIFIED = (35160, 10) -SPOT_0_CRC32C = "5onqtA==" -SPOT_0_RESOLUTION_MPP = 0.26268186053789266 -SPOT_0_WIDTH = 7447 -SPOT_0_HEIGHT = 7196 - -SPOT_1_GS_URL = "gs://aignx-storage-service-dev/sample_data_formatted/9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff" -SPOT_1_FILENAME = "9375e3ed-28d2-4cf3-9fb9-8df9d11a6627.tiff" -SPOT_1_FILESIZE = 14681750 -SPOT_1_EXPECTED_RESULT_FILES = [ - ("tissue_qc_segmentation_map_image.tiff", 464908, 10), - ("tissue_qc_geojson_polygons.json", 180522, 10), - ("tissue_segmentation_geojson_polygons.json", 270931, 10), - ("readout_generation_slide_readouts.csv", 295268, 10), - ("readout_generation_cell_readouts.csv", 2228907, 10), - ("cell_classification_geojson_polygons.json", 16054058, 10), - ("tissue_segmentation_segmentation_map_image.tiff", 581258, 10), - ("tissue_segmentation_csv_class_information.csv", 342, 10), - ("tissue_qc_csv_class_information.csv", 232, 10), -] -SPOT_1_CRC32C = "9l3NNQ==" -SPOT_1_WIDTH = 3728 -SPOT_1_HEIGHT = 3640 -SPOT_1_RESOLUTION_MPP = 0.46499982 - -SPOT_2_GS_URL = "gs://aignx-storage-service-dev/sample_data_formatted/8c7b079e-8b8a-4036-bfde-5818352b503a.tiff" -SPOT_2_FILENAME = "8c7b079e-8b8a-4036-bfde-5818352b503a.tiff" -SPOT_2_FILESIZE = 20153772 -SPOT_2_CRC32C = "w+ud3g==" -SPOT_2_WIDTH = 3616 -SPOT_2_HEIGHT = 3400 -SPOT_2_RESOLUTION_MPP = 0.46499982 - -SPOT_3_GS_URL = "gs://aignx-storage-service-dev/sample_data_formatted/1f4f366f-a2c5-4407-9f5e-23400b22d50e.tiff" -SPOT_3_FILENAME = "1f4f366f-a2c5-4407-9f5e-23400b22d50e.tiff" -SPOT_3_CRC32C = "Zmx0wA==" -SPOT_3_WIDTH = 4016 -SPOT_3_HEIGHT = 3952 -SPOT_3_RESOLUTION_MPP = 0.46499982 - -match os.getenv("AIGNOSTICS_PLATFORM_ENVIRONMENT", "production"): - case "production": - TEST_APPLICATION_ID = "test-app" - TEST_APPLICATION_VERSION = "0.0.4" - - HETA_APPLICATION_ID = "he-tme" - HETA_APPLICATION_VERSION = "1.0.0-beta.8" - case "staging": - TEST_APPLICATION_ID = "test-app" - TEST_APPLICATION_VERSION = "0.0.5" - - HETA_APPLICATION_ID = "he-tme" - HETA_APPLICATION_VERSION = "1.0.0-sl+3" - case _: - message = f"Unsupported AIGNOSTICS_PLATFORM_ENVIRONMENT value: {os.getenv('AIGNOSTICS_PLATFORM_ENVIRONMENT')}" - raise ValueError(message) diff --git a/tests/contants_test.py b/tests/contants_test.py new file mode 100644 index 00000000..0d63d803 --- /dev/null +++ b/tests/contants_test.py @@ -0,0 +1,15 @@ +"""Test constants used across multiple test modules. + +Constants for application versions to test and timeouts to use for the corresponding runs +that are shared across different test modules to ensure consistency and easy maintenance. 
+""" + +# Test Application constants +TEST_APPLICATION_ID = "test-app" +TEST_APPLICATION_VERSION_ID = "test-app:v0.0.1" +TEST_APPLICATION_TIMEOUT_SECONDS = 2 * 60 * 60 # 1 hour + +# HETA Application constants +HETA_APPLICATION_ID = "he-tme" +HETA_APPLICATION_VERSION_ID = "he-tme:v1.0.0-beta.8" +HETA_APPLICATION_TIMEOUT_SECONDS = 6 * 60 * 60 # 6 hours diff --git a/tests/resources/cells_broken.json b/tests/resources/cells_broken.json deleted file mode 100644 index e69de29b..00000000 diff --git a/uv.lock b/uv.lock index f0255908..c8df7e43 100644 --- a/uv.lock +++ b/uv.lock @@ -1,8 +1,9 @@ version = 1 revision = 3 -requires-python = ">=3.11, <3.14" +requires-python = ">=3.11, <4.0" resolution-markers = [ - "python_full_version >= '3.13'", + "python_full_version >= '3.14'", + "python_full_version == '3.13.*'", "python_full_version == '3.12.*'", "python_full_version < '3.12'", ] @@ -17,10 +18,9 @@ overrides = [ { name = "pip", specifier = ">=5.3" }, { name = "rfc3987", marker = "sys_platform == 'never'" }, { name = "starlette", specifier = ">=0.47.2" }, - { name = "starlette", specifier = ">=0.49.1" }, { name = "tornado", specifier = ">=6.5.0" }, { name = "urllib3", specifier = ">=2.5.0" }, - { name = "uv", specifier = ">=0.9.7" }, + { name = "uv", specifier = ">=0.8.9" }, ] [[package]] @@ -37,14 +37,13 @@ wheels = [ [[package]] name = "aignostics" -version = "0.2.197" +version = "0.2.189" source = { editable = "." } dependencies = [ { name = "aiopath", version = "0.6.11", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.12'" }, { name = "aiopath", version = "0.7.7", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.12'" }, { name = "boto3" }, { name = "certifi" }, - { name = "defusedxml" }, { name = "dicom-validator" }, { name = "dicomweb-client", extra = ["gcp"] }, { name = "duckdb" }, @@ -162,31 +161,30 @@ dev = [ [package.metadata] requires-dist = [ { name = "aiopath", specifier = ">=0.6.11,<1" }, - { name = "boto3", specifier = ">=1.40.64,<2" }, + { name = "boto3", specifier = ">=1.40.47,<2" }, { name = "certifi", specifier = ">=2025.10.5,<2026" }, { name = "cloudpathlib", marker = "extra == 'marimo'", specifier = ">=0.23.0,<1" }, - { name = "defusedxml", specifier = ">=0.7.1" }, - { name = "dicom-validator", specifier = ">=0.7.3,<1" }, + { name = "dicom-validator", specifier = ">=0.7.2,<1" }, { name = "dicomweb-client", extras = ["gcp"], specifier = ">=0.59.3,<1" }, { name = "duckdb", specifier = ">=0.10.0,<=1.4.1" }, - { name = "fastapi", extras = ["standard", "all"], specifier = ">=0.120.4,<1" }, + { name = "fastapi", extras = ["standard", "all"], specifier = ">=0.118.2,<1" }, { name = "fastparquet", specifier = ">=2024.11.0,<2025" }, - { name = "google-cloud-storage", specifier = ">=3.4.1,<4" }, + { name = "google-cloud-storage", specifier = ">=3.4.0,<4" }, { name = "google-crc32c", specifier = ">=1.7.1,<2" }, { name = "highdicom", specifier = ">=0.26.1,<1" }, { name = "html-sanitizer", specifier = ">=2.6.0,<3" }, { name = "httpx", specifier = ">=0.28.1,<1" }, - { name = "humanize", specifier = ">=4.14.0,<5" }, + { name = "humanize", specifier = ">=4.13.0,<5" }, { name = "idc-index-data", specifier = "==22.0.2" }, - { name = "ijson", specifier = ">=3.4.0.post0,<4" }, + { name = "ijson", specifier = ">=3.4.0,<4" }, { name = "ipython", marker = "extra == 'marimo'", specifier = ">=9.6.0,<10" }, { name = "jsf", specifier = ">=0.11.2,<1" }, { name = "jsonschema", extras = ["format-nongpl"], specifier = ">=4.25.1,<5" }, 
{ name = "jupyter", marker = "extra == 'jupyter'", specifier = ">=1.1.1,<2" }, - { name = "logfire", extras = ["system-metrics"], specifier = ">=4.14.2,<5" }, - { name = "marimo", marker = "extra == 'marimo'", specifier = ">=0.17.2,<1" }, - { name = "matplotlib", marker = "extra == 'marimo'", specifier = ">=3.10.7,<4" }, - { name = "nicegui", extras = ["native"], specifier = ">=3.1.0,<3.2.0" }, + { name = "logfire", extras = ["system-metrics"], specifier = ">=4.12.0,<5" }, + { name = "marimo", marker = "extra == 'marimo'", specifier = ">=0.16.5,<1" }, + { name = "matplotlib", marker = "extra == 'marimo'", specifier = ">=3.10.6,<4" }, + { name = "nicegui", extras = ["native"], specifier = ">=3.0.3,<4" }, { name = "openslide-bin", specifier = ">=4.0.0.8,<5" }, { name = "openslide-python", specifier = ">=1.4.2,<2" }, { name = "opentelemetry-instrumentation-fastapi", specifier = ">=0.53b0,<1" }, @@ -200,8 +198,8 @@ requires-dist = [ { name = "packaging", specifier = ">=25.0,<26" }, { name = "pandas", specifier = ">=2.3.3,<3" }, { name = "platformdirs", specifier = ">=4.3.8,<5" }, - { name = "platformdirs", specifier = ">=4.5.0,<5" }, - { name = "psutil", specifier = ">=7.1.2,<8" }, + { name = "platformdirs", specifier = ">=4.4.0,<5" }, + { name = "psutil", specifier = ">=7.1.0,<8" }, { name = "pydantic-settings", specifier = ">=2.11.0,<3" }, { name = "pyinstaller", marker = "extra == 'pyinstaller'", specifier = ">=6.14.0,<7" }, { name = "pyjwt", extras = ["crypto"], specifier = ">=2.10.1,<3" }, @@ -212,13 +210,13 @@ requires-dist = [ { name = "requests-oauthlib", specifier = ">=2.0.0,<3" }, { name = "s5cmd", specifier = ">=0.3.3,<1" }, { name = "semver", specifier = ">=3.0.4,<4" }, - { name = "sentry-sdk", specifier = ">=2.43.0,<3" }, + { name = "sentry-sdk", specifier = ">=2.40.0,<3" }, { name = "shapely", specifier = ">=2.1.2,<3" }, { name = "shapely", marker = "extra == 'marimo'", specifier = ">=2.1.0,<3" }, { name = "tenacity", specifier = ">=9.1.2,<10" }, { name = "tqdm", specifier = ">=4.67.1,<5" }, { name = "truststore", specifier = ">=0.10.4,<1" }, - { name = "typer", specifier = ">=0.20.0,<1" }, + { name = "typer", specifier = ">=0.19.2,<1" }, { name = "uptime", specifier = ">=3.0.1,<4" }, { name = "urllib3", specifier = ">=2.5.0,<3" }, { name = "wsidicom", specifier = ">=0.28.1,<1" }, @@ -235,17 +233,17 @@ dev = [ { name = "furo", specifier = ">=2025.9.25,<2026" }, { name = "git-cliff", specifier = ">=2.10.1,<3" }, { name = "mypy", specifier = ">=1.18.2,<2" }, - { name = "nox", extras = ["uv"], specifier = ">=2025.10.16,<2026" }, + { name = "nox", extras = ["uv"], specifier = ">=2025.5.1,<2026" }, { name = "pip-audit", specifier = ">=2.9.0,<3" }, { name = "pip-licenses", git = "https://github.com/neXenio/pip-licenses.git?rev=master" }, { name = "pre-commit", specifier = ">=4.3.0,<5" }, - { name = "pyright", specifier = ">=1.1.406,<1.1.407" }, + { name = "pyright", specifier = ">=1.1.406,<2" }, { name = "pytest", specifier = ">=8.4.2,<9" }, { name = "pytest-asyncio", specifier = ">=1.2.0,<2" }, { name = "pytest-cov", specifier = ">=7.0.0,<8" }, { name = "pytest-docker", specifier = ">=3.2.3,<4" }, { name = "pytest-durations", specifier = ">=1.6.1,<2" }, - { name = "pytest-env", specifier = ">=1.2.0,<2" }, + { name = "pytest-env", specifier = ">=1.1.5,<2" }, { name = "pytest-md-report", specifier = ">=0.7.0,<1" }, { name = "pytest-regressions", specifier = ">=2.8.3,<3" }, { name = "pytest-retry", specifier = ">=1.7.0,<2" }, @@ -254,7 +252,7 @@ dev = [ { name = "pytest-timeout", 
specifier = ">=2.4.0,<3" }, { name = "pytest-watcher", specifier = ">=0.4.3,<1" }, { name = "pytest-xdist", extras = ["psutil"], specifier = ">=3.8.0,<4" }, - { name = "ruff", specifier = ">=0.14.3,<1" }, + { name = "ruff", specifier = ">=0.14.0,<1" }, { name = "scalene", specifier = ">=1.5.55,<2" }, { name = "sphinx", specifier = ">=8.2.3,<9" }, { name = "sphinx-autobuild", specifier = ">=2025.8.25,<2026" }, @@ -264,10 +262,10 @@ dev = [ { name = "sphinx-mdinclude", specifier = ">=0.6.2,<1" }, { name = "sphinx-rtd-theme", specifier = ">=3.0.2,<4" }, { name = "sphinx-selective-exclude", specifier = ">=1.0.3,<2" }, - { name = "sphinx-toolbox", specifier = ">=3.9.0,<4" }, + { name = "sphinx-toolbox", specifier = ">=4,<5" }, { name = "sphinxext-opengraph", specifier = ">=0.9.1,<1" }, - { name = "swagger-plugin-for-sphinx", specifier = ">=5.2.0,<6" }, - { name = "tomli", specifier = ">=2.3.0,<3" }, + { name = "swagger-plugin-for-sphinx", specifier = ">=5.1.3,<6" }, + { name = "tomli", specifier = ">=2.1.0,<3" }, { name = "types-pyyaml", specifier = ">=6.0.12.20250915,<7" }, { name = "types-requests", specifier = ">=2.32.4.20250913,<3" }, { name = "watchdog", specifier = ">=6.0.0,<7" }, @@ -287,11 +285,11 @@ wheels = [ [[package]] name = "aiofiles" -version = "25.1.0" +version = "24.1.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/41/c3/534eac40372d8ee36ef40df62ec129bee4fdb5ad9706e58a29be53b2c970/aiofiles-25.1.0.tar.gz", hash = "sha256:a8d728f0a29de45dc521f18f07297428d56992a742f0cd2701ba86e44d23d5b2", size = 46354, upload-time = "2025-10-09T20:51:04.358Z" } +sdist = { url = "https://files.pythonhosted.org/packages/0b/03/a88171e277e8caa88a4c77808c20ebb04ba74cc4681bf1e9416c862de237/aiofiles-24.1.0.tar.gz", hash = "sha256:22a075c9e5a3810f0c2e48f3008c94d68c65d763b9b03857924c99e57355166c", size = 30247, upload-time = "2024-06-24T11:02:03.584Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/bc/8a/340a1555ae33d7354dbca4faa54948d76d89a27ceef032c8c3bc661d003e/aiofiles-25.1.0-py3-none-any.whl", hash = "sha256:abe311e527c862958650f9438e859c1fa7568a141b22abcd015e120e86a85695", size = 14668, upload-time = "2025-10-09T20:51:03.174Z" }, + { url = "https://files.pythonhosted.org/packages/a5/45/30bb92d442636f570cb5651bc661f52b610e2eec3f891a5dc3a4c3667db0/aiofiles-24.1.0-py3-none-any.whl", hash = "sha256:b4ec55f4195e3eb5d7abd1bf7e061763e864dd4954231fb8539a0ef8bb8260e5", size = 15896, upload-time = "2024-06-24T11:02:01.529Z" }, ] [[package]] @@ -305,7 +303,7 @@ wheels = [ [[package]] name = "aiohttp" -version = "3.13.0" +version = "3.12.15" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "aiohappyeyeballs" }, @@ -316,59 +314,59 @@ dependencies = [ { name = "propcache" }, { name = "yarl" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/62/f1/8515650ac3121a9e55c7b217c60e7fae3e0134b5acfe65691781b5356929/aiohttp-3.13.0.tar.gz", hash = "sha256:378dbc57dd8cf341ce243f13fa1fa5394d68e2e02c15cd5f28eae35a70ec7f67", size = 7832348, upload-time = "2025-10-06T19:58:48.089Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/b1/db/df80cacac46cd548a736c5535b13cc18925cf6f9f83cd128cf3839842219/aiohttp-3.13.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:99eb94e97a42367fef5fc11e28cb2362809d3e70837f6e60557816c7106e2e20", size = 741374, upload-time = "2025-10-06T19:55:13.095Z" }, - { url = 
"https://files.pythonhosted.org/packages/ae/f9/2d6d93fd57ab4726e18a7cdab083772eda8302d682620fbf2aef48322351/aiohttp-3.13.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:4696665b2713021c6eba3e2b882a86013763b442577fe5d2056a42111e732eca", size = 494956, upload-time = "2025-10-06T19:55:14.687Z" }, - { url = "https://files.pythonhosted.org/packages/89/a6/e1c061b079fed04ffd6777950c82f2e8246fd08b7b3c4f56fdd47f697e5a/aiohttp-3.13.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3e6a38366f7f0d0f6ed7a1198055150c52fda552b107dad4785c0852ad7685d1", size = 491154, upload-time = "2025-10-06T19:55:16.661Z" }, - { url = "https://files.pythonhosted.org/packages/fe/4d/ee8913c0d2c7da37fdc98673a342b51611eaa0871682b37b8430084e35b5/aiohttp-3.13.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:aab715b1a0c37f7f11f9f1f579c6fbaa51ef569e47e3c0a4644fba46077a9409", size = 1745707, upload-time = "2025-10-06T19:55:18.376Z" }, - { url = "https://files.pythonhosted.org/packages/f9/70/26b2c97e8fa68644aec43d788940984c5f3b53a8d1468d5baaa328f809c9/aiohttp-3.13.0-cp311-cp311-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:7972c82bed87d7bd8e374b60a6b6e816d75ba4f7c2627c2d14eed216e62738e1", size = 1702404, upload-time = "2025-10-06T19:55:20.098Z" }, - { url = "https://files.pythonhosted.org/packages/65/1e/c8aa3c293a0e8b18968b1b88e9bd8fb269eb67eb7449f504a4c3e175b159/aiohttp-3.13.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:ca8313cb852af788c78d5afdea24c40172cbfff8b35e58b407467732fde20390", size = 1805519, upload-time = "2025-10-06T19:55:21.811Z" }, - { url = "https://files.pythonhosted.org/packages/51/b6/a3753fe86249eb441768658cfc00f8c4e0913b255c13be00ddb8192775e1/aiohttp-3.13.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:6c333a2385d2a6298265f4b3e960590f787311b87f6b5e6e21bb8375914ef504", size = 1893904, upload-time = "2025-10-06T19:55:23.462Z" }, - { url = "https://files.pythonhosted.org/packages/51/6d/7b1e020fe1d2a2be7cf0ce5e35922f345e3507cf337faa1a6563c42065c1/aiohttp-3.13.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:cc6d5fc5edbfb8041d9607f6a417997fa4d02de78284d386bea7ab767b5ea4f3", size = 1745043, upload-time = "2025-10-06T19:55:25.208Z" }, - { url = "https://files.pythonhosted.org/packages/e6/df/aad5dce268f9d4f29759c3eeb5fb5995c569d76abb267468dc1075218d5b/aiohttp-3.13.0-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:7ddedba3d0043349edc79df3dc2da49c72b06d59a45a42c1c8d987e6b8d175b8", size = 1604765, upload-time = "2025-10-06T19:55:27.157Z" }, - { url = "https://files.pythonhosted.org/packages/1c/19/a84a0e97b2da2224c8b85e1aef5cac834d07b2903c17bff1a6bdbc7041d2/aiohttp-3.13.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:23ca762140159417a6bbc959ca1927f6949711851e56f2181ddfe8d63512b5ad", size = 1721737, upload-time = "2025-10-06T19:55:28.854Z" }, - { url = "https://files.pythonhosted.org/packages/6c/61/ca6ad390128d964a08554fd63d6df5810fb5fbc7e599cb9e617f1729ae19/aiohttp-3.13.0-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:bfe824d6707a5dc3c5676685f624bc0c63c40d79dc0239a7fd6c034b98c25ebe", size = 1716052, upload-time = "2025-10-06T19:55:30.563Z" }, - { url = "https://files.pythonhosted.org/packages/2a/71/769e249e6625372c7d14be79b8b8c3b0592963a09793fb3d36758e60952c/aiohttp-3.13.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = 
"sha256:3c11fa5dd2ef773a8a5a6daa40243d83b450915992eab021789498dc87acc114", size = 1783532, upload-time = "2025-10-06T19:55:32.798Z" }, - { url = "https://files.pythonhosted.org/packages/66/64/b9cd03cdbb629bc492e4a744fbe96550a8340b0cd7a0cc4a9c90cfecd8d3/aiohttp-3.13.0-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:00fdfe370cffede3163ba9d3f190b32c0cfc8c774f6f67395683d7b0e48cdb8a", size = 1593072, upload-time = "2025-10-06T19:55:34.686Z" }, - { url = "https://files.pythonhosted.org/packages/24/0e/87922c8cfdbd09f5e2197e9d87714a98c99c423560d44739e3af55400fe3/aiohttp-3.13.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:6475e42ef92717a678bfbf50885a682bb360a6f9c8819fb1a388d98198fdcb80", size = 1798613, upload-time = "2025-10-06T19:55:36.393Z" }, - { url = "https://files.pythonhosted.org/packages/c5/bb/a3adfe2af76e1ee9e3b5464522004b148b266bc99d7ec424ca7843d64a3c/aiohttp-3.13.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:77da5305a410910218b99f2a963092f4277d8a9c1f429c1ff1b026d1826bd0b6", size = 1737480, upload-time = "2025-10-06T19:55:38.043Z" }, - { url = "https://files.pythonhosted.org/packages/ad/53/e124dcbd64e6365602f3493fe37a11ca5b7ac0a40822a6e2bc8260cd08e0/aiohttp-3.13.0-cp311-cp311-win32.whl", hash = "sha256:2f9d9ea547618d907f2ee6670c9a951f059c5994e4b6de8dcf7d9747b420c820", size = 429824, upload-time = "2025-10-06T19:55:39.595Z" }, - { url = "https://files.pythonhosted.org/packages/3e/bd/485d98b372a2cd6998484a93ddd401ec6b6031657661c36846a10e2a1f6e/aiohttp-3.13.0-cp311-cp311-win_amd64.whl", hash = "sha256:0f19f7798996d4458c669bd770504f710014926e9970f4729cf55853ae200469", size = 454137, upload-time = "2025-10-06T19:55:41.617Z" }, - { url = "https://files.pythonhosted.org/packages/3a/95/7e8bdfa6e79099a086d59d42589492f1fe9d29aae3cefb58b676015ce278/aiohttp-3.13.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:1c272a9a18a5ecc48a7101882230046b83023bb2a662050ecb9bfcb28d9ab53a", size = 735585, upload-time = "2025-10-06T19:55:43.401Z" }, - { url = "https://files.pythonhosted.org/packages/9f/20/2f1d3ee06ee94eafe516810705219bff234d09f135d6951661661d5595ae/aiohttp-3.13.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:97891a23d7fd4e1afe9c2f4473e04595e4acb18e4733b910b6577b74e7e21985", size = 490613, upload-time = "2025-10-06T19:55:45.237Z" }, - { url = "https://files.pythonhosted.org/packages/74/15/ab8600ef6dc1dcd599009a81acfed2ea407037e654d32e47e344e0b08c34/aiohttp-3.13.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:475bd56492ce5f4cffe32b5533c6533ee0c406d1d0e6924879f83adcf51da0ae", size = 489750, upload-time = "2025-10-06T19:55:46.937Z" }, - { url = "https://files.pythonhosted.org/packages/33/59/752640c2b86ca987fe5703a01733b00d375e6cd2392bc7574489934e64e5/aiohttp-3.13.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c32ada0abb4bc94c30be2b681c42f058ab104d048da6f0148280a51ce98add8c", size = 1736812, upload-time = "2025-10-06T19:55:48.917Z" }, - { url = "https://files.pythonhosted.org/packages/3d/c6/dd6b86ddb852a7fdbcdc7a45b6bdc80178aef713c08279afcaee7a5a9f07/aiohttp-3.13.0-cp312-cp312-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:4af1f8877ca46ecdd0bc0d4a6b66d4b2bddc84a79e2e8366bc0d5308e76bceb8", size = 1698535, upload-time = "2025-10-06T19:55:50.75Z" }, - { url = "https://files.pythonhosted.org/packages/33/e2/27c92d205b9e8cee7661670e8e9f187931b71e26d42796b153d2a0ba6949/aiohttp-3.13.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", 
hash = "sha256:e04ab827ec4f775817736b20cdc8350f40327f9b598dec4e18c9ffdcbea88a93", size = 1766573, upload-time = "2025-10-06T19:55:53.106Z" }, - { url = "https://files.pythonhosted.org/packages/df/6a/1fc1ad71d130a30f7a207d8d958a41224c29b834463b5185efb2dbff6ad4/aiohttp-3.13.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a6d9487b9471ec36b0faedf52228cd732e89be0a2bbd649af890b5e2ce422353", size = 1865229, upload-time = "2025-10-06T19:55:55.01Z" }, - { url = "https://files.pythonhosted.org/packages/14/51/d0c1701a79fcb0109cff5304da16226581569b89a282d8e7f1549a7e3ec0/aiohttp-3.13.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2e66c57416352f36bf98f6641ddadd47c93740a22af7150d3e9a1ef6e983f9a8", size = 1750379, upload-time = "2025-10-06T19:55:57.219Z" }, - { url = "https://files.pythonhosted.org/packages/ae/3d/2ec4b934f85856de1c0c18e90adc8902adadbfac2b3c0b831bfeb7214fc8/aiohttp-3.13.0-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:469167d5372f5bb3aedff4fc53035d593884fff2617a75317740e885acd48b04", size = 1560798, upload-time = "2025-10-06T19:55:58.888Z" }, - { url = "https://files.pythonhosted.org/packages/38/56/e23d9c3e13006e599fdce3851517c70279e177871e3e567d22cf3baf5d6c/aiohttp-3.13.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a9f3546b503975a69b547c9fd1582cad10ede1ce6f3e313a2f547c73a3d7814f", size = 1697552, upload-time = "2025-10-06T19:56:01.172Z" }, - { url = "https://files.pythonhosted.org/packages/56/cb/caa32c2ccaeca0a3dc39129079fd2ad02f9406c3a5f7924340435b87d4cd/aiohttp-3.13.0-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:6b4174fcec98601f0cfdf308ee29a6ae53c55f14359e848dab4e94009112ee7d", size = 1718609, upload-time = "2025-10-06T19:56:03.102Z" }, - { url = "https://files.pythonhosted.org/packages/fb/c0/5911856fef9e40fd1ccbb8c54a90116875d5753a92c1cac66ce2059b390d/aiohttp-3.13.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:a533873a7a4ec2270fb362ee5a0d3b98752e4e1dc9042b257cd54545a96bd8ed", size = 1735887, upload-time = "2025-10-06T19:56:04.841Z" }, - { url = "https://files.pythonhosted.org/packages/0e/48/8d6f4757a24c02f0a454c043556593a00645d10583859f7156db44d8b7d3/aiohttp-3.13.0-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:ce887c5e54411d607ee0959cac15bb31d506d86a9bcaddf0b7e9d63325a7a802", size = 1553079, upload-time = "2025-10-06T19:56:07.197Z" }, - { url = "https://files.pythonhosted.org/packages/39/fa/e82c9445e40b50e46770702b5b6ca2f767966d53e1a5eef03583ceac6df6/aiohttp-3.13.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:d871f6a30d43e32fc9252dc7b9febe1a042b3ff3908aa83868d7cf7c9579a59b", size = 1762750, upload-time = "2025-10-06T19:56:09.376Z" }, - { url = "https://files.pythonhosted.org/packages/3d/e6/9d30554e7f1e700bfeae4ab6b153d5dc7441606a9ec5e929288fa93a1477/aiohttp-3.13.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:222c828243b4789d79a706a876910f656fad4381661691220ba57b2ab4547865", size = 1717461, upload-time = "2025-10-06T19:56:11.551Z" }, - { url = "https://files.pythonhosted.org/packages/1f/e5/29cca547990a59ea54f0674fc01de98519fc628cfceeab6175711750eca7/aiohttp-3.13.0-cp312-cp312-win32.whl", hash = "sha256:682d2e434ff2f1108314ff7f056ce44e457f12dbed0249b24e106e385cf154b9", size = 424633, upload-time = "2025-10-06T19:56:13.316Z" }, - { url = "https://files.pythonhosted.org/packages/8b/68/46dd042d7bc62eab30bafdb8569f55ef125c3a88bb174270324224f8df56/aiohttp-3.13.0-cp312-cp312-win_amd64.whl", hash = 
"sha256:0a2be20eb23888df130214b91c262a90e2de1553d6fb7de9e9010cec994c0ff2", size = 451401, upload-time = "2025-10-06T19:56:15.188Z" }, - { url = "https://files.pythonhosted.org/packages/86/2c/ac53efdc9c10e41399acc2395af98f835b86d0141d5c3820857eb9f6a14a/aiohttp-3.13.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:00243e51f16f6ec0fb021659d4af92f675f3cf9f9b39efd142aa3ad641d8d1e6", size = 730090, upload-time = "2025-10-06T19:56:16.858Z" }, - { url = "https://files.pythonhosted.org/packages/13/18/1ac95683e1c1d48ef4503965c96f5401618a04c139edae12e200392daae8/aiohttp-3.13.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:059978d2fddc462e9211362cbc8446747ecd930537fa559d3d25c256f032ff54", size = 488041, upload-time = "2025-10-06T19:56:18.659Z" }, - { url = "https://files.pythonhosted.org/packages/fd/79/ef0d477c771a642d1a881b92d226314c43d3c74bc674c93e12e679397a97/aiohttp-3.13.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:564b36512a7da3b386143c611867e3f7cfb249300a1bf60889bd9985da67ab77", size = 486989, upload-time = "2025-10-06T19:56:20.371Z" }, - { url = "https://files.pythonhosted.org/packages/37/b4/0e440481a0e77a551d6c5dcab5d11f1ff6b2b2ddb8dedc24f54f5caad732/aiohttp-3.13.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4aa995b9156ae499393d949a456a7ab0b994a8241a96db73a3b73c7a090eff6a", size = 1718331, upload-time = "2025-10-06T19:56:22.188Z" }, - { url = "https://files.pythonhosted.org/packages/e6/59/76c421cc4a75bb1aceadb92f20ee6f05a990aa6960c64b59e8e0d340e3f5/aiohttp-3.13.0-cp313-cp313-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:55ca0e95a3905f62f00900255ed807c580775174252999286f283e646d675a49", size = 1686263, upload-time = "2025-10-06T19:56:24.393Z" }, - { url = "https://files.pythonhosted.org/packages/ec/ac/5095f12a79c7775f402cfc3e83651b6e0a92ade10ddf7f2c78c4fed79f71/aiohttp-3.13.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:49ce7525853a981fc35d380aa2353536a01a9ec1b30979ea4e35966316cace7e", size = 1754265, upload-time = "2025-10-06T19:56:26.365Z" }, - { url = "https://files.pythonhosted.org/packages/05/d7/a48e4989bd76cc70600c505bbdd0d90ca1ad7f9053eceeb9dbcf9345a9ec/aiohttp-3.13.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:2117be9883501eaf95503bd313eb4c7a23d567edd44014ba15835a1e9ec6d852", size = 1856486, upload-time = "2025-10-06T19:56:28.438Z" }, - { url = "https://files.pythonhosted.org/packages/1e/02/45b388b49e37933f316e1fb39c0de6fb1d77384b0c8f4cf6af5f2cbe3ea6/aiohttp-3.13.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d169c47e40c911f728439da853b6fd06da83761012e6e76f11cb62cddae7282b", size = 1737545, upload-time = "2025-10-06T19:56:30.688Z" }, - { url = "https://files.pythonhosted.org/packages/6c/a7/4fde058f1605c34a219348a83a99f14724cc64e68a42480fc03cf40f9ea3/aiohttp-3.13.0-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:703ad3f742fc81e543638a7bebddd35acadaa0004a5e00535e795f4b6f2c25ca", size = 1552958, upload-time = "2025-10-06T19:56:32.528Z" }, - { url = "https://files.pythonhosted.org/packages/d1/12/0bac4d29231981e3aa234e88d1931f6ba38135ff4c2cf3afbb7895527630/aiohttp-3.13.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:5bf635c3476f4119b940cc8d94ad454cbe0c377e61b4527f0192aabeac1e9370", size = 1681166, upload-time = "2025-10-06T19:56:34.81Z" }, - { url = 
"https://files.pythonhosted.org/packages/71/95/b829eb5f8ac1ca1d8085bb8df614c8acf3ff32e23ad5ad1173c7c9761daa/aiohttp-3.13.0-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:cfe6285ef99e7ee51cef20609be2bc1dd0e8446462b71c9db8bb296ba632810a", size = 1710516, upload-time = "2025-10-06T19:56:36.787Z" }, - { url = "https://files.pythonhosted.org/packages/47/6d/15ccf4ef3c254d899f62580e0c7fc717014f4d14a3ac31771e505d2c736c/aiohttp-3.13.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:34d8af6391c5f2e69749d7f037b614b8c5c42093c251f336bdbfa4b03c57d6c4", size = 1731354, upload-time = "2025-10-06T19:56:38.659Z" }, - { url = "https://files.pythonhosted.org/packages/46/6a/8acf6c57e03b6fdcc8b4c06392e66abaff3213ea275e41db3edb20738d91/aiohttp-3.13.0-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:12f5d820fadc5848d4559ea838aef733cf37ed2a1103bba148ac2f5547c14c29", size = 1548040, upload-time = "2025-10-06T19:56:40.578Z" }, - { url = "https://files.pythonhosted.org/packages/75/7d/fbfd59ab2a83fe2578ce79ac3db49727b81e9f4c3376217ad09c03c6d279/aiohttp-3.13.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:0f1338b61ea66f4757a0544ed8a02ccbf60e38d9cfb3225888888dd4475ebb96", size = 1756031, upload-time = "2025-10-06T19:56:42.492Z" }, - { url = "https://files.pythonhosted.org/packages/99/e7/cc9f0fdf06cab3ca61e6b62bff9a4b978b8ca736e9d76ddf54365673ab19/aiohttp-3.13.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:582770f82513419512da096e8df21ca44f86a2e56e25dc93c5ab4df0fe065bf0", size = 1714933, upload-time = "2025-10-06T19:56:45.542Z" }, - { url = "https://files.pythonhosted.org/packages/db/43/7abbe1de94748a58a71881163ee280fd3217db36e8344d109f63638fe16a/aiohttp-3.13.0-cp313-cp313-win32.whl", hash = "sha256:3194b8cab8dbc882f37c13ef1262e0a3d62064fa97533d3aa124771f7bf1ecee", size = 423799, upload-time = "2025-10-06T19:56:47.779Z" }, - { url = "https://files.pythonhosted.org/packages/c9/58/afab7f2b9e7df88c995995172eb78cae8a3d5a62d5681abaade86b3f0089/aiohttp-3.13.0-cp313-cp313-win_amd64.whl", hash = "sha256:7897298b3eedc790257fef8a6ec582ca04e9dbe568ba4a9a890913b925b8ea21", size = 450138, upload-time = "2025-10-06T19:56:49.49Z" }, +sdist = { url = "https://files.pythonhosted.org/packages/9b/e7/d92a237d8802ca88483906c388f7c201bbe96cd80a165ffd0ac2f6a8d59f/aiohttp-3.12.15.tar.gz", hash = "sha256:4fc61385e9c98d72fcdf47e6dd81833f47b2f77c114c29cd64a361be57a763a2", size = 7823716, upload-time = "2025-07-29T05:52:32.215Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/20/19/9e86722ec8e835959bd97ce8c1efa78cf361fa4531fca372551abcc9cdd6/aiohttp-3.12.15-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:d3ce17ce0220383a0f9ea07175eeaa6aa13ae5a41f30bc61d84df17f0e9b1117", size = 711246, upload-time = "2025-07-29T05:50:15.937Z" }, + { url = "https://files.pythonhosted.org/packages/71/f9/0a31fcb1a7d4629ac9d8f01f1cb9242e2f9943f47f5d03215af91c3c1a26/aiohttp-3.12.15-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:010cc9bbd06db80fe234d9003f67e97a10fe003bfbedb40da7d71c1008eda0fe", size = 483515, upload-time = "2025-07-29T05:50:17.442Z" }, + { url = "https://files.pythonhosted.org/packages/62/6c/94846f576f1d11df0c2e41d3001000527c0fdf63fce7e69b3927a731325d/aiohttp-3.12.15-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3f9d7c55b41ed687b9d7165b17672340187f87a773c98236c987f08c858145a9", size = 471776, upload-time = "2025-07-29T05:50:19.568Z" }, + { url = 
"https://files.pythonhosted.org/packages/f8/6c/f766d0aaafcee0447fad0328da780d344489c042e25cd58fde566bf40aed/aiohttp-3.12.15-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc4fbc61bb3548d3b482f9ac7ddd0f18c67e4225aaa4e8552b9f1ac7e6bda9e5", size = 1741977, upload-time = "2025-07-29T05:50:21.665Z" }, + { url = "https://files.pythonhosted.org/packages/17/e5/fb779a05ba6ff44d7bc1e9d24c644e876bfff5abe5454f7b854cace1b9cc/aiohttp-3.12.15-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:7fbc8a7c410bb3ad5d595bb7118147dfbb6449d862cc1125cf8867cb337e8728", size = 1690645, upload-time = "2025-07-29T05:50:23.333Z" }, + { url = "https://files.pythonhosted.org/packages/37/4e/a22e799c2035f5d6a4ad2cf8e7c1d1bd0923192871dd6e367dafb158b14c/aiohttp-3.12.15-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:74dad41b3458dbb0511e760fb355bb0b6689e0630de8a22b1b62a98777136e16", size = 1789437, upload-time = "2025-07-29T05:50:25.007Z" }, + { url = "https://files.pythonhosted.org/packages/28/e5/55a33b991f6433569babb56018b2fb8fb9146424f8b3a0c8ecca80556762/aiohttp-3.12.15-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3b6f0af863cf17e6222b1735a756d664159e58855da99cfe965134a3ff63b0b0", size = 1828482, upload-time = "2025-07-29T05:50:26.693Z" }, + { url = "https://files.pythonhosted.org/packages/c6/82/1ddf0ea4f2f3afe79dffed5e8a246737cff6cbe781887a6a170299e33204/aiohttp-3.12.15-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b5b7fe4972d48a4da367043b8e023fb70a04d1490aa7d68800e465d1b97e493b", size = 1730944, upload-time = "2025-07-29T05:50:28.382Z" }, + { url = "https://files.pythonhosted.org/packages/1b/96/784c785674117b4cb3877522a177ba1b5e4db9ce0fd519430b5de76eec90/aiohttp-3.12.15-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6443cca89553b7a5485331bc9bedb2342b08d073fa10b8c7d1c60579c4a7b9bd", size = 1668020, upload-time = "2025-07-29T05:50:30.032Z" }, + { url = "https://files.pythonhosted.org/packages/12/8a/8b75f203ea7e5c21c0920d84dd24a5c0e971fe1e9b9ebbf29ae7e8e39790/aiohttp-3.12.15-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:6c5f40ec615e5264f44b4282ee27628cea221fcad52f27405b80abb346d9f3f8", size = 1716292, upload-time = "2025-07-29T05:50:31.983Z" }, + { url = "https://files.pythonhosted.org/packages/47/0b/a1451543475bb6b86a5cfc27861e52b14085ae232896a2654ff1231c0992/aiohttp-3.12.15-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:2abbb216a1d3a2fe86dbd2edce20cdc5e9ad0be6378455b05ec7f77361b3ab50", size = 1711451, upload-time = "2025-07-29T05:50:33.989Z" }, + { url = "https://files.pythonhosted.org/packages/55/fd/793a23a197cc2f0d29188805cfc93aa613407f07e5f9da5cd1366afd9d7c/aiohttp-3.12.15-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:db71ce547012a5420a39c1b744d485cfb823564d01d5d20805977f5ea1345676", size = 1691634, upload-time = "2025-07-29T05:50:35.846Z" }, + { url = "https://files.pythonhosted.org/packages/ca/bf/23a335a6670b5f5dfc6d268328e55a22651b440fca341a64fccf1eada0c6/aiohttp-3.12.15-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:ced339d7c9b5030abad5854aa5413a77565e5b6e6248ff927d3e174baf3badf7", size = 1785238, upload-time = "2025-07-29T05:50:37.597Z" }, + { url = "https://files.pythonhosted.org/packages/57/4f/ed60a591839a9d85d40694aba5cef86dde9ee51ce6cca0bb30d6eb1581e7/aiohttp-3.12.15-cp311-cp311-musllinux_1_2_s390x.whl", hash = 
"sha256:7c7dd29c7b5bda137464dc9bfc738d7ceea46ff70309859ffde8c022e9b08ba7", size = 1805701, upload-time = "2025-07-29T05:50:39.591Z" }, + { url = "https://files.pythonhosted.org/packages/85/e0/444747a9455c5de188c0f4a0173ee701e2e325d4b2550e9af84abb20cdba/aiohttp-3.12.15-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:421da6fd326460517873274875c6c5a18ff225b40da2616083c5a34a7570b685", size = 1718758, upload-time = "2025-07-29T05:50:41.292Z" }, + { url = "https://files.pythonhosted.org/packages/36/ab/1006278d1ffd13a698e5dd4bfa01e5878f6bddefc296c8b62649753ff249/aiohttp-3.12.15-cp311-cp311-win32.whl", hash = "sha256:4420cf9d179ec8dfe4be10e7d0fe47d6d606485512ea2265b0d8c5113372771b", size = 428868, upload-time = "2025-07-29T05:50:43.063Z" }, + { url = "https://files.pythonhosted.org/packages/10/97/ad2b18700708452400278039272032170246a1bf8ec5d832772372c71f1a/aiohttp-3.12.15-cp311-cp311-win_amd64.whl", hash = "sha256:edd533a07da85baa4b423ee8839e3e91681c7bfa19b04260a469ee94b778bf6d", size = 453273, upload-time = "2025-07-29T05:50:44.613Z" }, + { url = "https://files.pythonhosted.org/packages/63/97/77cb2450d9b35f517d6cf506256bf4f5bda3f93a66b4ad64ba7fc917899c/aiohttp-3.12.15-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:802d3868f5776e28f7bf69d349c26fc0efadb81676d0afa88ed00d98a26340b7", size = 702333, upload-time = "2025-07-29T05:50:46.507Z" }, + { url = "https://files.pythonhosted.org/packages/83/6d/0544e6b08b748682c30b9f65640d006e51f90763b41d7c546693bc22900d/aiohttp-3.12.15-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f2800614cd560287be05e33a679638e586a2d7401f4ddf99e304d98878c29444", size = 476948, upload-time = "2025-07-29T05:50:48.067Z" }, + { url = "https://files.pythonhosted.org/packages/3a/1d/c8c40e611e5094330284b1aea8a4b02ca0858f8458614fa35754cab42b9c/aiohttp-3.12.15-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8466151554b593909d30a0a125d638b4e5f3836e5aecde85b66b80ded1cb5b0d", size = 469787, upload-time = "2025-07-29T05:50:49.669Z" }, + { url = "https://files.pythonhosted.org/packages/38/7d/b76438e70319796bfff717f325d97ce2e9310f752a267bfdf5192ac6082b/aiohttp-3.12.15-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2e5a495cb1be69dae4b08f35a6c4579c539e9b5706f606632102c0f855bcba7c", size = 1716590, upload-time = "2025-07-29T05:50:51.368Z" }, + { url = "https://files.pythonhosted.org/packages/79/b1/60370d70cdf8b269ee1444b390cbd72ce514f0d1cd1a715821c784d272c9/aiohttp-3.12.15-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:6404dfc8cdde35c69aaa489bb3542fb86ef215fc70277c892be8af540e5e21c0", size = 1699241, upload-time = "2025-07-29T05:50:53.628Z" }, + { url = "https://files.pythonhosted.org/packages/a3/2b/4968a7b8792437ebc12186db31523f541943e99bda8f30335c482bea6879/aiohttp-3.12.15-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3ead1c00f8521a5c9070fcb88f02967b1d8a0544e6d85c253f6968b785e1a2ab", size = 1754335, upload-time = "2025-07-29T05:50:55.394Z" }, + { url = "https://files.pythonhosted.org/packages/fb/c1/49524ed553f9a0bec1a11fac09e790f49ff669bcd14164f9fab608831c4d/aiohttp-3.12.15-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6990ef617f14450bc6b34941dba4f12d5613cbf4e33805932f853fbd1cf18bfb", size = 1800491, upload-time = "2025-07-29T05:50:57.202Z" }, + { url = 
"https://files.pythonhosted.org/packages/de/5e/3bf5acea47a96a28c121b167f5ef659cf71208b19e52a88cdfa5c37f1fcc/aiohttp-3.12.15-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd736ed420f4db2b8148b52b46b88ed038d0354255f9a73196b7bbce3ea97545", size = 1719929, upload-time = "2025-07-29T05:50:59.192Z" }, + { url = "https://files.pythonhosted.org/packages/39/94/8ae30b806835bcd1cba799ba35347dee6961a11bd507db634516210e91d8/aiohttp-3.12.15-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3c5092ce14361a73086b90c6efb3948ffa5be2f5b6fbcf52e8d8c8b8848bb97c", size = 1635733, upload-time = "2025-07-29T05:51:01.394Z" }, + { url = "https://files.pythonhosted.org/packages/7a/46/06cdef71dd03acd9da7f51ab3a9107318aee12ad38d273f654e4f981583a/aiohttp-3.12.15-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:aaa2234bb60c4dbf82893e934d8ee8dea30446f0647e024074237a56a08c01bd", size = 1696790, upload-time = "2025-07-29T05:51:03.657Z" }, + { url = "https://files.pythonhosted.org/packages/02/90/6b4cfaaf92ed98d0ec4d173e78b99b4b1a7551250be8937d9d67ecb356b4/aiohttp-3.12.15-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:6d86a2fbdd14192e2f234a92d3b494dd4457e683ba07e5905a0b3ee25389ac9f", size = 1718245, upload-time = "2025-07-29T05:51:05.911Z" }, + { url = "https://files.pythonhosted.org/packages/2e/e6/2593751670fa06f080a846f37f112cbe6f873ba510d070136a6ed46117c6/aiohttp-3.12.15-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:a041e7e2612041a6ddf1c6a33b883be6a421247c7afd47e885969ee4cc58bd8d", size = 1658899, upload-time = "2025-07-29T05:51:07.753Z" }, + { url = "https://files.pythonhosted.org/packages/8f/28/c15bacbdb8b8eb5bf39b10680d129ea7410b859e379b03190f02fa104ffd/aiohttp-3.12.15-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:5015082477abeafad7203757ae44299a610e89ee82a1503e3d4184e6bafdd519", size = 1738459, upload-time = "2025-07-29T05:51:09.56Z" }, + { url = "https://files.pythonhosted.org/packages/00/de/c269cbc4faa01fb10f143b1670633a8ddd5b2e1ffd0548f7aa49cb5c70e2/aiohttp-3.12.15-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:56822ff5ddfd1b745534e658faba944012346184fbfe732e0d6134b744516eea", size = 1766434, upload-time = "2025-07-29T05:51:11.423Z" }, + { url = "https://files.pythonhosted.org/packages/52/b0/4ff3abd81aa7d929b27d2e1403722a65fc87b763e3a97b3a2a494bfc63bc/aiohttp-3.12.15-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:b2acbbfff69019d9014508c4ba0401822e8bae5a5fdc3b6814285b71231b60f3", size = 1726045, upload-time = "2025-07-29T05:51:13.689Z" }, + { url = "https://files.pythonhosted.org/packages/71/16/949225a6a2dd6efcbd855fbd90cf476052e648fb011aa538e3b15b89a57a/aiohttp-3.12.15-cp312-cp312-win32.whl", hash = "sha256:d849b0901b50f2185874b9a232f38e26b9b3d4810095a7572eacea939132d4e1", size = 423591, upload-time = "2025-07-29T05:51:15.452Z" }, + { url = "https://files.pythonhosted.org/packages/2b/d8/fa65d2a349fe938b76d309db1a56a75c4fb8cc7b17a398b698488a939903/aiohttp-3.12.15-cp312-cp312-win_amd64.whl", hash = "sha256:b390ef5f62bb508a9d67cb3bba9b8356e23b3996da7062f1a57ce1a79d2b3d34", size = 450266, upload-time = "2025-07-29T05:51:17.239Z" }, + { url = "https://files.pythonhosted.org/packages/f2/33/918091abcf102e39d15aba2476ad9e7bd35ddb190dcdd43a854000d3da0d/aiohttp-3.12.15-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:9f922ffd05034d439dde1c77a20461cf4a1b0831e6caa26151fe7aa8aaebc315", size = 696741, upload-time = "2025-07-29T05:51:19.021Z" }, + { url = 
"https://files.pythonhosted.org/packages/b5/2a/7495a81e39a998e400f3ecdd44a62107254803d1681d9189be5c2e4530cd/aiohttp-3.12.15-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:2ee8a8ac39ce45f3e55663891d4b1d15598c157b4d494a4613e704c8b43112cd", size = 474407, upload-time = "2025-07-29T05:51:21.165Z" }, + { url = "https://files.pythonhosted.org/packages/49/fc/a9576ab4be2dcbd0f73ee8675d16c707cfc12d5ee80ccf4015ba543480c9/aiohttp-3.12.15-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:3eae49032c29d356b94eee45a3f39fdf4b0814b397638c2f718e96cfadf4c4e4", size = 466703, upload-time = "2025-07-29T05:51:22.948Z" }, + { url = "https://files.pythonhosted.org/packages/09/2f/d4bcc8448cf536b2b54eed48f19682031ad182faa3a3fee54ebe5b156387/aiohttp-3.12.15-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b97752ff12cc12f46a9b20327104448042fce5c33a624f88c18f66f9368091c7", size = 1705532, upload-time = "2025-07-29T05:51:25.211Z" }, + { url = "https://files.pythonhosted.org/packages/f1/f3/59406396083f8b489261e3c011aa8aee9df360a96ac8fa5c2e7e1b8f0466/aiohttp-3.12.15-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:894261472691d6fe76ebb7fcf2e5870a2ac284c7406ddc95823c8598a1390f0d", size = 1686794, upload-time = "2025-07-29T05:51:27.145Z" }, + { url = "https://files.pythonhosted.org/packages/dc/71/164d194993a8d114ee5656c3b7ae9c12ceee7040d076bf7b32fb98a8c5c6/aiohttp-3.12.15-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5fa5d9eb82ce98959fc1031c28198b431b4d9396894f385cb63f1e2f3f20ca6b", size = 1738865, upload-time = "2025-07-29T05:51:29.366Z" }, + { url = "https://files.pythonhosted.org/packages/1c/00/d198461b699188a93ead39cb458554d9f0f69879b95078dce416d3209b54/aiohttp-3.12.15-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f0fa751efb11a541f57db59c1dd821bec09031e01452b2b6217319b3a1f34f3d", size = 1788238, upload-time = "2025-07-29T05:51:31.285Z" }, + { url = "https://files.pythonhosted.org/packages/85/b8/9e7175e1fa0ac8e56baa83bf3c214823ce250d0028955dfb23f43d5e61fd/aiohttp-3.12.15-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5346b93e62ab51ee2a9d68e8f73c7cf96ffb73568a23e683f931e52450e4148d", size = 1710566, upload-time = "2025-07-29T05:51:33.219Z" }, + { url = "https://files.pythonhosted.org/packages/59/e4/16a8eac9df39b48ae102ec030fa9f726d3570732e46ba0c592aeeb507b93/aiohttp-3.12.15-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:049ec0360f939cd164ecbfd2873eaa432613d5e77d6b04535e3d1fbae5a9e645", size = 1624270, upload-time = "2025-07-29T05:51:35.195Z" }, + { url = "https://files.pythonhosted.org/packages/1f/f8/cd84dee7b6ace0740908fd0af170f9fab50c2a41ccbc3806aabcb1050141/aiohttp-3.12.15-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:b52dcf013b57464b6d1e51b627adfd69a8053e84b7103a7cd49c030f9ca44461", size = 1677294, upload-time = "2025-07-29T05:51:37.215Z" }, + { url = "https://files.pythonhosted.org/packages/ce/42/d0f1f85e50d401eccd12bf85c46ba84f947a84839c8a1c2c5f6e8ab1eb50/aiohttp-3.12.15-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:9b2af240143dd2765e0fb661fd0361a1b469cab235039ea57663cda087250ea9", size = 1708958, upload-time = "2025-07-29T05:51:39.328Z" }, + { url = "https://files.pythonhosted.org/packages/d5/6b/f6fa6c5790fb602538483aa5a1b86fcbad66244997e5230d88f9412ef24c/aiohttp-3.12.15-cp313-cp313-musllinux_1_2_i686.whl", hash = 
"sha256:ac77f709a2cde2cc71257ab2d8c74dd157c67a0558a0d2799d5d571b4c63d44d", size = 1651553, upload-time = "2025-07-29T05:51:41.356Z" }, + { url = "https://files.pythonhosted.org/packages/04/36/a6d36ad545fa12e61d11d1932eef273928b0495e6a576eb2af04297fdd3c/aiohttp-3.12.15-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:47f6b962246f0a774fbd3b6b7be25d59b06fdb2f164cf2513097998fc6a29693", size = 1727688, upload-time = "2025-07-29T05:51:43.452Z" }, + { url = "https://files.pythonhosted.org/packages/aa/c8/f195e5e06608a97a4e52c5d41c7927301bf757a8e8bb5bbf8cef6c314961/aiohttp-3.12.15-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:760fb7db442f284996e39cf9915a94492e1896baac44f06ae551974907922b64", size = 1761157, upload-time = "2025-07-29T05:51:45.643Z" }, + { url = "https://files.pythonhosted.org/packages/05/6a/ea199e61b67f25ba688d3ce93f63b49b0a4e3b3d380f03971b4646412fc6/aiohttp-3.12.15-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:ad702e57dc385cae679c39d318def49aef754455f237499d5b99bea4ef582e51", size = 1710050, upload-time = "2025-07-29T05:51:48.203Z" }, + { url = "https://files.pythonhosted.org/packages/b4/2e/ffeb7f6256b33635c29dbed29a22a723ff2dd7401fff42ea60cf2060abfb/aiohttp-3.12.15-cp313-cp313-win32.whl", hash = "sha256:f813c3e9032331024de2eb2e32a88d86afb69291fbc37a3a3ae81cc9917fb3d0", size = 422647, upload-time = "2025-07-29T05:51:50.718Z" }, + { url = "https://files.pythonhosted.org/packages/1b/8e/78ee35774201f38d5e1ba079c9958f7629b1fd079459aea9467441dbfbf5/aiohttp-3.12.15-cp313-cp313-win_amd64.whl", hash = "sha256:1a649001580bdb37c6fdb1bebbd7e3bc688e8ec2b5c6f52edbb664662b17dc84", size = 449067, upload-time = "2025-07-29T05:51:52.549Z" }, ] [[package]] @@ -392,7 +390,8 @@ name = "aiopath" version = "0.7.7" source = { registry = "https://pypi.org/simple" } resolution-markers = [ - "python_full_version >= '3.13'", + "python_full_version >= '3.14'", + "python_full_version == '3.13.*'", "python_full_version == '3.12.*'", ] dependencies = [ @@ -435,15 +434,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/4d/3f/3bc3f1d83f6e4a7fcb834d3720544ca597590425be5ba9db032b2bf322a2/altgraph-0.17.4-py2.py3-none-any.whl", hash = "sha256:642743b4750de17e655e6711601b077bc6598dbfa3ba5fa2b2a35ce12b508dff", size = 21212, upload-time = "2023-09-25T09:04:50.691Z" }, ] -[[package]] -name = "annotated-doc" -version = "0.0.3" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/d7/a6/dc46877b911e40c00d395771ea710d5e77b6de7bacd5fdcd78d70cc5a48f/annotated_doc-0.0.3.tar.gz", hash = "sha256:e18370014c70187422c33e945053ff4c286f453a984eba84d0dbfa0c935adeda", size = 5535, upload-time = "2025-10-24T14:57:10.718Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/02/b7/cf592cb5de5cb3bade3357f8d2cf42bf103bbe39f459824b4939fd212911/annotated_doc-0.0.3-py3-none-any.whl", hash = "sha256:348ec6664a76f1fd3be81f43dffbee4c7e8ce931ba71ec67cc7f4ade7fbbb580", size = 5488, upload-time = "2025-10-24T14:57:09.462Z" }, -] - [[package]] name = "annotated-types" version = "0.7.0" @@ -474,7 +464,8 @@ name = "anyio" version = "4.11.0" source = { registry = "https://pypi.org/simple" } resolution-markers = [ - "python_full_version >= '3.13'", + "python_full_version >= '3.14'", + "python_full_version == '3.13.*'", "python_full_version == '3.12.*'", ] dependencies = [ @@ -554,6 +545,16 @@ dependencies = [ ] sdist = { url = 
"https://files.pythonhosted.org/packages/5c/2d/db8af0df73c1cf454f71b2bbe5e356b8c1f8041c979f505b3d3186e520a9/argon2_cffi_bindings-25.1.0.tar.gz", hash = "sha256:b957f3e6ea4d55d820e40ff76f450952807013d361a65d7f28acc0acbf29229d", size = 1783441, upload-time = "2025-07-30T10:02:05.147Z" } wheels = [ + { url = "https://files.pythonhosted.org/packages/60/97/3c0a35f46e52108d4707c44b95cfe2afcafc50800b5450c197454569b776/argon2_cffi_bindings-25.1.0-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:3d3f05610594151994ca9ccb3c771115bdb4daef161976a266f0dd8aa9996b8f", size = 54393, upload-time = "2025-07-30T10:01:40.97Z" }, + { url = "https://files.pythonhosted.org/packages/9d/f4/98bbd6ee89febd4f212696f13c03ca302b8552e7dbf9c8efa11ea4a388c3/argon2_cffi_bindings-25.1.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:8b8efee945193e667a396cbc7b4fb7d357297d6234d30a489905d96caabde56b", size = 29328, upload-time = "2025-07-30T10:01:41.916Z" }, + { url = "https://files.pythonhosted.org/packages/43/24/90a01c0ef12ac91a6be05969f29944643bc1e5e461155ae6559befa8f00b/argon2_cffi_bindings-25.1.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:3c6702abc36bf3ccba3f802b799505def420a1b7039862014a65db3205967f5a", size = 31269, upload-time = "2025-07-30T10:01:42.716Z" }, + { url = "https://files.pythonhosted.org/packages/d4/d3/942aa10782b2697eee7af5e12eeff5ebb325ccfb86dd8abda54174e377e4/argon2_cffi_bindings-25.1.0-cp314-cp314t-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a1c70058c6ab1e352304ac7e3b52554daadacd8d453c1752e547c76e9c99ac44", size = 86558, upload-time = "2025-07-30T10:01:43.943Z" }, + { url = "https://files.pythonhosted.org/packages/0d/82/b484f702fec5536e71836fc2dbc8c5267b3f6e78d2d539b4eaa6f0db8bf8/argon2_cffi_bindings-25.1.0-cp314-cp314t-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e2fd3bfbff3c5d74fef31a722f729bf93500910db650c925c2d6ef879a7e51cb", size = 92364, upload-time = "2025-07-30T10:01:44.887Z" }, + { url = "https://files.pythonhosted.org/packages/c9/c1/a606ff83b3f1735f3759ad0f2cd9e038a0ad11a3de3b6c673aa41c24bb7b/argon2_cffi_bindings-25.1.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:c4f9665de60b1b0e99bcd6be4f17d90339698ce954cfd8d9cf4f91c995165a92", size = 85637, upload-time = "2025-07-30T10:01:46.225Z" }, + { url = "https://files.pythonhosted.org/packages/44/b4/678503f12aceb0262f84fa201f6027ed77d71c5019ae03b399b97caa2f19/argon2_cffi_bindings-25.1.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:ba92837e4a9aa6a508c8d2d7883ed5a8f6c308c89a4790e1e447a220deb79a85", size = 91934, upload-time = "2025-07-30T10:01:47.203Z" }, + { url = "https://files.pythonhosted.org/packages/f0/c7/f36bd08ef9bd9f0a9cff9428406651f5937ce27b6c5b07b92d41f91ae541/argon2_cffi_bindings-25.1.0-cp314-cp314t-win32.whl", hash = "sha256:84a461d4d84ae1295871329b346a97f68eade8c53b6ed9a7ca2d7467f3c8ff6f", size = 28158, upload-time = "2025-07-30T10:01:48.341Z" }, + { url = "https://files.pythonhosted.org/packages/b3/80/0106a7448abb24a2c467bf7d527fe5413b7fdfa4ad6d6a96a43a62ef3988/argon2_cffi_bindings-25.1.0-cp314-cp314t-win_amd64.whl", hash = "sha256:b55aec3565b65f56455eebc9b9f34130440404f27fe21c3b375bf1ea4d8fbae6", size = 32597, upload-time = "2025-07-30T10:01:49.112Z" }, + { url = "https://files.pythonhosted.org/packages/05/b8/d663c9caea07e9180b2cb662772865230715cbd573ba3b5e81793d580316/argon2_cffi_bindings-25.1.0-cp314-cp314t-win_arm64.whl", hash = "sha256:87c33a52407e4c41f3b70a9c2d3f6056d88b10dad7695be708c5021673f55623", size = 28231, upload-time = 
"2025-07-30T10:01:49.92Z" }, { url = "https://files.pythonhosted.org/packages/1d/57/96b8b9f93166147826da5f90376e784a10582dd39a393c99bb62cfcf52f0/argon2_cffi_bindings-25.1.0-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:aecba1723ae35330a008418a91ea6cfcedf6d31e5fbaa056a166462ff066d500", size = 54121, upload-time = "2025-07-30T10:01:50.815Z" }, { url = "https://files.pythonhosted.org/packages/0a/08/a9bebdb2e0e602dde230bdde8021b29f71f7841bd54801bcfd514acb5dcf/argon2_cffi_bindings-25.1.0-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:2630b6240b495dfab90aebe159ff784d08ea999aa4b0d17efa734055a07d2f44", size = 29177, upload-time = "2025-07-30T10:01:51.681Z" }, { url = "https://files.pythonhosted.org/packages/b6/02/d297943bcacf05e4f2a94ab6f462831dc20158614e5d067c35d4e63b9acb/argon2_cffi_bindings-25.1.0-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:7aef0c91e2c0fbca6fc68e7555aa60ef7008a739cbe045541e438373bc54d2b0", size = 31090, upload-time = "2025-07-30T10:01:53.184Z" }, @@ -608,11 +609,11 @@ wheels = [ [[package]] name = "attrs" -version = "25.4.0" +version = "25.3.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/6b/5c/685e6633917e101e5dcb62b9dd76946cbb57c26e133bae9e0cd36033c0a9/attrs-25.4.0.tar.gz", hash = "sha256:16d5969b87f0859ef33a48b35d55ac1be6e42ae49d5e853b597db70c35c57e11", size = 934251, upload-time = "2025-10-06T13:54:44.725Z" } +sdist = { url = "https://files.pythonhosted.org/packages/5a/b0/1367933a8532ee6ff8d63537de4f1177af4bff9f3e829baf7331f595bb24/attrs-25.3.0.tar.gz", hash = "sha256:75d7cefc7fb576747b2c81b4442d4d4a1ce0900973527c011d1030fd3bf4af1b", size = 812032, upload-time = "2025-03-13T11:10:22.779Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/3a/2a/7cc015f5b9f5db42b7d48157e23356022889fc354a2813c15934b7cb5c0e/attrs-25.4.0-py3-none-any.whl", hash = "sha256:adcf7e2a1fb3b36ac48d97835bb6d8ade15b8dcce26aba8bf1d14847b57a3373", size = 67615, upload-time = "2025-10-06T13:54:43.17Z" }, + { url = "https://files.pythonhosted.org/packages/77/06/bb80f5f86020c4551da315d78b3ab75e8228f89f0162f2c3a819e407941a/attrs-25.3.0-py3-none-any.whl", hash = "sha256:427318ce031701fea540783410126f03899a97ffc6f61596ad581ac2e40e3bc3", size = 63815, upload-time = "2025-03-13T11:10:21.14Z" }, ] [[package]] @@ -699,30 +700,30 @@ wheels = [ [[package]] name = "boto3" -version = "1.40.64" +version = "1.40.47" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "botocore" }, { name = "jmespath" }, { name = "s3transfer" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/08/d2/e508e5f42dc1c8a7412f5170751e626a18ed32c6e95c5df30bde6c5addf1/boto3-1.40.64.tar.gz", hash = "sha256:b92d6961c352f2bb8710c9892557d4b0e11258b70967d4e740e1c97375bcd779", size = 111543, upload-time = "2025-10-31T19:33:24.336Z" } +sdist = { url = "https://files.pythonhosted.org/packages/af/32/16ffef2a7ca05babd5d36d512b07bd318feb6af87aa83ced29d565f6a6be/boto3-1.40.47.tar.gz", hash = "sha256:c0ea31655c41b3f9bf46bc370520eafc081ba4c3e79fa564b60e976663abf7e7", size = 111607, upload-time = "2025-10-07T19:26:37.684Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/65/c2/27da558ceb90d17b1e4c0cca5dab29f8aea7f63242a1005a8f54230ce5e6/boto3-1.40.64-py3-none-any.whl", hash = "sha256:35ca3dd80dd90d5f4e8ed032440f28790696fdf50f48c0d16a09a75675f9112f", size = 139321, upload-time = "2025-10-31T19:33:22.92Z" }, + { url = 
"https://files.pythonhosted.org/packages/c7/25/a3ad490d4ab04417cf887fc52ce266cacde2f15b8d46ec45cc40583f22cd/boto3-1.40.47-py3-none-any.whl", hash = "sha256:33b291200cbb042ca8faac0b52a5d460850712641930d32335b75bc65e88653c", size = 139346, upload-time = "2025-10-07T19:26:35.481Z" }, ] [[package]] name = "botocore" -version = "1.40.64" +version = "1.40.47" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "jmespath" }, { name = "python-dateutil" }, { name = "urllib3" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/c1/15/109cb31c156a64bfaf4c809d2638fd95d8ba39b6deb7f1d0526c05257fd7/botocore-1.40.64.tar.gz", hash = "sha256:a13af4009f6912eafe32108f6fa584fb26e24375149836c2bcaaaaec9a7a9e58", size = 14409921, upload-time = "2025-10-31T19:33:12.291Z" } +sdist = { url = "https://files.pythonhosted.org/packages/5f/9e/65a9507f6f4d7ea1f3050a2b555faac7f4afa074ce9bb1dd12aa6fd19fc3/botocore-1.40.47.tar.gz", hash = "sha256:8eb950046ba8afc99dedb0268282b4f9a61bca2c7a6415036bff2beee5e180ca", size = 14401848, upload-time = "2025-10-07T19:26:26.686Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/8f/c5/70bec18aef3fe9af63847d8766f81864b20daacd1dc7bf0c1d1ad90c7e98/botocore-1.40.64-py3-none-any.whl", hash = "sha256:6902b3dadfba1fbacc9648171bef3942530d8f823ff2bdb0e585a332323f89fc", size = 14072939, upload-time = "2025-10-31T19:33:09.081Z" }, + { url = "https://files.pythonhosted.org/packages/a2/16/96226328857ab02123bc7b6dc08e27aa5bd1cfa1c553a922239263014ce8/botocore-1.40.47-py3-none-any.whl", hash = "sha256:0845c5bc49fc9d45938ff3609df7ec1eff0d26c1a4edcd03e16ad2194c3a9a56", size = 14072266, upload-time = "2025-10-07T19:26:22.79Z" }, ] [[package]] @@ -783,11 +784,11 @@ filecache = [ [[package]] name = "cachetools" -version = "6.2.1" +version = "6.2.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/cc/7e/b975b5814bd36faf009faebe22c1072a1fa1168db34d285ef0ba071ad78c/cachetools-6.2.1.tar.gz", hash = "sha256:3f391e4bd8f8bf0931169baf7456cc822705f4e2a31f840d218f445b9a854201", size = 31325, upload-time = "2025-10-12T14:55:30.139Z" } +sdist = { url = "https://files.pythonhosted.org/packages/9d/61/e4fad8155db4a04bfb4734c7c8ff0882f078f24294d42798b3568eb63bff/cachetools-6.2.0.tar.gz", hash = "sha256:38b328c0889450f05f5e120f56ab68c8abaf424e1275522b138ffc93253f7e32", size = 30988, upload-time = "2025-08-25T18:57:30.924Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/96/c5/1e741d26306c42e2bf6ab740b2202872727e0f606033c9dd713f8b93f5a8/cachetools-6.2.1-py3-none-any.whl", hash = "sha256:09868944b6dde876dfd44e1d47e18484541eaf12f26f29b7af91b26cc892d701", size = 11280, upload-time = "2025-10-12T14:55:28.382Z" }, + { url = "https://files.pythonhosted.org/packages/6c/56/3124f61d37a7a4e7cc96afc5492c78ba0cb551151e530b54669ddd1436ef/cachetools-6.2.0-py3-none-any.whl", hash = "sha256:1c76a8960c0041fcc21097e357f882197c79da0dbff766e7317890a65d7d8ba6", size = 11276, upload-time = "2025-08-25T18:57:29.684Z" }, ] [[package]] @@ -862,6 +863,28 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/eb/6d/bf9bda840d5f1dfdbf0feca87fbdb64a918a69bca42cfa0ba7b137c48cb8/cffi-2.0.0-cp313-cp313-win32.whl", hash = "sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27", size = 172909, upload-time = "2025-09-08T23:23:14.32Z" }, { url = "https://files.pythonhosted.org/packages/37/18/6519e1ee6f5a1e579e04b9ddb6f1676c17368a7aba48299c3759bbc3c8b3/cffi-2.0.0-cp313-cp313-win_amd64.whl", hash = 
"sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75", size = 183402, upload-time = "2025-09-08T23:23:15.535Z" }, { url = "https://files.pythonhosted.org/packages/cb/0e/02ceeec9a7d6ee63bb596121c2c8e9b3a9e150936f4fbef6ca1943e6137c/cffi-2.0.0-cp313-cp313-win_arm64.whl", hash = "sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91", size = 177780, upload-time = "2025-09-08T23:23:16.761Z" }, + { url = "https://files.pythonhosted.org/packages/92/c4/3ce07396253a83250ee98564f8d7e9789fab8e58858f35d07a9a2c78de9f/cffi-2.0.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5", size = 185320, upload-time = "2025-09-08T23:23:18.087Z" }, + { url = "https://files.pythonhosted.org/packages/59/dd/27e9fa567a23931c838c6b02d0764611c62290062a6d4e8ff7863daf9730/cffi-2.0.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13", size = 181487, upload-time = "2025-09-08T23:23:19.622Z" }, + { url = "https://files.pythonhosted.org/packages/d6/43/0e822876f87ea8a4ef95442c3d766a06a51fc5298823f884ef87aaad168c/cffi-2.0.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b", size = 220049, upload-time = "2025-09-08T23:23:20.853Z" }, + { url = "https://files.pythonhosted.org/packages/b4/89/76799151d9c2d2d1ead63c2429da9ea9d7aac304603de0c6e8764e6e8e70/cffi-2.0.0-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c", size = 207793, upload-time = "2025-09-08T23:23:22.08Z" }, + { url = "https://files.pythonhosted.org/packages/bb/dd/3465b14bb9e24ee24cb88c9e3730f6de63111fffe513492bf8c808a3547e/cffi-2.0.0-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef", size = 206300, upload-time = "2025-09-08T23:23:23.314Z" }, + { url = "https://files.pythonhosted.org/packages/47/d9/d83e293854571c877a92da46fdec39158f8d7e68da75bf73581225d28e90/cffi-2.0.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775", size = 219244, upload-time = "2025-09-08T23:23:24.541Z" }, + { url = "https://files.pythonhosted.org/packages/2b/0f/1f177e3683aead2bb00f7679a16451d302c436b5cbf2505f0ea8146ef59e/cffi-2.0.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205", size = 222828, upload-time = "2025-09-08T23:23:26.143Z" }, + { url = "https://files.pythonhosted.org/packages/c6/0f/cafacebd4b040e3119dcb32fed8bdef8dfe94da653155f9d0b9dc660166e/cffi-2.0.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1", size = 220926, upload-time = "2025-09-08T23:23:27.873Z" }, + { url = "https://files.pythonhosted.org/packages/3e/aa/df335faa45b395396fcbc03de2dfcab242cd61a9900e914fe682a59170b1/cffi-2.0.0-cp314-cp314-win32.whl", hash = "sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f", size = 175328, upload-time = "2025-09-08T23:23:44.61Z" }, + { url = "https://files.pythonhosted.org/packages/bb/92/882c2d30831744296ce713f0feb4c1cd30f346ef747b530b5318715cc367/cffi-2.0.0-cp314-cp314-win_amd64.whl", hash = "sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25", size = 185650, upload-time = 
"2025-09-08T23:23:45.848Z" }, + { url = "https://files.pythonhosted.org/packages/9f/2c/98ece204b9d35a7366b5b2c6539c350313ca13932143e79dc133ba757104/cffi-2.0.0-cp314-cp314-win_arm64.whl", hash = "sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad", size = 180687, upload-time = "2025-09-08T23:23:47.105Z" }, + { url = "https://files.pythonhosted.org/packages/3e/61/c768e4d548bfa607abcda77423448df8c471f25dbe64fb2ef6d555eae006/cffi-2.0.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9", size = 188773, upload-time = "2025-09-08T23:23:29.347Z" }, + { url = "https://files.pythonhosted.org/packages/2c/ea/5f76bce7cf6fcd0ab1a1058b5af899bfbef198bea4d5686da88471ea0336/cffi-2.0.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d", size = 185013, upload-time = "2025-09-08T23:23:30.63Z" }, + { url = "https://files.pythonhosted.org/packages/be/b4/c56878d0d1755cf9caa54ba71e5d049479c52f9e4afc230f06822162ab2f/cffi-2.0.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c", size = 221593, upload-time = "2025-09-08T23:23:31.91Z" }, + { url = "https://files.pythonhosted.org/packages/e0/0d/eb704606dfe8033e7128df5e90fee946bbcb64a04fcdaa97321309004000/cffi-2.0.0-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8", size = 209354, upload-time = "2025-09-08T23:23:33.214Z" }, + { url = "https://files.pythonhosted.org/packages/d8/19/3c435d727b368ca475fb8742ab97c9cb13a0de600ce86f62eab7fa3eea60/cffi-2.0.0-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc", size = 208480, upload-time = "2025-09-08T23:23:34.495Z" }, + { url = "https://files.pythonhosted.org/packages/d0/44/681604464ed9541673e486521497406fadcc15b5217c3e326b061696899a/cffi-2.0.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592", size = 221584, upload-time = "2025-09-08T23:23:36.096Z" }, + { url = "https://files.pythonhosted.org/packages/25/8e/342a504ff018a2825d395d44d63a767dd8ebc927ebda557fecdaca3ac33a/cffi-2.0.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512", size = 224443, upload-time = "2025-09-08T23:23:37.328Z" }, + { url = "https://files.pythonhosted.org/packages/e1/5e/b666bacbbc60fbf415ba9988324a132c9a7a0448a9a8f125074671c0f2c3/cffi-2.0.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4", size = 223437, upload-time = "2025-09-08T23:23:38.945Z" }, + { url = "https://files.pythonhosted.org/packages/a0/1d/ec1a60bd1a10daa292d3cd6bb0b359a81607154fb8165f3ec95fe003b85c/cffi-2.0.0-cp314-cp314t-win32.whl", hash = "sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e", size = 180487, upload-time = "2025-09-08T23:23:40.423Z" }, + { url = "https://files.pythonhosted.org/packages/bf/41/4c1168c74fac325c0c8156f04b6749c8b6a8f405bbf91413ba088359f60d/cffi-2.0.0-cp314-cp314t-win_amd64.whl", hash = "sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6", size = 191726, upload-time = "2025-09-08T23:23:41.742Z" }, + { url = 
"https://files.pythonhosted.org/packages/ae/3a/dbeec9d1ee0844c679f6bb5d6ad4e9f198b1224f4e7a32825f47f6192b0c/cffi-2.0.0-cp314-cp314t-win_arm64.whl", hash = "sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9", size = 184195, upload-time = "2025-09-08T23:23:43.004Z" }, ] [[package]] @@ -884,59 +907,55 @@ wheels = [ [[package]] name = "charset-normalizer" -version = "3.4.4" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/13/69/33ddede1939fdd074bce5434295f38fae7136463422fe4fd3e0e89b98062/charset_normalizer-3.4.4.tar.gz", hash = "sha256:94537985111c35f28720e43603b8e7b43a6ecfb2ce1d3058bbe955b73404e21a", size = 129418, upload-time = "2025-10-14T04:42:32.879Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/ed/27/c6491ff4954e58a10f69ad90aca8a1b6fe9c5d3c6f380907af3c37435b59/charset_normalizer-3.4.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:6e1fcf0720908f200cd21aa4e6750a48ff6ce4afe7ff5a79a90d5ed8a08296f8", size = 206988, upload-time = "2025-10-14T04:40:33.79Z" }, - { url = "https://files.pythonhosted.org/packages/94/59/2e87300fe67ab820b5428580a53cad894272dbb97f38a7a814a2a1ac1011/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5f819d5fe9234f9f82d75bdfa9aef3a3d72c4d24a6e57aeaebba32a704553aa0", size = 147324, upload-time = "2025-10-14T04:40:34.961Z" }, - { url = "https://files.pythonhosted.org/packages/07/fb/0cf61dc84b2b088391830f6274cb57c82e4da8bbc2efeac8c025edb88772/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:a59cb51917aa591b1c4e6a43c132f0cdc3c76dbad6155df4e28ee626cc77a0a3", size = 142742, upload-time = "2025-10-14T04:40:36.105Z" }, - { url = "https://files.pythonhosted.org/packages/62/8b/171935adf2312cd745d290ed93cf16cf0dfe320863ab7cbeeae1dcd6535f/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:8ef3c867360f88ac904fd3f5e1f902f13307af9052646963ee08ff4f131adafc", size = 160863, upload-time = "2025-10-14T04:40:37.188Z" }, - { url = "https://files.pythonhosted.org/packages/09/73/ad875b192bda14f2173bfc1bc9a55e009808484a4b256748d931b6948442/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d9e45d7faa48ee908174d8fe84854479ef838fc6a705c9315372eacbc2f02897", size = 157837, upload-time = "2025-10-14T04:40:38.435Z" }, - { url = "https://files.pythonhosted.org/packages/6d/fc/de9cce525b2c5b94b47c70a4b4fb19f871b24995c728e957ee68ab1671ea/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:840c25fb618a231545cbab0564a799f101b63b9901f2569faecd6b222ac72381", size = 151550, upload-time = "2025-10-14T04:40:40.053Z" }, - { url = "https://files.pythonhosted.org/packages/55/c2/43edd615fdfba8c6f2dfbd459b25a6b3b551f24ea21981e23fb768503ce1/charset_normalizer-3.4.4-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:ca5862d5b3928c4940729dacc329aa9102900382fea192fc5e52eb69d6093815", size = 149162, upload-time = "2025-10-14T04:40:41.163Z" }, - { url = "https://files.pythonhosted.org/packages/03/86/bde4ad8b4d0e9429a4e82c1e8f5c659993a9a863ad62c7df05cf7b678d75/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d9c7f57c3d666a53421049053eaacdd14bbd0a528e2186fcb2e672effd053bb0", size = 150019, 
upload-time = "2025-10-14T04:40:42.276Z" }, - { url = "https://files.pythonhosted.org/packages/1f/86/a151eb2af293a7e7bac3a739b81072585ce36ccfb4493039f49f1d3cae8c/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:277e970e750505ed74c832b4bf75dac7476262ee2a013f5574dd49075879e161", size = 143310, upload-time = "2025-10-14T04:40:43.439Z" }, - { url = "https://files.pythonhosted.org/packages/b5/fe/43dae6144a7e07b87478fdfc4dbe9efd5defb0e7ec29f5f58a55aeef7bf7/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:31fd66405eaf47bb62e8cd575dc621c56c668f27d46a61d975a249930dd5e2a4", size = 162022, upload-time = "2025-10-14T04:40:44.547Z" }, - { url = "https://files.pythonhosted.org/packages/80/e6/7aab83774f5d2bca81f42ac58d04caf44f0cc2b65fc6db2b3b2e8a05f3b3/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:0d3d8f15c07f86e9ff82319b3d9ef6f4bf907608f53fe9d92b28ea9ae3d1fd89", size = 149383, upload-time = "2025-10-14T04:40:46.018Z" }, - { url = "https://files.pythonhosted.org/packages/4f/e8/b289173b4edae05c0dde07f69f8db476a0b511eac556dfe0d6bda3c43384/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:9f7fcd74d410a36883701fafa2482a6af2ff5ba96b9a620e9e0721e28ead5569", size = 159098, upload-time = "2025-10-14T04:40:47.081Z" }, - { url = "https://files.pythonhosted.org/packages/d8/df/fe699727754cae3f8478493c7f45f777b17c3ef0600e28abfec8619eb49c/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ebf3e58c7ec8a8bed6d66a75d7fb37b55e5015b03ceae72a8e7c74495551e224", size = 152991, upload-time = "2025-10-14T04:40:48.246Z" }, - { url = "https://files.pythonhosted.org/packages/1a/86/584869fe4ddb6ffa3bd9f491b87a01568797fb9bd8933f557dba9771beaf/charset_normalizer-3.4.4-cp311-cp311-win32.whl", hash = "sha256:eecbc200c7fd5ddb9a7f16c7decb07b566c29fa2161a16cf67b8d068bd21690a", size = 99456, upload-time = "2025-10-14T04:40:49.376Z" }, - { url = "https://files.pythonhosted.org/packages/65/f6/62fdd5feb60530f50f7e38b4f6a1d5203f4d16ff4f9f0952962c044e919a/charset_normalizer-3.4.4-cp311-cp311-win_amd64.whl", hash = "sha256:5ae497466c7901d54b639cf42d5b8c1b6a4fead55215500d2f486d34db48d016", size = 106978, upload-time = "2025-10-14T04:40:50.844Z" }, - { url = "https://files.pythonhosted.org/packages/7a/9d/0710916e6c82948b3be62d9d398cb4fcf4e97b56d6a6aeccd66c4b2f2bd5/charset_normalizer-3.4.4-cp311-cp311-win_arm64.whl", hash = "sha256:65e2befcd84bc6f37095f5961e68a6f077bf44946771354a28ad434c2cce0ae1", size = 99969, upload-time = "2025-10-14T04:40:52.272Z" }, - { url = "https://files.pythonhosted.org/packages/f3/85/1637cd4af66fa687396e757dec650f28025f2a2f5a5531a3208dc0ec43f2/charset_normalizer-3.4.4-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:0a98e6759f854bd25a58a73fa88833fba3b7c491169f86ce1180c948ab3fd394", size = 208425, upload-time = "2025-10-14T04:40:53.353Z" }, - { url = "https://files.pythonhosted.org/packages/9d/6a/04130023fef2a0d9c62d0bae2649b69f7b7d8d24ea5536feef50551029df/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b5b290ccc2a263e8d185130284f8501e3e36c5e02750fc6b6bdeb2e9e96f1e25", size = 148162, upload-time = "2025-10-14T04:40:54.558Z" }, - { url = "https://files.pythonhosted.org/packages/78/29/62328d79aa60da22c9e0b9a66539feae06ca0f5a4171ac4f7dc285b83688/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = 
"sha256:74bb723680f9f7a6234dcf67aea57e708ec1fbdf5699fb91dfd6f511b0a320ef", size = 144558, upload-time = "2025-10-14T04:40:55.677Z" }, - { url = "https://files.pythonhosted.org/packages/86/bb/b32194a4bf15b88403537c2e120b817c61cd4ecffa9b6876e941c3ee38fe/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f1e34719c6ed0b92f418c7c780480b26b5d9c50349e9a9af7d76bf757530350d", size = 161497, upload-time = "2025-10-14T04:40:57.217Z" }, - { url = "https://files.pythonhosted.org/packages/19/89/a54c82b253d5b9b111dc74aca196ba5ccfcca8242d0fb64146d4d3183ff1/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:2437418e20515acec67d86e12bf70056a33abdacb5cb1655042f6538d6b085a8", size = 159240, upload-time = "2025-10-14T04:40:58.358Z" }, - { url = "https://files.pythonhosted.org/packages/c0/10/d20b513afe03acc89ec33948320a5544d31f21b05368436d580dec4e234d/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:11d694519d7f29d6cd09f6ac70028dba10f92f6cdd059096db198c283794ac86", size = 153471, upload-time = "2025-10-14T04:40:59.468Z" }, - { url = "https://files.pythonhosted.org/packages/61/fa/fbf177b55bdd727010f9c0a3c49eefa1d10f960e5f09d1d887bf93c2e698/charset_normalizer-3.4.4-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:ac1c4a689edcc530fc9d9aa11f5774b9e2f33f9a0c6a57864e90908f5208d30a", size = 150864, upload-time = "2025-10-14T04:41:00.623Z" }, - { url = "https://files.pythonhosted.org/packages/05/12/9fbc6a4d39c0198adeebbde20b619790e9236557ca59fc40e0e3cebe6f40/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:21d142cc6c0ec30d2efee5068ca36c128a30b0f2c53c1c07bd78cb6bc1d3be5f", size = 150647, upload-time = "2025-10-14T04:41:01.754Z" }, - { url = "https://files.pythonhosted.org/packages/ad/1f/6a9a593d52e3e8c5d2b167daf8c6b968808efb57ef4c210acb907c365bc4/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:5dbe56a36425d26d6cfb40ce79c314a2e4dd6211d51d6d2191c00bed34f354cc", size = 145110, upload-time = "2025-10-14T04:41:03.231Z" }, - { url = "https://files.pythonhosted.org/packages/30/42/9a52c609e72471b0fc54386dc63c3781a387bb4fe61c20231a4ebcd58bdd/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:5bfbb1b9acf3334612667b61bd3002196fe2a1eb4dd74d247e0f2a4d50ec9bbf", size = 162839, upload-time = "2025-10-14T04:41:04.715Z" }, - { url = "https://files.pythonhosted.org/packages/c4/5b/c0682bbf9f11597073052628ddd38344a3d673fda35a36773f7d19344b23/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:d055ec1e26e441f6187acf818b73564e6e6282709e9bcb5b63f5b23068356a15", size = 150667, upload-time = "2025-10-14T04:41:05.827Z" }, - { url = "https://files.pythonhosted.org/packages/e4/24/a41afeab6f990cf2daf6cb8c67419b63b48cf518e4f56022230840c9bfb2/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:af2d8c67d8e573d6de5bc30cdb27e9b95e49115cd9baad5ddbd1a6207aaa82a9", size = 160535, upload-time = "2025-10-14T04:41:06.938Z" }, - { url = "https://files.pythonhosted.org/packages/2a/e5/6a4ce77ed243c4a50a1fecca6aaaab419628c818a49434be428fe24c9957/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:780236ac706e66881f3b7f2f32dfe90507a09e67d1d454c762cf642e6e1586e0", size = 154816, upload-time = "2025-10-14T04:41:08.101Z" }, - { url = 
"https://files.pythonhosted.org/packages/a8/ef/89297262b8092b312d29cdb2517cb1237e51db8ecef2e9af5edbe7b683b1/charset_normalizer-3.4.4-cp312-cp312-win32.whl", hash = "sha256:5833d2c39d8896e4e19b689ffc198f08ea58116bee26dea51e362ecc7cd3ed26", size = 99694, upload-time = "2025-10-14T04:41:09.23Z" }, - { url = "https://files.pythonhosted.org/packages/3d/2d/1e5ed9dd3b3803994c155cd9aacb60c82c331bad84daf75bcb9c91b3295e/charset_normalizer-3.4.4-cp312-cp312-win_amd64.whl", hash = "sha256:a79cfe37875f822425b89a82333404539ae63dbdddf97f84dcbc3d339aae9525", size = 107131, upload-time = "2025-10-14T04:41:10.467Z" }, - { url = "https://files.pythonhosted.org/packages/d0/d9/0ed4c7098a861482a7b6a95603edce4c0d9db2311af23da1fb2b75ec26fc/charset_normalizer-3.4.4-cp312-cp312-win_arm64.whl", hash = "sha256:376bec83a63b8021bb5c8ea75e21c4ccb86e7e45ca4eb81146091b56599b80c3", size = 100390, upload-time = "2025-10-14T04:41:11.915Z" }, - { url = "https://files.pythonhosted.org/packages/97/45/4b3a1239bbacd321068ea6e7ac28875b03ab8bc0aa0966452db17cd36714/charset_normalizer-3.4.4-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:e1f185f86a6f3403aa2420e815904c67b2f9ebc443f045edd0de921108345794", size = 208091, upload-time = "2025-10-14T04:41:13.346Z" }, - { url = "https://files.pythonhosted.org/packages/7d/62/73a6d7450829655a35bb88a88fca7d736f9882a27eacdca2c6d505b57e2e/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6b39f987ae8ccdf0d2642338faf2abb1862340facc796048b604ef14919e55ed", size = 147936, upload-time = "2025-10-14T04:41:14.461Z" }, - { url = "https://files.pythonhosted.org/packages/89/c5/adb8c8b3d6625bef6d88b251bbb0d95f8205831b987631ab0c8bb5d937c2/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:3162d5d8ce1bb98dd51af660f2121c55d0fa541b46dff7bb9b9f86ea1d87de72", size = 144180, upload-time = "2025-10-14T04:41:15.588Z" }, - { url = "https://files.pythonhosted.org/packages/91/ed/9706e4070682d1cc219050b6048bfd293ccf67b3d4f5a4f39207453d4b99/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:81d5eb2a312700f4ecaa977a8235b634ce853200e828fbadf3a9c50bab278328", size = 161346, upload-time = "2025-10-14T04:41:16.738Z" }, - { url = "https://files.pythonhosted.org/packages/d5/0d/031f0d95e4972901a2f6f09ef055751805ff541511dc1252ba3ca1f80cf5/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5bd2293095d766545ec1a8f612559f6b40abc0eb18bb2f5d1171872d34036ede", size = 158874, upload-time = "2025-10-14T04:41:17.923Z" }, - { url = "https://files.pythonhosted.org/packages/f5/83/6ab5883f57c9c801ce5e5677242328aa45592be8a00644310a008d04f922/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a8a8b89589086a25749f471e6a900d3f662d1d3b6e2e59dcecf787b1cc3a1894", size = 153076, upload-time = "2025-10-14T04:41:19.106Z" }, - { url = "https://files.pythonhosted.org/packages/75/1e/5ff781ddf5260e387d6419959ee89ef13878229732732ee73cdae01800f2/charset_normalizer-3.4.4-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:bc7637e2f80d8530ee4a78e878bce464f70087ce73cf7c1caf142416923b98f1", size = 150601, upload-time = "2025-10-14T04:41:20.245Z" }, - { url = 
"https://files.pythonhosted.org/packages/d7/57/71be810965493d3510a6ca79b90c19e48696fb1ff964da319334b12677f0/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f8bf04158c6b607d747e93949aa60618b61312fe647a6369f88ce2ff16043490", size = 150376, upload-time = "2025-10-14T04:41:21.398Z" }, - { url = "https://files.pythonhosted.org/packages/e5/d5/c3d057a78c181d007014feb7e9f2e65905a6c4ef182c0ddf0de2924edd65/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:554af85e960429cf30784dd47447d5125aaa3b99a6f0683589dbd27e2f45da44", size = 144825, upload-time = "2025-10-14T04:41:22.583Z" }, - { url = "https://files.pythonhosted.org/packages/e6/8c/d0406294828d4976f275ffbe66f00266c4b3136b7506941d87c00cab5272/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:74018750915ee7ad843a774364e13a3db91682f26142baddf775342c3f5b1133", size = 162583, upload-time = "2025-10-14T04:41:23.754Z" }, - { url = "https://files.pythonhosted.org/packages/d7/24/e2aa1f18c8f15c4c0e932d9287b8609dd30ad56dbe41d926bd846e22fb8d/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:c0463276121fdee9c49b98908b3a89c39be45d86d1dbaa22957e38f6321d4ce3", size = 150366, upload-time = "2025-10-14T04:41:25.27Z" }, - { url = "https://files.pythonhosted.org/packages/e4/5b/1e6160c7739aad1e2df054300cc618b06bf784a7a164b0f238360721ab86/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:362d61fd13843997c1c446760ef36f240cf81d3ebf74ac62652aebaf7838561e", size = 160300, upload-time = "2025-10-14T04:41:26.725Z" }, - { url = "https://files.pythonhosted.org/packages/7a/10/f882167cd207fbdd743e55534d5d9620e095089d176d55cb22d5322f2afd/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:9a26f18905b8dd5d685d6d07b0cdf98a79f3c7a918906af7cc143ea2e164c8bc", size = 154465, upload-time = "2025-10-14T04:41:28.322Z" }, - { url = "https://files.pythonhosted.org/packages/89/66/c7a9e1b7429be72123441bfdbaf2bc13faab3f90b933f664db506dea5915/charset_normalizer-3.4.4-cp313-cp313-win32.whl", hash = "sha256:9b35f4c90079ff2e2edc5b26c0c77925e5d2d255c42c74fdb70fb49b172726ac", size = 99404, upload-time = "2025-10-14T04:41:29.95Z" }, - { url = "https://files.pythonhosted.org/packages/c4/26/b9924fa27db384bdcd97ab83b4f0a8058d96ad9626ead570674d5e737d90/charset_normalizer-3.4.4-cp313-cp313-win_amd64.whl", hash = "sha256:b435cba5f4f750aa6c0a0d92c541fb79f69a387c91e61f1795227e4ed9cece14", size = 107092, upload-time = "2025-10-14T04:41:31.188Z" }, - { url = "https://files.pythonhosted.org/packages/af/8f/3ed4bfa0c0c72a7ca17f0380cd9e4dd842b09f664e780c13cff1dcf2ef1b/charset_normalizer-3.4.4-cp313-cp313-win_arm64.whl", hash = "sha256:542d2cee80be6f80247095cc36c418f7bddd14f4a6de45af91dfad36d817bba2", size = 100408, upload-time = "2025-10-14T04:41:32.624Z" }, - { url = "https://files.pythonhosted.org/packages/0a/4c/925909008ed5a988ccbb72dcc897407e5d6d3bd72410d69e051fc0c14647/charset_normalizer-3.4.4-py3-none-any.whl", hash = "sha256:7a32c560861a02ff789ad905a2fe94e3f840803362c84fecf1851cb4cf3dc37f", size = 53402, upload-time = "2025-10-14T04:42:31.76Z" }, +version = "3.4.3" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/83/2d/5fd176ceb9b2fc619e63405525573493ca23441330fcdaee6bef9460e924/charset_normalizer-3.4.3.tar.gz", hash = "sha256:6fce4b8500244f6fcb71465d4a4930d132ba9ab8e71a7859e6a5d59851068d14", size = 122371, upload-time = "2025-08-09T07:57:28.46Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/7f/b5/991245018615474a60965a7c9cd2b4efbaabd16d582a5547c47ee1c7730b/charset_normalizer-3.4.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:b256ee2e749283ef3ddcff51a675ff43798d92d746d1a6e4631bf8c707d22d0b", size = 204483, upload-time = "2025-08-09T07:55:53.12Z" }, + { url = "https://files.pythonhosted.org/packages/c7/2a/ae245c41c06299ec18262825c1569c5d3298fc920e4ddf56ab011b417efd/charset_normalizer-3.4.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:13faeacfe61784e2559e690fc53fa4c5ae97c6fcedb8eb6fb8d0a15b475d2c64", size = 145520, upload-time = "2025-08-09T07:55:54.712Z" }, + { url = "https://files.pythonhosted.org/packages/3a/a4/b3b6c76e7a635748c4421d2b92c7b8f90a432f98bda5082049af37ffc8e3/charset_normalizer-3.4.3-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:00237675befef519d9af72169d8604a067d92755e84fe76492fef5441db05b91", size = 158876, upload-time = "2025-08-09T07:55:56.024Z" }, + { url = "https://files.pythonhosted.org/packages/e2/e6/63bb0e10f90a8243c5def74b5b105b3bbbfb3e7bb753915fe333fb0c11ea/charset_normalizer-3.4.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:585f3b2a80fbd26b048a0be90c5aae8f06605d3c92615911c3a2b03a8a3b796f", size = 156083, upload-time = "2025-08-09T07:55:57.582Z" }, + { url = "https://files.pythonhosted.org/packages/87/df/b7737ff046c974b183ea9aa111b74185ac8c3a326c6262d413bd5a1b8c69/charset_normalizer-3.4.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0e78314bdc32fa80696f72fa16dc61168fda4d6a0c014e0380f9d02f0e5d8a07", size = 150295, upload-time = "2025-08-09T07:55:59.147Z" }, + { url = "https://files.pythonhosted.org/packages/61/f1/190d9977e0084d3f1dc169acd060d479bbbc71b90bf3e7bf7b9927dec3eb/charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:96b2b3d1a83ad55310de8c7b4a2d04d9277d5591f40761274856635acc5fcb30", size = 148379, upload-time = "2025-08-09T07:56:00.364Z" }, + { url = "https://files.pythonhosted.org/packages/4c/92/27dbe365d34c68cfe0ca76f1edd70e8705d82b378cb54ebbaeabc2e3029d/charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:939578d9d8fd4299220161fdd76e86c6a251987476f5243e8864a7844476ba14", size = 160018, upload-time = "2025-08-09T07:56:01.678Z" }, + { url = "https://files.pythonhosted.org/packages/99/04/baae2a1ea1893a01635d475b9261c889a18fd48393634b6270827869fa34/charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:fd10de089bcdcd1be95a2f73dbe6254798ec1bda9f450d5828c96f93e2536b9c", size = 157430, upload-time = "2025-08-09T07:56:02.87Z" }, + { url = "https://files.pythonhosted.org/packages/2f/36/77da9c6a328c54d17b960c89eccacfab8271fdaaa228305330915b88afa9/charset_normalizer-3.4.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:1e8ac75d72fa3775e0b7cb7e4629cec13b7514d928d15ef8ea06bca03ef01cae", size = 151600, upload-time = "2025-08-09T07:56:04.089Z" }, + { url = "https://files.pythonhosted.org/packages/64/d4/9eb4ff2c167edbbf08cdd28e19078bf195762e9bd63371689cab5ecd3d0d/charset_normalizer-3.4.3-cp311-cp311-win32.whl", hash = "sha256:6cf8fd4c04756b6b60146d98cd8a77d0cdae0e1ca20329da2ac85eed779b6849", size = 99616, upload-time = "2025-08-09T07:56:05.658Z" }, + { url = "https://files.pythonhosted.org/packages/f4/9c/996a4a028222e7761a96634d1820de8a744ff4327a00ada9c8942033089b/charset_normalizer-3.4.3-cp311-cp311-win_amd64.whl", 
hash = "sha256:31a9a6f775f9bcd865d88ee350f0ffb0e25936a7f930ca98995c05abf1faf21c", size = 107108, upload-time = "2025-08-09T07:56:07.176Z" }, + { url = "https://files.pythonhosted.org/packages/e9/5e/14c94999e418d9b87682734589404a25854d5f5d0408df68bc15b6ff54bb/charset_normalizer-3.4.3-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:e28e334d3ff134e88989d90ba04b47d84382a828c061d0d1027b1b12a62b39b1", size = 205655, upload-time = "2025-08-09T07:56:08.475Z" }, + { url = "https://files.pythonhosted.org/packages/7d/a8/c6ec5d389672521f644505a257f50544c074cf5fc292d5390331cd6fc9c3/charset_normalizer-3.4.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0cacf8f7297b0c4fcb74227692ca46b4a5852f8f4f24b3c766dd94a1075c4884", size = 146223, upload-time = "2025-08-09T07:56:09.708Z" }, + { url = "https://files.pythonhosted.org/packages/fc/eb/a2ffb08547f4e1e5415fb69eb7db25932c52a52bed371429648db4d84fb1/charset_normalizer-3.4.3-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:c6fd51128a41297f5409deab284fecbe5305ebd7e5a1f959bee1c054622b7018", size = 159366, upload-time = "2025-08-09T07:56:11.326Z" }, + { url = "https://files.pythonhosted.org/packages/82/10/0fd19f20c624b278dddaf83b8464dcddc2456cb4b02bb902a6da126b87a1/charset_normalizer-3.4.3-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:3cfb2aad70f2c6debfbcb717f23b7eb55febc0bb23dcffc0f076009da10c6392", size = 157104, upload-time = "2025-08-09T07:56:13.014Z" }, + { url = "https://files.pythonhosted.org/packages/16/ab/0233c3231af734f5dfcf0844aa9582d5a1466c985bbed6cedab85af9bfe3/charset_normalizer-3.4.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:1606f4a55c0fd363d754049cdf400175ee96c992b1f8018b993941f221221c5f", size = 151830, upload-time = "2025-08-09T07:56:14.428Z" }, + { url = "https://files.pythonhosted.org/packages/ae/02/e29e22b4e02839a0e4a06557b1999d0a47db3567e82989b5bb21f3fbbd9f/charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:027b776c26d38b7f15b26a5da1044f376455fb3766df8fc38563b4efbc515154", size = 148854, upload-time = "2025-08-09T07:56:16.051Z" }, + { url = "https://files.pythonhosted.org/packages/05/6b/e2539a0a4be302b481e8cafb5af8792da8093b486885a1ae4d15d452bcec/charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:42e5088973e56e31e4fa58eb6bd709e42fc03799c11c42929592889a2e54c491", size = 160670, upload-time = "2025-08-09T07:56:17.314Z" }, + { url = "https://files.pythonhosted.org/packages/31/e7/883ee5676a2ef217a40ce0bffcc3d0dfbf9e64cbcfbdf822c52981c3304b/charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:cc34f233c9e71701040d772aa7490318673aa7164a0efe3172b2981218c26d93", size = 158501, upload-time = "2025-08-09T07:56:18.641Z" }, + { url = "https://files.pythonhosted.org/packages/c1/35/6525b21aa0db614cf8b5792d232021dca3df7f90a1944db934efa5d20bb1/charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:320e8e66157cc4e247d9ddca8e21f427efc7a04bbd0ac8a9faf56583fa543f9f", size = 153173, upload-time = "2025-08-09T07:56:20.289Z" }, + { url = "https://files.pythonhosted.org/packages/50/ee/f4704bad8201de513fdc8aac1cabc87e38c5818c93857140e06e772b5892/charset_normalizer-3.4.3-cp312-cp312-win32.whl", hash = "sha256:fb6fecfd65564f208cbf0fba07f107fb661bcd1a7c389edbced3f7a493f70e37", size = 99822, upload-time = "2025-08-09T07:56:21.551Z" }, + { url = 
"https://files.pythonhosted.org/packages/39/f5/3b3836ca6064d0992c58c7561c6b6eee1b3892e9665d650c803bd5614522/charset_normalizer-3.4.3-cp312-cp312-win_amd64.whl", hash = "sha256:86df271bf921c2ee3818f0522e9a5b8092ca2ad8b065ece5d7d9d0e9f4849bcc", size = 107543, upload-time = "2025-08-09T07:56:23.115Z" }, + { url = "https://files.pythonhosted.org/packages/65/ca/2135ac97709b400c7654b4b764daf5c5567c2da45a30cdd20f9eefe2d658/charset_normalizer-3.4.3-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:14c2a87c65b351109f6abfc424cab3927b3bdece6f706e4d12faaf3d52ee5efe", size = 205326, upload-time = "2025-08-09T07:56:24.721Z" }, + { url = "https://files.pythonhosted.org/packages/71/11/98a04c3c97dd34e49c7d247083af03645ca3730809a5509443f3c37f7c99/charset_normalizer-3.4.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:41d1fc408ff5fdfb910200ec0e74abc40387bccb3252f3f27c0676731df2b2c8", size = 146008, upload-time = "2025-08-09T07:56:26.004Z" }, + { url = "https://files.pythonhosted.org/packages/60/f5/4659a4cb3c4ec146bec80c32d8bb16033752574c20b1252ee842a95d1a1e/charset_normalizer-3.4.3-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:1bb60174149316da1c35fa5233681f7c0f9f514509b8e399ab70fea5f17e45c9", size = 159196, upload-time = "2025-08-09T07:56:27.25Z" }, + { url = "https://files.pythonhosted.org/packages/86/9e/f552f7a00611f168b9a5865a1414179b2c6de8235a4fa40189f6f79a1753/charset_normalizer-3.4.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:30d006f98569de3459c2fc1f2acde170b7b2bd265dc1943e87e1a4efe1b67c31", size = 156819, upload-time = "2025-08-09T07:56:28.515Z" }, + { url = "https://files.pythonhosted.org/packages/7e/95/42aa2156235cbc8fa61208aded06ef46111c4d3f0de233107b3f38631803/charset_normalizer-3.4.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:416175faf02e4b0810f1f38bcb54682878a4af94059a1cd63b8747244420801f", size = 151350, upload-time = "2025-08-09T07:56:29.716Z" }, + { url = "https://files.pythonhosted.org/packages/c2/a9/3865b02c56f300a6f94fc631ef54f0a8a29da74fb45a773dfd3dcd380af7/charset_normalizer-3.4.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:6aab0f181c486f973bc7262a97f5aca3ee7e1437011ef0c2ec04b5a11d16c927", size = 148644, upload-time = "2025-08-09T07:56:30.984Z" }, + { url = "https://files.pythonhosted.org/packages/77/d9/cbcf1a2a5c7d7856f11e7ac2d782aec12bdfea60d104e60e0aa1c97849dc/charset_normalizer-3.4.3-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:fdabf8315679312cfa71302f9bd509ded4f2f263fb5b765cf1433b39106c3cc9", size = 160468, upload-time = "2025-08-09T07:56:32.252Z" }, + { url = "https://files.pythonhosted.org/packages/f6/42/6f45efee8697b89fda4d50580f292b8f7f9306cb2971d4b53f8914e4d890/charset_normalizer-3.4.3-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:bd28b817ea8c70215401f657edef3a8aa83c29d447fb0b622c35403780ba11d5", size = 158187, upload-time = "2025-08-09T07:56:33.481Z" }, + { url = "https://files.pythonhosted.org/packages/70/99/f1c3bdcfaa9c45b3ce96f70b14f070411366fa19549c1d4832c935d8e2c3/charset_normalizer-3.4.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:18343b2d246dc6761a249ba1fb13f9ee9a2bcd95decc767319506056ea4ad4dc", size = 152699, upload-time = "2025-08-09T07:56:34.739Z" }, + { url = 
"https://files.pythonhosted.org/packages/a3/ad/b0081f2f99a4b194bcbb1934ef3b12aa4d9702ced80a37026b7607c72e58/charset_normalizer-3.4.3-cp313-cp313-win32.whl", hash = "sha256:6fb70de56f1859a3f71261cbe41005f56a7842cc348d3aeb26237560bfa5e0ce", size = 99580, upload-time = "2025-08-09T07:56:35.981Z" }, + { url = "https://files.pythonhosted.org/packages/9a/8f/ae790790c7b64f925e5c953b924aaa42a243fb778fed9e41f147b2a5715a/charset_normalizer-3.4.3-cp313-cp313-win_amd64.whl", hash = "sha256:cf1ebb7d78e1ad8ec2a8c4732c7be2e736f6e5123a4146c5b89c9d1f585f8cef", size = 107366, upload-time = "2025-08-09T07:56:37.339Z" }, + { url = "https://files.pythonhosted.org/packages/8e/91/b5a06ad970ddc7a0e513112d40113e834638f4ca1120eb727a249fb2715e/charset_normalizer-3.4.3-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:3cd35b7e8aedeb9e34c41385fda4f73ba609e561faedfae0a9e75e44ac558a15", size = 204342, upload-time = "2025-08-09T07:56:38.687Z" }, + { url = "https://files.pythonhosted.org/packages/ce/ec/1edc30a377f0a02689342f214455c3f6c2fbedd896a1d2f856c002fc3062/charset_normalizer-3.4.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b89bc04de1d83006373429975f8ef9e7932534b8cc9ca582e4db7d20d91816db", size = 145995, upload-time = "2025-08-09T07:56:40.048Z" }, + { url = "https://files.pythonhosted.org/packages/17/e5/5e67ab85e6d22b04641acb5399c8684f4d37caf7558a53859f0283a650e9/charset_normalizer-3.4.3-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:2001a39612b241dae17b4687898843f254f8748b796a2e16f1051a17078d991d", size = 158640, upload-time = "2025-08-09T07:56:41.311Z" }, + { url = "https://files.pythonhosted.org/packages/f1/e5/38421987f6c697ee3722981289d554957c4be652f963d71c5e46a262e135/charset_normalizer-3.4.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:8dcfc373f888e4fb39a7bc57e93e3b845e7f462dacc008d9749568b1c4ece096", size = 156636, upload-time = "2025-08-09T07:56:43.195Z" }, + { url = "https://files.pythonhosted.org/packages/a0/e4/5a075de8daa3ec0745a9a3b54467e0c2967daaaf2cec04c845f73493e9a1/charset_normalizer-3.4.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:18b97b8404387b96cdbd30ad660f6407799126d26a39ca65729162fd810a99aa", size = 150939, upload-time = "2025-08-09T07:56:44.819Z" }, + { url = "https://files.pythonhosted.org/packages/02/f7/3611b32318b30974131db62b4043f335861d4d9b49adc6d57c1149cc49d4/charset_normalizer-3.4.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:ccf600859c183d70eb47e05a44cd80a4ce77394d1ac0f79dbd2dd90a69a3a049", size = 148580, upload-time = "2025-08-09T07:56:46.684Z" }, + { url = "https://files.pythonhosted.org/packages/7e/61/19b36f4bd67f2793ab6a99b979b4e4f3d8fc754cbdffb805335df4337126/charset_normalizer-3.4.3-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:53cd68b185d98dde4ad8990e56a58dea83a4162161b1ea9272e5c9182ce415e0", size = 159870, upload-time = "2025-08-09T07:56:47.941Z" }, + { url = "https://files.pythonhosted.org/packages/06/57/84722eefdd338c04cf3030ada66889298eaedf3e7a30a624201e0cbe424a/charset_normalizer-3.4.3-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:30a96e1e1f865f78b030d65241c1ee850cdf422d869e9028e2fc1d5e4db73b92", size = 157797, upload-time = "2025-08-09T07:56:49.756Z" }, + { url = 
"https://files.pythonhosted.org/packages/72/2a/aff5dd112b2f14bcc3462c312dce5445806bfc8ab3a7328555da95330e4b/charset_normalizer-3.4.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:d716a916938e03231e86e43782ca7878fb602a125a91e7acb8b5112e2e96ac16", size = 152224, upload-time = "2025-08-09T07:56:51.369Z" }, + { url = "https://files.pythonhosted.org/packages/b7/8c/9839225320046ed279c6e839d51f028342eb77c91c89b8ef2549f951f3ec/charset_normalizer-3.4.3-cp314-cp314-win32.whl", hash = "sha256:c6dbd0ccdda3a2ba7c2ecd9d77b37f3b5831687d8dc1b6ca5f56a4880cc7b7ce", size = 100086, upload-time = "2025-08-09T07:56:52.722Z" }, + { url = "https://files.pythonhosted.org/packages/ee/7a/36fbcf646e41f710ce0a563c1c9a343c6edf9be80786edeb15b6f62e17db/charset_normalizer-3.4.3-cp314-cp314-win_amd64.whl", hash = "sha256:73dc19b562516fc9bcf6e5d6e596df0b4eb98d87e4f79f3ae71840e6ed21361c", size = 107400, upload-time = "2025-08-09T07:56:55.172Z" }, + { url = "https://files.pythonhosted.org/packages/8a/1f/f041989e93b001bc4e44bb1669ccdcf54d3f00e628229a85b08d330615c5/charset_normalizer-3.4.3-py3-none-any.whl", hash = "sha256:ce571ab16d890d23b5c278547ba694193a45011ff86a9162a71307ed9f86759a", size = 53175, upload-time = "2025-08-09T07:57:26.864Z" }, ] [[package]] @@ -1064,6 +1083,28 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/b9/70/f308384a3ae9cd2209e0849f33c913f658d3326900d0ff5d378d6a1422d2/contourpy-1.3.3-cp313-cp313t-win32.whl", hash = "sha256:283edd842a01e3dcd435b1c5116798d661378d83d36d337b8dde1d16a5fc9ba3", size = 196157, upload-time = "2025-07-26T12:02:11.488Z" }, { url = "https://files.pythonhosted.org/packages/b2/dd/880f890a6663b84d9e34a6f88cded89d78f0091e0045a284427cb6b18521/contourpy-1.3.3-cp313-cp313t-win_amd64.whl", hash = "sha256:87acf5963fc2b34825e5b6b048f40e3635dd547f590b04d2ab317c2619ef7ae8", size = 240570, upload-time = "2025-07-26T12:02:12.754Z" }, { url = "https://files.pythonhosted.org/packages/80/99/2adc7d8ffead633234817ef8e9a87115c8a11927a94478f6bb3d3f4d4f7d/contourpy-1.3.3-cp313-cp313t-win_arm64.whl", hash = "sha256:3c30273eb2a55024ff31ba7d052dde990d7d8e5450f4bbb6e913558b3d6c2301", size = 199713, upload-time = "2025-07-26T12:02:14.4Z" }, + { url = "https://files.pythonhosted.org/packages/72/8b/4546f3ab60f78c514ffb7d01a0bd743f90de36f0019d1be84d0a708a580a/contourpy-1.3.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:fde6c716d51c04b1c25d0b90364d0be954624a0ee9d60e23e850e8d48353d07a", size = 292189, upload-time = "2025-07-26T12:02:16.095Z" }, + { url = "https://files.pythonhosted.org/packages/fd/e1/3542a9cb596cadd76fcef413f19c79216e002623158befe6daa03dbfa88c/contourpy-1.3.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:cbedb772ed74ff5be440fa8eee9bd49f64f6e3fc09436d9c7d8f1c287b121d77", size = 273251, upload-time = "2025-07-26T12:02:17.524Z" }, + { url = "https://files.pythonhosted.org/packages/b1/71/f93e1e9471d189f79d0ce2497007731c1e6bf9ef6d1d61b911430c3db4e5/contourpy-1.3.3-cp314-cp314-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:22e9b1bd7a9b1d652cd77388465dc358dafcd2e217d35552424aa4f996f524f5", size = 335810, upload-time = "2025-07-26T12:02:18.9Z" }, + { url = "https://files.pythonhosted.org/packages/91/f9/e35f4c1c93f9275d4e38681a80506b5510e9327350c51f8d4a5a724d178c/contourpy-1.3.3-cp314-cp314-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:a22738912262aa3e254e4f3cb079a95a67132fc5a063890e224393596902f5a4", size = 382871, upload-time = "2025-07-26T12:02:20.418Z" }, + { url = 
"https://files.pythonhosted.org/packages/b5/71/47b512f936f66a0a900d81c396a7e60d73419868fba959c61efed7a8ab46/contourpy-1.3.3-cp314-cp314-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:afe5a512f31ee6bd7d0dda52ec9864c984ca3d66664444f2d72e0dc4eb832e36", size = 386264, upload-time = "2025-07-26T12:02:21.916Z" }, + { url = "https://files.pythonhosted.org/packages/04/5f/9ff93450ba96b09c7c2b3f81c94de31c89f92292f1380261bd7195bea4ea/contourpy-1.3.3-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:f64836de09927cba6f79dcd00fdd7d5329f3fccc633468507079c829ca4db4e3", size = 363819, upload-time = "2025-07-26T12:02:23.759Z" }, + { url = "https://files.pythonhosted.org/packages/3e/a6/0b185d4cc480ee494945cde102cb0149ae830b5fa17bf855b95f2e70ad13/contourpy-1.3.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:1fd43c3be4c8e5fd6e4f2baeae35ae18176cf2e5cced681cca908addf1cdd53b", size = 1333650, upload-time = "2025-07-26T12:02:26.181Z" }, + { url = "https://files.pythonhosted.org/packages/43/d7/afdc95580ca56f30fbcd3060250f66cedbde69b4547028863abd8aa3b47e/contourpy-1.3.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:6afc576f7b33cf00996e5c1102dc2a8f7cc89e39c0b55df93a0b78c1bd992b36", size = 1404833, upload-time = "2025-07-26T12:02:28.782Z" }, + { url = "https://files.pythonhosted.org/packages/e2/e2/366af18a6d386f41132a48f033cbd2102e9b0cf6345d35ff0826cd984566/contourpy-1.3.3-cp314-cp314-win32.whl", hash = "sha256:66c8a43a4f7b8df8b71ee1840e4211a3c8d93b214b213f590e18a1beca458f7d", size = 189692, upload-time = "2025-07-26T12:02:30.128Z" }, + { url = "https://files.pythonhosted.org/packages/7d/c2/57f54b03d0f22d4044b8afb9ca0e184f8b1afd57b4f735c2fa70883dc601/contourpy-1.3.3-cp314-cp314-win_amd64.whl", hash = "sha256:cf9022ef053f2694e31d630feaacb21ea24224be1c3ad0520b13d844274614fd", size = 232424, upload-time = "2025-07-26T12:02:31.395Z" }, + { url = "https://files.pythonhosted.org/packages/18/79/a9416650df9b525737ab521aa181ccc42d56016d2123ddcb7b58e926a42c/contourpy-1.3.3-cp314-cp314-win_arm64.whl", hash = "sha256:95b181891b4c71de4bb404c6621e7e2390745f887f2a026b2d99e92c17892339", size = 198300, upload-time = "2025-07-26T12:02:32.956Z" }, + { url = "https://files.pythonhosted.org/packages/1f/42/38c159a7d0f2b7b9c04c64ab317042bb6952b713ba875c1681529a2932fe/contourpy-1.3.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:33c82d0138c0a062380332c861387650c82e4cf1747aaa6938b9b6516762e772", size = 306769, upload-time = "2025-07-26T12:02:34.2Z" }, + { url = "https://files.pythonhosted.org/packages/c3/6c/26a8205f24bca10974e77460de68d3d7c63e282e23782f1239f226fcae6f/contourpy-1.3.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:ea37e7b45949df430fe649e5de8351c423430046a2af20b1c1961cae3afcda77", size = 287892, upload-time = "2025-07-26T12:02:35.807Z" }, + { url = "https://files.pythonhosted.org/packages/66/06/8a475c8ab718ebfd7925661747dbb3c3ee9c82ac834ccb3570be49d129f4/contourpy-1.3.3-cp314-cp314t-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d304906ecc71672e9c89e87c4675dc5c2645e1f4269a5063b99b0bb29f232d13", size = 326748, upload-time = "2025-07-26T12:02:37.193Z" }, + { url = "https://files.pythonhosted.org/packages/b4/a3/c5ca9f010a44c223f098fccd8b158bb1cb287378a31ac141f04730dc49be/contourpy-1.3.3-cp314-cp314t-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:ca658cd1a680a5c9ea96dc61cdbae1e85c8f25849843aa799dfd3cb370ad4fbe", size = 375554, upload-time = "2025-07-26T12:02:38.894Z" }, + { url = 
"https://files.pythonhosted.org/packages/80/5b/68bd33ae63fac658a4145088c1e894405e07584a316738710b636c6d0333/contourpy-1.3.3-cp314-cp314t-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:ab2fd90904c503739a75b7c8c5c01160130ba67944a7b77bbf36ef8054576e7f", size = 388118, upload-time = "2025-07-26T12:02:40.642Z" }, + { url = "https://files.pythonhosted.org/packages/40/52/4c285a6435940ae25d7410a6c36bda5145839bc3f0beb20c707cda18b9d2/contourpy-1.3.3-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b7301b89040075c30e5768810bc96a8e8d78085b47d8be6e4c3f5a0b4ed478a0", size = 352555, upload-time = "2025-07-26T12:02:42.25Z" }, + { url = "https://files.pythonhosted.org/packages/24/ee/3e81e1dd174f5c7fefe50e85d0892de05ca4e26ef1c9a59c2a57e43b865a/contourpy-1.3.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:2a2a8b627d5cc6b7c41a4beff6c5ad5eb848c88255fda4a8745f7e901b32d8e4", size = 1322295, upload-time = "2025-07-26T12:02:44.668Z" }, + { url = "https://files.pythonhosted.org/packages/3c/b2/6d913d4d04e14379de429057cd169e5e00f6c2af3bb13e1710bcbdb5da12/contourpy-1.3.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:fd6ec6be509c787f1caf6b247f0b1ca598bef13f4ddeaa126b7658215529ba0f", size = 1391027, upload-time = "2025-07-26T12:02:47.09Z" }, + { url = "https://files.pythonhosted.org/packages/93/8a/68a4ec5c55a2971213d29a9374913f7e9f18581945a7a31d1a39b5d2dfe5/contourpy-1.3.3-cp314-cp314t-win32.whl", hash = "sha256:e74a9a0f5e3fff48fb5a7f2fd2b9b70a3fe014a67522f79b7cca4c0c7e43c9ae", size = 202428, upload-time = "2025-07-26T12:02:48.691Z" }, + { url = "https://files.pythonhosted.org/packages/fa/96/fd9f641ffedc4fa3ace923af73b9d07e869496c9cc7a459103e6e978992f/contourpy-1.3.3-cp314-cp314t-win_amd64.whl", hash = "sha256:13b68d6a62db8eafaebb8039218921399baf6e47bf85006fd8529f2a08ef33fc", size = 250331, upload-time = "2025-07-26T12:02:50.137Z" }, + { url = "https://files.pythonhosted.org/packages/ae/8c/469afb6465b853afff216f9528ffda78a915ff880ed58813ba4faf4ba0b6/contourpy-1.3.3-cp314-cp314t-win_arm64.whl", hash = "sha256:b7448cb5a725bb1e35ce88771b86fba35ef418952474492cf7c764059933ff8b", size = 203831, upload-time = "2025-07-26T12:02:51.449Z" }, { url = "https://files.pythonhosted.org/packages/a5/29/8dcfe16f0107943fa92388c23f6e05cff0ba58058c4c95b00280d4c75a14/contourpy-1.3.3-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:cd5dfcaeb10f7b7f9dc8941717c6c2ade08f587be2226222c12b25f0483ed497", size = 278809, upload-time = "2025-07-26T12:02:52.74Z" }, { url = "https://files.pythonhosted.org/packages/85/a9/8b37ef4f7dafeb335daee3c8254645ef5725be4d9c6aa70b50ec46ef2f7e/contourpy-1.3.3-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:0c1fc238306b35f246d61a1d416a627348b5cf0648648a031e14bb8705fcdfe8", size = 261593, upload-time = "2025-07-26T12:02:54.037Z" }, { url = "https://files.pythonhosted.org/packages/0a/59/ebfb8c677c75605cc27f7122c90313fd2f375ff3c8d19a1694bda74aaa63/contourpy-1.3.3-pp311-pypy311_pp73-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:70f9aad7de812d6541d29d2bbf8feb22ff7e1c299523db288004e3157ff4674e", size = 302202, upload-time = "2025-07-26T12:02:55.947Z" }, @@ -1129,6 +1170,32 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/ee/51/a03bec00d37faaa891b3ff7387192cef20f01604e5283a5fabc95346befa/coverage-7.10.7-cp313-cp313t-win32.whl", hash = "sha256:2a78cd46550081a7909b3329e2266204d584866e8d97b898cd7fb5ac8d888b1a", size = 221403, upload-time = "2025-09-21T20:02:37.034Z" }, { url = 
"https://files.pythonhosted.org/packages/53/22/3cf25d614e64bf6d8e59c7c669b20d6d940bb337bdee5900b9ca41c820bb/coverage-7.10.7-cp313-cp313t-win_amd64.whl", hash = "sha256:33a5e6396ab684cb43dc7befa386258acb2d7fae7f67330ebb85ba4ea27938eb", size = 222469, upload-time = "2025-09-21T20:02:39.011Z" }, { url = "https://files.pythonhosted.org/packages/49/a1/00164f6d30d8a01c3c9c48418a7a5be394de5349b421b9ee019f380df2a0/coverage-7.10.7-cp313-cp313t-win_arm64.whl", hash = "sha256:86b0e7308289ddde73d863b7683f596d8d21c7d8664ce1dee061d0bcf3fbb4bb", size = 220731, upload-time = "2025-09-21T20:02:40.939Z" }, + { url = "https://files.pythonhosted.org/packages/23/9c/5844ab4ca6a4dd97a1850e030a15ec7d292b5c5cb93082979225126e35dd/coverage-7.10.7-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:b06f260b16ead11643a5a9f955bd4b5fd76c1a4c6796aeade8520095b75de520", size = 218302, upload-time = "2025-09-21T20:02:42.527Z" }, + { url = "https://files.pythonhosted.org/packages/f0/89/673f6514b0961d1f0e20ddc242e9342f6da21eaba3489901b565c0689f34/coverage-7.10.7-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:212f8f2e0612778f09c55dd4872cb1f64a1f2b074393d139278ce902064d5b32", size = 218578, upload-time = "2025-09-21T20:02:44.468Z" }, + { url = "https://files.pythonhosted.org/packages/05/e8/261cae479e85232828fb17ad536765c88dd818c8470aca690b0ac6feeaa3/coverage-7.10.7-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:3445258bcded7d4aa630ab8296dea4d3f15a255588dd535f980c193ab6b95f3f", size = 249629, upload-time = "2025-09-21T20:02:46.503Z" }, + { url = "https://files.pythonhosted.org/packages/82/62/14ed6546d0207e6eda876434e3e8475a3e9adbe32110ce896c9e0c06bb9a/coverage-7.10.7-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:bb45474711ba385c46a0bfe696c695a929ae69ac636cda8f532be9e8c93d720a", size = 252162, upload-time = "2025-09-21T20:02:48.689Z" }, + { url = "https://files.pythonhosted.org/packages/ff/49/07f00db9ac6478e4358165a08fb41b469a1b053212e8a00cb02f0d27a05f/coverage-7.10.7-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:813922f35bd800dca9994c5971883cbc0d291128a5de6b167c7aa697fcf59360", size = 253517, upload-time = "2025-09-21T20:02:50.31Z" }, + { url = "https://files.pythonhosted.org/packages/a2/59/c5201c62dbf165dfbc91460f6dbbaa85a8b82cfa6131ac45d6c1bfb52deb/coverage-7.10.7-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:93c1b03552081b2a4423091d6fb3787265b8f86af404cff98d1b5342713bdd69", size = 249632, upload-time = "2025-09-21T20:02:51.971Z" }, + { url = "https://files.pythonhosted.org/packages/07/ae/5920097195291a51fb00b3a70b9bbd2edbfe3c84876a1762bd1ef1565ebc/coverage-7.10.7-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:cc87dd1b6eaf0b848eebb1c86469b9f72a1891cb42ac7adcfbce75eadb13dd14", size = 251520, upload-time = "2025-09-21T20:02:53.858Z" }, + { url = "https://files.pythonhosted.org/packages/b9/3c/a815dde77a2981f5743a60b63df31cb322c944843e57dbd579326625a413/coverage-7.10.7-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:39508ffda4f343c35f3236fe8d1a6634a51f4581226a1262769d7f970e73bffe", size = 249455, upload-time = "2025-09-21T20:02:55.807Z" }, + { url = "https://files.pythonhosted.org/packages/aa/99/f5cdd8421ea656abefb6c0ce92556709db2265c41e8f9fc6c8ae0f7824c9/coverage-7.10.7-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:925a1edf3d810537c5a3abe78ec5530160c5f9a26b1f4270b40e62cc79304a1e", size = 249287, upload-time = 
"2025-09-21T20:02:57.784Z" }, + { url = "https://files.pythonhosted.org/packages/c3/7a/e9a2da6a1fc5d007dd51fca083a663ab930a8c4d149c087732a5dbaa0029/coverage-7.10.7-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:2c8b9a0636f94c43cd3576811e05b89aa9bc2d0a85137affc544ae5cb0e4bfbd", size = 250946, upload-time = "2025-09-21T20:02:59.431Z" }, + { url = "https://files.pythonhosted.org/packages/ef/5b/0b5799aa30380a949005a353715095d6d1da81927d6dbed5def2200a4e25/coverage-7.10.7-cp314-cp314-win32.whl", hash = "sha256:b7b8288eb7cdd268b0304632da8cb0bb93fadcfec2fe5712f7b9cc8f4d487be2", size = 221009, upload-time = "2025-09-21T20:03:01.324Z" }, + { url = "https://files.pythonhosted.org/packages/da/b0/e802fbb6eb746de006490abc9bb554b708918b6774b722bb3a0e6aa1b7de/coverage-7.10.7-cp314-cp314-win_amd64.whl", hash = "sha256:1ca6db7c8807fb9e755d0379ccc39017ce0a84dcd26d14b5a03b78563776f681", size = 221804, upload-time = "2025-09-21T20:03:03.4Z" }, + { url = "https://files.pythonhosted.org/packages/9e/e8/71d0c8e374e31f39e3389bb0bd19e527d46f00ea8571ec7ec8fd261d8b44/coverage-7.10.7-cp314-cp314-win_arm64.whl", hash = "sha256:097c1591f5af4496226d5783d036bf6fd6cd0cbc132e071b33861de756efb880", size = 220384, upload-time = "2025-09-21T20:03:05.111Z" }, + { url = "https://files.pythonhosted.org/packages/62/09/9a5608d319fa3eba7a2019addeacb8c746fb50872b57a724c9f79f146969/coverage-7.10.7-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:a62c6ef0d50e6de320c270ff91d9dd0a05e7250cac2a800b7784bae474506e63", size = 219047, upload-time = "2025-09-21T20:03:06.795Z" }, + { url = "https://files.pythonhosted.org/packages/f5/6f/f58d46f33db9f2e3647b2d0764704548c184e6f5e014bef528b7f979ef84/coverage-7.10.7-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:9fa6e4dd51fe15d8738708a973470f67a855ca50002294852e9571cdbd9433f2", size = 219266, upload-time = "2025-09-21T20:03:08.495Z" }, + { url = "https://files.pythonhosted.org/packages/74/5c/183ffc817ba68e0b443b8c934c8795553eb0c14573813415bd59941ee165/coverage-7.10.7-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:8fb190658865565c549b6b4706856d6a7b09302c797eb2cf8e7fe9dabb043f0d", size = 260767, upload-time = "2025-09-21T20:03:10.172Z" }, + { url = "https://files.pythonhosted.org/packages/0f/48/71a8abe9c1ad7e97548835e3cc1adbf361e743e9d60310c5f75c9e7bf847/coverage-7.10.7-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:affef7c76a9ef259187ef31599a9260330e0335a3011732c4b9effa01e1cd6e0", size = 262931, upload-time = "2025-09-21T20:03:11.861Z" }, + { url = "https://files.pythonhosted.org/packages/84/fd/193a8fb132acfc0a901f72020e54be5e48021e1575bb327d8ee1097a28fd/coverage-7.10.7-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6e16e07d85ca0cf8bafe5f5d23a0b850064e8e945d5677492b06bbe6f09cc699", size = 265186, upload-time = "2025-09-21T20:03:13.539Z" }, + { url = "https://files.pythonhosted.org/packages/b1/8f/74ecc30607dd95ad50e3034221113ccb1c6d4e8085cc761134782995daae/coverage-7.10.7-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:03ffc58aacdf65d2a82bbeb1ffe4d01ead4017a21bfd0454983b88ca73af94b9", size = 259470, upload-time = "2025-09-21T20:03:15.584Z" }, + { url = "https://files.pythonhosted.org/packages/0f/55/79ff53a769f20d71b07023ea115c9167c0bb56f281320520cf64c5298a96/coverage-7.10.7-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:1b4fd784344d4e52647fd7857b2af5b3fbe6c239b0b5fa63e94eb67320770e0f", size = 262626, 
upload-time = "2025-09-21T20:03:17.673Z" }, + { url = "https://files.pythonhosted.org/packages/88/e2/dac66c140009b61ac3fc13af673a574b00c16efdf04f9b5c740703e953c0/coverage-7.10.7-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:0ebbaddb2c19b71912c6f2518e791aa8b9f054985a0769bdb3a53ebbc765c6a1", size = 260386, upload-time = "2025-09-21T20:03:19.36Z" }, + { url = "https://files.pythonhosted.org/packages/a2/f1/f48f645e3f33bb9ca8a496bc4a9671b52f2f353146233ebd7c1df6160440/coverage-7.10.7-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:a2d9a3b260cc1d1dbdb1c582e63ddcf5363426a1a68faa0f5da28d8ee3c722a0", size = 258852, upload-time = "2025-09-21T20:03:21.007Z" }, + { url = "https://files.pythonhosted.org/packages/bb/3b/8442618972c51a7affeead957995cfa8323c0c9bcf8fa5a027421f720ff4/coverage-7.10.7-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:a3cc8638b2480865eaa3926d192e64ce6c51e3d29c849e09d5b4ad95efae5399", size = 261534, upload-time = "2025-09-21T20:03:23.12Z" }, + { url = "https://files.pythonhosted.org/packages/b2/dc/101f3fa3a45146db0cb03f5b4376e24c0aac818309da23e2de0c75295a91/coverage-7.10.7-cp314-cp314t-win32.whl", hash = "sha256:67f8c5cbcd3deb7a60b3345dffc89a961a484ed0af1f6f73de91705cc6e31235", size = 221784, upload-time = "2025-09-21T20:03:24.769Z" }, + { url = "https://files.pythonhosted.org/packages/4c/a1/74c51803fc70a8a40d7346660379e144be772bab4ac7bb6e6b905152345c/coverage-7.10.7-cp314-cp314t-win_amd64.whl", hash = "sha256:e1ed71194ef6dea7ed2d5cb5f7243d4bcd334bfb63e59878519be558078f848d", size = 222905, upload-time = "2025-09-21T20:03:26.93Z" }, + { url = "https://files.pythonhosted.org/packages/12/65/f116a6d2127df30bcafbceef0302d8a64ba87488bf6f73a6d8eebf060873/coverage-7.10.7-cp314-cp314t-win_arm64.whl", hash = "sha256:7fe650342addd8524ca63d77b2362b02345e5f1a093266787d210c70a50b471a", size = 220922, upload-time = "2025-09-21T20:03:28.672Z" }, { url = "https://files.pythonhosted.org/packages/ec/16/114df1c291c22cac3b0c127a73e0af5c12ed7bbb6558d310429a0ae24023/coverage-7.10.7-py3-none-any.whl", hash = "sha256:f7941f6f2fe6dd6807a1208737b8a0cbcf1cc6d7b07d24998ad2d63590868260", size = 209952, upload-time = "2025-09-21T20:03:53.918Z" }, ] @@ -1188,6 +1255,36 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/13/5b/966365523ce8290a08e163e3b489626c5adacdff2b3da9da1b0823dfb14e/cramjam-2.11.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:f8195006fdd0fc0a85b19df3d64a3ef8a240e483ae1dfc7ac6a4316019eb5df2", size = 2154950, upload-time = "2025-07-27T21:23:12.514Z" }, { url = "https://files.pythonhosted.org/packages/3a/7d/7f8eb5c534b72b32c6eb79d74585bfee44a9a5647a14040bb65c31c2572d/cramjam-2.11.0-cp313-cp313-win32.whl", hash = "sha256:ccf30e3fe6d770a803dcdf3bb863fa44ba5dc2664d4610ba2746a3c73599f2e4", size = 1603199, upload-time = "2025-07-27T21:23:14.38Z" }, { url = "https://files.pythonhosted.org/packages/37/05/47b5e0bf7c41a3b1cdd3b7c2147f880c93226a6bef1f5d85183040cbdece/cramjam-2.11.0-cp313-cp313-win_amd64.whl", hash = "sha256:ee36348a204f0a68b03400f4736224e9f61d1c6a1582d7f875c1ca56f0254268", size = 1708924, upload-time = "2025-07-27T21:23:16.332Z" }, + { url = "https://files.pythonhosted.org/packages/de/07/a1051cdbbe6d723df16d756b97f09da7c1adb69e29695c58f0392bc12515/cramjam-2.11.0-cp314-cp314-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:7ba5e38c9fbd06f086f4a5a64a1a5b7b417cd3f8fc07a20e5c03651f72f36100", size = 3554141, upload-time = "2025-07-27T21:23:17.938Z" }, + { url = 
"https://files.pythonhosted.org/packages/74/66/58487d2e16ef3d04f51a7c7f0e69823e806744b4c21101e89da4873074bc/cramjam-2.11.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:b8adeee57b41fe08e4520698a4b0bd3cc76dbd81f99424b806d70a5256a391d3", size = 1860353, upload-time = "2025-07-27T21:23:19.593Z" }, + { url = "https://files.pythonhosted.org/packages/67/b4/67f6254d166ffbcc9d5fa1b56876eaa920c32ebc8e9d3d525b27296b693b/cramjam-2.11.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:b96a74fa03a636c8a7d76f700d50e9a8bc17a516d6a72d28711225d641e30968", size = 1693832, upload-time = "2025-07-27T21:23:21.185Z" }, + { url = "https://files.pythonhosted.org/packages/55/a3/4e0b31c0d454ae70c04684ed7c13d3c67b4c31790c278c1e788cb804fa4a/cramjam-2.11.0-cp314-cp314-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:c3811a56fa32e00b377ef79121c0193311fd7501f0fb378f254c7f083cc1fbe0", size = 2027080, upload-time = "2025-07-27T21:23:23.303Z" }, + { url = "https://files.pythonhosted.org/packages/d9/c7/5e8eed361d1d3b8be14f38a54852c5370cc0ceb2c2d543b8ba590c34f080/cramjam-2.11.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c5d927e87461f8a0d448e4ab5eb2bca9f31ca5d8ea86d70c6f470bb5bc666d7e", size = 1761543, upload-time = "2025-07-27T21:23:24.991Z" }, + { url = "https://files.pythonhosted.org/packages/09/0c/06b7f8b0ce9fde89470505116a01fc0b6cb92d406c4fb1e46f168b5d3fa5/cramjam-2.11.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f1f5c450121430fd89cb5767e0a9728ecc65997768fd4027d069cb0368af62f9", size = 1854636, upload-time = "2025-07-27T21:23:26.987Z" }, + { url = "https://files.pythonhosted.org/packages/6f/c6/6ebc02c9d5acdf4e5f2b1ec6e1252bd5feee25762246798ae823b3347457/cramjam-2.11.0-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:724aa7490be50235d97f07e2ca10067927c5d7f336b786ddbc868470e822aa25", size = 2032715, upload-time = "2025-07-27T21:23:28.603Z" }, + { url = "https://files.pythonhosted.org/packages/a2/77/a122971c23f5ca4b53e4322c647ac7554626c95978f92d19419315dddd05/cramjam-2.11.0-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:54c4637122e7cfd7aac5c1d3d4c02364f446d6923ea34cf9d0e8816d6e7a4936", size = 2069039, upload-time = "2025-07-27T21:23:30.319Z" }, + { url = "https://files.pythonhosted.org/packages/19/0f/f6121b90b86b9093c066889274d26a1de3f29969d45c2ed1ecbe2033cb78/cramjam-2.11.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:17eb39b1696179fb471eea2de958fa21f40a2cd8bf6b40d428312d5541e19dc4", size = 1979566, upload-time = "2025-07-27T21:23:32.002Z" }, + { url = "https://files.pythonhosted.org/packages/e0/a3/f95bc57fd7f4166ce6da816cfa917fb7df4bb80e669eb459d85586498414/cramjam-2.11.0-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:36aa5a798aa34e11813a80425a30d8e052d8de4a28f27bfc0368cfc454d1b403", size = 2030905, upload-time = "2025-07-27T21:23:33.696Z" }, + { url = "https://files.pythonhosted.org/packages/fc/52/e429de4e8bc86ee65e090dae0f87f45abd271742c63fb2d03c522ffde28a/cramjam-2.11.0-cp314-cp314-musllinux_1_1_armv7l.whl", hash = "sha256:449fca52774dc0199545fbf11f5128933e5a6833946707885cf7be8018017839", size = 2155592, upload-time = "2025-07-27T21:23:35.375Z" }, + { url = "https://files.pythonhosted.org/packages/6c/6c/65a7a0207787ad39ad804af4da7f06a60149de19481d73d270b540657234/cramjam-2.11.0-cp314-cp314-musllinux_1_1_i686.whl", hash = "sha256:d87d37b3d476f4f7623c56a232045d25bd9b988314702ea01bd9b4a94948a778", size = 2170839, upload-time = 
"2025-07-27T21:23:37.197Z" }, + { url = "https://files.pythonhosted.org/packages/b2/c5/5c5db505ba692bc844246b066e23901d5905a32baf2f33719c620e65887f/cramjam-2.11.0-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:26cb45c47d71982d76282e303931c6dd4baee1753e5d48f9a89b3a63e690b3a3", size = 2157236, upload-time = "2025-07-27T21:23:38.854Z" }, + { url = "https://files.pythonhosted.org/packages/b0/22/88e6693e60afe98901e5bbe91b8dea193e3aa7f42e2770f9c3339f5c1065/cramjam-2.11.0-cp314-cp314-win32.whl", hash = "sha256:4efe919d443c2fd112fe25fe636a52f9628250c9a50d9bddb0488d8a6c09acc6", size = 1604136, upload-time = "2025-07-27T21:23:40.56Z" }, + { url = "https://files.pythonhosted.org/packages/cc/f8/01618801cd59ccedcc99f0f96d20be67d8cfc3497da9ccaaad6b481781dd/cramjam-2.11.0-cp314-cp314-win_amd64.whl", hash = "sha256:ccec3524ea41b9abd5600e3e27001fd774199dbb4f7b9cb248fcee37d4bda84c", size = 1710272, upload-time = "2025-07-27T21:23:42.236Z" }, + { url = "https://files.pythonhosted.org/packages/40/81/6cdb3ed222d13ae86bda77aafe8d50566e81a1169d49ed195b6263610704/cramjam-2.11.0-cp314-cp314t-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:966ac9358b23d21ecd895c418c048e806fd254e46d09b1ff0cdad2eba195ea3e", size = 3559671, upload-time = "2025-07-27T21:23:44.504Z" }, + { url = "https://files.pythonhosted.org/packages/cb/43/52b7e54fe5ba1ef0270d9fdc43dabd7971f70ea2d7179be918c997820247/cramjam-2.11.0-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:387f09d647a0d38dcb4539f8a14281f8eb6bb1d3e023471eb18a5974b2121c86", size = 1867876, upload-time = "2025-07-27T21:23:46.987Z" }, + { url = "https://files.pythonhosted.org/packages/9d/28/30d5b8d10acd30db3193bc562a313bff722888eaa45cfe32aa09389f2b24/cramjam-2.11.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:665b0d8fbbb1a7f300265b43926457ec78385200133e41fef19d85790fc1e800", size = 1695562, upload-time = "2025-07-27T21:23:48.644Z" }, + { url = "https://files.pythonhosted.org/packages/d9/86/ec806f986e01b896a650655024ea52a13e25c3ac8a3a382f493089483cdc/cramjam-2.11.0-cp314-cp314t-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:ca905387c7a371531b9622d93471be4d745ef715f2890c3702479cd4fc85aa51", size = 2025056, upload-time = "2025-07-27T21:23:50.404Z" }, + { url = "https://files.pythonhosted.org/packages/09/43/c2c17586b90848d29d63181f7d14b8bd3a7d00975ad46e3edf2af8af7e1f/cramjam-2.11.0-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c1aa56aef2c8af55a21ed39040a94a12b53fb23beea290f94d19a76027e2ffb", size = 1764084, upload-time = "2025-07-27T21:23:52.265Z" }, + { url = "https://files.pythonhosted.org/packages/2b/a9/68bc334fadb434a61df10071dc8606702aa4f5b6cdb2df62474fc21d2845/cramjam-2.11.0-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e5db59c1cdfaa2ab85cc988e602d6919495f735ca8a5fd7603608eb1e23c26d5", size = 1854859, upload-time = "2025-07-27T21:23:54.085Z" }, + { url = "https://files.pythonhosted.org/packages/5b/4e/b48e67835b5811ec5e9cb2e2bcba9c3fd76dab3e732569fe801b542c6ca9/cramjam-2.11.0-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b1f893014f00fe5e89a660a032e813bf9f6d91de74cd1490cdb13b2b59d0c9a3", size = 2035970, upload-time = "2025-07-27T21:23:55.758Z" }, + { url = "https://files.pythonhosted.org/packages/c4/70/d2ac33d572b4d90f7f0f2c8a1d60fb48f06b128fdc2c05f9b49891bb0279/cramjam-2.11.0-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c26a1eb487947010f5de24943bd7c422dad955b2b0f8650762539778c380ca89", size 
= 2069320, upload-time = "2025-07-27T21:23:57.494Z" }, + { url = "https://files.pythonhosted.org/packages/1d/4c/85cec77af4a74308ba5fca8e296c4e2f80ec465c537afc7ab1e0ca2f9a00/cramjam-2.11.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7d5c8bfb438d94e7b892d1426da5fc4b4a5370cc360df9b8d9d77c33b896c37e", size = 1982668, upload-time = "2025-07-27T21:23:59.126Z" }, + { url = "https://files.pythonhosted.org/packages/55/45/938546d1629e008cc3138df7c424ef892719b1796ff408a2ab8550032e5e/cramjam-2.11.0-cp314-cp314t-musllinux_1_1_aarch64.whl", hash = "sha256:cb1fb8c9337ab0da25a01c05d69a0463209c347f16512ac43be5986f3d1ebaf4", size = 2034028, upload-time = "2025-07-27T21:24:00.865Z" }, + { url = "https://files.pythonhosted.org/packages/01/76/b5a53e20505555f1640e66dcf70394bcf51a1a3a072aa18ea35135a0f9ed/cramjam-2.11.0-cp314-cp314t-musllinux_1_1_armv7l.whl", hash = "sha256:1f6449f6de52dde3e2f1038284910c8765a397a25e2d05083870f3f5e7fc682c", size = 2155513, upload-time = "2025-07-27T21:24:02.92Z" }, + { url = "https://files.pythonhosted.org/packages/84/12/8d3f6ceefae81bbe45a347fdfa2219d9f3ac75ebc304f92cd5fcb4fbddc5/cramjam-2.11.0-cp314-cp314t-musllinux_1_1_i686.whl", hash = "sha256:382dec4f996be48ed9c6958d4e30c2b89435d7c2c4dbf32480b3b8886293dd65", size = 2170035, upload-time = "2025-07-27T21:24:04.558Z" }, + { url = "https://files.pythonhosted.org/packages/4b/85/3be6f0a1398f976070672be64f61895f8839857618a2d8cc0d3ab529d3dc/cramjam-2.11.0-cp314-cp314t-musllinux_1_1_x86_64.whl", hash = "sha256:d388bd5723732c3afe1dd1d181e4213cc4e1be210b080572e7d5749f6e955656", size = 2160229, upload-time = "2025-07-27T21:24:06.729Z" }, + { url = "https://files.pythonhosted.org/packages/57/5e/66cfc3635511b20014bbb3f2ecf0095efb3049e9e96a4a9e478e4f3d7b78/cramjam-2.11.0-cp314-cp314t-win32.whl", hash = "sha256:0a70ff17f8e1d13f322df616505550f0f4c39eda62290acb56f069d4857037c8", size = 1610267, upload-time = "2025-07-27T21:24:08.428Z" }, + { url = "https://files.pythonhosted.org/packages/ce/c6/c71e82e041c95ffe6a92ac707785500aa2a515a4339c2c7dd67e3c449249/cramjam-2.11.0-cp314-cp314t-win_amd64.whl", hash = "sha256:028400d699442d40dbda02f74158c73d05cb76587a12490d0bfedd958fd49188", size = 1713108, upload-time = "2025-07-27T21:24:10.147Z" }, { url = "https://files.pythonhosted.org/packages/81/da/b3301962ccd6fce9fefa1ecd8ea479edaeaa38fadb1f34d5391d2587216a/cramjam-2.11.0-pp311-pypy311_pp73-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:52d5db3369f95b27b9f3c14d067acb0b183333613363ed34268c9e04560f997f", size = 3573546, upload-time = "2025-07-27T21:24:52.944Z" }, { url = "https://files.pythonhosted.org/packages/b6/c2/410ddb8ad4b9dfb129284666293cb6559479645da560f7077dc19d6bee9e/cramjam-2.11.0-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:4820516366d455b549a44d0e2210ee7c4575882dda677564ce79092588321d54", size = 1873654, upload-time = "2025-07-27T21:24:54.958Z" }, { url = "https://files.pythonhosted.org/packages/d5/99/f68a443c64f7ce7aff5bed369b0aa5b2fac668fa3dfd441837e316e97a1f/cramjam-2.11.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:d9e5db525dc0a950a825202f84ee68d89a072479e07da98795a3469df942d301", size = 1702846, upload-time = "2025-07-27T21:24:57.124Z" }, @@ -1220,6 +1317,21 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/d1/2b/531e37408573e1da33adfb4c58875013ee8ac7d548d1548967d94a0ae5c4/cryptography-46.0.2-cp311-abi3-win32.whl", hash = "sha256:8b9bf67b11ef9e28f4d78ff88b04ed0929fcd0e4f70bb0f704cfc32a5c6311ee", size = 3056077, upload-time = 
"2025-10-01T00:27:48.424Z" }, { url = "https://files.pythonhosted.org/packages/a8/cd/2f83cafd47ed2dc5a3a9c783ff5d764e9e70d3a160e0df9a9dcd639414ce/cryptography-46.0.2-cp311-abi3-win_amd64.whl", hash = "sha256:758cfc7f4c38c5c5274b55a57ef1910107436f4ae842478c4989abbd24bd5acb", size = 3512585, upload-time = "2025-10-01T00:27:50.521Z" }, { url = "https://files.pythonhosted.org/packages/00/36/676f94e10bfaa5c5b86c469ff46d3e0663c5dc89542f7afbadac241a3ee4/cryptography-46.0.2-cp311-abi3-win_arm64.whl", hash = "sha256:218abd64a2e72f8472c2102febb596793347a3e65fafbb4ad50519969da44470", size = 2927474, upload-time = "2025-10-01T00:27:52.91Z" }, + { url = "https://files.pythonhosted.org/packages/6f/cc/47fc6223a341f26d103cb6da2216805e08a37d3b52bee7f3b2aee8066f95/cryptography-46.0.2-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:bda55e8dbe8533937956c996beaa20266a8eca3570402e52ae52ed60de1faca8", size = 7198626, upload-time = "2025-10-01T00:27:54.8Z" }, + { url = "https://files.pythonhosted.org/packages/93/22/d66a8591207c28bbe4ac7afa25c4656dc19dc0db29a219f9809205639ede/cryptography-46.0.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:e7155c0b004e936d381b15425273aee1cebc94f879c0ce82b0d7fecbf755d53a", size = 4287584, upload-time = "2025-10-01T00:27:57.018Z" }, + { url = "https://files.pythonhosted.org/packages/8c/3e/fac3ab6302b928e0398c269eddab5978e6c1c50b2b77bb5365ffa8633b37/cryptography-46.0.2-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:a61c154cc5488272a6c4b86e8d5beff4639cdb173d75325ce464d723cda0052b", size = 4433796, upload-time = "2025-10-01T00:27:58.631Z" }, + { url = "https://files.pythonhosted.org/packages/7d/d8/24392e5d3c58e2d83f98fe5a2322ae343360ec5b5b93fe18bc52e47298f5/cryptography-46.0.2-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:9ec3f2e2173f36a9679d3b06d3d01121ab9b57c979de1e6a244b98d51fea1b20", size = 4292126, upload-time = "2025-10-01T00:28:00.643Z" }, + { url = "https://files.pythonhosted.org/packages/ed/38/3d9f9359b84c16c49a5a336ee8be8d322072a09fac17e737f3bb11f1ce64/cryptography-46.0.2-cp314-cp314t-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:2fafb6aa24e702bbf74de4cb23bfa2c3beb7ab7683a299062b69724c92e0fa73", size = 3993056, upload-time = "2025-10-01T00:28:02.8Z" }, + { url = "https://files.pythonhosted.org/packages/d6/a3/4c44fce0d49a4703cc94bfbe705adebf7ab36efe978053742957bc7ec324/cryptography-46.0.2-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:0c7ffe8c9b1fcbb07a26d7c9fa5e857c2fe80d72d7b9e0353dcf1d2180ae60ee", size = 4967604, upload-time = "2025-10-01T00:28:04.783Z" }, + { url = "https://files.pythonhosted.org/packages/eb/c2/49d73218747c8cac16bb8318a5513fde3129e06a018af3bc4dc722aa4a98/cryptography-46.0.2-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:5840f05518caa86b09d23f8b9405a7b6d5400085aa14a72a98fdf5cf1568c0d2", size = 4465367, upload-time = "2025-10-01T00:28:06.864Z" }, + { url = "https://files.pythonhosted.org/packages/1b/64/9afa7d2ee742f55ca6285a54386ed2778556a4ed8871571cb1c1bfd8db9e/cryptography-46.0.2-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:27c53b4f6a682a1b645fbf1cd5058c72cf2f5aeba7d74314c36838c7cbc06e0f", size = 4291678, upload-time = "2025-10-01T00:28:08.982Z" }, + { url = "https://files.pythonhosted.org/packages/50/48/1696d5ea9623a7b72ace87608f6899ca3c331709ac7ebf80740abb8ac673/cryptography-46.0.2-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:512c0250065e0a6b286b2db4bbcc2e67d810acd53eb81733e71314340366279e", size = 4931366, 
upload-time = "2025-10-01T00:28:10.74Z" }, + { url = "https://files.pythonhosted.org/packages/eb/3c/9dfc778401a334db3b24435ee0733dd005aefb74afe036e2d154547cb917/cryptography-46.0.2-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:07c0eb6657c0e9cca5891f4e35081dbf985c8131825e21d99b4f440a8f496f36", size = 4464738, upload-time = "2025-10-01T00:28:12.491Z" }, + { url = "https://files.pythonhosted.org/packages/dc/b1/abcde62072b8f3fd414e191a6238ce55a0050e9738090dc6cded24c12036/cryptography-46.0.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:48b983089378f50cba258f7f7aa28198c3f6e13e607eaf10472c26320332ca9a", size = 4419305, upload-time = "2025-10-01T00:28:14.145Z" }, + { url = "https://files.pythonhosted.org/packages/c7/1f/3d2228492f9391395ca34c677e8f2571fb5370fe13dc48c1014f8c509864/cryptography-46.0.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:e6f6775eaaa08c0eec73e301f7592f4367ccde5e4e4df8e58320f2ebf161ea2c", size = 4681201, upload-time = "2025-10-01T00:28:15.951Z" }, + { url = "https://files.pythonhosted.org/packages/de/77/b687745804a93a55054f391528fcfc76c3d6bfd082ce9fb62c12f0d29fc1/cryptography-46.0.2-cp314-cp314t-win32.whl", hash = "sha256:e8633996579961f9b5a3008683344c2558d38420029d3c0bc7ff77c17949a4e1", size = 3022492, upload-time = "2025-10-01T00:28:17.643Z" }, + { url = "https://files.pythonhosted.org/packages/60/a5/8d498ef2996e583de0bef1dcc5e70186376f00883ae27bf2133f490adf21/cryptography-46.0.2-cp314-cp314t-win_amd64.whl", hash = "sha256:48c01988ecbb32979bb98731f5c2b2f79042a6c58cc9a319c8c2f9987c7f68f9", size = 3496215, upload-time = "2025-10-01T00:28:19.272Z" }, + { url = "https://files.pythonhosted.org/packages/56/db/ee67aaef459a2706bc302b15889a1a8126ebe66877bab1487ae6ad00f33d/cryptography-46.0.2-cp314-cp314t-win_arm64.whl", hash = "sha256:8e2ad4d1a5899b7caa3a450e33ee2734be7cc0689010964703a7c4bcc8dd4fd0", size = 2919255, upload-time = "2025-10-01T00:28:21.115Z" }, { url = "https://files.pythonhosted.org/packages/d5/bb/fa95abcf147a1b0bb94d95f53fbb09da77b24c776c5d87d36f3d94521d2c/cryptography-46.0.2-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:a08e7401a94c002e79dc3bc5231b6558cd4b2280ee525c4673f650a37e2c7685", size = 7248090, upload-time = "2025-10-01T00:28:22.846Z" }, { url = "https://files.pythonhosted.org/packages/b7/66/f42071ce0e3ffbfa80a88feadb209c779fda92a23fbc1e14f74ebf72ef6b/cryptography-46.0.2-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d30bc11d35743bf4ddf76674a0a369ec8a21f87aaa09b0661b04c5f6c46e8d7b", size = 4293123, upload-time = "2025-10-01T00:28:25.072Z" }, { url = "https://files.pythonhosted.org/packages/a8/5d/1fdbd2e5c1ba822828d250e5a966622ef00185e476d1cd2726b6dd135e53/cryptography-46.0.2-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:bca3f0ce67e5a2a2cf524e86f44697c4323a86e0fd7ba857de1c30d52c11ede1", size = 4439524, upload-time = "2025-10-01T00:28:26.808Z" }, @@ -1344,6 +1456,10 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/5f/60/ce5c34fcdfec493701f9d1532dba95b21b2f6394147234dce21160bd923f/debugpy-1.8.17-cp313-cp313-manylinux_2_34_x86_64.whl", hash = "sha256:3bea3b0b12f3946e098cce9b43c3c46e317b567f79570c3f43f0b96d00788088", size = 4292100, upload-time = "2025-09-17T16:33:56.353Z" }, { url = "https://files.pythonhosted.org/packages/e8/95/7873cf2146577ef71d2a20bf553f12df865922a6f87b9e8ee1df04f01785/debugpy-1.8.17-cp313-cp313-win32.whl", hash = "sha256:e34ee844c2f17b18556b5bbe59e1e2ff4e86a00282d2a46edab73fd7f18f4a83", size = 5277002, upload-time = 
"2025-09-17T16:33:58.231Z" }, { url = "https://files.pythonhosted.org/packages/46/11/18c79a1cee5ff539a94ec4aa290c1c069a5580fd5cfd2fb2e282f8e905da/debugpy-1.8.17-cp313-cp313-win_amd64.whl", hash = "sha256:6c5cd6f009ad4fca8e33e5238210dc1e5f42db07d4b6ab21ac7ffa904a196420", size = 5319047, upload-time = "2025-09-17T16:34:00.586Z" }, + { url = "https://files.pythonhosted.org/packages/de/45/115d55b2a9da6de812696064ceb505c31e952c5d89c4ed1d9bb983deec34/debugpy-1.8.17-cp314-cp314-macosx_15_0_universal2.whl", hash = "sha256:045290c010bcd2d82bc97aa2daf6837443cd52f6328592698809b4549babcee1", size = 2536899, upload-time = "2025-09-17T16:34:02.657Z" }, + { url = "https://files.pythonhosted.org/packages/5a/73/2aa00c7f1f06e997ef57dc9b23d61a92120bec1437a012afb6d176585197/debugpy-1.8.17-cp314-cp314-manylinux_2_34_x86_64.whl", hash = "sha256:b69b6bd9dba6a03632534cdf67c760625760a215ae289f7489a452af1031fe1f", size = 4268254, upload-time = "2025-09-17T16:34:04.486Z" }, + { url = "https://files.pythonhosted.org/packages/86/b5/ed3e65c63c68a6634e3ba04bd10255c8e46ec16ebed7d1c79e4816d8a760/debugpy-1.8.17-cp314-cp314-win32.whl", hash = "sha256:5c59b74aa5630f3a5194467100c3b3d1c77898f9ab27e3f7dc5d40fc2f122670", size = 5277203, upload-time = "2025-09-17T16:34:06.65Z" }, + { url = "https://files.pythonhosted.org/packages/b0/26/394276b71c7538445f29e792f589ab7379ae70fd26ff5577dfde71158e96/debugpy-1.8.17-cp314-cp314-win_amd64.whl", hash = "sha256:893cba7bb0f55161de4365584b025f7064e1f88913551bcd23be3260b231429c", size = 5318493, upload-time = "2025-09-17T16:34:08.483Z" }, { url = "https://files.pythonhosted.org/packages/b0/d0/89247ec250369fc76db477720a26b2fce7ba079ff1380e4ab4529d2fe233/debugpy-1.8.17-py2.py3-none-any.whl", hash = "sha256:60c7dca6571efe660ccb7a9508d73ca14b8796c4ed484c2002abba714226cfef", size = 5283210, upload-time = "2025-09-17T16:34:25.835Z" }, ] @@ -1392,16 +1508,16 @@ wheels = [ [[package]] name = "dicom-validator" -version = "0.7.3" +version = "0.7.2" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "lxml" }, { name = "pydicom" }, { name = "pyparsing" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/a8/db/922c5d3c662aff47e1ab1a64fe9203b5ea813ea493b7b216c03b9bfb4676/dicom_validator-0.7.3.tar.gz", hash = "sha256:4ee8376688fb94ca33c48af34bc5a93dd3e3743abefdc1d16b70eba41c4aa542", size = 63526, upload-time = "2025-10-13T17:32:49.564Z" } +sdist = { url = "https://files.pythonhosted.org/packages/a4/d8/d32e0ae83bd66a563f31afdba863bbd2e2ac23cfc67d29ccc5521f4a81cc/dicom_validator-0.7.2.tar.gz", hash = "sha256:0be29c15cedf2ee8c55f90a166f1c09b7dbfee063a1d2c479e7924d650b2b1a4", size = 62766, upload-time = "2025-08-16T16:32:40.431Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/6e/8d/b3401e93c7301e96315d4d816f5643815f5e6797a8c511e56e91e237b546/dicom_validator-0.7.3-py3-none-any.whl", hash = "sha256:2bffcdf2ae774ef18c521774561987098bfe8c157fdd1fa5bf04f552a195499f", size = 72200, upload-time = "2025-10-13T17:32:48.204Z" }, + { url = "https://files.pythonhosted.org/packages/78/00/a8a1d94c06aabb2cfb1dd2c4fcf62b146ee8d779738572004ae394c4f964/dicom_validator-0.7.2-py3-none-any.whl", hash = "sha256:822fdc700fbe5ac246c24cb3723076aed1f44eed082e634f73de7f1b7aebec8e", size = 71501, upload-time = "2025-08-16T16:32:38.922Z" }, ] [[package]] @@ -1480,28 +1596,28 @@ wheels = [ [[package]] name = "duckdb" -version = "1.4.1" +version = "1.4.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = 
"https://files.pythonhosted.org/packages/ea/e7/21cf50a3d52ffceee1f0bcc3997fa96a5062e6bab705baee4f6c4e33cce5/duckdb-1.4.1.tar.gz", hash = "sha256:f903882f045d057ebccad12ac69975952832edfe133697694854bb784b8d6c76", size = 18461687, upload-time = "2025-10-07T10:37:28.605Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/d9/52/606f13fa9669a24166d2fe523e28982d8ef9039874b4de774255c7806d1f/duckdb-1.4.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:605d563c1d5203ca992497cd33fb386ac3d533deca970f9dcf539f62a34e22a9", size = 29065894, upload-time = "2025-10-07T10:36:29.837Z" }, - { url = "https://files.pythonhosted.org/packages/84/57/138241952ece868b9577e607858466315bed1739e1fbb47205df4dfdfd88/duckdb-1.4.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d3305c7c4b70336171de7adfdb50431f23671c000f11839b580c4201d9ce6ef5", size = 16163720, upload-time = "2025-10-07T10:36:32.241Z" }, - { url = "https://files.pythonhosted.org/packages/a3/81/afa3a0a78498a6f4acfea75c48a70c5082032d9ac87822713d7c2d164af1/duckdb-1.4.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a063d6febbe34b32f1ad2e68822db4d0e4b1102036f49aaeeb22b844427a75df", size = 13756223, upload-time = "2025-10-07T10:36:34.673Z" }, - { url = "https://files.pythonhosted.org/packages/47/dd/5f6064fbd9248e37a3e806a244f81e0390ab8f989d231b584fb954f257fc/duckdb-1.4.1-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d1ffcaaf74f7d1df3684b54685cbf8d3ce732781c541def8e1ced304859733ae", size = 18487022, upload-time = "2025-10-07T10:36:36.759Z" }, - { url = "https://files.pythonhosted.org/packages/a1/10/b54969a1c42fd9344ad39228d671faceb8aa9f144b67cd9531a63551757f/duckdb-1.4.1-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:685d3d1599dc08160e0fa0cf09e93ac4ff8b8ed399cb69f8b5391cd46b5b207c", size = 20491004, upload-time = "2025-10-07T10:36:39.318Z" }, - { url = "https://files.pythonhosted.org/packages/ed/d5/7332ae8f804869a4e895937821b776199a283f8d9fc775fd3ae5a0558099/duckdb-1.4.1-cp311-cp311-win_amd64.whl", hash = "sha256:78f1d28a15ae73bd449c43f80233732adffa49be1840a32de8f1a6bb5b286764", size = 12327619, upload-time = "2025-10-07T10:36:41.509Z" }, - { url = "https://files.pythonhosted.org/packages/0e/6c/906a3fe41cd247b5638866fc1245226b528de196588802d4df4df1e6e819/duckdb-1.4.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:cd1765a7d180b7482874586859fc23bc9969d7d6c96ced83b245e6c6f49cde7f", size = 29076820, upload-time = "2025-10-07T10:36:43.782Z" }, - { url = "https://files.pythonhosted.org/packages/66/c7/01dd33083f01f618c2a29f6dd068baf16945b8cbdb132929d3766610bbbb/duckdb-1.4.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:8ed7a86725185470953410823762956606693c0813bb64e09c7d44dbd9253a64", size = 16167558, upload-time = "2025-10-07T10:36:46.003Z" }, - { url = "https://files.pythonhosted.org/packages/81/e2/f983b4b7ae1dfbdd2792dd31dee9a0d35f88554452cbfc6c9d65e22fdfa9/duckdb-1.4.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8a189bdfc64cfb9cc1adfbe4f2dcfde0a4992ec08505ad8ce33c886e4813f0bf", size = 13762226, upload-time = "2025-10-07T10:36:48.55Z" }, - { url = "https://files.pythonhosted.org/packages/ed/34/fb69a7be19b90f573b3cc890961be7b11870b77514769655657514f10a98/duckdb-1.4.1-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a9090089b6486f7319c92acdeed8acda022d4374032d78a465956f50fc52fabf", size = 18500901, upload-time = "2025-10-07T10:36:52.445Z" }, - { url = 
"https://files.pythonhosted.org/packages/e4/a5/1395d7b49d5589e85da9a9d7ffd8b50364c9d159c2807bef72d547f0ad1e/duckdb-1.4.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:142552ea3e768048e0e8c832077a545ca07792631c59edaee925e3e67401c2a0", size = 20514177, upload-time = "2025-10-07T10:36:55.358Z" }, - { url = "https://files.pythonhosted.org/packages/c0/21/08f10706d30252753349ec545833fc0cea67c11abd0b5223acf2827f1056/duckdb-1.4.1-cp312-cp312-win_amd64.whl", hash = "sha256:567f3b3a785a9e8650612461893c49ca799661d2345a6024dda48324ece89ded", size = 12336422, upload-time = "2025-10-07T10:36:57.521Z" }, - { url = "https://files.pythonhosted.org/packages/d7/08/705988c33e38665c969f7876b3ca4328be578554aa7e3dc0f34158da3e64/duckdb-1.4.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:46496a2518752ae0c6c5d75d4cdecf56ea23dd098746391176dd8e42cf157791", size = 29077070, upload-time = "2025-10-07T10:36:59.83Z" }, - { url = "https://files.pythonhosted.org/packages/99/c5/7c9165f1e6b9069441bcda4da1e19382d4a2357783d37ff9ae238c5c41ac/duckdb-1.4.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:1c65ae7e9b541cea07d8075343bcfebdecc29a3c0481aa6078ee63d51951cfcd", size = 16167506, upload-time = "2025-10-07T10:37:02.24Z" }, - { url = "https://files.pythonhosted.org/packages/38/46/267f4a570a0ee3ae6871ddc03435f9942884284e22a7ba9b7cb252ee69b6/duckdb-1.4.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:598d1a314e34b65d9399ddd066ccce1eeab6a60a2ef5885a84ce5ed62dbaf729", size = 13762330, upload-time = "2025-10-07T10:37:04.581Z" }, - { url = "https://files.pythonhosted.org/packages/15/7b/c4f272a40c36d82df20937d93a1780eb39ab0107fe42b62cba889151eab9/duckdb-1.4.1-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e2f16b8def782d484a9f035fc422bb6f06941ed0054b4511ddcdc514a7fb6a75", size = 18504687, upload-time = "2025-10-07T10:37:06.991Z" }, - { url = "https://files.pythonhosted.org/packages/17/fc/9b958751f0116d7b0406406b07fa6f5a10c22d699be27826d0b896f9bf51/duckdb-1.4.1-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a5a7d0aed068a5c33622a8848857947cab5cfb3f2a315b1251849bac2c74c492", size = 20513823, upload-time = "2025-10-07T10:37:09.349Z" }, - { url = "https://files.pythonhosted.org/packages/30/79/4f544d73fcc0513b71296cb3ebb28a227d22e80dec27204977039b9fa875/duckdb-1.4.1-cp313-cp313-win_amd64.whl", hash = "sha256:280fd663dacdd12bb3c3bf41f3e5b2e5b95e00b88120afabb8b8befa5f335c6f", size = 12336460, upload-time = "2025-10-07T10:37:12.154Z" }, +sdist = { url = "https://files.pythonhosted.org/packages/82/93/adc0d183642fc9a602ca9b97cb16754c84b8c1d92e5b99aec412e0c419a8/duckdb-1.4.0.tar.gz", hash = "sha256:bd5edee8bd5a73b5822f2b390668597b5fcdc2d3292c244d8d933bb87ad6ac4c", size = 18453175, upload-time = "2025-09-16T10:22:41.509Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/60/e9/b29cc5bceac52e049b20d613551a2171a092df07f26d4315f3f9651c80d4/duckdb-1.4.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:6505fed1ccae8df9f574e744c48fa32ee2feaeebe5346c2daf4d4d10a8dac5aa", size = 31290878, upload-time = "2025-09-16T10:21:43.256Z" }, + { url = "https://files.pythonhosted.org/packages/1f/68/d88a15dba48bf6a4b33f1be5097ef45c83f7b9e97c854cc638a85bb07d70/duckdb-1.4.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:36974a04b29c74ac2143457e95420a7422016d050e28573060b89a90b9cf2b57", size = 17288823, upload-time = "2025-09-16T10:21:45.716Z" }, + { url = 
"https://files.pythonhosted.org/packages/8c/7e/e3d2101dc6bbd60f2b3c1d748351ff541fc8c48790ac1218c0199cb930f6/duckdb-1.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:90484b896e5059f145d1facfabea38e22c54a2dcc2bd62dd6c290423f0aee258", size = 14819684, upload-time = "2025-09-16T10:21:48.117Z" }, + { url = "https://files.pythonhosted.org/packages/c4/bb/4ec8e4d03cb5b77d75b9ee0057c2c714cffaa9bda1e55ffec833458af0a3/duckdb-1.4.0-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a969d624b385853b31a43b0a23089683297da2f14846243921c6dbec8382d659", size = 18410075, upload-time = "2025-09-16T10:21:50.517Z" }, + { url = "https://files.pythonhosted.org/packages/ec/21/e896616d892d50dc1e0c142428e9359b483d4dd6e339231d822e57834ad3/duckdb-1.4.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5935644f96a75e9f6f3c3eeb3da14cdcaf7bad14d1199c08439103decb29466a", size = 20402984, upload-time = "2025-09-16T10:21:52.808Z" }, + { url = "https://files.pythonhosted.org/packages/c4/c0/b5eb9497e4a9167d23fbad745969eaa36e28d346648e17565471892d1b33/duckdb-1.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:300aa0e963af97969c38440877fffd576fc1f49c1f5914789a9d01f2fe7def91", size = 12282971, upload-time = "2025-09-16T10:21:55.314Z" }, + { url = "https://files.pythonhosted.org/packages/e8/6d/0c774d6af1aed82dbe855d266cb000a1c09ea31ed7d6c3a79e2167a38e7a/duckdb-1.4.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:18b3a048fca6cc7bafe08b10e1b0ab1509d7a0381ffb2c70359e7dc56d8a705d", size = 31307425, upload-time = "2025-09-16T10:21:57.83Z" }, + { url = "https://files.pythonhosted.org/packages/d3/c0/1fd7b7b2c0c53d8d748d2f28ea9096df5ee9dc39fa736cca68acabe69656/duckdb-1.4.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:2c1271cb85aeacccfd0b1284e816280a7450df1dd4dd85ccb2848563cfdf90e9", size = 17295727, upload-time = "2025-09-16T10:22:02.242Z" }, + { url = "https://files.pythonhosted.org/packages/98/d3/4d4c4bd667b7ada5f6c207c2f127591ebb8468333f207f8f10ff0532578e/duckdb-1.4.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:55064dd2e25711eeaa6a72c25405bdd7994c81a3221657e94309a2faf65d25a6", size = 14826879, upload-time = "2025-09-16T10:22:05.162Z" }, + { url = "https://files.pythonhosted.org/packages/b0/48/e0c1b97d76fb7567c53db5739931323238fad54a642707008104f501db37/duckdb-1.4.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0536d7c81bc506532daccf373ddbc8c6add46aeb70ef3cd5ee70ad5c2b3165ea", size = 18417856, upload-time = "2025-09-16T10:22:07.919Z" }, + { url = "https://files.pythonhosted.org/packages/12/78/297b838f3b9511589badc8f472f70b31cf3bbf9eb99fa0a4d6e911d3114a/duckdb-1.4.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:784554e3ddfcfc5c5c7b1aa1f9925fedb7938f6628729adba48f7ea37554598f", size = 20427154, upload-time = "2025-09-16T10:22:10.216Z" }, + { url = "https://files.pythonhosted.org/packages/ea/57/500d251b886494f6c52d56eeab8a1860572ee62aed05d7d50c71ba2320f3/duckdb-1.4.0-cp312-cp312-win_amd64.whl", hash = "sha256:c5d2aa4d6981f525ada95e6db41bb929403632bb5ff24bd6d6dd551662b1b613", size = 12290108, upload-time = "2025-09-16T10:22:12.668Z" }, + { url = "https://files.pythonhosted.org/packages/2f/64/ee22b2b8572746e1523143b9f28d606575782e0204de5020656a1d15dd14/duckdb-1.4.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:1d94d010a09b1a62d9021a2a71cf266188750f3c9b1912ccd6afe104a6ce8010", size = 31307662, upload-time = "2025-09-16T10:22:14.9Z" }, + { url = 
"https://files.pythonhosted.org/packages/76/2e/4241cd00046ca6b781bd1d9002e8223af061e85d1cc21830aa63e7a7db7c/duckdb-1.4.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:c61756fa8b3374627e5fa964b8e0d5b58e364dce59b87dba7fb7bc6ede196b26", size = 17295617, upload-time = "2025-09-16T10:22:17.239Z" }, + { url = "https://files.pythonhosted.org/packages/f7/98/5ab136bc7b12ac18580350a220db7c00606be9eac2d89de259cce733f64c/duckdb-1.4.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:e70d7d9881ea2c0836695de70ea68c970e18a2856ba3d6502e276c85bd414ae7", size = 14826727, upload-time = "2025-09-16T10:22:19.415Z" }, + { url = "https://files.pythonhosted.org/packages/23/32/57866cf8881288b3dfb9212720221fb890daaa534dbdc6fe3fff3979ecd1/duckdb-1.4.0-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2de258a93435c977a0ec3a74ec8f60c2f215ddc73d427ee49adc4119558facd3", size = 18421289, upload-time = "2025-09-16T10:22:21.564Z" }, + { url = "https://files.pythonhosted.org/packages/a0/83/7438fb43be451a7d4a04650aaaf662b2ff2d95895bbffe3e0e28cbe030c9/duckdb-1.4.0-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a6d3659641d517dd9ed1ab66f110cdbdaa6900106f116effaf2dbedd83c38de3", size = 20426547, upload-time = "2025-09-16T10:22:23.759Z" }, + { url = "https://files.pythonhosted.org/packages/21/b2/98fb89ae81611855f35984e96f648d871f3967bb3f524b51d1372d052f0c/duckdb-1.4.0-cp313-cp313-win_amd64.whl", hash = "sha256:07fcc612ea5f0fe6032b92bcc93693034eb00e7a23eb9146576911d5326af4f7", size = 12290467, upload-time = "2025-09-16T10:22:25.923Z" }, ] [[package]] @@ -1550,29 +1666,28 @@ wheels = [ [[package]] name = "faker" -version = "37.11.0" +version = "37.8.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "tzdata" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/c9/4b/ca43f6bbcef63deb8ac01201af306388670a172587169aab3b192f7490f0/faker-37.11.0.tar.gz", hash = "sha256:22969803849ba0618be8eee2dd01d0d9e2cd3b75e6ff1a291fa9abcdb34da5e6", size = 1935301, upload-time = "2025-10-07T14:49:01.481Z" } +sdist = { url = "https://files.pythonhosted.org/packages/3a/da/1336008d39e5d4076dddb4e0f3a52ada41429274bf558a3cc28030d324a3/faker-37.8.0.tar.gz", hash = "sha256:090bb5abbec2b30949a95ce1ba6b20d1d0ed222883d63483a0d4be4a970d6fb8", size = 1912113, upload-time = "2025-09-15T20:24:13.592Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/a3/46/8f4097b55e43af39e8e71e1f7aec59ff7398bca54d975c30889bc844719d/faker-37.11.0-py3-none-any.whl", hash = "sha256:1508d2da94dfd1e0087b36f386126d84f8583b3de19ac18e392a2831a6676c57", size = 1975525, upload-time = "2025-10-07T14:48:58.29Z" }, + { url = "https://files.pythonhosted.org/packages/f5/11/02ebebb09ff2104b690457cb7bc6ed700c9e0ce88cf581486bb0a5d3c88b/faker-37.8.0-py3-none-any.whl", hash = "sha256:b08233118824423b5fc239f7dd51f145e7018082b4164f8da6a9994e1f1ae793", size = 1953940, upload-time = "2025-09-15T20:24:11.482Z" }, ] [[package]] name = "fastapi" -version = "0.120.4" +version = "0.118.2" source = { registry = "https://pypi.org/simple" } dependencies = [ - { name = "annotated-doc" }, { name = "pydantic" }, { name = "starlette" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/3f/3a/0bf90d5189d7f62dc2bd0523899629ca59b58ff4290d631cd3bb5c8889d4/fastapi-0.120.4.tar.gz", hash = "sha256:2d856bc847893ca4d77896d4504ffdec0fb04312b705065fca9104428eca3868", size = 339716, upload-time = "2025-10-31T18:37:28.81Z" } +sdist = { url = 
"https://files.pythonhosted.org/packages/2e/ad/31a59efecca3b584440cafac6f69634f4661295c858912c2b2905280a089/fastapi-0.118.2.tar.gz", hash = "sha256:d5388dbe76d97cb6ccd2c93b4dd981608062ebf6335280edfa9a11af82443e18", size = 311963, upload-time = "2025-10-08T14:52:17.796Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/ed/47/14a76b926edc3957c8a8258423db789d3fa925d2fed800102fce58959413/fastapi-0.120.4-py3-none-any.whl", hash = "sha256:9bdf192308676480d3593e10fd05094e56d6fdc7d9283db26053d8104d5f82a0", size = 108235, upload-time = "2025-10-31T18:37:27.038Z" }, + { url = "https://files.pythonhosted.org/packages/45/7c/97d033faf771c9fe960c7b51eb78ab266bfa64cbc917601978963f0c3c7b/fastapi-0.118.2-py3-none-any.whl", hash = "sha256:d1f842612e6a305f95abe784b7f8d3215477742e7c67a16fccd20bd79db68150", size = 97954, upload-time = "2025-10-08T14:52:16.166Z" }, ] [package.optional-dependencies] @@ -1621,7 +1736,7 @@ standard = [ [[package]] name = "fastapi-cloud-cli" -version = "0.3.1" +version = "0.3.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "httpx" }, @@ -1632,9 +1747,9 @@ dependencies = [ { name = "typer" }, { name = "uvicorn", extra = ["standard"] }, ] -sdist = { url = "https://files.pythonhosted.org/packages/f9/48/0f14d8555b750dc8c04382804e4214f1d7f55298127f3a0237ba566e69dd/fastapi_cloud_cli-0.3.1.tar.gz", hash = "sha256:8c7226c36e92e92d0c89827e8f56dbf164ab2de4444bd33aa26b6c3f7675db69", size = 24080, upload-time = "2025-10-09T11:32:58.174Z" } +sdist = { url = "https://files.pythonhosted.org/packages/a6/5f/17b403148a23dd708e3166f534136f4d3918942e168aca66659311eb0678/fastapi_cloud_cli-0.3.0.tar.gz", hash = "sha256:17c7f8baa16b2f907696bf77d49df4a04e8715bbf5233024f273870f3ff1ca4d", size = 24388, upload-time = "2025-10-02T13:25:52.361Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/68/79/7f5a5e5513e6a737e5fb089d9c59c74d4d24dc24d581d3aa519b326bedda/fastapi_cloud_cli-0.3.1-py3-none-any.whl", hash = "sha256:7d1a98a77791a9d0757886b2ffbf11bcc6b3be93210dd15064be10b216bf7e00", size = 19711, upload-time = "2025-10-09T11:32:57.118Z" }, + { url = "https://files.pythonhosted.org/packages/58/59/7d12c5173fe2eed21e99bb1a6eb7e4f301951db870a4d915d126e0b6062d/fastapi_cloud_cli-0.3.0-py3-none-any.whl", hash = "sha256:572677dbe38b6d4712d30097a8807b383d648ca09eb58e4a07cef4a517020832", size = 19921, upload-time = "2025-10-02T13:25:51.164Z" }, ] [[package]] @@ -1687,11 +1802,11 @@ wheels = [ [[package]] name = "filelock" -version = "3.20.0" +version = "3.19.1" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/58/46/0028a82567109b5ef6e4d2a1f04a583fb513e6cf9527fcdd09afd817deeb/filelock-3.20.0.tar.gz", hash = "sha256:711e943b4ec6be42e1d4e6690b48dc175c822967466bb31c0c293f34334c13f4", size = 18922, upload-time = "2025-10-08T18:03:50.056Z" } +sdist = { url = "https://files.pythonhosted.org/packages/40/bb/0ab3e58d22305b6f5440629d20683af28959bf793d98d11950e305c1c326/filelock-3.19.1.tar.gz", hash = "sha256:66eda1888b0171c998b35be2bcc0f6d75c388a7ce20c3f3f37aa8e96c2dddf58", size = 17687, upload-time = "2025-08-14T16:56:03.016Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/76/91/7216b27286936c16f5b4d0c530087e4a54eead683e6b0b73dd0c64844af6/filelock-3.20.0-py3-none-any.whl", hash = "sha256:339b4732ffda5cd79b13f4e2711a31b0365ce445d95d243bb996273d072546a2", size = 16054, upload-time = "2025-10-08T18:03:48.35Z" }, + { url = 
"https://files.pythonhosted.org/packages/42/14/42b2651a2f46b022ccd948bca9f2d5af0fd8929c4eec235b8d6d844fbe67/filelock-3.19.1-py3-none-any.whl", hash = "sha256:d38e30481def20772f5baf097c122c3babc4fcdb7e14e57049eb9d88c6dc017d", size = 15988, upload-time = "2025-08-14T16:56:01.633Z" }, ] [[package]] @@ -1724,6 +1839,22 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/fd/9e/eb76f77e82f8d4a46420aadff12cec6237751b0fb9ef1de373186dcffb5f/fonttools-4.60.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:145daa14bf24824b677b9357c5e44fd8895c2a8f53596e1b9ea3496081dc692c", size = 5044495, upload-time = "2025-09-29T21:12:15.241Z" }, { url = "https://files.pythonhosted.org/packages/f8/b3/cede8f8235d42ff7ae891bae8d619d02c8ac9fd0cfc450c5927a6200c70d/fonttools-4.60.1-cp313-cp313-win32.whl", hash = "sha256:2299df884c11162617a66b7c316957d74a18e3758c0274762d2cc87df7bc0272", size = 2217028, upload-time = "2025-09-29T21:12:17.96Z" }, { url = "https://files.pythonhosted.org/packages/75/4d/b022c1577807ce8b31ffe055306ec13a866f2337ecee96e75b24b9b753ea/fonttools-4.60.1-cp313-cp313-win_amd64.whl", hash = "sha256:a3db56f153bd4c5c2b619ab02c5db5192e222150ce5a1bc10f16164714bc39ac", size = 2266200, upload-time = "2025-09-29T21:12:20.14Z" }, + { url = "https://files.pythonhosted.org/packages/9a/83/752ca11c1aa9a899b793a130f2e466b79ea0cf7279c8d79c178fc954a07b/fonttools-4.60.1-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:a884aef09d45ba1206712c7dbda5829562d3fea7726935d3289d343232ecb0d3", size = 2822830, upload-time = "2025-09-29T21:12:24.406Z" }, + { url = "https://files.pythonhosted.org/packages/57/17/bbeab391100331950a96ce55cfbbff27d781c1b85ebafb4167eae50d9fe3/fonttools-4.60.1-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8a44788d9d91df72d1a5eac49b31aeb887a5f4aab761b4cffc4196c74907ea85", size = 2345524, upload-time = "2025-09-29T21:12:26.819Z" }, + { url = "https://files.pythonhosted.org/packages/3d/2e/d4831caa96d85a84dd0da1d9f90d81cec081f551e0ea216df684092c6c97/fonttools-4.60.1-cp314-cp314-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:e852d9dda9f93ad3651ae1e3bb770eac544ec93c3807888798eccddf84596537", size = 4843490, upload-time = "2025-09-29T21:12:29.123Z" }, + { url = "https://files.pythonhosted.org/packages/49/13/5e2ea7c7a101b6fc3941be65307ef8df92cbbfa6ec4804032baf1893b434/fonttools-4.60.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:154cb6ee417e417bf5f7c42fe25858c9140c26f647c7347c06f0cc2d47eff003", size = 4944184, upload-time = "2025-09-29T21:12:31.414Z" }, + { url = "https://files.pythonhosted.org/packages/0c/2b/cf9603551c525b73fc47c52ee0b82a891579a93d9651ed694e4e2cd08bb8/fonttools-4.60.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:5664fd1a9ea7f244487ac8f10340c4e37664675e8667d6fee420766e0fb3cf08", size = 4890218, upload-time = "2025-09-29T21:12:33.936Z" }, + { url = "https://files.pythonhosted.org/packages/fd/2f/933d2352422e25f2376aae74f79eaa882a50fb3bfef3c0d4f50501267101/fonttools-4.60.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:583b7f8e3c49486e4d489ad1deacfb8d5be54a8ef34d6df824f6a171f8511d99", size = 4999324, upload-time = "2025-09-29T21:12:36.637Z" }, + { url = "https://files.pythonhosted.org/packages/38/99/234594c0391221f66216bc2c886923513b3399a148defaccf81dc3be6560/fonttools-4.60.1-cp314-cp314-win32.whl", hash = "sha256:66929e2ea2810c6533a5184f938502cfdaea4bc3efb7130d8cc02e1c1b4108d6", size = 2220861, upload-time = "2025-09-29T21:12:39.108Z" }, + 
{ url = "https://files.pythonhosted.org/packages/3e/1d/edb5b23726dde50fc4068e1493e4fc7658eeefcaf75d4c5ffce067d07ae5/fonttools-4.60.1-cp314-cp314-win_amd64.whl", hash = "sha256:f3d5be054c461d6a2268831f04091dc82753176f6ea06dc6047a5e168265a987", size = 2270934, upload-time = "2025-09-29T21:12:41.339Z" }, + { url = "https://files.pythonhosted.org/packages/fb/da/1392aaa2170adc7071fe7f9cfd181a5684a7afcde605aebddf1fb4d76df5/fonttools-4.60.1-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:b6379e7546ba4ae4b18f8ae2b9bc5960936007a1c0e30b342f662577e8bc3299", size = 2894340, upload-time = "2025-09-29T21:12:43.774Z" }, + { url = "https://files.pythonhosted.org/packages/bf/a7/3b9f16e010d536ce567058b931a20b590d8f3177b2eda09edd92e392375d/fonttools-4.60.1-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9d0ced62b59e0430b3690dbc5373df1c2aa7585e9a8ce38eff87f0fd993c5b01", size = 2375073, upload-time = "2025-09-29T21:12:46.437Z" }, + { url = "https://files.pythonhosted.org/packages/9b/b5/e9bcf51980f98e59bb5bb7c382a63c6f6cac0eec5f67de6d8f2322382065/fonttools-4.60.1-cp314-cp314t-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:875cb7764708b3132637f6c5fb385b16eeba0f7ac9fa45a69d35e09b47045801", size = 4849758, upload-time = "2025-09-29T21:12:48.694Z" }, + { url = "https://files.pythonhosted.org/packages/e3/dc/1d2cf7d1cba82264b2f8385db3f5960e3d8ce756b4dc65b700d2c496f7e9/fonttools-4.60.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a184b2ea57b13680ab6d5fbde99ccef152c95c06746cb7718c583abd8f945ccc", size = 5085598, upload-time = "2025-09-29T21:12:51.081Z" }, + { url = "https://files.pythonhosted.org/packages/5d/4d/279e28ba87fb20e0c69baf72b60bbf1c4d873af1476806a7b5f2b7fac1ff/fonttools-4.60.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:026290e4ec76583881763fac284aca67365e0be9f13a7fb137257096114cb3bc", size = 4957603, upload-time = "2025-09-29T21:12:53.423Z" }, + { url = "https://files.pythonhosted.org/packages/78/d4/ff19976305e0c05aa3340c805475abb00224c954d3c65e82c0a69633d55d/fonttools-4.60.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:f0e8817c7d1a0c2eedebf57ef9a9896f3ea23324769a9a2061a80fe8852705ed", size = 4974184, upload-time = "2025-09-29T21:12:55.962Z" }, + { url = "https://files.pythonhosted.org/packages/63/22/8553ff6166f5cd21cfaa115aaacaa0dc73b91c079a8cfd54a482cbc0f4f5/fonttools-4.60.1-cp314-cp314t-win32.whl", hash = "sha256:1410155d0e764a4615774e5c2c6fc516259fe3eca5882f034eb9bfdbee056259", size = 2282241, upload-time = "2025-09-29T21:12:58.179Z" }, + { url = "https://files.pythonhosted.org/packages/8a/cb/fa7b4d148e11d5a72761a22e595344133e83a9507a4c231df972e657579b/fonttools-4.60.1-cp314-cp314t-win_amd64.whl", hash = "sha256:022beaea4b73a70295b688f817ddc24ed3e3418b5036ffcd5658141184ef0d0c", size = 2345760, upload-time = "2025-09-29T21:13:00.375Z" }, { url = "https://files.pythonhosted.org/packages/c7/93/0dd45cd283c32dea1545151d8c3637b4b8c53cdb3a625aeb2885b184d74d/fonttools-4.60.1-py3-none-any.whl", hash = "sha256:906306ac7afe2156fcf0042173d6ebbb05416af70f6b370967b47f8f00103bbb", size = 1143175, upload-time = "2025-09-29T21:13:24.134Z" }, ] @@ -1738,75 +1869,79 @@ wheels = [ [[package]] name = "frozenlist" -version = "1.8.0" +version = "1.7.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/2d/f5/c831fac6cc817d26fd54c7eaccd04ef7e0288806943f7cc5bbf69f3ac1f0/frozenlist-1.8.0.tar.gz", hash = 
"sha256:3ede829ed8d842f6cd48fc7081d7a41001a56f1f38603f9d49bf3020d59a31ad", size = 45875, upload-time = "2025-10-06T05:38:17.865Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/bc/03/077f869d540370db12165c0aa51640a873fb661d8b315d1d4d67b284d7ac/frozenlist-1.8.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:09474e9831bc2b2199fad6da3c14c7b0fbdd377cce9d3d77131be28906cb7d84", size = 86912, upload-time = "2025-10-06T05:35:45.98Z" }, - { url = "https://files.pythonhosted.org/packages/df/b5/7610b6bd13e4ae77b96ba85abea1c8cb249683217ef09ac9e0ae93f25a91/frozenlist-1.8.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:17c883ab0ab67200b5f964d2b9ed6b00971917d5d8a92df149dc2c9779208ee9", size = 50046, upload-time = "2025-10-06T05:35:47.009Z" }, - { url = "https://files.pythonhosted.org/packages/6e/ef/0e8f1fe32f8a53dd26bdd1f9347efe0778b0fddf62789ea683f4cc7d787d/frozenlist-1.8.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:fa47e444b8ba08fffd1c18e8cdb9a75db1b6a27f17507522834ad13ed5922b93", size = 50119, upload-time = "2025-10-06T05:35:48.38Z" }, - { url = "https://files.pythonhosted.org/packages/11/b1/71a477adc7c36e5fb628245dfbdea2166feae310757dea848d02bd0689fd/frozenlist-1.8.0-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:2552f44204b744fba866e573be4c1f9048d6a324dfe14475103fd51613eb1d1f", size = 231067, upload-time = "2025-10-06T05:35:49.97Z" }, - { url = "https://files.pythonhosted.org/packages/45/7e/afe40eca3a2dc19b9904c0f5d7edfe82b5304cb831391edec0ac04af94c2/frozenlist-1.8.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:957e7c38f250991e48a9a73e6423db1bb9dd14e722a10f6b8bb8e16a0f55f695", size = 233160, upload-time = "2025-10-06T05:35:51.729Z" }, - { url = "https://files.pythonhosted.org/packages/a6/aa/7416eac95603ce428679d273255ffc7c998d4132cfae200103f164b108aa/frozenlist-1.8.0-cp311-cp311-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:8585e3bb2cdea02fc88ffa245069c36555557ad3609e83be0ec71f54fd4abb52", size = 228544, upload-time = "2025-10-06T05:35:53.246Z" }, - { url = "https://files.pythonhosted.org/packages/8b/3d/2a2d1f683d55ac7e3875e4263d28410063e738384d3adc294f5ff3d7105e/frozenlist-1.8.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:edee74874ce20a373d62dc28b0b18b93f645633c2943fd90ee9d898550770581", size = 243797, upload-time = "2025-10-06T05:35:54.497Z" }, - { url = "https://files.pythonhosted.org/packages/78/1e/2d5565b589e580c296d3bb54da08d206e797d941a83a6fdea42af23be79c/frozenlist-1.8.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:c9a63152fe95756b85f31186bddf42e4c02c6321207fd6601a1c89ebac4fe567", size = 247923, upload-time = "2025-10-06T05:35:55.861Z" }, - { url = "https://files.pythonhosted.org/packages/aa/c3/65872fcf1d326a7f101ad4d86285c403c87be7d832b7470b77f6d2ed5ddc/frozenlist-1.8.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:b6db2185db9be0a04fecf2f241c70b63b1a242e2805be291855078f2b404dd6b", size = 230886, upload-time = "2025-10-06T05:35:57.399Z" }, - { url = "https://files.pythonhosted.org/packages/a0/76/ac9ced601d62f6956f03cc794f9e04c81719509f85255abf96e2510f4265/frozenlist-1.8.0-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:f4be2e3d8bc8aabd566f8d5b8ba7ecc09249d74ba3c9ed52e54dc23a293f0b92", size = 245731, upload-time = "2025-10-06T05:35:58.563Z" }, - { url = 
"https://files.pythonhosted.org/packages/b9/49/ecccb5f2598daf0b4a1415497eba4c33c1e8ce07495eb07d2860c731b8d5/frozenlist-1.8.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:c8d1634419f39ea6f5c427ea2f90ca85126b54b50837f31497f3bf38266e853d", size = 241544, upload-time = "2025-10-06T05:35:59.719Z" }, - { url = "https://files.pythonhosted.org/packages/53/4b/ddf24113323c0bbcc54cb38c8b8916f1da7165e07b8e24a717b4a12cbf10/frozenlist-1.8.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:1a7fa382a4a223773ed64242dbe1c9c326ec09457e6b8428efb4118c685c3dfd", size = 241806, upload-time = "2025-10-06T05:36:00.959Z" }, - { url = "https://files.pythonhosted.org/packages/a7/fb/9b9a084d73c67175484ba2789a59f8eebebd0827d186a8102005ce41e1ba/frozenlist-1.8.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:11847b53d722050808926e785df837353bd4d75f1d494377e59b23594d834967", size = 229382, upload-time = "2025-10-06T05:36:02.22Z" }, - { url = "https://files.pythonhosted.org/packages/95/a3/c8fb25aac55bf5e12dae5c5aa6a98f85d436c1dc658f21c3ac73f9fa95e5/frozenlist-1.8.0-cp311-cp311-win32.whl", hash = "sha256:27c6e8077956cf73eadd514be8fb04d77fc946a7fe9f7fe167648b0b9085cc25", size = 39647, upload-time = "2025-10-06T05:36:03.409Z" }, - { url = "https://files.pythonhosted.org/packages/0a/f5/603d0d6a02cfd4c8f2a095a54672b3cf967ad688a60fb9faf04fc4887f65/frozenlist-1.8.0-cp311-cp311-win_amd64.whl", hash = "sha256:ac913f8403b36a2c8610bbfd25b8013488533e71e62b4b4adce9c86c8cea905b", size = 44064, upload-time = "2025-10-06T05:36:04.368Z" }, - { url = "https://files.pythonhosted.org/packages/5d/16/c2c9ab44e181f043a86f9a8f84d5124b62dbcb3a02c0977ec72b9ac1d3e0/frozenlist-1.8.0-cp311-cp311-win_arm64.whl", hash = "sha256:d4d3214a0f8394edfa3e303136d0575eece0745ff2b47bd2cb2e66dd92d4351a", size = 39937, upload-time = "2025-10-06T05:36:05.669Z" }, - { url = "https://files.pythonhosted.org/packages/69/29/948b9aa87e75820a38650af445d2ef2b6b8a6fab1a23b6bb9e4ef0be2d59/frozenlist-1.8.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:78f7b9e5d6f2fdb88cdde9440dc147259b62b9d3b019924def9f6478be254ac1", size = 87782, upload-time = "2025-10-06T05:36:06.649Z" }, - { url = "https://files.pythonhosted.org/packages/64/80/4f6e318ee2a7c0750ed724fa33a4bdf1eacdc5a39a7a24e818a773cd91af/frozenlist-1.8.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:229bf37d2e4acdaf808fd3f06e854a4a7a3661e871b10dc1f8f1896a3b05f18b", size = 50594, upload-time = "2025-10-06T05:36:07.69Z" }, - { url = "https://files.pythonhosted.org/packages/2b/94/5c8a2b50a496b11dd519f4a24cb5496cf125681dd99e94c604ccdea9419a/frozenlist-1.8.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f833670942247a14eafbb675458b4e61c82e002a148f49e68257b79296e865c4", size = 50448, upload-time = "2025-10-06T05:36:08.78Z" }, - { url = "https://files.pythonhosted.org/packages/6a/bd/d91c5e39f490a49df14320f4e8c80161cfcce09f1e2cde1edd16a551abb3/frozenlist-1.8.0-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:494a5952b1c597ba44e0e78113a7266e656b9794eec897b19ead706bd7074383", size = 242411, upload-time = "2025-10-06T05:36:09.801Z" }, - { url = "https://files.pythonhosted.org/packages/8f/83/f61505a05109ef3293dfb1ff594d13d64a2324ac3482be2cedc2be818256/frozenlist-1.8.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:96f423a119f4777a4a056b66ce11527366a8bb92f54e541ade21f2374433f6d4", size = 243014, upload-time = "2025-10-06T05:36:11.394Z" }, - { url = 
"https://files.pythonhosted.org/packages/d8/cb/cb6c7b0f7d4023ddda30cf56b8b17494eb3a79e3fda666bf735f63118b35/frozenlist-1.8.0-cp312-cp312-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:3462dd9475af2025c31cc61be6652dfa25cbfb56cbbf52f4ccfe029f38decaf8", size = 234909, upload-time = "2025-10-06T05:36:12.598Z" }, - { url = "https://files.pythonhosted.org/packages/31/c5/cd7a1f3b8b34af009fb17d4123c5a778b44ae2804e3ad6b86204255f9ec5/frozenlist-1.8.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:c4c800524c9cd9bac5166cd6f55285957fcfc907db323e193f2afcd4d9abd69b", size = 250049, upload-time = "2025-10-06T05:36:14.065Z" }, - { url = "https://files.pythonhosted.org/packages/c0/01/2f95d3b416c584a1e7f0e1d6d31998c4a795f7544069ee2e0962a4b60740/frozenlist-1.8.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d6a5df73acd3399d893dafc71663ad22534b5aa4f94e8a2fabfe856c3c1b6a52", size = 256485, upload-time = "2025-10-06T05:36:15.39Z" }, - { url = "https://files.pythonhosted.org/packages/ce/03/024bf7720b3abaebcff6d0793d73c154237b85bdf67b7ed55e5e9596dc9a/frozenlist-1.8.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:405e8fe955c2280ce66428b3ca55e12b3c4e9c336fb2103a4937e891c69a4a29", size = 237619, upload-time = "2025-10-06T05:36:16.558Z" }, - { url = "https://files.pythonhosted.org/packages/69/fa/f8abdfe7d76b731f5d8bd217827cf6764d4f1d9763407e42717b4bed50a0/frozenlist-1.8.0-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:908bd3f6439f2fef9e85031b59fd4f1297af54415fb60e4254a95f75b3cab3f3", size = 250320, upload-time = "2025-10-06T05:36:17.821Z" }, - { url = "https://files.pythonhosted.org/packages/f5/3c/b051329f718b463b22613e269ad72138cc256c540f78a6de89452803a47d/frozenlist-1.8.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:294e487f9ec720bd8ffcebc99d575f7eff3568a08a253d1ee1a0378754b74143", size = 246820, upload-time = "2025-10-06T05:36:19.046Z" }, - { url = "https://files.pythonhosted.org/packages/0f/ae/58282e8f98e444b3f4dd42448ff36fa38bef29e40d40f330b22e7108f565/frozenlist-1.8.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:74c51543498289c0c43656701be6b077f4b265868fa7f8a8859c197006efb608", size = 250518, upload-time = "2025-10-06T05:36:20.763Z" }, - { url = "https://files.pythonhosted.org/packages/8f/96/007e5944694d66123183845a106547a15944fbbb7154788cbf7272789536/frozenlist-1.8.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:776f352e8329135506a1d6bf16ac3f87bc25b28e765949282dcc627af36123aa", size = 239096, upload-time = "2025-10-06T05:36:22.129Z" }, - { url = "https://files.pythonhosted.org/packages/66/bb/852b9d6db2fa40be96f29c0d1205c306288f0684df8fd26ca1951d461a56/frozenlist-1.8.0-cp312-cp312-win32.whl", hash = "sha256:433403ae80709741ce34038da08511d4a77062aa924baf411ef73d1146e74faf", size = 39985, upload-time = "2025-10-06T05:36:23.661Z" }, - { url = "https://files.pythonhosted.org/packages/b8/af/38e51a553dd66eb064cdf193841f16f077585d4d28394c2fa6235cb41765/frozenlist-1.8.0-cp312-cp312-win_amd64.whl", hash = "sha256:34187385b08f866104f0c0617404c8eb08165ab1272e884abc89c112e9c00746", size = 44591, upload-time = "2025-10-06T05:36:24.958Z" }, - { url = "https://files.pythonhosted.org/packages/a7/06/1dc65480ab147339fecc70797e9c2f69d9cea9cf38934ce08df070fdb9cb/frozenlist-1.8.0-cp312-cp312-win_arm64.whl", hash = "sha256:fe3c58d2f5db5fbd18c2987cba06d51b0529f52bc3a6cdc33d3f4eab725104bd", size = 40102, upload-time = "2025-10-06T05:36:26.333Z" }, - { url 
= "https://files.pythonhosted.org/packages/2d/40/0832c31a37d60f60ed79e9dfb5a92e1e2af4f40a16a29abcc7992af9edff/frozenlist-1.8.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:8d92f1a84bb12d9e56f818b3a746f3efba93c1b63c8387a73dde655e1e42282a", size = 85717, upload-time = "2025-10-06T05:36:27.341Z" }, - { url = "https://files.pythonhosted.org/packages/30/ba/b0b3de23f40bc55a7057bd38434e25c34fa48e17f20ee273bbde5e0650f3/frozenlist-1.8.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:96153e77a591c8adc2ee805756c61f59fef4cf4073a9275ee86fe8cba41241f7", size = 49651, upload-time = "2025-10-06T05:36:28.855Z" }, - { url = "https://files.pythonhosted.org/packages/0c/ab/6e5080ee374f875296c4243c381bbdef97a9ac39c6e3ce1d5f7d42cb78d6/frozenlist-1.8.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f21f00a91358803399890ab167098c131ec2ddd5f8f5fd5fe9c9f2c6fcd91e40", size = 49417, upload-time = "2025-10-06T05:36:29.877Z" }, - { url = "https://files.pythonhosted.org/packages/d5/4e/e4691508f9477ce67da2015d8c00acd751e6287739123113a9fca6f1604e/frozenlist-1.8.0-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:fb30f9626572a76dfe4293c7194a09fb1fe93ba94c7d4f720dfae3b646b45027", size = 234391, upload-time = "2025-10-06T05:36:31.301Z" }, - { url = "https://files.pythonhosted.org/packages/40/76/c202df58e3acdf12969a7895fd6f3bc016c642e6726aa63bd3025e0fc71c/frozenlist-1.8.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:eaa352d7047a31d87dafcacbabe89df0aa506abb5b1b85a2fb91bc3faa02d822", size = 233048, upload-time = "2025-10-06T05:36:32.531Z" }, - { url = "https://files.pythonhosted.org/packages/f9/c0/8746afb90f17b73ca5979c7a3958116e105ff796e718575175319b5bb4ce/frozenlist-1.8.0-cp313-cp313-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:03ae967b4e297f58f8c774c7eabcce57fe3c2434817d4385c50661845a058121", size = 226549, upload-time = "2025-10-06T05:36:33.706Z" }, - { url = "https://files.pythonhosted.org/packages/7e/eb/4c7eefc718ff72f9b6c4893291abaae5fbc0c82226a32dcd8ef4f7a5dbef/frozenlist-1.8.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f6292f1de555ffcc675941d65fffffb0a5bcd992905015f85d0592201793e0e5", size = 239833, upload-time = "2025-10-06T05:36:34.947Z" }, - { url = "https://files.pythonhosted.org/packages/c2/4e/e5c02187cf704224f8b21bee886f3d713ca379535f16893233b9d672ea71/frozenlist-1.8.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:29548f9b5b5e3460ce7378144c3010363d8035cea44bc0bf02d57f5a685e084e", size = 245363, upload-time = "2025-10-06T05:36:36.534Z" }, - { url = "https://files.pythonhosted.org/packages/1f/96/cb85ec608464472e82ad37a17f844889c36100eed57bea094518bf270692/frozenlist-1.8.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ec3cc8c5d4084591b4237c0a272cc4f50a5b03396a47d9caaf76f5d7b38a4f11", size = 229314, upload-time = "2025-10-06T05:36:38.582Z" }, - { url = "https://files.pythonhosted.org/packages/5d/6f/4ae69c550e4cee66b57887daeebe006fe985917c01d0fff9caab9883f6d0/frozenlist-1.8.0-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:517279f58009d0b1f2e7c1b130b377a349405da3f7621ed6bfae50b10adf20c1", size = 243365, upload-time = "2025-10-06T05:36:40.152Z" }, - { url = "https://files.pythonhosted.org/packages/7a/58/afd56de246cf11780a40a2c28dc7cbabbf06337cc8ddb1c780a2d97e88d8/frozenlist-1.8.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = 
"sha256:db1e72ede2d0d7ccb213f218df6a078a9c09a7de257c2fe8fcef16d5925230b1", size = 237763, upload-time = "2025-10-06T05:36:41.355Z" }, - { url = "https://files.pythonhosted.org/packages/cb/36/cdfaf6ed42e2644740d4a10452d8e97fa1c062e2a8006e4b09f1b5fd7d63/frozenlist-1.8.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:b4dec9482a65c54a5044486847b8a66bf10c9cb4926d42927ec4e8fd5db7fed8", size = 240110, upload-time = "2025-10-06T05:36:42.716Z" }, - { url = "https://files.pythonhosted.org/packages/03/a8/9ea226fbefad669f11b52e864c55f0bd57d3c8d7eb07e9f2e9a0b39502e1/frozenlist-1.8.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:21900c48ae04d13d416f0e1e0c4d81f7931f73a9dfa0b7a8746fb2fe7dd970ed", size = 233717, upload-time = "2025-10-06T05:36:44.251Z" }, - { url = "https://files.pythonhosted.org/packages/1e/0b/1b5531611e83ba7d13ccc9988967ea1b51186af64c42b7a7af465dcc9568/frozenlist-1.8.0-cp313-cp313-win32.whl", hash = "sha256:8b7b94a067d1c504ee0b16def57ad5738701e4ba10cec90529f13fa03c833496", size = 39628, upload-time = "2025-10-06T05:36:45.423Z" }, - { url = "https://files.pythonhosted.org/packages/d8/cf/174c91dbc9cc49bc7b7aab74d8b734e974d1faa8f191c74af9b7e80848e6/frozenlist-1.8.0-cp313-cp313-win_amd64.whl", hash = "sha256:878be833caa6a3821caf85eb39c5ba92d28e85df26d57afb06b35b2efd937231", size = 43882, upload-time = "2025-10-06T05:36:46.796Z" }, - { url = "https://files.pythonhosted.org/packages/c1/17/502cd212cbfa96eb1388614fe39a3fc9ab87dbbe042b66f97acb57474834/frozenlist-1.8.0-cp313-cp313-win_arm64.whl", hash = "sha256:44389d135b3ff43ba8cc89ff7f51f5a0bb6b63d829c8300f79a2fe4fe61bcc62", size = 39676, upload-time = "2025-10-06T05:36:47.8Z" }, - { url = "https://files.pythonhosted.org/packages/d2/5c/3bbfaa920dfab09e76946a5d2833a7cbdf7b9b4a91c714666ac4855b88b4/frozenlist-1.8.0-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:e25ac20a2ef37e91c1b39938b591457666a0fa835c7783c3a8f33ea42870db94", size = 89235, upload-time = "2025-10-06T05:36:48.78Z" }, - { url = "https://files.pythonhosted.org/packages/d2/d6/f03961ef72166cec1687e84e8925838442b615bd0b8854b54923ce5b7b8a/frozenlist-1.8.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:07cdca25a91a4386d2e76ad992916a85038a9b97561bf7a3fd12d5d9ce31870c", size = 50742, upload-time = "2025-10-06T05:36:49.837Z" }, - { url = "https://files.pythonhosted.org/packages/1e/bb/a6d12b7ba4c3337667d0e421f7181c82dda448ce4e7ad7ecd249a16fa806/frozenlist-1.8.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:4e0c11f2cc6717e0a741f84a527c52616140741cd812a50422f83dc31749fb52", size = 51725, upload-time = "2025-10-06T05:36:50.851Z" }, - { url = "https://files.pythonhosted.org/packages/bc/71/d1fed0ffe2c2ccd70b43714c6cab0f4188f09f8a67a7914a6b46ee30f274/frozenlist-1.8.0-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:b3210649ee28062ea6099cfda39e147fa1bc039583c8ee4481cb7811e2448c51", size = 284533, upload-time = "2025-10-06T05:36:51.898Z" }, - { url = "https://files.pythonhosted.org/packages/c9/1f/fb1685a7b009d89f9bf78a42d94461bc06581f6e718c39344754a5d9bada/frozenlist-1.8.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:581ef5194c48035a7de2aefc72ac6539823bb71508189e5de01d60c9dcd5fa65", size = 292506, upload-time = "2025-10-06T05:36:53.101Z" }, - { url = "https://files.pythonhosted.org/packages/e6/3b/b991fe1612703f7e0d05c0cf734c1b77aaf7c7d321df4572e8d36e7048c8/frozenlist-1.8.0-cp313-cp313t-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = 
"sha256:3ef2d026f16a2b1866e1d86fc4e1291e1ed8a387b2c333809419a2f8b3a77b82", size = 274161, upload-time = "2025-10-06T05:36:54.309Z" }, - { url = "https://files.pythonhosted.org/packages/ca/ec/c5c618767bcdf66e88945ec0157d7f6c4a1322f1473392319b7a2501ded7/frozenlist-1.8.0-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:5500ef82073f599ac84d888e3a8c1f77ac831183244bfd7f11eaa0289fb30714", size = 294676, upload-time = "2025-10-06T05:36:55.566Z" }, - { url = "https://files.pythonhosted.org/packages/7c/ce/3934758637d8f8a88d11f0585d6495ef54b2044ed6ec84492a91fa3b27aa/frozenlist-1.8.0-cp313-cp313t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:50066c3997d0091c411a66e710f4e11752251e6d2d73d70d8d5d4c76442a199d", size = 300638, upload-time = "2025-10-06T05:36:56.758Z" }, - { url = "https://files.pythonhosted.org/packages/fc/4f/a7e4d0d467298f42de4b41cbc7ddaf19d3cfeabaf9ff97c20c6c7ee409f9/frozenlist-1.8.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:5c1c8e78426e59b3f8005e9b19f6ff46e5845895adbde20ece9218319eca6506", size = 283067, upload-time = "2025-10-06T05:36:57.965Z" }, - { url = "https://files.pythonhosted.org/packages/dc/48/c7b163063d55a83772b268e6d1affb960771b0e203b632cfe09522d67ea5/frozenlist-1.8.0-cp313-cp313t-musllinux_1_2_armv7l.whl", hash = "sha256:eefdba20de0d938cec6a89bd4d70f346a03108a19b9df4248d3cf0d88f1b0f51", size = 292101, upload-time = "2025-10-06T05:36:59.237Z" }, - { url = "https://files.pythonhosted.org/packages/9f/d0/2366d3c4ecdc2fd391e0afa6e11500bfba0ea772764d631bbf82f0136c9d/frozenlist-1.8.0-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:cf253e0e1c3ceb4aaff6df637ce033ff6535fb8c70a764a8f46aafd3d6ab798e", size = 289901, upload-time = "2025-10-06T05:37:00.811Z" }, - { url = "https://files.pythonhosted.org/packages/b8/94/daff920e82c1b70e3618a2ac39fbc01ae3e2ff6124e80739ce5d71c9b920/frozenlist-1.8.0-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:032efa2674356903cd0261c4317a561a6850f3ac864a63fc1583147fb05a79b0", size = 289395, upload-time = "2025-10-06T05:37:02.115Z" }, - { url = "https://files.pythonhosted.org/packages/e3/20/bba307ab4235a09fdcd3cc5508dbabd17c4634a1af4b96e0f69bfe551ebd/frozenlist-1.8.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:6da155091429aeba16851ecb10a9104a108bcd32f6c1642867eadaee401c1c41", size = 283659, upload-time = "2025-10-06T05:37:03.711Z" }, - { url = "https://files.pythonhosted.org/packages/fd/00/04ca1c3a7a124b6de4f8a9a17cc2fcad138b4608e7a3fc5877804b8715d7/frozenlist-1.8.0-cp313-cp313t-win32.whl", hash = "sha256:0f96534f8bfebc1a394209427d0f8a63d343c9779cda6fc25e8e121b5fd8555b", size = 43492, upload-time = "2025-10-06T05:37:04.915Z" }, - { url = "https://files.pythonhosted.org/packages/59/5e/c69f733a86a94ab10f68e496dc6b7e8bc078ebb415281d5698313e3af3a1/frozenlist-1.8.0-cp313-cp313t-win_amd64.whl", hash = "sha256:5d63a068f978fc69421fb0e6eb91a9603187527c86b7cd3f534a5b77a592b888", size = 48034, upload-time = "2025-10-06T05:37:06.343Z" }, - { url = "https://files.pythonhosted.org/packages/16/6c/be9d79775d8abe79b05fa6d23da99ad6e7763a1d080fbae7290b286093fd/frozenlist-1.8.0-cp313-cp313t-win_arm64.whl", hash = "sha256:bf0a7e10b077bf5fb9380ad3ae8ce20ef919a6ad93b4552896419ac7e1d8e042", size = 41749, upload-time = "2025-10-06T05:37:07.431Z" }, - { url = "https://files.pythonhosted.org/packages/9a/9a/e35b4a917281c0b8419d4207f4334c8e8c5dbf4f3f5f9ada73958d937dcc/frozenlist-1.8.0-py3-none-any.whl", hash = 
"sha256:0c18a16eab41e82c295618a77502e17b195883241c563b00f0aa5106fc4eaa0d", size = 13409, upload-time = "2025-10-06T05:38:16.721Z" }, +sdist = { url = "https://files.pythonhosted.org/packages/79/b1/b64018016eeb087db503b038296fd782586432b9c077fc5c7839e9cb6ef6/frozenlist-1.7.0.tar.gz", hash = "sha256:2e310d81923c2437ea8670467121cc3e9b0f76d3043cc1d2331d56c7fb7a3a8f", size = 45078, upload-time = "2025-06-09T23:02:35.538Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/34/7e/803dde33760128acd393a27eb002f2020ddb8d99d30a44bfbaab31c5f08a/frozenlist-1.7.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:aa51e147a66b2d74de1e6e2cf5921890de6b0f4820b257465101d7f37b49fb5a", size = 82251, upload-time = "2025-06-09T23:00:16.279Z" }, + { url = "https://files.pythonhosted.org/packages/75/a9/9c2c5760b6ba45eae11334db454c189d43d34a4c0b489feb2175e5e64277/frozenlist-1.7.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:9b35db7ce1cd71d36ba24f80f0c9e7cff73a28d7a74e91fe83e23d27c7828750", size = 48183, upload-time = "2025-06-09T23:00:17.698Z" }, + { url = "https://files.pythonhosted.org/packages/47/be/4038e2d869f8a2da165f35a6befb9158c259819be22eeaf9c9a8f6a87771/frozenlist-1.7.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:34a69a85e34ff37791e94542065c8416c1afbf820b68f720452f636d5fb990cd", size = 47107, upload-time = "2025-06-09T23:00:18.952Z" }, + { url = "https://files.pythonhosted.org/packages/79/26/85314b8a83187c76a37183ceed886381a5f992975786f883472fcb6dc5f2/frozenlist-1.7.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4a646531fa8d82c87fe4bb2e596f23173caec9185bfbca5d583b4ccfb95183e2", size = 237333, upload-time = "2025-06-09T23:00:20.275Z" }, + { url = "https://files.pythonhosted.org/packages/1f/fd/e5b64f7d2c92a41639ffb2ad44a6a82f347787abc0c7df5f49057cf11770/frozenlist-1.7.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:79b2ffbba483f4ed36a0f236ccb85fbb16e670c9238313709638167670ba235f", size = 231724, upload-time = "2025-06-09T23:00:21.705Z" }, + { url = "https://files.pythonhosted.org/packages/20/fb/03395c0a43a5976af4bf7534759d214405fbbb4c114683f434dfdd3128ef/frozenlist-1.7.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a26f205c9ca5829cbf82bb2a84b5c36f7184c4316617d7ef1b271a56720d6b30", size = 245842, upload-time = "2025-06-09T23:00:23.148Z" }, + { url = "https://files.pythonhosted.org/packages/d0/15/c01c8e1dffdac5d9803507d824f27aed2ba76b6ed0026fab4d9866e82f1f/frozenlist-1.7.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bcacfad3185a623fa11ea0e0634aac7b691aa925d50a440f39b458e41c561d98", size = 239767, upload-time = "2025-06-09T23:00:25.103Z" }, + { url = "https://files.pythonhosted.org/packages/14/99/3f4c6fe882c1f5514b6848aa0a69b20cb5e5d8e8f51a339d48c0e9305ed0/frozenlist-1.7.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:72c1b0fe8fe451b34f12dce46445ddf14bd2a5bcad7e324987194dc8e3a74c86", size = 224130, upload-time = "2025-06-09T23:00:27.061Z" }, + { url = "https://files.pythonhosted.org/packages/4d/83/220a374bd7b2aeba9d0725130665afe11de347d95c3620b9b82cc2fcab97/frozenlist-1.7.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:61d1a5baeaac6c0798ff6edfaeaa00e0e412d49946c53fae8d4b8e8b3566c4ae", size = 235301, upload-time = "2025-06-09T23:00:29.02Z" }, + { url = 
"https://files.pythonhosted.org/packages/03/3c/3e3390d75334a063181625343e8daab61b77e1b8214802cc4e8a1bb678fc/frozenlist-1.7.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:7edf5c043c062462f09b6820de9854bf28cc6cc5b6714b383149745e287181a8", size = 234606, upload-time = "2025-06-09T23:00:30.514Z" }, + { url = "https://files.pythonhosted.org/packages/23/1e/58232c19608b7a549d72d9903005e2d82488f12554a32de2d5fb59b9b1ba/frozenlist-1.7.0-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:d50ac7627b3a1bd2dcef6f9da89a772694ec04d9a61b66cf87f7d9446b4a0c31", size = 248372, upload-time = "2025-06-09T23:00:31.966Z" }, + { url = "https://files.pythonhosted.org/packages/c0/a4/e4a567e01702a88a74ce8a324691e62a629bf47d4f8607f24bf1c7216e7f/frozenlist-1.7.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:ce48b2fece5aeb45265bb7a58259f45027db0abff478e3077e12b05b17fb9da7", size = 229860, upload-time = "2025-06-09T23:00:33.375Z" }, + { url = "https://files.pythonhosted.org/packages/73/a6/63b3374f7d22268b41a9db73d68a8233afa30ed164c46107b33c4d18ecdd/frozenlist-1.7.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:fe2365ae915a1fafd982c146754e1de6ab3478def8a59c86e1f7242d794f97d5", size = 245893, upload-time = "2025-06-09T23:00:35.002Z" }, + { url = "https://files.pythonhosted.org/packages/6d/eb/d18b3f6e64799a79673c4ba0b45e4cfbe49c240edfd03a68be20002eaeaa/frozenlist-1.7.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:45a6f2fdbd10e074e8814eb98b05292f27bad7d1883afbe009d96abdcf3bc898", size = 246323, upload-time = "2025-06-09T23:00:36.468Z" }, + { url = "https://files.pythonhosted.org/packages/5a/f5/720f3812e3d06cd89a1d5db9ff6450088b8f5c449dae8ffb2971a44da506/frozenlist-1.7.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:21884e23cffabb157a9dd7e353779077bf5b8f9a58e9b262c6caad2ef5f80a56", size = 233149, upload-time = "2025-06-09T23:00:37.963Z" }, + { url = "https://files.pythonhosted.org/packages/69/68/03efbf545e217d5db8446acfd4c447c15b7c8cf4dbd4a58403111df9322d/frozenlist-1.7.0-cp311-cp311-win32.whl", hash = "sha256:284d233a8953d7b24f9159b8a3496fc1ddc00f4db99c324bd5fb5f22d8698ea7", size = 39565, upload-time = "2025-06-09T23:00:39.753Z" }, + { url = "https://files.pythonhosted.org/packages/58/17/fe61124c5c333ae87f09bb67186d65038834a47d974fc10a5fadb4cc5ae1/frozenlist-1.7.0-cp311-cp311-win_amd64.whl", hash = "sha256:387cbfdcde2f2353f19c2f66bbb52406d06ed77519ac7ee21be0232147c2592d", size = 44019, upload-time = "2025-06-09T23:00:40.988Z" }, + { url = "https://files.pythonhosted.org/packages/ef/a2/c8131383f1e66adad5f6ecfcce383d584ca94055a34d683bbb24ac5f2f1c/frozenlist-1.7.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:3dbf9952c4bb0e90e98aec1bd992b3318685005702656bc6f67c1a32b76787f2", size = 81424, upload-time = "2025-06-09T23:00:42.24Z" }, + { url = "https://files.pythonhosted.org/packages/4c/9d/02754159955088cb52567337d1113f945b9e444c4960771ea90eb73de8db/frozenlist-1.7.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:1f5906d3359300b8a9bb194239491122e6cf1444c2efb88865426f170c262cdb", size = 47952, upload-time = "2025-06-09T23:00:43.481Z" }, + { url = "https://files.pythonhosted.org/packages/01/7a/0046ef1bd6699b40acd2067ed6d6670b4db2f425c56980fa21c982c2a9db/frozenlist-1.7.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3dabd5a8f84573c8d10d8859a50ea2dec01eea372031929871368c09fa103478", size = 46688, upload-time = "2025-06-09T23:00:44.793Z" }, + { url = 
"https://files.pythonhosted.org/packages/d6/a2/a910bafe29c86997363fb4c02069df4ff0b5bc39d33c5198b4e9dd42d8f8/frozenlist-1.7.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aa57daa5917f1738064f302bf2626281a1cb01920c32f711fbc7bc36111058a8", size = 243084, upload-time = "2025-06-09T23:00:46.125Z" }, + { url = "https://files.pythonhosted.org/packages/64/3e/5036af9d5031374c64c387469bfcc3af537fc0f5b1187d83a1cf6fab1639/frozenlist-1.7.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:c193dda2b6d49f4c4398962810fa7d7c78f032bf45572b3e04dd5249dff27e08", size = 233524, upload-time = "2025-06-09T23:00:47.73Z" }, + { url = "https://files.pythonhosted.org/packages/06/39/6a17b7c107a2887e781a48ecf20ad20f1c39d94b2a548c83615b5b879f28/frozenlist-1.7.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:bfe2b675cf0aaa6d61bf8fbffd3c274b3c9b7b1623beb3809df8a81399a4a9c4", size = 248493, upload-time = "2025-06-09T23:00:49.742Z" }, + { url = "https://files.pythonhosted.org/packages/be/00/711d1337c7327d88c44d91dd0f556a1c47fb99afc060ae0ef66b4d24793d/frozenlist-1.7.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8fc5d5cda37f62b262405cf9652cf0856839c4be8ee41be0afe8858f17f4c94b", size = 244116, upload-time = "2025-06-09T23:00:51.352Z" }, + { url = "https://files.pythonhosted.org/packages/24/fe/74e6ec0639c115df13d5850e75722750adabdc7de24e37e05a40527ca539/frozenlist-1.7.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b0d5ce521d1dd7d620198829b87ea002956e4319002ef0bc8d3e6d045cb4646e", size = 224557, upload-time = "2025-06-09T23:00:52.855Z" }, + { url = "https://files.pythonhosted.org/packages/8d/db/48421f62a6f77c553575201e89048e97198046b793f4a089c79a6e3268bd/frozenlist-1.7.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:488d0a7d6a0008ca0db273c542098a0fa9e7dfaa7e57f70acef43f32b3f69dca", size = 241820, upload-time = "2025-06-09T23:00:54.43Z" }, + { url = "https://files.pythonhosted.org/packages/1d/fa/cb4a76bea23047c8462976ea7b7a2bf53997a0ca171302deae9d6dd12096/frozenlist-1.7.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:15a7eaba63983d22c54d255b854e8108e7e5f3e89f647fc854bd77a237e767df", size = 236542, upload-time = "2025-06-09T23:00:56.409Z" }, + { url = "https://files.pythonhosted.org/packages/5d/32/476a4b5cfaa0ec94d3f808f193301debff2ea42288a099afe60757ef6282/frozenlist-1.7.0-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:1eaa7e9c6d15df825bf255649e05bd8a74b04a4d2baa1ae46d9c2d00b2ca2cb5", size = 249350, upload-time = "2025-06-09T23:00:58.468Z" }, + { url = "https://files.pythonhosted.org/packages/8d/ba/9a28042f84a6bf8ea5dbc81cfff8eaef18d78b2a1ad9d51c7bc5b029ad16/frozenlist-1.7.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:e4389e06714cfa9d47ab87f784a7c5be91d3934cd6e9a7b85beef808297cc025", size = 225093, upload-time = "2025-06-09T23:01:00.015Z" }, + { url = "https://files.pythonhosted.org/packages/bc/29/3a32959e68f9cf000b04e79ba574527c17e8842e38c91d68214a37455786/frozenlist-1.7.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:73bd45e1488c40b63fe5a7df892baf9e2a4d4bb6409a2b3b78ac1c6236178e01", size = 245482, upload-time = "2025-06-09T23:01:01.474Z" }, + { url = "https://files.pythonhosted.org/packages/80/e8/edf2f9e00da553f07f5fa165325cfc302dead715cab6ac8336a5f3d0adc2/frozenlist-1.7.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = 
"sha256:99886d98e1643269760e5fe0df31e5ae7050788dd288947f7f007209b8c33f08", size = 249590, upload-time = "2025-06-09T23:01:02.961Z" }, + { url = "https://files.pythonhosted.org/packages/1c/80/9a0eb48b944050f94cc51ee1c413eb14a39543cc4f760ed12657a5a3c45a/frozenlist-1.7.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:290a172aae5a4c278c6da8a96222e6337744cd9c77313efe33d5670b9f65fc43", size = 237785, upload-time = "2025-06-09T23:01:05.095Z" }, + { url = "https://files.pythonhosted.org/packages/f3/74/87601e0fb0369b7a2baf404ea921769c53b7ae00dee7dcfe5162c8c6dbf0/frozenlist-1.7.0-cp312-cp312-win32.whl", hash = "sha256:426c7bc70e07cfebc178bc4c2bf2d861d720c4fff172181eeb4a4c41d4ca2ad3", size = 39487, upload-time = "2025-06-09T23:01:06.54Z" }, + { url = "https://files.pythonhosted.org/packages/0b/15/c026e9a9fc17585a9d461f65d8593d281fedf55fbf7eb53f16c6df2392f9/frozenlist-1.7.0-cp312-cp312-win_amd64.whl", hash = "sha256:563b72efe5da92e02eb68c59cb37205457c977aa7a449ed1b37e6939e5c47c6a", size = 43874, upload-time = "2025-06-09T23:01:07.752Z" }, + { url = "https://files.pythonhosted.org/packages/24/90/6b2cebdabdbd50367273c20ff6b57a3dfa89bd0762de02c3a1eb42cb6462/frozenlist-1.7.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ee80eeda5e2a4e660651370ebffd1286542b67e268aa1ac8d6dbe973120ef7ee", size = 79791, upload-time = "2025-06-09T23:01:09.368Z" }, + { url = "https://files.pythonhosted.org/packages/83/2e/5b70b6a3325363293fe5fc3ae74cdcbc3e996c2a11dde2fd9f1fb0776d19/frozenlist-1.7.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:d1a81c85417b914139e3a9b995d4a1c84559afc839a93cf2cb7f15e6e5f6ed2d", size = 47165, upload-time = "2025-06-09T23:01:10.653Z" }, + { url = "https://files.pythonhosted.org/packages/f4/25/a0895c99270ca6966110f4ad98e87e5662eab416a17e7fd53c364bf8b954/frozenlist-1.7.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:cbb65198a9132ebc334f237d7b0df163e4de83fb4f2bdfe46c1e654bdb0c5d43", size = 45881, upload-time = "2025-06-09T23:01:12.296Z" }, + { url = "https://files.pythonhosted.org/packages/19/7c/71bb0bbe0832793c601fff68cd0cf6143753d0c667f9aec93d3c323f4b55/frozenlist-1.7.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dab46c723eeb2c255a64f9dc05b8dd601fde66d6b19cdb82b2e09cc6ff8d8b5d", size = 232409, upload-time = "2025-06-09T23:01:13.641Z" }, + { url = "https://files.pythonhosted.org/packages/c0/45/ed2798718910fe6eb3ba574082aaceff4528e6323f9a8570be0f7028d8e9/frozenlist-1.7.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:6aeac207a759d0dedd2e40745575ae32ab30926ff4fa49b1635def65806fddee", size = 225132, upload-time = "2025-06-09T23:01:15.264Z" }, + { url = "https://files.pythonhosted.org/packages/ba/e2/8417ae0f8eacb1d071d4950f32f229aa6bf68ab69aab797b72a07ea68d4f/frozenlist-1.7.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:bd8c4e58ad14b4fa7802b8be49d47993182fdd4023393899632c88fd8cd994eb", size = 237638, upload-time = "2025-06-09T23:01:16.752Z" }, + { url = "https://files.pythonhosted.org/packages/f8/b7/2ace5450ce85f2af05a871b8c8719b341294775a0a6c5585d5e6170f2ce7/frozenlist-1.7.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:04fb24d104f425da3540ed83cbfc31388a586a7696142004c577fa61c6298c3f", size = 233539, upload-time = "2025-06-09T23:01:18.202Z" }, + { url = 
"https://files.pythonhosted.org/packages/46/b9/6989292c5539553dba63f3c83dc4598186ab2888f67c0dc1d917e6887db6/frozenlist-1.7.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6a5c505156368e4ea6b53b5ac23c92d7edc864537ff911d2fb24c140bb175e60", size = 215646, upload-time = "2025-06-09T23:01:19.649Z" }, + { url = "https://files.pythonhosted.org/packages/72/31/bc8c5c99c7818293458fe745dab4fd5730ff49697ccc82b554eb69f16a24/frozenlist-1.7.0-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8bd7eb96a675f18aa5c553eb7ddc24a43c8c18f22e1f9925528128c052cdbe00", size = 232233, upload-time = "2025-06-09T23:01:21.175Z" }, + { url = "https://files.pythonhosted.org/packages/59/52/460db4d7ba0811b9ccb85af996019f5d70831f2f5f255f7cc61f86199795/frozenlist-1.7.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:05579bf020096fe05a764f1f84cd104a12f78eaab68842d036772dc6d4870b4b", size = 227996, upload-time = "2025-06-09T23:01:23.098Z" }, + { url = "https://files.pythonhosted.org/packages/ba/c9/f4b39e904c03927b7ecf891804fd3b4df3db29b9e487c6418e37988d6e9d/frozenlist-1.7.0-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:376b6222d114e97eeec13d46c486facd41d4f43bab626b7c3f6a8b4e81a5192c", size = 242280, upload-time = "2025-06-09T23:01:24.808Z" }, + { url = "https://files.pythonhosted.org/packages/b8/33/3f8d6ced42f162d743e3517781566b8481322be321b486d9d262adf70bfb/frozenlist-1.7.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:0aa7e176ebe115379b5b1c95b4096fb1c17cce0847402e227e712c27bdb5a949", size = 217717, upload-time = "2025-06-09T23:01:26.28Z" }, + { url = "https://files.pythonhosted.org/packages/3e/e8/ad683e75da6ccef50d0ab0c2b2324b32f84fc88ceee778ed79b8e2d2fe2e/frozenlist-1.7.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:3fbba20e662b9c2130dc771e332a99eff5da078b2b2648153a40669a6d0e36ca", size = 236644, upload-time = "2025-06-09T23:01:27.887Z" }, + { url = "https://files.pythonhosted.org/packages/b2/14/8d19ccdd3799310722195a72ac94ddc677541fb4bef4091d8e7775752360/frozenlist-1.7.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:f3f4410a0a601d349dd406b5713fec59b4cee7e71678d5b17edda7f4655a940b", size = 238879, upload-time = "2025-06-09T23:01:29.524Z" }, + { url = "https://files.pythonhosted.org/packages/ce/13/c12bf657494c2fd1079a48b2db49fa4196325909249a52d8f09bc9123fd7/frozenlist-1.7.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:e2cdfaaec6a2f9327bf43c933c0319a7c429058e8537c508964a133dffee412e", size = 232502, upload-time = "2025-06-09T23:01:31.287Z" }, + { url = "https://files.pythonhosted.org/packages/d7/8b/e7f9dfde869825489382bc0d512c15e96d3964180c9499efcec72e85db7e/frozenlist-1.7.0-cp313-cp313-win32.whl", hash = "sha256:5fc4df05a6591c7768459caba1b342d9ec23fa16195e744939ba5914596ae3e1", size = 39169, upload-time = "2025-06-09T23:01:35.503Z" }, + { url = "https://files.pythonhosted.org/packages/35/89/a487a98d94205d85745080a37860ff5744b9820a2c9acbcdd9440bfddf98/frozenlist-1.7.0-cp313-cp313-win_amd64.whl", hash = "sha256:52109052b9791a3e6b5d1b65f4b909703984b770694d3eb64fad124c835d7cba", size = 43219, upload-time = "2025-06-09T23:01:36.784Z" }, + { url = "https://files.pythonhosted.org/packages/56/d5/5c4cf2319a49eddd9dd7145e66c4866bdc6f3dbc67ca3d59685149c11e0d/frozenlist-1.7.0-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:a6f86e4193bb0e235ef6ce3dde5cbabed887e0b11f516ce8a0f4d3b33078ec2d", size = 84345, upload-time = "2025-06-09T23:01:38.295Z" }, + { url = 
"https://files.pythonhosted.org/packages/a4/7d/ec2c1e1dc16b85bc9d526009961953df9cec8481b6886debb36ec9107799/frozenlist-1.7.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:82d664628865abeb32d90ae497fb93df398a69bb3434463d172b80fc25b0dd7d", size = 48880, upload-time = "2025-06-09T23:01:39.887Z" }, + { url = "https://files.pythonhosted.org/packages/69/86/f9596807b03de126e11e7d42ac91e3d0b19a6599c714a1989a4e85eeefc4/frozenlist-1.7.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:912a7e8375a1c9a68325a902f3953191b7b292aa3c3fb0d71a216221deca460b", size = 48498, upload-time = "2025-06-09T23:01:41.318Z" }, + { url = "https://files.pythonhosted.org/packages/5e/cb/df6de220f5036001005f2d726b789b2c0b65f2363b104bbc16f5be8084f8/frozenlist-1.7.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9537c2777167488d539bc5de2ad262efc44388230e5118868e172dd4a552b146", size = 292296, upload-time = "2025-06-09T23:01:42.685Z" }, + { url = "https://files.pythonhosted.org/packages/83/1f/de84c642f17c8f851a2905cee2dae401e5e0daca9b5ef121e120e19aa825/frozenlist-1.7.0-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:f34560fb1b4c3e30ba35fa9a13894ba39e5acfc5f60f57d8accde65f46cc5e74", size = 273103, upload-time = "2025-06-09T23:01:44.166Z" }, + { url = "https://files.pythonhosted.org/packages/88/3c/c840bfa474ba3fa13c772b93070893c6e9d5c0350885760376cbe3b6c1b3/frozenlist-1.7.0-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:acd03d224b0175f5a850edc104ac19040d35419eddad04e7cf2d5986d98427f1", size = 292869, upload-time = "2025-06-09T23:01:45.681Z" }, + { url = "https://files.pythonhosted.org/packages/a6/1c/3efa6e7d5a39a1d5ef0abeb51c48fb657765794a46cf124e5aca2c7a592c/frozenlist-1.7.0-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f2038310bc582f3d6a09b3816ab01737d60bf7b1ec70f5356b09e84fb7408ab1", size = 291467, upload-time = "2025-06-09T23:01:47.234Z" }, + { url = "https://files.pythonhosted.org/packages/4f/00/d5c5e09d4922c395e2f2f6b79b9a20dab4b67daaf78ab92e7729341f61f6/frozenlist-1.7.0-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8c05e4c8e5f36e5e088caa1bf78a687528f83c043706640a92cb76cd6999384", size = 266028, upload-time = "2025-06-09T23:01:48.819Z" }, + { url = "https://files.pythonhosted.org/packages/4e/27/72765be905619dfde25a7f33813ac0341eb6b076abede17a2e3fbfade0cb/frozenlist-1.7.0-cp313-cp313t-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:765bb588c86e47d0b68f23c1bee323d4b703218037765dcf3f25c838c6fecceb", size = 284294, upload-time = "2025-06-09T23:01:50.394Z" }, + { url = "https://files.pythonhosted.org/packages/88/67/c94103a23001b17808eb7dd1200c156bb69fb68e63fcf0693dde4cd6228c/frozenlist-1.7.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:32dc2e08c67d86d0969714dd484fd60ff08ff81d1a1e40a77dd34a387e6ebc0c", size = 281898, upload-time = "2025-06-09T23:01:52.234Z" }, + { url = "https://files.pythonhosted.org/packages/42/34/a3e2c00c00f9e2a9db5653bca3fec306349e71aff14ae45ecc6d0951dd24/frozenlist-1.7.0-cp313-cp313t-musllinux_1_2_armv7l.whl", hash = "sha256:c0303e597eb5a5321b4de9c68e9845ac8f290d2ab3f3e2c864437d3c5a30cd65", size = 290465, upload-time = "2025-06-09T23:01:53.788Z" }, + { url = "https://files.pythonhosted.org/packages/bb/73/f89b7fbce8b0b0c095d82b008afd0590f71ccb3dee6eee41791cf8cd25fd/frozenlist-1.7.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = 
"sha256:a47f2abb4e29b3a8d0b530f7c3598badc6b134562b1a5caee867f7c62fee51e3", size = 266385, upload-time = "2025-06-09T23:01:55.769Z" }, + { url = "https://files.pythonhosted.org/packages/cd/45/e365fdb554159462ca12df54bc59bfa7a9a273ecc21e99e72e597564d1ae/frozenlist-1.7.0-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:3d688126c242a6fabbd92e02633414d40f50bb6002fa4cf995a1d18051525657", size = 288771, upload-time = "2025-06-09T23:01:57.4Z" }, + { url = "https://files.pythonhosted.org/packages/00/11/47b6117002a0e904f004d70ec5194fe9144f117c33c851e3d51c765962d0/frozenlist-1.7.0-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:4e7e9652b3d367c7bd449a727dc79d5043f48b88d0cbfd4f9f1060cf2b414104", size = 288206, upload-time = "2025-06-09T23:01:58.936Z" }, + { url = "https://files.pythonhosted.org/packages/40/37/5f9f3c3fd7f7746082ec67bcdc204db72dad081f4f83a503d33220a92973/frozenlist-1.7.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:1a85e345b4c43db8b842cab1feb41be5cc0b10a1830e6295b69d7310f99becaf", size = 282620, upload-time = "2025-06-09T23:02:00.493Z" }, + { url = "https://files.pythonhosted.org/packages/0b/31/8fbc5af2d183bff20f21aa743b4088eac4445d2bb1cdece449ae80e4e2d1/frozenlist-1.7.0-cp313-cp313t-win32.whl", hash = "sha256:3a14027124ddb70dfcee5148979998066897e79f89f64b13328595c4bdf77c81", size = 43059, upload-time = "2025-06-09T23:02:02.072Z" }, + { url = "https://files.pythonhosted.org/packages/bb/ed/41956f52105b8dbc26e457c5705340c67c8cc2b79f394b79bffc09d0e938/frozenlist-1.7.0-cp313-cp313t-win_amd64.whl", hash = "sha256:3bf8010d71d4507775f658e9823210b7427be36625b387221642725b515dcf3e", size = 47516, upload-time = "2025-06-09T23:02:03.779Z" }, + { url = "https://files.pythonhosted.org/packages/ee/45/b82e3c16be2182bff01179db177fe144d58b5dc787a7d4492c6ed8b9317f/frozenlist-1.7.0-py3-none-any.whl", hash = "sha256:9a5af342e34f7e97caf8c995864c7a396418ae2859cc6fdf1b1073020d516a7e", size = 13106, upload-time = "2025-06-09T23:02:34.204Z" }, ] [[package]] @@ -1853,7 +1988,7 @@ wheels = [ [[package]] name = "google-api-core" -version = "2.26.0" +version = "2.25.2" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "google-auth" }, @@ -1862,9 +1997,9 @@ dependencies = [ { name = "protobuf" }, { name = "requests" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/32/ea/e7b6ac3c7b557b728c2d0181010548cbbdd338e9002513420c5a354fa8df/google_api_core-2.26.0.tar.gz", hash = "sha256:e6e6d78bd6cf757f4aee41dcc85b07f485fbb069d5daa3afb126defba1e91a62", size = 166369, upload-time = "2025-10-08T21:37:38.39Z" } +sdist = { url = "https://files.pythonhosted.org/packages/09/cd/63f1557235c2440fe0577acdbc32577c5c002684c58c7f4d770a92366a24/google_api_core-2.25.2.tar.gz", hash = "sha256:1c63aa6af0d0d5e37966f157a77f9396d820fba59f9e43e9415bc3dc5baff300", size = 166266, upload-time = "2025-10-03T00:07:34.778Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/77/ad/f73cf9fe9bd95918502b270e3ddb8764e4c900b3bbd7782b90c56fac14bb/google_api_core-2.26.0-py3-none-any.whl", hash = "sha256:2b204bd0da2c81f918e3582c48458e24c11771f987f6258e6e227212af78f3ed", size = 162505, upload-time = "2025-10-08T21:37:36.651Z" }, + { url = "https://files.pythonhosted.org/packages/c8/d8/894716a5423933f5c8d2d5f04b16f052a515f78e815dab0c2c6f1fd105dc/google_api_core-2.25.2-py3-none-any.whl", hash = "sha256:e9a8f62d363dc8424a8497f4c2a47d6bcda6c16514c935629c257ab5d10210e7", size = 162489, upload-time = "2025-10-03T00:07:32.924Z" }, ] [[package]] @@ -1896,7 +2031,7 @@ wheels = [ [[package]] name = 
"google-cloud-storage" -version = "3.4.1" +version = "3.4.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "google-api-core" }, @@ -1906,9 +2041,9 @@ dependencies = [ { name = "google-resumable-media" }, { name = "requests" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/bd/ef/7cefdca67a6c8b3af0ec38612f9e78e5a9f6179dd91352772ae1a9849246/google_cloud_storage-3.4.1.tar.gz", hash = "sha256:6f041a297e23a4b485fad8c305a7a6e6831855c208bcbe74d00332a909f82268", size = 17238203, upload-time = "2025-10-08T18:43:39.665Z" } +sdist = { url = "https://files.pythonhosted.org/packages/4e/a6/6e0a318f70975a3c048c0e1a18aee4f7b6d7dac1e798fdc5353c5248d418/google_cloud_storage-3.4.0.tar.gz", hash = "sha256:4c77ec00c98ccc6428e4c39404926f41e2152f48809b02af29d5116645c3c317", size = 17226847, upload-time = "2025-09-15T10:40:05.045Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/83/6e/b47d83d3a35231c6232566341b0355cce78fd4e6988a7343725408547b2c/google_cloud_storage-3.4.1-py3-none-any.whl", hash = "sha256:972764cc0392aa097be8f49a5354e22eb47c3f62370067fb1571ffff4a1c1189", size = 290142, upload-time = "2025-10-08T18:43:37.524Z" }, + { url = "https://files.pythonhosted.org/packages/16/12/164a90e4692423ed5532274928b0e19c8cae345ae1aa413d78c6b688231b/google_cloud_storage-3.4.0-py3-none-any.whl", hash = "sha256:16eeca305e4747a6871f8f7627eef3b862fdd365b872ca74d4a89e9841d0f8e8", size = 278423, upload-time = "2025-09-15T10:40:03.349Z" }, ] [[package]] @@ -2029,31 +2164,31 @@ wheels = [ [[package]] name = "httptools" -version = "0.7.1" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/b5/46/120a669232c7bdedb9d52d4aeae7e6c7dfe151e99dc70802e2fc7a5e1993/httptools-0.7.1.tar.gz", hash = "sha256:abd72556974f8e7c74a259655924a717a2365b236c882c3f6f8a45fe94703ac9", size = 258961, upload-time = "2025-10-10T03:55:08.559Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/9c/08/17e07e8d89ab8f343c134616d72eebfe03798835058e2ab579dcc8353c06/httptools-0.7.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:474d3b7ab469fefcca3697a10d11a32ee2b9573250206ba1e50d5980910da657", size = 206521, upload-time = "2025-10-10T03:54:31.002Z" }, - { url = "https://files.pythonhosted.org/packages/aa/06/c9c1b41ff52f16aee526fd10fbda99fa4787938aa776858ddc4a1ea825ec/httptools-0.7.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a3c3b7366bb6c7b96bd72d0dbe7f7d5eead261361f013be5f6d9590465ea1c70", size = 110375, upload-time = "2025-10-10T03:54:31.941Z" }, - { url = "https://files.pythonhosted.org/packages/cc/cc/10935db22fda0ee34c76f047590ca0a8bd9de531406a3ccb10a90e12ea21/httptools-0.7.1-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:379b479408b8747f47f3b253326183d7c009a3936518cdb70db58cffd369d9df", size = 456621, upload-time = "2025-10-10T03:54:33.176Z" }, - { url = "https://files.pythonhosted.org/packages/0e/84/875382b10d271b0c11aa5d414b44f92f8dd53e9b658aec338a79164fa548/httptools-0.7.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:cad6b591a682dcc6cf1397c3900527f9affef1e55a06c4547264796bbd17cf5e", size = 454954, upload-time = "2025-10-10T03:54:34.226Z" }, - { url = "https://files.pythonhosted.org/packages/30/e1/44f89b280f7e46c0b1b2ccee5737d46b3bb13136383958f20b580a821ca0/httptools-0.7.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:eb844698d11433d2139bbeeb56499102143beb582bd6c194e3ba69c22f25c274", size = 440175, 
upload-time = "2025-10-10T03:54:35.942Z" }, - { url = "https://files.pythonhosted.org/packages/6f/7e/b9287763159e700e335028bc1824359dc736fa9b829dacedace91a39b37e/httptools-0.7.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f65744d7a8bdb4bda5e1fa23e4ba16832860606fcc09d674d56e425e991539ec", size = 440310, upload-time = "2025-10-10T03:54:37.1Z" }, - { url = "https://files.pythonhosted.org/packages/b3/07/5b614f592868e07f5c94b1f301b5e14a21df4e8076215a3bccb830a687d8/httptools-0.7.1-cp311-cp311-win_amd64.whl", hash = "sha256:135fbe974b3718eada677229312e97f3b31f8a9c8ffa3ae6f565bf808d5b6bcb", size = 86875, upload-time = "2025-10-10T03:54:38.421Z" }, - { url = "https://files.pythonhosted.org/packages/53/7f/403e5d787dc4942316e515e949b0c8a013d84078a915910e9f391ba9b3ed/httptools-0.7.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:38e0c83a2ea9746ebbd643bdfb521b9aa4a91703e2cd705c20443405d2fd16a5", size = 206280, upload-time = "2025-10-10T03:54:39.274Z" }, - { url = "https://files.pythonhosted.org/packages/2a/0d/7f3fd28e2ce311ccc998c388dd1c53b18120fda3b70ebb022b135dc9839b/httptools-0.7.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f25bbaf1235e27704f1a7b86cd3304eabc04f569c828101d94a0e605ef7205a5", size = 110004, upload-time = "2025-10-10T03:54:40.403Z" }, - { url = "https://files.pythonhosted.org/packages/84/a6/b3965e1e146ef5762870bbe76117876ceba51a201e18cc31f5703e454596/httptools-0.7.1-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:2c15f37ef679ab9ecc06bfc4e6e8628c32a8e4b305459de7cf6785acd57e4d03", size = 517655, upload-time = "2025-10-10T03:54:41.347Z" }, - { url = "https://files.pythonhosted.org/packages/11/7d/71fee6f1844e6fa378f2eddde6c3e41ce3a1fb4b2d81118dd544e3441ec0/httptools-0.7.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7fe6e96090df46b36ccfaf746f03034e5ab723162bc51b0a4cf58305324036f2", size = 511440, upload-time = "2025-10-10T03:54:42.452Z" }, - { url = "https://files.pythonhosted.org/packages/22/a5/079d216712a4f3ffa24af4a0381b108aa9c45b7a5cc6eb141f81726b1823/httptools-0.7.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:f72fdbae2dbc6e68b8239defb48e6a5937b12218e6ffc2c7846cc37befa84362", size = 495186, upload-time = "2025-10-10T03:54:43.937Z" }, - { url = "https://files.pythonhosted.org/packages/e9/9e/025ad7b65278745dee3bd0ebf9314934c4592560878308a6121f7f812084/httptools-0.7.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e99c7b90a29fd82fea9ef57943d501a16f3404d7b9ee81799d41639bdaae412c", size = 499192, upload-time = "2025-10-10T03:54:45.003Z" }, - { url = "https://files.pythonhosted.org/packages/6d/de/40a8f202b987d43afc4d54689600ff03ce65680ede2f31df348d7f368b8f/httptools-0.7.1-cp312-cp312-win_amd64.whl", hash = "sha256:3e14f530fefa7499334a79b0cf7e7cd2992870eb893526fb097d51b4f2d0f321", size = 86694, upload-time = "2025-10-10T03:54:45.923Z" }, - { url = "https://files.pythonhosted.org/packages/09/8f/c77b1fcbfd262d422f12da02feb0d218fa228d52485b77b953832105bb90/httptools-0.7.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:6babce6cfa2a99545c60bfef8bee0cc0545413cb0018f617c8059a30ad985de3", size = 202889, upload-time = "2025-10-10T03:54:47.089Z" }, - { url = "https://files.pythonhosted.org/packages/0a/1a/22887f53602feaa066354867bc49a68fc295c2293433177ee90870a7d517/httptools-0.7.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:601b7628de7504077dd3dcb3791c6b8694bbd967148a6d1f01806509254fb1ca", size = 108180, upload-time = 
"2025-10-10T03:54:48.052Z" }, - { url = "https://files.pythonhosted.org/packages/32/6a/6aaa91937f0010d288d3d124ca2946d48d60c3a5ee7ca62afe870e3ea011/httptools-0.7.1-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:04c6c0e6c5fb0739c5b8a9eb046d298650a0ff38cf42537fc372b28dc7e4472c", size = 478596, upload-time = "2025-10-10T03:54:48.919Z" }, - { url = "https://files.pythonhosted.org/packages/6d/70/023d7ce117993107be88d2cbca566a7c1323ccbaf0af7eabf2064fe356f6/httptools-0.7.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:69d4f9705c405ae3ee83d6a12283dc9feba8cc6aaec671b412917e644ab4fa66", size = 473268, upload-time = "2025-10-10T03:54:49.993Z" }, - { url = "https://files.pythonhosted.org/packages/32/4d/9dd616c38da088e3f436e9a616e1d0cc66544b8cdac405cc4e81c8679fc7/httptools-0.7.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:44c8f4347d4b31269c8a9205d8a5ee2df5322b09bbbd30f8f862185bb6b05346", size = 455517, upload-time = "2025-10-10T03:54:51.066Z" }, - { url = "https://files.pythonhosted.org/packages/1d/3a/a6c595c310b7df958e739aae88724e24f9246a514d909547778d776799be/httptools-0.7.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:465275d76db4d554918aba40bf1cbebe324670f3dfc979eaffaa5d108e2ed650", size = 458337, upload-time = "2025-10-10T03:54:52.196Z" }, - { url = "https://files.pythonhosted.org/packages/fd/82/88e8d6d2c51edc1cc391b6e044c6c435b6aebe97b1abc33db1b0b24cd582/httptools-0.7.1-cp313-cp313-win_amd64.whl", hash = "sha256:322d00c2068d125bd570f7bf78b2d367dad02b919d8581d7476d8b75b294e3e6", size = 85743, upload-time = "2025-10-10T03:54:53.448Z" }, +version = "0.6.4" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/a7/9a/ce5e1f7e131522e6d3426e8e7a490b3a01f39a6696602e1c4f33f9e94277/httptools-0.6.4.tar.gz", hash = "sha256:4e93eee4add6493b59a5c514da98c939b244fce4a0d8879cd3f466562f4b7d5c", size = 240639, upload-time = "2024-10-16T19:45:08.902Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/7b/26/bb526d4d14c2774fe07113ca1db7255737ffbb119315839af2065abfdac3/httptools-0.6.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:f47f8ed67cc0ff862b84a1189831d1d33c963fb3ce1ee0c65d3b0cbe7b711069", size = 199029, upload-time = "2024-10-16T19:44:18.427Z" }, + { url = "https://files.pythonhosted.org/packages/a6/17/3e0d3e9b901c732987a45f4f94d4e2c62b89a041d93db89eafb262afd8d5/httptools-0.6.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:0614154d5454c21b6410fdf5262b4a3ddb0f53f1e1721cfd59d55f32138c578a", size = 103492, upload-time = "2024-10-16T19:44:19.515Z" }, + { url = "https://files.pythonhosted.org/packages/b7/24/0fe235d7b69c42423c7698d086d4db96475f9b50b6ad26a718ef27a0bce6/httptools-0.6.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8787367fbdfccae38e35abf7641dafc5310310a5987b689f4c32cc8cc3ee975", size = 462891, upload-time = "2024-10-16T19:44:21.067Z" }, + { url = "https://files.pythonhosted.org/packages/b1/2f/205d1f2a190b72da6ffb5f41a3736c26d6fa7871101212b15e9b5cd8f61d/httptools-0.6.4-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40b0f7fe4fd38e6a507bdb751db0379df1e99120c65fbdc8ee6c1d044897a636", size = 459788, upload-time = "2024-10-16T19:44:22.958Z" }, + { url = 
"https://files.pythonhosted.org/packages/6e/4c/d09ce0eff09057a206a74575ae8f1e1e2f0364d20e2442224f9e6612c8b9/httptools-0.6.4-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:40a5ec98d3f49904b9fe36827dcf1aadfef3b89e2bd05b0e35e94f97c2b14721", size = 433214, upload-time = "2024-10-16T19:44:24.513Z" }, + { url = "https://files.pythonhosted.org/packages/3e/d2/84c9e23edbccc4a4c6f96a1b8d99dfd2350289e94f00e9ccc7aadde26fb5/httptools-0.6.4-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:dacdd3d10ea1b4ca9df97a0a303cbacafc04b5cd375fa98732678151643d4988", size = 434120, upload-time = "2024-10-16T19:44:26.295Z" }, + { url = "https://files.pythonhosted.org/packages/d0/46/4d8e7ba9581416de1c425b8264e2cadd201eb709ec1584c381f3e98f51c1/httptools-0.6.4-cp311-cp311-win_amd64.whl", hash = "sha256:288cd628406cc53f9a541cfaf06041b4c71d751856bab45e3702191f931ccd17", size = 88565, upload-time = "2024-10-16T19:44:29.188Z" }, + { url = "https://files.pythonhosted.org/packages/bb/0e/d0b71465c66b9185f90a091ab36389a7352985fe857e352801c39d6127c8/httptools-0.6.4-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:df017d6c780287d5c80601dafa31f17bddb170232d85c066604d8558683711a2", size = 200683, upload-time = "2024-10-16T19:44:30.175Z" }, + { url = "https://files.pythonhosted.org/packages/e2/b8/412a9bb28d0a8988de3296e01efa0bd62068b33856cdda47fe1b5e890954/httptools-0.6.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:85071a1e8c2d051b507161f6c3e26155b5c790e4e28d7f236422dbacc2a9cc44", size = 104337, upload-time = "2024-10-16T19:44:31.786Z" }, + { url = "https://files.pythonhosted.org/packages/9b/01/6fb20be3196ffdc8eeec4e653bc2a275eca7f36634c86302242c4fbb2760/httptools-0.6.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69422b7f458c5af875922cdb5bd586cc1f1033295aa9ff63ee196a87519ac8e1", size = 508796, upload-time = "2024-10-16T19:44:32.825Z" }, + { url = "https://files.pythonhosted.org/packages/f7/d8/b644c44acc1368938317d76ac991c9bba1166311880bcc0ac297cb9d6bd7/httptools-0.6.4-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16e603a3bff50db08cd578d54f07032ca1631450ceb972c2f834c2b860c28ea2", size = 510837, upload-time = "2024-10-16T19:44:33.974Z" }, + { url = "https://files.pythonhosted.org/packages/52/d8/254d16a31d543073a0e57f1c329ca7378d8924e7e292eda72d0064987486/httptools-0.6.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:ec4f178901fa1834d4a060320d2f3abc5c9e39766953d038f1458cb885f47e81", size = 485289, upload-time = "2024-10-16T19:44:35.111Z" }, + { url = "https://files.pythonhosted.org/packages/5f/3c/4aee161b4b7a971660b8be71a92c24d6c64372c1ab3ae7f366b3680df20f/httptools-0.6.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:f9eb89ecf8b290f2e293325c646a211ff1c2493222798bb80a530c5e7502494f", size = 489779, upload-time = "2024-10-16T19:44:36.253Z" }, + { url = "https://files.pythonhosted.org/packages/12/b7/5cae71a8868e555f3f67a50ee7f673ce36eac970f029c0c5e9d584352961/httptools-0.6.4-cp312-cp312-win_amd64.whl", hash = "sha256:db78cb9ca56b59b016e64b6031eda5653be0589dba2b1b43453f6e8b405a0970", size = 88634, upload-time = "2024-10-16T19:44:37.357Z" }, + { url = "https://files.pythonhosted.org/packages/94/a3/9fe9ad23fd35f7de6b91eeb60848986058bd8b5a5c1e256f5860a160cc3e/httptools-0.6.4-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ade273d7e767d5fae13fa637f4d53b6e961fb7fd93c7797562663f0171c26660", size = 197214, upload-time = "2024-10-16T19:44:38.738Z" }, + { url = 
"https://files.pythonhosted.org/packages/ea/d9/82d5e68bab783b632023f2fa31db20bebb4e89dfc4d2293945fd68484ee4/httptools-0.6.4-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:856f4bc0478ae143bad54a4242fccb1f3f86a6e1be5548fecfd4102061b3a083", size = 102431, upload-time = "2024-10-16T19:44:39.818Z" }, + { url = "https://files.pythonhosted.org/packages/96/c1/cb499655cbdbfb57b577734fde02f6fa0bbc3fe9fb4d87b742b512908dff/httptools-0.6.4-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:322d20ea9cdd1fa98bd6a74b77e2ec5b818abdc3d36695ab402a0de8ef2865a3", size = 473121, upload-time = "2024-10-16T19:44:41.189Z" }, + { url = "https://files.pythonhosted.org/packages/af/71/ee32fd358f8a3bb199b03261f10921716990808a675d8160b5383487a317/httptools-0.6.4-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4d87b29bd4486c0093fc64dea80231f7c7f7eb4dc70ae394d70a495ab8436071", size = 473805, upload-time = "2024-10-16T19:44:42.384Z" }, + { url = "https://files.pythonhosted.org/packages/8a/0a/0d4df132bfca1507114198b766f1737d57580c9ad1cf93c1ff673e3387be/httptools-0.6.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:342dd6946aa6bda4b8f18c734576106b8a31f2fe31492881a9a160ec84ff4bd5", size = 448858, upload-time = "2024-10-16T19:44:43.959Z" }, + { url = "https://files.pythonhosted.org/packages/1e/6a/787004fdef2cabea27bad1073bf6a33f2437b4dbd3b6fb4a9d71172b1c7c/httptools-0.6.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:4b36913ba52008249223042dca46e69967985fb4051951f94357ea681e1f5dc0", size = 452042, upload-time = "2024-10-16T19:44:45.071Z" }, + { url = "https://files.pythonhosted.org/packages/4d/dc/7decab5c404d1d2cdc1bb330b1bf70e83d6af0396fd4fc76fc60c0d522bf/httptools-0.6.4-cp313-cp313-win_amd64.whl", hash = "sha256:28908df1b9bb8187393d5b5db91435ccc9c8e891657f9cbb42a2541b44c82fc8", size = 87682, upload-time = "2024-10-16T19:44:46.46Z" }, ] [[package]] @@ -2074,11 +2209,11 @@ wheels = [ [[package]] name = "humanize" -version = "4.14.0" +version = "4.13.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/b6/43/50033d25ad96a7f3845f40999b4778f753c3901a11808a584fed7c00d9f5/humanize-4.14.0.tar.gz", hash = "sha256:2fa092705ea640d605c435b1ca82b2866a1b601cdf96f076d70b79a855eba90d", size = 82939, upload-time = "2025-10-15T13:04:51.214Z" } +sdist = { url = "https://files.pythonhosted.org/packages/98/1d/3062fcc89ee05a715c0b9bfe6490c00c576314f27ffee3a704122c6fd259/humanize-4.13.0.tar.gz", hash = "sha256:78f79e68f76f0b04d711c4e55d32bebef5be387148862cb1ef83d2b58e7935a0", size = 81884, upload-time = "2025-08-25T09:39:20.04Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/c3/5b/9512c5fb6c8218332b530f13500c6ff5f3ce3342f35e0dd7be9ac3856fd3/humanize-4.14.0-py3-none-any.whl", hash = "sha256:d57701248d040ad456092820e6fde56c930f17749956ac47f4f655c0c547bfff", size = 132092, upload-time = "2025-10-15T13:04:49.404Z" }, + { url = "https://files.pythonhosted.org/packages/1e/c7/316e7ca04d26695ef0635dc81683d628350810eb8e9b2299fc08ba49f366/humanize-4.13.0-py3-none-any.whl", hash = "sha256:b810820b31891813b1673e8fec7f1ed3312061eab2f26e3fa192c393d11ed25f", size = 128869, upload-time = "2025-08-25T09:39:18.54Z" }, ] [[package]] @@ -2101,11 +2236,11 @@ wheels = [ [[package]] name = "idna" -version = "3.11" +version = "3.10" source = { registry = "https://pypi.org/simple" } -sdist = { url = 
"https://files.pythonhosted.org/packages/6f/6d/0703ccc57f3a7233505399edb88de3cbd678da106337b9fcde432b65ed60/idna-3.11.tar.gz", hash = "sha256:795dafcc9c04ed0c1fb032c2aa73654d8e8c5023a7df64a53f39190ada629902", size = 194582, upload-time = "2025-10-12T14:55:20.501Z" } +sdist = { url = "https://files.pythonhosted.org/packages/f1/70/7703c29685631f5a7590aa73f1f1d3fa9a380e654b86af429e0934a32f7d/idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9", size = 190490, upload-time = "2024-09-15T18:07:39.745Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" }, + { url = "https://files.pythonhosted.org/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3", size = 70442, upload-time = "2024-09-15T18:07:37.964Z" }, ] [[package]] @@ -2119,60 +2254,60 @@ wheels = [ [[package]] name = "ijson" -version = "3.4.0.post0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/2d/30/7ab4b9e88e7946f6beef419f74edcc541df3ea562c7882257b4eaa82417d/ijson-3.4.0.post0.tar.gz", hash = "sha256:9aa02dc70bb245670a6ca7fba737b992aeeb4895360980622f7e568dbf23e41e", size = 67216, upload-time = "2025-10-10T05:29:25.62Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/a7/ac/3d57249d4acba66a33eaef794edb5b2a2222ca449ae08800f8abe9286645/ijson-3.4.0.post0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:0b473112e72c0c506da425da3278367b6680f340ecc093084693a1e819d28435", size = 88278, upload-time = "2025-10-10T05:27:55.403Z" }, - { url = "https://files.pythonhosted.org/packages/12/fb/2d068d23d1a665f500282ceb6f2473952a95fc7107d739fd629b4ab41959/ijson-3.4.0.post0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:043f9b7cf9cc744263a78175e769947733710d2412d25180df44b1086b23ebd5", size = 59898, upload-time = "2025-10-10T05:27:56.361Z" }, - { url = "https://files.pythonhosted.org/packages/26/3d/8b14589dfb0e5dbb7bcf9063e53d3617c041cf315ff3dfa60945382237ce/ijson-3.4.0.post0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b55e49045f4c8031f3673f56662fd828dc9e8d65bd3b03a9420dda0d370e64ba", size = 59945, upload-time = "2025-10-10T05:27:57.581Z" }, - { url = "https://files.pythonhosted.org/packages/77/57/086a75094397d4b7584698a540a279689e12905271af78cdfc903bf9eaf8/ijson-3.4.0.post0-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:11f13b73194ea2a5a8b4a2863f25b0b4624311f10db3a75747b510c4958179b0", size = 131318, upload-time = "2025-10-10T05:27:58.453Z" }, - { url = "https://files.pythonhosted.org/packages/df/35/7f61e9ce4a9ff1306ec581eb851f8a660439126d92ee595c6dc8084aac97/ijson-3.4.0.post0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:659acb2843433e080c271ecedf7d19c71adde1ee5274fc7faa2fec0a793f9f1c", size = 137990, upload-time = "2025-10-10T05:27:59.328Z" }, - { url = "https://files.pythonhosted.org/packages/59/bf/590bbc3c3566adce5e2f43ba5894520cbaf19a3e7f38c1250926ba67eee4/ijson-3.4.0.post0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:deda4cfcaafa72ca3fa845350045b1d0fef9364ec9f413241bb46988afbe6ee6", size = 
134416, upload-time = "2025-10-10T05:28:00.317Z" }, - { url = "https://files.pythonhosted.org/packages/24/c1/fb719049851979df71f3e039d6f1a565d349c9cb1b29c0f8775d9db141b4/ijson-3.4.0.post0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:47352563e8c594360bacee2e0753e97025f0861234722d02faace62b1b6d2b2a", size = 138034, upload-time = "2025-10-10T05:28:01.627Z" }, - { url = "https://files.pythonhosted.org/packages/10/ce/ccda891f572876aaf2c43f0b2079e31d5b476c3ae53196187eab1a788eff/ijson-3.4.0.post0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:5a48b9486242d1295abe7fd0fbb6308867da5ca3f69b55c77922a93c2b6847aa", size = 132510, upload-time = "2025-10-10T05:28:03.141Z" }, - { url = "https://files.pythonhosted.org/packages/11/b5/ca8e64ab7cf5252f358e467be767630f085b5bbcd3c04333a3a5f36c3dd3/ijson-3.4.0.post0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9c0886234d1fae15cf4581a430bdba03d79251c1ab3b07e30aa31b13ef28d01c", size = 134907, upload-time = "2025-10-10T05:28:04.438Z" }, - { url = "https://files.pythonhosted.org/packages/93/14/63a4d5dc548690f29f0c2fc9cabd5ecbb37532547439c05f5b3b9ce73021/ijson-3.4.0.post0-cp311-cp311-win32.whl", hash = "sha256:fecae19b5187d92900c73debb3a979b0b3290a53f85df1f8f3c5ba7d1e9fb9cb", size = 52006, upload-time = "2025-10-10T05:28:05.424Z" }, - { url = "https://files.pythonhosted.org/packages/fa/bf/932740899e572a97f9be0c6cd64ebda557eae7701ac216fc284aba21786d/ijson-3.4.0.post0-cp311-cp311-win_amd64.whl", hash = "sha256:b39dbf87071f23a23c8077eea2ae7cfeeca9ff9ffec722dfc8b5f352e4dd729c", size = 54410, upload-time = "2025-10-10T05:28:06.264Z" }, - { url = "https://files.pythonhosted.org/packages/7d/fe/3b6af0025288e769dbfa30485dae1b3bd3f33f00390f3ee532cbb1c33e9b/ijson-3.4.0.post0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:b607a500fca26101be47d2baf7cddb457b819ab60a75ce51ed1092a40da8b2f9", size = 87847, upload-time = "2025-10-10T05:28:07.229Z" }, - { url = "https://files.pythonhosted.org/packages/6e/a5/95ee2ca82f3b1a57892452f6e5087607d56c620beb8ce625475194568698/ijson-3.4.0.post0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:4827d9874a6a81625412c59f7ca979a84d01f7f6bfb3c6d4dc4c46d0382b14e0", size = 59815, upload-time = "2025-10-10T05:28:08.448Z" }, - { url = "https://files.pythonhosted.org/packages/51/8d/5a704ab3c17c55c21c86423458db8610626ca99cc9086a74dfeb7ee9054c/ijson-3.4.0.post0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:d4d4afec780881edb2a0d2dd40b1cdbe246e630022d5192f266172a0307986a7", size = 59648, upload-time = "2025-10-10T05:28:09.307Z" }, - { url = "https://files.pythonhosted.org/packages/25/56/ca5d6ca145d007f30b44e747f3c163bc08710ce004af0deaad4a2301339b/ijson-3.4.0.post0-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:432fb60ffb952926f9438e0539011e2dfcd108f8426ee826ccc6173308c3ff2c", size = 138279, upload-time = "2025-10-10T05:28:10.489Z" }, - { url = "https://files.pythonhosted.org/packages/c3/d3/22e3cc806fcdda7ad4c8482ed74db7a017d4a1d49b4300c7bc07052fb561/ijson-3.4.0.post0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:54a0e3e05d9a0c95ecba73d9579f146cf6d5c5874116c849dba2d39a5f30380e", size = 149110, upload-time = "2025-10-10T05:28:12.263Z" }, - { url = "https://files.pythonhosted.org/packages/3e/04/efb30f413648b9267f5a33920ac124d7ebef3bc4063af8f6ffc8ca11ddcb/ijson-3.4.0.post0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = 
"sha256:05807edc0bcbd222dc6ea32a2b897f0c81dc7f12c8580148bc82f6d7f5e7ec7b", size = 149026, upload-time = "2025-10-10T05:28:13.557Z" }, - { url = "https://files.pythonhosted.org/packages/2d/cf/481165f7046ade32488719300a3994a437020bc41cfbb54334356348f513/ijson-3.4.0.post0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a5269af16f715855d9864937f9dd5c348ca1ac49cee6a2c7a1b7091c159e874f", size = 150012, upload-time = "2025-10-10T05:28:14.859Z" }, - { url = "https://files.pythonhosted.org/packages/0f/24/642e3289917ecf860386e26dfde775f9962d26ab7f6c2e364ed3ca3c25d8/ijson-3.4.0.post0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:b200df83c901f5bfa416d069ac71077aa1608f854a4c50df1b84ced560e9c9ec", size = 142193, upload-time = "2025-10-10T05:28:16.131Z" }, - { url = "https://files.pythonhosted.org/packages/0f/f5/fd2f038abe95e553e1c3ee207cda19db9196eb416e63c7c89699a8cf0db7/ijson-3.4.0.post0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:6458bd8e679cdff459a0a5e555b107c3bbacb1f382da3fe0f40e392871eb518d", size = 150904, upload-time = "2025-10-10T05:28:17.401Z" }, - { url = "https://files.pythonhosted.org/packages/49/35/24259d22519987928164e6cb8fe3486e1df0899b2999ada4b0498639b463/ijson-3.4.0.post0-cp312-cp312-win32.whl", hash = "sha256:55f7f656b5986326c978cbb3a9eea9e33f3ef6ecc4535b38f1d452c731da39ab", size = 52358, upload-time = "2025-10-10T05:28:18.315Z" }, - { url = "https://files.pythonhosted.org/packages/a1/2b/6f7ade27a8ff5758fc41006dadd2de01730def84fe3e60553b329c59e0d4/ijson-3.4.0.post0-cp312-cp312-win_amd64.whl", hash = "sha256:e15833dcf6f6d188fdc624a31cd0520c3ba21b6855dc304bc7c1a8aeca02d4ac", size = 54789, upload-time = "2025-10-10T05:28:19.552Z" }, - { url = "https://files.pythonhosted.org/packages/1b/20/aaec6977f9d538bbadd760c7fa0f6a0937742abdcc920ec6478a8576e55f/ijson-3.4.0.post0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:114ed248166ac06377e87a245a158d6b98019d2bdd3bb93995718e0bd996154f", size = 87863, upload-time = "2025-10-10T05:28:20.786Z" }, - { url = "https://files.pythonhosted.org/packages/5b/29/06bf56a866e2fe21453a1ad8f3a5d7bca3c723f73d96329656dfee969783/ijson-3.4.0.post0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ffb21203736b08fe27cb30df6a4f802fafb9ef7646c5ff7ef79569b63ea76c57", size = 59806, upload-time = "2025-10-10T05:28:21.596Z" }, - { url = "https://files.pythonhosted.org/packages/ba/ae/e1d0fda91ba7a444b75f0d60cb845fdb1f55d3111351529dcbf4b1c276fe/ijson-3.4.0.post0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:07f20ecd748602ac7f18c617637e53bd73ded7f3b22260bba3abe401a7fc284e", size = 59643, upload-time = "2025-10-10T05:28:22.45Z" }, - { url = "https://files.pythonhosted.org/packages/4d/24/5a24533be2726396cc1724dc237bada09b19715b5bfb0e7b9400db0901ad/ijson-3.4.0.post0-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:27aa193d47ffc6bc4e45453896ad98fb089a367e8283b973f1fe5c0198b60b4e", size = 138082, upload-time = "2025-10-10T05:28:23.319Z" }, - { url = "https://files.pythonhosted.org/packages/05/60/026c3efcec23c329657e878cbc0a9a25b42e7eb3971e8c2377cb3284e2b7/ijson-3.4.0.post0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ccddb2894eb7af162ba43b9475ac5825d15d568832f82eb8783036e5d2aebd42", size = 149145, upload-time = "2025-10-10T05:28:24.279Z" }, - { url = 
"https://files.pythonhosted.org/packages/ed/c2/036499909b7a1bc0bcd85305e4348ad171aeb9df57581287533bdb3497e9/ijson-3.4.0.post0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:61ab0b8c5bf707201dc67e02c116f4b6545c4afd7feb2264b989d242d9c4348a", size = 149046, upload-time = "2025-10-10T05:28:25.186Z" }, - { url = "https://files.pythonhosted.org/packages/ba/75/e7736073ad96867c129f9e799e3e65086badd89dbf3911f76d9b3bf8a115/ijson-3.4.0.post0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:254cfb8c124af68327a0e7a49b50bbdacafd87c4690a3d62c96eb01020a685ef", size = 150356, upload-time = "2025-10-10T05:28:26.135Z" }, - { url = "https://files.pythonhosted.org/packages/9d/1b/1c1575d2cda136985561fcf774fe6c54412cd0fa08005342015af0403193/ijson-3.4.0.post0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:04ac9ca54db20f82aeda6379b5f4f6112fdb150d09ebce04affeab98a17b4ed3", size = 142322, upload-time = "2025-10-10T05:28:27.125Z" }, - { url = "https://files.pythonhosted.org/packages/28/4d/aba9871feb624df8494435d1a9ddc7b6a4f782c6044bfc0d770a4b59f145/ijson-3.4.0.post0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a603d7474bf35e7b3a8e49c8dabfc4751841931301adff3f3318171c4e407f32", size = 151386, upload-time = "2025-10-10T05:28:28.274Z" }, - { url = "https://files.pythonhosted.org/packages/3f/9a/791baa83895fb6e492bce2c7a0ea6427b6a41fe854349e62a37d0c9deaf0/ijson-3.4.0.post0-cp313-cp313-win32.whl", hash = "sha256:ec5bb1520cb212ebead7dba048bb9b70552c3440584f83b01b0abc96862e2a09", size = 52352, upload-time = "2025-10-10T05:28:29.191Z" }, - { url = "https://files.pythonhosted.org/packages/a9/0c/061f51493e1da21116d74ee8f6a6b9ae06ca5fa2eb53c3b38b64f9a9a5ae/ijson-3.4.0.post0-cp313-cp313-win_amd64.whl", hash = "sha256:3505dff18bdeb8b171eb28af6df34857e2be80dc01e2e3b624e77215ad58897f", size = 54783, upload-time = "2025-10-10T05:28:30.048Z" }, - { url = "https://files.pythonhosted.org/packages/c7/89/4344e176f2c5f5ef3251c9bfa4ddd5b4cf3f9601fd6ec3f677a3ba0b9c71/ijson-3.4.0.post0-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:45a0b1c833ed2620eaf8da958f06ac8351c59e5e470e078400d23814670ed708", size = 92342, upload-time = "2025-10-10T05:28:31.389Z" }, - { url = "https://files.pythonhosted.org/packages/d4/b1/85012c586a6645f9fb8bfa3ef62ed2f303c8d73fc7c2f705111582925980/ijson-3.4.0.post0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:7809ec8c8f40228edaaa089f33e811dff4c5b8509702652870d3f286c9682e27", size = 62028, upload-time = "2025-10-10T05:28:32.849Z" }, - { url = "https://files.pythonhosted.org/packages/65/ea/7b7e2815c101d78b33e74d64ddb70cccc377afccd5dda76e566ed3fcb56f/ijson-3.4.0.post0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:cf4a34c2cfe852aee75c89c05b0a4531c49dc0be27eeed221afd6fbf9c3e149c", size = 61773, upload-time = "2025-10-10T05:28:34.016Z" }, - { url = "https://files.pythonhosted.org/packages/59/7d/2175e599cb77a64f528629bad3ce95dfdf2aa6171d313c1fc00bbfaf0d22/ijson-3.4.0.post0-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:a39d5d36067604b26b78de70b8951c90e9272450642661fe531a8f7a6936a7fa", size = 198562, upload-time = "2025-10-10T05:28:34.878Z" }, - { url = "https://files.pythonhosted.org/packages/13/97/82247c501c92405bb2fc44ab5efb497335bcb9cf0f5d3a0b04a800737bd8/ijson-3.4.0.post0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:83fc738d81c9ea686b452996110b8a6678296c481e0546857db24785bff8da92", size = 216212, upload-time = 
"2025-10-10T05:28:36.208Z" }, - { url = "https://files.pythonhosted.org/packages/95/ca/b956f507bb02e05ce109fd11ab6a2c054f8b686cc5affe41afe50630984d/ijson-3.4.0.post0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b2a81aee91633868f5b40280e2523f7c5392e920a5082f47c5e991e516b483f6", size = 206618, upload-time = "2025-10-10T05:28:37.243Z" }, - { url = "https://files.pythonhosted.org/packages/3e/12/e827840ab81d86a9882e499097934df53294f05155f1acfcb9a211ac1142/ijson-3.4.0.post0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:56169e298c5a2e7196aaa55da78ddc2415876a74fe6304f81b1eb0d3273346f7", size = 210689, upload-time = "2025-10-10T05:28:38.252Z" }, - { url = "https://files.pythonhosted.org/packages/1b/3b/59238d9422c31a4aefa22ebeb8e599e706158a0ab03669ef623be77a499a/ijson-3.4.0.post0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:eeb9540f0b1a575cbb5968166706946458f98c16e7accc6f2fe71efa29864241", size = 199927, upload-time = "2025-10-10T05:28:39.233Z" }, - { url = "https://files.pythonhosted.org/packages/b6/0f/ec01c36c128c37edb8a5ae8f3de3256009f886338d459210dfe121ee4ba9/ijson-3.4.0.post0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:ba3478ff0bb49d7ba88783f491a99b6e3fa929c930ab062d2bb7837e6a38fe88", size = 204455, upload-time = "2025-10-10T05:28:40.644Z" }, - { url = "https://files.pythonhosted.org/packages/c8/cf/5560e1db96c6d10a5313be76bf5a1754266cbfb5cc13ff64d107829e07b1/ijson-3.4.0.post0-cp313-cp313t-win32.whl", hash = "sha256:b005ce84e82f28b00bf777a464833465dfe3efa43a0a26c77b5ac40723e1a728", size = 54566, upload-time = "2025-10-10T05:28:41.663Z" }, - { url = "https://files.pythonhosted.org/packages/22/5a/cbb69144c3b25dd56f5421ff7dc0cf3051355579062024772518e4f4b3c5/ijson-3.4.0.post0-cp313-cp313t-win_amd64.whl", hash = "sha256:fe9c84c9b1c8798afa407be1cea1603401d99bfc7c34497e19f4f5e5ddc9b441", size = 57298, upload-time = "2025-10-10T05:28:42.881Z" }, - { url = "https://files.pythonhosted.org/packages/43/66/27cfcea16e85b95e33814eae2052dab187206b8820cdd90aa39d32ffb441/ijson-3.4.0.post0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:add9242f886eae844a7410b84aee2bbb8bdc83c624f227cb1fdb2d0476a96cb1", size = 57029, upload-time = "2025-10-10T05:29:19.733Z" }, - { url = "https://files.pythonhosted.org/packages/b8/1b/df3f1561c6629241fb2f8bd7ea1da14e3c2dd16fe9d7cbc97120870ed09c/ijson-3.4.0.post0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:69718ed41710dfcaa7564b0af42abc05875d4f7aaa24627c808867ef32634bc7", size = 56523, upload-time = "2025-10-10T05:29:20.641Z" }, - { url = "https://files.pythonhosted.org/packages/39/0a/6c6a3221ddecf62b696fde0e864415237e05b9a36ab6685a606b8fb3b5a2/ijson-3.4.0.post0-pp311-pypy311_pp73-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:636b6eca96c6c43c04629c6b37fad0181662eaacf9877c71c698485637f752f9", size = 70546, upload-time = "2025-10-10T05:29:21.526Z" }, - { url = "https://files.pythonhosted.org/packages/42/cb/edf69755e86a3a9f8b418efd60239cb308af46c7c8e12f869423f51c9851/ijson-3.4.0.post0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:eb5e73028f6e63d27b3d286069fe350ed80a4ccc493b022b590fea4bb086710d", size = 70532, upload-time = "2025-10-10T05:29:22.718Z" }, - { url = "https://files.pythonhosted.org/packages/96/7e/c8730ea39b8712622cd5a1bdff676098208400e37bb92052ba52f93e2aa1/ijson-3.4.0.post0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = 
"sha256:461acf4320219459dabe5ed90a45cb86c9ba8cc6d6db9dad0d9427d42f57794c", size = 67927, upload-time = "2025-10-10T05:29:23.596Z" }, - { url = "https://files.pythonhosted.org/packages/ec/f2/53b6e9bdd2a91202066764eaa74b572ba4dede0fe47a5a26f4de34b7541a/ijson-3.4.0.post0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:a0fedf09c0f6ffa2a99e7e7fd9c5f3caf74e655c1ee015a0797383e99382ebc3", size = 54657, upload-time = "2025-10-10T05:29:24.482Z" }, +version = "3.4.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/a3/4f/1cfeada63f5fce87536651268ddf5cca79b8b4bbb457aee4e45777964a0a/ijson-3.4.0.tar.gz", hash = "sha256:5f74dcbad9d592c428d3ca3957f7115a42689ee7ee941458860900236ae9bb13", size = 65782, upload-time = "2025-05-08T02:37:20.135Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/1a/0d/3e2998f4d7b7d2db2d511e4f0cf9127b6e2140c325c3cb77be46ae46ff1d/ijson-3.4.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9e369bf5a173ca51846c243002ad8025d32032532523b06510881ecc8723ee54", size = 87643, upload-time = "2025-05-08T02:35:35.693Z" }, + { url = "https://files.pythonhosted.org/packages/e9/7b/afef2b08af2fee5ead65fcd972fadc3e31f9ae2b517fe2c378d50a9bf79b/ijson-3.4.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:26e7da0a3cd2a56a1fde1b34231867693f21c528b683856f6691e95f9f39caec", size = 59260, upload-time = "2025-05-08T02:35:37.166Z" }, + { url = "https://files.pythonhosted.org/packages/da/4a/39f583a2a13096f5063028bb767622f09cafc9ec254c193deee6c80af59f/ijson-3.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1c28c7f604729be22aa453e604e9617b665fa0c24cd25f9f47a970e8130c571a", size = 59311, upload-time = "2025-05-08T02:35:38.538Z" }, + { url = "https://files.pythonhosted.org/packages/3c/58/5b80efd54b093e479c98d14b31d7794267281f6a8729f2c94fbfab661029/ijson-3.4.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bed8bcb84d3468940f97869da323ba09ae3e6b950df11dea9b62e2b231ca1e3", size = 136125, upload-time = "2025-05-08T02:35:39.976Z" }, + { url = "https://files.pythonhosted.org/packages/e5/f5/f37659b1647ecc3992216277cd8a45e2194e84e8818178f77c99e1d18463/ijson-3.4.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:296bc824f4088f2af814aaf973b0435bc887ce3d9f517b1577cc4e7d1afb1cb7", size = 130699, upload-time = "2025-05-08T02:35:41.483Z" }, + { url = "https://files.pythonhosted.org/packages/ee/2f/4c580ac4bb5eda059b672ad0a05e4bafdae5182a6ec6ab43546763dafa91/ijson-3.4.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8145f8f40617b6a8aa24e28559d0adc8b889e56a203725226a8a60fa3501073f", size = 134963, upload-time = "2025-05-08T02:35:43.017Z" }, + { url = "https://files.pythonhosted.org/packages/6d/9e/64ec39718609faab6ed6e1ceb44f9c35d71210ad9c87fff477c03503e8f8/ijson-3.4.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:b674a97bd503ea21bc85103e06b6493b1b2a12da3372950f53e1c664566a33a4", size = 137405, upload-time = "2025-05-08T02:35:44.618Z" }, + { url = "https://files.pythonhosted.org/packages/71/b2/f0bf0e4a0962845597996de6de59c0078bc03a1f899e03908220039f4cf6/ijson-3.4.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:8bc731cf1c3282b021d3407a601a5a327613da9ad3c4cecb1123232623ae1826", size = 131861, upload-time = "2025-05-08T02:35:46.22Z" }, + { url = "https://files.pythonhosted.org/packages/17/83/4a2e3611e2b4842b413ec84d2e54adea55ab52e4408ea0f1b1b927e19536/ijson-3.4.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = 
"sha256:42ace5e940e0cf58c9de72f688d6829ddd815096d07927ee7e77df2648006365", size = 134297, upload-time = "2025-05-08T02:35:47.401Z" }, + { url = "https://files.pythonhosted.org/packages/38/75/2d332911ac765b44cd7da0cb2b06143521ad5e31dfcc8d8587e6e6168bc8/ijson-3.4.0-cp311-cp311-win32.whl", hash = "sha256:5be39a0df4cd3f02b304382ea8885391900ac62e95888af47525a287c50005e9", size = 51161, upload-time = "2025-05-08T02:35:49.164Z" }, + { url = "https://files.pythonhosted.org/packages/7d/ba/4ad571f9f7fcf5906b26e757b130c1713c5f0198a1e59568f05d53a0816c/ijson-3.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:0b1be1781792291e70d2e177acf564ec672a7907ba74f313583bdf39fe81f9b7", size = 53710, upload-time = "2025-05-08T02:35:50.323Z" }, + { url = "https://files.pythonhosted.org/packages/f8/ec/317ee5b2d13e50448833ead3aa906659a32b376191f6abc2a7c6112d2b27/ijson-3.4.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:956b148f88259a80a9027ffbe2d91705fae0c004fbfba3e5a24028fbe72311a9", size = 87212, upload-time = "2025-05-08T02:35:51.835Z" }, + { url = "https://files.pythonhosted.org/packages/f8/43/b06c96ced30cacecc5d518f89b0fd1c98c294a30ff88848b70ed7b7f72a1/ijson-3.4.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:06b89960f5c721106394c7fba5760b3f67c515b8eb7d80f612388f5eca2f4621", size = 59175, upload-time = "2025-05-08T02:35:52.988Z" }, + { url = "https://files.pythonhosted.org/packages/e9/df/b4aeafb7ecde463130840ee9be36130823ec94a00525049bf700883378b8/ijson-3.4.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:9a0bb591cf250dd7e9dfab69d634745a7f3272d31cfe879f9156e0a081fd97ee", size = 59011, upload-time = "2025-05-08T02:35:54.394Z" }, + { url = "https://files.pythonhosted.org/packages/e3/7c/a80b8e361641609507f62022089626d4b8067f0826f51e1c09e4ba86eba8/ijson-3.4.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:72e92de999977f4c6b660ffcf2b8d59604ccd531edcbfde05b642baf283e0de8", size = 146094, upload-time = "2025-05-08T02:35:55.601Z" }, + { url = "https://files.pythonhosted.org/packages/01/44/fa416347b9a802e3646c6ff377fc3278bd7d6106e17beb339514b6a3184e/ijson-3.4.0-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9e9602157a5b869d44b6896e64f502c712a312fcde044c2e586fccb85d3e316e", size = 137903, upload-time = "2025-05-08T02:35:56.814Z" }, + { url = "https://files.pythonhosted.org/packages/24/c6/41a9ad4d42df50ff6e70fdce79b034f09b914802737ebbdc141153d8d791/ijson-3.4.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b1e83660edb931a425b7ff662eb49db1f10d30ca6d4d350e5630edbed098bc01", size = 148339, upload-time = "2025-05-08T02:35:58.595Z" }, + { url = "https://files.pythonhosted.org/packages/5f/6f/7d01efda415b8502dce67e067ed9e8a124f53e763002c02207e542e1a2f1/ijson-3.4.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:49bf8eac1c7b7913073865a859c215488461f7591b4fa6a33c14b51cb73659d0", size = 149383, upload-time = "2025-05-08T02:36:00.197Z" }, + { url = "https://files.pythonhosted.org/packages/95/6c/0d67024b9ecb57916c5e5ab0350251c9fe2f86dc9c8ca2b605c194bdad6a/ijson-3.4.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:160b09273cb42019f1811469508b0a057d19f26434d44752bde6f281da6d3f32", size = 141580, upload-time = "2025-05-08T02:36:01.998Z" }, + { url = "https://files.pythonhosted.org/packages/06/43/e10edcc1c6a3b619294de835e7678bfb3a1b8a75955f3689fd66a1e9e7b4/ijson-3.4.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:2019ff4e6f354aa00c76c8591bd450899111c61f2354ad55cc127e2ce2492c44", size = 150280, upload-time = 
"2025-05-08T02:36:03.926Z" }, + { url = "https://files.pythonhosted.org/packages/07/84/1cbeee8e8190a1ebe6926569a92cf1fa80ddb380c129beb6f86559e1bb24/ijson-3.4.0-cp312-cp312-win32.whl", hash = "sha256:931c007bf6bb8330705429989b2deed6838c22b63358a330bf362b6e458ba0bf", size = 51512, upload-time = "2025-05-08T02:36:05.595Z" }, + { url = "https://files.pythonhosted.org/packages/66/13/530802bc391c95be6fe9f96e9aa427d94067e7c0b7da7a9092344dc44c4b/ijson-3.4.0-cp312-cp312-win_amd64.whl", hash = "sha256:71523f2b64cb856a820223e94d23e88369f193017ecc789bb4de198cc9d349eb", size = 54081, upload-time = "2025-05-08T02:36:07.099Z" }, + { url = "https://files.pythonhosted.org/packages/77/b3/b1d2eb2745e5204ec7a25365a6deb7868576214feb5e109bce368fb692c9/ijson-3.4.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:e8d96f88d75196a61c9d9443de2b72c2d4a7ba9456ff117b57ae3bba23a54256", size = 87216, upload-time = "2025-05-08T02:36:08.414Z" }, + { url = "https://files.pythonhosted.org/packages/b1/cd/cd6d340087617f8cc9bedbb21d974542fe2f160ed0126b8288d3499a469b/ijson-3.4.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:c45906ce2c1d3b62f15645476fc3a6ca279549127f01662a39ca5ed334a00cf9", size = 59170, upload-time = "2025-05-08T02:36:09.604Z" }, + { url = "https://files.pythonhosted.org/packages/3e/4d/32d3a9903b488d3306e3c8288f6ee4217d2eea82728261db03a1045eb5d1/ijson-3.4.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:4ab4bc2119b35c4363ea49f29563612237cae9413d2fbe54b223be098b97bc9e", size = 59013, upload-time = "2025-05-08T02:36:10.696Z" }, + { url = "https://files.pythonhosted.org/packages/d5/c8/db15465ab4b0b477cee5964c8bfc94bf8c45af8e27a23e1ad78d1926e587/ijson-3.4.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:97b0a9b5a15e61dfb1f14921ea4e0dba39f3a650df6d8f444ddbc2b19b479ff1", size = 146564, upload-time = "2025-05-08T02:36:11.916Z" }, + { url = "https://files.pythonhosted.org/packages/c4/d8/0755545bc122473a9a434ab90e0f378780e603d75495b1ca3872de757873/ijson-3.4.0-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e3047bb994dabedf11de11076ed1147a307924b6e5e2df6784fb2599c4ad8c60", size = 137917, upload-time = "2025-05-08T02:36:13.532Z" }, + { url = "https://files.pythonhosted.org/packages/d0/c6/aeb89c8939ebe3f534af26c8c88000c5e870dbb6ae33644c21a4531f87d2/ijson-3.4.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:68c83161b052e9f5dc8191acbc862bb1e63f8a35344cb5cd0db1afd3afd487a6", size = 148897, upload-time = "2025-05-08T02:36:14.813Z" }, + { url = "https://files.pythonhosted.org/packages/be/0e/7ef6e9b372106f2682a4a32b3c65bf86bb471a1670e4dac242faee4a7d3f/ijson-3.4.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:1eebd9b6c20eb1dffde0ae1f0fbb4aeacec2eb7b89adb5c7c0449fc9fd742760", size = 149711, upload-time = "2025-05-08T02:36:16.476Z" }, + { url = "https://files.pythonhosted.org/packages/d1/5d/9841c3ed75bcdabf19b3202de5f862a9c9c86ce5c7c9d95fa32347fdbf5f/ijson-3.4.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:13fb6d5c35192c541421f3ee81239d91fc15a8d8f26c869250f941f4b346a86c", size = 141691, upload-time = "2025-05-08T02:36:18.044Z" }, + { url = "https://files.pythonhosted.org/packages/d5/d2/ce74e17218dba292e9be10a44ed0c75439f7958cdd263adb0b5b92d012d5/ijson-3.4.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:28b7196ff7b37c4897c547a28fa4876919696739fc91c1f347651c9736877c69", size = 150738, upload-time = "2025-05-08T02:36:19.483Z" }, + { url = 
"https://files.pythonhosted.org/packages/4e/43/dcc480f94453b1075c9911d4755b823f3ace275761bb37b40139f22109ca/ijson-3.4.0-cp313-cp313-win32.whl", hash = "sha256:3c2691d2da42629522140f77b99587d6f5010440d58d36616f33bc7bdc830cc3", size = 51512, upload-time = "2025-05-08T02:36:20.99Z" }, + { url = "https://files.pythonhosted.org/packages/35/dd/d8c5f15efd85ba51e6e11451ebe23d779361a9ec0d192064c2a8c3cdfcb8/ijson-3.4.0-cp313-cp313-win_amd64.whl", hash = "sha256:c4554718c275a044c47eb3874f78f2c939f300215d9031e785a6711cc51b83fc", size = 54074, upload-time = "2025-05-08T02:36:22.075Z" }, + { url = "https://files.pythonhosted.org/packages/79/73/24ad8cd106203419c4d22bed627e02e281d66b83e91bc206a371893d0486/ijson-3.4.0-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:915a65e3f3c0eee2ea937bc62aaedb6c14cc1e8f0bb9f3f4fb5a9e2bbfa4b480", size = 91694, upload-time = "2025-05-08T02:36:23.289Z" }, + { url = "https://files.pythonhosted.org/packages/17/2d/f7f680984bcb7324a46a4c2df3bd73cf70faef0acfeb85a3f811abdfd590/ijson-3.4.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:afbe9748707684b6c5adc295c4fdcf27765b300aec4d484e14a13dca4e5c0afa", size = 61390, upload-time = "2025-05-08T02:36:24.42Z" }, + { url = "https://files.pythonhosted.org/packages/09/a1/f3ca7bab86f95bdb82494739e71d271410dfefce4590785d511669127145/ijson-3.4.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:d823f8f321b4d8d5fa020d0a84f089fec5d52b7c0762430476d9f8bf95bbc1a9", size = 61140, upload-time = "2025-05-08T02:36:26.708Z" }, + { url = "https://files.pythonhosted.org/packages/51/79/dd340df3d4fc7771c95df29997956b92ed0570fe7b616d1792fea9ad93f2/ijson-3.4.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b8a0a2c54f3becf76881188beefd98b484b1d3bd005769a740d5b433b089fa23", size = 214739, upload-time = "2025-05-08T02:36:27.973Z" }, + { url = "https://files.pythonhosted.org/packages/59/f0/85380b7f51d1f5fb7065d76a7b623e02feca920cc678d329b2eccc0011e0/ijson-3.4.0-cp313-cp313t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ced19a83ab09afa16257a0b15bc1aa888dbc555cb754be09d375c7f8d41051f2", size = 198338, upload-time = "2025-05-08T02:36:29.496Z" }, + { url = "https://files.pythonhosted.org/packages/a5/cd/313264cf2ec42e0f01d198c49deb7b6fadeb793b3685e20e738eb6b3fa13/ijson-3.4.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8100f9885eff1f38d35cef80ef759a1bbf5fc946349afa681bd7d0e681b7f1a0", size = 207515, upload-time = "2025-05-08T02:36:30.981Z" }, + { url = "https://files.pythonhosted.org/packages/12/94/bf14457aa87ea32641f2db577c9188ef4e4ae373478afef422b31fc7f309/ijson-3.4.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:d7bcc3f7f21b0f703031ecd15209b1284ea51b2a329d66074b5261de3916c1eb", size = 210081, upload-time = "2025-05-08T02:36:32.403Z" }, + { url = "https://files.pythonhosted.org/packages/7d/b4/eaee39e290e40e52d665db9bd1492cfdce86bd1e47948e0440db209c6023/ijson-3.4.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:2dcb190227b09dd171bdcbfe4720fddd574933c66314818dfb3960c8a6246a77", size = 199253, upload-time = "2025-05-08T02:36:33.861Z" }, + { url = "https://files.pythonhosted.org/packages/c5/9c/e09c7b9ac720a703ab115b221b819f149ed54c974edfff623c1e925e57da/ijson-3.4.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:eda4cfb1d49c6073a901735aaa62e39cb7ab47f3ad7bb184862562f776f1fa8a", size = 203816, upload-time = "2025-05-08T02:36:35.348Z" }, + { url = 
"https://files.pythonhosted.org/packages/7c/14/acd304f412e32d16a2c12182b9d78206bb0ae35354d35664f45db05c1b3b/ijson-3.4.0-cp313-cp313t-win32.whl", hash = "sha256:0772638efa1f3b72b51736833404f1cbd2f5beeb9c1a3d392e7d385b9160cba7", size = 53760, upload-time = "2025-05-08T02:36:36.608Z" }, + { url = "https://files.pythonhosted.org/packages/2f/24/93dd0a467191590a5ed1fc2b35842bca9d09900d001e00b0b497c0208ef6/ijson-3.4.0-cp313-cp313t-win_amd64.whl", hash = "sha256:3d8a0d67f36e4fb97c61a724456ef0791504b16ce6f74917a31c2e92309bbeb9", size = 56948, upload-time = "2025-05-08T02:36:37.849Z" }, + { url = "https://files.pythonhosted.org/packages/a3/9b/0bc0594d357600c03c3b5a3a34043d764fc3ad3f0757d2f3aae5b28f6c1c/ijson-3.4.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:cdc8c5ca0eec789ed99db29c68012dda05027af0860bb360afd28d825238d69d", size = 56483, upload-time = "2025-05-08T02:37:03.274Z" }, + { url = "https://files.pythonhosted.org/packages/00/1f/506cf2574673da1adcc8a794ebb85bf857cabe6294523978637e646814de/ijson-3.4.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:8e6b44b6ec45d5b1a0ee9d97e0e65ab7f62258727004cbbe202bf5f198bc21f7", size = 55957, upload-time = "2025-05-08T02:37:04.865Z" }, + { url = "https://files.pythonhosted.org/packages/dc/3d/a7cd8d8a6de0f3084fe4d457a8f76176e11b013867d1cad16c67d25e8bec/ijson-3.4.0-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b51e239e4cb537929796e840d349fc731fdc0d58b1a0683ce5465ad725321e0f", size = 69394, upload-time = "2025-05-08T02:37:06.142Z" }, + { url = "https://files.pythonhosted.org/packages/32/51/aa30abc02aabfc41c95887acf5f1f88da569642d7197fbe5aa105545226d/ijson-3.4.0-pp311-pypy311_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed05d43ec02be8ddb1ab59579761f6656b25d241a77fd74f4f0f7ec09074318a", size = 70377, upload-time = "2025-05-08T02:37:07.353Z" }, + { url = "https://files.pythonhosted.org/packages/c7/37/7773659b8d8d98b34234e1237352f6b446a3c12941619686c7d4a8a5c69c/ijson-3.4.0-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cfeca1aaa59d93fd0a3718cbe5f7ef0effff85cf837e0bceb71831a47f39cc14", size = 67767, upload-time = "2025-05-08T02:37:08.587Z" }, + { url = "https://files.pythonhosted.org/packages/cd/1f/dd52a84ed140e31a5d226cd47d98d21aa559aead35ef7bae479eab4c494c/ijson-3.4.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:7ca72ca12e9a1dd4252c97d952be34282907f263f7e28fcdff3a01b83981e837", size = 53864, upload-time = "2025-05-08T02:37:10.044Z" }, ] [[package]] @@ -2207,7 +2342,7 @@ wheels = [ [[package]] name = "ipykernel" -version = "7.0.1" +version = "6.30.1" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "appnope", marker = "sys_platform == 'darwin'" }, @@ -2224,9 +2359,9 @@ dependencies = [ { name = "tornado" }, { name = "traitlets" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/a8/4c/9f0024c8457286c6bfd5405a15d650ec5ea36f420ef9bbc58b301f66cfc5/ipykernel-7.0.1.tar.gz", hash = "sha256:2d3fd7cdef22071c2abbad78f142b743228c5d59cd470d034871ae0ac359533c", size = 171460, upload-time = "2025-10-14T16:17:07.325Z" } +sdist = { url = "https://files.pythonhosted.org/packages/bb/76/11082e338e0daadc89c8ff866185de11daf67d181901038f9e139d109761/ipykernel-6.30.1.tar.gz", hash = "sha256:6abb270161896402e76b91394fcdce5d1be5d45f456671e5080572f8505be39b", size = 166260, upload-time = "2025-08-04T15:47:35.018Z" } wheels = [ - { url = 
"https://files.pythonhosted.org/packages/b8/f7/761037905ffdec673533bfa43af8d4c31c859c778dfc3bbb71899875ec18/ipykernel-7.0.1-py3-none-any.whl", hash = "sha256:87182a8305e28954b6721087dec45b171712610111d494c17bb607befa1c4000", size = 118157, upload-time = "2025-10-14T16:17:05.606Z" }, + { url = "https://files.pythonhosted.org/packages/fc/c7/b445faca8deb954fe536abebff4ece5b097b923de482b26e78448c89d1dd/ipykernel-6.30.1-py3-none-any.whl", hash = "sha256:aa6b9fb93dca949069d8b85b6c79b2518e32ac583ae9c7d37c51d119e18b3fb4", size = 117484, upload-time = "2025-08-04T15:47:32.622Z" }, ] [[package]] @@ -2676,6 +2811,32 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/99/43/7320c50e4133575c66e9f7dadead35ab22d7c012a3b09bb35647792b2a6d/kiwisolver-1.4.9-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:0ab74e19f6a2b027ea4f845a78827969af45ce790e6cb3e1ebab71bdf9f215ff", size = 2594639, upload-time = "2025-08-10T21:26:57.882Z" }, { url = "https://files.pythonhosted.org/packages/65/d6/17ae4a270d4a987ef8a385b906d2bdfc9fce502d6dc0d3aea865b47f548c/kiwisolver-1.4.9-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:dba5ee5d3981160c28d5490f0d1b7ed730c22470ff7f6cc26cfcfaacb9896a07", size = 2391741, upload-time = "2025-08-10T21:26:59.237Z" }, { url = "https://files.pythonhosted.org/packages/2a/8f/8f6f491d595a9e5912971f3f863d81baddccc8a4d0c3749d6a0dd9ffc9df/kiwisolver-1.4.9-cp313-cp313t-win_arm64.whl", hash = "sha256:0749fd8f4218ad2e851e11cc4dc05c7cbc0cbc4267bdfdb31782e65aace4ee9c", size = 68646, upload-time = "2025-08-10T21:27:00.52Z" }, + { url = "https://files.pythonhosted.org/packages/6b/32/6cc0fbc9c54d06c2969faa9c1d29f5751a2e51809dd55c69055e62d9b426/kiwisolver-1.4.9-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:9928fe1eb816d11ae170885a74d074f57af3a0d65777ca47e9aeb854a1fba386", size = 123806, upload-time = "2025-08-10T21:27:01.537Z" }, + { url = "https://files.pythonhosted.org/packages/b2/dd/2bfb1d4a4823d92e8cbb420fe024b8d2167f72079b3bb941207c42570bdf/kiwisolver-1.4.9-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:d0005b053977e7b43388ddec89fa567f43d4f6d5c2c0affe57de5ebf290dc552", size = 66605, upload-time = "2025-08-10T21:27:03.335Z" }, + { url = "https://files.pythonhosted.org/packages/f7/69/00aafdb4e4509c2ca6064646cba9cd4b37933898f426756adb2cb92ebbed/kiwisolver-1.4.9-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:2635d352d67458b66fd0667c14cb1d4145e9560d503219034a18a87e971ce4f3", size = 64925, upload-time = "2025-08-10T21:27:04.339Z" }, + { url = "https://files.pythonhosted.org/packages/43/dc/51acc6791aa14e5cb6d8a2e28cefb0dc2886d8862795449d021334c0df20/kiwisolver-1.4.9-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:767c23ad1c58c9e827b649a9ab7809fd5fd9db266a9cf02b0e926ddc2c680d58", size = 1472414, upload-time = "2025-08-10T21:27:05.437Z" }, + { url = "https://files.pythonhosted.org/packages/3d/bb/93fa64a81db304ac8a246f834d5094fae4b13baf53c839d6bb6e81177129/kiwisolver-1.4.9-cp314-cp314-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:72d0eb9fba308b8311685c2268cf7d0a0639a6cd027d8128659f72bdd8a024b4", size = 1281272, upload-time = "2025-08-10T21:27:07.063Z" }, + { url = "https://files.pythonhosted.org/packages/70/e6/6df102916960fb8d05069d4bd92d6d9a8202d5a3e2444494e7cd50f65b7a/kiwisolver-1.4.9-cp314-cp314-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f68e4f3eeca8fb22cc3d731f9715a13b652795ef657a13df1ad0c7dc0e9731df", size = 1298578, upload-time = "2025-08-10T21:27:08.452Z" }, + { url = 
"https://files.pythonhosted.org/packages/7c/47/e142aaa612f5343736b087864dbaebc53ea8831453fb47e7521fa8658f30/kiwisolver-1.4.9-cp314-cp314-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d84cd4061ae292d8ac367b2c3fa3aad11cb8625a95d135fe93f286f914f3f5a6", size = 1345607, upload-time = "2025-08-10T21:27:10.125Z" }, + { url = "https://files.pythonhosted.org/packages/54/89/d641a746194a0f4d1a3670fb900d0dbaa786fb98341056814bc3f058fa52/kiwisolver-1.4.9-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:a60ea74330b91bd22a29638940d115df9dc00af5035a9a2a6ad9399ffb4ceca5", size = 2230150, upload-time = "2025-08-10T21:27:11.484Z" }, + { url = "https://files.pythonhosted.org/packages/aa/6b/5ee1207198febdf16ac11f78c5ae40861b809cbe0e6d2a8d5b0b3044b199/kiwisolver-1.4.9-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:ce6a3a4e106cf35c2d9c4fa17c05ce0b180db622736845d4315519397a77beaf", size = 2325979, upload-time = "2025-08-10T21:27:12.917Z" }, + { url = "https://files.pythonhosted.org/packages/fc/ff/b269eefd90f4ae14dcc74973d5a0f6d28d3b9bb1afd8c0340513afe6b39a/kiwisolver-1.4.9-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:77937e5e2a38a7b48eef0585114fe7930346993a88060d0bf886086d2aa49ef5", size = 2491456, upload-time = "2025-08-10T21:27:14.353Z" }, + { url = "https://files.pythonhosted.org/packages/fc/d4/10303190bd4d30de547534601e259a4fbf014eed94aae3e5521129215086/kiwisolver-1.4.9-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:24c175051354f4a28c5d6a31c93906dc653e2bf234e8a4bbfb964892078898ce", size = 2294621, upload-time = "2025-08-10T21:27:15.808Z" }, + { url = "https://files.pythonhosted.org/packages/28/e0/a9a90416fce5c0be25742729c2ea52105d62eda6c4be4d803c2a7be1fa50/kiwisolver-1.4.9-cp314-cp314-win_amd64.whl", hash = "sha256:0763515d4df10edf6d06a3c19734e2566368980d21ebec439f33f9eb936c07b7", size = 75417, upload-time = "2025-08-10T21:27:17.436Z" }, + { url = "https://files.pythonhosted.org/packages/1f/10/6949958215b7a9a264299a7db195564e87900f709db9245e4ebdd3c70779/kiwisolver-1.4.9-cp314-cp314-win_arm64.whl", hash = "sha256:0e4e2bf29574a6a7b7f6cb5fa69293b9f96c928949ac4a53ba3f525dffb87f9c", size = 66582, upload-time = "2025-08-10T21:27:18.436Z" }, + { url = "https://files.pythonhosted.org/packages/ec/79/60e53067903d3bc5469b369fe0dfc6b3482e2133e85dae9daa9527535991/kiwisolver-1.4.9-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:d976bbb382b202f71c67f77b0ac11244021cfa3f7dfd9e562eefcea2df711548", size = 126514, upload-time = "2025-08-10T21:27:19.465Z" }, + { url = "https://files.pythonhosted.org/packages/25/d1/4843d3e8d46b072c12a38c97c57fab4608d36e13fe47d47ee96b4d61ba6f/kiwisolver-1.4.9-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:2489e4e5d7ef9a1c300a5e0196e43d9c739f066ef23270607d45aba368b91f2d", size = 67905, upload-time = "2025-08-10T21:27:20.51Z" }, + { url = "https://files.pythonhosted.org/packages/8c/ae/29ffcbd239aea8b93108de1278271ae764dfc0d803a5693914975f200596/kiwisolver-1.4.9-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:e2ea9f7ab7fbf18fffb1b5434ce7c69a07582f7acc7717720f1d69f3e806f90c", size = 66399, upload-time = "2025-08-10T21:27:21.496Z" }, + { url = "https://files.pythonhosted.org/packages/a1/ae/d7ba902aa604152c2ceba5d352d7b62106bedbccc8e95c3934d94472bfa3/kiwisolver-1.4.9-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:b34e51affded8faee0dfdb705416153819d8ea9250bbbf7ea1b249bdeb5f1122", size = 1582197, upload-time = "2025-08-10T21:27:22.604Z" }, + { url = 
"https://files.pythonhosted.org/packages/f2/41/27c70d427eddb8bc7e4f16420a20fefc6f480312122a59a959fdfe0445ad/kiwisolver-1.4.9-cp314-cp314t-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d8aacd3d4b33b772542b2e01beb50187536967b514b00003bdda7589722d2a64", size = 1390125, upload-time = "2025-08-10T21:27:24.036Z" }, + { url = "https://files.pythonhosted.org/packages/41/42/b3799a12bafc76d962ad69083f8b43b12bf4fe78b097b12e105d75c9b8f1/kiwisolver-1.4.9-cp314-cp314t-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:7cf974dd4e35fa315563ac99d6287a1024e4dc2077b8a7d7cd3d2fb65d283134", size = 1402612, upload-time = "2025-08-10T21:27:25.773Z" }, + { url = "https://files.pythonhosted.org/packages/d2/b5/a210ea073ea1cfaca1bb5c55a62307d8252f531beb364e18aa1e0888b5a0/kiwisolver-1.4.9-cp314-cp314t-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:85bd218b5ecfbee8c8a82e121802dcb519a86044c9c3b2e4aef02fa05c6da370", size = 1453990, upload-time = "2025-08-10T21:27:27.089Z" }, + { url = "https://files.pythonhosted.org/packages/5f/ce/a829eb8c033e977d7ea03ed32fb3c1781b4fa0433fbadfff29e39c676f32/kiwisolver-1.4.9-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:0856e241c2d3df4efef7c04a1e46b1936b6120c9bcf36dd216e3acd84bc4fb21", size = 2331601, upload-time = "2025-08-10T21:27:29.343Z" }, + { url = "https://files.pythonhosted.org/packages/e0/4b/b5e97eb142eb9cd0072dacfcdcd31b1c66dc7352b0f7c7255d339c0edf00/kiwisolver-1.4.9-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = "sha256:9af39d6551f97d31a4deebeac6f45b156f9755ddc59c07b402c148f5dbb6482a", size = 2422041, upload-time = "2025-08-10T21:27:30.754Z" }, + { url = "https://files.pythonhosted.org/packages/40/be/8eb4cd53e1b85ba4edc3a9321666f12b83113a178845593307a3e7891f44/kiwisolver-1.4.9-cp314-cp314t-musllinux_1_2_s390x.whl", hash = "sha256:bb4ae2b57fc1d8cbd1cf7b1d9913803681ffa903e7488012be5b76dedf49297f", size = 2594897, upload-time = "2025-08-10T21:27:32.803Z" }, + { url = "https://files.pythonhosted.org/packages/99/dd/841e9a66c4715477ea0abc78da039832fbb09dac5c35c58dc4c41a407b8a/kiwisolver-1.4.9-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:aedff62918805fb62d43a4aa2ecd4482c380dc76cd31bd7c8878588a61bd0369", size = 2391835, upload-time = "2025-08-10T21:27:34.23Z" }, + { url = "https://files.pythonhosted.org/packages/0c/28/4b2e5c47a0da96896fdfdb006340ade064afa1e63675d01ea5ac222b6d52/kiwisolver-1.4.9-cp314-cp314t-win_amd64.whl", hash = "sha256:1fa333e8b2ce4d9660f2cda9c0e1b6bafcfb2457a9d259faa82289e73ec24891", size = 79988, upload-time = "2025-08-10T21:27:35.587Z" }, + { url = "https://files.pythonhosted.org/packages/80/be/3578e8afd18c88cdf9cb4cffde75a96d2be38c5a903f1ed0ceec061bd09e/kiwisolver-1.4.9-cp314-cp314t-win_arm64.whl", hash = "sha256:4a48a2ce79d65d363597ef7b567ce3d14d68783d2b2263d98db3d9477805ba32", size = 70260, upload-time = "2025-08-10T21:27:36.606Z" }, { url = "https://files.pythonhosted.org/packages/a3/0f/36d89194b5a32c054ce93e586d4049b6c2c22887b0eb229c61c68afd3078/kiwisolver-1.4.9-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:720e05574713db64c356e86732c0f3c5252818d05f9df320f0ad8380641acea5", size = 60104, upload-time = "2025-08-10T21:27:43.287Z" }, { url = "https://files.pythonhosted.org/packages/52/ba/4ed75f59e4658fd21fe7dde1fee0ac397c678ec3befba3fe6482d987af87/kiwisolver-1.4.9-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:17680d737d5335b552994a2008fab4c851bcd7de33094a82067ef3a576ff02fa", size = 58592, upload-time = "2025-08-10T21:27:44.314Z" }, { url = 
"https://files.pythonhosted.org/packages/33/01/a8ea7c5ea32a9b45ceeaee051a04c8ed4320f5add3c51bfa20879b765b70/kiwisolver-1.4.9-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:85b5352f94e490c028926ea567fc569c52ec79ce131dadb968d3853e809518c2", size = 80281, upload-time = "2025-08-10T21:27:45.369Z" }, @@ -2706,7 +2867,7 @@ wheels = [ [[package]] name = "logfire" -version = "4.14.2" +version = "4.12.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "executing" }, @@ -2717,9 +2878,9 @@ dependencies = [ { name = "rich" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/5c/89/d26951b6b21790641720c12cfd6dca0cf7ead0f5ddd7de4299837b90b8b1/logfire-4.14.2.tar.gz", hash = "sha256:8dcedbd59c3d06a8794a93bbf09add788de3b74c45afa821750992f0c822c628", size = 548291, upload-time = "2025-10-24T20:14:39.115Z" } +sdist = { url = "https://files.pythonhosted.org/packages/b4/cf/38d617c783a1c4233f025b8a27e617d25958494fc53ff396d0dd1dea54d2/logfire-4.12.0.tar.gz", hash = "sha256:92de3b9640fd7dfde1e5a37e67c0df1f1a95c704fa72ddd9b6db07903b6bd3d7", size = 547148, upload-time = "2025-10-08T11:30:09.684Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/a7/92/4fba7b8f4f56f721ad279cb0c08164bffa14e93cfd184d1a4cc7151c52a2/logfire-4.14.2-py3-none-any.whl", hash = "sha256:caa8111b20f263f4ebb0ae380a62f2a214aeb07d5e2f03c9300fa096d0a8e692", size = 228364, upload-time = "2025-10-24T20:14:34.495Z" }, + { url = "https://files.pythonhosted.org/packages/ba/58/0aed8ff902d6638916ae9a5f45f951cad84d8eb929a1e0b9844edd4c9135/logfire-4.12.0-py3-none-any.whl", hash = "sha256:b9777b12605ec987f8ce04855bd95ee5b86f4a6dbf9766e1ed101a07257322d2", size = 227201, upload-time = "2025-10-08T11:30:06.389Z" }, ] [package.optional-dependencies] @@ -2783,6 +2944,12 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/2a/43/70201ccf7b57f172ee1bb4d14fc7194359802aa17c1ac1608d503c19ee47/loro-1.8.1-cp313-cp313t-musllinux_1_2_armv7l.whl", hash = "sha256:9e6dde01971d72ba678161aaa24bc5261929def86a6feb8149d3e2dab0964aea", size = 3444718, upload-time = "2025-09-23T15:51:33.872Z" }, { url = "https://files.pythonhosted.org/packages/14/b8/01c1d4339ab67d8aff6a5038db6251f6d44967a663f2692be6aabe276035/loro-1.8.1-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:8d3789752b26b40f26a44a80d784a4f9e40f2bd0e40a4eeb01e1e386920feaaa", size = 3490418, upload-time = "2025-09-23T15:52:11.183Z" }, { url = "https://files.pythonhosted.org/packages/60/67/88e0edaf4158184d87eee4efdce283306831632ef7ef010153abf6d36b82/loro-1.8.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:ab04743218b6cbbfdf4ca74d158aed20ed0c9d7019620d35548e89f1d519923b", size = 3389761, upload-time = "2025-09-23T15:52:47.785Z" }, + { url = "https://files.pythonhosted.org/packages/54/fb/ccf317276518df910340ddf7729a0ed1602d215db1f6ca8ccda0fc6071df/loro-1.8.1-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:29c25f33c659a8027974cd88c94f08b4708376b200290a858c8abd891d64ba15", size = 3072231, upload-time = "2025-09-23T15:50:43.568Z" }, + { url = "https://files.pythonhosted.org/packages/bd/5c/87f37c4bbef478373b15ad4052ab9ee69ae87646a9c853dda97147f4e87a/loro-1.8.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:4a9643d19eee7379b6980fc3b31a492bd22aa1e9aaa6fd67c8b5b4b57a0c7a1c", size = 2870631, upload-time = "2025-09-23T15:50:26.223Z" }, + { url = 
"https://files.pythonhosted.org/packages/a2/7f/b0d121297000d1278c4be96ebaed245b7e1edf74851b9ed5aa552daf85eb/loro-1.8.1-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:323370115c37a793e805952e21703d8e8c91cc7ef16dd3a378043fe40174599f", size = 3156119, upload-time = "2025-09-23T15:49:51.227Z" }, + { url = "https://files.pythonhosted.org/packages/70/ee/35c62e7acfc572397ffb09db60f20b32be422a7983ae3d891527983a6a7e/loro-1.8.1-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:69265d6751e536fd7ba1f04c6be200239b4d8090bcd1325a95ded08621c4c910", size = 3492080, upload-time = "2025-09-23T15:49:22.137Z" }, + { url = "https://files.pythonhosted.org/packages/23/36/543916bb43228e4d13e155d9f31cbe16cf4f995d306aa5dbf4aba2b44170/loro-1.8.1-cp314-cp314-win32.whl", hash = "sha256:00c3662f50b81276a0f45d90504402e36512fda9f98e3e9353cc2b2394aa56a5", size = 2584938, upload-time = "2025-09-23T15:53:49.355Z" }, + { url = "https://files.pythonhosted.org/packages/4e/b1/8369c393107cafcaf6d5bdfe8cc4fead384b8ab8c7ddaf5d16235e5482e2/loro-1.8.1-cp314-cp314-win_amd64.whl", hash = "sha256:c6ebacceed553dad118dd61f946f5f8fb23ace5ca93e8ee8ebd4f6ca4cffa854", size = 2722278, upload-time = "2025-09-23T15:53:36.035Z" }, { url = "https://files.pythonhosted.org/packages/c3/3b/2d13e114e6e4e0fed0e2626d00437b9295b4cf700831b363b3a5cebf1704/loro-1.8.1-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c3767a49698ca87c981cf081e83cd693bb6db1891afa735e26eb07e4a8e251eb", size = 3106733, upload-time = "2025-09-23T15:46:59.98Z" }, { url = "https://files.pythonhosted.org/packages/d3/78/c44830c89c786dfa2164e573b4954ce1efca708bcffffc1ea283f26dbfeb/loro-1.8.1-pp311-pypy311_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:af22361dd0f9d1fde7fe51da97a6348d811f1c89e4646d1ae539a8ebf08d2174", size = 3178590, upload-time = "2025-09-23T15:47:38.454Z" }, { url = "https://files.pythonhosted.org/packages/b4/1b/3aea45999e3a3f9d8162824cee70ec358b5a7b0e603d475b7856c7269246/loro-1.8.1-pp311-pypy311_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b499c748cb6840223c39e07844975e62d7405de4341ea6f84cf61fc7d9f983c7", size = 3562843, upload-time = "2025-09-23T15:48:14.966Z" }, @@ -2880,7 +3047,7 @@ wheels = [ [[package]] name = "marimo" -version = "0.17.2" +version = "0.16.5" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "click" }, @@ -2889,7 +3056,7 @@ dependencies = [ { name = "jedi" }, { name = "loro" }, { name = "markdown" }, - { name = "msgspec-m" }, + { name = "msgspec" }, { name = "narwhals" }, { name = "packaging" }, { name = "psutil" }, @@ -2901,9 +3068,9 @@ dependencies = [ { name = "uvicorn" }, { name = "websockets" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/b7/49/a8c37df4c2c4c4d13c50d06843be81bc86ee3b2acd7530df6e038bc71571/marimo-0.17.2.tar.gz", hash = "sha256:c95a357d688d2cd1d0235f97ea597b009e64c708fdd4760396cc9e62ca5de544", size = 33964383, upload-time = "2025-10-24T20:15:26.672Z" } +sdist = { url = "https://files.pythonhosted.org/packages/67/6b/5e1fcdeebebf6cebfbbf7cd6eda3be1ee019e50c693d99f00e478c4f3f8c/marimo-0.16.5.tar.gz", hash = "sha256:8f5939d3c4e67ff25f6cfeefe731971ed7f3346c20098034b923a24a0d7770d6", size = 33882430, upload-time = "2025-10-02T19:57:49.438Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/90/66/c77eafb32d09f1d1e570cee06c4b1d265027864202de4177c8980b90fe0b/marimo-0.17.2-py3-none-any.whl", hash = 
"sha256:5ed1f013694e375cb524663c0930d83b8aa2bfb0e8af9400ffc078d72d253404", size = 34478909, upload-time = "2025-10-24T20:15:22.302Z" }, + { url = "https://files.pythonhosted.org/packages/0d/a0/14bd2ae122db3c4c43c6ebf0c861196328156f61e6452c2cd67b730430b0/marimo-0.16.5-py3-none-any.whl", hash = "sha256:1f98c0ee0fed9337e26c895c662f92cc578cdd03502c194eac9ceeb434bf479b", size = 34400840, upload-time = "2025-10-02T19:57:45.076Z" }, ] [[package]] @@ -2986,6 +3153,28 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/80/d6/2d1b89f6ca4bff1036499b1e29a1d02d282259f3681540e16563f27ebc23/markupsafe-3.0.3-cp313-cp313t-win32.whl", hash = "sha256:69c0b73548bc525c8cb9a251cddf1931d1db4d2258e9599c28c07ef3580ef354", size = 14612, upload-time = "2025-09-27T18:37:02.639Z" }, { url = "https://files.pythonhosted.org/packages/2b/98/e48a4bfba0a0ffcf9925fe2d69240bfaa19c6f7507b8cd09c70684a53c1e/markupsafe-3.0.3-cp313-cp313t-win_amd64.whl", hash = "sha256:1b4b79e8ebf6b55351f0d91fe80f893b4743f104bff22e90697db1590e47a218", size = 15200, upload-time = "2025-09-27T18:37:03.582Z" }, { url = "https://files.pythonhosted.org/packages/0e/72/e3cc540f351f316e9ed0f092757459afbc595824ca724cbc5a5d4263713f/markupsafe-3.0.3-cp313-cp313t-win_arm64.whl", hash = "sha256:ad2cf8aa28b8c020ab2fc8287b0f823d0a7d8630784c31e9ee5edea20f406287", size = 13973, upload-time = "2025-09-27T18:37:04.929Z" }, + { url = "https://files.pythonhosted.org/packages/33/8a/8e42d4838cd89b7dde187011e97fe6c3af66d8c044997d2183fbd6d31352/markupsafe-3.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:eaa9599de571d72e2daf60164784109f19978b327a3910d3e9de8c97b5b70cfe", size = 11619, upload-time = "2025-09-27T18:37:06.342Z" }, + { url = "https://files.pythonhosted.org/packages/b5/64/7660f8a4a8e53c924d0fa05dc3a55c9cee10bbd82b11c5afb27d44b096ce/markupsafe-3.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c47a551199eb8eb2121d4f0f15ae0f923d31350ab9280078d1e5f12b249e0026", size = 12029, upload-time = "2025-09-27T18:37:07.213Z" }, + { url = "https://files.pythonhosted.org/packages/da/ef/e648bfd021127bef5fa12e1720ffed0c6cbb8310c8d9bea7266337ff06de/markupsafe-3.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f34c41761022dd093b4b6896d4810782ffbabe30f2d443ff5f083e0cbbb8c737", size = 24408, upload-time = "2025-09-27T18:37:09.572Z" }, + { url = "https://files.pythonhosted.org/packages/41/3c/a36c2450754618e62008bf7435ccb0f88053e07592e6028a34776213d877/markupsafe-3.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:457a69a9577064c05a97c41f4e65148652db078a3a509039e64d3467b9e7ef97", size = 23005, upload-time = "2025-09-27T18:37:10.58Z" }, + { url = "https://files.pythonhosted.org/packages/bc/20/b7fdf89a8456b099837cd1dc21974632a02a999ec9bf7ca3e490aacd98e7/markupsafe-3.0.3-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e8afc3f2ccfa24215f8cb28dcf43f0113ac3c37c2f0f0806d8c70e4228c5cf4d", size = 22048, upload-time = "2025-09-27T18:37:11.547Z" }, + { url = "https://files.pythonhosted.org/packages/9a/a7/591f592afdc734f47db08a75793a55d7fbcc6902a723ae4cfbab61010cc5/markupsafe-3.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:ec15a59cf5af7be74194f7ab02d0f59a62bdcf1a537677ce67a2537c9b87fcda", size = 23821, upload-time = "2025-09-27T18:37:12.48Z" }, + { url = "https://files.pythonhosted.org/packages/7d/33/45b24e4f44195b26521bc6f1a82197118f74df348556594bd2262bda1038/markupsafe-3.0.3-cp314-cp314-musllinux_1_2_riscv64.whl", hash 
= "sha256:0eb9ff8191e8498cca014656ae6b8d61f39da5f95b488805da4bb029cccbfbaf", size = 21606, upload-time = "2025-09-27T18:37:13.485Z" }, + { url = "https://files.pythonhosted.org/packages/ff/0e/53dfaca23a69fbfbbf17a4b64072090e70717344c52eaaaa9c5ddff1e5f0/markupsafe-3.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:2713baf880df847f2bece4230d4d094280f4e67b1e813eec43b4c0e144a34ffe", size = 23043, upload-time = "2025-09-27T18:37:14.408Z" }, + { url = "https://files.pythonhosted.org/packages/46/11/f333a06fc16236d5238bfe74daccbca41459dcd8d1fa952e8fbd5dccfb70/markupsafe-3.0.3-cp314-cp314-win32.whl", hash = "sha256:729586769a26dbceff69f7a7dbbf59ab6572b99d94576a5592625d5b411576b9", size = 14747, upload-time = "2025-09-27T18:37:15.36Z" }, + { url = "https://files.pythonhosted.org/packages/28/52/182836104b33b444e400b14f797212f720cbc9ed6ba34c800639d154e821/markupsafe-3.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:bdc919ead48f234740ad807933cdf545180bfbe9342c2bb451556db2ed958581", size = 15341, upload-time = "2025-09-27T18:37:16.496Z" }, + { url = "https://files.pythonhosted.org/packages/6f/18/acf23e91bd94fd7b3031558b1f013adfa21a8e407a3fdb32745538730382/markupsafe-3.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:5a7d5dc5140555cf21a6fefbdbf8723f06fcd2f63ef108f2854de715e4422cb4", size = 14073, upload-time = "2025-09-27T18:37:17.476Z" }, + { url = "https://files.pythonhosted.org/packages/3c/f0/57689aa4076e1b43b15fdfa646b04653969d50cf30c32a102762be2485da/markupsafe-3.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:1353ef0c1b138e1907ae78e2f6c63ff67501122006b0f9abad68fda5f4ffc6ab", size = 11661, upload-time = "2025-09-27T18:37:18.453Z" }, + { url = "https://files.pythonhosted.org/packages/89/c3/2e67a7ca217c6912985ec766c6393b636fb0c2344443ff9d91404dc4c79f/markupsafe-3.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:1085e7fbddd3be5f89cc898938f42c0b3c711fdcb37d75221de2666af647c175", size = 12069, upload-time = "2025-09-27T18:37:19.332Z" }, + { url = "https://files.pythonhosted.org/packages/f0/00/be561dce4e6ca66b15276e184ce4b8aec61fe83662cce2f7d72bd3249d28/markupsafe-3.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1b52b4fb9df4eb9ae465f8d0c228a00624de2334f216f178a995ccdcf82c4634", size = 25670, upload-time = "2025-09-27T18:37:20.245Z" }, + { url = "https://files.pythonhosted.org/packages/50/09/c419f6f5a92e5fadde27efd190eca90f05e1261b10dbd8cbcb39cd8ea1dc/markupsafe-3.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fed51ac40f757d41b7c48425901843666a6677e3e8eb0abcff09e4ba6e664f50", size = 23598, upload-time = "2025-09-27T18:37:21.177Z" }, + { url = "https://files.pythonhosted.org/packages/22/44/a0681611106e0b2921b3033fc19bc53323e0b50bc70cffdd19f7d679bb66/markupsafe-3.0.3-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:f190daf01f13c72eac4efd5c430a8de82489d9cff23c364c3ea822545032993e", size = 23261, upload-time = "2025-09-27T18:37:22.167Z" }, + { url = "https://files.pythonhosted.org/packages/5f/57/1b0b3f100259dc9fffe780cfb60d4be71375510e435efec3d116b6436d43/markupsafe-3.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:e56b7d45a839a697b5eb268c82a71bd8c7f6c94d6fd50c3d577fa39a9f1409f5", size = 24835, upload-time = "2025-09-27T18:37:23.296Z" }, + { url = "https://files.pythonhosted.org/packages/26/6a/4bf6d0c97c4920f1597cc14dd720705eca0bf7c787aebc6bb4d1bead5388/markupsafe-3.0.3-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = 
"sha256:f3e98bb3798ead92273dc0e5fd0f31ade220f59a266ffd8a4f6065e0a3ce0523", size = 22733, upload-time = "2025-09-27T18:37:24.237Z" }, + { url = "https://files.pythonhosted.org/packages/14/c7/ca723101509b518797fedc2fdf79ba57f886b4aca8a7d31857ba3ee8281f/markupsafe-3.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:5678211cb9333a6468fb8d8be0305520aa073f50d17f089b5b4b477ea6e67fdc", size = 23672, upload-time = "2025-09-27T18:37:25.271Z" }, + { url = "https://files.pythonhosted.org/packages/fb/df/5bd7a48c256faecd1d36edc13133e51397e41b73bb77e1a69deab746ebac/markupsafe-3.0.3-cp314-cp314t-win32.whl", hash = "sha256:915c04ba3851909ce68ccc2b8e2cd691618c4dc4c4232fb7982bca3f41fd8c3d", size = 14819, upload-time = "2025-09-27T18:37:26.285Z" }, + { url = "https://files.pythonhosted.org/packages/1a/8a/0402ba61a2f16038b48b39bccca271134be00c5c9f0f623208399333c448/markupsafe-3.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4faffd047e07c38848ce017e8725090413cd80cbc23d86e55c587bf979e579c9", size = 15426, upload-time = "2025-09-27T18:37:27.316Z" }, + { url = "https://files.pythonhosted.org/packages/70/bc/6f1c2f612465f5fa89b95bead1f44dcb607670fd42891d8fdcd5d039f4f4/markupsafe-3.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:32001d6a8fc98c8cb5c947787c5d08b0a50663d139f1305bac5885d98d9b40fa", size = 14146, upload-time = "2025-09-27T18:37:28.327Z" }, ] [[package]] @@ -3002,7 +3191,7 @@ wheels = [ [[package]] name = "matplotlib" -version = "3.10.7" +version = "3.10.6" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "contourpy" }, @@ -3015,39 +3204,53 @@ dependencies = [ { name = "pyparsing" }, { name = "python-dateutil" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/ae/e2/d2d5295be2f44c678ebaf3544ba32d20c1f9ef08c49fe47f496180e1db15/matplotlib-3.10.7.tar.gz", hash = "sha256:a06ba7e2a2ef9131c79c49e63dad355d2d878413a0376c1727c8b9335ff731c7", size = 34804865, upload-time = "2025-10-09T00:28:00.669Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/fc/bc/0fb489005669127ec13f51be0c6adc074d7cf191075dab1da9fe3b7a3cfc/matplotlib-3.10.7-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:53b492410a6cd66c7a471de6c924f6ede976e963c0f3097a3b7abfadddc67d0a", size = 8257507, upload-time = "2025-10-09T00:26:19.073Z" }, - { url = "https://files.pythonhosted.org/packages/e2/6a/d42588ad895279ff6708924645b5d2ed54a7fb2dc045c8a804e955aeace1/matplotlib-3.10.7-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d9749313deb729f08207718d29c86246beb2ea3fdba753595b55901dee5d2fd6", size = 8119565, upload-time = "2025-10-09T00:26:21.023Z" }, - { url = "https://files.pythonhosted.org/packages/10/b7/4aa196155b4d846bd749cf82aa5a4c300cf55a8b5e0dfa5b722a63c0f8a0/matplotlib-3.10.7-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:2222c7ba2cbde7fe63032769f6eb7e83ab3227f47d997a8453377709b7fe3a5a", size = 8692668, upload-time = "2025-10-09T00:26:22.967Z" }, - { url = "https://files.pythonhosted.org/packages/e6/e7/664d2b97016f46683a02d854d730cfcf54ff92c1dafa424beebef50f831d/matplotlib-3.10.7-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e91f61a064c92c307c5a9dc8c05dc9f8a68f0a3be199d9a002a0622e13f874a1", size = 9521051, upload-time = "2025-10-09T00:26:25.041Z" }, - { url = "https://files.pythonhosted.org/packages/a8/a3/37aef1404efa615f49b5758a5e0261c16dd88f389bc1861e722620e4a754/matplotlib-3.10.7-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:6f1851eab59ca082c95df5a500106bad73672645625e04538b3ad0f69471ffcc", size = 9576878, 
upload-time = "2025-10-09T00:26:27.478Z" }, - { url = "https://files.pythonhosted.org/packages/33/cd/b145f9797126f3f809d177ca378de57c45413c5099c5990de2658760594a/matplotlib-3.10.7-cp311-cp311-win_amd64.whl", hash = "sha256:6516ce375109c60ceec579e699524e9d504cd7578506f01150f7a6bc174a775e", size = 8115142, upload-time = "2025-10-09T00:26:29.774Z" }, - { url = "https://files.pythonhosted.org/packages/2e/39/63bca9d2b78455ed497fcf51a9c71df200a11048f48249038f06447fa947/matplotlib-3.10.7-cp311-cp311-win_arm64.whl", hash = "sha256:b172db79759f5f9bc13ef1c3ef8b9ee7b37b0247f987fbbbdaa15e4f87fd46a9", size = 7992439, upload-time = "2025-10-09T00:26:40.32Z" }, - { url = "https://files.pythonhosted.org/packages/be/b3/09eb0f7796932826ec20c25b517d568627754f6c6462fca19e12c02f2e12/matplotlib-3.10.7-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:7a0edb7209e21840e8361e91ea84ea676658aa93edd5f8762793dec77a4a6748", size = 8272389, upload-time = "2025-10-09T00:26:42.474Z" }, - { url = "https://files.pythonhosted.org/packages/11/0b/1ae80ddafb8652fd8046cb5c8460ecc8d4afccb89e2c6d6bec61e04e1eaf/matplotlib-3.10.7-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:c380371d3c23e0eadf8ebff114445b9f970aff2010198d498d4ab4c3b41eea4f", size = 8128247, upload-time = "2025-10-09T00:26:44.77Z" }, - { url = "https://files.pythonhosted.org/packages/7d/18/95ae2e242d4a5c98bd6e90e36e128d71cf1c7e39b0874feaed3ef782e789/matplotlib-3.10.7-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d5f256d49fea31f40f166a5e3131235a5d2f4b7f44520b1cf0baf1ce568ccff0", size = 8696996, upload-time = "2025-10-09T00:26:46.792Z" }, - { url = "https://files.pythonhosted.org/packages/7e/3d/5b559efc800bd05cb2033aa85f7e13af51958136a48327f7c261801ff90a/matplotlib-3.10.7-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:11ae579ac83cdf3fb72573bb89f70e0534de05266728740d478f0f818983c695", size = 9530153, upload-time = "2025-10-09T00:26:49.07Z" }, - { url = "https://files.pythonhosted.org/packages/88/57/eab4a719fd110312d3c220595d63a3c85ec2a39723f0f4e7fa7e6e3f74ba/matplotlib-3.10.7-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:4c14b6acd16cddc3569a2d515cfdd81c7a68ac5639b76548cfc1a9e48b20eb65", size = 9593093, upload-time = "2025-10-09T00:26:51.067Z" }, - { url = "https://files.pythonhosted.org/packages/31/3c/80816f027b3a4a28cd2a0a6ef7f89a2db22310e945cd886ec25bfb399221/matplotlib-3.10.7-cp312-cp312-win_amd64.whl", hash = "sha256:0d8c32b7ea6fb80b1aeff5a2ceb3fb9778e2759e899d9beff75584714afcc5ee", size = 8122771, upload-time = "2025-10-09T00:26:53.296Z" }, - { url = "https://files.pythonhosted.org/packages/de/77/ef1fc78bfe99999b2675435cc52120887191c566b25017d78beaabef7f2d/matplotlib-3.10.7-cp312-cp312-win_arm64.whl", hash = "sha256:5f3f6d315dcc176ba7ca6e74c7768fb7e4cf566c49cb143f6bc257b62e634ed8", size = 7992812, upload-time = "2025-10-09T00:26:54.882Z" }, - { url = "https://files.pythonhosted.org/packages/02/9c/207547916a02c78f6bdd83448d9b21afbc42f6379ed887ecf610984f3b4e/matplotlib-3.10.7-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:1d9d3713a237970569156cfb4de7533b7c4eacdd61789726f444f96a0d28f57f", size = 8273212, upload-time = "2025-10-09T00:26:56.752Z" }, - { url = "https://files.pythonhosted.org/packages/bc/d0/b3d3338d467d3fc937f0bb7f256711395cae6f78e22cef0656159950adf0/matplotlib-3.10.7-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:37a1fea41153dd6ee061d21ab69c9cf2cf543160b1b85d89cd3d2e2a7902ca4c", size = 8128713, upload-time = "2025-10-09T00:26:59.001Z" }, - { url = 
"https://files.pythonhosted.org/packages/22/ff/6425bf5c20d79aa5b959d1ce9e65f599632345391381c9a104133fe0b171/matplotlib-3.10.7-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:b3c4ea4948d93c9c29dc01c0c23eef66f2101bf75158c291b88de6525c55c3d1", size = 8698527, upload-time = "2025-10-09T00:27:00.69Z" }, - { url = "https://files.pythonhosted.org/packages/d0/7f/ccdca06f4c2e6c7989270ed7829b8679466682f4cfc0f8c9986241c023b6/matplotlib-3.10.7-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:22df30ffaa89f6643206cf13877191c63a50e8f800b038bc39bee9d2d4957632", size = 9529690, upload-time = "2025-10-09T00:27:02.664Z" }, - { url = "https://files.pythonhosted.org/packages/b8/95/b80fc2c1f269f21ff3d193ca697358e24408c33ce2b106a7438a45407b63/matplotlib-3.10.7-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:b69676845a0a66f9da30e87f48be36734d6748024b525ec4710be40194282c84", size = 9593732, upload-time = "2025-10-09T00:27:04.653Z" }, - { url = "https://files.pythonhosted.org/packages/e1/b6/23064a96308b9aeceeffa65e96bcde459a2ea4934d311dee20afde7407a0/matplotlib-3.10.7-cp313-cp313-win_amd64.whl", hash = "sha256:744991e0cc863dd669c8dc9136ca4e6e0082be2070b9d793cbd64bec872a6815", size = 8122727, upload-time = "2025-10-09T00:27:06.814Z" }, - { url = "https://files.pythonhosted.org/packages/b3/a6/2faaf48133b82cf3607759027f82b5c702aa99cdfcefb7f93d6ccf26a424/matplotlib-3.10.7-cp313-cp313-win_arm64.whl", hash = "sha256:fba2974df0bf8ce3c995fa84b79cde38326e0f7b5409e7a3a481c1141340bcf7", size = 7992958, upload-time = "2025-10-09T00:27:08.567Z" }, - { url = "https://files.pythonhosted.org/packages/4a/f0/b018fed0b599bd48d84c08794cb242227fe3341952da102ee9d9682db574/matplotlib-3.10.7-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:932c55d1fa7af4423422cb6a492a31cbcbdbe68fd1a9a3f545aa5e7a143b5355", size = 8316849, upload-time = "2025-10-09T00:27:10.254Z" }, - { url = "https://files.pythonhosted.org/packages/b0/b7/bb4f23856197659f275e11a2a164e36e65e9b48ea3e93c4ec25b4f163198/matplotlib-3.10.7-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:5e38c2d581d62ee729a6e144c47a71b3f42fb4187508dbbf4fe71d5612c3433b", size = 8178225, upload-time = "2025-10-09T00:27:12.241Z" }, - { url = "https://files.pythonhosted.org/packages/62/56/0600609893ff277e6f3ab3c0cef4eafa6e61006c058e84286c467223d4d5/matplotlib-3.10.7-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:786656bb13c237bbcebcd402f65f44dd61ead60ee3deb045af429d889c8dbc67", size = 8711708, upload-time = "2025-10-09T00:27:13.879Z" }, - { url = "https://files.pythonhosted.org/packages/d8/1a/6bfecb0cafe94d6658f2f1af22c43b76cf7a1c2f0dc34ef84cbb6809617e/matplotlib-3.10.7-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:09d7945a70ea43bf9248f4b6582734c2fe726723204a76eca233f24cffc7ef67", size = 9541409, upload-time = "2025-10-09T00:27:15.684Z" }, - { url = "https://files.pythonhosted.org/packages/08/50/95122a407d7f2e446fd865e2388a232a23f2b81934960ea802f3171518e4/matplotlib-3.10.7-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:d0b181e9fa8daf1d9f2d4c547527b167cb8838fc587deabca7b5c01f97199e84", size = 9594054, upload-time = "2025-10-09T00:27:17.547Z" }, - { url = "https://files.pythonhosted.org/packages/13/76/75b194a43b81583478a81e78a07da8d9ca6ddf50dd0a2ccabf258059481d/matplotlib-3.10.7-cp313-cp313t-win_amd64.whl", hash = "sha256:31963603041634ce1a96053047b40961f7a29eb8f9a62e80cc2c0427aa1d22a2", size = 8200100, upload-time = "2025-10-09T00:27:20.039Z" }, - { url = 
"https://files.pythonhosted.org/packages/f5/9e/6aefebdc9f8235c12bdeeda44cc0383d89c1e41da2c400caf3ee2073a3ce/matplotlib-3.10.7-cp313-cp313t-win_arm64.whl", hash = "sha256:aebed7b50aa6ac698c90f60f854b47e48cd2252b30510e7a1feddaf5a3f72cbf", size = 8042131, upload-time = "2025-10-09T00:27:21.608Z" }, - { url = "https://files.pythonhosted.org/packages/58/8f/76d5dc21ac64a49e5498d7f0472c0781dae442dd266a67458baec38288ec/matplotlib-3.10.7-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:15112bcbaef211bd663fa935ec33313b948e214454d949b723998a43357b17b0", size = 8252283, upload-time = "2025-10-09T00:27:54.739Z" }, - { url = "https://files.pythonhosted.org/packages/27/0d/9c5d4c2317feb31d819e38c9f947c942f42ebd4eb935fc6fd3518a11eaa7/matplotlib-3.10.7-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:d2a959c640cdeecdd2ec3136e8ea0441da59bcaf58d67e9c590740addba2cb68", size = 8116733, upload-time = "2025-10-09T00:27:56.406Z" }, - { url = "https://files.pythonhosted.org/packages/9a/cc/3fe688ff1355010937713164caacf9ed443675ac48a997bab6ed23b3f7c0/matplotlib-3.10.7-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3886e47f64611046bc1db523a09dd0a0a6bed6081e6f90e13806dd1d1d1b5e91", size = 8693919, upload-time = "2025-10-09T00:27:58.41Z" }, +sdist = { url = "https://files.pythonhosted.org/packages/a0/59/c3e6453a9676ffba145309a73c462bb407f4400de7de3f2b41af70720a3c/matplotlib-3.10.6.tar.gz", hash = "sha256:ec01b645840dd1996df21ee37f208cd8ba57644779fa20464010638013d3203c", size = 34804264, upload-time = "2025-08-30T00:14:25.137Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/80/d6/5d3665aa44c49005aaacaa68ddea6fcb27345961cd538a98bb0177934ede/matplotlib-3.10.6-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:905b60d1cb0ee604ce65b297b61cf8be9f4e6cfecf95a3fe1c388b5266bc8f4f", size = 8257527, upload-time = "2025-08-30T00:12:45.31Z" }, + { url = "https://files.pythonhosted.org/packages/8c/af/30ddefe19ca67eebd70047dabf50f899eaff6f3c5e6a1a7edaecaf63f794/matplotlib-3.10.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7bac38d816637343e53d7185d0c66677ff30ffb131044a81898b5792c956ba76", size = 8119583, upload-time = "2025-08-30T00:12:47.236Z" }, + { url = "https://files.pythonhosted.org/packages/d3/29/4a8650a3dcae97fa4f375d46efcb25920d67b512186f8a6788b896062a81/matplotlib-3.10.6-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:942a8de2b5bfff1de31d95722f702e2966b8a7e31f4e68f7cd963c7cd8861cf6", size = 8692682, upload-time = "2025-08-30T00:12:48.781Z" }, + { url = "https://files.pythonhosted.org/packages/aa/d3/b793b9cb061cfd5d42ff0f69d1822f8d5dbc94e004618e48a97a8373179a/matplotlib-3.10.6-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a3276c85370bc0dfca051ec65c5817d1e0f8f5ce1b7787528ec8ed2d524bbc2f", size = 9521065, upload-time = "2025-08-30T00:12:50.602Z" }, + { url = "https://files.pythonhosted.org/packages/f7/c5/53de5629f223c1c66668d46ac2621961970d21916a4bc3862b174eb2a88f/matplotlib-3.10.6-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9df5851b219225731f564e4b9e7f2ac1e13c9e6481f941b5631a0f8e2d9387ce", size = 9576888, upload-time = "2025-08-30T00:12:52.92Z" }, + { url = "https://files.pythonhosted.org/packages/fc/8e/0a18d6d7d2d0a2e66585032a760d13662e5250c784d53ad50434e9560991/matplotlib-3.10.6-cp311-cp311-win_amd64.whl", hash = "sha256:abb5d9478625dd9c9eb51a06d39aae71eda749ae9b3138afb23eb38824026c7e", size = 8115158, upload-time = "2025-08-30T00:12:54.863Z" }, + { url = 
"https://files.pythonhosted.org/packages/07/b3/1a5107bb66c261e23b9338070702597a2d374e5aa7004b7adfc754fbed02/matplotlib-3.10.6-cp311-cp311-win_arm64.whl", hash = "sha256:886f989ccfae63659183173bb3fced7fd65e9eb793c3cc21c273add368536951", size = 7992444, upload-time = "2025-08-30T00:12:57.067Z" }, + { url = "https://files.pythonhosted.org/packages/ea/1a/7042f7430055d567cc3257ac409fcf608599ab27459457f13772c2d9778b/matplotlib-3.10.6-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:31ca662df6a80bd426f871105fdd69db7543e28e73a9f2afe80de7e531eb2347", size = 8272404, upload-time = "2025-08-30T00:12:59.112Z" }, + { url = "https://files.pythonhosted.org/packages/a9/5d/1d5f33f5b43f4f9e69e6a5fe1fb9090936ae7bc8e2ff6158e7a76542633b/matplotlib-3.10.6-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1678bb61d897bb4ac4757b5ecfb02bfb3fddf7f808000fb81e09c510712fda75", size = 8128262, upload-time = "2025-08-30T00:13:01.141Z" }, + { url = "https://files.pythonhosted.org/packages/67/c3/135fdbbbf84e0979712df58e5e22b4f257b3f5e52a3c4aacf1b8abec0d09/matplotlib-3.10.6-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:56cd2d20842f58c03d2d6e6c1f1cf5548ad6f66b91e1e48f814e4fb5abd1cb95", size = 8697008, upload-time = "2025-08-30T00:13:03.24Z" }, + { url = "https://files.pythonhosted.org/packages/9c/be/c443ea428fb2488a3ea7608714b1bd85a82738c45da21b447dc49e2f8e5d/matplotlib-3.10.6-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:662df55604a2f9a45435566d6e2660e41efe83cd94f4288dfbf1e6d1eae4b0bb", size = 9530166, upload-time = "2025-08-30T00:13:05.951Z" }, + { url = "https://files.pythonhosted.org/packages/a9/35/48441422b044d74034aea2a3e0d1a49023f12150ebc58f16600132b9bbaf/matplotlib-3.10.6-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:08f141d55148cd1fc870c3387d70ca4df16dee10e909b3b038782bd4bda6ea07", size = 9593105, upload-time = "2025-08-30T00:13:08.356Z" }, + { url = "https://files.pythonhosted.org/packages/45/c3/994ef20eb4154ab84cc08d033834555319e4af970165e6c8894050af0b3c/matplotlib-3.10.6-cp312-cp312-win_amd64.whl", hash = "sha256:590f5925c2d650b5c9d813c5b3b5fc53f2929c3f8ef463e4ecfa7e052044fb2b", size = 8122784, upload-time = "2025-08-30T00:13:10.367Z" }, + { url = "https://files.pythonhosted.org/packages/57/b8/5c85d9ae0e40f04e71bedb053aada5d6bab1f9b5399a0937afb5d6b02d98/matplotlib-3.10.6-cp312-cp312-win_arm64.whl", hash = "sha256:f44c8d264a71609c79a78d50349e724f5d5fc3684ead7c2a473665ee63d868aa", size = 7992823, upload-time = "2025-08-30T00:13:12.24Z" }, + { url = "https://files.pythonhosted.org/packages/a0/db/18380e788bb837e724358287b08e223b32bc8dccb3b0c12fa8ca20bc7f3b/matplotlib-3.10.6-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:819e409653c1106c8deaf62e6de6b8611449c2cd9939acb0d7d4e57a3d95cc7a", size = 8273231, upload-time = "2025-08-30T00:13:13.881Z" }, + { url = "https://files.pythonhosted.org/packages/d3/0f/38dd49445b297e0d4f12a322c30779df0d43cb5873c7847df8a82e82ec67/matplotlib-3.10.6-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:59c8ac8382fefb9cb71308dde16a7c487432f5255d8f1fd32473523abecfecdf", size = 8128730, upload-time = "2025-08-30T00:13:15.556Z" }, + { url = "https://files.pythonhosted.org/packages/e5/b8/9eea6630198cb303d131d95d285a024b3b8645b1763a2916fddb44ca8760/matplotlib-3.10.6-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:84e82d9e0fd70c70bc55739defbd8055c54300750cbacf4740c9673a24d6933a", size = 8698539, upload-time = "2025-08-30T00:13:17.297Z" }, + { url = 
"https://files.pythonhosted.org/packages/71/34/44c7b1f075e1ea398f88aeabcc2907c01b9cc99e2afd560c1d49845a1227/matplotlib-3.10.6-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:25f7a3eb42d6c1c56e89eacd495661fc815ffc08d9da750bca766771c0fd9110", size = 9529702, upload-time = "2025-08-30T00:13:19.248Z" }, + { url = "https://files.pythonhosted.org/packages/b5/7f/e5c2dc9950c7facaf8b461858d1b92c09dd0cf174fe14e21953b3dda06f7/matplotlib-3.10.6-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:f9c862d91ec0b7842920a4cfdaaec29662195301914ea54c33e01f1a28d014b2", size = 9593742, upload-time = "2025-08-30T00:13:21.181Z" }, + { url = "https://files.pythonhosted.org/packages/ff/1d/70c28528794f6410ee2856cd729fa1f1756498b8d3126443b0a94e1a8695/matplotlib-3.10.6-cp313-cp313-win_amd64.whl", hash = "sha256:1b53bd6337eba483e2e7d29c5ab10eee644bc3a2491ec67cc55f7b44583ffb18", size = 8122753, upload-time = "2025-08-30T00:13:23.44Z" }, + { url = "https://files.pythonhosted.org/packages/e8/74/0e1670501fc7d02d981564caf7c4df42974464625935424ca9654040077c/matplotlib-3.10.6-cp313-cp313-win_arm64.whl", hash = "sha256:cbd5eb50b7058b2892ce45c2f4e92557f395c9991f5c886d1bb74a1582e70fd6", size = 7992973, upload-time = "2025-08-30T00:13:26.632Z" }, + { url = "https://files.pythonhosted.org/packages/b1/4e/60780e631d73b6b02bd7239f89c451a72970e5e7ec34f621eda55cd9a445/matplotlib-3.10.6-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:acc86dd6e0e695c095001a7fccff158c49e45e0758fdf5dcdbb0103318b59c9f", size = 8316869, upload-time = "2025-08-30T00:13:28.262Z" }, + { url = "https://files.pythonhosted.org/packages/f8/15/baa662374a579413210fc2115d40c503b7360a08e9cc254aa0d97d34b0c1/matplotlib-3.10.6-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:e228cd2ffb8f88b7d0b29e37f68ca9aaf83e33821f24a5ccc4f082dd8396bc27", size = 8178240, upload-time = "2025-08-30T00:13:30.007Z" }, + { url = "https://files.pythonhosted.org/packages/c6/3f/3c38e78d2aafdb8829fcd0857d25aaf9e7dd2dfcf7ec742765b585774931/matplotlib-3.10.6-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:658bc91894adeab669cf4bb4a186d049948262987e80f0857216387d7435d833", size = 8711719, upload-time = "2025-08-30T00:13:31.72Z" }, + { url = "https://files.pythonhosted.org/packages/96/4b/2ec2bbf8cefaa53207cc56118d1fa8a0f9b80642713ea9390235d331ede4/matplotlib-3.10.6-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8913b7474f6dd83ac444c9459c91f7f0f2859e839f41d642691b104e0af056aa", size = 9541422, upload-time = "2025-08-30T00:13:33.611Z" }, + { url = "https://files.pythonhosted.org/packages/83/7d/40255e89b3ef11c7871020563b2dd85f6cb1b4eff17c0f62b6eb14c8fa80/matplotlib-3.10.6-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:091cea22e059b89f6d7d1a18e2c33a7376c26eee60e401d92a4d6726c4e12706", size = 9594068, upload-time = "2025-08-30T00:13:35.833Z" }, + { url = "https://files.pythonhosted.org/packages/f0/a9/0213748d69dc842537a113493e1c27daf9f96bd7cc316f933dc8ec4de985/matplotlib-3.10.6-cp313-cp313t-win_amd64.whl", hash = "sha256:491e25e02a23d7207629d942c666924a6b61e007a48177fdd231a0097b7f507e", size = 8200100, upload-time = "2025-08-30T00:13:37.668Z" }, + { url = "https://files.pythonhosted.org/packages/be/15/79f9988066ce40b8a6f1759a934ea0cde8dc4adc2262255ee1bc98de6ad0/matplotlib-3.10.6-cp313-cp313t-win_arm64.whl", hash = "sha256:3d80d60d4e54cda462e2cd9a086d85cd9f20943ead92f575ce86885a43a565d5", size = 8042142, upload-time = "2025-08-30T00:13:39.426Z" }, + { url = 
"https://files.pythonhosted.org/packages/7c/58/e7b6d292beae6fb4283ca6fb7fa47d7c944a68062d6238c07b497dd35493/matplotlib-3.10.6-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:70aaf890ce1d0efd482df969b28a5b30ea0b891224bb315810a3940f67182899", size = 8273802, upload-time = "2025-08-30T00:13:41.006Z" }, + { url = "https://files.pythonhosted.org/packages/9f/f6/7882d05aba16a8cdd594fb9a03a9d3cca751dbb6816adf7b102945522ee9/matplotlib-3.10.6-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1565aae810ab79cb72e402b22facfa6501365e73ebab70a0fdfb98488d2c3c0c", size = 8131365, upload-time = "2025-08-30T00:13:42.664Z" }, + { url = "https://files.pythonhosted.org/packages/94/bf/ff32f6ed76e78514e98775a53715eca4804b12bdcf35902cdd1cf759d324/matplotlib-3.10.6-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f3b23315a01981689aa4e1a179dbf6ef9fbd17143c3eea77548c2ecfb0499438", size = 9533961, upload-time = "2025-08-30T00:13:44.372Z" }, + { url = "https://files.pythonhosted.org/packages/fe/c3/6bf88c2fc2da7708a2ff8d2eeb5d68943130f50e636d5d3dcf9d4252e971/matplotlib-3.10.6-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:30fdd37edf41a4e6785f9b37969de57aea770696cb637d9946eb37470c94a453", size = 9804262, upload-time = "2025-08-30T00:13:46.614Z" }, + { url = "https://files.pythonhosted.org/packages/0f/7a/e05e6d9446d2d577b459427ad060cd2de5742d0e435db3191fea4fcc7e8b/matplotlib-3.10.6-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:bc31e693da1c08012c764b053e702c1855378e04102238e6a5ee6a7117c53a47", size = 9595508, upload-time = "2025-08-30T00:13:48.731Z" }, + { url = "https://files.pythonhosted.org/packages/39/fb/af09c463ced80b801629fd73b96f726c9f6124c3603aa2e480a061d6705b/matplotlib-3.10.6-cp314-cp314-win_amd64.whl", hash = "sha256:05be9bdaa8b242bc6ff96330d18c52f1fc59c6fb3a4dd411d953d67e7e1baf98", size = 8252742, upload-time = "2025-08-30T00:13:50.539Z" }, + { url = "https://files.pythonhosted.org/packages/b1/f9/b682f6db9396d9ab8f050c0a3bfbb5f14fb0f6518f08507c04cc02f8f229/matplotlib-3.10.6-cp314-cp314-win_arm64.whl", hash = "sha256:f56a0d1ab05d34c628592435781d185cd99630bdfd76822cd686fb5a0aecd43a", size = 8124237, upload-time = "2025-08-30T00:13:54.3Z" }, + { url = "https://files.pythonhosted.org/packages/b5/d2/b69b4a0923a3c05ab90527c60fdec899ee21ca23ede7f0fb818e6620d6f2/matplotlib-3.10.6-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:94f0b4cacb23763b64b5dace50d5b7bfe98710fed5f0cef5c08135a03399d98b", size = 8316956, upload-time = "2025-08-30T00:13:55.932Z" }, + { url = "https://files.pythonhosted.org/packages/28/e9/dc427b6f16457ffaeecb2fc4abf91e5adb8827861b869c7a7a6d1836fa73/matplotlib-3.10.6-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:cc332891306b9fb39462673d8225d1b824c89783fee82840a709f96714f17a5c", size = 8178260, upload-time = "2025-08-30T00:14:00.942Z" }, + { url = "https://files.pythonhosted.org/packages/c4/89/1fbd5ad611802c34d1c7ad04607e64a1350b7fb9c567c4ec2c19e066ed35/matplotlib-3.10.6-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee1d607b3fb1590deb04b69f02ea1d53ed0b0bf75b2b1a5745f269afcbd3cdd3", size = 9541422, upload-time = "2025-08-30T00:14:02.664Z" }, + { url = "https://files.pythonhosted.org/packages/b0/3b/65fec8716025b22c1d72d5a82ea079934c76a547696eaa55be6866bc89b1/matplotlib-3.10.6-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:376a624a218116461696b27b2bbf7a8945053e6d799f6502fc03226d077807bf", size = 9803678, upload-time = "2025-08-30T00:14:04.741Z" }, + { url = 
"https://files.pythonhosted.org/packages/c7/b0/40fb2b3a1ab9381bb39a952e8390357c8be3bdadcf6d5055d9c31e1b35ae/matplotlib-3.10.6-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:83847b47f6524c34b4f2d3ce726bb0541c48c8e7692729865c3df75bfa0f495a", size = 9594077, upload-time = "2025-08-30T00:14:07.012Z" }, + { url = "https://files.pythonhosted.org/packages/76/34/c4b71b69edf5b06e635eee1ed10bfc73cf8df058b66e63e30e6a55e231d5/matplotlib-3.10.6-cp314-cp314t-win_amd64.whl", hash = "sha256:c7e0518e0d223683532a07f4b512e2e0729b62674f1b3a1a69869f98e6b1c7e3", size = 8342822, upload-time = "2025-08-30T00:14:09.041Z" }, + { url = "https://files.pythonhosted.org/packages/e8/62/aeabeef1a842b6226a30d49dd13e8a7a1e81e9ec98212c0b5169f0a12d83/matplotlib-3.10.6-cp314-cp314t-win_arm64.whl", hash = "sha256:4dd83e029f5b4801eeb87c64efd80e732452781c16a9cf7415b7b63ec8f374d7", size = 8172588, upload-time = "2025-08-30T00:14:11.166Z" }, + { url = "https://files.pythonhosted.org/packages/12/bb/02c35a51484aae5f49bd29f091286e7af5f3f677a9736c58a92b3c78baeb/matplotlib-3.10.6-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:f2d684c3204fa62421bbf770ddfebc6b50130f9cad65531eeba19236d73bb488", size = 8252296, upload-time = "2025-08-30T00:14:19.49Z" }, + { url = "https://files.pythonhosted.org/packages/7d/85/41701e3092005aee9a2445f5ee3904d9dbd4a7df7a45905ffef29b7ef098/matplotlib-3.10.6-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:6f4a69196e663a41d12a728fab8751177215357906436804217d6d9cf0d4d6cf", size = 8116749, upload-time = "2025-08-30T00:14:21.344Z" }, + { url = "https://files.pythonhosted.org/packages/16/53/8d8fa0ea32a8c8239e04d022f6c059ee5e1b77517769feccd50f1df43d6d/matplotlib-3.10.6-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:4d6ca6ef03dfd269f4ead566ec6f3fb9becf8dab146fb999022ed85ee9f6b3eb", size = 8693933, upload-time = "2025-08-30T00:14:22.942Z" }, ] [[package]] @@ -3103,147 +3306,150 @@ wheels = [ [[package]] name = "msgpack" -version = "1.1.2" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/4d/f2/bfb55a6236ed8725a96b0aa3acbd0ec17588e6a2c3b62a93eb513ed8783f/msgpack-1.1.2.tar.gz", hash = "sha256:3b60763c1373dd60f398488069bcdc703cd08a711477b5d480eecc9f9626f47e", size = 173581, upload-time = "2025-10-08T09:15:56.596Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/2c/97/560d11202bcd537abca693fd85d81cebe2107ba17301de42b01ac1677b69/msgpack-1.1.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:2e86a607e558d22985d856948c12a3fa7b42efad264dca8a3ebbcfa2735d786c", size = 82271, upload-time = "2025-10-08T09:14:49.967Z" }, - { url = "https://files.pythonhosted.org/packages/83/04/28a41024ccbd67467380b6fb440ae916c1e4f25e2cd4c63abe6835ac566e/msgpack-1.1.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:283ae72fc89da59aa004ba147e8fc2f766647b1251500182fac0350d8af299c0", size = 84914, upload-time = "2025-10-08T09:14:50.958Z" }, - { url = "https://files.pythonhosted.org/packages/71/46/b817349db6886d79e57a966346cf0902a426375aadc1e8e7a86a75e22f19/msgpack-1.1.2-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:61c8aa3bd513d87c72ed0b37b53dd5c5a0f58f2ff9f26e1555d3bd7948fb7296", size = 416962, upload-time = "2025-10-08T09:14:51.997Z" }, - { url = "https://files.pythonhosted.org/packages/da/e0/6cc2e852837cd6086fe7d8406af4294e66827a60a4cf60b86575a4a65ca8/msgpack-1.1.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", 
hash = "sha256:454e29e186285d2ebe65be34629fa0e8605202c60fbc7c4c650ccd41870896ef", size = 426183, upload-time = "2025-10-08T09:14:53.477Z" }, - { url = "https://files.pythonhosted.org/packages/25/98/6a19f030b3d2ea906696cedd1eb251708e50a5891d0978b012cb6107234c/msgpack-1.1.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:7bc8813f88417599564fafa59fd6f95be417179f76b40325b500b3c98409757c", size = 411454, upload-time = "2025-10-08T09:14:54.648Z" }, - { url = "https://files.pythonhosted.org/packages/b7/cd/9098fcb6adb32187a70b7ecaabf6339da50553351558f37600e53a4a2a23/msgpack-1.1.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:bafca952dc13907bdfdedfc6a5f579bf4f292bdd506fadb38389afa3ac5b208e", size = 422341, upload-time = "2025-10-08T09:14:56.328Z" }, - { url = "https://files.pythonhosted.org/packages/e6/ae/270cecbcf36c1dc85ec086b33a51a4d7d08fc4f404bdbc15b582255d05ff/msgpack-1.1.2-cp311-cp311-win32.whl", hash = "sha256:602b6740e95ffc55bfb078172d279de3773d7b7db1f703b2f1323566b878b90e", size = 64747, upload-time = "2025-10-08T09:14:57.882Z" }, - { url = "https://files.pythonhosted.org/packages/2a/79/309d0e637f6f37e83c711f547308b91af02b72d2326ddd860b966080ef29/msgpack-1.1.2-cp311-cp311-win_amd64.whl", hash = "sha256:d198d275222dc54244bf3327eb8cbe00307d220241d9cec4d306d49a44e85f68", size = 71633, upload-time = "2025-10-08T09:14:59.177Z" }, - { url = "https://files.pythonhosted.org/packages/73/4d/7c4e2b3d9b1106cd0aa6cb56cc57c6267f59fa8bfab7d91df5adc802c847/msgpack-1.1.2-cp311-cp311-win_arm64.whl", hash = "sha256:86f8136dfa5c116365a8a651a7d7484b65b13339731dd6faebb9a0242151c406", size = 64755, upload-time = "2025-10-08T09:15:00.48Z" }, - { url = "https://files.pythonhosted.org/packages/ad/bd/8b0d01c756203fbab65d265859749860682ccd2a59594609aeec3a144efa/msgpack-1.1.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:70a0dff9d1f8da25179ffcf880e10cf1aad55fdb63cd59c9a49a1b82290062aa", size = 81939, upload-time = "2025-10-08T09:15:01.472Z" }, - { url = "https://files.pythonhosted.org/packages/34/68/ba4f155f793a74c1483d4bdef136e1023f7bcba557f0db4ef3db3c665cf1/msgpack-1.1.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:446abdd8b94b55c800ac34b102dffd2f6aa0ce643c55dfc017ad89347db3dbdb", size = 85064, upload-time = "2025-10-08T09:15:03.764Z" }, - { url = "https://files.pythonhosted.org/packages/f2/60/a064b0345fc36c4c3d2c743c82d9100c40388d77f0b48b2f04d6041dbec1/msgpack-1.1.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c63eea553c69ab05b6747901b97d620bb2a690633c77f23feb0c6a947a8a7b8f", size = 417131, upload-time = "2025-10-08T09:15:05.136Z" }, - { url = "https://files.pythonhosted.org/packages/65/92/a5100f7185a800a5d29f8d14041f61475b9de465ffcc0f3b9fba606e4505/msgpack-1.1.2-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:372839311ccf6bdaf39b00b61288e0557916c3729529b301c52c2d88842add42", size = 427556, upload-time = "2025-10-08T09:15:06.837Z" }, - { url = "https://files.pythonhosted.org/packages/f5/87/ffe21d1bf7d9991354ad93949286f643b2bb6ddbeab66373922b44c3b8cc/msgpack-1.1.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2929af52106ca73fcb28576218476ffbb531a036c2adbcf54a3664de124303e9", size = 404920, upload-time = "2025-10-08T09:15:08.179Z" }, - { url = "https://files.pythonhosted.org/packages/ff/41/8543ed2b8604f7c0d89ce066f42007faac1eaa7d79a81555f206a5cdb889/msgpack-1.1.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = 
"sha256:be52a8fc79e45b0364210eef5234a7cf8d330836d0a64dfbb878efa903d84620", size = 415013, upload-time = "2025-10-08T09:15:09.83Z" }, - { url = "https://files.pythonhosted.org/packages/41/0d/2ddfaa8b7e1cee6c490d46cb0a39742b19e2481600a7a0e96537e9c22f43/msgpack-1.1.2-cp312-cp312-win32.whl", hash = "sha256:1fff3d825d7859ac888b0fbda39a42d59193543920eda9d9bea44d958a878029", size = 65096, upload-time = "2025-10-08T09:15:11.11Z" }, - { url = "https://files.pythonhosted.org/packages/8c/ec/d431eb7941fb55a31dd6ca3404d41fbb52d99172df2e7707754488390910/msgpack-1.1.2-cp312-cp312-win_amd64.whl", hash = "sha256:1de460f0403172cff81169a30b9a92b260cb809c4cb7e2fc79ae8d0510c78b6b", size = 72708, upload-time = "2025-10-08T09:15:12.554Z" }, - { url = "https://files.pythonhosted.org/packages/c5/31/5b1a1f70eb0e87d1678e9624908f86317787b536060641d6798e3cf70ace/msgpack-1.1.2-cp312-cp312-win_arm64.whl", hash = "sha256:be5980f3ee0e6bd44f3a9e9dea01054f175b50c3e6cdb692bc9424c0bbb8bf69", size = 64119, upload-time = "2025-10-08T09:15:13.589Z" }, - { url = "https://files.pythonhosted.org/packages/6b/31/b46518ecc604d7edf3a4f94cb3bf021fc62aa301f0cb849936968164ef23/msgpack-1.1.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:4efd7b5979ccb539c221a4c4e16aac1a533efc97f3b759bb5a5ac9f6d10383bf", size = 81212, upload-time = "2025-10-08T09:15:14.552Z" }, - { url = "https://files.pythonhosted.org/packages/92/dc/c385f38f2c2433333345a82926c6bfa5ecfff3ef787201614317b58dd8be/msgpack-1.1.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:42eefe2c3e2af97ed470eec850facbe1b5ad1d6eacdbadc42ec98e7dcf68b4b7", size = 84315, upload-time = "2025-10-08T09:15:15.543Z" }, - { url = "https://files.pythonhosted.org/packages/d3/68/93180dce57f684a61a88a45ed13047558ded2be46f03acb8dec6d7c513af/msgpack-1.1.2-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1fdf7d83102bf09e7ce3357de96c59b627395352a4024f6e2458501f158bf999", size = 412721, upload-time = "2025-10-08T09:15:16.567Z" }, - { url = "https://files.pythonhosted.org/packages/5d/ba/459f18c16f2b3fc1a1ca871f72f07d70c07bf768ad0a507a698b8052ac58/msgpack-1.1.2-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fac4be746328f90caa3cd4bc67e6fe36ca2bf61d5c6eb6d895b6527e3f05071e", size = 424657, upload-time = "2025-10-08T09:15:17.825Z" }, - { url = "https://files.pythonhosted.org/packages/38/f8/4398c46863b093252fe67368b44edc6c13b17f4e6b0e4929dbf0bdb13f23/msgpack-1.1.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:fffee09044073e69f2bad787071aeec727183e7580443dfeb8556cbf1978d162", size = 402668, upload-time = "2025-10-08T09:15:19.003Z" }, - { url = "https://files.pythonhosted.org/packages/28/ce/698c1eff75626e4124b4d78e21cca0b4cc90043afb80a507626ea354ab52/msgpack-1.1.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:5928604de9b032bc17f5099496417f113c45bc6bc21b5c6920caf34b3c428794", size = 419040, upload-time = "2025-10-08T09:15:20.183Z" }, - { url = "https://files.pythonhosted.org/packages/67/32/f3cd1667028424fa7001d82e10ee35386eea1408b93d399b09fb0aa7875f/msgpack-1.1.2-cp313-cp313-win32.whl", hash = "sha256:a7787d353595c7c7e145e2331abf8b7ff1e6673a6b974ded96e6d4ec09f00c8c", size = 65037, upload-time = "2025-10-08T09:15:21.416Z" }, - { url = "https://files.pythonhosted.org/packages/74/07/1ed8277f8653c40ebc65985180b007879f6a836c525b3885dcc6448ae6cb/msgpack-1.1.2-cp313-cp313-win_amd64.whl", hash = "sha256:a465f0dceb8e13a487e54c07d04ae3ba131c7c5b95e2612596eafde1dccf64a9", size = 72631, upload-time = 
"2025-10-08T09:15:22.431Z" }, - { url = "https://files.pythonhosted.org/packages/e5/db/0314e4e2db56ebcf450f277904ffd84a7988b9e5da8d0d61ab2d057df2b6/msgpack-1.1.2-cp313-cp313-win_arm64.whl", hash = "sha256:e69b39f8c0aa5ec24b57737ebee40be647035158f14ed4b40e6f150077e21a84", size = 64118, upload-time = "2025-10-08T09:15:23.402Z" }, -] - -[[package]] -name = "msgspec-m" -version = "0.19.2" +version = "1.1.1" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/91/04/62cbeddcfbe1b9c268fae634d23ab93fb96267a41e88c3eeb9bc0b770f6a/msgspec_m-0.19.2.tar.gz", hash = "sha256:32b57315bdd4ece2d2311c013ea56272a87655e45af0724b2921590aad4b14c1", size = 217393, upload-time = "2025-10-15T15:45:27.366Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/68/53/fef2c2d52e1b6b45052c34c3d6a16459cdb78ff807ef54ed317c29cd9fdb/msgspec_m-0.19.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:db8cb1186dc798928ba4e01dc168887a54dc40c995a1e8c033c3becc2430cfe8", size = 208644, upload-time = "2025-10-15T15:44:42.59Z" }, - { url = "https://files.pythonhosted.org/packages/50/8d/925317b6e372511e72928b921af88cd8aac90c75a79eb11663e24919354e/msgspec_m-0.19.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:756d989e4ad493996ca4659d6e49a93aad9070b51a76605125da3b03c22e02e0", size = 214292, upload-time = "2025-10-15T15:44:44.176Z" }, - { url = "https://files.pythonhosted.org/packages/a2/ad/21c683c5c1344ec188f70bc8ca889c1f837123326b31a6ecac8fc396f7ca/msgspec_m-0.19.2-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:72c96abcb91937f32af166d1e2773a1ccdd41167729265fef3b7463abe6910c9", size = 203947, upload-time = "2025-10-15T15:44:45.347Z" }, - { url = "https://files.pythonhosted.org/packages/af/28/d0bb9972808d0c1d274b82b756d4ac1ada41560d585c2c0e7635c58fa6d3/msgspec_m-0.19.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:1cd453b32b14a5a8d5be6fffae952caa4c00a18e7be448817b89f428d82f3ccf", size = 211269, upload-time = "2025-10-15T15:44:46.521Z" }, - { url = "https://files.pythonhosted.org/packages/55/69/deaaadd0109f063b200dbe77bfff34255c963caa77bd45adfe79fe7e1608/msgspec_m-0.19.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1f7c01e06d17ba06f82ca623fc07ad516aff9d57067d0cf16a12f15682a96d04", size = 206173, upload-time = "2025-10-15T15:44:48.447Z" }, - { url = "https://files.pythonhosted.org/packages/12/01/f94d5b8c20487e4e7db80f01c8a079aa5246b9829ad61e66bfd6cb1b8059/msgspec_m-0.19.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ca45fbc0e4b95bfb58877a7784ffb4e6de618ce061fa1a25b49da204f55e2017", size = 213423, upload-time = "2025-10-15T15:44:49.547Z" }, - { url = "https://files.pythonhosted.org/packages/f5/26/1d3c6c65e326987f4189ecd93a7d47c1e5dab76ed8d9397fba21403d55b1/msgspec_m-0.19.2-cp311-cp311-win_amd64.whl", hash = "sha256:2b5911b91aa8f1ac76b67842afbb32aec488b375c8302aaa0da28b10c9299579", size = 186419, upload-time = "2025-10-15T15:44:50.74Z" }, - { url = "https://files.pythonhosted.org/packages/4f/33/9b22ff91a46bdc725a06db9668bcad6c05942d91a8d1d809625c5ea680c6/msgspec_m-0.19.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:876a15baafd0ee067bcf7feed43e8b409b82e14d04c0a8c12376131956cdcfe9", size = 218554, upload-time = "2025-10-15T15:44:51.954Z" }, - { url = "https://files.pythonhosted.org/packages/4c/b7/a93d524907162cb6150179eb47fac885bbbad025c82151b69ad1d62cda4d/msgspec_m-0.19.2-cp312-cp312-macosx_11_0_arm64.whl", hash = 
"sha256:360aec3102a6122169aada60ee9ccd801fbb5949edeb88abb7f69df2901011b6", size = 225050, upload-time = "2025-10-15T15:44:53.15Z" }, - { url = "https://files.pythonhosted.org/packages/40/e1/7a3a8e3d38702d0125bd61f5cb5d4325c23a60625a274b7f58ff57d55120/msgspec_m-0.19.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b38ebd0350ebfe4d8f4b5c8065cc5d05cdcece9b68712bac27797b232ed0f60a", size = 213481, upload-time = "2025-10-15T15:44:54.302Z" }, - { url = "https://files.pythonhosted.org/packages/87/51/0fa83662b036bdac504192e8067798b6d0ef912eec2af897361e60357808/msgspec_m-0.19.2-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3c7d2aa7ce71733bb9aaa8e6987576965cca0c9c36f09c271639a8b873034832", size = 220674, upload-time = "2025-10-15T15:44:55.954Z" }, - { url = "https://files.pythonhosted.org/packages/69/9d/70766c99b2853e8de14a4893dfae91eba3e5227cd77cf0707b921c8e1970/msgspec_m-0.19.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a5ff928816b5a331d11c31fc19931531d13a541d26677b5a5b3861363affbac6", size = 216218, upload-time = "2025-10-15T15:44:57.101Z" }, - { url = "https://files.pythonhosted.org/packages/73/30/bcabeddab61596d9a770db7cee658053b26bcbb06bd30ba55c7ee38fb4db/msgspec_m-0.19.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:c42b74fd96beb3688f3fe2973af3494ca652cf43bd2b33874c72b44eb9e96902", size = 224198, upload-time = "2025-10-15T15:44:58.254Z" }, - { url = "https://files.pythonhosted.org/packages/98/ca/e411a6f2c86888284e18b0a8d3f09605d825d4777048a9e6eca19ec38510/msgspec_m-0.19.2-cp312-cp312-win_amd64.whl", hash = "sha256:d9212b7706e277e83065cf4bbacd86b37f66628cd5039802f4b98d1cc5bc4442", size = 187767, upload-time = "2025-10-15T15:44:59.446Z" }, - { url = "https://files.pythonhosted.org/packages/23/55/ae1ff4838e85d15d7c93542f9f682e5548d5ef00382fdef4138b60e700d0/msgspec_m-0.19.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:1ae47819b3bf38949230fd9465432679f03656b377e68beb9e5268bf28e9fa5c", size = 218707, upload-time = "2025-10-15T15:45:00.616Z" }, - { url = "https://files.pythonhosted.org/packages/ae/e8/bc20bc34115f41b31a424f4b58bdf80b9a17c8581ec63cfce3b14f6a8fbd/msgspec_m-0.19.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:aa4be2fcce32a65b8ab84dc8b63bf8e9684378f9456fcb33f7ad9f57aeb5421e", size = 225113, upload-time = "2025-10-15T15:45:01.881Z" }, - { url = "https://files.pythonhosted.org/packages/b0/3d/6708e1f790087c683de97d71b112b330f5375a71aab0afbbf397e854ee4b/msgspec_m-0.19.2-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:04eaf31a35bf86ca4e6b8f87154c5e468d5026c0488b322e2cf04b310a412bb2", size = 213588, upload-time = "2025-10-15T15:45:03.106Z" }, - { url = "https://files.pythonhosted.org/packages/b4/27/116741ab2af0215d6f2d767724e9478aab7b3deef487ba928992ce332fb9/msgspec_m-0.19.2-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3983a57a1648f5c74585396bbca8470ad8b32f8bfe060ea0118a7974a36eb5a5", size = 220719, upload-time = "2025-10-15T15:45:04.308Z" }, - { url = "https://files.pythonhosted.org/packages/36/41/43f31ae96988f20b9ffb40af10fdaced194118e636cf303a1c73c7ecf9b2/msgspec_m-0.19.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:0b94c36600ea82103ca55f7f7d4f43e6f08c6721a9bb67f60a57d8b1d015ab24", size = 216333, upload-time = "2025-10-15T15:45:05.82Z" }, - { url = 
"https://files.pythonhosted.org/packages/ac/62/ee8eefb3f5fdfceb0ffc1deb40ce628b7c245d1d8dcd7a6361b743b28f38/msgspec_m-0.19.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a9a7568d84ae7b352fc3dbe82f77b05f2d61c2441a3cb383cafd4c8250493ad8", size = 224328, upload-time = "2025-10-15T15:45:07.004Z" }, - { url = "https://files.pythonhosted.org/packages/5e/a2/f572e098a4fd70eaa7f1e7af35feb58e0781dcb834b9101228c653b63921/msgspec_m-0.19.2-cp313-cp313-win_amd64.whl", hash = "sha256:91ece8d5d8b4c21eb5dfc95615670faa632fbddfca64005619ce81aeb44f9976", size = 187686, upload-time = "2025-10-15T15:45:08.307Z" }, +sdist = { url = "https://files.pythonhosted.org/packages/45/b1/ea4f68038a18c77c9467400d166d74c4ffa536f34761f7983a104357e614/msgpack-1.1.1.tar.gz", hash = "sha256:77b79ce34a2bdab2594f490c8e80dd62a02d650b91a75159a63ec413b8d104cd", size = 173555, upload-time = "2025-06-13T06:52:51.324Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/7f/83/97f24bf9848af23fe2ba04380388216defc49a8af6da0c28cc636d722502/msgpack-1.1.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:71ef05c1726884e44f8b1d1773604ab5d4d17729d8491403a705e649116c9558", size = 82728, upload-time = "2025-06-13T06:51:50.68Z" }, + { url = "https://files.pythonhosted.org/packages/aa/7f/2eaa388267a78401f6e182662b08a588ef4f3de6f0eab1ec09736a7aaa2b/msgpack-1.1.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:36043272c6aede309d29d56851f8841ba907a1a3d04435e43e8a19928e243c1d", size = 79279, upload-time = "2025-06-13T06:51:51.72Z" }, + { url = "https://files.pythonhosted.org/packages/f8/46/31eb60f4452c96161e4dfd26dbca562b4ec68c72e4ad07d9566d7ea35e8a/msgpack-1.1.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a32747b1b39c3ac27d0670122b57e6e57f28eefb725e0b625618d1b59bf9d1e0", size = 423859, upload-time = "2025-06-13T06:51:52.749Z" }, + { url = "https://files.pythonhosted.org/packages/45/16/a20fa8c32825cc7ae8457fab45670c7a8996d7746ce80ce41cc51e3b2bd7/msgpack-1.1.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8a8b10fdb84a43e50d38057b06901ec9da52baac6983d3f709d8507f3889d43f", size = 429975, upload-time = "2025-06-13T06:51:53.97Z" }, + { url = "https://files.pythonhosted.org/packages/86/ea/6c958e07692367feeb1a1594d35e22b62f7f476f3c568b002a5ea09d443d/msgpack-1.1.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ba0c325c3f485dc54ec298d8b024e134acf07c10d494ffa24373bea729acf704", size = 413528, upload-time = "2025-06-13T06:51:55.507Z" }, + { url = "https://files.pythonhosted.org/packages/75/05/ac84063c5dae79722bda9f68b878dc31fc3059adb8633c79f1e82c2cd946/msgpack-1.1.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:88daaf7d146e48ec71212ce21109b66e06a98e5e44dca47d853cbfe171d6c8d2", size = 413338, upload-time = "2025-06-13T06:51:57.023Z" }, + { url = "https://files.pythonhosted.org/packages/69/e8/fe86b082c781d3e1c09ca0f4dacd457ede60a13119b6ce939efe2ea77b76/msgpack-1.1.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:d8b55ea20dc59b181d3f47103f113e6f28a5e1c89fd5b67b9140edb442ab67f2", size = 422658, upload-time = "2025-06-13T06:51:58.419Z" }, + { url = "https://files.pythonhosted.org/packages/3b/2b/bafc9924df52d8f3bb7c00d24e57be477f4d0f967c0a31ef5e2225e035c7/msgpack-1.1.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:4a28e8072ae9779f20427af07f53bbb8b4aa81151054e882aee333b158da8752", size = 427124, upload-time = "2025-06-13T06:51:59.969Z" }, + { url = 
"https://files.pythonhosted.org/packages/a2/3b/1f717e17e53e0ed0b68fa59e9188f3f610c79d7151f0e52ff3cd8eb6b2dc/msgpack-1.1.1-cp311-cp311-win32.whl", hash = "sha256:7da8831f9a0fdb526621ba09a281fadc58ea12701bc709e7b8cbc362feabc295", size = 65016, upload-time = "2025-06-13T06:52:01.294Z" }, + { url = "https://files.pythonhosted.org/packages/48/45/9d1780768d3b249accecc5a38c725eb1e203d44a191f7b7ff1941f7df60c/msgpack-1.1.1-cp311-cp311-win_amd64.whl", hash = "sha256:5fd1b58e1431008a57247d6e7cc4faa41c3607e8e7d4aaf81f7c29ea013cb458", size = 72267, upload-time = "2025-06-13T06:52:02.568Z" }, + { url = "https://files.pythonhosted.org/packages/e3/26/389b9c593eda2b8551b2e7126ad3a06af6f9b44274eb3a4f054d48ff7e47/msgpack-1.1.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:ae497b11f4c21558d95de9f64fff7053544f4d1a17731c866143ed6bb4591238", size = 82359, upload-time = "2025-06-13T06:52:03.909Z" }, + { url = "https://files.pythonhosted.org/packages/ab/65/7d1de38c8a22cf8b1551469159d4b6cf49be2126adc2482de50976084d78/msgpack-1.1.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:33be9ab121df9b6b461ff91baac6f2731f83d9b27ed948c5b9d1978ae28bf157", size = 79172, upload-time = "2025-06-13T06:52:05.246Z" }, + { url = "https://files.pythonhosted.org/packages/0f/bd/cacf208b64d9577a62c74b677e1ada005caa9b69a05a599889d6fc2ab20a/msgpack-1.1.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6f64ae8fe7ffba251fecb8408540c34ee9df1c26674c50c4544d72dbf792e5ce", size = 425013, upload-time = "2025-06-13T06:52:06.341Z" }, + { url = "https://files.pythonhosted.org/packages/4d/ec/fd869e2567cc9c01278a736cfd1697941ba0d4b81a43e0aa2e8d71dab208/msgpack-1.1.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a494554874691720ba5891c9b0b39474ba43ffb1aaf32a5dac874effb1619e1a", size = 426905, upload-time = "2025-06-13T06:52:07.501Z" }, + { url = "https://files.pythonhosted.org/packages/55/2a/35860f33229075bce803a5593d046d8b489d7ba2fc85701e714fc1aaf898/msgpack-1.1.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb643284ab0ed26f6957d969fe0dd8bb17beb567beb8998140b5e38a90974f6c", size = 407336, upload-time = "2025-06-13T06:52:09.047Z" }, + { url = "https://files.pythonhosted.org/packages/8c/16/69ed8f3ada150bf92745fb4921bd621fd2cdf5a42e25eb50bcc57a5328f0/msgpack-1.1.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d275a9e3c81b1093c060c3837e580c37f47c51eca031f7b5fb76f7b8470f5f9b", size = 409485, upload-time = "2025-06-13T06:52:10.382Z" }, + { url = "https://files.pythonhosted.org/packages/c6/b6/0c398039e4c6d0b2e37c61d7e0e9d13439f91f780686deb8ee64ecf1ae71/msgpack-1.1.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:4fd6b577e4541676e0cc9ddc1709d25014d3ad9a66caa19962c4f5de30fc09ef", size = 412182, upload-time = "2025-06-13T06:52:11.644Z" }, + { url = "https://files.pythonhosted.org/packages/b8/d0/0cf4a6ecb9bc960d624c93effaeaae75cbf00b3bc4a54f35c8507273cda1/msgpack-1.1.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:bb29aaa613c0a1c40d1af111abf025f1732cab333f96f285d6a93b934738a68a", size = 419883, upload-time = "2025-06-13T06:52:12.806Z" }, + { url = "https://files.pythonhosted.org/packages/62/83/9697c211720fa71a2dfb632cad6196a8af3abea56eece220fde4674dc44b/msgpack-1.1.1-cp312-cp312-win32.whl", hash = "sha256:870b9a626280c86cff9c576ec0d9cbcc54a1e5ebda9cd26dab12baf41fee218c", size = 65406, upload-time = "2025-06-13T06:52:14.271Z" }, + { url = 
"https://files.pythonhosted.org/packages/c0/23/0abb886e80eab08f5e8c485d6f13924028602829f63b8f5fa25a06636628/msgpack-1.1.1-cp312-cp312-win_amd64.whl", hash = "sha256:5692095123007180dca3e788bb4c399cc26626da51629a31d40207cb262e67f4", size = 72558, upload-time = "2025-06-13T06:52:15.252Z" }, + { url = "https://files.pythonhosted.org/packages/a1/38/561f01cf3577430b59b340b51329803d3a5bf6a45864a55f4ef308ac11e3/msgpack-1.1.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:3765afa6bd4832fc11c3749be4ba4b69a0e8d7b728f78e68120a157a4c5d41f0", size = 81677, upload-time = "2025-06-13T06:52:16.64Z" }, + { url = "https://files.pythonhosted.org/packages/09/48/54a89579ea36b6ae0ee001cba8c61f776451fad3c9306cd80f5b5c55be87/msgpack-1.1.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:8ddb2bcfd1a8b9e431c8d6f4f7db0773084e107730ecf3472f1dfe9ad583f3d9", size = 78603, upload-time = "2025-06-13T06:52:17.843Z" }, + { url = "https://files.pythonhosted.org/packages/a0/60/daba2699b308e95ae792cdc2ef092a38eb5ee422f9d2fbd4101526d8a210/msgpack-1.1.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:196a736f0526a03653d829d7d4c5500a97eea3648aebfd4b6743875f28aa2af8", size = 420504, upload-time = "2025-06-13T06:52:18.982Z" }, + { url = "https://files.pythonhosted.org/packages/20/22/2ebae7ae43cd8f2debc35c631172ddf14e2a87ffcc04cf43ff9df9fff0d3/msgpack-1.1.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9d592d06e3cc2f537ceeeb23d38799c6ad83255289bb84c2e5792e5a8dea268a", size = 423749, upload-time = "2025-06-13T06:52:20.211Z" }, + { url = "https://files.pythonhosted.org/packages/40/1b/54c08dd5452427e1179a40b4b607e37e2664bca1c790c60c442c8e972e47/msgpack-1.1.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4df2311b0ce24f06ba253fda361f938dfecd7b961576f9be3f3fbd60e87130ac", size = 404458, upload-time = "2025-06-13T06:52:21.429Z" }, + { url = "https://files.pythonhosted.org/packages/2e/60/6bb17e9ffb080616a51f09928fdd5cac1353c9becc6c4a8abd4e57269a16/msgpack-1.1.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:e4141c5a32b5e37905b5940aacbc59739f036930367d7acce7a64e4dec1f5e0b", size = 405976, upload-time = "2025-06-13T06:52:22.995Z" }, + { url = "https://files.pythonhosted.org/packages/ee/97/88983e266572e8707c1f4b99c8fd04f9eb97b43f2db40e3172d87d8642db/msgpack-1.1.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:b1ce7f41670c5a69e1389420436f41385b1aa2504c3b0c30620764b15dded2e7", size = 408607, upload-time = "2025-06-13T06:52:24.152Z" }, + { url = "https://files.pythonhosted.org/packages/bc/66/36c78af2efaffcc15a5a61ae0df53a1d025f2680122e2a9eb8442fed3ae4/msgpack-1.1.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:4147151acabb9caed4e474c3344181e91ff7a388b888f1e19ea04f7e73dc7ad5", size = 424172, upload-time = "2025-06-13T06:52:25.704Z" }, + { url = "https://files.pythonhosted.org/packages/8c/87/a75eb622b555708fe0427fab96056d39d4c9892b0c784b3a721088c7ee37/msgpack-1.1.1-cp313-cp313-win32.whl", hash = "sha256:500e85823a27d6d9bba1d057c871b4210c1dd6fb01fbb764e37e4e8847376323", size = 65347, upload-time = "2025-06-13T06:52:26.846Z" }, + { url = "https://files.pythonhosted.org/packages/ca/91/7dc28d5e2a11a5ad804cf2b7f7a5fcb1eb5a4966d66a5d2b41aee6376543/msgpack-1.1.1-cp313-cp313-win_amd64.whl", hash = "sha256:6d489fba546295983abd142812bda76b57e33d0b9f5d5b71c09a583285506f69", size = 72341, upload-time = "2025-06-13T06:52:27.835Z" }, +] + +[[package]] +name = "msgspec" +version = "0.19.0" +source = { registry = 
"https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/cf/9b/95d8ce458462b8b71b8a70fa94563b2498b89933689f3a7b8911edfae3d7/msgspec-0.19.0.tar.gz", hash = "sha256:604037e7cd475345848116e89c553aa9a233259733ab51986ac924ab1b976f8e", size = 216934, upload-time = "2024-12-27T17:40:28.597Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/24/d4/2ec2567ac30dab072cce3e91fb17803c52f0a37aab6b0c24375d2b20a581/msgspec-0.19.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:aa77046904db764b0462036bc63ef71f02b75b8f72e9c9dd4c447d6da1ed8f8e", size = 187939, upload-time = "2024-12-27T17:39:32.347Z" }, + { url = "https://files.pythonhosted.org/packages/2b/c0/18226e4328897f4f19875cb62bb9259fe47e901eade9d9376ab5f251a929/msgspec-0.19.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:047cfa8675eb3bad68722cfe95c60e7afabf84d1bd8938979dd2b92e9e4a9551", size = 182202, upload-time = "2024-12-27T17:39:33.633Z" }, + { url = "https://files.pythonhosted.org/packages/81/25/3a4b24d468203d8af90d1d351b77ea3cffb96b29492855cf83078f16bfe4/msgspec-0.19.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e78f46ff39a427e10b4a61614a2777ad69559cc8d603a7c05681f5a595ea98f7", size = 209029, upload-time = "2024-12-27T17:39:35.023Z" }, + { url = "https://files.pythonhosted.org/packages/85/2e/db7e189b57901955239f7689b5dcd6ae9458637a9c66747326726c650523/msgspec-0.19.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c7adf191e4bd3be0e9231c3b6dc20cf1199ada2af523885efc2ed218eafd011", size = 210682, upload-time = "2024-12-27T17:39:36.384Z" }, + { url = "https://files.pythonhosted.org/packages/03/97/7c8895c9074a97052d7e4a1cc1230b7b6e2ca2486714eb12c3f08bb9d284/msgspec-0.19.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:f04cad4385e20be7c7176bb8ae3dca54a08e9756cfc97bcdb4f18560c3042063", size = 214003, upload-time = "2024-12-27T17:39:39.097Z" }, + { url = "https://files.pythonhosted.org/packages/61/61/e892997bcaa289559b4d5869f066a8021b79f4bf8e955f831b095f47a4cd/msgspec-0.19.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:45c8fb410670b3b7eb884d44a75589377c341ec1392b778311acdbfa55187716", size = 216833, upload-time = "2024-12-27T17:39:41.203Z" }, + { url = "https://files.pythonhosted.org/packages/ce/3d/71b2dffd3a1c743ffe13296ff701ee503feaebc3f04d0e75613b6563c374/msgspec-0.19.0-cp311-cp311-win_amd64.whl", hash = "sha256:70eaef4934b87193a27d802534dc466778ad8d536e296ae2f9334e182ac27b6c", size = 186184, upload-time = "2024-12-27T17:39:43.702Z" }, + { url = "https://files.pythonhosted.org/packages/b2/5f/a70c24f075e3e7af2fae5414c7048b0e11389685b7f717bb55ba282a34a7/msgspec-0.19.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f98bd8962ad549c27d63845b50af3f53ec468b6318400c9f1adfe8b092d7b62f", size = 190485, upload-time = "2024-12-27T17:39:44.974Z" }, + { url = "https://files.pythonhosted.org/packages/89/b0/1b9763938cfae12acf14b682fcf05c92855974d921a5a985ecc197d1c672/msgspec-0.19.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:43bbb237feab761b815ed9df43b266114203f53596f9b6e6f00ebd79d178cdf2", size = 183910, upload-time = "2024-12-27T17:39:46.401Z" }, + { url = "https://files.pythonhosted.org/packages/87/81/0c8c93f0b92c97e326b279795f9c5b956c5a97af28ca0fbb9fd86c83737a/msgspec-0.19.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4cfc033c02c3e0aec52b71710d7f84cb3ca5eb407ab2ad23d75631153fdb1f12", size = 210633, upload-time = "2024-12-27T17:39:49.099Z" }, + { url = 
"https://files.pythonhosted.org/packages/d0/ef/c5422ce8af73928d194a6606f8ae36e93a52fd5e8df5abd366903a5ca8da/msgspec-0.19.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d911c442571605e17658ca2b416fd8579c5050ac9adc5e00c2cb3126c97f73bc", size = 213594, upload-time = "2024-12-27T17:39:51.204Z" }, + { url = "https://files.pythonhosted.org/packages/19/2b/4137bc2ed45660444842d042be2cf5b18aa06efd2cda107cff18253b9653/msgspec-0.19.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:757b501fa57e24896cf40a831442b19a864f56d253679f34f260dcb002524a6c", size = 214053, upload-time = "2024-12-27T17:39:52.866Z" }, + { url = "https://files.pythonhosted.org/packages/9d/e6/8ad51bdc806aac1dc501e8fe43f759f9ed7284043d722b53323ea421c360/msgspec-0.19.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:5f0f65f29b45e2816d8bded36e6b837a4bf5fb60ec4bc3c625fa2c6da4124537", size = 219081, upload-time = "2024-12-27T17:39:55.142Z" }, + { url = "https://files.pythonhosted.org/packages/b1/ef/27dd35a7049c9a4f4211c6cd6a8c9db0a50647546f003a5867827ec45391/msgspec-0.19.0-cp312-cp312-win_amd64.whl", hash = "sha256:067f0de1c33cfa0b6a8206562efdf6be5985b988b53dd244a8e06f993f27c8c0", size = 187467, upload-time = "2024-12-27T17:39:56.531Z" }, + { url = "https://files.pythonhosted.org/packages/3c/cb/2842c312bbe618d8fefc8b9cedce37f773cdc8fa453306546dba2c21fd98/msgspec-0.19.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f12d30dd6266557aaaf0aa0f9580a9a8fbeadfa83699c487713e355ec5f0bd86", size = 190498, upload-time = "2024-12-27T17:40:00.427Z" }, + { url = "https://files.pythonhosted.org/packages/58/95/c40b01b93465e1a5f3b6c7d91b10fb574818163740cc3acbe722d1e0e7e4/msgspec-0.19.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:82b2c42c1b9ebc89e822e7e13bbe9d17ede0c23c187469fdd9505afd5a481314", size = 183950, upload-time = "2024-12-27T17:40:04.219Z" }, + { url = "https://files.pythonhosted.org/packages/e8/f0/5b764e066ce9aba4b70d1db8b087ea66098c7c27d59b9dd8a3532774d48f/msgspec-0.19.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:19746b50be214a54239aab822964f2ac81e38b0055cca94808359d779338c10e", size = 210647, upload-time = "2024-12-27T17:40:05.606Z" }, + { url = "https://files.pythonhosted.org/packages/9d/87/bc14f49bc95c4cb0dd0a8c56028a67c014ee7e6818ccdce74a4862af259b/msgspec-0.19.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:60ef4bdb0ec8e4ad62e5a1f95230c08efb1f64f32e6e8dd2ced685bcc73858b5", size = 213563, upload-time = "2024-12-27T17:40:10.516Z" }, + { url = "https://files.pythonhosted.org/packages/53/2f/2b1c2b056894fbaa975f68f81e3014bb447516a8b010f1bed3fb0e016ed7/msgspec-0.19.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ac7f7c377c122b649f7545810c6cd1b47586e3aa3059126ce3516ac7ccc6a6a9", size = 213996, upload-time = "2024-12-27T17:40:12.244Z" }, + { url = "https://files.pythonhosted.org/packages/aa/5a/4cd408d90d1417e8d2ce6a22b98a6853c1b4d7cb7669153e4424d60087f6/msgspec-0.19.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a5bc1472223a643f5ffb5bf46ccdede7f9795078194f14edd69e3aab7020d327", size = 219087, upload-time = "2024-12-27T17:40:14.881Z" }, + { url = "https://files.pythonhosted.org/packages/23/d8/f15b40611c2d5753d1abb0ca0da0c75348daf1252220e5dda2867bd81062/msgspec-0.19.0-cp313-cp313-win_amd64.whl", hash = "sha256:317050bc0f7739cb30d257ff09152ca309bf5a369854bbf1e57dffc310c1f20f", size = 187432, upload-time = "2024-12-27T17:40:16.256Z" }, ] [[package]] name = "multidict" -version = "6.7.0" -source = { registry = 
"https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/80/1e/5492c365f222f907de1039b91f922b93fa4f764c713ee858d235495d8f50/multidict-6.7.0.tar.gz", hash = "sha256:c6e99d9a65ca282e578dfea819cfa9c0a62b2499d8677392e09feaf305e9e6f5", size = 101834, upload-time = "2025-10-06T14:52:30.657Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/34/9e/5c727587644d67b2ed479041e4b1c58e30afc011e3d45d25bbe35781217c/multidict-6.7.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:4d409aa42a94c0b3fa617708ef5276dfe81012ba6753a0370fcc9d0195d0a1fc", size = 76604, upload-time = "2025-10-06T14:48:54.277Z" }, - { url = "https://files.pythonhosted.org/packages/17/e4/67b5c27bd17c085a5ea8f1ec05b8a3e5cba0ca734bfcad5560fb129e70ca/multidict-6.7.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:14c9e076eede3b54c636f8ce1c9c252b5f057c62131211f0ceeec273810c9721", size = 44715, upload-time = "2025-10-06T14:48:55.445Z" }, - { url = "https://files.pythonhosted.org/packages/4d/e1/866a5d77be6ea435711bef2a4291eed11032679b6b28b56b4776ab06ba3e/multidict-6.7.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4c09703000a9d0fa3c3404b27041e574cc7f4df4c6563873246d0e11812a94b6", size = 44332, upload-time = "2025-10-06T14:48:56.706Z" }, - { url = "https://files.pythonhosted.org/packages/31/61/0c2d50241ada71ff61a79518db85ada85fdabfcf395d5968dae1cbda04e5/multidict-6.7.0-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:a265acbb7bb33a3a2d626afbe756371dce0279e7b17f4f4eda406459c2b5ff1c", size = 245212, upload-time = "2025-10-06T14:48:58.042Z" }, - { url = "https://files.pythonhosted.org/packages/ac/e0/919666a4e4b57fff1b57f279be1c9316e6cdc5de8a8b525d76f6598fefc7/multidict-6.7.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:51cb455de290ae462593e5b1cb1118c5c22ea7f0d3620d9940bf695cea5a4bd7", size = 246671, upload-time = "2025-10-06T14:49:00.004Z" }, - { url = "https://files.pythonhosted.org/packages/a1/cc/d027d9c5a520f3321b65adea289b965e7bcbd2c34402663f482648c716ce/multidict-6.7.0-cp311-cp311-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:db99677b4457c7a5c5a949353e125ba72d62b35f74e26da141530fbb012218a7", size = 225491, upload-time = "2025-10-06T14:49:01.393Z" }, - { url = "https://files.pythonhosted.org/packages/75/c4/bbd633980ce6155a28ff04e6a6492dd3335858394d7bb752d8b108708558/multidict-6.7.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f470f68adc395e0183b92a2f4689264d1ea4b40504a24d9882c27375e6662bb9", size = 257322, upload-time = "2025-10-06T14:49:02.745Z" }, - { url = "https://files.pythonhosted.org/packages/4c/6d/d622322d344f1f053eae47e033b0b3f965af01212de21b10bcf91be991fb/multidict-6.7.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:0db4956f82723cc1c270de9c6e799b4c341d327762ec78ef82bb962f79cc07d8", size = 254694, upload-time = "2025-10-06T14:49:04.15Z" }, - { url = "https://files.pythonhosted.org/packages/a8/9f/78f8761c2705d4c6d7516faed63c0ebdac569f6db1bef95e0d5218fdc146/multidict-6.7.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3e56d780c238f9e1ae66a22d2adf8d16f485381878250db8d496623cd38b22bd", size = 246715, upload-time = "2025-10-06T14:49:05.967Z" }, - { url = 
"https://files.pythonhosted.org/packages/78/59/950818e04f91b9c2b95aab3d923d9eabd01689d0dcd889563988e9ea0fd8/multidict-6.7.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:9d14baca2ee12c1a64740d4531356ba50b82543017f3ad6de0deb943c5979abb", size = 243189, upload-time = "2025-10-06T14:49:07.37Z" }, - { url = "https://files.pythonhosted.org/packages/7a/3d/77c79e1934cad2ee74991840f8a0110966d9599b3af95964c0cd79bb905b/multidict-6.7.0-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:295a92a76188917c7f99cda95858c822f9e4aae5824246bba9b6b44004ddd0a6", size = 237845, upload-time = "2025-10-06T14:49:08.759Z" }, - { url = "https://files.pythonhosted.org/packages/63/1b/834ce32a0a97a3b70f86437f685f880136677ac00d8bce0027e9fd9c2db7/multidict-6.7.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:39f1719f57adbb767ef592a50ae5ebb794220d1188f9ca93de471336401c34d2", size = 246374, upload-time = "2025-10-06T14:49:10.574Z" }, - { url = "https://files.pythonhosted.org/packages/23/ef/43d1c3ba205b5dec93dc97f3fba179dfa47910fc73aaaea4f7ceb41cec2a/multidict-6.7.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:0a13fb8e748dfc94749f622de065dd5c1def7e0d2216dba72b1d8069a389c6ff", size = 253345, upload-time = "2025-10-06T14:49:12.331Z" }, - { url = "https://files.pythonhosted.org/packages/6b/03/eaf95bcc2d19ead522001f6a650ef32811aa9e3624ff0ad37c445c7a588c/multidict-6.7.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:e3aa16de190d29a0ea1b48253c57d99a68492c8dd8948638073ab9e74dc9410b", size = 246940, upload-time = "2025-10-06T14:49:13.821Z" }, - { url = "https://files.pythonhosted.org/packages/e8/df/ec8a5fd66ea6cd6f525b1fcbb23511b033c3e9bc42b81384834ffa484a62/multidict-6.7.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:a048ce45dcdaaf1defb76b2e684f997fb5abf74437b6cb7b22ddad934a964e34", size = 242229, upload-time = "2025-10-06T14:49:15.603Z" }, - { url = "https://files.pythonhosted.org/packages/8a/a2/59b405d59fd39ec86d1142630e9049243015a5f5291ba49cadf3c090c541/multidict-6.7.0-cp311-cp311-win32.whl", hash = "sha256:a90af66facec4cebe4181b9e62a68be65e45ac9b52b67de9eec118701856e7ff", size = 41308, upload-time = "2025-10-06T14:49:16.871Z" }, - { url = "https://files.pythonhosted.org/packages/32/0f/13228f26f8b882c34da36efa776c3b7348455ec383bab4a66390e42963ae/multidict-6.7.0-cp311-cp311-win_amd64.whl", hash = "sha256:95b5ffa4349df2887518bb839409bcf22caa72d82beec453216802f475b23c81", size = 46037, upload-time = "2025-10-06T14:49:18.457Z" }, - { url = "https://files.pythonhosted.org/packages/84/1f/68588e31b000535a3207fd3c909ebeec4fb36b52c442107499c18a896a2a/multidict-6.7.0-cp311-cp311-win_arm64.whl", hash = "sha256:329aa225b085b6f004a4955271a7ba9f1087e39dcb7e65f6284a988264a63912", size = 43023, upload-time = "2025-10-06T14:49:19.648Z" }, - { url = "https://files.pythonhosted.org/packages/c2/9e/9f61ac18d9c8b475889f32ccfa91c9f59363480613fc807b6e3023d6f60b/multidict-6.7.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:8a3862568a36d26e650a19bb5cbbba14b71789032aebc0423f8cc5f150730184", size = 76877, upload-time = "2025-10-06T14:49:20.884Z" }, - { url = "https://files.pythonhosted.org/packages/38/6f/614f09a04e6184f8824268fce4bc925e9849edfa654ddd59f0b64508c595/multidict-6.7.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:960c60b5849b9b4f9dcc9bea6e3626143c252c74113df2c1540aebce70209b45", size = 45467, upload-time = "2025-10-06T14:49:22.054Z" }, - { url = 
"https://files.pythonhosted.org/packages/b3/93/c4f67a436dd026f2e780c433277fff72be79152894d9fc36f44569cab1a6/multidict-6.7.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:2049be98fb57a31b4ccf870bf377af2504d4ae35646a19037ec271e4c07998aa", size = 43834, upload-time = "2025-10-06T14:49:23.566Z" }, - { url = "https://files.pythonhosted.org/packages/7f/f5/013798161ca665e4a422afbc5e2d9e4070142a9ff8905e482139cd09e4d0/multidict-6.7.0-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:0934f3843a1860dd465d38895c17fce1f1cb37295149ab05cd1b9a03afacb2a7", size = 250545, upload-time = "2025-10-06T14:49:24.882Z" }, - { url = "https://files.pythonhosted.org/packages/71/2f/91dbac13e0ba94669ea5119ba267c9a832f0cb65419aca75549fcf09a3dc/multidict-6.7.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b3e34f3a1b8131ba06f1a73adab24f30934d148afcd5f5de9a73565a4404384e", size = 258305, upload-time = "2025-10-06T14:49:26.778Z" }, - { url = "https://files.pythonhosted.org/packages/ef/b0/754038b26f6e04488b48ac621f779c341338d78503fb45403755af2df477/multidict-6.7.0-cp312-cp312-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:efbb54e98446892590dc2458c19c10344ee9a883a79b5cec4bc34d6656e8d546", size = 242363, upload-time = "2025-10-06T14:49:28.562Z" }, - { url = "https://files.pythonhosted.org/packages/87/15/9da40b9336a7c9fa606c4cf2ed80a649dffeb42b905d4f63a1d7eb17d746/multidict-6.7.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:a35c5fc61d4f51eb045061e7967cfe3123d622cd500e8868e7c0c592a09fedc4", size = 268375, upload-time = "2025-10-06T14:49:29.96Z" }, - { url = "https://files.pythonhosted.org/packages/82/72/c53fcade0cc94dfaad583105fd92b3a783af2091eddcb41a6d5a52474000/multidict-6.7.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:29fe6740ebccba4175af1b9b87bf553e9c15cd5868ee967e010efcf94e4fd0f1", size = 269346, upload-time = "2025-10-06T14:49:31.404Z" }, - { url = "https://files.pythonhosted.org/packages/0d/e2/9baffdae21a76f77ef8447f1a05a96ec4bc0a24dae08767abc0a2fe680b8/multidict-6.7.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:123e2a72e20537add2f33a79e605f6191fba2afda4cbb876e35c1a7074298a7d", size = 256107, upload-time = "2025-10-06T14:49:32.974Z" }, - { url = "https://files.pythonhosted.org/packages/3c/06/3f06f611087dc60d65ef775f1fb5aca7c6d61c6db4990e7cda0cef9b1651/multidict-6.7.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b284e319754366c1aee2267a2036248b24eeb17ecd5dc16022095e747f2f4304", size = 253592, upload-time = "2025-10-06T14:49:34.52Z" }, - { url = "https://files.pythonhosted.org/packages/20/24/54e804ec7945b6023b340c412ce9c3f81e91b3bf5fa5ce65558740141bee/multidict-6.7.0-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:803d685de7be4303b5a657b76e2f6d1240e7e0a8aa2968ad5811fa2285553a12", size = 251024, upload-time = "2025-10-06T14:49:35.956Z" }, - { url = "https://files.pythonhosted.org/packages/14/48/011cba467ea0b17ceb938315d219391d3e421dfd35928e5dbdc3f4ae76ef/multidict-6.7.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:c04a328260dfd5db8c39538f999f02779012268f54614902d0afc775d44e0a62", size = 251484, upload-time = "2025-10-06T14:49:37.631Z" }, - { url = "https://files.pythonhosted.org/packages/0d/2f/919258b43bb35b99fa127435cfb2d91798eb3a943396631ef43e3720dcf4/multidict-6.7.0-cp312-cp312-musllinux_1_2_ppc64le.whl", 
hash = "sha256:8a19cdb57cd3df4cd865849d93ee14920fb97224300c88501f16ecfa2604b4e0", size = 263579, upload-time = "2025-10-06T14:49:39.502Z" }, - { url = "https://files.pythonhosted.org/packages/31/22/a0e884d86b5242b5a74cf08e876bdf299e413016b66e55511f7a804a366e/multidict-6.7.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:9b2fd74c52accced7e75de26023b7dccee62511a600e62311b918ec5c168fc2a", size = 259654, upload-time = "2025-10-06T14:49:41.32Z" }, - { url = "https://files.pythonhosted.org/packages/b2/e5/17e10e1b5c5f5a40f2fcbb45953c9b215f8a4098003915e46a93f5fcaa8f/multidict-6.7.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:3e8bfdd0e487acf992407a140d2589fe598238eaeffa3da8448d63a63cd363f8", size = 251511, upload-time = "2025-10-06T14:49:46.021Z" }, - { url = "https://files.pythonhosted.org/packages/e3/9a/201bb1e17e7af53139597069c375e7b0dcbd47594604f65c2d5359508566/multidict-6.7.0-cp312-cp312-win32.whl", hash = "sha256:dd32a49400a2c3d52088e120ee00c1e3576cbff7e10b98467962c74fdb762ed4", size = 41895, upload-time = "2025-10-06T14:49:48.718Z" }, - { url = "https://files.pythonhosted.org/packages/46/e2/348cd32faad84eaf1d20cce80e2bb0ef8d312c55bca1f7fa9865e7770aaf/multidict-6.7.0-cp312-cp312-win_amd64.whl", hash = "sha256:92abb658ef2d7ef22ac9f8bb88e8b6c3e571671534e029359b6d9e845923eb1b", size = 46073, upload-time = "2025-10-06T14:49:50.28Z" }, - { url = "https://files.pythonhosted.org/packages/25/ec/aad2613c1910dce907480e0c3aa306905830f25df2e54ccc9dea450cb5aa/multidict-6.7.0-cp312-cp312-win_arm64.whl", hash = "sha256:490dab541a6a642ce1a9d61a4781656b346a55c13038f0b1244653828e3a83ec", size = 43226, upload-time = "2025-10-06T14:49:52.304Z" }, - { url = "https://files.pythonhosted.org/packages/d2/86/33272a544eeb36d66e4d9a920602d1a2f57d4ebea4ef3cdfe5a912574c95/multidict-6.7.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:bee7c0588aa0076ce77c0ea5d19a68d76ad81fcd9fe8501003b9a24f9d4000f6", size = 76135, upload-time = "2025-10-06T14:49:54.26Z" }, - { url = "https://files.pythonhosted.org/packages/91/1c/eb97db117a1ebe46d457a3d235a7b9d2e6dcab174f42d1b67663dd9e5371/multidict-6.7.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:7ef6b61cad77091056ce0e7ce69814ef72afacb150b7ac6a3e9470def2198159", size = 45117, upload-time = "2025-10-06T14:49:55.82Z" }, - { url = "https://files.pythonhosted.org/packages/f1/d8/6c3442322e41fb1dd4de8bd67bfd11cd72352ac131f6368315617de752f1/multidict-6.7.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:9c0359b1ec12b1d6849c59f9d319610b7f20ef990a6d454ab151aa0e3b9f78ca", size = 43472, upload-time = "2025-10-06T14:49:57.048Z" }, - { url = "https://files.pythonhosted.org/packages/75/3f/e2639e80325af0b6c6febdf8e57cc07043ff15f57fa1ef808f4ccb5ac4cd/multidict-6.7.0-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:cd240939f71c64bd658f186330603aac1a9a81bf6273f523fca63673cb7378a8", size = 249342, upload-time = "2025-10-06T14:49:58.368Z" }, - { url = "https://files.pythonhosted.org/packages/5d/cc/84e0585f805cbeaa9cbdaa95f9a3d6aed745b9d25700623ac89a6ecff400/multidict-6.7.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a60a4d75718a5efa473ebd5ab685786ba0c67b8381f781d1be14da49f1a2dc60", size = 257082, upload-time = "2025-10-06T14:49:59.89Z" }, - { url = "https://files.pythonhosted.org/packages/b0/9c/ac851c107c92289acbbf5cfb485694084690c1b17e555f44952c26ddc5bd/multidict-6.7.0-cp313-cp313-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = 
"sha256:53a42d364f323275126aff81fb67c5ca1b7a04fda0546245730a55c8c5f24bc4", size = 240704, upload-time = "2025-10-06T14:50:01.485Z" }, - { url = "https://files.pythonhosted.org/packages/50/cc/5f93e99427248c09da95b62d64b25748a5f5c98c7c2ab09825a1d6af0e15/multidict-6.7.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:3b29b980d0ddbecb736735ee5bef69bb2ddca56eff603c86f3f29a1128299b4f", size = 266355, upload-time = "2025-10-06T14:50:02.955Z" }, - { url = "https://files.pythonhosted.org/packages/ec/0c/2ec1d883ceb79c6f7f6d7ad90c919c898f5d1c6ea96d322751420211e072/multidict-6.7.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:f8a93b1c0ed2d04b97a5e9336fd2d33371b9a6e29ab7dd6503d63407c20ffbaf", size = 267259, upload-time = "2025-10-06T14:50:04.446Z" }, - { url = "https://files.pythonhosted.org/packages/c6/2d/f0b184fa88d6630aa267680bdb8623fb69cb0d024b8c6f0d23f9a0f406d3/multidict-6.7.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9ff96e8815eecacc6645da76c413eb3b3d34cfca256c70b16b286a687d013c32", size = 254903, upload-time = "2025-10-06T14:50:05.98Z" }, - { url = "https://files.pythonhosted.org/packages/06/c9/11ea263ad0df7dfabcad404feb3c0dd40b131bc7f232d5537f2fb1356951/multidict-6.7.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:7516c579652f6a6be0e266aec0acd0db80829ca305c3d771ed898538804c2036", size = 252365, upload-time = "2025-10-06T14:50:07.511Z" }, - { url = "https://files.pythonhosted.org/packages/41/88/d714b86ee2c17d6e09850c70c9d310abac3d808ab49dfa16b43aba9d53fd/multidict-6.7.0-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:040f393368e63fb0f3330e70c26bfd336656bed925e5cbe17c9da839a6ab13ec", size = 250062, upload-time = "2025-10-06T14:50:09.074Z" }, - { url = "https://files.pythonhosted.org/packages/15/fe/ad407bb9e818c2b31383f6131ca19ea7e35ce93cf1310fce69f12e89de75/multidict-6.7.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:b3bc26a951007b1057a1c543af845f1c7e3e71cc240ed1ace7bf4484aa99196e", size = 249683, upload-time = "2025-10-06T14:50:10.714Z" }, - { url = "https://files.pythonhosted.org/packages/8c/a4/a89abdb0229e533fb925e7c6e5c40201c2873efebc9abaf14046a4536ee6/multidict-6.7.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:7b022717c748dd1992a83e219587aabe45980d88969f01b316e78683e6285f64", size = 261254, upload-time = "2025-10-06T14:50:12.28Z" }, - { url = "https://files.pythonhosted.org/packages/8d/aa/0e2b27bd88b40a4fb8dc53dd74eecac70edaa4c1dd0707eb2164da3675b3/multidict-6.7.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:9600082733859f00d79dee64effc7aef1beb26adb297416a4ad2116fd61374bd", size = 257967, upload-time = "2025-10-06T14:50:14.16Z" }, - { url = "https://files.pythonhosted.org/packages/d0/8e/0c67b7120d5d5f6d874ed85a085f9dc770a7f9d8813e80f44a9fec820bb7/multidict-6.7.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:94218fcec4d72bc61df51c198d098ce2b378e0ccbac41ddbed5ef44092913288", size = 250085, upload-time = "2025-10-06T14:50:15.639Z" }, - { url = "https://files.pythonhosted.org/packages/ba/55/b73e1d624ea4b8fd4dd07a3bb70f6e4c7c6c5d9d640a41c6ffe5cdbd2a55/multidict-6.7.0-cp313-cp313-win32.whl", hash = "sha256:a37bd74c3fa9d00be2d7b8eca074dc56bd8077ddd2917a839bd989612671ed17", size = 41713, upload-time = "2025-10-06T14:50:17.066Z" }, - { url = "https://files.pythonhosted.org/packages/32/31/75c59e7d3b4205075b4c183fa4ca398a2daf2303ddf616b04ae6ef55cffe/multidict-6.7.0-cp313-cp313-win_amd64.whl", hash = 
"sha256:30d193c6cc6d559db42b6bcec8a5d395d34d60c9877a0b71ecd7c204fcf15390", size = 45915, upload-time = "2025-10-06T14:50:18.264Z" }, - { url = "https://files.pythonhosted.org/packages/31/2a/8987831e811f1184c22bc2e45844934385363ee61c0a2dcfa8f71b87e608/multidict-6.7.0-cp313-cp313-win_arm64.whl", hash = "sha256:ea3334cabe4d41b7ccd01e4d349828678794edbc2d3ae97fc162a3312095092e", size = 43077, upload-time = "2025-10-06T14:50:19.853Z" }, - { url = "https://files.pythonhosted.org/packages/e8/68/7b3a5170a382a340147337b300b9eb25a9ddb573bcdfff19c0fa3f31ffba/multidict-6.7.0-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:ad9ce259f50abd98a1ca0aa6e490b58c316a0fce0617f609723e40804add2c00", size = 83114, upload-time = "2025-10-06T14:50:21.223Z" }, - { url = "https://files.pythonhosted.org/packages/55/5c/3fa2d07c84df4e302060f555bbf539310980362236ad49f50eeb0a1c1eb9/multidict-6.7.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:07f5594ac6d084cbb5de2df218d78baf55ef150b91f0ff8a21cc7a2e3a5a58eb", size = 48442, upload-time = "2025-10-06T14:50:22.871Z" }, - { url = "https://files.pythonhosted.org/packages/fc/56/67212d33239797f9bd91962bb899d72bb0f4c35a8652dcdb8ed049bef878/multidict-6.7.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:0591b48acf279821a579282444814a2d8d0af624ae0bc600aa4d1b920b6e924b", size = 46885, upload-time = "2025-10-06T14:50:24.258Z" }, - { url = "https://files.pythonhosted.org/packages/46/d1/908f896224290350721597a61a69cd19b89ad8ee0ae1f38b3f5cd12ea2ac/multidict-6.7.0-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:749a72584761531d2b9467cfbdfd29487ee21124c304c4b6cb760d8777b27f9c", size = 242588, upload-time = "2025-10-06T14:50:25.716Z" }, - { url = "https://files.pythonhosted.org/packages/ab/67/8604288bbd68680eee0ab568fdcb56171d8b23a01bcd5cb0c8fedf6e5d99/multidict-6.7.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6b4c3d199f953acd5b446bf7c0de1fe25d94e09e79086f8dc2f48a11a129cdf1", size = 249966, upload-time = "2025-10-06T14:50:28.192Z" }, - { url = "https://files.pythonhosted.org/packages/20/33/9228d76339f1ba51e3efef7da3ebd91964d3006217aae13211653193c3ff/multidict-6.7.0-cp313-cp313t-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:9fb0211dfc3b51efea2f349ec92c114d7754dd62c01f81c3e32b765b70c45c9b", size = 228618, upload-time = "2025-10-06T14:50:29.82Z" }, - { url = "https://files.pythonhosted.org/packages/f8/2d/25d9b566d10cab1c42b3b9e5b11ef79c9111eaf4463b8c257a3bd89e0ead/multidict-6.7.0-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:a027ec240fe73a8d6281872690b988eed307cd7d91b23998ff35ff577ca688b5", size = 257539, upload-time = "2025-10-06T14:50:31.731Z" }, - { url = "https://files.pythonhosted.org/packages/b6/b1/8d1a965e6637fc33de3c0d8f414485c2b7e4af00f42cab3d84e7b955c222/multidict-6.7.0-cp313-cp313t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d1d964afecdf3a8288789df2f5751dc0a8261138c3768d9af117ed384e538fad", size = 256345, upload-time = "2025-10-06T14:50:33.26Z" }, - { url = "https://files.pythonhosted.org/packages/ba/0c/06b5a8adbdeedada6f4fb8d8f193d44a347223b11939b42953eeb6530b6b/multidict-6.7.0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:caf53b15b1b7df9fbd0709aa01409000a2b4dd03a5f6f5cc548183c7c8f8b63c", size = 247934, upload-time = "2025-10-06T14:50:34.808Z" }, - { url = 
"https://files.pythonhosted.org/packages/8f/31/b2491b5fe167ca044c6eb4b8f2c9f3b8a00b24c432c365358eadac5d7625/multidict-6.7.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:654030da3197d927f05a536a66186070e98765aa5142794c9904555d3a9d8fb5", size = 245243, upload-time = "2025-10-06T14:50:36.436Z" }, - { url = "https://files.pythonhosted.org/packages/61/1a/982913957cb90406c8c94f53001abd9eafc271cb3e70ff6371590bec478e/multidict-6.7.0-cp313-cp313t-musllinux_1_2_armv7l.whl", hash = "sha256:2090d3718829d1e484706a2f525e50c892237b2bf9b17a79b059cb98cddc2f10", size = 235878, upload-time = "2025-10-06T14:50:37.953Z" }, - { url = "https://files.pythonhosted.org/packages/be/c0/21435d804c1a1cf7a2608593f4d19bca5bcbd7a81a70b253fdd1c12af9c0/multidict-6.7.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:2d2cfeec3f6f45651b3d408c4acec0ebf3daa9bc8a112a084206f5db5d05b754", size = 243452, upload-time = "2025-10-06T14:50:39.574Z" }, - { url = "https://files.pythonhosted.org/packages/54/0a/4349d540d4a883863191be6eb9a928846d4ec0ea007d3dcd36323bb058ac/multidict-6.7.0-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:4ef089f985b8c194d341eb2c24ae6e7408c9a0e2e5658699c92f497437d88c3c", size = 252312, upload-time = "2025-10-06T14:50:41.612Z" }, - { url = "https://files.pythonhosted.org/packages/26/64/d5416038dbda1488daf16b676e4dbfd9674dde10a0cc8f4fc2b502d8125d/multidict-6.7.0-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:e93a0617cd16998784bf4414c7e40f17a35d2350e5c6f0bd900d3a8e02bd3762", size = 246935, upload-time = "2025-10-06T14:50:43.972Z" }, - { url = "https://files.pythonhosted.org/packages/9f/8c/8290c50d14e49f35e0bd4abc25e1bc7711149ca9588ab7d04f886cdf03d9/multidict-6.7.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:f0feece2ef8ebc42ed9e2e8c78fc4aa3cf455733b507c09ef7406364c94376c6", size = 243385, upload-time = "2025-10-06T14:50:45.648Z" }, - { url = "https://files.pythonhosted.org/packages/ef/a0/f83ae75e42d694b3fbad3e047670e511c138be747bc713cf1b10d5096416/multidict-6.7.0-cp313-cp313t-win32.whl", hash = "sha256:19a1d55338ec1be74ef62440ca9e04a2f001a04d0cc49a4983dc320ff0f3212d", size = 47777, upload-time = "2025-10-06T14:50:47.154Z" }, - { url = "https://files.pythonhosted.org/packages/dc/80/9b174a92814a3830b7357307a792300f42c9e94664b01dee8e457551fa66/multidict-6.7.0-cp313-cp313t-win_amd64.whl", hash = "sha256:3da4fb467498df97e986af166b12d01f05d2e04f978a9c1c680ea1988e0bc4b6", size = 53104, upload-time = "2025-10-06T14:50:48.851Z" }, - { url = "https://files.pythonhosted.org/packages/cc/28/04baeaf0428d95bb7a7bea0e691ba2f31394338ba424fb0679a9ed0f4c09/multidict-6.7.0-cp313-cp313t-win_arm64.whl", hash = "sha256:b4121773c49a0776461f4a904cdf6264c88e42218aaa8407e803ca8025872792", size = 45503, upload-time = "2025-10-06T14:50:50.16Z" }, - { url = "https://files.pythonhosted.org/packages/b7/da/7d22601b625e241d4f23ef1ebff8acfc60da633c9e7e7922e24d10f592b3/multidict-6.7.0-py3-none-any.whl", hash = "sha256:394fc5c42a333c9ffc3e421a4c85e08580d990e08b99f6bf35b4132114c5dcb3", size = 12317, upload-time = "2025-10-06T14:52:29.272Z" }, +version = "6.6.4" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/69/7f/0652e6ed47ab288e3756ea9c0df8b14950781184d4bd7883f4d87dd41245/multidict-6.6.4.tar.gz", hash = "sha256:d2d4e4787672911b48350df02ed3fa3fffdc2f2e8ca06dd6afdf34189b76a9dd", size = 101843, upload-time = "2025-08-11T12:08:48.217Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/6b/7f/90a7f01e2d005d6653c689039977f6856718c75c5579445effb7e60923d1/multidict-6.6.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:c7a0e9b561e6460484318a7612e725df1145d46b0ef57c6b9866441bf6e27e0c", size = 76472, upload-time = "2025-08-11T12:06:29.006Z" }, + { url = "https://files.pythonhosted.org/packages/54/a3/bed07bc9e2bb302ce752f1dabc69e884cd6a676da44fb0e501b246031fdd/multidict-6.6.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6bf2f10f70acc7a2446965ffbc726e5fc0b272c97a90b485857e5c70022213eb", size = 44634, upload-time = "2025-08-11T12:06:30.374Z" }, + { url = "https://files.pythonhosted.org/packages/a7/4b/ceeb4f8f33cf81277da464307afeaf164fb0297947642585884f5cad4f28/multidict-6.6.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:66247d72ed62d5dd29752ffc1d3b88f135c6a8de8b5f63b7c14e973ef5bda19e", size = 44282, upload-time = "2025-08-11T12:06:31.958Z" }, + { url = "https://files.pythonhosted.org/packages/03/35/436a5da8702b06866189b69f655ffdb8f70796252a8772a77815f1812679/multidict-6.6.4-cp311-cp311-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:105245cc6b76f51e408451a844a54e6823bbd5a490ebfe5bdfc79798511ceded", size = 229696, upload-time = "2025-08-11T12:06:33.087Z" }, + { url = "https://files.pythonhosted.org/packages/b6/0e/915160be8fecf1fca35f790c08fb74ca684d752fcba62c11daaf3d92c216/multidict-6.6.4-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:cbbc54e58b34c3bae389ef00046be0961f30fef7cb0dd9c7756aee376a4f7683", size = 246665, upload-time = "2025-08-11T12:06:34.448Z" }, + { url = "https://files.pythonhosted.org/packages/08/ee/2f464330acd83f77dcc346f0b1a0eaae10230291450887f96b204b8ac4d3/multidict-6.6.4-cp311-cp311-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:56c6b3652f945c9bc3ac6c8178cd93132b8d82dd581fcbc3a00676c51302bc1a", size = 225485, upload-time = "2025-08-11T12:06:35.672Z" }, + { url = "https://files.pythonhosted.org/packages/71/cc/9a117f828b4d7fbaec6adeed2204f211e9caf0a012692a1ee32169f846ae/multidict-6.6.4-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:b95494daf857602eccf4c18ca33337dd2be705bccdb6dddbfc9d513e6addb9d9", size = 257318, upload-time = "2025-08-11T12:06:36.98Z" }, + { url = "https://files.pythonhosted.org/packages/25/77/62752d3dbd70e27fdd68e86626c1ae6bccfebe2bb1f84ae226363e112f5a/multidict-6.6.4-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:e5b1413361cef15340ab9dc61523e653d25723e82d488ef7d60a12878227ed50", size = 254689, upload-time = "2025-08-11T12:06:38.233Z" }, + { url = "https://files.pythonhosted.org/packages/00/6e/fac58b1072a6fc59af5e7acb245e8754d3e1f97f4f808a6559951f72a0d4/multidict-6.6.4-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e167bf899c3d724f9662ef00b4f7fef87a19c22b2fead198a6f68b263618df52", size = 246709, upload-time = "2025-08-11T12:06:39.517Z" }, + { url = "https://files.pythonhosted.org/packages/01/ef/4698d6842ef5e797c6db7744b0081e36fb5de3d00002cc4c58071097fac3/multidict-6.6.4-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:aaea28ba20a9026dfa77f4b80369e51cb767c61e33a2d4043399c67bd95fb7c6", size = 243185, upload-time = "2025-08-11T12:06:40.796Z" }, + { url = 
"https://files.pythonhosted.org/packages/aa/c9/d82e95ae1d6e4ef396934e9b0e942dfc428775f9554acf04393cce66b157/multidict-6.6.4-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:8c91cdb30809a96d9ecf442ec9bc45e8cfaa0f7f8bdf534e082c2443a196727e", size = 237838, upload-time = "2025-08-11T12:06:42.595Z" }, + { url = "https://files.pythonhosted.org/packages/57/cf/f94af5c36baaa75d44fab9f02e2a6bcfa0cd90acb44d4976a80960759dbc/multidict-6.6.4-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:1a0ccbfe93ca114c5d65a2471d52d8829e56d467c97b0e341cf5ee45410033b3", size = 246368, upload-time = "2025-08-11T12:06:44.304Z" }, + { url = "https://files.pythonhosted.org/packages/4a/fe/29f23460c3d995f6a4b678cb2e9730e7277231b981f0b234702f0177818a/multidict-6.6.4-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:55624b3f321d84c403cb7d8e6e982f41ae233d85f85db54ba6286f7295dc8a9c", size = 253339, upload-time = "2025-08-11T12:06:45.597Z" }, + { url = "https://files.pythonhosted.org/packages/29/b6/fd59449204426187b82bf8a75f629310f68c6adc9559dc922d5abe34797b/multidict-6.6.4-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:4a1fb393a2c9d202cb766c76208bd7945bc194eba8ac920ce98c6e458f0b524b", size = 246933, upload-time = "2025-08-11T12:06:46.841Z" }, + { url = "https://files.pythonhosted.org/packages/19/52/d5d6b344f176a5ac3606f7a61fb44dc746e04550e1a13834dff722b8d7d6/multidict-6.6.4-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:43868297a5759a845fa3a483fb4392973a95fb1de891605a3728130c52b8f40f", size = 242225, upload-time = "2025-08-11T12:06:48.588Z" }, + { url = "https://files.pythonhosted.org/packages/ec/d3/5b2281ed89ff4d5318d82478a2a2450fcdfc3300da48ff15c1778280ad26/multidict-6.6.4-cp311-cp311-win32.whl", hash = "sha256:ed3b94c5e362a8a84d69642dbeac615452e8af9b8eb825b7bc9f31a53a1051e2", size = 41306, upload-time = "2025-08-11T12:06:49.95Z" }, + { url = "https://files.pythonhosted.org/packages/74/7d/36b045c23a1ab98507aefd44fd8b264ee1dd5e5010543c6fccf82141ccef/multidict-6.6.4-cp311-cp311-win_amd64.whl", hash = "sha256:d8c112f7a90d8ca5d20213aa41eac690bb50a76da153e3afb3886418e61cb22e", size = 46029, upload-time = "2025-08-11T12:06:51.082Z" }, + { url = "https://files.pythonhosted.org/packages/0f/5e/553d67d24432c5cd52b49047f2d248821843743ee6d29a704594f656d182/multidict-6.6.4-cp311-cp311-win_arm64.whl", hash = "sha256:3bb0eae408fa1996d87247ca0d6a57b7fc1dcf83e8a5c47ab82c558c250d4adf", size = 43017, upload-time = "2025-08-11T12:06:52.243Z" }, + { url = "https://files.pythonhosted.org/packages/05/f6/512ffd8fd8b37fb2680e5ac35d788f1d71bbaf37789d21a820bdc441e565/multidict-6.6.4-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:0ffb87be160942d56d7b87b0fdf098e81ed565add09eaa1294268c7f3caac4c8", size = 76516, upload-time = "2025-08-11T12:06:53.393Z" }, + { url = "https://files.pythonhosted.org/packages/99/58/45c3e75deb8855c36bd66cc1658007589662ba584dbf423d01df478dd1c5/multidict-6.6.4-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:d191de6cbab2aff5de6c5723101705fd044b3e4c7cfd587a1929b5028b9714b3", size = 45394, upload-time = "2025-08-11T12:06:54.555Z" }, + { url = "https://files.pythonhosted.org/packages/fd/ca/e8c4472a93a26e4507c0b8e1f0762c0d8a32de1328ef72fd704ef9cc5447/multidict-6.6.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:38a0956dd92d918ad5feff3db8fcb4a5eb7dba114da917e1a88475619781b57b", size = 43591, upload-time = "2025-08-11T12:06:55.672Z" }, + { url = 
"https://files.pythonhosted.org/packages/05/51/edf414f4df058574a7265034d04c935aa84a89e79ce90fcf4df211f47b16/multidict-6.6.4-cp312-cp312-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:6865f6d3b7900ae020b495d599fcf3765653bc927951c1abb959017f81ae8287", size = 237215, upload-time = "2025-08-11T12:06:57.213Z" }, + { url = "https://files.pythonhosted.org/packages/c8/45/8b3d6dbad8cf3252553cc41abea09ad527b33ce47a5e199072620b296902/multidict-6.6.4-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0a2088c126b6f72db6c9212ad827d0ba088c01d951cee25e758c450da732c138", size = 258299, upload-time = "2025-08-11T12:06:58.946Z" }, + { url = "https://files.pythonhosted.org/packages/3c/e8/8ca2e9a9f5a435fc6db40438a55730a4bf4956b554e487fa1b9ae920f825/multidict-6.6.4-cp312-cp312-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:0f37bed7319b848097085d7d48116f545985db988e2256b2e6f00563a3416ee6", size = 242357, upload-time = "2025-08-11T12:07:00.301Z" }, + { url = "https://files.pythonhosted.org/packages/0f/84/80c77c99df05a75c28490b2af8f7cba2a12621186e0a8b0865d8e745c104/multidict-6.6.4-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:01368e3c94032ba6ca0b78e7ccb099643466cf24f8dc8eefcfdc0571d56e58f9", size = 268369, upload-time = "2025-08-11T12:07:01.638Z" }, + { url = "https://files.pythonhosted.org/packages/0d/e9/920bfa46c27b05fb3e1ad85121fd49f441492dca2449c5bcfe42e4565d8a/multidict-6.6.4-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:8fe323540c255db0bffee79ad7f048c909f2ab0edb87a597e1c17da6a54e493c", size = 269341, upload-time = "2025-08-11T12:07:02.943Z" }, + { url = "https://files.pythonhosted.org/packages/af/65/753a2d8b05daf496f4a9c367fe844e90a1b2cac78e2be2c844200d10cc4c/multidict-6.6.4-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b8eb3025f17b0a4c3cd08cda49acf312a19ad6e8a4edd9dbd591e6506d999402", size = 256100, upload-time = "2025-08-11T12:07:04.564Z" }, + { url = "https://files.pythonhosted.org/packages/09/54/655be13ae324212bf0bc15d665a4e34844f34c206f78801be42f7a0a8aaa/multidict-6.6.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:bbc14f0365534d35a06970d6a83478b249752e922d662dc24d489af1aa0d1be7", size = 253584, upload-time = "2025-08-11T12:07:05.914Z" }, + { url = "https://files.pythonhosted.org/packages/5c/74/ab2039ecc05264b5cec73eb018ce417af3ebb384ae9c0e9ed42cb33f8151/multidict-6.6.4-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:75aa52fba2d96bf972e85451b99d8e19cc37ce26fd016f6d4aa60da9ab2b005f", size = 251018, upload-time = "2025-08-11T12:07:08.301Z" }, + { url = "https://files.pythonhosted.org/packages/af/0a/ccbb244ac848e56c6427f2392741c06302bbfba49c0042f1eb3c5b606497/multidict-6.6.4-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:4fefd4a815e362d4f011919d97d7b4a1e566f1dde83dc4ad8cfb5b41de1df68d", size = 251477, upload-time = "2025-08-11T12:07:10.248Z" }, + { url = "https://files.pythonhosted.org/packages/0e/b0/0ed49bba775b135937f52fe13922bc64a7eaf0a3ead84a36e8e4e446e096/multidict-6.6.4-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:db9801fe021f59a5b375ab778973127ca0ac52429a26e2fd86aa9508f4d26eb7", size = 263575, upload-time = "2025-08-11T12:07:11.928Z" }, + { url = 
"https://files.pythonhosted.org/packages/3e/d9/7fb85a85e14de2e44dfb6a24f03c41e2af8697a6df83daddb0e9b7569f73/multidict-6.6.4-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:a650629970fa21ac1fb06ba25dabfc5b8a2054fcbf6ae97c758aa956b8dba802", size = 259649, upload-time = "2025-08-11T12:07:13.244Z" }, + { url = "https://files.pythonhosted.org/packages/03/9e/b3a459bcf9b6e74fa461a5222a10ff9b544cb1cd52fd482fb1b75ecda2a2/multidict-6.6.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:452ff5da78d4720d7516a3a2abd804957532dd69296cb77319c193e3ffb87e24", size = 251505, upload-time = "2025-08-11T12:07:14.57Z" }, + { url = "https://files.pythonhosted.org/packages/86/a2/8022f78f041dfe6d71e364001a5cf987c30edfc83c8a5fb7a3f0974cff39/multidict-6.6.4-cp312-cp312-win32.whl", hash = "sha256:8c2fcb12136530ed19572bbba61b407f655e3953ba669b96a35036a11a485793", size = 41888, upload-time = "2025-08-11T12:07:15.904Z" }, + { url = "https://files.pythonhosted.org/packages/c7/eb/d88b1780d43a56db2cba24289fa744a9d216c1a8546a0dc3956563fd53ea/multidict-6.6.4-cp312-cp312-win_amd64.whl", hash = "sha256:047d9425860a8c9544fed1b9584f0c8bcd31bcde9568b047c5e567a1025ecd6e", size = 46072, upload-time = "2025-08-11T12:07:17.045Z" }, + { url = "https://files.pythonhosted.org/packages/9f/16/b929320bf5750e2d9d4931835a4c638a19d2494a5b519caaaa7492ebe105/multidict-6.6.4-cp312-cp312-win_arm64.whl", hash = "sha256:14754eb72feaa1e8ae528468f24250dd997b8e2188c3d2f593f9eba259e4b364", size = 43222, upload-time = "2025-08-11T12:07:18.328Z" }, + { url = "https://files.pythonhosted.org/packages/3a/5d/e1db626f64f60008320aab00fbe4f23fc3300d75892a3381275b3d284580/multidict-6.6.4-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:f46a6e8597f9bd71b31cc708195d42b634c8527fecbcf93febf1052cacc1f16e", size = 75848, upload-time = "2025-08-11T12:07:19.912Z" }, + { url = "https://files.pythonhosted.org/packages/4c/aa/8b6f548d839b6c13887253af4e29c939af22a18591bfb5d0ee6f1931dae8/multidict-6.6.4-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:22e38b2bc176c5eb9c0a0e379f9d188ae4cd8b28c0f53b52bce7ab0a9e534657", size = 45060, upload-time = "2025-08-11T12:07:21.163Z" }, + { url = "https://files.pythonhosted.org/packages/eb/c6/f5e97e5d99a729bc2aa58eb3ebfa9f1e56a9b517cc38c60537c81834a73f/multidict-6.6.4-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:5df8afd26f162da59e218ac0eefaa01b01b2e6cd606cffa46608f699539246da", size = 43269, upload-time = "2025-08-11T12:07:22.392Z" }, + { url = "https://files.pythonhosted.org/packages/dc/31/d54eb0c62516776f36fe67f84a732f97e0b0e12f98d5685bebcc6d396910/multidict-6.6.4-cp313-cp313-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:49517449b58d043023720aa58e62b2f74ce9b28f740a0b5d33971149553d72aa", size = 237158, upload-time = "2025-08-11T12:07:23.636Z" }, + { url = "https://files.pythonhosted.org/packages/c4/1c/8a10c1c25b23156e63b12165a929d8eb49a6ed769fdbefb06e6f07c1e50d/multidict-6.6.4-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ae9408439537c5afdca05edd128a63f56a62680f4b3c234301055d7a2000220f", size = 257076, upload-time = "2025-08-11T12:07:25.049Z" }, + { url = "https://files.pythonhosted.org/packages/ad/86/90e20b5771d6805a119e483fd3d1e8393e745a11511aebca41f0da38c3e2/multidict-6.6.4-cp313-cp313-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:87a32d20759dc52a9e850fe1061b6e41ab28e2998d44168a8a341b99ded1dba0", size = 240694, upload-time = "2025-08-11T12:07:26.458Z" }, + { url = 
"https://files.pythonhosted.org/packages/e7/49/484d3e6b535bc0555b52a0a26ba86e4d8d03fd5587d4936dc59ba7583221/multidict-6.6.4-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:52e3c8d43cdfff587ceedce9deb25e6ae77daba560b626e97a56ddcad3756879", size = 266350, upload-time = "2025-08-11T12:07:27.94Z" }, + { url = "https://files.pythonhosted.org/packages/bf/b4/aa4c5c379b11895083d50021e229e90c408d7d875471cb3abf721e4670d6/multidict-6.6.4-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:ad8850921d3a8d8ff6fbef790e773cecfc260bbfa0566998980d3fa8f520bc4a", size = 267250, upload-time = "2025-08-11T12:07:29.303Z" }, + { url = "https://files.pythonhosted.org/packages/80/e5/5e22c5bf96a64bdd43518b1834c6d95a4922cc2066b7d8e467dae9b6cee6/multidict-6.6.4-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:497a2954adc25c08daff36f795077f63ad33e13f19bfff7736e72c785391534f", size = 254900, upload-time = "2025-08-11T12:07:30.764Z" }, + { url = "https://files.pythonhosted.org/packages/17/38/58b27fed927c07035abc02befacab42491e7388ca105e087e6e0215ead64/multidict-6.6.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:024ce601f92d780ca1617ad4be5ac15b501cc2414970ffa2bb2bbc2bd5a68fa5", size = 252355, upload-time = "2025-08-11T12:07:32.205Z" }, + { url = "https://files.pythonhosted.org/packages/d0/a1/dad75d23a90c29c02b5d6f3d7c10ab36c3197613be5d07ec49c7791e186c/multidict-6.6.4-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:a693fc5ed9bdd1c9e898013e0da4dcc640de7963a371c0bd458e50e046bf6438", size = 250061, upload-time = "2025-08-11T12:07:33.623Z" }, + { url = "https://files.pythonhosted.org/packages/b8/1a/ac2216b61c7f116edab6dc3378cca6c70dc019c9a457ff0d754067c58b20/multidict-6.6.4-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:190766dac95aab54cae5b152a56520fd99298f32a1266d66d27fdd1b5ac00f4e", size = 249675, upload-time = "2025-08-11T12:07:34.958Z" }, + { url = "https://files.pythonhosted.org/packages/d4/79/1916af833b800d13883e452e8e0977c065c4ee3ab7a26941fbfdebc11895/multidict-6.6.4-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:34d8f2a5ffdceab9dcd97c7a016deb2308531d5f0fced2bb0c9e1df45b3363d7", size = 261247, upload-time = "2025-08-11T12:07:36.588Z" }, + { url = "https://files.pythonhosted.org/packages/c5/65/d1f84fe08ac44a5fc7391cbc20a7cedc433ea616b266284413fd86062f8c/multidict-6.6.4-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:59e8d40ab1f5a8597abcef00d04845155a5693b5da00d2c93dbe88f2050f2812", size = 257960, upload-time = "2025-08-11T12:07:39.735Z" }, + { url = "https://files.pythonhosted.org/packages/13/b5/29ec78057d377b195ac2c5248c773703a6b602e132a763e20ec0457e7440/multidict-6.6.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:467fe64138cfac771f0e949b938c2e1ada2b5af22f39692aa9258715e9ea613a", size = 250078, upload-time = "2025-08-11T12:07:41.525Z" }, + { url = "https://files.pythonhosted.org/packages/c4/0e/7e79d38f70a872cae32e29b0d77024bef7834b0afb406ddae6558d9e2414/multidict-6.6.4-cp313-cp313-win32.whl", hash = "sha256:14616a30fe6d0a48d0a48d1a633ab3b8bec4cf293aac65f32ed116f620adfd69", size = 41708, upload-time = "2025-08-11T12:07:43.405Z" }, + { url = "https://files.pythonhosted.org/packages/9d/34/746696dffff742e97cd6a23da953e55d0ea51fa601fa2ff387b3edcfaa2c/multidict-6.6.4-cp313-cp313-win_amd64.whl", hash = "sha256:40cd05eaeb39e2bc8939451f033e57feaa2ac99e07dbca8afe2be450a4a3b6cf", size = 45912, upload-time = "2025-08-11T12:07:45.082Z" }, + { url 
= "https://files.pythonhosted.org/packages/c7/87/3bac136181e271e29170d8d71929cdeddeb77f3e8b6a0c08da3a8e9da114/multidict-6.6.4-cp313-cp313-win_arm64.whl", hash = "sha256:f6eb37d511bfae9e13e82cb4d1af36b91150466f24d9b2b8a9785816deb16605", size = 43076, upload-time = "2025-08-11T12:07:46.746Z" }, + { url = "https://files.pythonhosted.org/packages/64/94/0a8e63e36c049b571c9ae41ee301ada29c3fee9643d9c2548d7d558a1d99/multidict-6.6.4-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:6c84378acd4f37d1b507dfa0d459b449e2321b3ba5f2338f9b085cf7a7ba95eb", size = 82812, upload-time = "2025-08-11T12:07:48.402Z" }, + { url = "https://files.pythonhosted.org/packages/25/1a/be8e369dfcd260d2070a67e65dd3990dd635cbd735b98da31e00ea84cd4e/multidict-6.6.4-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:0e0558693063c75f3d952abf645c78f3c5dfdd825a41d8c4d8156fc0b0da6e7e", size = 48313, upload-time = "2025-08-11T12:07:49.679Z" }, + { url = "https://files.pythonhosted.org/packages/26/5a/dd4ade298674b2f9a7b06a32c94ffbc0497354df8285f27317c66433ce3b/multidict-6.6.4-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:3f8e2384cb83ebd23fd07e9eada8ba64afc4c759cd94817433ab8c81ee4b403f", size = 46777, upload-time = "2025-08-11T12:07:51.318Z" }, + { url = "https://files.pythonhosted.org/packages/89/db/98aa28bc7e071bfba611ac2ae803c24e96dd3a452b4118c587d3d872c64c/multidict-6.6.4-cp313-cp313t-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:f996b87b420995a9174b2a7c1a8daf7db4750be6848b03eb5e639674f7963773", size = 229321, upload-time = "2025-08-11T12:07:52.965Z" }, + { url = "https://files.pythonhosted.org/packages/c7/bc/01ddda2a73dd9d167bd85d0e8ef4293836a8f82b786c63fb1a429bc3e678/multidict-6.6.4-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:cc356250cffd6e78416cf5b40dc6a74f1edf3be8e834cf8862d9ed5265cf9b0e", size = 249954, upload-time = "2025-08-11T12:07:54.423Z" }, + { url = "https://files.pythonhosted.org/packages/06/78/6b7c0f020f9aa0acf66d0ab4eb9f08375bac9a50ff5e3edb1c4ccd59eafc/multidict-6.6.4-cp313-cp313t-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:dadf95aa862714ea468a49ad1e09fe00fcc9ec67d122f6596a8d40caf6cec7d0", size = 228612, upload-time = "2025-08-11T12:07:55.914Z" }, + { url = "https://files.pythonhosted.org/packages/00/44/3faa416f89b2d5d76e9d447296a81521e1c832ad6e40b92f990697b43192/multidict-6.6.4-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:7dd57515bebffd8ebd714d101d4c434063322e4fe24042e90ced41f18b6d3395", size = 257528, upload-time = "2025-08-11T12:07:57.371Z" }, + { url = "https://files.pythonhosted.org/packages/05/5f/77c03b89af0fcb16f018f668207768191fb9dcfb5e3361a5e706a11db2c9/multidict-6.6.4-cp313-cp313t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:967af5f238ebc2eb1da4e77af5492219fbd9b4b812347da39a7b5f5c72c0fa45", size = 256329, upload-time = "2025-08-11T12:07:58.844Z" }, + { url = "https://files.pythonhosted.org/packages/cf/e9/ed750a2a9afb4f8dc6f13dc5b67b514832101b95714f1211cd42e0aafc26/multidict-6.6.4-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2a4c6875c37aae9794308ec43e3530e4aa0d36579ce38d89979bbf89582002bb", size = 247928, upload-time = "2025-08-11T12:08:01.037Z" }, + { url = 
"https://files.pythonhosted.org/packages/1f/b5/e0571bc13cda277db7e6e8a532791d4403dacc9850006cb66d2556e649c0/multidict-6.6.4-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:7f683a551e92bdb7fac545b9c6f9fa2aebdeefa61d607510b3533286fcab67f5", size = 245228, upload-time = "2025-08-11T12:08:02.96Z" }, + { url = "https://files.pythonhosted.org/packages/f3/a3/69a84b0eccb9824491f06368f5b86e72e4af54c3067c37c39099b6687109/multidict-6.6.4-cp313-cp313t-musllinux_1_2_armv7l.whl", hash = "sha256:3ba5aaf600edaf2a868a391779f7a85d93bed147854925f34edd24cc70a3e141", size = 235869, upload-time = "2025-08-11T12:08:04.746Z" }, + { url = "https://files.pythonhosted.org/packages/a9/9d/28802e8f9121a6a0804fa009debf4e753d0a59969ea9f70be5f5fdfcb18f/multidict-6.6.4-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:580b643b7fd2c295d83cad90d78419081f53fd532d1f1eb67ceb7060f61cff0d", size = 243446, upload-time = "2025-08-11T12:08:06.332Z" }, + { url = "https://files.pythonhosted.org/packages/38/ea/6c98add069b4878c1d66428a5f5149ddb6d32b1f9836a826ac764b9940be/multidict-6.6.4-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:37b7187197da6af3ee0b044dbc9625afd0c885f2800815b228a0e70f9a7f473d", size = 252299, upload-time = "2025-08-11T12:08:07.931Z" }, + { url = "https://files.pythonhosted.org/packages/3a/09/8fe02d204473e14c0af3affd50af9078839dfca1742f025cca765435d6b4/multidict-6.6.4-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:e1b93790ed0bc26feb72e2f08299691ceb6da5e9e14a0d13cc74f1869af327a0", size = 246926, upload-time = "2025-08-11T12:08:09.467Z" }, + { url = "https://files.pythonhosted.org/packages/37/3d/7b1e10d774a6df5175ecd3c92bff069e77bed9ec2a927fdd4ff5fe182f67/multidict-6.6.4-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:a506a77ddee1efcca81ecbeae27ade3e09cdf21a8ae854d766c2bb4f14053f92", size = 243383, upload-time = "2025-08-11T12:08:10.981Z" }, + { url = "https://files.pythonhosted.org/packages/50/b0/a6fae46071b645ae98786ab738447de1ef53742eaad949f27e960864bb49/multidict-6.6.4-cp313-cp313t-win32.whl", hash = "sha256:f93b2b2279883d1d0a9e1bd01f312d6fc315c5e4c1f09e112e4736e2f650bc4e", size = 47775, upload-time = "2025-08-11T12:08:12.439Z" }, + { url = "https://files.pythonhosted.org/packages/b2/0a/2436550b1520091af0600dff547913cb2d66fbac27a8c33bc1b1bccd8d98/multidict-6.6.4-cp313-cp313t-win_amd64.whl", hash = "sha256:6d46a180acdf6e87cc41dc15d8f5c2986e1e8739dc25dbb7dac826731ef381a4", size = 53100, upload-time = "2025-08-11T12:08:13.823Z" }, + { url = "https://files.pythonhosted.org/packages/97/ea/43ac51faff934086db9c072a94d327d71b7d8b40cd5dcb47311330929ef0/multidict-6.6.4-cp313-cp313t-win_arm64.whl", hash = "sha256:756989334015e3335d087a27331659820d53ba432befdef6a718398b0a8493ad", size = 45501, upload-time = "2025-08-11T12:08:15.173Z" }, + { url = "https://files.pythonhosted.org/packages/fd/69/b547032297c7e63ba2af494edba695d781af8a0c6e89e4d06cf848b21d80/multidict-6.6.4-py3-none-any.whl", hash = "sha256:27d8f8e125c07cb954e54d75d04905a9bba8a439c1d84aca94949d4d03d8601c", size = 12313, upload-time = "2025-08-11T12:08:46.891Z" }, ] [[package]] @@ -3275,6 +3481,12 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/83/45/4798f4d00df13eae3bfdf726c9244bcb495ab5bd588c0eed93a2f2dd67f3/mypy-1.18.2-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a431a6f1ef14cf8c144c6b14793a23ec4eae3db28277c358136e79d7d062f62d", size = 13338709, upload-time = "2025-09-19T00:11:03.358Z" }, { url = 
"https://files.pythonhosted.org/packages/d7/09/479f7358d9625172521a87a9271ddd2441e1dab16a09708f056e97007207/mypy-1.18.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:7ab28cc197f1dd77a67e1c6f35cd1f8e8b73ed2217e4fc005f9e6a504e46e7ba", size = 13529806, upload-time = "2025-09-19T00:10:26.073Z" }, { url = "https://files.pythonhosted.org/packages/71/cf/ac0f2c7e9d0ea3c75cd99dff7aec1c9df4a1376537cb90e4c882267ee7e9/mypy-1.18.2-cp313-cp313-win_amd64.whl", hash = "sha256:0e2785a84b34a72ba55fb5daf079a1003a34c05b22238da94fcae2bbe46f3544", size = 9833262, upload-time = "2025-09-19T00:10:40.035Z" }, + { url = "https://files.pythonhosted.org/packages/5a/0c/7d5300883da16f0063ae53996358758b2a2df2a09c72a5061fa79a1f5006/mypy-1.18.2-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:62f0e1e988ad41c2a110edde6c398383a889d95b36b3e60bcf155f5164c4fdce", size = 12893775, upload-time = "2025-09-19T00:10:03.814Z" }, + { url = "https://files.pythonhosted.org/packages/50/df/2cffbf25737bdb236f60c973edf62e3e7b4ee1c25b6878629e88e2cde967/mypy-1.18.2-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:8795a039bab805ff0c1dfdb8cd3344642c2b99b8e439d057aba30850b8d3423d", size = 11936852, upload-time = "2025-09-19T00:10:51.631Z" }, + { url = "https://files.pythonhosted.org/packages/be/50/34059de13dd269227fb4a03be1faee6e2a4b04a2051c82ac0a0b5a773c9a/mypy-1.18.2-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6ca1e64b24a700ab5ce10133f7ccd956a04715463d30498e64ea8715236f9c9c", size = 12480242, upload-time = "2025-09-19T00:11:07.955Z" }, + { url = "https://files.pythonhosted.org/packages/5b/11/040983fad5132d85914c874a2836252bbc57832065548885b5bb5b0d4359/mypy-1.18.2-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d924eef3795cc89fecf6bedc6ed32b33ac13e8321344f6ddbf8ee89f706c05cb", size = 13326683, upload-time = "2025-09-19T00:09:55.572Z" }, + { url = "https://files.pythonhosted.org/packages/e9/ba/89b2901dd77414dd7a8c8729985832a5735053be15b744c18e4586e506ef/mypy-1.18.2-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:20c02215a080e3a2be3aa50506c67242df1c151eaba0dcbc1e4e557922a26075", size = 13514749, upload-time = "2025-09-19T00:10:44.827Z" }, + { url = "https://files.pythonhosted.org/packages/25/bc/cc98767cffd6b2928ba680f3e5bc969c4152bf7c2d83f92f5a504b92b0eb/mypy-1.18.2-cp314-cp314-win_amd64.whl", hash = "sha256:749b5f83198f1ca64345603118a6f01a4e99ad4bf9d103ddc5a3200cc4614adf", size = 9982959, upload-time = "2025-09-19T00:10:37.344Z" }, { url = "https://files.pythonhosted.org/packages/87/e3/be76d87158ebafa0309946c4a73831974d4d6ab4f4ef40c3b53a385a66fd/mypy-1.18.2-py3-none-any.whl", hash = "sha256:22a1748707dd62b58d2ae53562ffc4d7f8bcc727e8ac7cbc69c053ddc874d47e", size = 2352367, upload-time = "2025-09-19T00:10:15.489Z" }, ] @@ -3289,11 +3501,11 @@ wheels = [ [[package]] name = "narwhals" -version = "2.8.0" +version = "2.6.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/ae/05/79a5b5a795f36c1aaa002d194c1ef71e5d95f7e1900155bbfde734815ab9/narwhals-2.8.0.tar.gz", hash = "sha256:52e0b22d54718264ae703bd9293af53b04abc995a1414908c3b807ba8c913858", size = 574277, upload-time = "2025-10-13T08:44:28.81Z" } +sdist = { url = "https://files.pythonhosted.org/packages/00/dd/40ff412dabf90ef6b99266b0b74f217bb88733541733849e0153a108c750/narwhals-2.6.0.tar.gz", hash = "sha256:5c9e2ba923e6a0051017e146184e49fb793548936f978ce130c9f55a9a81240e", size = 561649, upload-time = 
"2025-09-29T09:08:56.482Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/1d/86/ac808ecb94322a3f1ea31627d13ab3e50dd4333564d711e0e481ad0f4586/narwhals-2.8.0-py3-none-any.whl", hash = "sha256:6304856676ba4a79fd34148bda63aed8060dd6edb1227edf3659ce5e091de73c", size = 415852, upload-time = "2025-10-13T08:44:25.421Z" }, + { url = "https://files.pythonhosted.org/packages/50/3b/0e2c535c3e6970cfc5763b67f6cc31accaab35a7aa3e322fb6a12830450f/narwhals-2.6.0-py3-none-any.whl", hash = "sha256:3215ea42afb452c6c8527e79cefbe542b674aa08d7e2e99d46b2c9708870e0d4", size = 408435, upload-time = "2025-09-29T09:08:54.503Z" }, ] [[package]] @@ -3371,7 +3583,7 @@ wheels = [ [[package]] name = "nicegui" -version = "3.1.0" +version = "3.0.3" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "aiofiles" }, @@ -3393,11 +3605,12 @@ dependencies = [ { name = "starlette" }, { name = "typing-extensions" }, { name = "uvicorn", extra = ["standard"] }, + { name = "vbuild" }, { name = "watchfiles" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/f8/23/1a709ac5ae3b91674b7fb75b7eb2db851cf09b55a809936c106efa46c52f/nicegui-3.1.0.tar.gz", hash = "sha256:1496b9719292cdd64fb89a9e560a197f517347e8808c829fae9ba28a294d78ae", size = 20342929, upload-time = "2025-10-22T08:33:07.869Z" } +sdist = { url = "https://files.pythonhosted.org/packages/4c/01/da7808283ce437338982aab3aa2b4940ddb63d921a8cf9b1c42d8aac383e/nicegui-3.0.3.tar.gz", hash = "sha256:01ac5e29c7f9b8ef9661d684f2172ff0005fc501004fadb04343e1fb7c9c8891", size = 20101621, upload-time = "2025-10-06T15:14:25.595Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/a6/4e/ce491f04b07d2530a441f01fa028c936c353ee92e0ad153bf6b7cf2c3e9d/nicegui-3.1.0-py3-none-any.whl", hash = "sha256:c5ad4ac120eaec138bee71514e9cba2f9d27384b7e9201c11f309a90a128a96f", size = 20996176, upload-time = "2025-10-22T08:33:05.178Z" }, + { url = "https://files.pythonhosted.org/packages/7e/49/bad16f65f3f933059b6eb52505c6efcc7c97d66e25681a8ecab2e1240c86/nicegui-3.0.3-py3-none-any.whl", hash = "sha256:f40c37866cedf91d75e8bdfc2730348a007dec69baf8080dfadc0ab3381ab2ab", size = 20749450, upload-time = "2025-10-06T15:14:22.858Z" }, ] [package.optional-dependencies] @@ -3444,20 +3657,19 @@ wheels = [ [[package]] name = "nox" -version = "2025.10.16" +version = "2025.5.1" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "argcomplete" }, { name = "attrs" }, { name = "colorlog" }, { name = "dependency-groups" }, - { name = "humanize" }, { name = "packaging" }, { name = "virtualenv" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/3d/3e/16440c5a2c1e867a862479cf7d11d05d0e0f2bb133de3921cb3ed6e37e57/nox-2025.10.16.tar.gz", hash = "sha256:fca1e7504384dbc91dddef3fec45d04572f23c882a87241e2c793b77fe1c9259", size = 4030246, upload-time = "2025-10-17T01:53:07.458Z" } +sdist = { url = "https://files.pythonhosted.org/packages/b4/80/47712208c410defec169992e57c179f0f4d92f5dd17ba8daca50a8077e23/nox-2025.5.1.tar.gz", hash = "sha256:2a571dfa7a58acc726521ac3cd8184455ebcdcbf26401c7b737b5bc6701427b2", size = 4023334, upload-time = "2025-05-01T16:35:48.056Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/6f/68/f4a9cd43dcd6cf9138e0fb39b3b8f17a82b121e8c26499d864e64c329cd0/nox-2025.10.16-py3-none-any.whl", hash = "sha256:b4ef28709d5fb0d964ccc987c8863f76ed860700fabd04ad557252df3562a7e5", size = 74405, upload-time = "2025-10-17T01:53:05.792Z" }, + { url = 
"https://files.pythonhosted.org/packages/a6/be/7b423b02b09eb856beffe76fe8c4121c99852db74dd12a422dcb72d1134e/nox-2025.5.1-py3-none-any.whl", hash = "sha256:56abd55cf37ff523c254fcec4d152ed51e5fe80e2ab8317221d8b828ac970a31", size = 71753, upload-time = "2025-05-01T16:35:46.037Z" }, ] [package.optional-dependencies] @@ -3515,6 +3727,28 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/5b/8e/3ab61a730bdbbc201bb245a71102aa609f0008b9ed15255500a99cd7f780/numpy-2.3.3-cp313-cp313t-win32.whl", hash = "sha256:a333b4ed33d8dc2b373cc955ca57babc00cd6f9009991d9edc5ddbc1bac36bcd", size = 6442776, upload-time = "2025-09-09T15:57:45.793Z" }, { url = "https://files.pythonhosted.org/packages/1c/3a/e22b766b11f6030dc2decdeff5c2fb1610768055603f9f3be88b6d192fb2/numpy-2.3.3-cp313-cp313t-win_amd64.whl", hash = "sha256:4384a169c4d8f97195980815d6fcad04933a7e1ab3b530921c3fef7a1c63426d", size = 12927281, upload-time = "2025-09-09T15:57:47.492Z" }, { url = "https://files.pythonhosted.org/packages/7b/42/c2e2bc48c5e9b2a83423f99733950fbefd86f165b468a3d85d52b30bf782/numpy-2.3.3-cp313-cp313t-win_arm64.whl", hash = "sha256:75370986cc0bc66f4ce5110ad35aae6d182cc4ce6433c40ad151f53690130bf1", size = 10265275, upload-time = "2025-09-09T15:57:49.647Z" }, + { url = "https://files.pythonhosted.org/packages/6b/01/342ad585ad82419b99bcf7cebe99e61da6bedb89e213c5fd71acc467faee/numpy-2.3.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:cd052f1fa6a78dee696b58a914b7229ecfa41f0a6d96dc663c1220a55e137593", size = 20951527, upload-time = "2025-09-09T15:57:52.006Z" }, + { url = "https://files.pythonhosted.org/packages/ef/d8/204e0d73fc1b7a9ee80ab1fe1983dd33a4d64a4e30a05364b0208e9a241a/numpy-2.3.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:414a97499480067d305fcac9716c29cf4d0d76db6ebf0bf3cbce666677f12652", size = 14186159, upload-time = "2025-09-09T15:57:54.407Z" }, + { url = "https://files.pythonhosted.org/packages/22/af/f11c916d08f3a18fb8ba81ab72b5b74a6e42ead4c2846d270eb19845bf74/numpy-2.3.3-cp314-cp314-macosx_14_0_arm64.whl", hash = "sha256:50a5fe69f135f88a2be9b6ca0481a68a136f6febe1916e4920e12f1a34e708a7", size = 5114624, upload-time = "2025-09-09T15:57:56.5Z" }, + { url = "https://files.pythonhosted.org/packages/fb/11/0ed919c8381ac9d2ffacd63fd1f0c34d27e99cab650f0eb6f110e6ae4858/numpy-2.3.3-cp314-cp314-macosx_14_0_x86_64.whl", hash = "sha256:b912f2ed2b67a129e6a601e9d93d4fa37bef67e54cac442a2f588a54afe5c67a", size = 6642627, upload-time = "2025-09-09T15:57:58.206Z" }, + { url = "https://files.pythonhosted.org/packages/ee/83/deb5f77cb0f7ba6cb52b91ed388b47f8f3c2e9930d4665c600408d9b90b9/numpy-2.3.3-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9e318ee0596d76d4cb3d78535dc005fa60e5ea348cd131a51e99d0bdbe0b54fe", size = 14296926, upload-time = "2025-09-09T15:58:00.035Z" }, + { url = "https://files.pythonhosted.org/packages/77/cc/70e59dcb84f2b005d4f306310ff0a892518cc0c8000a33d0e6faf7ca8d80/numpy-2.3.3-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ce020080e4a52426202bdb6f7691c65bb55e49f261f31a8f506c9f6bc7450421", size = 16638958, upload-time = "2025-09-09T15:58:02.738Z" }, + { url = "https://files.pythonhosted.org/packages/b6/5a/b2ab6c18b4257e099587d5b7f903317bd7115333ad8d4ec4874278eafa61/numpy-2.3.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:e6687dc183aa55dae4a705b35f9c0f8cb178bcaa2f029b241ac5356221d5c021", size = 16071920, upload-time = "2025-09-09T15:58:05.029Z" }, + { url = 
"https://files.pythonhosted.org/packages/b8/f1/8b3fdc44324a259298520dd82147ff648979bed085feeacc1250ef1656c0/numpy-2.3.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:d8f3b1080782469fdc1718c4ed1d22549b5fb12af0d57d35e992158a772a37cf", size = 18577076, upload-time = "2025-09-09T15:58:07.745Z" }, + { url = "https://files.pythonhosted.org/packages/f0/a1/b87a284fb15a42e9274e7fcea0dad259d12ddbf07c1595b26883151ca3b4/numpy-2.3.3-cp314-cp314-win32.whl", hash = "sha256:cb248499b0bc3be66ebd6578b83e5acacf1d6cb2a77f2248ce0e40fbec5a76d0", size = 6366952, upload-time = "2025-09-09T15:58:10.096Z" }, + { url = "https://files.pythonhosted.org/packages/70/5f/1816f4d08f3b8f66576d8433a66f8fa35a5acfb3bbd0bf6c31183b003f3d/numpy-2.3.3-cp314-cp314-win_amd64.whl", hash = "sha256:691808c2b26b0f002a032c73255d0bd89751425f379f7bcd22d140db593a96e8", size = 12919322, upload-time = "2025-09-09T15:58:12.138Z" }, + { url = "https://files.pythonhosted.org/packages/8c/de/072420342e46a8ea41c324a555fa90fcc11637583fb8df722936aed1736d/numpy-2.3.3-cp314-cp314-win_arm64.whl", hash = "sha256:9ad12e976ca7b10f1774b03615a2a4bab8addce37ecc77394d8e986927dc0dfe", size = 10478630, upload-time = "2025-09-09T15:58:14.64Z" }, + { url = "https://files.pythonhosted.org/packages/d5/df/ee2f1c0a9de7347f14da5dd3cd3c3b034d1b8607ccb6883d7dd5c035d631/numpy-2.3.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9cc48e09feb11e1db00b320e9d30a4151f7369afb96bd0e48d942d09da3a0d00", size = 21047987, upload-time = "2025-09-09T15:58:16.889Z" }, + { url = "https://files.pythonhosted.org/packages/d6/92/9453bdc5a4e9e69cf4358463f25e8260e2ffc126d52e10038b9077815989/numpy-2.3.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:901bf6123879b7f251d3631967fd574690734236075082078e0571977c6a8e6a", size = 14301076, upload-time = "2025-09-09T15:58:20.343Z" }, + { url = "https://files.pythonhosted.org/packages/13/77/1447b9eb500f028bb44253105bd67534af60499588a5149a94f18f2ca917/numpy-2.3.3-cp314-cp314t-macosx_14_0_arm64.whl", hash = "sha256:7f025652034199c301049296b59fa7d52c7e625017cae4c75d8662e377bf487d", size = 5229491, upload-time = "2025-09-09T15:58:22.481Z" }, + { url = "https://files.pythonhosted.org/packages/3d/f9/d72221b6ca205f9736cb4b2ce3b002f6e45cd67cd6a6d1c8af11a2f0b649/numpy-2.3.3-cp314-cp314t-macosx_14_0_x86_64.whl", hash = "sha256:533ca5f6d325c80b6007d4d7fb1984c303553534191024ec6a524a4c92a5935a", size = 6737913, upload-time = "2025-09-09T15:58:24.569Z" }, + { url = "https://files.pythonhosted.org/packages/3c/5f/d12834711962ad9c46af72f79bb31e73e416ee49d17f4c797f72c96b6ca5/numpy-2.3.3-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0edd58682a399824633b66885d699d7de982800053acf20be1eaa46d92009c54", size = 14352811, upload-time = "2025-09-09T15:58:26.416Z" }, + { url = "https://files.pythonhosted.org/packages/a1/0d/fdbec6629d97fd1bebed56cd742884e4eead593611bbe1abc3eb40d304b2/numpy-2.3.3-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:367ad5d8fbec5d9296d18478804a530f1191e24ab4d75ab408346ae88045d25e", size = 16702689, upload-time = "2025-09-09T15:58:28.831Z" }, + { url = "https://files.pythonhosted.org/packages/9b/09/0a35196dc5575adde1eb97ddfbc3e1687a814f905377621d18ca9bc2b7dd/numpy-2.3.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:8f6ac61a217437946a1fa48d24c47c91a0c4f725237871117dea264982128097", size = 16133855, upload-time = "2025-09-09T15:58:31.349Z" }, + { url = 
"https://files.pythonhosted.org/packages/7a/ca/c9de3ea397d576f1b6753eaa906d4cdef1bf97589a6d9825a349b4729cc2/numpy-2.3.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:179a42101b845a816d464b6fe9a845dfaf308fdfc7925387195570789bb2c970", size = 18652520, upload-time = "2025-09-09T15:58:33.762Z" }, + { url = "https://files.pythonhosted.org/packages/fd/c2/e5ed830e08cd0196351db55db82f65bc0ab05da6ef2b72a836dcf1936d2f/numpy-2.3.3-cp314-cp314t-win32.whl", hash = "sha256:1250c5d3d2562ec4174bce2e3a1523041595f9b651065e4a4473f5f48a6bc8a5", size = 6515371, upload-time = "2025-09-09T15:58:36.04Z" }, + { url = "https://files.pythonhosted.org/packages/47/c7/b0f6b5b67f6788a0725f744496badbb604d226bf233ba716683ebb47b570/numpy-2.3.3-cp314-cp314t-win_amd64.whl", hash = "sha256:b37a0b2e5935409daebe82c1e42274d30d9dd355852529eab91dab8dcca7419f", size = 13112576, upload-time = "2025-09-09T15:58:37.927Z" }, + { url = "https://files.pythonhosted.org/packages/06/b9/33bba5ff6fb679aa0b1f8a07e853f002a6b04b9394db3069a1270a7784ca/numpy-2.3.3-cp314-cp314t-win_arm64.whl", hash = "sha256:78c9f6560dc7e6b3990e32df7ea1a50bbd0e2a111e05209963f5ddcab7073b0b", size = 10545953, upload-time = "2025-09-09T15:58:40.576Z" }, { url = "https://files.pythonhosted.org/packages/b8/f2/7e0a37cfced2644c9563c529f29fa28acbd0960dde32ece683aafa6f4949/numpy-2.3.3-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:1e02c7159791cd481e1e6d5ddd766b62a4d5acf8df4d4d1afe35ee9c5c33a41e", size = 21131019, upload-time = "2025-09-09T15:58:42.838Z" }, { url = "https://files.pythonhosted.org/packages/1a/7e/3291f505297ed63831135a6cc0f474da0c868a1f31b0dd9a9f03a7a0d2ed/numpy-2.3.3-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:dca2d0fc80b3893ae72197b39f69d55a3cd8b17ea1b50aa4c62de82419936150", size = 14376288, upload-time = "2025-09-09T15:58:45.425Z" }, { url = "https://files.pythonhosted.org/packages/bf/4b/ae02e985bdeee73d7b5abdefeb98aef1207e96d4c0621ee0cf228ddfac3c/numpy-2.3.3-pp311-pypy311_pp73-macosx_14_0_arm64.whl", hash = "sha256:99683cbe0658f8271b333a1b1b4bb3173750ad59c0c61f5bbdc5b318918fffe3", size = 5305425, upload-time = "2025-09-09T15:58:48.6Z" }, @@ -3892,6 +4126,17 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/60/d4/bae8e4f26afb2c23bea69d2f6d566132584d1c3a5fe89ee8c17b718cab67/orjson-3.11.3-cp313-cp313-win32.whl", hash = "sha256:2039b7847ba3eec1f5886e75e6763a16e18c68a63efc4b029ddf994821e2e66b", size = 136216, upload-time = "2025-08-26T17:45:57.182Z" }, { url = "https://files.pythonhosted.org/packages/88/76/224985d9f127e121c8cad882cea55f0ebe39f97925de040b75ccd4b33999/orjson-3.11.3-cp313-cp313-win_amd64.whl", hash = "sha256:29be5ac4164aa8bdcba5fa0700a3c9c316b411d8ed9d39ef8a882541bd452fae", size = 131362, upload-time = "2025-08-26T17:45:58.56Z" }, { url = "https://files.pythonhosted.org/packages/e2/cf/0dce7a0be94bd36d1346be5067ed65ded6adb795fdbe3abd234c8d576d01/orjson-3.11.3-cp313-cp313-win_arm64.whl", hash = "sha256:18bd1435cb1f2857ceb59cfb7de6f92593ef7b831ccd1b9bfb28ca530e539dce", size = 125989, upload-time = "2025-08-26T17:45:59.95Z" }, + { url = "https://files.pythonhosted.org/packages/ef/77/d3b1fef1fc6aaeed4cbf3be2b480114035f4df8fa1a99d2dac1d40d6e924/orjson-3.11.3-cp314-cp314-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl", hash = "sha256:cf4b81227ec86935568c7edd78352a92e97af8da7bd70bdfdaa0d2e0011a1ab4", size = 238115, upload-time = "2025-08-26T17:46:01.669Z" }, + { url = 
"https://files.pythonhosted.org/packages/e4/6d/468d21d49bb12f900052edcfbf52c292022d0a323d7828dc6376e6319703/orjson-3.11.3-cp314-cp314-macosx_15_0_arm64.whl", hash = "sha256:bc8bc85b81b6ac9fc4dae393a8c159b817f4c2c9dee5d12b773bddb3b95fc07e", size = 127493, upload-time = "2025-08-26T17:46:03.466Z" }, + { url = "https://files.pythonhosted.org/packages/67/46/1e2588700d354aacdf9e12cc2d98131fb8ac6f31ca65997bef3863edb8ff/orjson-3.11.3-cp314-cp314-manylinux_2_34_aarch64.whl", hash = "sha256:88dcfc514cfd1b0de038443c7b3e6a9797ffb1b3674ef1fd14f701a13397f82d", size = 122998, upload-time = "2025-08-26T17:46:04.803Z" }, + { url = "https://files.pythonhosted.org/packages/3b/94/11137c9b6adb3779f1b34fd98be51608a14b430dbc02c6d41134fbba484c/orjson-3.11.3-cp314-cp314-manylinux_2_34_x86_64.whl", hash = "sha256:d61cd543d69715d5fc0a690c7c6f8dcc307bc23abef9738957981885f5f38229", size = 132915, upload-time = "2025-08-26T17:46:06.237Z" }, + { url = "https://files.pythonhosted.org/packages/10/61/dccedcf9e9bcaac09fdabe9eaee0311ca92115699500efbd31950d878833/orjson-3.11.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:2b7b153ed90ababadbef5c3eb39549f9476890d339cf47af563aea7e07db2451", size = 130907, upload-time = "2025-08-26T17:46:07.581Z" }, + { url = "https://files.pythonhosted.org/packages/0e/fd/0e935539aa7b08b3ca0f817d73034f7eb506792aae5ecc3b7c6e679cdf5f/orjson-3.11.3-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:7909ae2460f5f494fecbcd10613beafe40381fd0316e35d6acb5f3a05bfda167", size = 403852, upload-time = "2025-08-26T17:46:08.982Z" }, + { url = "https://files.pythonhosted.org/packages/4a/2b/50ae1a5505cd1043379132fdb2adb8a05f37b3e1ebffe94a5073321966fd/orjson-3.11.3-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:2030c01cbf77bc67bee7eef1e7e31ecf28649353987775e3583062c752da0077", size = 146309, upload-time = "2025-08-26T17:46:10.576Z" }, + { url = "https://files.pythonhosted.org/packages/cd/1d/a473c158e380ef6f32753b5f39a69028b25ec5be331c2049a2201bde2e19/orjson-3.11.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:a0169ebd1cbd94b26c7a7ad282cf5c2744fce054133f959e02eb5265deae1872", size = 135424, upload-time = "2025-08-26T17:46:12.386Z" }, + { url = "https://files.pythonhosted.org/packages/da/09/17d9d2b60592890ff7382e591aa1d9afb202a266b180c3d4049b1ec70e4a/orjson-3.11.3-cp314-cp314-win32.whl", hash = "sha256:0c6d7328c200c349e3a4c6d8c83e0a5ad029bdc2d417f234152bf34842d0fc8d", size = 136266, upload-time = "2025-08-26T17:46:13.853Z" }, + { url = "https://files.pythonhosted.org/packages/15/58/358f6846410a6b4958b74734727e582ed971e13d335d6c7ce3e47730493e/orjson-3.11.3-cp314-cp314-win_amd64.whl", hash = "sha256:317bbe2c069bbc757b1a2e4105b64aacd3bc78279b66a6b9e51e846e4809f804", size = 131351, upload-time = "2025-08-26T17:46:15.27Z" }, + { url = "https://files.pythonhosted.org/packages/28/01/d6b274a0635be0468d4dbd9cafe80c47105937a0d42434e805e67cd2ed8b/orjson-3.11.3-cp314-cp314-win_arm64.whl", hash = "sha256:e8f6a7a27d7b7bec81bd5924163e9af03d49bbb63013f107b48eb5d16db711bc", size = 125985, upload-time = "2025-08-26T17:46:16.67Z" }, ] [[package]] @@ -3972,6 +4217,19 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/44/23/78d645adc35d94d1ac4f2a3c4112ab6f5b8999f4898b8cdf01252f8df4a9/pandas-2.3.3-cp313-cp313t-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:900f47d8f20860de523a1ac881c4c36d65efcb2eb850e6948140fa781736e110", size = 12121912, upload-time = "2025-09-29T23:23:05.042Z" }, { url = 
"https://files.pythonhosted.org/packages/53/da/d10013df5e6aaef6b425aa0c32e1fc1f3e431e4bcabd420517dceadce354/pandas-2.3.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:a45c765238e2ed7d7c608fc5bc4a6f88b642f2f01e70c0c23d2224dd21829d86", size = 12712160, upload-time = "2025-09-29T23:23:28.57Z" }, { url = "https://files.pythonhosted.org/packages/bd/17/e756653095a083d8a37cbd816cb87148debcfcd920129b25f99dd8d04271/pandas-2.3.3-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:c4fc4c21971a1a9f4bdb4c73978c7f7256caa3e62b323f70d6cb80db583350bc", size = 13199233, upload-time = "2025-09-29T23:24:24.876Z" }, + { url = "https://files.pythonhosted.org/packages/04/fd/74903979833db8390b73b3a8a7d30d146d710bd32703724dd9083950386f/pandas-2.3.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:ee15f284898e7b246df8087fc82b87b01686f98ee67d85a17b7ab44143a3a9a0", size = 11540635, upload-time = "2025-09-29T23:25:52.486Z" }, + { url = "https://files.pythonhosted.org/packages/21/00/266d6b357ad5e6d3ad55093a7e8efc7dd245f5a842b584db9f30b0f0a287/pandas-2.3.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1611aedd912e1ff81ff41c745822980c49ce4a7907537be8692c8dbc31924593", size = 10759079, upload-time = "2025-09-29T23:26:33.204Z" }, + { url = "https://files.pythonhosted.org/packages/ca/05/d01ef80a7a3a12b2f8bbf16daba1e17c98a2f039cbc8e2f77a2c5a63d382/pandas-2.3.3-cp314-cp314-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6d2cefc361461662ac48810cb14365a365ce864afe85ef1f447ff5a1e99ea81c", size = 11814049, upload-time = "2025-09-29T23:27:15.384Z" }, + { url = "https://files.pythonhosted.org/packages/15/b2/0e62f78c0c5ba7e3d2c5945a82456f4fac76c480940f805e0b97fcbc2f65/pandas-2.3.3-cp314-cp314-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ee67acbbf05014ea6c763beb097e03cd629961c8a632075eeb34247120abcb4b", size = 12332638, upload-time = "2025-09-29T23:27:51.625Z" }, + { url = "https://files.pythonhosted.org/packages/c5/33/dd70400631b62b9b29c3c93d2feee1d0964dc2bae2e5ad7a6c73a7f25325/pandas-2.3.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:c46467899aaa4da076d5abc11084634e2d197e9460643dd455ac3db5856b24d6", size = 12886834, upload-time = "2025-09-29T23:28:21.289Z" }, + { url = "https://files.pythonhosted.org/packages/d3/18/b5d48f55821228d0d2692b34fd5034bb185e854bdb592e9c640f6290e012/pandas-2.3.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:6253c72c6a1d990a410bc7de641d34053364ef8bcd3126f7e7450125887dffe3", size = 13409925, upload-time = "2025-09-29T23:28:58.261Z" }, + { url = "https://files.pythonhosted.org/packages/a6/3d/124ac75fcd0ecc09b8fdccb0246ef65e35b012030defb0e0eba2cbbbe948/pandas-2.3.3-cp314-cp314-win_amd64.whl", hash = "sha256:1b07204a219b3b7350abaae088f451860223a52cfb8a6c53358e7948735158e5", size = 11109071, upload-time = "2025-09-29T23:32:27.484Z" }, + { url = "https://files.pythonhosted.org/packages/89/9c/0e21c895c38a157e0faa1fb64587a9226d6dd46452cac4532d80c3c4a244/pandas-2.3.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:2462b1a365b6109d275250baaae7b760fd25c726aaca0054649286bcfbb3e8ec", size = 12048504, upload-time = "2025-09-29T23:29:31.47Z" }, + { url = "https://files.pythonhosted.org/packages/d7/82/b69a1c95df796858777b68fbe6a81d37443a33319761d7c652ce77797475/pandas-2.3.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:0242fe9a49aa8b4d78a4fa03acb397a58833ef6199e9aa40a95f027bb3a1b6e7", size = 11410702, upload-time = "2025-09-29T23:29:54.591Z" }, + { url = 
"https://files.pythonhosted.org/packages/f9/88/702bde3ba0a94b8c73a0181e05144b10f13f29ebfc2150c3a79062a8195d/pandas-2.3.3-cp314-cp314t-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a21d830e78df0a515db2b3d2f5570610f5e6bd2e27749770e8bb7b524b89b450", size = 11634535, upload-time = "2025-09-29T23:30:21.003Z" }, + { url = "https://files.pythonhosted.org/packages/a4/1e/1bac1a839d12e6a82ec6cb40cda2edde64a2013a66963293696bbf31fbbb/pandas-2.3.3-cp314-cp314t-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2e3ebdb170b5ef78f19bfb71b0dc5dc58775032361fa188e814959b74d726dd5", size = 12121582, upload-time = "2025-09-29T23:30:43.391Z" }, + { url = "https://files.pythonhosted.org/packages/44/91/483de934193e12a3b1d6ae7c8645d083ff88dec75f46e827562f1e4b4da6/pandas-2.3.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:d051c0e065b94b7a3cea50eb1ec32e912cd96dba41647eb24104b6c6c14c5788", size = 12699963, upload-time = "2025-09-29T23:31:10.009Z" }, + { url = "https://files.pythonhosted.org/packages/70/44/5191d2e4026f86a2a109053e194d3ba7a31a2d10a9c2348368c63ed4e85a/pandas-2.3.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:3869faf4bd07b3b66a9f462417d0ca3a9df29a9f6abd5d0d0dbab15dac7abe87", size = 13202175, upload-time = "2025-09-29T23:31:59.173Z" }, ] [[package]] @@ -4084,6 +4342,28 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/49/6b/00187a044f98255225f172de653941e61da37104a9ea60e4f6887717e2b5/pillow-11.3.0-cp313-cp313t-win32.whl", hash = "sha256:2a3117c06b8fb646639dce83694f2f9eac405472713fcb1ae887469c0d4f6788", size = 6277546, upload-time = "2025-07-01T09:15:11.311Z" }, { url = "https://files.pythonhosted.org/packages/e8/5c/6caaba7e261c0d75bab23be79f1d06b5ad2a2ae49f028ccec801b0e853d6/pillow-11.3.0-cp313-cp313t-win_amd64.whl", hash = "sha256:857844335c95bea93fb39e0fa2726b4d9d758850b34075a7e3ff4f4fa3aa3b31", size = 6985102, upload-time = "2025-07-01T09:15:13.164Z" }, { url = "https://files.pythonhosted.org/packages/f3/7e/b623008460c09a0cb38263c93b828c666493caee2eb34ff67f778b87e58c/pillow-11.3.0-cp313-cp313t-win_arm64.whl", hash = "sha256:8797edc41f3e8536ae4b10897ee2f637235c94f27404cac7297f7b607dd0716e", size = 2424803, upload-time = "2025-07-01T09:15:15.695Z" }, + { url = "https://files.pythonhosted.org/packages/73/f4/04905af42837292ed86cb1b1dabe03dce1edc008ef14c473c5c7e1443c5d/pillow-11.3.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:d9da3df5f9ea2a89b81bb6087177fb1f4d1c7146d583a3fe5c672c0d94e55e12", size = 5278520, upload-time = "2025-07-01T09:15:17.429Z" }, + { url = "https://files.pythonhosted.org/packages/41/b0/33d79e377a336247df6348a54e6d2a2b85d644ca202555e3faa0cf811ecc/pillow-11.3.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:0b275ff9b04df7b640c59ec5a3cb113eefd3795a8df80bac69646ef699c6981a", size = 4686116, upload-time = "2025-07-01T09:15:19.423Z" }, + { url = "https://files.pythonhosted.org/packages/49/2d/ed8bc0ab219ae8768f529597d9509d184fe8a6c4741a6864fea334d25f3f/pillow-11.3.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0743841cabd3dba6a83f38a92672cccbd69af56e3e91777b0ee7f4dba4385632", size = 5864597, upload-time = "2025-07-03T13:10:38.404Z" }, + { url = "https://files.pythonhosted.org/packages/b5/3d/b932bb4225c80b58dfadaca9d42d08d0b7064d2d1791b6a237f87f661834/pillow-11.3.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:2465a69cf967b8b49ee1b96d76718cd98c4e925414ead59fdf75cf0fd07df673", size = 7638246, upload-time = "2025-07-03T13:10:44.987Z" }, + { url = 
"https://files.pythonhosted.org/packages/09/b5/0487044b7c096f1b48f0d7ad416472c02e0e4bf6919541b111efd3cae690/pillow-11.3.0-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:41742638139424703b4d01665b807c6468e23e699e8e90cffefe291c5832b027", size = 5973336, upload-time = "2025-07-01T09:15:21.237Z" }, + { url = "https://files.pythonhosted.org/packages/a8/2d/524f9318f6cbfcc79fbc004801ea6b607ec3f843977652fdee4857a7568b/pillow-11.3.0-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:93efb0b4de7e340d99057415c749175e24c8864302369e05914682ba642e5d77", size = 6642699, upload-time = "2025-07-01T09:15:23.186Z" }, + { url = "https://files.pythonhosted.org/packages/6f/d2/a9a4f280c6aefedce1e8f615baaa5474e0701d86dd6f1dede66726462bbd/pillow-11.3.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7966e38dcd0fa11ca390aed7c6f20454443581d758242023cf36fcb319b1a874", size = 6083789, upload-time = "2025-07-01T09:15:25.1Z" }, + { url = "https://files.pythonhosted.org/packages/fe/54/86b0cd9dbb683a9d5e960b66c7379e821a19be4ac5810e2e5a715c09a0c0/pillow-11.3.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:98a9afa7b9007c67ed84c57c9e0ad86a6000da96eaa638e4f8abe5b65ff83f0a", size = 6720386, upload-time = "2025-07-01T09:15:27.378Z" }, + { url = "https://files.pythonhosted.org/packages/e7/95/88efcaf384c3588e24259c4203b909cbe3e3c2d887af9e938c2022c9dd48/pillow-11.3.0-cp314-cp314-win32.whl", hash = "sha256:02a723e6bf909e7cea0dac1b0e0310be9d7650cd66222a5f1c571455c0a45214", size = 6370911, upload-time = "2025-07-01T09:15:29.294Z" }, + { url = "https://files.pythonhosted.org/packages/2e/cc/934e5820850ec5eb107e7b1a72dd278140731c669f396110ebc326f2a503/pillow-11.3.0-cp314-cp314-win_amd64.whl", hash = "sha256:a418486160228f64dd9e9efcd132679b7a02a5f22c982c78b6fc7dab3fefb635", size = 7117383, upload-time = "2025-07-01T09:15:31.128Z" }, + { url = "https://files.pythonhosted.org/packages/d6/e9/9c0a616a71da2a5d163aa37405e8aced9a906d574b4a214bede134e731bc/pillow-11.3.0-cp314-cp314-win_arm64.whl", hash = "sha256:155658efb5e044669c08896c0c44231c5e9abcaadbc5cd3648df2f7c0b96b9a6", size = 2511385, upload-time = "2025-07-01T09:15:33.328Z" }, + { url = "https://files.pythonhosted.org/packages/1a/33/c88376898aff369658b225262cd4f2659b13e8178e7534df9e6e1fa289f6/pillow-11.3.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:59a03cdf019efbfeeed910bf79c7c93255c3d54bc45898ac2a4140071b02b4ae", size = 5281129, upload-time = "2025-07-01T09:15:35.194Z" }, + { url = "https://files.pythonhosted.org/packages/1f/70/d376247fb36f1844b42910911c83a02d5544ebd2a8bad9efcc0f707ea774/pillow-11.3.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:f8a5827f84d973d8636e9dc5764af4f0cf2318d26744b3d902931701b0d46653", size = 4689580, upload-time = "2025-07-01T09:15:37.114Z" }, + { url = "https://files.pythonhosted.org/packages/eb/1c/537e930496149fbac69efd2fc4329035bbe2e5475b4165439e3be9cb183b/pillow-11.3.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ee92f2fd10f4adc4b43d07ec5e779932b4eb3dbfbc34790ada5a6669bc095aa6", size = 5902860, upload-time = "2025-07-03T13:10:50.248Z" }, + { url = "https://files.pythonhosted.org/packages/bd/57/80f53264954dcefeebcf9dae6e3eb1daea1b488f0be8b8fef12f79a3eb10/pillow-11.3.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c96d333dcf42d01f47b37e0979b6bd73ec91eae18614864622d9b87bbd5bbf36", size = 7670694, upload-time = "2025-07-03T13:10:56.432Z" }, + { url = 
"https://files.pythonhosted.org/packages/70/ff/4727d3b71a8578b4587d9c276e90efad2d6fe0335fd76742a6da08132e8c/pillow-11.3.0-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4c96f993ab8c98460cd0c001447bff6194403e8b1d7e149ade5f00594918128b", size = 6005888, upload-time = "2025-07-01T09:15:39.436Z" }, + { url = "https://files.pythonhosted.org/packages/05/ae/716592277934f85d3be51d7256f3636672d7b1abfafdc42cf3f8cbd4b4c8/pillow-11.3.0-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:41342b64afeba938edb034d122b2dda5db2139b9a4af999729ba8818e0056477", size = 6670330, upload-time = "2025-07-01T09:15:41.269Z" }, + { url = "https://files.pythonhosted.org/packages/e7/bb/7fe6cddcc8827b01b1a9766f5fdeb7418680744f9082035bdbabecf1d57f/pillow-11.3.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:068d9c39a2d1b358eb9f245ce7ab1b5c3246c7c8c7d9ba58cfa5b43146c06e50", size = 6114089, upload-time = "2025-07-01T09:15:43.13Z" }, + { url = "https://files.pythonhosted.org/packages/8b/f5/06bfaa444c8e80f1a8e4bff98da9c83b37b5be3b1deaa43d27a0db37ef84/pillow-11.3.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:a1bc6ba083b145187f648b667e05a2534ecc4b9f2784c2cbe3089e44868f2b9b", size = 6748206, upload-time = "2025-07-01T09:15:44.937Z" }, + { url = "https://files.pythonhosted.org/packages/f0/77/bc6f92a3e8e6e46c0ca78abfffec0037845800ea38c73483760362804c41/pillow-11.3.0-cp314-cp314t-win32.whl", hash = "sha256:118ca10c0d60b06d006be10a501fd6bbdfef559251ed31b794668ed569c87e12", size = 6377370, upload-time = "2025-07-01T09:15:46.673Z" }, + { url = "https://files.pythonhosted.org/packages/4a/82/3a721f7d69dca802befb8af08b7c79ebcab461007ce1c18bd91a5d5896f9/pillow-11.3.0-cp314-cp314t-win_amd64.whl", hash = "sha256:8924748b688aa210d79883357d102cd64690e56b923a186f35a82cbc10f997db", size = 7121500, upload-time = "2025-07-01T09:15:48.512Z" }, + { url = "https://files.pythonhosted.org/packages/89/c7/5572fa4a3f45740eaab6ae86fcdf7195b55beac1371ac8c619d880cfe948/pillow-11.3.0-cp314-cp314t-win_arm64.whl", hash = "sha256:79ea0d14d3ebad43ec77ad5272e6ff9bba5b679ef73375ea760261207fa8e0aa", size = 2512835, upload-time = "2025-07-01T09:15:50.399Z" }, { url = "https://files.pythonhosted.org/packages/9e/e3/6fa84033758276fb31da12e5fb66ad747ae83b93c67af17f8c6ff4cc8f34/pillow-11.3.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7c8ec7a017ad1bd562f93dbd8505763e688d388cde6e4a010ae1486916e713e6", size = 5270566, upload-time = "2025-07-01T09:16:19.801Z" }, { url = "https://files.pythonhosted.org/packages/5b/ee/e8d2e1ab4892970b561e1ba96cbd59c0d28cf66737fc44abb2aec3795a4e/pillow-11.3.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:9ab6ae226de48019caa8074894544af5b53a117ccb9d3b3dcb2871464c829438", size = 4654618, upload-time = "2025-07-01T09:16:21.818Z" }, { url = "https://files.pythonhosted.org/packages/f2/6d/17f80f4e1f0761f02160fc433abd4109fa1548dcfdca46cfdadaf9efa565/pillow-11.3.0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:fe27fb049cdcca11f11a7bfda64043c37b30e6b91f10cb5bab275806c32f6ab3", size = 4874248, upload-time = "2025-07-03T13:11:20.738Z" }, @@ -4158,11 +4438,11 @@ wheels = [ [[package]] name = "platformdirs" -version = "4.5.0" +version = "4.4.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/61/33/9611380c2bdb1225fdef633e2a9610622310fed35ab11dac9620972ee088/platformdirs-4.5.0.tar.gz", hash = 
"sha256:70ddccdd7c99fc5942e9fc25636a8b34d04c24b335100223152c2803e4063312", size = 21632, upload-time = "2025-10-08T17:44:48.791Z" } +sdist = { url = "https://files.pythonhosted.org/packages/23/e8/21db9c9987b0e728855bd57bff6984f67952bea55d6f75e055c46b5383e8/platformdirs-4.4.0.tar.gz", hash = "sha256:ca753cf4d81dc309bc67b0ea38fd15dc97bc30ce419a7f58d13eb3bf14c4febf", size = 21634, upload-time = "2025-08-26T14:32:04.268Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/73/cb/ac7874b3e5d58441674fb70742e6c374b28b0c7cb988d37d991cde47166c/platformdirs-4.5.0-py3-none-any.whl", hash = "sha256:e578a81bb873cbb89a41fcc904c7ef523cc18284b7e3b3ccf06aca1403b7ebd3", size = 18651, upload-time = "2025-10-08T17:44:47.223Z" }, + { url = "https://files.pythonhosted.org/packages/40/4b/2028861e724d3bd36227adfa20d3fd24c3fc6d52032f4a93c133be5d17ce/platformdirs-4.4.0-py3-none-any.whl", hash = "sha256:abd01743f24e5287cd7a5db3752faf1a2d65353f38ec26d98e25a6db65958c85", size = 18654, upload-time = "2025-08-26T14:32:02.735Z" }, ] [[package]] @@ -4225,71 +4505,101 @@ wheels = [ [[package]] name = "propcache" -version = "0.4.1" +version = "0.4.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/9e/da/e9fc233cf63743258bff22b3dfa7ea5baef7b5bc324af47a0ad89b8ffc6f/propcache-0.4.1.tar.gz", hash = "sha256:f48107a8c637e80362555f37ecf49abe20370e557cc4ab374f04ec4423c97c3d", size = 46442, upload-time = "2025-10-08T19:49:02.291Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/8c/d4/4e2c9aaf7ac2242b9358f98dccd8f90f2605402f5afeff6c578682c2c491/propcache-0.4.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:60a8fda9644b7dfd5dece8c61d8a85e271cb958075bfc4e01083c148b61a7caf", size = 80208, upload-time = "2025-10-08T19:46:24.597Z" }, - { url = "https://files.pythonhosted.org/packages/c2/21/d7b68e911f9c8e18e4ae43bdbc1e1e9bbd971f8866eb81608947b6f585ff/propcache-0.4.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c30b53e7e6bda1d547cabb47c825f3843a0a1a42b0496087bb58d8fedf9f41b5", size = 45777, upload-time = "2025-10-08T19:46:25.733Z" }, - { url = "https://files.pythonhosted.org/packages/d3/1d/11605e99ac8ea9435651ee71ab4cb4bf03f0949586246476a25aadfec54a/propcache-0.4.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:6918ecbd897443087a3b7cd978d56546a812517dcaaca51b49526720571fa93e", size = 47647, upload-time = "2025-10-08T19:46:27.304Z" }, - { url = "https://files.pythonhosted.org/packages/58/1a/3c62c127a8466c9c843bccb503d40a273e5cc69838805f322e2826509e0d/propcache-0.4.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3d902a36df4e5989763425a8ab9e98cd8ad5c52c823b34ee7ef307fd50582566", size = 214929, upload-time = "2025-10-08T19:46:28.62Z" }, - { url = "https://files.pythonhosted.org/packages/56/b9/8fa98f850960b367c4b8fe0592e7fc341daa7a9462e925228f10a60cf74f/propcache-0.4.1-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:a9695397f85973bb40427dedddf70d8dc4a44b22f1650dd4af9eedf443d45165", size = 221778, upload-time = "2025-10-08T19:46:30.358Z" }, - { url = "https://files.pythonhosted.org/packages/46/a6/0ab4f660eb59649d14b3d3d65c439421cf2f87fe5dd68591cbe3c1e78a89/propcache-0.4.1-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:2bb07ffd7eaad486576430c89f9b215f9e4be68c4866a96e97db9e97fead85dc", size = 228144, upload-time = "2025-10-08T19:46:32.607Z" }, - { url = 
"https://files.pythonhosted.org/packages/52/6a/57f43e054fb3d3a56ac9fc532bc684fc6169a26c75c353e65425b3e56eef/propcache-0.4.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fd6f30fdcf9ae2a70abd34da54f18da086160e4d7d9251f81f3da0ff84fc5a48", size = 210030, upload-time = "2025-10-08T19:46:33.969Z" }, - { url = "https://files.pythonhosted.org/packages/40/e2/27e6feebb5f6b8408fa29f5efbb765cd54c153ac77314d27e457a3e993b7/propcache-0.4.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:fc38cba02d1acba4e2869eef1a57a43dfbd3d49a59bf90dda7444ec2be6a5570", size = 208252, upload-time = "2025-10-08T19:46:35.309Z" }, - { url = "https://files.pythonhosted.org/packages/9e/f8/91c27b22ccda1dbc7967f921c42825564fa5336a01ecd72eb78a9f4f53c2/propcache-0.4.1-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:67fad6162281e80e882fb3ec355398cf72864a54069d060321f6cd0ade95fe85", size = 202064, upload-time = "2025-10-08T19:46:36.993Z" }, - { url = "https://files.pythonhosted.org/packages/f2/26/7f00bd6bd1adba5aafe5f4a66390f243acab58eab24ff1a08bebb2ef9d40/propcache-0.4.1-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:f10207adf04d08bec185bae14d9606a1444715bc99180f9331c9c02093e1959e", size = 212429, upload-time = "2025-10-08T19:46:38.398Z" }, - { url = "https://files.pythonhosted.org/packages/84/89/fd108ba7815c1117ddca79c228f3f8a15fc82a73bca8b142eb5de13b2785/propcache-0.4.1-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:e9b0d8d0845bbc4cfcdcbcdbf5086886bc8157aa963c31c777ceff7846c77757", size = 216727, upload-time = "2025-10-08T19:46:39.732Z" }, - { url = "https://files.pythonhosted.org/packages/79/37/3ec3f7e3173e73f1d600495d8b545b53802cbf35506e5732dd8578db3724/propcache-0.4.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:981333cb2f4c1896a12f4ab92a9cc8f09ea664e9b7dbdc4eff74627af3a11c0f", size = 205097, upload-time = "2025-10-08T19:46:41.025Z" }, - { url = "https://files.pythonhosted.org/packages/61/b0/b2631c19793f869d35f47d5a3a56fb19e9160d3c119f15ac7344fc3ccae7/propcache-0.4.1-cp311-cp311-win32.whl", hash = "sha256:f1d2f90aeec838a52f1c1a32fe9a619fefd5e411721a9117fbf82aea638fe8a1", size = 38084, upload-time = "2025-10-08T19:46:42.693Z" }, - { url = "https://files.pythonhosted.org/packages/f4/78/6cce448e2098e9f3bfc91bb877f06aa24b6ccace872e39c53b2f707c4648/propcache-0.4.1-cp311-cp311-win_amd64.whl", hash = "sha256:364426a62660f3f699949ac8c621aad6977be7126c5807ce48c0aeb8e7333ea6", size = 41637, upload-time = "2025-10-08T19:46:43.778Z" }, - { url = "https://files.pythonhosted.org/packages/9c/e9/754f180cccd7f51a39913782c74717c581b9cc8177ad0e949f4d51812383/propcache-0.4.1-cp311-cp311-win_arm64.whl", hash = "sha256:e53f3a38d3510c11953f3e6a33f205c6d1b001129f972805ca9b42fc308bc239", size = 38064, upload-time = "2025-10-08T19:46:44.872Z" }, - { url = "https://files.pythonhosted.org/packages/a2/0f/f17b1b2b221d5ca28b4b876e8bb046ac40466513960646bda8e1853cdfa2/propcache-0.4.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:e153e9cd40cc8945138822807139367f256f89c6810c2634a4f6902b52d3b4e2", size = 80061, upload-time = "2025-10-08T19:46:46.075Z" }, - { url = "https://files.pythonhosted.org/packages/76/47/8ccf75935f51448ba9a16a71b783eb7ef6b9ee60f5d14c7f8a8a79fbeed7/propcache-0.4.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:cd547953428f7abb73c5ad82cbb32109566204260d98e41e5dfdc682eb7f8403", size = 46037, upload-time = "2025-10-08T19:46:47.23Z" }, - { url = 
"https://files.pythonhosted.org/packages/0a/b6/5c9a0e42df4d00bfb4a3cbbe5cf9f54260300c88a0e9af1f47ca5ce17ac0/propcache-0.4.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f048da1b4f243fc44f205dfd320933a951b8d89e0afd4c7cacc762a8b9165207", size = 47324, upload-time = "2025-10-08T19:46:48.384Z" }, - { url = "https://files.pythonhosted.org/packages/9e/d3/6c7ee328b39a81ee877c962469f1e795f9db87f925251efeb0545e0020d0/propcache-0.4.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ec17c65562a827bba85e3872ead335f95405ea1674860d96483a02f5c698fa72", size = 225505, upload-time = "2025-10-08T19:46:50.055Z" }, - { url = "https://files.pythonhosted.org/packages/01/5d/1c53f4563490b1d06a684742cc6076ef944bc6457df6051b7d1a877c057b/propcache-0.4.1-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:405aac25c6394ef275dee4c709be43745d36674b223ba4eb7144bf4d691b7367", size = 230242, upload-time = "2025-10-08T19:46:51.815Z" }, - { url = "https://files.pythonhosted.org/packages/20/e1/ce4620633b0e2422207c3cb774a0ee61cac13abc6217763a7b9e2e3f4a12/propcache-0.4.1-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:0013cb6f8dde4b2a2f66903b8ba740bdfe378c943c4377a200551ceb27f379e4", size = 238474, upload-time = "2025-10-08T19:46:53.208Z" }, - { url = "https://files.pythonhosted.org/packages/46/4b/3aae6835b8e5f44ea6a68348ad90f78134047b503765087be2f9912140ea/propcache-0.4.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:15932ab57837c3368b024473a525e25d316d8353016e7cc0e5ba9eb343fbb1cf", size = 221575, upload-time = "2025-10-08T19:46:54.511Z" }, - { url = "https://files.pythonhosted.org/packages/6e/a5/8a5e8678bcc9d3a1a15b9a29165640d64762d424a16af543f00629c87338/propcache-0.4.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:031dce78b9dc099f4c29785d9cf5577a3faf9ebf74ecbd3c856a7b92768c3df3", size = 216736, upload-time = "2025-10-08T19:46:56.212Z" }, - { url = "https://files.pythonhosted.org/packages/f1/63/b7b215eddeac83ca1c6b934f89d09a625aa9ee4ba158338854c87210cc36/propcache-0.4.1-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:ab08df6c9a035bee56e31af99be621526bd237bea9f32def431c656b29e41778", size = 213019, upload-time = "2025-10-08T19:46:57.595Z" }, - { url = "https://files.pythonhosted.org/packages/57/74/f580099a58c8af587cac7ba19ee7cb418506342fbbe2d4a4401661cca886/propcache-0.4.1-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:4d7af63f9f93fe593afbf104c21b3b15868efb2c21d07d8732c0c4287e66b6a6", size = 220376, upload-time = "2025-10-08T19:46:59.067Z" }, - { url = "https://files.pythonhosted.org/packages/c4/ee/542f1313aff7eaf19c2bb758c5d0560d2683dac001a1c96d0774af799843/propcache-0.4.1-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:cfc27c945f422e8b5071b6e93169679e4eb5bf73bbcbf1ba3ae3a83d2f78ebd9", size = 226988, upload-time = "2025-10-08T19:47:00.544Z" }, - { url = "https://files.pythonhosted.org/packages/8f/18/9c6b015dd9c6930f6ce2229e1f02fb35298b847f2087ea2b436a5bfa7287/propcache-0.4.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:35c3277624a080cc6ec6f847cbbbb5b49affa3598c4535a0a4682a697aaa5c75", size = 215615, upload-time = "2025-10-08T19:47:01.968Z" }, - { url = "https://files.pythonhosted.org/packages/80/9e/e7b85720b98c45a45e1fca6a177024934dc9bc5f4d5dd04207f216fc33ed/propcache-0.4.1-cp312-cp312-win32.whl", hash = "sha256:671538c2262dadb5ba6395e26c1731e1d52534bfe9ae56d0b5573ce539266aa8", size = 
38066, upload-time = "2025-10-08T19:47:03.503Z" }, - { url = "https://files.pythonhosted.org/packages/54/09/d19cff2a5aaac632ec8fc03737b223597b1e347416934c1b3a7df079784c/propcache-0.4.1-cp312-cp312-win_amd64.whl", hash = "sha256:cb2d222e72399fcf5890d1d5cc1060857b9b236adff2792ff48ca2dfd46c81db", size = 41655, upload-time = "2025-10-08T19:47:04.973Z" }, - { url = "https://files.pythonhosted.org/packages/68/ab/6b5c191bb5de08036a8c697b265d4ca76148efb10fa162f14af14fb5f076/propcache-0.4.1-cp312-cp312-win_arm64.whl", hash = "sha256:204483131fb222bdaaeeea9f9e6c6ed0cac32731f75dfc1d4a567fc1926477c1", size = 37789, upload-time = "2025-10-08T19:47:06.077Z" }, - { url = "https://files.pythonhosted.org/packages/bf/df/6d9c1b6ac12b003837dde8a10231a7344512186e87b36e855bef32241942/propcache-0.4.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:43eedf29202c08550aac1d14e0ee619b0430aaef78f85864c1a892294fbc28cf", size = 77750, upload-time = "2025-10-08T19:47:07.648Z" }, - { url = "https://files.pythonhosted.org/packages/8b/e8/677a0025e8a2acf07d3418a2e7ba529c9c33caf09d3c1f25513023c1db56/propcache-0.4.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:d62cdfcfd89ccb8de04e0eda998535c406bf5e060ffd56be6c586cbcc05b3311", size = 44780, upload-time = "2025-10-08T19:47:08.851Z" }, - { url = "https://files.pythonhosted.org/packages/89/a4/92380f7ca60f99ebae761936bc48a72a639e8a47b29050615eef757cb2a7/propcache-0.4.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:cae65ad55793da34db5f54e4029b89d3b9b9490d8abe1b4c7ab5d4b8ec7ebf74", size = 46308, upload-time = "2025-10-08T19:47:09.982Z" }, - { url = "https://files.pythonhosted.org/packages/2d/48/c5ac64dee5262044348d1d78a5f85dd1a57464a60d30daee946699963eb3/propcache-0.4.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:333ddb9031d2704a301ee3e506dc46b1fe5f294ec198ed6435ad5b6a085facfe", size = 208182, upload-time = "2025-10-08T19:47:11.319Z" }, - { url = "https://files.pythonhosted.org/packages/c6/0c/cd762dd011a9287389a6a3eb43aa30207bde253610cca06824aeabfe9653/propcache-0.4.1-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:fd0858c20f078a32cf55f7e81473d96dcf3b93fd2ccdb3d40fdf54b8573df3af", size = 211215, upload-time = "2025-10-08T19:47:13.146Z" }, - { url = "https://files.pythonhosted.org/packages/30/3e/49861e90233ba36890ae0ca4c660e95df565b2cd15d4a68556ab5865974e/propcache-0.4.1-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:678ae89ebc632c5c204c794f8dab2837c5f159aeb59e6ed0539500400577298c", size = 218112, upload-time = "2025-10-08T19:47:14.913Z" }, - { url = "https://files.pythonhosted.org/packages/f1/8b/544bc867e24e1bd48f3118cecd3b05c694e160a168478fa28770f22fd094/propcache-0.4.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d472aeb4fbf9865e0c6d622d7f4d54a4e101a89715d8904282bb5f9a2f476c3f", size = 204442, upload-time = "2025-10-08T19:47:16.277Z" }, - { url = "https://files.pythonhosted.org/packages/50/a6/4282772fd016a76d3e5c0df58380a5ea64900afd836cec2c2f662d1b9bb3/propcache-0.4.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:4d3df5fa7e36b3225954fba85589da77a0fe6a53e3976de39caf04a0db4c36f1", size = 199398, upload-time = "2025-10-08T19:47:17.962Z" }, - { url = "https://files.pythonhosted.org/packages/3e/ec/d8a7cd406ee1ddb705db2139f8a10a8a427100347bd698e7014351c7af09/propcache-0.4.1-cp313-cp313-musllinux_1_2_armv7l.whl", hash = 
"sha256:ee17f18d2498f2673e432faaa71698032b0127ebf23ae5974eeaf806c279df24", size = 196920, upload-time = "2025-10-08T19:47:19.355Z" }, - { url = "https://files.pythonhosted.org/packages/f6/6c/f38ab64af3764f431e359f8baf9e0a21013e24329e8b85d2da32e8ed07ca/propcache-0.4.1-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:580e97762b950f993ae618e167e7be9256b8353c2dcd8b99ec100eb50f5286aa", size = 203748, upload-time = "2025-10-08T19:47:21.338Z" }, - { url = "https://files.pythonhosted.org/packages/d6/e3/fa846bd70f6534d647886621388f0a265254d30e3ce47e5c8e6e27dbf153/propcache-0.4.1-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:501d20b891688eb8e7aa903021f0b72d5a55db40ffaab27edefd1027caaafa61", size = 205877, upload-time = "2025-10-08T19:47:23.059Z" }, - { url = "https://files.pythonhosted.org/packages/e2/39/8163fc6f3133fea7b5f2827e8eba2029a0277ab2c5beee6c1db7b10fc23d/propcache-0.4.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:9a0bd56e5b100aef69bd8562b74b46254e7c8812918d3baa700c8a8009b0af66", size = 199437, upload-time = "2025-10-08T19:47:24.445Z" }, - { url = "https://files.pythonhosted.org/packages/93/89/caa9089970ca49c7c01662bd0eeedfe85494e863e8043565aeb6472ce8fe/propcache-0.4.1-cp313-cp313-win32.whl", hash = "sha256:bcc9aaa5d80322bc2fb24bb7accb4a30f81e90ab8d6ba187aec0744bc302ad81", size = 37586, upload-time = "2025-10-08T19:47:25.736Z" }, - { url = "https://files.pythonhosted.org/packages/f5/ab/f76ec3c3627c883215b5c8080debb4394ef5a7a29be811f786415fc1e6fd/propcache-0.4.1-cp313-cp313-win_amd64.whl", hash = "sha256:381914df18634f5494334d201e98245c0596067504b9372d8cf93f4bb23e025e", size = 40790, upload-time = "2025-10-08T19:47:26.847Z" }, - { url = "https://files.pythonhosted.org/packages/59/1b/e71ae98235f8e2ba5004d8cb19765a74877abf189bc53fc0c80d799e56c3/propcache-0.4.1-cp313-cp313-win_arm64.whl", hash = "sha256:8873eb4460fd55333ea49b7d189749ecf6e55bf85080f11b1c4530ed3034cba1", size = 37158, upload-time = "2025-10-08T19:47:27.961Z" }, - { url = "https://files.pythonhosted.org/packages/83/ce/a31bbdfc24ee0dcbba458c8175ed26089cf109a55bbe7b7640ed2470cfe9/propcache-0.4.1-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:92d1935ee1f8d7442da9c0c4fa7ac20d07e94064184811b685f5c4fada64553b", size = 81451, upload-time = "2025-10-08T19:47:29.445Z" }, - { url = "https://files.pythonhosted.org/packages/25/9c/442a45a470a68456e710d96cacd3573ef26a1d0a60067e6a7d5e655621ed/propcache-0.4.1-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:473c61b39e1460d386479b9b2f337da492042447c9b685f28be4f74d3529e566", size = 46374, upload-time = "2025-10-08T19:47:30.579Z" }, - { url = "https://files.pythonhosted.org/packages/f4/bf/b1d5e21dbc3b2e889ea4327044fb16312a736d97640fb8b6aa3f9c7b3b65/propcache-0.4.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:c0ef0aaafc66fbd87842a3fe3902fd889825646bc21149eafe47be6072725835", size = 48396, upload-time = "2025-10-08T19:47:31.79Z" }, - { url = "https://files.pythonhosted.org/packages/f4/04/5b4c54a103d480e978d3c8a76073502b18db0c4bc17ab91b3cb5092ad949/propcache-0.4.1-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f95393b4d66bfae908c3ca8d169d5f79cd65636ae15b5e7a4f6e67af675adb0e", size = 275950, upload-time = "2025-10-08T19:47:33.481Z" }, - { url = "https://files.pythonhosted.org/packages/b4/c1/86f846827fb969c4b78b0af79bba1d1ea2156492e1b83dea8b8a6ae27395/propcache-0.4.1-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = 
"sha256:c07fda85708bc48578467e85099645167a955ba093be0a2dcba962195676e859", size = 273856, upload-time = "2025-10-08T19:47:34.906Z" }, - { url = "https://files.pythonhosted.org/packages/36/1d/fc272a63c8d3bbad6878c336c7a7dea15e8f2d23a544bda43205dfa83ada/propcache-0.4.1-cp313-cp313t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:af223b406d6d000830c6f65f1e6431783fc3f713ba3e6cc8c024d5ee96170a4b", size = 280420, upload-time = "2025-10-08T19:47:36.338Z" }, - { url = "https://files.pythonhosted.org/packages/07/0c/01f2219d39f7e53d52e5173bcb09c976609ba30209912a0680adfb8c593a/propcache-0.4.1-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a78372c932c90ee474559c5ddfffd718238e8673c340dc21fe45c5b8b54559a0", size = 263254, upload-time = "2025-10-08T19:47:37.692Z" }, - { url = "https://files.pythonhosted.org/packages/2d/18/cd28081658ce597898f0c4d174d4d0f3c5b6d4dc27ffafeef835c95eb359/propcache-0.4.1-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:564d9f0d4d9509e1a870c920a89b2fec951b44bf5ba7d537a9e7c1ccec2c18af", size = 261205, upload-time = "2025-10-08T19:47:39.659Z" }, - { url = "https://files.pythonhosted.org/packages/7a/71/1f9e22eb8b8316701c2a19fa1f388c8a3185082607da8e406a803c9b954e/propcache-0.4.1-cp313-cp313t-musllinux_1_2_armv7l.whl", hash = "sha256:17612831fda0138059cc5546f4d12a2aacfb9e47068c06af35c400ba58ba7393", size = 247873, upload-time = "2025-10-08T19:47:41.084Z" }, - { url = "https://files.pythonhosted.org/packages/4a/65/3d4b61f36af2b4eddba9def857959f1016a51066b4f1ce348e0cf7881f58/propcache-0.4.1-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:41a89040cb10bd345b3c1a873b2bf36413d48da1def52f268a055f7398514874", size = 262739, upload-time = "2025-10-08T19:47:42.51Z" }, - { url = "https://files.pythonhosted.org/packages/2a/42/26746ab087faa77c1c68079b228810436ccd9a5ce9ac85e2b7307195fd06/propcache-0.4.1-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:e35b88984e7fa64aacecea39236cee32dd9bd8c55f57ba8a75cf2399553f9bd7", size = 263514, upload-time = "2025-10-08T19:47:43.927Z" }, - { url = "https://files.pythonhosted.org/packages/94/13/630690fe201f5502d2403dd3cfd451ed8858fe3c738ee88d095ad2ff407b/propcache-0.4.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:6f8b465489f927b0df505cbe26ffbeed4d6d8a2bbc61ce90eb074ff129ef0ab1", size = 257781, upload-time = "2025-10-08T19:47:45.448Z" }, - { url = "https://files.pythonhosted.org/packages/92/f7/1d4ec5841505f423469efbfc381d64b7b467438cd5a4bbcbb063f3b73d27/propcache-0.4.1-cp313-cp313t-win32.whl", hash = "sha256:2ad890caa1d928c7c2965b48f3a3815c853180831d0e5503d35cf00c472f4717", size = 41396, upload-time = "2025-10-08T19:47:47.202Z" }, - { url = "https://files.pythonhosted.org/packages/48/f0/615c30622316496d2cbbc29f5985f7777d3ada70f23370608c1d3e081c1f/propcache-0.4.1-cp313-cp313t-win_amd64.whl", hash = "sha256:f7ee0e597f495cf415bcbd3da3caa3bd7e816b74d0d52b8145954c5e6fd3ff37", size = 44897, upload-time = "2025-10-08T19:47:48.336Z" }, - { url = "https://files.pythonhosted.org/packages/fd/ca/6002e46eccbe0e33dcd4069ef32f7f1c9e243736e07adca37ae8c4830ec3/propcache-0.4.1-cp313-cp313t-win_arm64.whl", hash = "sha256:929d7cbe1f01bb7baffb33dc14eb5691c95831450a26354cd210a8155170c93a", size = 39789, upload-time = "2025-10-08T19:47:49.876Z" }, - { url = "https://files.pythonhosted.org/packages/5b/5a/bc7b4a4ef808fa59a816c17b20c4bef6884daebbdf627ff2a161da67da19/propcache-0.4.1-py3-none-any.whl", hash = 
"sha256:af2a6052aeb6cf17d3e46ee169099044fd8224cbaf75c76a2ef596e8163e2237", size = 13305, upload-time = "2025-10-08T19:49:00.792Z" }, +sdist = { url = "https://files.pythonhosted.org/packages/ea/c8/d70cd26d845c6d85479d8f5a11a0fd7151e9bc4794cc5e6eb5a790f12df8/propcache-0.4.0.tar.gz", hash = "sha256:c1ad731253eb738f9cadd9fa1844e019576c70bca6a534252e97cf33a57da529", size = 45187, upload-time = "2025-10-04T21:57:39.546Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/f9/c4/72b8d41bdbae8aea9c25b869d7cdc3ab5f281f979d8aea30f4646ad12743/propcache-0.4.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:6a6a36b94c09711d6397d79006ca47901539fbc602c853d794c39abd6a326549", size = 80035, upload-time = "2025-10-04T21:55:11.266Z" }, + { url = "https://files.pythonhosted.org/packages/e9/f8/f87115733e221408a363f3a9753419cf2d4be7a8a7ec9dc0788325cd23f1/propcache-0.4.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:da47070e1340a1639aca6b1c18fe1f1f3d8d64d3a1f9ddc67b94475f44cd40f3", size = 45622, upload-time = "2025-10-04T21:55:12.41Z" }, + { url = "https://files.pythonhosted.org/packages/5d/cc/391f883248faa2efdf6886bdb12ac8edf20eac0863770d8d925450d8cc76/propcache-0.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:de536cf796abc5b58d11c0ad56580215d231d9554ea4bb6b8b1b3bed80aa3234", size = 47517, upload-time = "2025-10-04T21:55:13.819Z" }, + { url = "https://files.pythonhosted.org/packages/3e/d2/5593b59999f42d1044c5ab5f238be1f9d537ab91b0c910727986d520a6e9/propcache-0.4.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f5c82af8e329c3cdc3e717dd3c7b2ff1a218b6de611f6ce76ee34967570a9de9", size = 214540, upload-time = "2025-10-04T21:55:15.206Z" }, + { url = "https://files.pythonhosted.org/packages/bb/5d/028cdc0eaa1a66ee2ec339a08b5e6ec15e7e71dac86103bebe53ba10dc0f/propcache-0.4.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:abe04e7aa5ab2e4056fcf3255ebee2071e4a427681f76d4729519e292c46ecc1", size = 221603, upload-time = "2025-10-04T21:55:16.704Z" }, + { url = "https://files.pythonhosted.org/packages/e8/f8/e30aee5f59ea21647faef9c82bd67fa510295c34908a7a38571def555881/propcache-0.4.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:075ca32384294434344760fdcb95f7833e1d7cf7c4e55f0e726358140179da35", size = 227749, upload-time = "2025-10-04T21:55:18.082Z" }, + { url = "https://files.pythonhosted.org/packages/d7/85/0757dfc73931bea63b18d26b2c5e7bf13113ca60fe0e5f19905f104bcf6a/propcache-0.4.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:626ec13592928b677f48ff5861040b604b635e93d8e2162fb638397ea83d07e8", size = 209792, upload-time = "2025-10-04T21:55:19.475Z" }, + { url = "https://files.pythonhosted.org/packages/d2/45/35a6a6241f46948c0ac2418d5bf50cfbcd9735739f42028a1c11e9066a72/propcache-0.4.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:02e071548b6a376e173b0102c3f55dc16e7d055b5307d487e844c320e38cacf2", size = 207979, upload-time = "2025-10-04T21:55:21.164Z" }, + { url = "https://files.pythonhosted.org/packages/e3/d1/5930396e75c9ed477958eac1496e6fb08794d823e9b14a459f1c0e20f338/propcache-0.4.0-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:2af6de831a26f42a3f94592964becd8d7f238551786d7525807f02e53defbd13", size = 201923, upload-time = "2025-10-04T21:55:22.5Z" }, + { url = 
"https://files.pythonhosted.org/packages/98/72/675455f22bcefeda16907461f9a9a4a93709ff2095e8cf799bdb6c78e030/propcache-0.4.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:bd6c6dba1a3b8949e08c4280071c86e38cb602f02e0ed6659234108c7a7cd710", size = 212117, upload-time = "2025-10-04T21:55:23.858Z" }, + { url = "https://files.pythonhosted.org/packages/13/27/c533302ff80a49a848c3dbd01bb18f87b06826602b3b37043ff00d6b5005/propcache-0.4.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:783e91595cf9b66c2deda17f2e8748ae8591aa9f7c65dcab038872bfe83c5bb1", size = 216594, upload-time = "2025-10-04T21:55:25.169Z" }, + { url = "https://files.pythonhosted.org/packages/63/91/8250fbb601fd16c427e5f469132f27e175c6692dbfa784ef1266dc652e55/propcache-0.4.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:c3f4b125285d354a627eb37f3ea7c13b8842c7c0d47783581d0df0e272dbf5f0", size = 204863, upload-time = "2025-10-04T21:55:26.511Z" }, + { url = "https://files.pythonhosted.org/packages/34/c4/fd945a9a25845aafb6094b9fa6a88286e4e1c55686e60172c60fe669e0d1/propcache-0.4.0-cp311-cp311-win32.whl", hash = "sha256:71c45f02ffbb8a21040ae816ceff7f6cd749ffac29fc0f9daa42dc1a9652d577", size = 37948, upload-time = "2025-10-04T21:55:27.719Z" }, + { url = "https://files.pythonhosted.org/packages/42/02/f30e7304661ffe8d51ff4050e06765ac2df6d95cf23c999dfe5a0cd0eb4c/propcache-0.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:7d51f70f77950f8efafed4383865d3533eeee52d8a0dd1c35b65f24de41de4e0", size = 41511, upload-time = "2025-10-04T21:55:29.15Z" }, + { url = "https://files.pythonhosted.org/packages/a5/f2/edd329d86085438a1ba32cf4cf45fc982d18343bed1f16b218b516c3340d/propcache-0.4.0-cp311-cp311-win_arm64.whl", hash = "sha256:858eaabd2191dd0da5272993ad08a748b5d3ae1aefabea8aee619b45c2af4a64", size = 37957, upload-time = "2025-10-04T21:55:30.31Z" }, + { url = "https://files.pythonhosted.org/packages/b3/cf/3f88344261d69f8021256f20e82e820c5df3aba96e5ba9b5fdd3685d3a9f/propcache-0.4.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:381c84a445efb8c9168f1393a5a7c566de22edc42bfe207a142fff919b37f5d9", size = 79846, upload-time = "2025-10-04T21:55:31.447Z" }, + { url = "https://files.pythonhosted.org/packages/be/fa/0286fc92764eead9dcfee639b67828daa32e61dd0f1618831547141eb28b/propcache-0.4.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:5a531d29d7b873b12730972237c48b1a4e5980b98cf21b3f09fa4710abd3a8c3", size = 45850, upload-time = "2025-10-04T21:55:32.637Z" }, + { url = "https://files.pythonhosted.org/packages/c7/83/57840656f972f8a67992eee40781e4066657776dcb889f49df0e8eecb112/propcache-0.4.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:cd6e22255ed73efeaaeb1765505a66a48a9ec9ebc919fce5ad490fe5e33b1555", size = 47171, upload-time = "2025-10-04T21:55:33.819Z" }, + { url = "https://files.pythonhosted.org/packages/9f/8e/e0a0bd376c3440476b924eca517589ee535bb4520420d178268bf88558ba/propcache-0.4.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d9a8d277dc218ddf04ec243a53ac309b1afcebe297c0526a8f82320139b56289", size = 225306, upload-time = "2025-10-04T21:55:35.312Z" }, + { url = "https://files.pythonhosted.org/packages/84/fe/76884442da1bab6d4353ba1c43fdc4a770c3b3973f3ac7620a7205402fdd/propcache-0.4.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:399c73201d88c856a994916200d7cba41d7687096f8eb5139eb68f02785dc3f7", size = 230013, upload-time = "2025-10-04T21:55:37.005Z" }, + { url = 
"https://files.pythonhosted.org/packages/f4/b7/322af273bd1136bb7e13628821fb855c9f61d64651c73fea71dded68dda5/propcache-0.4.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a1d5e474d43c238035b74ecf997f655afa67f979bae591ac838bb3fbe3076392", size = 238331, upload-time = "2025-10-04T21:55:38.713Z" }, + { url = "https://files.pythonhosted.org/packages/84/5e/036d2b105927ae7f179346c9911d16c345f4dba5a19a063f23a8d28acfbd/propcache-0.4.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:22f589652ee38de96aa58dd219335604e09666092bc250c1d9c26a55bcef9932", size = 221461, upload-time = "2025-10-04T21:55:40.034Z" }, + { url = "https://files.pythonhosted.org/packages/63/0d/babd038efb12a87a46ab070438c52daeac6bed0a930693a418feef8cb8a6/propcache-0.4.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e5227da556b2939da6125cda1d5eecf9e412e58bc97b41e2f192605c3ccbb7c2", size = 216707, upload-time = "2025-10-04T21:55:41.455Z" }, + { url = "https://files.pythonhosted.org/packages/ab/68/dd075a037381581f16e7e504a6da9c1d7e415e945dd8ed67905d608f0687/propcache-0.4.0-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:92bc43a1ab852310721ce856f40a3a352254aa6f5e26f0fad870b31be45bba2e", size = 212591, upload-time = "2025-10-04T21:55:42.938Z" }, + { url = "https://files.pythonhosted.org/packages/ff/43/22698f28fc8e04c32b109cb9cb81305a4873b77c907b17484566b6133aef/propcache-0.4.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:83ae2f5343f6f06f4c91ae530d95f56b415f768f9c401a5ee2a10459cf74370b", size = 220188, upload-time = "2025-10-04T21:55:44.53Z" }, + { url = "https://files.pythonhosted.org/packages/96/7a/27886e4a4c69598a38fbeeed64f9b8ddfa6f08fe3452035845a1fe90336f/propcache-0.4.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:077a32977399dc05299b16e793210341a0b511eb0a86d1796873e83ce47334cc", size = 226736, upload-time = "2025-10-04T21:55:46.348Z" }, + { url = "https://files.pythonhosted.org/packages/5b/c7/313c632b5888db3c9f4cb262420dcd5e57cf858d939d6ad9c3b1b90c12af/propcache-0.4.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:94a278c45e6463031b5a8278e40a07edf2bcc3b5379510e22b6c1a6e6498c194", size = 216363, upload-time = "2025-10-04T21:55:47.768Z" }, + { url = "https://files.pythonhosted.org/packages/7a/5d/5aaf82bd1542aedb47d10483b84f49ee8f00d970a58e27534cd241e9c5ac/propcache-0.4.0-cp312-cp312-win32.whl", hash = "sha256:4c491462e1dc80f9deb93f428aad8d83bb286de212837f58eb48e75606e7726c", size = 37945, upload-time = "2025-10-04T21:55:49.104Z" }, + { url = "https://files.pythonhosted.org/packages/4c/67/47ffff6eb176f383f56319f31c0e1bcf7500cb94ffb7582efc600c6b3c73/propcache-0.4.0-cp312-cp312-win_amd64.whl", hash = "sha256:cdb0cecafb528ab15ed89cdfed183074d15912d046d3e304955513b50a34b907", size = 41530, upload-time = "2025-10-04T21:55:50.261Z" }, + { url = "https://files.pythonhosted.org/packages/f3/7e/61b70306b9d7527286ce887a8ff28c304ab2514e5893eea36b5bdf7a21af/propcache-0.4.0-cp312-cp312-win_arm64.whl", hash = "sha256:b2f29697d1110e8cdf7a39cc630498df0082d7898b79b731c1c863f77c6e8cfc", size = 37662, upload-time = "2025-10-04T21:55:51.35Z" }, + { url = "https://files.pythonhosted.org/packages/cd/dd/f405b0fe84d29d356895bc048404d3321a2df849281cf3f932158c9346ac/propcache-0.4.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:e2d01fd53e89cb3d71d20b8c225a8c70d84660f2d223afc7ed7851a4086afe6d", size = 77565, upload-time = "2025-10-04T21:55:52.907Z" }, + { url = 
"https://files.pythonhosted.org/packages/c0/48/dfb2c45e1b0d92228c9c66fa929af7316c15cbe69a7e438786aaa60c1b3c/propcache-0.4.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:7dfa60953169d2531dd8ae306e9c27c5d4e5efe7a2ba77049e8afdaece062937", size = 44602, upload-time = "2025-10-04T21:55:54.406Z" }, + { url = "https://files.pythonhosted.org/packages/d0/d9/b15e88b4463df45a7793fb04e2b5497334f8fcc24e281c221150a0af9aff/propcache-0.4.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:227892597953611fce2601d49f1d1f39786a6aebc2f253c2de775407f725a3f6", size = 46168, upload-time = "2025-10-04T21:55:55.537Z" }, + { url = "https://files.pythonhosted.org/packages/40/ac/983e69cce8800251aab85858069cf9359b22222a9cda47591e03e2f24eec/propcache-0.4.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5e0a5bc019014531308fb67d86066d235daa7551baf2e00e1ea7b00531f6ea85", size = 207997, upload-time = "2025-10-04T21:55:57.022Z" }, + { url = "https://files.pythonhosted.org/packages/ae/9c/5586a7a54e7e0b9a87fdd8ba935961f398c0e6eaecd57baaa8eca468a236/propcache-0.4.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:6ebc6e2e65c31356310ddb6519420eaa6bb8c30fbd809d0919129c89dcd70f4c", size = 210948, upload-time = "2025-10-04T21:55:58.397Z" }, + { url = "https://files.pythonhosted.org/packages/5f/ba/644e367f8a86461d45bd023ace521180938e76515040550af9b44085e99a/propcache-0.4.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:1927b78dd75fc31a7fdc76cc7039e39f3170cb1d0d9a271e60f0566ecb25211a", size = 217988, upload-time = "2025-10-04T21:56:00.251Z" }, + { url = "https://files.pythonhosted.org/packages/24/0e/1e21af74b4732d002b0452605bdf31d6bf990fd8b720cb44e27a97d80db5/propcache-0.4.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5b113feeda47f908562d9a6d0e05798ad2f83d4473c0777dafa2bc7756473218", size = 204442, upload-time = "2025-10-04T21:56:01.93Z" }, + { url = "https://files.pythonhosted.org/packages/fd/30/ae2eec96995a8a760acb9a0b6c92b9815f1fc885c7d8481237ccb554eab0/propcache-0.4.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:4596c12aa7e3bb2abf158ea8f79eb0fb4851606695d04ab846b2bb386f5690a1", size = 199371, upload-time = "2025-10-04T21:56:03.25Z" }, + { url = "https://files.pythonhosted.org/packages/45/1d/a18fac8cb04f8379ccb79cf15aac31f4167a270d1cd1111f33c0d38ce4fb/propcache-0.4.0-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:6d1f67dad8cc36e8abc2207a77f3f952ac80be7404177830a7af4635a34cbc16", size = 196638, upload-time = "2025-10-04T21:56:04.619Z" }, + { url = "https://files.pythonhosted.org/packages/48/45/3549a2b6f74dce6f21b2664d078bd26ceb876aae9c58f3c017cf590f0ee3/propcache-0.4.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:e6229ad15366cd8b6d6b4185c55dd48debf9ca546f91416ba2e5921ad6e210a6", size = 203651, upload-time = "2025-10-04T21:56:06.153Z" }, + { url = "https://files.pythonhosted.org/packages/7d/f0/90ea14d518c919fc154332742a9302db3004af4f1d3df688676959733283/propcache-0.4.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:2a4bf309d057327f1f227a22ac6baf34a66f9af75e08c613e47c4d775b06d6c7", size = 205726, upload-time = "2025-10-04T21:56:07.955Z" }, + { url = "https://files.pythonhosted.org/packages/f6/de/8efc1dbafeb42108e7af744822cdca944b990869e9da70e79efb21569d6b/propcache-0.4.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = 
"sha256:c2e274f3d1cbb2ddcc7a55ce3739af0f8510edc68a7f37981b2258fa1eedc833", size = 199576, upload-time = "2025-10-04T21:56:09.43Z" }, + { url = "https://files.pythonhosted.org/packages/d7/38/4d79fe3477b050398fb8d8f59301ed116d8c6ea3c4dbf09498c679103f90/propcache-0.4.0-cp313-cp313-win32.whl", hash = "sha256:f114a3e1f8034e2957d34043b7a317a8a05d97dfe8fddb36d9a2252c0117dbbc", size = 37474, upload-time = "2025-10-04T21:56:10.74Z" }, + { url = "https://files.pythonhosted.org/packages/36/9b/a283daf665a1945cff1b03d1104e7c9ee92bb7b6bbcc6518b24fcdac8bd0/propcache-0.4.0-cp313-cp313-win_amd64.whl", hash = "sha256:9ba68c57cde9c667f6b65b98bc342dfa7240b1272ffb2c24b32172ee61b6d281", size = 40685, upload-time = "2025-10-04T21:56:11.896Z" }, + { url = "https://files.pythonhosted.org/packages/e9/f7/def8fc0b4d7a89f1628f337cb122bb9a946c5ed97760f2442b27b7fa5a69/propcache-0.4.0-cp313-cp313-win_arm64.whl", hash = "sha256:eb77a85253174bf73e52c968b689d64be62d71e8ac33cabef4ca77b03fb4ef92", size = 37046, upload-time = "2025-10-04T21:56:13.021Z" }, + { url = "https://files.pythonhosted.org/packages/ca/6b/f6e8b36b58d17dfb6c505b9ae1163fcf7a4cf98825032fdc77bba4ab5c4a/propcache-0.4.0-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:c0e1c218fff95a66ad9f2f83ad41a67cf4d0a3f527efe820f57bde5fda616de4", size = 81274, upload-time = "2025-10-04T21:56:14.206Z" }, + { url = "https://files.pythonhosted.org/packages/8e/c5/1fd0baa222b8faf53ba04dd4f34de33ea820b80e34f87c7960666bae5f4f/propcache-0.4.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:5710b1c01472542bb024366803812ca13e8774d21381bcfc1f7ae738eeb38acc", size = 46232, upload-time = "2025-10-04T21:56:15.337Z" }, + { url = "https://files.pythonhosted.org/packages/cb/6b/7aa5324983cab7666ed58fc32c68a0430468a18e02e3f04e7a879c002414/propcache-0.4.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:d7f008799682e8826ce98f25e8bc43532d2cd26c187a1462499fa8d123ae054f", size = 48239, upload-time = "2025-10-04T21:56:16.768Z" }, + { url = "https://files.pythonhosted.org/packages/24/0f/58c192301c0436762ed5fed5a3edadb0ae399cb73528fb9c1b5cb8e53523/propcache-0.4.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0596d2ae99d74ca436553eb9ce11fe4163dc742fcf8724ebe07d7cb0db679bb1", size = 275804, upload-time = "2025-10-04T21:56:18.066Z" }, + { url = "https://files.pythonhosted.org/packages/f7/b9/092ee32064ebfabedae4251952787e63e551075af1a1205e8061b3ed5838/propcache-0.4.0-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:ab9c1bd95ebd1689f0e24f2946c495808777e9e8df7bb3c1dfe3e9eb7f47fe0d", size = 273996, upload-time = "2025-10-04T21:56:19.801Z" }, + { url = "https://files.pythonhosted.org/packages/43/82/becf618ed28e732f3bba3df172cd290a1afbd99f291074f747fd5bd031bb/propcache-0.4.0-cp313-cp313t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a8ef2ea819549ae2e8698d2ec229ae948d7272feea1cb2878289f767b6c585a4", size = 280266, upload-time = "2025-10-04T21:56:21.136Z" }, + { url = "https://files.pythonhosted.org/packages/51/be/b370930249a9332a81b5c4c550dac614b7e11b6c160080777e903d57e197/propcache-0.4.0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:71a400b2f0b079438cc24f9a27f02eff24d8ef78f2943f949abc518b844ade3d", size = 263186, upload-time = "2025-10-04T21:56:22.787Z" }, + { url = 
"https://files.pythonhosted.org/packages/33/b6/546fd3e31770aed3aed1c01b120944c689edb510aeb7a25472edc472ce23/propcache-0.4.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:4c2735d3305e6cecab6e53546909edf407ad3da5b9eeaf483f4cf80142bb21be", size = 260721, upload-time = "2025-10-04T21:56:24.22Z" }, + { url = "https://files.pythonhosted.org/packages/80/70/3751930d16e5984490c73ca65b80777e4b26e7a0015f2d41f31d75959a71/propcache-0.4.0-cp313-cp313t-musllinux_1_2_armv7l.whl", hash = "sha256:72b51340047ac43b3cf388eebd362d052632260c9f73a50882edbb66e589fd44", size = 247516, upload-time = "2025-10-04T21:56:25.577Z" }, + { url = "https://files.pythonhosted.org/packages/59/90/4bc96ce6476f67e2e6b72469f328c92b53259a0e4d1d5386d71a36e9258c/propcache-0.4.0-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:184c779363740d6664982ad05699f378f7694220e2041996f12b7c2a4acdcad0", size = 262675, upload-time = "2025-10-04T21:56:27.065Z" }, + { url = "https://files.pythonhosted.org/packages/6f/d1/f16d096869c5f1c93d67fc37488c0c814add0560574f6877653a10239cde/propcache-0.4.0-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:a60634a9de41f363923c6adfb83105d39e49f7a3058511563ed3de6748661af6", size = 263379, upload-time = "2025-10-04T21:56:28.517Z" }, + { url = "https://files.pythonhosted.org/packages/ab/2a/da5cd1bc1c6412939c457ea65bbe7e034045c395d98ff8ff880d06ec4553/propcache-0.4.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:c9b8119244d122241a9c4566bce49bb20408a6827044155856735cf14189a7da", size = 257694, upload-time = "2025-10-04T21:56:30.051Z" }, + { url = "https://files.pythonhosted.org/packages/a5/11/938e67c07189b662a6c72551d48285a02496de885408392447c25657dd47/propcache-0.4.0-cp313-cp313t-win32.whl", hash = "sha256:515b610a364c8cdd2b72c734cc97dece85c416892ea8d5c305624ac8734e81db", size = 41321, upload-time = "2025-10-04T21:56:31.406Z" }, + { url = "https://files.pythonhosted.org/packages/f4/6e/72b11a4dcae68c728b15126cc5bc830bf275c84836da2633412b768d07e0/propcache-0.4.0-cp313-cp313t-win_amd64.whl", hash = "sha256:7ea86eb32e74f9902df57e8608e8ac66f1e1e1d24d1ed2ddeb849888413b924d", size = 44846, upload-time = "2025-10-04T21:56:32.5Z" }, + { url = "https://files.pythonhosted.org/packages/94/09/0ef3c025e0621e703ef71b69e0085181a3124bcc1beef29e0ffef59ed7f4/propcache-0.4.0-cp313-cp313t-win_arm64.whl", hash = "sha256:c1443fa4bb306461a3a8a52b7de0932a2515b100ecb0ebc630cc3f87d451e0a9", size = 39689, upload-time = "2025-10-04T21:56:33.686Z" }, + { url = "https://files.pythonhosted.org/packages/60/89/7699d8e9f8c222bbef1fae26afd72d448353f164a52125d5f87dd9fec2c7/propcache-0.4.0-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:de8e310d24b5a61de08812dd70d5234da1458d41b059038ee7895a9e4c8cae79", size = 77977, upload-time = "2025-10-04T21:56:34.836Z" }, + { url = "https://files.pythonhosted.org/packages/77/c5/2758a498199ce46d6d500ba4391a8594df35400cc85738aa9f0c9b8366db/propcache-0.4.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:55a54de5266bc44aa274915cdf388584fa052db8748a869e5500ab5993bac3f4", size = 44715, upload-time = "2025-10-04T21:56:36.075Z" }, + { url = "https://files.pythonhosted.org/packages/0d/da/5a44e10282a28c2dd576e5e1a2c7bb8145587070ddab7375fb643f7129d7/propcache-0.4.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:88d50d662c917ec2c9d3858920aa7b9d5bfb74ab9c51424b775ccbe683cb1b4e", size = 46463, upload-time = "2025-10-04T21:56:37.227Z" }, + { url = 
"https://files.pythonhosted.org/packages/d5/5a/b2c314f655f46c10c204dc0d69e19fadfb1cc4d40ab33f403698a35c3281/propcache-0.4.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ae3adf88a66f5863cf79394bc359da523bb27a2ed6ba9898525a6a02b723bfc5", size = 206980, upload-time = "2025-10-04T21:56:38.828Z" }, + { url = "https://files.pythonhosted.org/packages/7c/4e/f6643ec2cd5527b92c93488f9b67a170494736bb1c5460136399d709ce5a/propcache-0.4.0-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:7f088e21d15b3abdb9047e4b7b7a0acd79bf166893ac2b34a72ab1062feb219e", size = 211385, upload-time = "2025-10-04T21:56:40.2Z" }, + { url = "https://files.pythonhosted.org/packages/71/41/362766a346c3f8d3bbeb7899e1ff40f18844e0fe37e9f6f536553cf6b6be/propcache-0.4.0-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a4efbaf10793fd574c76a5732c75452f19d93df6e0f758c67dd60552ebd8614b", size = 215315, upload-time = "2025-10-04T21:56:41.574Z" }, + { url = "https://files.pythonhosted.org/packages/ff/98/17385d51816d56fa6acc035d8625fbf833b6a795d7ef7fb37ea3f62db6c9/propcache-0.4.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:681a168d06284602d56e97f09978057aa88bcc4177352b875b3d781df4efd4cb", size = 201416, upload-time = "2025-10-04T21:56:42.947Z" }, + { url = "https://files.pythonhosted.org/packages/7a/83/801178ca1c29e217564ee507ff2a49d3f24a4dd85c9b9d681fd1d62b15f2/propcache-0.4.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:a7f06f077fc4ef37e8a37ca6bbb491b29e29db9fb28e29cf3896aad10dbd4137", size = 197726, upload-time = "2025-10-04T21:56:44.313Z" }, + { url = "https://files.pythonhosted.org/packages/d2/38/c8743917bca92b7e5474366b6b04c7b3982deac32a0fe4b705f2e92c09bb/propcache-0.4.0-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:082a643479f49a6778dcd68a80262fc324b14fd8e9b1a5380331fe41adde1738", size = 192819, upload-time = "2025-10-04T21:56:45.702Z" }, + { url = "https://files.pythonhosted.org/packages/0b/74/3de3ef483e8615aaaf62026fcdcb20cbfc4535ea14871b12f72d52c1d6dc/propcache-0.4.0-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:26692850120241a99bb4a4eec675cd7b4fdc431144f0d15ef69f7f8599f6165f", size = 202492, upload-time = "2025-10-04T21:56:47.388Z" }, + { url = "https://files.pythonhosted.org/packages/46/86/a130dd85199d651a6986ba6bf1ce297b7bbcafc01c8e139e6ba2b8218a20/propcache-0.4.0-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:33ad7d37b9a386f97582f5d042cc7b8d4b3591bb384cf50866b749a17e4dba90", size = 204106, upload-time = "2025-10-04T21:56:49.139Z" }, + { url = "https://files.pythonhosted.org/packages/b2/f7/44eab58659d71d21995146c94139e63882bac280065b3a9ed10376897bcc/propcache-0.4.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:1e7fd82d4a5b7583588f103b0771e43948532f1292105f13ee6f3b300933c4ca", size = 198043, upload-time = "2025-10-04T21:56:50.561Z" }, + { url = "https://files.pythonhosted.org/packages/96/14/df37be1bf1423d2dda201a4cdb1c5cb44048d34e31a97df227cc25b0a55c/propcache-0.4.0-cp314-cp314-win32.whl", hash = "sha256:213eb0d3bc695a70cffffe11a1c2e1c2698d89ffd8dba35a49bc44a035d45c93", size = 38036, upload-time = "2025-10-04T21:56:51.868Z" }, + { url = "https://files.pythonhosted.org/packages/99/96/9cea65d6c50224737e80c57a3f3db4ca81bc7b1b52bc73346df8c50db400/propcache-0.4.0-cp314-cp314-win_amd64.whl", hash = "sha256:087e2d3d7613e1b59b2ffca0daabd500c1a032d189c65625ee05ea114afcad0b", size = 41156, 
upload-time = "2025-10-04T21:56:53.242Z" }, + { url = "https://files.pythonhosted.org/packages/52/4d/91523dcbe23cc127b097623a6ba177da51fca6b7c979082aa49745b527b7/propcache-0.4.0-cp314-cp314-win_arm64.whl", hash = "sha256:94b0f7407d18001dbdcbb239512e753b1b36725a6e08a4983be1c948f5435f79", size = 37976, upload-time = "2025-10-04T21:56:54.351Z" }, + { url = "https://files.pythonhosted.org/packages/ec/f7/7118a944cb6cdb548c9333cf311bda120f9793ecca54b2ca4a3f7e58723e/propcache-0.4.0-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:b730048ae8b875e2c0af1a09ca31b303fc7b5ed27652beec03fa22b29545aec9", size = 81270, upload-time = "2025-10-04T21:56:55.516Z" }, + { url = "https://files.pythonhosted.org/packages/ab/f9/04a8bc9977ea201783f3ccb04106f44697f635f70439a208852d4d08554d/propcache-0.4.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:f495007ada16a4e16312b502636fafff42a9003adf1d4fb7541e0a0870bc056f", size = 46224, upload-time = "2025-10-04T21:56:56.695Z" }, + { url = "https://files.pythonhosted.org/packages/0f/3d/808b074034156f130a0047304d811a5a5df3bb0976c9adfb9383718fd888/propcache-0.4.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:659a0ea6d9017558ed7af00fb4028186f64d0ba9adfc70a4d2c85fcd3d026321", size = 48246, upload-time = "2025-10-04T21:56:57.926Z" }, + { url = "https://files.pythonhosted.org/packages/66/eb/e311f3a59ddc93078cb079b12699af9fd844142c4b4d382b386ee071d921/propcache-0.4.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d74aa60b1ec076d4d5dcde27c9a535fc0ebb12613f599681c438ca3daa68acac", size = 275562, upload-time = "2025-10-04T21:56:59.221Z" }, + { url = "https://files.pythonhosted.org/packages/f4/05/a146094d6a00bb2f2036dd2a2f4c2b2733ff9574b59ce53bd8513edfca5d/propcache-0.4.0-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:34000e31795bdcda9826e0e70e783847a42e3dcd0d6416c5d3cb717905ebaec0", size = 273627, upload-time = "2025-10-04T21:57:00.582Z" }, + { url = "https://files.pythonhosted.org/packages/91/95/a6d138f6e3d5f6c9b34dbd336b964a1293f2f1a79cafbe70ae3403d7cc46/propcache-0.4.0-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:bcb5bfac5b9635e6fc520c8af6efc7a0a56f12a1fe9e9d3eb4328537e316dd6a", size = 279778, upload-time = "2025-10-04T21:57:01.944Z" }, + { url = "https://files.pythonhosted.org/packages/ac/09/19594a20da0519bfa00deef8cf35dda6c9a5b51bba947f366e85ea59b3de/propcache-0.4.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0ea11fceb31fa95b0fa2007037f19e922e2caceb7dc6c6cac4cb56e2d291f1a2", size = 262833, upload-time = "2025-10-04T21:57:03.326Z" }, + { url = "https://files.pythonhosted.org/packages/b5/92/60d2ddc7662f7b2720d3b628ad8ce888015f4ab5c335b7b1b50183194e68/propcache-0.4.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:cd8684f628fe285ea5c86f88e1c30716239dc9d6ac55e7851a4b7f555b628da3", size = 260456, upload-time = "2025-10-04T21:57:05.159Z" }, + { url = "https://files.pythonhosted.org/packages/6f/e2/4c2e25c77cf43add2e05a86c4fcf51107edc4d92318e5c593bbdc2515d57/propcache-0.4.0-cp314-cp314t-musllinux_1_2_armv7l.whl", hash = "sha256:790286d3d542c0ef9f6d0280d1049378e5e776dcba780d169298f664c39394db", size = 247284, upload-time = "2025-10-04T21:57:06.566Z" }, + { url = "https://files.pythonhosted.org/packages/dc/3e/c273ab8edc80683ec8b15b486e95c03096ef875d99e4b0ab0a36c1e42c94/propcache-0.4.0-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = 
"sha256:009093c9b5dbae114a5958e6a649f8a5d94dd6866b0f82b60395eb92c58002d4", size = 262368, upload-time = "2025-10-04T21:57:08.231Z" }, + { url = "https://files.pythonhosted.org/packages/ac/a9/3fa231f65a9f78614c5aafa9cee788d7f55c22187cc2f33e86c7c16d0262/propcache-0.4.0-cp314-cp314t-musllinux_1_2_s390x.whl", hash = "sha256:728d98179e92d77096937fdfecd2c555a3d613abe56c9909165c24196a3b5012", size = 263010, upload-time = "2025-10-04T21:57:09.641Z" }, + { url = "https://files.pythonhosted.org/packages/38/a0/f4f5d368e60c9dc04d3158eaf1ca0ad899b40ac3d29c015bf62735225a6f/propcache-0.4.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:a9725d96a81e17e48a0fe82d0c3de2f5e623d7163fec70a6c7df90753edd1bec", size = 257298, upload-time = "2025-10-04T21:57:11.125Z" }, + { url = "https://files.pythonhosted.org/packages/c7/30/f78d6758dc36a98f1cddc39b3185cefde616cc58248715b7c65495491cb1/propcache-0.4.0-cp314-cp314t-win32.whl", hash = "sha256:0964c55c95625193defeb4fd85f8f28a9a754ed012cab71127d10e3dc66b1373", size = 42484, upload-time = "2025-10-04T21:57:12.652Z" }, + { url = "https://files.pythonhosted.org/packages/4e/ad/de0640e9b56d2caa796c4266d7d1e6cc4544cc327c25b7ced5c59893b625/propcache-0.4.0-cp314-cp314t-win_amd64.whl", hash = "sha256:24403152e41abf09488d3ae9c0c3bf7ff93e2fb12b435390718f21810353db28", size = 46229, upload-time = "2025-10-04T21:57:14.034Z" }, + { url = "https://files.pythonhosted.org/packages/da/bf/5aed62dddbf2bbe62a3564677436261909c9dd63a0fa1fb6cf0629daa13c/propcache-0.4.0-cp314-cp314t-win_arm64.whl", hash = "sha256:0363a696a9f24b37a04ed5e34c2e07ccbe92798c998d37729551120a1bb744c4", size = 40329, upload-time = "2025-10-04T21:57:15.198Z" }, + { url = "https://files.pythonhosted.org/packages/c7/16/794c114f6041bbe2de23eb418ef58a0f45de27224d5540f5dbb266a73d72/propcache-0.4.0-py3-none-any.whl", hash = "sha256:015b2ca2f98ea9e08ac06eecc409d5d988f78c5fd5821b2ad42bc9afcd6b1557", size = 13183, upload-time = "2025-10-04T21:57:38.054Z" }, ] [[package]] @@ -4324,24 +4634,29 @@ version = "0.1.0" source = { registry = "https://pypi.org/simple" } sdist = { url = "https://files.pythonhosted.org/packages/f2/cf/77d3e19b7fabd03895caca7857ef51e4c409e0ca6b37ee6e9f7daa50b642/proxy_tools-0.1.0.tar.gz", hash = "sha256:ccb3751f529c047e2d8a58440d86b205303cf0fe8146f784d1cbcd94f0a28010", size = 2978, upload-time = "2014-05-05T21:02:24.606Z" } +[[package]] +name = "pscript" +version = "0.7.7" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/59/68/f918702e270eddc5f7c54108f6a2f2afc2d299985820dbb0db9beb77d66d/pscript-0.7.7.tar.gz", hash = "sha256:8632f7a4483f235514aadee110edee82eb6d67336bf68744a7b18d76e50442f8", size = 176138, upload-time = "2022-01-10T10:55:02.559Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/f1/bc/980e2ebd442d2a8f1d22780f73db76f2a1df3bf79b3fb501b054b4b4dd03/pscript-0.7.7-py3-none-any.whl", hash = "sha256:b0fdac0df0393a4d7497153fea6a82e6429f32327c4c0a4817f1cd68adc08083", size = 126689, upload-time = "2022-01-10T10:55:00.793Z" }, +] + [[package]] name = "psutil" -version = "7.1.2" +version = "7.1.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/cd/ec/7b8e6b9b1d22708138630ef34c53ab2b61032c04f16adfdbb96791c8c70c/psutil-7.1.2.tar.gz", hash = "sha256:aa225cdde1335ff9684708ee8c72650f6598d5ed2114b9a7c5802030b1785018", size = 487424, upload-time = "2025-10-25T10:46:34.931Z" } +sdist = { url = 
"https://files.pythonhosted.org/packages/b3/31/4723d756b59344b643542936e37a31d1d3204bcdc42a7daa8ee9eb06fb50/psutil-7.1.0.tar.gz", hash = "sha256:655708b3c069387c8b77b072fc429a57d0e214221d01c0a772df7dfedcb3bcd2", size = 497660, upload-time = "2025-09-17T20:14:52.902Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/b8/d9/b56cc9f883140ac10021a8c9b0f4e16eed1ba675c22513cdcbce3ba64014/psutil-7.1.2-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:0cc5c6889b9871f231ed5455a9a02149e388fffcb30b607fb7a8896a6d95f22e", size = 238575, upload-time = "2025-10-25T10:46:38.728Z" }, - { url = "https://files.pythonhosted.org/packages/36/eb/28d22de383888deb252c818622196e709da98816e296ef95afda33f1c0a2/psutil-7.1.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:8e9e77a977208d84aa363a4a12e0f72189d58bbf4e46b49aae29a2c6e93ef206", size = 239297, upload-time = "2025-10-25T10:46:41.347Z" }, - { url = "https://files.pythonhosted.org/packages/89/5d/220039e2f28cc129626e54d63892ab05c0d56a29818bfe7268dcb5008932/psutil-7.1.2-cp313-cp313t-manylinux2010_x86_64.manylinux_2_12_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7d9623a5e4164d2220ecceb071f4b333b3c78866141e8887c072129185f41278", size = 280420, upload-time = "2025-10-25T10:46:44.122Z" }, - { url = "https://files.pythonhosted.org/packages/ba/7a/286f0e1c167445b2ef4a6cbdfc8c59fdb45a5a493788950cf8467201dc73/psutil-7.1.2-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:364b1c10fe4ed59c89ec49e5f1a70da353b27986fa8233b4b999df4742a5ee2f", size = 283049, upload-time = "2025-10-25T10:46:47.095Z" }, - { url = "https://files.pythonhosted.org/packages/aa/cc/7eb93260794a42e39b976f3a4dde89725800b9f573b014fac142002a5c98/psutil-7.1.2-cp313-cp313t-win_amd64.whl", hash = "sha256:f101ef84de7e05d41310e3ccbdd65a6dd1d9eed85e8aaf0758405d022308e204", size = 248713, upload-time = "2025-10-25T10:46:49.573Z" }, - { url = "https://files.pythonhosted.org/packages/ab/1a/0681a92b53366e01f0a099f5237d0c8a2f79d322ac589cccde5e30c8a4e2/psutil-7.1.2-cp313-cp313t-win_arm64.whl", hash = "sha256:20c00824048a95de67f00afedc7b08b282aa08638585b0206a9fb51f28f1a165", size = 244644, upload-time = "2025-10-25T10:46:51.924Z" }, - { url = "https://files.pythonhosted.org/packages/ae/89/b9f8d47ddbc52d7301fc868e8224e5f44ed3c7f55e6d0f54ecaf5dd9ff5e/psutil-7.1.2-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:c9ba5c19f2d46203ee8c152c7b01df6eec87d883cfd8ee1af2ef2727f6b0f814", size = 237244, upload-time = "2025-10-25T10:47:07.086Z" }, - { url = "https://files.pythonhosted.org/packages/c8/7a/8628c2f6b240680a67d73d8742bb9ff39b1820a693740e43096d5dcb01e5/psutil-7.1.2-cp36-abi3-macosx_11_0_arm64.whl", hash = "sha256:2a486030d2fe81bec023f703d3d155f4823a10a47c36784c84f1cc7f8d39bedb", size = 238101, upload-time = "2025-10-25T10:47:09.523Z" }, - { url = "https://files.pythonhosted.org/packages/30/28/5e27f4d5a0e347f8e3cc16cd7d35533dbce086c95807f1f0e9cd77e26c10/psutil-7.1.2-cp36-abi3-manylinux2010_x86_64.manylinux_2_12_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3efd8fc791492e7808a51cb2b94889db7578bfaea22df931424f874468e389e3", size = 258675, upload-time = "2025-10-25T10:47:11.082Z" }, - { url = "https://files.pythonhosted.org/packages/e5/5c/79cf60c9acf36d087f0db0f82066fca4a780e97e5b3a2e4c38209c03d170/psutil-7.1.2-cp36-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e2aeb9b64f481b8eabfc633bd39e0016d4d8bbcd590d984af764d80bf0851b8a", size = 260203, upload-time = "2025-10-25T10:47:13.226Z" }, - { url = 
"https://files.pythonhosted.org/packages/f7/03/0a464404c51685dcb9329fdd660b1721e076ccd7b3d97dee066bcc9ffb15/psutil-7.1.2-cp37-abi3-win_amd64.whl", hash = "sha256:8e17852114c4e7996fe9da4745c2bdef001ebbf2f260dec406290e66628bdb91", size = 246714, upload-time = "2025-10-25T10:47:15.093Z" }, - { url = "https://files.pythonhosted.org/packages/6a/32/97ca2090f2f1b45b01b6aa7ae161cfe50671de097311975ca6eea3e7aabc/psutil-7.1.2-cp37-abi3-win_arm64.whl", hash = "sha256:3e988455e61c240cc879cb62a008c2699231bf3e3d061d7fce4234463fd2abb4", size = 243742, upload-time = "2025-10-25T10:47:17.302Z" }, + { url = "https://files.pythonhosted.org/packages/46/62/ce4051019ee20ce0ed74432dd73a5bb087a6704284a470bb8adff69a0932/psutil-7.1.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:76168cef4397494250e9f4e73eb3752b146de1dd950040b29186d0cce1d5ca13", size = 245242, upload-time = "2025-09-17T20:14:56.126Z" }, + { url = "https://files.pythonhosted.org/packages/38/61/f76959fba841bf5b61123fbf4b650886dc4094c6858008b5bf73d9057216/psutil-7.1.0-cp36-abi3-macosx_11_0_arm64.whl", hash = "sha256:5d007560c8c372efdff9e4579c2846d71de737e4605f611437255e81efcca2c5", size = 246682, upload-time = "2025-09-17T20:14:58.25Z" }, + { url = "https://files.pythonhosted.org/packages/88/7a/37c99d2e77ec30d63398ffa6a660450b8a62517cabe44b3e9bae97696e8d/psutil-7.1.0-cp36-abi3-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:22e4454970b32472ce7deaa45d045b34d3648ce478e26a04c7e858a0a6e75ff3", size = 287994, upload-time = "2025-09-17T20:14:59.901Z" }, + { url = "https://files.pythonhosted.org/packages/9d/de/04c8c61232f7244aa0a4b9a9fbd63a89d5aeaf94b2fc9d1d16e2faa5cbb0/psutil-7.1.0-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c70e113920d51e89f212dd7be06219a9b88014e63a4cec69b684c327bc474e3", size = 291163, upload-time = "2025-09-17T20:15:01.481Z" }, + { url = "https://files.pythonhosted.org/packages/f4/58/c4f976234bf6d4737bc8c02a81192f045c307b72cf39c9e5c5a2d78927f6/psutil-7.1.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7d4a113425c037300de3ac8b331637293da9be9713855c4fc9d2d97436d7259d", size = 293625, upload-time = "2025-09-17T20:15:04.492Z" }, + { url = "https://files.pythonhosted.org/packages/79/87/157c8e7959ec39ced1b11cc93c730c4fb7f9d408569a6c59dbd92ceb35db/psutil-7.1.0-cp37-abi3-win32.whl", hash = "sha256:09ad740870c8d219ed8daae0ad3b726d3bf9a028a198e7f3080f6a1888b99bca", size = 244812, upload-time = "2025-09-17T20:15:07.462Z" }, + { url = "https://files.pythonhosted.org/packages/bf/e9/b44c4f697276a7a95b8e94d0e320a7bf7f3318521b23de69035540b39838/psutil-7.1.0-cp37-abi3-win_amd64.whl", hash = "sha256:57f5e987c36d3146c0dd2528cd42151cf96cd359b9d67cfff836995cc5df9a3d", size = 247965, upload-time = "2025-09-17T20:15:09.673Z" }, + { url = "https://files.pythonhosted.org/packages/26/65/1070a6e3c036f39142c2820c4b52e9243246fcfc3f96239ac84472ba361e/psutil-7.1.0-cp37-abi3-win_arm64.whl", hash = "sha256:6937cb68133e7c97b6cc9649a570c9a18ba0efebed46d8c5dae4c07fa1b67a07", size = 244971, upload-time = "2025-09-17T20:15:12.262Z" }, ] [[package]] @@ -4406,7 +4721,7 @@ wheels = [ [[package]] name = "pydantic" -version = "2.12.2" +version = "2.11.10" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "annotated-types" }, @@ -4414,9 +4729,9 @@ dependencies = [ { name = "typing-extensions" }, { name = "typing-inspection" }, ] -sdist = { url = 
"https://files.pythonhosted.org/packages/8d/35/d319ed522433215526689bad428a94058b6dd12190ce7ddd78618ac14b28/pydantic-2.12.2.tar.gz", hash = "sha256:7b8fa15b831a4bbde9d5b84028641ac3080a4ca2cbd4a621a661687e741624fd", size = 816358, upload-time = "2025-10-14T15:02:21.842Z" } +sdist = { url = "https://files.pythonhosted.org/packages/ae/54/ecab642b3bed45f7d5f59b38443dcb36ef50f85af192e6ece103dbfe9587/pydantic-2.11.10.tar.gz", hash = "sha256:dc280f0982fbda6c38fada4e476dc0a4f3aeaf9c6ad4c28df68a666ec3c61423", size = 788494, upload-time = "2025-10-04T10:40:41.338Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/6c/98/468cb649f208a6f1279448e6e5247b37ae79cf5e4041186f1e2ef3d16345/pydantic-2.12.2-py3-none-any.whl", hash = "sha256:25ff718ee909acd82f1ff9b1a4acfd781bb23ab3739adaa7144f19a6a4e231ae", size = 460628, upload-time = "2025-10-14T15:02:19.623Z" }, + { url = "https://files.pythonhosted.org/packages/bd/1f/73c53fcbfb0b5a78f91176df41945ca466e71e9d9d836e5c522abda39ee7/pydantic-2.11.10-py3-none-any.whl", hash = "sha256:802a655709d49bd004c31e865ef37da30b540786a46bfce02333e0e24b5fe29a", size = 444823, upload-time = "2025-10-04T10:40:39.055Z" }, ] [package.optional-dependencies] @@ -4426,89 +4741,80 @@ email = [ [[package]] name = "pydantic-core" -version = "2.41.4" +version = "2.33.2" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/df/18/d0944e8eaaa3efd0a91b0f1fc537d3be55ad35091b6a87638211ba691964/pydantic_core-2.41.4.tar.gz", hash = "sha256:70e47929a9d4a1905a67e4b687d5946026390568a8e952b92824118063cee4d5", size = 457557, upload-time = "2025-10-14T10:23:47.909Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/62/4c/f6cbfa1e8efacd00b846764e8484fe173d25b8dab881e277a619177f3384/pydantic_core-2.41.4-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:28ff11666443a1a8cf2a044d6a545ebffa8382b5f7973f22c36109205e65dc80", size = 2109062, upload-time = "2025-10-14T10:20:04.486Z" }, - { url = "https://files.pythonhosted.org/packages/21/f8/40b72d3868896bfcd410e1bd7e516e762d326201c48e5b4a06446f6cf9e8/pydantic_core-2.41.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:61760c3925d4633290292bad462e0f737b840508b4f722247d8729684f6539ae", size = 1916301, upload-time = "2025-10-14T10:20:06.857Z" }, - { url = "https://files.pythonhosted.org/packages/94/4d/d203dce8bee7faeca791671c88519969d98d3b4e8f225da5b96dad226fc8/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eae547b7315d055b0de2ec3965643b0ab82ad0106a7ffd29615ee9f266a02827", size = 1968728, upload-time = "2025-10-14T10:20:08.353Z" }, - { url = "https://files.pythonhosted.org/packages/65/f5/6a66187775df87c24d526985b3a5d78d861580ca466fbd9d4d0e792fcf6c/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ef9ee5471edd58d1fcce1c80ffc8783a650e3e3a193fe90d52e43bb4d87bff1f", size = 2050238, upload-time = "2025-10-14T10:20:09.766Z" }, - { url = "https://files.pythonhosted.org/packages/5e/b9/78336345de97298cf53236b2f271912ce11f32c1e59de25a374ce12f9cce/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:15dd504af121caaf2c95cb90c0ebf71603c53de98305621b94da0f967e572def", size = 2249424, upload-time = "2025-10-14T10:20:11.732Z" }, - { url = 
"https://files.pythonhosted.org/packages/99/bb/a4584888b70ee594c3d374a71af5075a68654d6c780369df269118af7402/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3a926768ea49a8af4d36abd6a8968b8790f7f76dd7cbd5a4c180db2b4ac9a3a2", size = 2366047, upload-time = "2025-10-14T10:20:13.647Z" }, - { url = "https://files.pythonhosted.org/packages/5f/8d/17fc5de9d6418e4d2ae8c675f905cdafdc59d3bf3bf9c946b7ab796a992a/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6916b9b7d134bff5440098a4deb80e4cb623e68974a87883299de9124126c2a8", size = 2071163, upload-time = "2025-10-14T10:20:15.307Z" }, - { url = "https://files.pythonhosted.org/packages/54/e7/03d2c5c0b8ed37a4617430db68ec5e7dbba66358b629cd69e11b4d564367/pydantic_core-2.41.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5cf90535979089df02e6f17ffd076f07237efa55b7343d98760bde8743c4b265", size = 2190585, upload-time = "2025-10-14T10:20:17.3Z" }, - { url = "https://files.pythonhosted.org/packages/be/fc/15d1c9fe5ad9266a5897d9b932b7f53d7e5cfc800573917a2c5d6eea56ec/pydantic_core-2.41.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:7533c76fa647fade2d7ec75ac5cc079ab3f34879626dae5689b27790a6cf5a5c", size = 2150109, upload-time = "2025-10-14T10:20:19.143Z" }, - { url = "https://files.pythonhosted.org/packages/26/ef/e735dd008808226c83ba56972566138665b71477ad580fa5a21f0851df48/pydantic_core-2.41.4-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:37e516bca9264cbf29612539801ca3cd5d1be465f940417b002905e6ed79d38a", size = 2315078, upload-time = "2025-10-14T10:20:20.742Z" }, - { url = "https://files.pythonhosted.org/packages/90/00/806efdcf35ff2ac0f938362350cd9827b8afb116cc814b6b75cf23738c7c/pydantic_core-2.41.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:0c19cb355224037c83642429b8ce261ae108e1c5fbf5c028bac63c77b0f8646e", size = 2318737, upload-time = "2025-10-14T10:20:22.306Z" }, - { url = "https://files.pythonhosted.org/packages/41/7e/6ac90673fe6cb36621a2283552897838c020db343fa86e513d3f563b196f/pydantic_core-2.41.4-cp311-cp311-win32.whl", hash = "sha256:09c2a60e55b357284b5f31f5ab275ba9f7f70b7525e18a132ec1f9160b4f1f03", size = 1974160, upload-time = "2025-10-14T10:20:23.817Z" }, - { url = "https://files.pythonhosted.org/packages/e0/9d/7c5e24ee585c1f8b6356e1d11d40ab807ffde44d2db3b7dfd6d20b09720e/pydantic_core-2.41.4-cp311-cp311-win_amd64.whl", hash = "sha256:711156b6afb5cb1cb7c14a2cc2c4a8b4c717b69046f13c6b332d8a0a8f41ca3e", size = 2021883, upload-time = "2025-10-14T10:20:25.48Z" }, - { url = "https://files.pythonhosted.org/packages/33/90/5c172357460fc28b2871eb4a0fb3843b136b429c6fa827e4b588877bf115/pydantic_core-2.41.4-cp311-cp311-win_arm64.whl", hash = "sha256:6cb9cf7e761f4f8a8589a45e49ed3c0d92d1d696a45a6feaee8c904b26efc2db", size = 1968026, upload-time = "2025-10-14T10:20:27.039Z" }, - { url = "https://files.pythonhosted.org/packages/e9/81/d3b3e95929c4369d30b2a66a91db63c8ed0a98381ae55a45da2cd1cc1288/pydantic_core-2.41.4-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:ab06d77e053d660a6faaf04894446df7b0a7e7aba70c2797465a0a1af00fc887", size = 2099043, upload-time = "2025-10-14T10:20:28.561Z" }, - { url = "https://files.pythonhosted.org/packages/58/da/46fdac49e6717e3a94fc9201403e08d9d61aa7a770fab6190b8740749047/pydantic_core-2.41.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:c53ff33e603a9c1179a9364b0a24694f183717b2e0da2b5ad43c316c956901b2", size = 1910699, upload-time = "2025-10-14T10:20:30.217Z" }, - { url = 
"https://files.pythonhosted.org/packages/1e/63/4d948f1b9dd8e991a5a98b77dd66c74641f5f2e5225fee37994b2e07d391/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:304c54176af2c143bd181d82e77c15c41cbacea8872a2225dd37e6544dce9999", size = 1952121, upload-time = "2025-10-14T10:20:32.246Z" }, - { url = "https://files.pythonhosted.org/packages/b2/a7/e5fc60a6f781fc634ecaa9ecc3c20171d238794cef69ae0af79ac11b89d7/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:025ba34a4cf4fb32f917d5d188ab5e702223d3ba603be4d8aca2f82bede432a4", size = 2041590, upload-time = "2025-10-14T10:20:34.332Z" }, - { url = "https://files.pythonhosted.org/packages/70/69/dce747b1d21d59e85af433428978a1893c6f8a7068fa2bb4a927fba7a5ff/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b9f5f30c402ed58f90c70e12eff65547d3ab74685ffe8283c719e6bead8ef53f", size = 2219869, upload-time = "2025-10-14T10:20:35.965Z" }, - { url = "https://files.pythonhosted.org/packages/83/6a/c070e30e295403bf29c4df1cb781317b6a9bac7cd07b8d3acc94d501a63c/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dd96e5d15385d301733113bcaa324c8bcf111275b7675a9c6e88bfb19fc05e3b", size = 2345169, upload-time = "2025-10-14T10:20:37.627Z" }, - { url = "https://files.pythonhosted.org/packages/f0/83/06d001f8043c336baea7fd202a9ac7ad71f87e1c55d8112c50b745c40324/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:98f348cbb44fae6e9653c1055db7e29de67ea6a9ca03a5fa2c2e11a47cff0e47", size = 2070165, upload-time = "2025-10-14T10:20:39.246Z" }, - { url = "https://files.pythonhosted.org/packages/14/0a/e567c2883588dd12bcbc110232d892cf385356f7c8a9910311ac997ab715/pydantic_core-2.41.4-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ec22626a2d14620a83ca583c6f5a4080fa3155282718b6055c2ea48d3ef35970", size = 2189067, upload-time = "2025-10-14T10:20:41.015Z" }, - { url = "https://files.pythonhosted.org/packages/f4/1d/3d9fca34273ba03c9b1c5289f7618bc4bd09c3ad2289b5420481aa051a99/pydantic_core-2.41.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:3a95d4590b1f1a43bf33ca6d647b990a88f4a3824a8c4572c708f0b45a5290ed", size = 2132997, upload-time = "2025-10-14T10:20:43.106Z" }, - { url = "https://files.pythonhosted.org/packages/52/70/d702ef7a6cd41a8afc61f3554922b3ed8d19dd54c3bd4bdbfe332e610827/pydantic_core-2.41.4-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:f9672ab4d398e1b602feadcffcdd3af44d5f5e6ddc15bc7d15d376d47e8e19f8", size = 2307187, upload-time = "2025-10-14T10:20:44.849Z" }, - { url = "https://files.pythonhosted.org/packages/68/4c/c06be6e27545d08b802127914156f38d10ca287a9e8489342793de8aae3c/pydantic_core-2.41.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:84d8854db5f55fead3b579f04bda9a36461dab0730c5d570e1526483e7bb8431", size = 2305204, upload-time = "2025-10-14T10:20:46.781Z" }, - { url = "https://files.pythonhosted.org/packages/b0/e5/35ae4919bcd9f18603419e23c5eaf32750224a89d41a8df1a3704b69f77e/pydantic_core-2.41.4-cp312-cp312-win32.whl", hash = "sha256:9be1c01adb2ecc4e464392c36d17f97e9110fbbc906bcbe1c943b5b87a74aabd", size = 1972536, upload-time = "2025-10-14T10:20:48.39Z" }, - { url = "https://files.pythonhosted.org/packages/1e/c2/49c5bb6d2a49eb2ee3647a93e3dae7080c6409a8a7558b075027644e879c/pydantic_core-2.41.4-cp312-cp312-win_amd64.whl", hash = "sha256:d682cf1d22bab22a5be08539dca3d1593488a99998f9f412137bc323179067ff", size 
= 2031132, upload-time = "2025-10-14T10:20:50.421Z" }, - { url = "https://files.pythonhosted.org/packages/06/23/936343dbcba6eec93f73e95eb346810fc732f71ba27967b287b66f7b7097/pydantic_core-2.41.4-cp312-cp312-win_arm64.whl", hash = "sha256:833eebfd75a26d17470b58768c1834dfc90141b7afc6eb0429c21fc5a21dcfb8", size = 1969483, upload-time = "2025-10-14T10:20:52.35Z" }, - { url = "https://files.pythonhosted.org/packages/13/d0/c20adabd181a029a970738dfe23710b52a31f1258f591874fcdec7359845/pydantic_core-2.41.4-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:85e050ad9e5f6fe1004eec65c914332e52f429bc0ae12d6fa2092407a462c746", size = 2105688, upload-time = "2025-10-14T10:20:54.448Z" }, - { url = "https://files.pythonhosted.org/packages/00/b6/0ce5c03cec5ae94cca220dfecddc453c077d71363b98a4bbdb3c0b22c783/pydantic_core-2.41.4-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:e7393f1d64792763a48924ba31d1e44c2cfbc05e3b1c2c9abb4ceeadd912cced", size = 1910807, upload-time = "2025-10-14T10:20:56.115Z" }, - { url = "https://files.pythonhosted.org/packages/68/3e/800d3d02c8beb0b5c069c870cbb83799d085debf43499c897bb4b4aaff0d/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94dab0940b0d1fb28bcab847adf887c66a27a40291eedf0b473be58761c9799a", size = 1956669, upload-time = "2025-10-14T10:20:57.874Z" }, - { url = "https://files.pythonhosted.org/packages/60/a4/24271cc71a17f64589be49ab8bd0751f6a0a03046c690df60989f2f95c2c/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:de7c42f897e689ee6f9e93c4bec72b99ae3b32a2ade1c7e4798e690ff5246e02", size = 2051629, upload-time = "2025-10-14T10:21:00.006Z" }, - { url = "https://files.pythonhosted.org/packages/68/de/45af3ca2f175d91b96bfb62e1f2d2f1f9f3b14a734afe0bfeff079f78181/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:664b3199193262277b8b3cd1e754fb07f2c6023289c815a1e1e8fb415cb247b1", size = 2224049, upload-time = "2025-10-14T10:21:01.801Z" }, - { url = "https://files.pythonhosted.org/packages/af/8f/ae4e1ff84672bf869d0a77af24fd78387850e9497753c432875066b5d622/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d95b253b88f7d308b1c0b417c4624f44553ba4762816f94e6986819b9c273fb2", size = 2342409, upload-time = "2025-10-14T10:21:03.556Z" }, - { url = "https://files.pythonhosted.org/packages/18/62/273dd70b0026a085c7b74b000394e1ef95719ea579c76ea2f0cc8893736d/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a1351f5bbdbbabc689727cb91649a00cb9ee7203e0a6e54e9f5ba9e22e384b84", size = 2069635, upload-time = "2025-10-14T10:21:05.385Z" }, - { url = "https://files.pythonhosted.org/packages/30/03/cf485fff699b4cdaea469bc481719d3e49f023241b4abb656f8d422189fc/pydantic_core-2.41.4-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1affa4798520b148d7182da0615d648e752de4ab1a9566b7471bc803d88a062d", size = 2194284, upload-time = "2025-10-14T10:21:07.122Z" }, - { url = "https://files.pythonhosted.org/packages/f9/7e/c8e713db32405dfd97211f2fc0a15d6bf8adb7640f3d18544c1f39526619/pydantic_core-2.41.4-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:7b74e18052fea4aa8dea2fb7dbc23d15439695da6cbe6cfc1b694af1115df09d", size = 2137566, upload-time = "2025-10-14T10:21:08.981Z" }, - { url = "https://files.pythonhosted.org/packages/04/f7/db71fd4cdccc8b75990f79ccafbbd66757e19f6d5ee724a6252414483fb4/pydantic_core-2.41.4-cp313-cp313-musllinux_1_1_armv7l.whl", hash = 
"sha256:285b643d75c0e30abda9dc1077395624f314a37e3c09ca402d4015ef5979f1a2", size = 2316809, upload-time = "2025-10-14T10:21:10.805Z" }, - { url = "https://files.pythonhosted.org/packages/76/63/a54973ddb945f1bca56742b48b144d85c9fc22f819ddeb9f861c249d5464/pydantic_core-2.41.4-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:f52679ff4218d713b3b33f88c89ccbf3a5c2c12ba665fb80ccc4192b4608dbab", size = 2311119, upload-time = "2025-10-14T10:21:12.583Z" }, - { url = "https://files.pythonhosted.org/packages/f8/03/5d12891e93c19218af74843a27e32b94922195ded2386f7b55382f904d2f/pydantic_core-2.41.4-cp313-cp313-win32.whl", hash = "sha256:ecde6dedd6fff127c273c76821bb754d793be1024bc33314a120f83a3c69460c", size = 1981398, upload-time = "2025-10-14T10:21:14.584Z" }, - { url = "https://files.pythonhosted.org/packages/be/d8/fd0de71f39db91135b7a26996160de71c073d8635edfce8b3c3681be0d6d/pydantic_core-2.41.4-cp313-cp313-win_amd64.whl", hash = "sha256:d081a1f3800f05409ed868ebb2d74ac39dd0c1ff6c035b5162356d76030736d4", size = 2030735, upload-time = "2025-10-14T10:21:16.432Z" }, - { url = "https://files.pythonhosted.org/packages/72/86/c99921c1cf6650023c08bfab6fe2d7057a5142628ef7ccfa9921f2dda1d5/pydantic_core-2.41.4-cp313-cp313-win_arm64.whl", hash = "sha256:f8e49c9c364a7edcbe2a310f12733aad95b022495ef2a8d653f645e5d20c1564", size = 1973209, upload-time = "2025-10-14T10:21:18.213Z" }, - { url = "https://files.pythonhosted.org/packages/36/0d/b5706cacb70a8414396efdda3d72ae0542e050b591119e458e2490baf035/pydantic_core-2.41.4-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:ed97fd56a561f5eb5706cebe94f1ad7c13b84d98312a05546f2ad036bafe87f4", size = 1877324, upload-time = "2025-10-14T10:21:20.363Z" }, - { url = "https://files.pythonhosted.org/packages/de/2d/cba1fa02cfdea72dfb3a9babb067c83b9dff0bbcb198368e000a6b756ea7/pydantic_core-2.41.4-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a870c307bf1ee91fc58a9a61338ff780d01bfae45922624816878dce784095d2", size = 1884515, upload-time = "2025-10-14T10:21:22.339Z" }, - { url = "https://files.pythonhosted.org/packages/07/ea/3df927c4384ed9b503c9cc2d076cf983b4f2adb0c754578dfb1245c51e46/pydantic_core-2.41.4-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d25e97bc1f5f8f7985bdc2335ef9e73843bb561eb1fa6831fdfc295c1c2061cf", size = 2042819, upload-time = "2025-10-14T10:21:26.683Z" }, - { url = "https://files.pythonhosted.org/packages/6a/ee/df8e871f07074250270a3b1b82aad4cd0026b588acd5d7d3eb2fcb1471a3/pydantic_core-2.41.4-cp313-cp313t-win_amd64.whl", hash = "sha256:d405d14bea042f166512add3091c1af40437c2e7f86988f3915fabd27b1e9cd2", size = 1995866, upload-time = "2025-10-14T10:21:28.951Z" }, - { url = "https://files.pythonhosted.org/packages/fc/de/b20f4ab954d6d399499c33ec4fafc46d9551e11dc1858fb7f5dca0748ceb/pydantic_core-2.41.4-cp313-cp313t-win_arm64.whl", hash = "sha256:19f3684868309db5263a11bace3c45d93f6f24afa2ffe75a647583df22a2ff89", size = 1970034, upload-time = "2025-10-14T10:21:30.869Z" }, - { url = "https://files.pythonhosted.org/packages/b0/12/5ba58daa7f453454464f92b3ca7b9d7c657d8641c48e370c3ebc9a82dd78/pydantic_core-2.41.4-graalpy311-graalpy242_311_native-macosx_10_12_x86_64.whl", hash = "sha256:a1b2cfec3879afb742a7b0bcfa53e4f22ba96571c9e54d6a3afe1052d17d843b", size = 2122139, upload-time = "2025-10-14T10:22:47.288Z" }, - { url = "https://files.pythonhosted.org/packages/21/fb/6860126a77725c3108baecd10fd3d75fec25191d6381b6eb2ac660228eac/pydantic_core-2.41.4-graalpy311-graalpy242_311_native-macosx_11_0_arm64.whl", hash = 
"sha256:d175600d975b7c244af6eb9c9041f10059f20b8bbffec9e33fdd5ee3f67cdc42", size = 1936674, upload-time = "2025-10-14T10:22:49.555Z" }, - { url = "https://files.pythonhosted.org/packages/de/be/57dcaa3ed595d81f8757e2b44a38240ac5d37628bce25fb20d02c7018776/pydantic_core-2.41.4-graalpy311-graalpy242_311_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0f184d657fa4947ae5ec9c47bd7e917730fa1cbb78195037e32dcbab50aca5ee", size = 1956398, upload-time = "2025-10-14T10:22:52.19Z" }, - { url = "https://files.pythonhosted.org/packages/2f/1d/679a344fadb9695f1a6a294d739fbd21d71fa023286daeea8c0ed49e7c2b/pydantic_core-2.41.4-graalpy311-graalpy242_311_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1ed810568aeffed3edc78910af32af911c835cc39ebbfacd1f0ab5dd53028e5c", size = 2138674, upload-time = "2025-10-14T10:22:54.499Z" }, - { url = "https://files.pythonhosted.org/packages/c4/48/ae937e5a831b7c0dc646b2ef788c27cd003894882415300ed21927c21efa/pydantic_core-2.41.4-graalpy312-graalpy250_312_native-macosx_10_12_x86_64.whl", hash = "sha256:4f5d640aeebb438517150fdeec097739614421900e4a08db4a3ef38898798537", size = 2112087, upload-time = "2025-10-14T10:22:56.818Z" }, - { url = "https://files.pythonhosted.org/packages/5e/db/6db8073e3d32dae017da7e0d16a9ecb897d0a4d92e00634916e486097961/pydantic_core-2.41.4-graalpy312-graalpy250_312_native-macosx_11_0_arm64.whl", hash = "sha256:4a9ab037b71927babc6d9e7fc01aea9e66dc2a4a34dff06ef0724a4049629f94", size = 1920387, upload-time = "2025-10-14T10:22:59.342Z" }, - { url = "https://files.pythonhosted.org/packages/0d/c1/dd3542d072fcc336030d66834872f0328727e3b8de289c662faa04aa270e/pydantic_core-2.41.4-graalpy312-graalpy250_312_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e4dab9484ec605c3016df9ad4fd4f9a390bc5d816a3b10c6550f8424bb80b18c", size = 1951495, upload-time = "2025-10-14T10:23:02.089Z" }, - { url = "https://files.pythonhosted.org/packages/2b/c6/db8d13a1f8ab3f1eb08c88bd00fd62d44311e3456d1e85c0e59e0a0376e7/pydantic_core-2.41.4-graalpy312-graalpy250_312_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd8a5028425820731d8c6c098ab642d7b8b999758e24acae03ed38a66eca8335", size = 2139008, upload-time = "2025-10-14T10:23:04.539Z" }, - { url = "https://files.pythonhosted.org/packages/7e/7d/138e902ed6399b866f7cfe4435d22445e16fff888a1c00560d9dc79a780f/pydantic_core-2.41.4-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:491535d45cd7ad7e4a2af4a5169b0d07bebf1adfd164b0368da8aa41e19907a5", size = 2104721, upload-time = "2025-10-14T10:23:26.906Z" }, - { url = "https://files.pythonhosted.org/packages/47/13/0525623cf94627f7b53b4c2034c81edc8491cbfc7c28d5447fa318791479/pydantic_core-2.41.4-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:54d86c0cada6aba4ec4c047d0e348cbad7063b87ae0f005d9f8c9ad04d4a92a2", size = 1931608, upload-time = "2025-10-14T10:23:29.306Z" }, - { url = "https://files.pythonhosted.org/packages/d6/f9/744bc98137d6ef0a233f808bfc9b18cf94624bf30836a18d3b05d08bf418/pydantic_core-2.41.4-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eca1124aced216b2500dc2609eade086d718e8249cb9696660ab447d50a758bd", size = 2132986, upload-time = "2025-10-14T10:23:32.057Z" }, - { url = "https://files.pythonhosted.org/packages/17/c8/629e88920171173f6049386cc71f893dff03209a9ef32b4d2f7e7c264bcf/pydantic_core-2.41.4-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6c9024169becccf0cb470ada03ee578d7348c119a0d42af3dcf9eda96e3a247c", size = 
2187516, upload-time = "2025-10-14T10:23:34.871Z" }, - { url = "https://files.pythonhosted.org/packages/2e/0f/4f2734688d98488782218ca61bcc118329bf5de05bb7fe3adc7dd79b0b86/pydantic_core-2.41.4-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:26895a4268ae5a2849269f4991cdc97236e4b9c010e51137becf25182daac405", size = 2146146, upload-time = "2025-10-14T10:23:37.342Z" }, - { url = "https://files.pythonhosted.org/packages/ed/f2/ab385dbd94a052c62224b99cf99002eee99dbec40e10006c78575aead256/pydantic_core-2.41.4-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:ca4df25762cf71308c446e33c9b1fdca2923a3f13de616e2a949f38bf21ff5a8", size = 2311296, upload-time = "2025-10-14T10:23:40.145Z" }, - { url = "https://files.pythonhosted.org/packages/fc/8e/e4f12afe1beeb9823bba5375f8f258df0cc61b056b0195fb1cf9f62a1a58/pydantic_core-2.41.4-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:5a28fcedd762349519276c36634e71853b4541079cab4acaaac60c4421827308", size = 2315386, upload-time = "2025-10-14T10:23:42.624Z" }, - { url = "https://files.pythonhosted.org/packages/48/f7/925f65d930802e3ea2eb4d5afa4cb8730c8dc0d2cb89a59dc4ed2fcb2d74/pydantic_core-2.41.4-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:c173ddcd86afd2535e2b695217e82191580663a1d1928239f877f5a1649ef39f", size = 2147775, upload-time = "2025-10-14T10:23:45.406Z" }, +sdist = { url = "https://files.pythonhosted.org/packages/ad/88/5f2260bdfae97aabf98f1778d43f69574390ad787afb646292a638c923d4/pydantic_core-2.33.2.tar.gz", hash = "sha256:7cb8bc3605c29176e1b105350d2e6474142d7c1bd1d9327c4a9bdb46bf827acc", size = 435195, upload-time = "2025-04-23T18:33:52.104Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/3f/8d/71db63483d518cbbf290261a1fc2839d17ff89fce7089e08cad07ccfce67/pydantic_core-2.33.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4c5b0a576fb381edd6d27f0a85915c6daf2f8138dc5c267a57c08a62900758c7", size = 2028584, upload-time = "2025-04-23T18:31:03.106Z" }, + { url = "https://files.pythonhosted.org/packages/24/2f/3cfa7244ae292dd850989f328722d2aef313f74ffc471184dc509e1e4e5a/pydantic_core-2.33.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246", size = 1855071, upload-time = "2025-04-23T18:31:04.621Z" }, + { url = "https://files.pythonhosted.org/packages/b3/d3/4ae42d33f5e3f50dd467761304be2fa0a9417fbf09735bc2cce003480f2a/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc46a01bf8d62f227d5ecee74178ffc448ff4e5197c756331f71efcc66dc980f", size = 1897823, upload-time = "2025-04-23T18:31:06.377Z" }, + { url = "https://files.pythonhosted.org/packages/f4/f3/aa5976e8352b7695ff808599794b1fba2a9ae2ee954a3426855935799488/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a144d4f717285c6d9234a66778059f33a89096dfb9b39117663fd8413d582dcc", size = 1983792, upload-time = "2025-04-23T18:31:07.93Z" }, + { url = "https://files.pythonhosted.org/packages/d5/7a/cda9b5a23c552037717f2b2a5257e9b2bfe45e687386df9591eff7b46d28/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:73cf6373c21bc80b2e0dc88444f41ae60b2f070ed02095754eb5a01df12256de", size = 2136338, upload-time = "2025-04-23T18:31:09.283Z" }, + { url = "https://files.pythonhosted.org/packages/2b/9f/b8f9ec8dd1417eb9da784e91e1667d58a2a4a7b7b34cf4af765ef663a7e5/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = 
"sha256:3dc625f4aa79713512d1976fe9f0bc99f706a9dee21dfd1810b4bbbf228d0e8a", size = 2730998, upload-time = "2025-04-23T18:31:11.7Z" }, + { url = "https://files.pythonhosted.org/packages/47/bc/cd720e078576bdb8255d5032c5d63ee5c0bf4b7173dd955185a1d658c456/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:881b21b5549499972441da4758d662aeea93f1923f953e9cbaff14b8b9565aef", size = 2003200, upload-time = "2025-04-23T18:31:13.536Z" }, + { url = "https://files.pythonhosted.org/packages/ca/22/3602b895ee2cd29d11a2b349372446ae9727c32e78a94b3d588a40fdf187/pydantic_core-2.33.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bdc25f3681f7b78572699569514036afe3c243bc3059d3942624e936ec93450e", size = 2113890, upload-time = "2025-04-23T18:31:15.011Z" }, + { url = "https://files.pythonhosted.org/packages/ff/e6/e3c5908c03cf00d629eb38393a98fccc38ee0ce8ecce32f69fc7d7b558a7/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d", size = 2073359, upload-time = "2025-04-23T18:31:16.393Z" }, + { url = "https://files.pythonhosted.org/packages/12/e7/6a36a07c59ebefc8777d1ffdaf5ae71b06b21952582e4b07eba88a421c79/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:bc7aee6f634a6f4a95676fcb5d6559a2c2a390330098dba5e5a5f28a2e4ada30", size = 2245883, upload-time = "2025-04-23T18:31:17.892Z" }, + { url = "https://files.pythonhosted.org/packages/16/3f/59b3187aaa6cc0c1e6616e8045b284de2b6a87b027cce2ffcea073adf1d2/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:235f45e5dbcccf6bd99f9f472858849f73d11120d76ea8707115415f8e5ebebf", size = 2241074, upload-time = "2025-04-23T18:31:19.205Z" }, + { url = "https://files.pythonhosted.org/packages/e0/ed/55532bb88f674d5d8f67ab121a2a13c385df382de2a1677f30ad385f7438/pydantic_core-2.33.2-cp311-cp311-win32.whl", hash = "sha256:6368900c2d3ef09b69cb0b913f9f8263b03786e5b2a387706c5afb66800efd51", size = 1910538, upload-time = "2025-04-23T18:31:20.541Z" }, + { url = "https://files.pythonhosted.org/packages/fe/1b/25b7cccd4519c0b23c2dd636ad39d381abf113085ce4f7bec2b0dc755eb1/pydantic_core-2.33.2-cp311-cp311-win_amd64.whl", hash = "sha256:1e063337ef9e9820c77acc768546325ebe04ee38b08703244c1309cccc4f1bab", size = 1952909, upload-time = "2025-04-23T18:31:22.371Z" }, + { url = "https://files.pythonhosted.org/packages/49/a9/d809358e49126438055884c4366a1f6227f0f84f635a9014e2deb9b9de54/pydantic_core-2.33.2-cp311-cp311-win_arm64.whl", hash = "sha256:6b99022f1d19bc32a4c2a0d544fc9a76e3be90f0b3f4af413f87d38749300e65", size = 1897786, upload-time = "2025-04-23T18:31:24.161Z" }, + { url = "https://files.pythonhosted.org/packages/18/8a/2b41c97f554ec8c71f2a8a5f85cb56a8b0956addfe8b0efb5b3d77e8bdc3/pydantic_core-2.33.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a7ec89dc587667f22b6a0b6579c249fca9026ce7c333fc142ba42411fa243cdc", size = 2009000, upload-time = "2025-04-23T18:31:25.863Z" }, + { url = "https://files.pythonhosted.org/packages/a1/02/6224312aacb3c8ecbaa959897af57181fb6cf3a3d7917fd44d0f2917e6f2/pydantic_core-2.33.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3c6db6e52c6d70aa0d00d45cdb9b40f0433b96380071ea80b09277dba021ddf7", size = 1847996, upload-time = "2025-04-23T18:31:27.341Z" }, + { url = "https://files.pythonhosted.org/packages/d6/46/6dcdf084a523dbe0a0be59d054734b86a981726f221f4562aed313dbcb49/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:4e61206137cbc65e6d5256e1166f88331d3b6238e082d9f74613b9b765fb9025", size = 1880957, upload-time = "2025-04-23T18:31:28.956Z" }, + { url = "https://files.pythonhosted.org/packages/ec/6b/1ec2c03837ac00886ba8160ce041ce4e325b41d06a034adbef11339ae422/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:eb8c529b2819c37140eb51b914153063d27ed88e3bdc31b71198a198e921e011", size = 1964199, upload-time = "2025-04-23T18:31:31.025Z" }, + { url = "https://files.pythonhosted.org/packages/2d/1d/6bf34d6adb9debd9136bd197ca72642203ce9aaaa85cfcbfcf20f9696e83/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c52b02ad8b4e2cf14ca7b3d918f3eb0ee91e63b3167c32591e57c4317e134f8f", size = 2120296, upload-time = "2025-04-23T18:31:32.514Z" }, + { url = "https://files.pythonhosted.org/packages/e0/94/2bd0aaf5a591e974b32a9f7123f16637776c304471a0ab33cf263cf5591a/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:96081f1605125ba0855dfda83f6f3df5ec90c61195421ba72223de35ccfb2f88", size = 2676109, upload-time = "2025-04-23T18:31:33.958Z" }, + { url = "https://files.pythonhosted.org/packages/f9/41/4b043778cf9c4285d59742281a769eac371b9e47e35f98ad321349cc5d61/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f57a69461af2a5fa6e6bbd7a5f60d3b7e6cebb687f55106933188e79ad155c1", size = 2002028, upload-time = "2025-04-23T18:31:39.095Z" }, + { url = "https://files.pythonhosted.org/packages/cb/d5/7bb781bf2748ce3d03af04d5c969fa1308880e1dca35a9bd94e1a96a922e/pydantic_core-2.33.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:572c7e6c8bb4774d2ac88929e3d1f12bc45714ae5ee6d9a788a9fb35e60bb04b", size = 2100044, upload-time = "2025-04-23T18:31:41.034Z" }, + { url = "https://files.pythonhosted.org/packages/fe/36/def5e53e1eb0ad896785702a5bbfd25eed546cdcf4087ad285021a90ed53/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:db4b41f9bd95fbe5acd76d89920336ba96f03e149097365afe1cb092fceb89a1", size = 2058881, upload-time = "2025-04-23T18:31:42.757Z" }, + { url = "https://files.pythonhosted.org/packages/01/6c/57f8d70b2ee57fc3dc8b9610315949837fa8c11d86927b9bb044f8705419/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:fa854f5cf7e33842a892e5c73f45327760bc7bc516339fda888c75ae60edaeb6", size = 2227034, upload-time = "2025-04-23T18:31:44.304Z" }, + { url = "https://files.pythonhosted.org/packages/27/b9/9c17f0396a82b3d5cbea4c24d742083422639e7bb1d5bf600e12cb176a13/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:5f483cfb75ff703095c59e365360cb73e00185e01aaea067cd19acffd2ab20ea", size = 2234187, upload-time = "2025-04-23T18:31:45.891Z" }, + { url = "https://files.pythonhosted.org/packages/b0/6a/adf5734ffd52bf86d865093ad70b2ce543415e0e356f6cacabbc0d9ad910/pydantic_core-2.33.2-cp312-cp312-win32.whl", hash = "sha256:9cb1da0f5a471435a7bc7e439b8a728e8b61e59784b2af70d7c169f8dd8ae290", size = 1892628, upload-time = "2025-04-23T18:31:47.819Z" }, + { url = "https://files.pythonhosted.org/packages/43/e4/5479fecb3606c1368d496a825d8411e126133c41224c1e7238be58b87d7e/pydantic_core-2.33.2-cp312-cp312-win_amd64.whl", hash = "sha256:f941635f2a3d96b2973e867144fde513665c87f13fe0e193c158ac51bfaaa7b2", size = 1955866, upload-time = "2025-04-23T18:31:49.635Z" }, + { url = 
"https://files.pythonhosted.org/packages/0d/24/8b11e8b3e2be9dd82df4b11408a67c61bb4dc4f8e11b5b0fc888b38118b5/pydantic_core-2.33.2-cp312-cp312-win_arm64.whl", hash = "sha256:cca3868ddfaccfbc4bfb1d608e2ccaaebe0ae628e1416aeb9c4d88c001bb45ab", size = 1888894, upload-time = "2025-04-23T18:31:51.609Z" }, + { url = "https://files.pythonhosted.org/packages/46/8c/99040727b41f56616573a28771b1bfa08a3d3fe74d3d513f01251f79f172/pydantic_core-2.33.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:1082dd3e2d7109ad8b7da48e1d4710c8d06c253cbc4a27c1cff4fbcaa97a9e3f", size = 2015688, upload-time = "2025-04-23T18:31:53.175Z" }, + { url = "https://files.pythonhosted.org/packages/3a/cc/5999d1eb705a6cefc31f0b4a90e9f7fc400539b1a1030529700cc1b51838/pydantic_core-2.33.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f517ca031dfc037a9c07e748cefd8d96235088b83b4f4ba8939105d20fa1dcd6", size = 1844808, upload-time = "2025-04-23T18:31:54.79Z" }, + { url = "https://files.pythonhosted.org/packages/6f/5e/a0a7b8885c98889a18b6e376f344da1ef323d270b44edf8174d6bce4d622/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a9f2c9dd19656823cb8250b0724ee9c60a82f3cdf68a080979d13092a3b0fef", size = 1885580, upload-time = "2025-04-23T18:31:57.393Z" }, + { url = "https://files.pythonhosted.org/packages/3b/2a/953581f343c7d11a304581156618c3f592435523dd9d79865903272c256a/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2b0a451c263b01acebe51895bfb0e1cc842a5c666efe06cdf13846c7418caa9a", size = 1973859, upload-time = "2025-04-23T18:31:59.065Z" }, + { url = "https://files.pythonhosted.org/packages/e6/55/f1a813904771c03a3f97f676c62cca0c0a4138654107c1b61f19c644868b/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ea40a64d23faa25e62a70ad163571c0b342b8bf66d5fa612ac0dec4f069d916", size = 2120810, upload-time = "2025-04-23T18:32:00.78Z" }, + { url = "https://files.pythonhosted.org/packages/aa/c3/053389835a996e18853ba107a63caae0b9deb4a276c6b472931ea9ae6e48/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0fb2d542b4d66f9470e8065c5469ec676978d625a8b7a363f07d9a501a9cb36a", size = 2676498, upload-time = "2025-04-23T18:32:02.418Z" }, + { url = "https://files.pythonhosted.org/packages/eb/3c/f4abd740877a35abade05e437245b192f9d0ffb48bbbbd708df33d3cda37/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fdac5d6ffa1b5a83bca06ffe7583f5576555e6c8b3a91fbd25ea7780f825f7d", size = 2000611, upload-time = "2025-04-23T18:32:04.152Z" }, + { url = "https://files.pythonhosted.org/packages/59/a7/63ef2fed1837d1121a894d0ce88439fe3e3b3e48c7543b2a4479eb99c2bd/pydantic_core-2.33.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:04a1a413977ab517154eebb2d326da71638271477d6ad87a769102f7c2488c56", size = 2107924, upload-time = "2025-04-23T18:32:06.129Z" }, + { url = "https://files.pythonhosted.org/packages/04/8f/2551964ef045669801675f1cfc3b0d74147f4901c3ffa42be2ddb1f0efc4/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c8e7af2f4e0194c22b5b37205bfb293d166a7344a5b0d0eaccebc376546d77d5", size = 2063196, upload-time = "2025-04-23T18:32:08.178Z" }, + { url = "https://files.pythonhosted.org/packages/26/bd/d9602777e77fc6dbb0c7db9ad356e9a985825547dce5ad1d30ee04903918/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = 
"sha256:5c92edd15cd58b3c2d34873597a1e20f13094f59cf88068adb18947df5455b4e", size = 2236389, upload-time = "2025-04-23T18:32:10.242Z" }, + { url = "https://files.pythonhosted.org/packages/42/db/0e950daa7e2230423ab342ae918a794964b053bec24ba8af013fc7c94846/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:65132b7b4a1c0beded5e057324b7e16e10910c106d43675d9bd87d4f38dde162", size = 2239223, upload-time = "2025-04-23T18:32:12.382Z" }, + { url = "https://files.pythonhosted.org/packages/58/4d/4f937099c545a8a17eb52cb67fe0447fd9a373b348ccfa9a87f141eeb00f/pydantic_core-2.33.2-cp313-cp313-win32.whl", hash = "sha256:52fb90784e0a242bb96ec53f42196a17278855b0f31ac7c3cc6f5c1ec4811849", size = 1900473, upload-time = "2025-04-23T18:32:14.034Z" }, + { url = "https://files.pythonhosted.org/packages/a0/75/4a0a9bac998d78d889def5e4ef2b065acba8cae8c93696906c3a91f310ca/pydantic_core-2.33.2-cp313-cp313-win_amd64.whl", hash = "sha256:c083a3bdd5a93dfe480f1125926afcdbf2917ae714bdb80b36d34318b2bec5d9", size = 1955269, upload-time = "2025-04-23T18:32:15.783Z" }, + { url = "https://files.pythonhosted.org/packages/f9/86/1beda0576969592f1497b4ce8e7bc8cbdf614c352426271b1b10d5f0aa64/pydantic_core-2.33.2-cp313-cp313-win_arm64.whl", hash = "sha256:e80b087132752f6b3d714f041ccf74403799d3b23a72722ea2e6ba2e892555b9", size = 1893921, upload-time = "2025-04-23T18:32:18.473Z" }, + { url = "https://files.pythonhosted.org/packages/a4/7d/e09391c2eebeab681df2b74bfe6c43422fffede8dc74187b2b0bf6fd7571/pydantic_core-2.33.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:61c18fba8e5e9db3ab908620af374db0ac1baa69f0f32df4f61ae23f15e586ac", size = 1806162, upload-time = "2025-04-23T18:32:20.188Z" }, + { url = "https://files.pythonhosted.org/packages/f1/3d/847b6b1fed9f8ed3bb95a9ad04fbd0b212e832d4f0f50ff4d9ee5a9f15cf/pydantic_core-2.33.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95237e53bb015f67b63c91af7518a62a8660376a6a0db19b89acc77a4d6199f5", size = 1981560, upload-time = "2025-04-23T18:32:22.354Z" }, + { url = "https://files.pythonhosted.org/packages/6f/9a/e73262f6c6656262b5fdd723ad90f518f579b7bc8622e43a942eec53c938/pydantic_core-2.33.2-cp313-cp313t-win_amd64.whl", hash = "sha256:c2fc0a768ef76c15ab9238afa6da7f69895bb5d1ee83aeea2e3509af4472d0b9", size = 1935777, upload-time = "2025-04-23T18:32:25.088Z" }, + { url = "https://files.pythonhosted.org/packages/7b/27/d4ae6487d73948d6f20dddcd94be4ea43e74349b56eba82e9bdee2d7494c/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:dd14041875d09cc0f9308e37a6f8b65f5585cf2598a53aa0123df8b129d481f8", size = 2025200, upload-time = "2025-04-23T18:33:14.199Z" }, + { url = "https://files.pythonhosted.org/packages/f1/b8/b3cb95375f05d33801024079b9392a5ab45267a63400bf1866e7ce0f0de4/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:d87c561733f66531dced0da6e864f44ebf89a8fba55f31407b00c2f7f9449593", size = 1859123, upload-time = "2025-04-23T18:33:16.555Z" }, + { url = "https://files.pythonhosted.org/packages/05/bc/0d0b5adeda59a261cd30a1235a445bf55c7e46ae44aea28f7bd6ed46e091/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2f82865531efd18d6e07a04a17331af02cb7a651583c418df8266f17a63c6612", size = 1892852, upload-time = "2025-04-23T18:33:18.513Z" }, + { url = 
"https://files.pythonhosted.org/packages/3e/11/d37bdebbda2e449cb3f519f6ce950927b56d62f0b84fd9cb9e372a26a3d5/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bfb5112df54209d820d7bf9317c7a6c9025ea52e49f46b6a2060104bba37de7", size = 2067484, upload-time = "2025-04-23T18:33:20.475Z" }, + { url = "https://files.pythonhosted.org/packages/8c/55/1f95f0a05ce72ecb02a8a8a1c3be0579bbc29b1d5ab68f1378b7bebc5057/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:64632ff9d614e5eecfb495796ad51b0ed98c453e447a76bcbeeb69615079fc7e", size = 2108896, upload-time = "2025-04-23T18:33:22.501Z" }, + { url = "https://files.pythonhosted.org/packages/53/89/2b2de6c81fa131f423246a9109d7b2a375e83968ad0800d6e57d0574629b/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8", size = 2069475, upload-time = "2025-04-23T18:33:24.528Z" }, + { url = "https://files.pythonhosted.org/packages/b8/e9/1f7efbe20d0b2b10f6718944b5d8ece9152390904f29a78e68d4e7961159/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:de4b83bb311557e439b9e186f733f6c645b9417c84e2eb8203f3f820a4b988bf", size = 2239013, upload-time = "2025-04-23T18:33:26.621Z" }, + { url = "https://files.pythonhosted.org/packages/3c/b2/5309c905a93811524a49b4e031e9851a6b00ff0fb668794472ea7746b448/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:82f68293f055f51b51ea42fafc74b6aad03e70e191799430b90c13d643059ebb", size = 2238715, upload-time = "2025-04-23T18:33:28.656Z" }, + { url = "https://files.pythonhosted.org/packages/32/56/8a7ca5d2cd2cda1d245d34b1c9a942920a718082ae8e54e5f3e5a58b7add/pydantic_core-2.33.2-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:329467cecfb529c925cf2bbd4d60d2c509bc2fb52a20c1045bf09bb70971a9c1", size = 2066757, upload-time = "2025-04-23T18:33:30.645Z" }, ] [[package]] name = "pydantic-extra-types" -version = "2.10.6" +version = "2.10.5" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "pydantic" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/3a/10/fb64987804cde41bcc39d9cd757cd5f2bb5d97b389d81aa70238b14b8a7e/pydantic_extra_types-2.10.6.tar.gz", hash = "sha256:c63d70bf684366e6bbe1f4ee3957952ebe6973d41e7802aea0b770d06b116aeb", size = 141858, upload-time = "2025-10-08T13:47:49.483Z" } +sdist = { url = "https://files.pythonhosted.org/packages/7e/ba/4178111ec4116c54e1dc7ecd2a1ff8f54256cdbd250e576882911e8f710a/pydantic_extra_types-2.10.5.tar.gz", hash = "sha256:1dcfa2c0cf741a422f088e0dbb4690e7bfadaaf050da3d6f80d6c3cf58a2bad8", size = 138429, upload-time = "2025-06-02T09:31:52.713Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/93/04/5c918669096da8d1c9ec7bb716bd72e755526103a61bc5e76a3e4fb23b53/pydantic_extra_types-2.10.6-py3-none-any.whl", hash = "sha256:6106c448316d30abf721b5b9fecc65e983ef2614399a24142d689c7546cc246a", size = 40949, upload-time = "2025-10-08T13:47:48.268Z" }, + { url = "https://files.pythonhosted.org/packages/70/1a/5f4fd9e7285f10c44095a4f9fe17d0f358d1702a7c74a9278c794e8a7537/pydantic_extra_types-2.10.5-py3-none-any.whl", hash = "sha256:b60c4e23d573a69a4f1a16dd92888ecc0ef34fb0e655b4f305530377fa70e7a8", size = 38315, upload-time = "2025-06-02T09:31:51.229Z" }, ] [[package]] @@ -4647,6 +4953,8 @@ wheels = [ { url = 
"https://files.pythonhosted.org/packages/c0/09/e83228e878e73bf756749939f906a872da54488f18d75658afa7f1abbab1/pyobjc_core-11.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:765b97dea6b87ec4612b3212258024d8496ea23517c95a1c5f0735f96b7fd529", size = 677985, upload-time = "2025-06-14T20:44:48.375Z" }, { url = "https://files.pythonhosted.org/packages/c5/24/12e4e2dae5f85fd0c0b696404ed3374ea6ca398e7db886d4f1322eb30799/pyobjc_core-11.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:18986f83998fbd5d3f56d8a8428b2f3e0754fd15cef3ef786ca0d29619024f2c", size = 676431, upload-time = "2025-06-14T20:44:49.908Z" }, { url = "https://files.pythonhosted.org/packages/f7/79/031492497624de4c728f1857181b06ce8c56444db4d49418fa459cba217c/pyobjc_core-11.1-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:8849e78cfe6595c4911fbba29683decfb0bf57a350aed8a43316976ba6f659d2", size = 719330, upload-time = "2025-06-14T20:44:51.621Z" }, + { url = "https://files.pythonhosted.org/packages/ed/7d/6169f16a0c7ec15b9381f8bf33872baf912de2ef68d96c798ca4c6ee641f/pyobjc_core-11.1-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:8cb9ed17a8d84a312a6e8b665dd22393d48336ea1d8277e7ad20c19a38edf731", size = 667203, upload-time = "2025-06-14T20:44:53.262Z" }, + { url = "https://files.pythonhosted.org/packages/49/0f/f5ab2b0e57430a3bec9a62b6153c0e79c05a30d77b564efdb9f9446eeac5/pyobjc_core-11.1-cp314-cp314t-macosx_11_0_universal2.whl", hash = "sha256:f2455683e807f8541f0d83fbba0f5d9a46128ab0d5cc83ea208f0bec759b7f96", size = 708807, upload-time = "2025-06-14T20:44:54.851Z" }, ] [[package]] @@ -4662,6 +4970,8 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/68/da/41c0f7edc92ead461cced7e67813e27fa17da3c5da428afdb4086c69d7ba/pyobjc_framework_cocoa-11.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:806de56f06dfba8f301a244cce289d54877c36b4b19818e3b53150eb7c2424d0", size = 388983, upload-time = "2025-06-14T20:46:52.591Z" }, { url = "https://files.pythonhosted.org/packages/4e/0b/a01477cde2a040f97e226f3e15e5ffd1268fcb6d1d664885a95ba592eca9/pyobjc_framework_cocoa-11.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:54e93e1d9b0fc41c032582a6f0834befe1d418d73893968f3f450281b11603da", size = 389049, upload-time = "2025-06-14T20:46:53.757Z" }, { url = "https://files.pythonhosted.org/packages/bc/e6/64cf2661f6ab7c124d0486ec6d1d01a9bb2838a0d2a46006457d8c5e6845/pyobjc_framework_cocoa-11.1-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:fd5245ee1997d93e78b72703be1289d75d88ff6490af94462b564892e9266350", size = 393110, upload-time = "2025-06-14T20:46:54.894Z" }, + { url = "https://files.pythonhosted.org/packages/33/87/01e35c5a3c5bbdc93d5925366421e10835fcd7b23347b6c267df1b16d0b3/pyobjc_framework_cocoa-11.1-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:aede53a1afc5433e1e7d66568cc52acceeb171b0a6005407a42e8e82580b4fc0", size = 392644, upload-time = "2025-06-14T20:46:56.503Z" }, + { url = "https://files.pythonhosted.org/packages/c1/7c/54afe9ffee547c41e1161691e72067a37ed27466ac71c089bfdcd07ca70d/pyobjc_framework_cocoa-11.1-cp314-cp314t-macosx_11_0_universal2.whl", hash = "sha256:1b5de4e1757bb65689d6dc1f8d8717de9ec8587eb0c4831c134f13aba29f9b71", size = 396742, upload-time = "2025-06-14T20:46:57.64Z" }, ] [[package]] @@ -4678,6 +4988,8 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/9b/37/ee6e0bdd31b3b277fec00e5ee84d30eb1b5b8b0e025095e24ddc561697d0/pyobjc_framework_quartz-11.1-cp312-cp312-macosx_10_13_universal2.whl", hash = 
"sha256:9ac806067541917d6119b98d90390a6944e7d9bd737f5c0a79884202327c9204", size = 216410, upload-time = "2025-06-14T20:53:36.346Z" }, { url = "https://files.pythonhosted.org/packages/bd/27/4f4fc0e6a0652318c2844608dd7c41e49ba6006ee5fb60c7ae417c338357/pyobjc_framework_quartz-11.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:43a1138280571bbf44df27a7eef519184b5c4183a588598ebaaeb887b9e73e76", size = 216816, upload-time = "2025-06-14T20:53:37.358Z" }, { url = "https://files.pythonhosted.org/packages/b8/8a/1d15e42496bef31246f7401aad1ebf0f9e11566ce0de41c18431715aafbc/pyobjc_framework_quartz-11.1-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:b23d81c30c564adf6336e00b357f355b35aad10075dd7e837cfd52a9912863e5", size = 221941, upload-time = "2025-06-14T20:53:38.34Z" }, + { url = "https://files.pythonhosted.org/packages/32/a8/a3f84d06e567efc12c104799c7fd015f9bea272a75f799eda8b79e8163c6/pyobjc_framework_quartz-11.1-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:07cbda78b4a8fcf3a2d96e047a2ff01f44e3e1820f46f0f4b3b6d77ff6ece07c", size = 221312, upload-time = "2025-06-14T20:53:39.435Z" }, + { url = "https://files.pythonhosted.org/packages/76/ef/8c08d4f255bb3efe8806609d1f0b1ddd29684ab0f9ffb5e26d3ad7957b29/pyobjc_framework_quartz-11.1-cp314-cp314t-macosx_11_0_universal2.whl", hash = "sha256:39d02a3df4b5e3eee1e0da0fb150259476910d2a9aa638ab94153c24317a9561", size = 226353, upload-time = "2025-06-14T20:53:40.655Z" }, ] [[package]] @@ -4694,6 +5006,8 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/35/16/7fc52ab1364ada5885bf9b4c9ea9da3ad892b847c9b86aa59e086b16fc11/pyobjc_framework_security-11.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:2eb4ba6d8b221b9ad5d010e026247e8aa26ee43dcaf327e848340ed227d22d7e", size = 41222, upload-time = "2025-06-14T20:54:37.032Z" }, { url = "https://files.pythonhosted.org/packages/3f/d8/cb20b4c4d15b2bdc7e39481159e50a933ddb87e4702d35060c254b316055/pyobjc_framework_security-11.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:158da3b2474e2567fd269531c4ee9f35b8ba4f1eccbd1fb4a37c85a18bf1243c", size = 41221, upload-time = "2025-06-14T20:54:37.803Z" }, { url = "https://files.pythonhosted.org/packages/cb/3c/d13d6870f5d66f5379565887b332f86f16d666dc50a1944d7e3a1462e76c/pyobjc_framework_security-11.1-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:141cc3ee08627ae0698264efc3dbbaf28d2255e0fe690e336eb8f0f387c4af01", size = 42099, upload-time = "2025-06-14T20:54:38.627Z" }, + { url = "https://files.pythonhosted.org/packages/f0/3d/2f61d4566e80f203d0e05ddd788037dc06a94d200edac25d2747fd79b5aa/pyobjc_framework_security-11.1-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:858a18303711eb69d18d1a64cf8bb2202f64a3bd1c82203c511990dbd8326514", size = 41288, upload-time = "2025-06-14T20:54:39.432Z" }, + { url = "https://files.pythonhosted.org/packages/15/44/99ef33a5319ed2cb6c0a51ed36214adf21ccb37cce970b1acc8bfe57ce23/pyobjc_framework_security-11.1-cp314-cp314t-macosx_11_0_universal2.whl", hash = "sha256:4db1ebf6395cd370139cb35ff172505fc449c7fdf5d3a28f2ada8a30ef132cd0", size = 42849, upload-time = "2025-06-14T20:54:40.174Z" }, ] [[package]] @@ -4710,6 +5024,8 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/8a/7e/fa2c18c0c0f9321e5036e54b9da7a196956b531e50fe1a76e7dfdbe8fac2/pyobjc_framework_webkit-11.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:1a6e6f64ca53c4953f17e808ecac11da288d9a6ade738156ba161732a5e0c96a", size = 51464, upload-time = "2025-06-14T20:56:27.653Z" }, { url = 
"https://files.pythonhosted.org/packages/7a/8d/66561d95b00b8e57a9d5725ae34a8d9ca7ebeb776f13add989421ff90279/pyobjc_framework_webkit-11.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:1d01008756c3912b02b7c02f62432467fbee90a93e3b8e31fa351b4ca97c9c98", size = 51495, upload-time = "2025-06-14T20:56:28.464Z" }, { url = "https://files.pythonhosted.org/packages/db/c3/e790b518f84ea8dfbe32a9dcb4d8611b532de08057d19f853c1890110938/pyobjc_framework_webkit-11.1-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:864f9867a2caaeaeb83e5c0fa3dcf78169622233cf93a9a5eeb7012ced3b8076", size = 51985, upload-time = "2025-06-14T20:56:29.303Z" }, + { url = "https://files.pythonhosted.org/packages/d7/4f/194e3e7c01861a5e46dfe9e1fa28ad01fd07190cb514e41a7dcf1f0b7031/pyobjc_framework_webkit-11.1-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:13b774d4244734cb77bf3c3648149c163f62acaa105243d7c48bb3fd856b5628", size = 52248, upload-time = "2025-06-14T20:56:30.158Z" }, + { url = "https://files.pythonhosted.org/packages/31/09/28884e7c10d3a76a76c2c8f55369dd96a90f0283800c68f5c764e1fb8e2e/pyobjc_framework_webkit-11.1-cp314-cp314t-macosx_11_0_universal2.whl", hash = "sha256:c1c00d549ab1d50e3d7e8f5f71352b999d2c32dc2365c299f317525eb9bff916", size = 52725, upload-time = "2025-06-14T20:56:30.993Z" }, ] [[package]] @@ -4847,14 +5163,14 @@ wheels = [ [[package]] name = "pytest-env" -version = "1.2.0" +version = "1.1.5" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "pytest" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/13/12/9c87d0ca45d5992473208bcef2828169fa7d39b8d7fc6e3401f5c08b8bf7/pytest_env-1.2.0.tar.gz", hash = "sha256:475e2ebe8626cee01f491f304a74b12137742397d6c784ea4bc258f069232b80", size = 8973, upload-time = "2025-10-09T19:15:47.42Z" } +sdist = { url = "https://files.pythonhosted.org/packages/1f/31/27f28431a16b83cab7a636dce59cf397517807d247caa38ee67d65e71ef8/pytest_env-1.1.5.tar.gz", hash = "sha256:91209840aa0e43385073ac464a554ad2947cc2fd663a9debf88d03b01e0cc1cf", size = 8911, upload-time = "2024-09-17T22:39:18.566Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/27/98/822b924a4a3eb58aacba84444c7439fce32680592f394de26af9c76e2569/pytest_env-1.2.0-py3-none-any.whl", hash = "sha256:d7e5b7198f9b83c795377c09feefa45d56083834e60d04767efd64819fc9da00", size = 6251, upload-time = "2025-10-09T19:15:46.077Z" }, + { url = "https://files.pythonhosted.org/packages/de/b8/87cfb16045c9d4092cfcf526135d73b88101aac83bc1adcf82dfb5fd3833/pytest_env-1.1.5-py3-none-any.whl", hash = "sha256:ce90cf8772878515c24b31cd97c7fa1f4481cd68d588419fd45f10ecaee6bc30", size = 6141, upload-time = "2024-09-17T22:39:16.942Z" }, ] [[package]] @@ -5043,11 +5359,11 @@ wheels = [ [[package]] name = "python-json-logger" -version = "4.0.0" +version = "3.3.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/29/bf/eca6a3d43db1dae7070f70e160ab20b807627ba953663ba07928cdd3dc58/python_json_logger-4.0.0.tar.gz", hash = "sha256:f58e68eb46e1faed27e0f574a55a0455eecd7b8a5b88b85a784519ba3cff047f", size = 17683, upload-time = "2025-10-06T04:15:18.984Z" } +sdist = { url = "https://files.pythonhosted.org/packages/9e/de/d3144a0bceede957f961e975f3752760fbe390d57fbe194baf709d8f1f7b/python_json_logger-3.3.0.tar.gz", hash = "sha256:12b7e74b17775e7d565129296105bbe3910842d9d0eb083fc83a6a617aa8df84", size = 16642, upload-time = "2025-03-07T07:08:27.301Z" } wheels = [ - { url = 
"https://files.pythonhosted.org/packages/51/e5/fecf13f06e5e5f67e8837d777d1bc43fac0ed2b77a676804df5c34744727/python_json_logger-4.0.0-py3-none-any.whl", hash = "sha256:af09c9daf6a813aa4cc7180395f50f2a9e5fa056034c9953aec92e381c5ba1e2", size = 15548, upload-time = "2025-10-06T04:15:17.553Z" }, + { url = "https://files.pythonhosted.org/packages/08/20/0f2523b9e50a8052bc6a8b732dfc8568abbdc42010aef03a2d750bdab3b2/python_json_logger-3.3.0-py3-none-any.whl", hash = "sha256:dd980fae8cffb24c13caf6e158d3d61c0d6d22342f932cb6e9deedab3d35eec7", size = 15163, upload-time = "2025-03-07T07:08:25.627Z" }, ] [[package]] @@ -5154,6 +5470,8 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/02/4e/1098484e042c9485f56f16eb2b69b43b874bd526044ee401512234cf9e04/pywinpty-3.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:99fdd9b455f0ad6419aba6731a7a0d2f88ced83c3c94a80ff9533d95fa8d8a9e", size = 2050391, upload-time = "2025-10-03T21:19:01.642Z" }, { url = "https://files.pythonhosted.org/packages/fc/19/b757fe28008236a4a713e813283721b8a40aa60cd7d3f83549f2e25a3155/pywinpty-3.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:18f78b81e4cfee6aabe7ea8688441d30247b73e52cd9657138015c5f4ee13a51", size = 2050057, upload-time = "2025-10-03T21:19:26.732Z" }, { url = "https://files.pythonhosted.org/packages/cb/44/cbae12ecf6f4fa4129c36871fd09c6bef4f98d5f625ecefb5e2449765508/pywinpty-3.0.2-cp313-cp313t-win_amd64.whl", hash = "sha256:663383ecfab7fc382cc97ea5c4f7f0bb32c2f889259855df6ea34e5df42d305b", size = 2049874, upload-time = "2025-10-03T21:18:53.923Z" }, + { url = "https://files.pythonhosted.org/packages/ca/15/f12c6055e2d7a617d4d5820e8ac4ceaff849da4cb124640ef5116a230771/pywinpty-3.0.2-cp314-cp314-win_amd64.whl", hash = "sha256:28297cecc37bee9f24d8889e47231972d6e9e84f7b668909de54f36ca785029a", size = 2050386, upload-time = "2025-10-03T21:18:50.477Z" }, + { url = "https://files.pythonhosted.org/packages/de/24/c6907c5bb06043df98ad6a0a0ff5db2e0affcecbc3b15c42404393a3f72a/pywinpty-3.0.2-cp314-cp314t-win_amd64.whl", hash = "sha256:34b55ae9a1b671fe3eae071d86618110538e8eaad18fcb1531c0830b91a82767", size = 2049834, upload-time = "2025-10-03T21:19:25.688Z" }, ] [[package]] @@ -5191,6 +5509,24 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/de/94/980b50a6531b3019e45ddeada0626d45fa85cbe22300844a7983285bed3b/pyyaml-6.0.3-cp313-cp313-win32.whl", hash = "sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26", size = 137427, upload-time = "2025-09-25T21:32:32.58Z" }, { url = "https://files.pythonhosted.org/packages/97/c9/39d5b874e8b28845e4ec2202b5da735d0199dbe5b8fb85f91398814a9a46/pyyaml-6.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c", size = 154090, upload-time = "2025-09-25T21:32:33.659Z" }, { url = "https://files.pythonhosted.org/packages/73/e8/2bdf3ca2090f68bb3d75b44da7bbc71843b19c9f2b9cb9b0f4ab7a5a4329/pyyaml-6.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb", size = 140246, upload-time = "2025-09-25T21:32:34.663Z" }, + { url = "https://files.pythonhosted.org/packages/9d/8c/f4bd7f6465179953d3ac9bc44ac1a8a3e6122cf8ada906b4f96c60172d43/pyyaml-6.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac", size = 181814, upload-time = "2025-09-25T21:32:35.712Z" }, + { url = 
"https://files.pythonhosted.org/packages/bd/9c/4d95bb87eb2063d20db7b60faa3840c1b18025517ae857371c4dd55a6b3a/pyyaml-6.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310", size = 173809, upload-time = "2025-09-25T21:32:36.789Z" }, + { url = "https://files.pythonhosted.org/packages/92/b5/47e807c2623074914e29dabd16cbbdd4bf5e9b2db9f8090fa64411fc5382/pyyaml-6.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7", size = 766454, upload-time = "2025-09-25T21:32:37.966Z" }, + { url = "https://files.pythonhosted.org/packages/02/9e/e5e9b168be58564121efb3de6859c452fccde0ab093d8438905899a3a483/pyyaml-6.0.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788", size = 836355, upload-time = "2025-09-25T21:32:39.178Z" }, + { url = "https://files.pythonhosted.org/packages/88/f9/16491d7ed2a919954993e48aa941b200f38040928474c9e85ea9e64222c3/pyyaml-6.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5", size = 794175, upload-time = "2025-09-25T21:32:40.865Z" }, + { url = "https://files.pythonhosted.org/packages/dd/3f/5989debef34dc6397317802b527dbbafb2b4760878a53d4166579111411e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764", size = 755228, upload-time = "2025-09-25T21:32:42.084Z" }, + { url = "https://files.pythonhosted.org/packages/d7/ce/af88a49043cd2e265be63d083fc75b27b6ed062f5f9fd6cdc223ad62f03e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35", size = 789194, upload-time = "2025-09-25T21:32:43.362Z" }, + { url = "https://files.pythonhosted.org/packages/23/20/bb6982b26a40bb43951265ba29d4c246ef0ff59c9fdcdf0ed04e0687de4d/pyyaml-6.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac", size = 156429, upload-time = "2025-09-25T21:32:57.844Z" }, + { url = "https://files.pythonhosted.org/packages/f4/f4/a4541072bb9422c8a883ab55255f918fa378ecf083f5b85e87fc2b4eda1b/pyyaml-6.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3", size = 143912, upload-time = "2025-09-25T21:32:59.247Z" }, + { url = "https://files.pythonhosted.org/packages/7c/f9/07dd09ae774e4616edf6cda684ee78f97777bdd15847253637a6f052a62f/pyyaml-6.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3", size = 189108, upload-time = "2025-09-25T21:32:44.377Z" }, + { url = "https://files.pythonhosted.org/packages/4e/78/8d08c9fb7ce09ad8c38ad533c1191cf27f7ae1effe5bb9400a46d9437fcf/pyyaml-6.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba", size = 183641, upload-time = "2025-09-25T21:32:45.407Z" }, + { url = "https://files.pythonhosted.org/packages/7b/5b/3babb19104a46945cf816d047db2788bcaf8c94527a805610b0289a01c6b/pyyaml-6.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c", size = 831901, upload-time = 
"2025-09-25T21:32:48.83Z" }, + { url = "https://files.pythonhosted.org/packages/8b/cc/dff0684d8dc44da4d22a13f35f073d558c268780ce3c6ba1b87055bb0b87/pyyaml-6.0.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702", size = 861132, upload-time = "2025-09-25T21:32:50.149Z" }, + { url = "https://files.pythonhosted.org/packages/b1/5e/f77dc6b9036943e285ba76b49e118d9ea929885becb0a29ba8a7c75e29fe/pyyaml-6.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c", size = 839261, upload-time = "2025-09-25T21:32:51.808Z" }, + { url = "https://files.pythonhosted.org/packages/ce/88/a9db1376aa2a228197c58b37302f284b5617f56a5d959fd1763fb1675ce6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065", size = 805272, upload-time = "2025-09-25T21:32:52.941Z" }, + { url = "https://files.pythonhosted.org/packages/da/92/1446574745d74df0c92e6aa4a7b0b3130706a4142b2d1a5869f2eaa423c6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65", size = 829923, upload-time = "2025-09-25T21:32:54.537Z" }, + { url = "https://files.pythonhosted.org/packages/f0/7a/1c7270340330e575b92f397352af856a8c06f230aa3e76f86b39d01b416a/pyyaml-6.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9", size = 174062, upload-time = "2025-09-25T21:32:55.767Z" }, + { url = "https://files.pythonhosted.org/packages/f1/12/de94a39c2ef588c7e6455cfbe7343d3b2dc9d6b6b2f40c4c6565744c873d/pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b", size = 149341, upload-time = "2025-09-25T21:32:56.828Z" }, ] [[package]] @@ -5234,6 +5570,16 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/4f/6f/55c10e2e49ad52d080dc24e37adb215e5b0d64990b57598abc2e3f01725b/pyzmq-27.1.0-cp313-cp313t-win32.whl", hash = "sha256:7ccc0700cfdf7bd487bea8d850ec38f204478681ea02a582a8da8171b7f90a1c", size = 574964, upload-time = "2025-09-08T23:08:37.178Z" }, { url = "https://files.pythonhosted.org/packages/87/4d/2534970ba63dd7c522d8ca80fb92777f362c0f321900667c615e2067cb29/pyzmq-27.1.0-cp313-cp313t-win_amd64.whl", hash = "sha256:8085a9fba668216b9b4323be338ee5437a235fe275b9d1610e422ccc279733e2", size = 641029, upload-time = "2025-09-08T23:08:40.595Z" }, { url = "https://files.pythonhosted.org/packages/f6/fa/f8aea7a28b0641f31d40dea42d7ef003fded31e184ef47db696bc74cd610/pyzmq-27.1.0-cp313-cp313t-win_arm64.whl", hash = "sha256:6bb54ca21bcfe361e445256c15eedf083f153811c37be87e0514934d6913061e", size = 561541, upload-time = "2025-09-08T23:08:42.668Z" }, + { url = "https://files.pythonhosted.org/packages/87/45/19efbb3000956e82d0331bafca5d9ac19ea2857722fa2caacefb6042f39d/pyzmq-27.1.0-cp314-cp314t-macosx_10_15_universal2.whl", hash = "sha256:ce980af330231615756acd5154f29813d553ea555485ae712c491cd483df6b7a", size = 1341197, upload-time = "2025-09-08T23:08:44.973Z" }, + { url = "https://files.pythonhosted.org/packages/48/43/d72ccdbf0d73d1343936296665826350cb1e825f92f2db9db3e61c2162a2/pyzmq-27.1.0-cp314-cp314t-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:1779be8c549e54a1c38f805e56d2a2e5c009d26de10921d7d51cfd1c8d4632ea", size = 897175, upload-time = "2025-09-08T23:08:46.601Z" }, + 
{ url = "https://files.pythonhosted.org/packages/2f/2e/a483f73a10b65a9ef0161e817321d39a770b2acf8bcf3004a28d90d14a94/pyzmq-27.1.0-cp314-cp314t-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7200bb0f03345515df50d99d3db206a0a6bee1955fbb8c453c76f5bf0e08fb96", size = 660427, upload-time = "2025-09-08T23:08:48.187Z" }, + { url = "https://files.pythonhosted.org/packages/f5/d2/5f36552c2d3e5685abe60dfa56f91169f7a2d99bbaf67c5271022ab40863/pyzmq-27.1.0-cp314-cp314t-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:01c0e07d558b06a60773744ea6251f769cd79a41a97d11b8bf4ab8f034b0424d", size = 847929, upload-time = "2025-09-08T23:08:49.76Z" }, + { url = "https://files.pythonhosted.org/packages/c4/2a/404b331f2b7bf3198e9945f75c4c521f0c6a3a23b51f7a4a401b94a13833/pyzmq-27.1.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:80d834abee71f65253c91540445d37c4c561e293ba6e741b992f20a105d69146", size = 1650193, upload-time = "2025-09-08T23:08:51.7Z" }, + { url = "https://files.pythonhosted.org/packages/1c/0b/f4107e33f62a5acf60e3ded67ed33d79b4ce18de432625ce2fc5093d6388/pyzmq-27.1.0-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:544b4e3b7198dde4a62b8ff6685e9802a9a1ebf47e77478a5eb88eca2a82f2fd", size = 2024388, upload-time = "2025-09-08T23:08:53.393Z" }, + { url = "https://files.pythonhosted.org/packages/0d/01/add31fe76512642fd6e40e3a3bd21f4b47e242c8ba33efb6809e37076d9b/pyzmq-27.1.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:cedc4c68178e59a4046f97eca31b148ddcf51e88677de1ef4e78cf06c5376c9a", size = 1885316, upload-time = "2025-09-08T23:08:55.702Z" }, + { url = "https://files.pythonhosted.org/packages/c4/59/a5f38970f9bf07cee96128de79590bb354917914a9be11272cfc7ff26af0/pyzmq-27.1.0-cp314-cp314t-win32.whl", hash = "sha256:1f0b2a577fd770aa6f053211a55d1c47901f4d537389a034c690291485e5fe92", size = 587472, upload-time = "2025-09-08T23:08:58.18Z" }, + { url = "https://files.pythonhosted.org/packages/70/d8/78b1bad170f93fcf5e3536e70e8fadac55030002275c9a29e8f5719185de/pyzmq-27.1.0-cp314-cp314t-win_amd64.whl", hash = "sha256:19c9468ae0437f8074af379e986c5d3d7d7bfe033506af442e8c879732bedbe0", size = 661401, upload-time = "2025-09-08T23:08:59.802Z" }, + { url = "https://files.pythonhosted.org/packages/81/d6/4bfbb40c9a0b42fc53c7cf442f6385db70b40f74a783130c5d0a5aa62228/pyzmq-27.1.0-cp314-cp314t-win_arm64.whl", hash = "sha256:dc5dbf68a7857b59473f7df42650c621d7e8923fb03fa74a526890f4d33cc4d7", size = 575170, upload-time = "2025-09-08T23:09:01.418Z" }, { url = "https://files.pythonhosted.org/packages/4c/c6/c4dcdecdbaa70969ee1fdced6d7b8f60cfabe64d25361f27ac4665a70620/pyzmq-27.1.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:18770c8d3563715387139060d37859c02ce40718d1faf299abddcdcc6a649066", size = 836265, upload-time = "2025-09-08T23:09:49.376Z" }, { url = "https://files.pythonhosted.org/packages/3e/79/f38c92eeaeb03a2ccc2ba9866f0439593bb08c5e3b714ac1d553e5c96e25/pyzmq-27.1.0-pp311-pypy311_pp73-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:ac25465d42f92e990f8d8b0546b01c391ad431c3bf447683fdc40565941d0604", size = 800208, upload-time = "2025-09-08T23:09:51.073Z" }, { url = "https://files.pythonhosted.org/packages/49/0e/3f0d0d335c6b3abb9b7b723776d0b21fa7f3a6c819a0db6097059aada160/pyzmq-27.1.0-pp311-pypy311_pp73-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:53b40f8ae006f2734ee7608d59ed661419f087521edbfc2149c3932e9c14808c", size = 567747, upload-time = "2025-09-08T23:09:52.698Z" }, @@ -5267,16 +5613,16 @@ wheels = [ [[package]] name = 
"referencing" -version = "0.37.0" +version = "0.36.2" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "attrs" }, { name = "rpds-py" }, { name = "typing-extensions", marker = "python_full_version < '3.13'" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/22/f5/df4e9027acead3ecc63e50fe1e36aca1523e1719559c499951bb4b53188f/referencing-0.37.0.tar.gz", hash = "sha256:44aefc3142c5b842538163acb373e24cce6632bd54bdb01b21ad5863489f50d8", size = 78036, upload-time = "2025-10-13T15:30:48.871Z" } +sdist = { url = "https://files.pythonhosted.org/packages/2f/db/98b5c277be99dd18bfd91dd04e1b759cad18d1a338188c936e92f921c7e2/referencing-0.36.2.tar.gz", hash = "sha256:df2e89862cd09deabbdba16944cc3f10feb6b3e6f18e902f7cc25609a34775aa", size = 74744, upload-time = "2025-01-25T08:48:16.138Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/2c/58/ca301544e1fa93ed4f80d724bf5b194f6e4b945841c5bfd555878eea9fcb/referencing-0.37.0-py3-none-any.whl", hash = "sha256:381329a9f99628c9069361716891d34ad94af76e461dcb0335825aecc7692231", size = 26766, upload-time = "2025-10-13T15:30:47.625Z" }, + { url = "https://files.pythonhosted.org/packages/c1/b1/3baf80dc6d2b7bc27a95a67752d0208e410351e3feb4eb78de5f77454d8d/referencing-0.36.2-py3-none-any.whl", hash = "sha256:e8699adbbf8b5c7de96d8ffa0eb5c158b3beafce084968e2ea8bb08c6794dcd0", size = 26775, upload-time = "2025-01-25T08:48:14.241Z" }, ] [[package]] @@ -5360,29 +5706,29 @@ wheels = [ [[package]] name = "rich" -version = "14.2.0" +version = "14.1.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "markdown-it-py" }, { name = "pygments" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/fb/d2/8920e102050a0de7bfabeb4c4614a49248cf8d5d7a8d01885fbb24dc767a/rich-14.2.0.tar.gz", hash = "sha256:73ff50c7c0c1c77c8243079283f4edb376f0f6442433aecb8ce7e6d0b92d1fe4", size = 219990, upload-time = "2025-10-09T14:16:53.064Z" } +sdist = { url = "https://files.pythonhosted.org/packages/fe/75/af448d8e52bf1d8fa6a9d089ca6c07ff4453d86c65c145d0a300bb073b9b/rich-14.1.0.tar.gz", hash = "sha256:e497a48b844b0320d45007cdebfeaeed8db2a4f4bcf49f15e455cfc4af11eaa8", size = 224441, upload-time = "2025-07-25T07:32:58.125Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/25/7a/b0178788f8dc6cafce37a212c99565fa1fe7872c70c6c9c1e1a372d9d88f/rich-14.2.0-py3-none-any.whl", hash = "sha256:76bc51fe2e57d2b1be1f96c524b890b816e334ab4c1e45888799bfaab0021edd", size = 243393, upload-time = "2025-10-09T14:16:51.245Z" }, + { url = "https://files.pythonhosted.org/packages/e3/30/3c4d035596d3cf444529e0b2953ad0466f6049528a879d27534700580395/rich-14.1.0-py3-none-any.whl", hash = "sha256:536f5f1785986d6dbdea3c75205c473f970777b4a0d6c6dd1b696aa05a3fa04f", size = 243368, upload-time = "2025-07-25T07:32:56.73Z" }, ] [[package]] name = "rich-click" -version = "1.9.3" +version = "1.9.2" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "click" }, { name = "colorama", marker = "sys_platform == 'win32'" }, { name = "rich" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/9d/90/95cff624a176de6d00a4ddc4fb0238649bca09c19bd37d5b8d1962f8dcfc/rich_click-1.9.3.tar.gz", hash = "sha256:60839150a935604df1378b159da340d3fff91f912903e935da7cb615b5738c1b", size = 73549, upload-time = "2025-10-09T18:00:40.455Z" } +sdist = { url = "https://files.pythonhosted.org/packages/0c/4d/e8fcbd785a93dc5d7aef38f8aa4ade1e31b0c820eb2e8ff267056eda70b1/rich_click-1.9.2.tar.gz", hash = 
"sha256:1c4212f05561be0cac6a9c1743e1ebcd4fe1fb1e311f9f672abfada3be649db6", size = 73533, upload-time = "2025-10-04T21:56:25.36Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/5b/76/5679d9eee13b8670084d2fe5d7933931b50fd896391693ba690f63916d66/rich_click-1.9.3-py3-none-any.whl", hash = "sha256:8ef51bc340db4d048a846c15c035d27b88acf720cbbb9b6fecf6c8b1a297b909", size = 70168, upload-time = "2025-10-09T18:00:39.464Z" }, + { url = "https://files.pythonhosted.org/packages/a9/27/7a82106d69738aefb81e044d6dd278053c5263581c5e8e5330e1339b8444/rich_click-1.9.2-py3-none-any.whl", hash = "sha256:5079dad67ed7df434a9ec1f20b1d62d831e58c78740026f968ce3d3b861f01a0", size = 70153, upload-time = "2025-10-04T21:56:24.066Z" }, ] [[package]] @@ -5455,6 +5801,12 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/a3/a7/7400a4343d1b5a1345a98846c6fd7768ff13890d207fce79d690c7fd7798/rignore-0.7.0-cp313-cp313t-musllinux_1_2_armv7l.whl", hash = "sha256:b12b316adf6cf64f9d22bd690b2aa019a37335a1f632a0da7fb15a423cb64080", size = 1128403, upload-time = "2025-10-02T13:25:38.394Z" }, { url = "https://files.pythonhosted.org/packages/45/8b/ce8ff27336a86bad47bbf011f8f7fb0b82b559ee4a0d6a4815ee3555ef56/rignore-0.7.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:dba8181d999387c17dd6cce5fd7f0009376ca8623d2d86842d034b18d83dc768", size = 1105552, upload-time = "2025-10-02T13:25:54.511Z" }, { url = "https://files.pythonhosted.org/packages/8c/e2/7925b564d853c7057f150a7f2f384400422ed30f7b7baf2fde5849562381/rignore-0.7.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:04a3d4513cdd184f4f849ae8d6407a169cca543a2c4dd69bfc42e67cb0155504", size = 1114826, upload-time = "2025-10-02T13:26:12.56Z" }, + { url = "https://files.pythonhosted.org/packages/c4/34/c42ccdd81143d38d99e45b965e4040a1ef6c07a365ad205dd94b6d16c794/rignore-0.7.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:a296bc26b713aacd0f31702e7d89426ba6240abdbf01b2b18daeeaeaa782f475", size = 879718, upload-time = "2025-10-02T13:25:09.62Z" }, + { url = "https://files.pythonhosted.org/packages/e9/ba/f522adf949d2b581a0a1e488a79577631ed6661fdc12e80d4182ed655036/rignore-0.7.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:f7f71807ed0bc1542860a8fa1615a0d93f3d5a22dde1066e9f50d7270bc60686", size = 810391, upload-time = "2025-10-02T13:24:58.144Z" }, + { url = "https://files.pythonhosted.org/packages/f2/82/935bffa4ad7d9560541daaca7ba0e4ee9b0b9a6370ab9518cf9c991087bb/rignore-0.7.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c7e6ff54399ddb650f4e4dc74b325766e7607967a49b868326e9687fc3642620", size = 950261, upload-time = "2025-10-02T13:24:45.121Z" }, + { url = "https://files.pythonhosted.org/packages/1e/0e/22abda23cc6d20901262fcfea50c25ed66ca6e1a5dc610d338df4ca10407/rignore-0.7.0-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:09dfad3ca450b3967533c6b1a2c7c0228c63c518f619ff342df5f9c3ed978b66", size = 974258, upload-time = "2025-10-02T13:24:32.44Z" }, + { url = "https://files.pythonhosted.org/packages/ed/8d/0ba2c712723fdda62125087d00dcdad93102876d4e3fa5adbb99f0b859c3/rignore-0.7.0-cp314-cp314-win32.whl", hash = "sha256:2850718cfb1caece6b7ac19a524c7905a8d0c6627b0d0f4e81798e20b6c75078", size = 637403, upload-time = "2025-10-02T13:26:41.814Z" }, + { url = "https://files.pythonhosted.org/packages/1c/63/0d7df1237c6353d1a85d8a0bc1797ac766c68e8bc6fbca241db74124eb61/rignore-0.7.0-cp314-cp314-win_amd64.whl", hash = "sha256:2401637dc8ab074f5e642295f8225d2572db395ae504ffc272a8d21e9fe77b2c", size = 717404, upload-time 
= "2025-10-02T13:26:29.936Z" }, { url = "https://files.pythonhosted.org/packages/2b/60/b02edbf5059f7947e375dc46583283aad579505e9e07775277e7fd6e04db/rignore-0.7.0-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f40142a34a7f08cd90fb4e74e43deffe3381fa3b164fb59857fa4e3996d4716d", size = 892600, upload-time = "2025-10-02T13:23:31.158Z" }, { url = "https://files.pythonhosted.org/packages/cf/c5/3caa7732a91623110bc80c30f592efc6571a1c610b94f36083601ebf2392/rignore-0.7.0-pp311-pypy311_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ccbc0b6285bb981316e5646ac96be7bca9665ee2444427d8d170fda5eda6f022", size = 866500, upload-time = "2025-10-02T13:23:49.099Z" }, { url = "https://files.pythonhosted.org/packages/8b/66/943300886972b2dded2e0e851c1da1ad36565d40b5e55833b049cbf9285b/rignore-0.7.0-pp311-pypy311_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:77cdf15a8b0ab80cd1d05a754b3237330e60e8731c255b7eb2a5d240a68df9f8", size = 1167255, upload-time = "2025-10-02T13:24:05.583Z" }, @@ -5541,6 +5893,35 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/86/e3/84507781cccd0145f35b1dc32c72675200c5ce8d5b30f813e49424ef68fc/rpds_py-0.27.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:dd2135527aa40f061350c3f8f89da2644de26cd73e4de458e79606384f4f68e7", size = 555300, upload-time = "2025-08-27T12:14:11.783Z" }, { url = "https://files.pythonhosted.org/packages/e5/ee/375469849e6b429b3516206b4580a79e9ef3eb12920ddbd4492b56eaacbe/rpds_py-0.27.1-cp313-cp313t-win32.whl", hash = "sha256:3020724ade63fe320a972e2ffd93b5623227e684315adce194941167fee02688", size = 216714, upload-time = "2025-08-27T12:14:13.629Z" }, { url = "https://files.pythonhosted.org/packages/21/87/3fc94e47c9bd0742660e84706c311a860dcae4374cf4a03c477e23ce605a/rpds_py-0.27.1-cp313-cp313t-win_amd64.whl", hash = "sha256:8ee50c3e41739886606388ba3ab3ee2aae9f35fb23f833091833255a31740797", size = 228943, upload-time = "2025-08-27T12:14:14.937Z" }, + { url = "https://files.pythonhosted.org/packages/70/36/b6e6066520a07cf029d385de869729a895917b411e777ab1cde878100a1d/rpds_py-0.27.1-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:acb9aafccaae278f449d9c713b64a9e68662e7799dbd5859e2c6b3c67b56d334", size = 362472, upload-time = "2025-08-27T12:14:16.333Z" }, + { url = "https://files.pythonhosted.org/packages/af/07/b4646032e0dcec0df9c73a3bd52f63bc6c5f9cda992f06bd0e73fe3fbebd/rpds_py-0.27.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:b7fb801aa7f845ddf601c49630deeeccde7ce10065561d92729bfe81bd21fb33", size = 345676, upload-time = "2025-08-27T12:14:17.764Z" }, + { url = "https://files.pythonhosted.org/packages/b0/16/2f1003ee5d0af4bcb13c0cf894957984c32a6751ed7206db2aee7379a55e/rpds_py-0.27.1-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fe0dd05afb46597b9a2e11c351e5e4283c741237e7f617ffb3252780cca9336a", size = 385313, upload-time = "2025-08-27T12:14:19.829Z" }, + { url = "https://files.pythonhosted.org/packages/05/cd/7eb6dd7b232e7f2654d03fa07f1414d7dfc980e82ba71e40a7c46fd95484/rpds_py-0.27.1-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:b6dfb0e058adb12d8b1d1b25f686e94ffa65d9995a5157afe99743bf7369d62b", size = 399080, upload-time = "2025-08-27T12:14:21.531Z" }, + { url = "https://files.pythonhosted.org/packages/20/51/5829afd5000ec1cb60f304711f02572d619040aa3ec033d8226817d1e571/rpds_py-0.27.1-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = 
"sha256:ed090ccd235f6fa8bb5861684567f0a83e04f52dfc2e5c05f2e4b1309fcf85e7", size = 523868, upload-time = "2025-08-27T12:14:23.485Z" }, + { url = "https://files.pythonhosted.org/packages/05/2c/30eebca20d5db95720ab4d2faec1b5e4c1025c473f703738c371241476a2/rpds_py-0.27.1-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bf876e79763eecf3e7356f157540d6a093cef395b65514f17a356f62af6cc136", size = 408750, upload-time = "2025-08-27T12:14:24.924Z" }, + { url = "https://files.pythonhosted.org/packages/90/1a/cdb5083f043597c4d4276eae4e4c70c55ab5accec078da8611f24575a367/rpds_py-0.27.1-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:12ed005216a51b1d6e2b02a7bd31885fe317e45897de81d86dcce7d74618ffff", size = 387688, upload-time = "2025-08-27T12:14:27.537Z" }, + { url = "https://files.pythonhosted.org/packages/7c/92/cf786a15320e173f945d205ab31585cc43969743bb1a48b6888f7a2b0a2d/rpds_py-0.27.1-cp314-cp314-manylinux_2_31_riscv64.whl", hash = "sha256:ee4308f409a40e50593c7e3bb8cbe0b4d4c66d1674a316324f0c2f5383b486f9", size = 407225, upload-time = "2025-08-27T12:14:28.981Z" }, + { url = "https://files.pythonhosted.org/packages/33/5c/85ee16df5b65063ef26017bef33096557a4c83fbe56218ac7cd8c235f16d/rpds_py-0.27.1-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0b08d152555acf1f455154d498ca855618c1378ec810646fcd7c76416ac6dc60", size = 423361, upload-time = "2025-08-27T12:14:30.469Z" }, + { url = "https://files.pythonhosted.org/packages/4b/8e/1c2741307fcabd1a334ecf008e92c4f47bb6f848712cf15c923becfe82bb/rpds_py-0.27.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:dce51c828941973a5684d458214d3a36fcd28da3e1875d659388f4f9f12cc33e", size = 562493, upload-time = "2025-08-27T12:14:31.987Z" }, + { url = "https://files.pythonhosted.org/packages/04/03/5159321baae9b2222442a70c1f988cbbd66b9be0675dd3936461269be360/rpds_py-0.27.1-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:c1476d6f29eb81aa4151c9a31219b03f1f798dc43d8af1250a870735516a1212", size = 592623, upload-time = "2025-08-27T12:14:33.543Z" }, + { url = "https://files.pythonhosted.org/packages/ff/39/c09fd1ad28b85bc1d4554a8710233c9f4cefd03d7717a1b8fbfd171d1167/rpds_py-0.27.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:3ce0cac322b0d69b63c9cdb895ee1b65805ec9ffad37639f291dd79467bee675", size = 558800, upload-time = "2025-08-27T12:14:35.436Z" }, + { url = "https://files.pythonhosted.org/packages/c5/d6/99228e6bbcf4baa764b18258f519a9035131d91b538d4e0e294313462a98/rpds_py-0.27.1-cp314-cp314-win32.whl", hash = "sha256:dfbfac137d2a3d0725758cd141f878bf4329ba25e34979797c89474a89a8a3a3", size = 221943, upload-time = "2025-08-27T12:14:36.898Z" }, + { url = "https://files.pythonhosted.org/packages/be/07/c802bc6b8e95be83b79bdf23d1aa61d68324cb1006e245d6c58e959e314d/rpds_py-0.27.1-cp314-cp314-win_amd64.whl", hash = "sha256:a6e57b0abfe7cc513450fcf529eb486b6e4d3f8aee83e92eb5f1ef848218d456", size = 233739, upload-time = "2025-08-27T12:14:38.386Z" }, + { url = "https://files.pythonhosted.org/packages/c8/89/3e1b1c16d4c2d547c5717377a8df99aee8099ff050f87c45cb4d5fa70891/rpds_py-0.27.1-cp314-cp314-win_arm64.whl", hash = "sha256:faf8d146f3d476abfee026c4ae3bdd9ca14236ae4e4c310cbd1cf75ba33d24a3", size = 223120, upload-time = "2025-08-27T12:14:39.82Z" }, + { url = "https://files.pythonhosted.org/packages/62/7e/dc7931dc2fa4a6e46b2a4fa744a9fe5c548efd70e0ba74f40b39fa4a8c10/rpds_py-0.27.1-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:ba81d2b56b6d4911ce735aad0a1d4495e808b8ee4dc58715998741a26874e7c2", size = 358944, 
upload-time = "2025-08-27T12:14:41.199Z" }, + { url = "https://files.pythonhosted.org/packages/e6/22/4af76ac4e9f336bfb1a5f240d18a33c6b2fcaadb7472ac7680576512b49a/rpds_py-0.27.1-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:84f7d509870098de0e864cad0102711c1e24e9b1a50ee713b65928adb22269e4", size = 342283, upload-time = "2025-08-27T12:14:42.699Z" }, + { url = "https://files.pythonhosted.org/packages/1c/15/2a7c619b3c2272ea9feb9ade67a45c40b3eeb500d503ad4c28c395dc51b4/rpds_py-0.27.1-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a9e960fc78fecd1100539f14132425e1d5fe44ecb9239f8f27f079962021523e", size = 380320, upload-time = "2025-08-27T12:14:44.157Z" }, + { url = "https://files.pythonhosted.org/packages/a2/7d/4c6d243ba4a3057e994bb5bedd01b5c963c12fe38dde707a52acdb3849e7/rpds_py-0.27.1-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:62f85b665cedab1a503747617393573995dac4600ff51869d69ad2f39eb5e817", size = 391760, upload-time = "2025-08-27T12:14:45.845Z" }, + { url = "https://files.pythonhosted.org/packages/b4/71/b19401a909b83bcd67f90221330bc1ef11bc486fe4e04c24388d28a618ae/rpds_py-0.27.1-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fed467af29776f6556250c9ed85ea5a4dd121ab56a5f8b206e3e7a4c551e48ec", size = 522476, upload-time = "2025-08-27T12:14:47.364Z" }, + { url = "https://files.pythonhosted.org/packages/e4/44/1a3b9715c0455d2e2f0f6df5ee6d6f5afdc423d0773a8a682ed2b43c566c/rpds_py-0.27.1-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f2729615f9d430af0ae6b36cf042cb55c0936408d543fb691e1a9e36648fd35a", size = 403418, upload-time = "2025-08-27T12:14:49.991Z" }, + { url = "https://files.pythonhosted.org/packages/1c/4b/fb6c4f14984eb56673bc868a66536f53417ddb13ed44b391998100a06a96/rpds_py-0.27.1-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b207d881a9aef7ba753d69c123a35d96ca7cb808056998f6b9e8747321f03b8", size = 384771, upload-time = "2025-08-27T12:14:52.159Z" }, + { url = "https://files.pythonhosted.org/packages/c0/56/d5265d2d28b7420d7b4d4d85cad8ef891760f5135102e60d5c970b976e41/rpds_py-0.27.1-cp314-cp314t-manylinux_2_31_riscv64.whl", hash = "sha256:639fd5efec029f99b79ae47e5d7e00ad8a773da899b6309f6786ecaf22948c48", size = 400022, upload-time = "2025-08-27T12:14:53.859Z" }, + { url = "https://files.pythonhosted.org/packages/8f/e9/9f5fc70164a569bdd6ed9046486c3568d6926e3a49bdefeeccfb18655875/rpds_py-0.27.1-cp314-cp314t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:fecc80cb2a90e28af8a9b366edacf33d7a91cbfe4c2c4544ea1246e949cfebeb", size = 416787, upload-time = "2025-08-27T12:14:55.673Z" }, + { url = "https://files.pythonhosted.org/packages/d4/64/56dd03430ba491db943a81dcdef115a985aac5f44f565cd39a00c766d45c/rpds_py-0.27.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:42a89282d711711d0a62d6f57d81aa43a1368686c45bc1c46b7f079d55692734", size = 557538, upload-time = "2025-08-27T12:14:57.245Z" }, + { url = "https://files.pythonhosted.org/packages/3f/36/92cc885a3129993b1d963a2a42ecf64e6a8e129d2c7cc980dbeba84e55fb/rpds_py-0.27.1-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:cf9931f14223de59551ab9d38ed18d92f14f055a5f78c1d8ad6493f735021bbb", size = 588512, upload-time = "2025-08-27T12:14:58.728Z" }, + { url = "https://files.pythonhosted.org/packages/dd/10/6b283707780a81919f71625351182b4f98932ac89a09023cb61865136244/rpds_py-0.27.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = 
"sha256:f39f58a27cc6e59f432b568ed8429c7e1641324fbe38131de852cd77b2d534b0", size = 555813, upload-time = "2025-08-27T12:15:00.334Z" }, + { url = "https://files.pythonhosted.org/packages/04/2e/30b5ea18c01379da6272a92825dd7e53dc9d15c88a19e97932d35d430ef7/rpds_py-0.27.1-cp314-cp314t-win32.whl", hash = "sha256:d5fa0ee122dc09e23607a28e6d7b150da16c662e66409bbe85230e4c85bb528a", size = 217385, upload-time = "2025-08-27T12:15:01.937Z" }, + { url = "https://files.pythonhosted.org/packages/32/7d/97119da51cb1dd3f2f3c0805f155a3aa4a95fa44fe7d78ae15e69edf4f34/rpds_py-0.27.1-cp314-cp314t-win_amd64.whl", hash = "sha256:6567d2bb951e21232c2f660c24cf3470bb96de56cdcb3f071a83feeaff8a2772", size = 230097, upload-time = "2025-08-27T12:15:03.961Z" }, { url = "https://files.pythonhosted.org/packages/0c/ed/e1fba02de17f4f76318b834425257c8ea297e415e12c68b4361f63e8ae92/rpds_py-0.27.1-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:cdfe4bb2f9fe7458b7453ad3c33e726d6d1c7c0a72960bcc23800d77384e42df", size = 371402, upload-time = "2025-08-27T12:15:51.561Z" }, { url = "https://files.pythonhosted.org/packages/af/7c/e16b959b316048b55585a697e94add55a4ae0d984434d279ea83442e460d/rpds_py-0.27.1-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:8fabb8fd848a5f75a2324e4a84501ee3a5e3c78d8603f83475441866e60b94a3", size = 354084, upload-time = "2025-08-27T12:15:53.219Z" }, { url = "https://files.pythonhosted.org/packages/de/c1/ade645f55de76799fdd08682d51ae6724cb46f318573f18be49b1e040428/rpds_py-0.27.1-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eda8719d598f2f7f3e0f885cba8646644b55a187762bec091fa14a2b819746a9", size = 383090, upload-time = "2025-08-27T12:15:55.158Z" }, @@ -5581,7 +5962,7 @@ name = "ruamel-yaml" version = "0.18.15" source = { registry = "https://pypi.org/simple" } dependencies = [ - { name = "ruamel-yaml-clib", marker = "platform_python_implementation == 'CPython'" }, + { name = "ruamel-yaml-clib", marker = "python_full_version < '3.14' and platform_python_implementation == 'CPython'" }, ] sdist = { url = "https://files.pythonhosted.org/packages/3e/db/f3950f5e5031b618aae9f423a39bf81a55c148aecd15a34527898e752cf4/ruamel.yaml-0.18.15.tar.gz", hash = "sha256:dbfca74b018c4c3fba0b9cc9ee33e53c371194a9000e694995e620490fd40700", size = 146865, upload-time = "2025-08-19T11:15:10.694Z" } wheels = [ @@ -5624,32 +6005,36 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/3d/ac/3c5c2b27a183f4fda8a57c82211721c016bcb689a4a175865f7646db9f94/ruamel.yaml.clib-0.2.14-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:b30110b29484adc597df6bd92a37b90e63a8c152ca8136aad100a02f8ba6d1b6", size = 765196, upload-time = "2025-09-22T19:51:05.916Z" }, { url = "https://files.pythonhosted.org/packages/92/2e/06f56a71fd55021c993ed6e848c9b2e5e9cfce180a42179f0ddd28253f7c/ruamel.yaml.clib-0.2.14-cp313-cp313-win32.whl", hash = "sha256:f4e97a1cf0b7a30af9e1d9dad10a5671157b9acee790d9e26996391f49b965a2", size = 98635, upload-time = "2025-09-22T19:51:08.183Z" }, { url = "https://files.pythonhosted.org/packages/51/79/76aba16a1689b50528224b182f71097ece338e7a4ab55e84c2e73443b78a/ruamel.yaml.clib-0.2.14-cp313-cp313-win_amd64.whl", hash = "sha256:090782b5fb9d98df96509eecdbcaffd037d47389a89492320280d52f91330d78", size = 115238, upload-time = "2025-09-22T19:51:07.081Z" }, + { url = "https://files.pythonhosted.org/packages/21/e2/a59ff65c26aaf21a24eb38df777cb9af5d87ba8fc8107c163c2da9d1e85e/ruamel.yaml.clib-0.2.14-cp314-cp314-macosx_10_15_universal2.whl", hash = 
"sha256:7df6f6e9d0e33c7b1d435defb185095386c469109de723d514142632a7b9d07f", size = 271441, upload-time = "2025-09-23T14:24:16.498Z" }, + { url = "https://files.pythonhosted.org/packages/6b/fa/3234f913fe9a6525a7b97c6dad1f51e72b917e6872e051a5e2ffd8b16fbb/ruamel.yaml.clib-0.2.14-cp314-cp314-macosx_15_0_arm64.whl", hash = "sha256:70eda7703b8126f5e52fcf276e6c0f40b0d314674f896fc58c47b0aef2b9ae83", size = 137970, upload-time = "2025-09-22T19:51:09.472Z" }, + { url = "https://files.pythonhosted.org/packages/ef/ec/4edbf17ac2c87fa0845dd366ef8d5852b96eb58fcd65fc1ecf5fe27b4641/ruamel.yaml.clib-0.2.14-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:a0cb71ccc6ef9ce36eecb6272c81afdc2f565950cdcec33ae8e6cd8f7fc86f27", size = 739639, upload-time = "2025-09-22T19:51:10.566Z" }, + { url = "https://files.pythonhosted.org/packages/15/18/b0e1fafe59051de9e79cdd431863b03593ecfa8341c110affad7c8121efc/ruamel.yaml.clib-0.2.14-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:e7cb9ad1d525d40f7d87b6df7c0ff916a66bc52cb61b66ac1b2a16d0c1b07640", size = 764456, upload-time = "2025-09-22T19:51:11.736Z" }, ] [[package]] name = "ruff" -version = "0.14.3" +version = "0.14.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/75/62/50b7727004dfe361104dfbf898c45a9a2fdfad8c72c04ae62900224d6ecf/ruff-0.14.3.tar.gz", hash = "sha256:4ff876d2ab2b161b6de0aa1f5bd714e8e9b4033dc122ee006925fbacc4f62153", size = 5558687, upload-time = "2025-10-31T00:26:26.878Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/ce/8e/0c10ff1ea5d4360ab8bfca4cb2c9d979101a391f3e79d2616c9bf348cd26/ruff-0.14.3-py3-none-linux_armv6l.whl", hash = "sha256:876b21e6c824f519446715c1342b8e60f97f93264012de9d8d10314f8a79c371", size = 12535613, upload-time = "2025-10-31T00:25:44.302Z" }, - { url = "https://files.pythonhosted.org/packages/d3/c8/6724f4634c1daf52409fbf13fefda64aa9c8f81e44727a378b7b73dc590b/ruff-0.14.3-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b6fd8c79b457bedd2abf2702b9b472147cd860ed7855c73a5247fa55c9117654", size = 12855812, upload-time = "2025-10-31T00:25:47.793Z" }, - { url = "https://files.pythonhosted.org/packages/de/03/db1bce591d55fd5f8a08bb02517fa0b5097b2ccabd4ea1ee29aa72b67d96/ruff-0.14.3-py3-none-macosx_11_0_arm64.whl", hash = "sha256:71ff6edca490c308f083156938c0c1a66907151263c4abdcb588602c6e696a14", size = 11944026, upload-time = "2025-10-31T00:25:49.657Z" }, - { url = "https://files.pythonhosted.org/packages/0b/75/4f8dbd48e03272715d12c87dc4fcaaf21b913f0affa5f12a4e9c6f8a0582/ruff-0.14.3-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:786ee3ce6139772ff9272aaf43296d975c0217ee1b97538a98171bf0d21f87ed", size = 12356818, upload-time = "2025-10-31T00:25:51.949Z" }, - { url = "https://files.pythonhosted.org/packages/ec/9b/506ec5b140c11d44a9a4f284ea7c14ebf6f8b01e6e8917734a3325bff787/ruff-0.14.3-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:cd6291d0061811c52b8e392f946889916757610d45d004e41140d81fb6cd5ddc", size = 12336745, upload-time = "2025-10-31T00:25:54.248Z" }, - { url = "https://files.pythonhosted.org/packages/c7/e1/c560d254048c147f35e7f8131d30bc1f63a008ac61595cf3078a3e93533d/ruff-0.14.3-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a497ec0c3d2c88561b6d90f9c29f5ae68221ac00d471f306fa21fa4264ce5fcd", size = 13101684, upload-time = "2025-10-31T00:25:56.253Z" }, - { url = 
"https://files.pythonhosted.org/packages/a5/32/e310133f8af5cd11f8cc30f52522a3ebccc5ea5bff4b492f94faceaca7a8/ruff-0.14.3-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:e231e1be58fc568950a04fbe6887c8e4b85310e7889727e2b81db205c45059eb", size = 14535000, upload-time = "2025-10-31T00:25:58.397Z" }, - { url = "https://files.pythonhosted.org/packages/a2/a1/7b0470a22158c6d8501eabc5e9b6043c99bede40fa1994cadf6b5c2a61c7/ruff-0.14.3-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:469e35872a09c0e45fecf48dd960bfbce056b5db2d5e6b50eca329b4f853ae20", size = 14156450, upload-time = "2025-10-31T00:26:00.889Z" }, - { url = "https://files.pythonhosted.org/packages/0a/96/24bfd9d1a7f532b560dcee1a87096332e461354d3882124219bcaff65c09/ruff-0.14.3-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3d6bc90307c469cb9d28b7cfad90aaa600b10d67c6e22026869f585e1e8a2db0", size = 13568414, upload-time = "2025-10-31T00:26:03.291Z" }, - { url = "https://files.pythonhosted.org/packages/a7/e7/138b883f0dfe4ad5b76b58bf4ae675f4d2176ac2b24bdd81b4d966b28c61/ruff-0.14.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2f8a0bbcffcfd895df39c9a4ecd59bb80dca03dc43f7fb63e647ed176b741e", size = 13315293, upload-time = "2025-10-31T00:26:05.708Z" }, - { url = "https://files.pythonhosted.org/packages/33/f4/c09bb898be97b2eb18476b7c950df8815ef14cf956074177e9fbd40b7719/ruff-0.14.3-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:678fdd7c7d2d94851597c23ee6336d25f9930b460b55f8598e011b57c74fd8c5", size = 13539444, upload-time = "2025-10-31T00:26:08.09Z" }, - { url = "https://files.pythonhosted.org/packages/9c/aa/b30a1db25fc6128b1dd6ff0741fa4abf969ded161599d07ca7edd0739cc0/ruff-0.14.3-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:1ec1ac071e7e37e0221d2f2dbaf90897a988c531a8592a6a5959f0603a1ecf5e", size = 12252581, upload-time = "2025-10-31T00:26:10.297Z" }, - { url = "https://files.pythonhosted.org/packages/da/13/21096308f384d796ffe3f2960b17054110a9c3828d223ca540c2b7cc670b/ruff-0.14.3-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:afcdc4b5335ef440d19e7df9e8ae2ad9f749352190e96d481dc501b753f0733e", size = 12307503, upload-time = "2025-10-31T00:26:12.646Z" }, - { url = "https://files.pythonhosted.org/packages/cb/cc/a350bac23f03b7dbcde3c81b154706e80c6f16b06ff1ce28ed07dc7b07b0/ruff-0.14.3-py3-none-musllinux_1_2_i686.whl", hash = "sha256:7bfc42f81862749a7136267a343990f865e71fe2f99cf8d2958f684d23ce3dfa", size = 12675457, upload-time = "2025-10-31T00:26:15.044Z" }, - { url = "https://files.pythonhosted.org/packages/cb/76/46346029fa2f2078826bc88ef7167e8c198e58fe3126636e52f77488cbba/ruff-0.14.3-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:a65e448cfd7e9c59fae8cf37f9221585d3354febaad9a07f29158af1528e165f", size = 13403980, upload-time = "2025-10-31T00:26:17.81Z" }, - { url = "https://files.pythonhosted.org/packages/9f/a4/35f1ef68c4e7b236d4a5204e3669efdeefaef21f0ff6a456792b3d8be438/ruff-0.14.3-py3-none-win32.whl", hash = "sha256:f3d91857d023ba93e14ed2d462ab62c3428f9bbf2b4fbac50a03ca66d31991f7", size = 12500045, upload-time = "2025-10-31T00:26:20.503Z" }, - { url = "https://files.pythonhosted.org/packages/03/15/51960ae340823c9859fb60c63301d977308735403e2134e17d1d2858c7fb/ruff-0.14.3-py3-none-win_amd64.whl", hash = "sha256:d7b7006ac0756306db212fd37116cce2bd307e1e109375e1c6c106002df0ae5f", size = 13594005, upload-time = "2025-10-31T00:26:22.533Z" }, - { url = 
"https://files.pythonhosted.org/packages/b7/73/4de6579bac8e979fca0a77e54dec1f1e011a0d268165eb8a9bc0982a6564/ruff-0.14.3-py3-none-win_arm64.whl", hash = "sha256:26eb477ede6d399d898791d01961e16b86f02bc2486d0d1a7a9bb2379d055dc1", size = 12590017, upload-time = "2025-10-31T00:26:24.52Z" }, +sdist = { url = "https://files.pythonhosted.org/packages/41/b9/9bd84453ed6dd04688de9b3f3a4146a1698e8faae2ceeccce4e14c67ae17/ruff-0.14.0.tar.gz", hash = "sha256:62ec8969b7510f77945df916de15da55311fade8d6050995ff7f680afe582c57", size = 5452071, upload-time = "2025-10-07T18:21:55.763Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/3a/4e/79d463a5f80654e93fa653ebfb98e0becc3f0e7cf6219c9ddedf1e197072/ruff-0.14.0-py3-none-linux_armv6l.whl", hash = "sha256:58e15bffa7054299becf4bab8a1187062c6f8cafbe9f6e39e0d5aface455d6b3", size = 12494532, upload-time = "2025-10-07T18:21:00.373Z" }, + { url = "https://files.pythonhosted.org/packages/ee/40/e2392f445ed8e02aa6105d49db4bfff01957379064c30f4811c3bf38aece/ruff-0.14.0-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:838d1b065f4df676b7c9957992f2304e41ead7a50a568185efd404297d5701e8", size = 13160768, upload-time = "2025-10-07T18:21:04.73Z" }, + { url = "https://files.pythonhosted.org/packages/75/da/2a656ea7c6b9bd14c7209918268dd40e1e6cea65f4bb9880eaaa43b055cd/ruff-0.14.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:703799d059ba50f745605b04638fa7e9682cc3da084b2092feee63500ff3d9b8", size = 12363376, upload-time = "2025-10-07T18:21:07.833Z" }, + { url = "https://files.pythonhosted.org/packages/42/e2/1ffef5a1875add82416ff388fcb7ea8b22a53be67a638487937aea81af27/ruff-0.14.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3ba9a8925e90f861502f7d974cc60e18ca29c72bb0ee8bfeabb6ade35a3abde7", size = 12608055, upload-time = "2025-10-07T18:21:10.72Z" }, + { url = "https://files.pythonhosted.org/packages/4a/32/986725199d7cee510d9f1dfdf95bf1efc5fa9dd714d0d85c1fb1f6be3bc3/ruff-0.14.0-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e41f785498bd200ffc276eb9e1570c019c1d907b07cfb081092c8ad51975bbe7", size = 12318544, upload-time = "2025-10-07T18:21:13.741Z" }, + { url = "https://files.pythonhosted.org/packages/9a/ed/4969cefd53315164c94eaf4da7cfba1f267dc275b0abdd593d11c90829a3/ruff-0.14.0-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:30a58c087aef4584c193aebf2700f0fbcfc1e77b89c7385e3139956fa90434e2", size = 14001280, upload-time = "2025-10-07T18:21:16.411Z" }, + { url = "https://files.pythonhosted.org/packages/ab/ad/96c1fc9f8854c37681c9613d825925c7f24ca1acfc62a4eb3896b50bacd2/ruff-0.14.0-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:f8d07350bc7af0a5ce8812b7d5c1a7293cf02476752f23fdfc500d24b79b783c", size = 15027286, upload-time = "2025-10-07T18:21:19.577Z" }, + { url = "https://files.pythonhosted.org/packages/b3/00/1426978f97df4fe331074baf69615f579dc4e7c37bb4c6f57c2aad80c87f/ruff-0.14.0-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:eec3bbbf3a7d5482b5c1f42d5fc972774d71d107d447919fca620b0be3e3b75e", size = 14451506, upload-time = "2025-10-07T18:21:22.779Z" }, + { url = "https://files.pythonhosted.org/packages/58/d5/9c1cea6e493c0cf0647674cca26b579ea9d2a213b74b5c195fbeb9678e15/ruff-0.14.0-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:16b68e183a0e28e5c176d51004aaa40559e8f90065a10a559176713fcf435206", size = 13437384, upload-time = "2025-10-07T18:21:25.758Z" }, + { url = 
"https://files.pythonhosted.org/packages/29/b4/4cd6a4331e999fc05d9d77729c95503f99eae3ba1160469f2b64866964e3/ruff-0.14.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eb732d17db2e945cfcbbc52af0143eda1da36ca8ae25083dd4f66f1542fdf82e", size = 13447976, upload-time = "2025-10-07T18:21:28.83Z" }, + { url = "https://files.pythonhosted.org/packages/3b/c0/ac42f546d07e4f49f62332576cb845d45c67cf5610d1851254e341d563b6/ruff-0.14.0-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:c958f66ab884b7873e72df38dcabee03d556a8f2ee1b8538ee1c2bbd619883dd", size = 13682850, upload-time = "2025-10-07T18:21:31.842Z" }, + { url = "https://files.pythonhosted.org/packages/5f/c4/4b0c9bcadd45b4c29fe1af9c5d1dc0ca87b4021665dfbe1c4688d407aa20/ruff-0.14.0-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:7eb0499a2e01f6e0c285afc5bac43ab380cbfc17cd43a2e1dd10ec97d6f2c42d", size = 12449825, upload-time = "2025-10-07T18:21:35.074Z" }, + { url = "https://files.pythonhosted.org/packages/4b/a8/e2e76288e6c16540fa820d148d83e55f15e994d852485f221b9524514730/ruff-0.14.0-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:4c63b2d99fafa05efca0ab198fd48fa6030d57e4423df3f18e03aa62518c565f", size = 12272599, upload-time = "2025-10-07T18:21:38.08Z" }, + { url = "https://files.pythonhosted.org/packages/18/14/e2815d8eff847391af632b22422b8207704222ff575dec8d044f9ab779b2/ruff-0.14.0-py3-none-musllinux_1_2_i686.whl", hash = "sha256:668fce701b7a222f3f5327f86909db2bbe99c30877c8001ff934c5413812ac02", size = 13193828, upload-time = "2025-10-07T18:21:41.216Z" }, + { url = "https://files.pythonhosted.org/packages/44/c6/61ccc2987cf0aecc588ff8f3212dea64840770e60d78f5606cd7dc34de32/ruff-0.14.0-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:a86bf575e05cb68dcb34e4c7dfe1064d44d3f0c04bbc0491949092192b515296", size = 13628617, upload-time = "2025-10-07T18:21:44.04Z" }, + { url = "https://files.pythonhosted.org/packages/73/e6/03b882225a1b0627e75339b420883dc3c90707a8917d2284abef7a58d317/ruff-0.14.0-py3-none-win32.whl", hash = "sha256:7450a243d7125d1c032cb4b93d9625dea46c8c42b4f06c6b709baac168e10543", size = 12367872, upload-time = "2025-10-07T18:21:46.67Z" }, + { url = "https://files.pythonhosted.org/packages/41/77/56cf9cf01ea0bfcc662de72540812e5ba8e9563f33ef3d37ab2174892c47/ruff-0.14.0-py3-none-win_amd64.whl", hash = "sha256:ea95da28cd874c4d9c922b39381cbd69cb7e7b49c21b8152b014bd4f52acddc2", size = 13464628, upload-time = "2025-10-07T18:21:50.318Z" }, + { url = "https://files.pythonhosted.org/packages/c6/2a/65880dfd0e13f7f13a775998f34703674a4554906167dce02daf7865b954/ruff-0.14.0-py3-none-win_arm64.whl", hash = "sha256:f42c9495f5c13ff841b1da4cb3c2a42075409592825dada7c5885c2c844ac730", size = 12565142, upload-time = "2025-10-07T18:21:53.577Z" }, ] [[package]] @@ -5747,15 +6132,15 @@ wheels = [ [[package]] name = "sentry-sdk" -version = "2.43.0" +version = "2.40.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "certifi" }, { name = "urllib3" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/b3/18/09875b4323b03ca9025bae7e6539797b27e4fc032998a466b4b9c3d24653/sentry_sdk-2.43.0.tar.gz", hash = "sha256:52ed6e251c5d2c084224d73efee56b007ef5c2d408a4a071270e82131d336e20", size = 368953, upload-time = "2025-10-29T11:26:08.156Z" } +sdist = { url = "https://files.pythonhosted.org/packages/4f/b5/ce879ce3292e5ca41fa3ebf68f60645032eca813c9ed8f92dcf09804c0e3/sentry_sdk-2.40.0.tar.gz", hash = "sha256:b9c4672fb2cafabcc28586ab8fd0ceeff9b2352afcf2b936e13d5ba06d141b9f", size = 351703, upload-time = 
"2025-10-06T12:27:29.207Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/69/31/8228fa962f7fd8814d634e4ebece8780e2cdcfbdf0cd2e14d4a6861a7cd5/sentry_sdk-2.43.0-py2.py3-none-any.whl", hash = "sha256:4aacafcf1756ef066d359ae35030881917160ba7f6fc3ae11e0e58b09edc2d5d", size = 400997, upload-time = "2025-10-29T11:26:05.77Z" }, + { url = "https://files.pythonhosted.org/packages/a4/d1/a54bd3622c6e742e6a01bc3bac45966b7ba886e29827da6b8ca7ae234e21/sentry_sdk-2.40.0-py2.py3-none-any.whl", hash = "sha256:d5f6ae0f27ea73e7b09c70ad7d42242326eb44765e87a15d8c5aab96b80013e6", size = 374747, upload-time = "2025-10-06T12:27:27.051Z" }, ] [[package]] @@ -5808,6 +6193,22 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/96/b3/c6655ee7232b417562bae192ae0d3ceaadb1cc0ffc2088a2ddf415456cc2/shapely-2.1.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:6305993a35989391bd3476ee538a5c9a845861462327efe00dd11a5c8c709a99", size = 4170078, upload-time = "2025-09-24T13:51:08.584Z" }, { url = "https://files.pythonhosted.org/packages/a0/8e/605c76808d73503c9333af8f6cbe7e1354d2d238bda5f88eea36bfe0f42a/shapely-2.1.2-cp313-cp313t-win32.whl", hash = "sha256:c8876673449f3401f278c86eb33224c5764582f72b653a415d0e6672fde887bf", size = 1559178, upload-time = "2025-09-24T13:51:10.73Z" }, { url = "https://files.pythonhosted.org/packages/36/f7/d317eb232352a1f1444d11002d477e54514a4a6045536d49d0c59783c0da/shapely-2.1.2-cp313-cp313t-win_amd64.whl", hash = "sha256:4a44bc62a10d84c11a7a3d7c1c4fe857f7477c3506e24c9062da0db0ae0c449c", size = 1739756, upload-time = "2025-09-24T13:51:12.105Z" }, + { url = "https://files.pythonhosted.org/packages/fc/c4/3ce4c2d9b6aabd27d26ec988f08cb877ba9e6e96086eff81bfea93e688c7/shapely-2.1.2-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:9a522f460d28e2bf4e12396240a5fc1518788b2fcd73535166d748399ef0c223", size = 1831290, upload-time = "2025-09-24T13:51:13.56Z" }, + { url = "https://files.pythonhosted.org/packages/17/b9/f6ab8918fc15429f79cb04afa9f9913546212d7fb5e5196132a2af46676b/shapely-2.1.2-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1ff629e00818033b8d71139565527ced7d776c269a49bd78c9df84e8f852190c", size = 1641463, upload-time = "2025-09-24T13:51:14.972Z" }, + { url = "https://files.pythonhosted.org/packages/a5/57/91d59ae525ca641e7ac5551c04c9503aee6f29b92b392f31790fcb1a4358/shapely-2.1.2-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:f67b34271dedc3c653eba4e3d7111aa421d5be9b4c4c7d38d30907f796cb30df", size = 2970145, upload-time = "2025-09-24T13:51:16.961Z" }, + { url = "https://files.pythonhosted.org/packages/8a/cb/4948be52ee1da6927831ab59e10d4c29baa2a714f599f1f0d1bc747f5777/shapely-2.1.2-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:21952dc00df38a2c28375659b07a3979d22641aeb104751e769c3ee825aadecf", size = 3073806, upload-time = "2025-09-24T13:51:18.712Z" }, + { url = "https://files.pythonhosted.org/packages/03/83/f768a54af775eb41ef2e7bec8a0a0dbe7d2431c3e78c0a8bdba7ab17e446/shapely-2.1.2-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:1f2f33f486777456586948e333a56ae21f35ae273be99255a191f5c1fa302eb4", size = 3980803, upload-time = "2025-09-24T13:51:20.37Z" }, + { url = "https://files.pythonhosted.org/packages/9f/cb/559c7c195807c91c79d38a1f6901384a2878a76fbdf3f1048893a9b7534d/shapely-2.1.2-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:cf831a13e0d5a7eb519e96f58ec26e049b1fad411fc6fc23b162a7ce04d9cffc", size = 4133301, upload-time = "2025-09-24T13:51:21.887Z" }, + { url = 
"https://files.pythonhosted.org/packages/80/cd/60d5ae203241c53ef3abd2ef27c6800e21afd6c94e39db5315ea0cbafb4a/shapely-2.1.2-cp314-cp314-win32.whl", hash = "sha256:61edcd8d0d17dd99075d320a1dd39c0cb9616f7572f10ef91b4b5b00c4aeb566", size = 1583247, upload-time = "2025-09-24T13:51:23.401Z" }, + { url = "https://files.pythonhosted.org/packages/74/d4/135684f342e909330e50d31d441ace06bf83c7dc0777e11043f99167b123/shapely-2.1.2-cp314-cp314-win_amd64.whl", hash = "sha256:a444e7afccdb0999e203b976adb37ea633725333e5b119ad40b1ca291ecf311c", size = 1773019, upload-time = "2025-09-24T13:51:24.873Z" }, + { url = "https://files.pythonhosted.org/packages/a3/05/a44f3f9f695fa3ada22786dc9da33c933da1cbc4bfe876fe3a100bafe263/shapely-2.1.2-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:5ebe3f84c6112ad3d4632b1fd2290665aa75d4cef5f6c5d77c4c95b324527c6a", size = 1834137, upload-time = "2025-09-24T13:51:26.665Z" }, + { url = "https://files.pythonhosted.org/packages/52/7e/4d57db45bf314573427b0a70dfca15d912d108e6023f623947fa69f39b72/shapely-2.1.2-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:5860eb9f00a1d49ebb14e881f5caf6c2cf472c7fd38bd7f253bbd34f934eb076", size = 1642884, upload-time = "2025-09-24T13:51:28.029Z" }, + { url = "https://files.pythonhosted.org/packages/5a/27/4e29c0a55d6d14ad7422bf86995d7ff3f54af0eba59617eb95caf84b9680/shapely-2.1.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b705c99c76695702656327b819c9660768ec33f5ce01fa32b2af62b56ba400a1", size = 3018320, upload-time = "2025-09-24T13:51:29.903Z" }, + { url = "https://files.pythonhosted.org/packages/9f/bb/992e6a3c463f4d29d4cd6ab8963b75b1b1040199edbd72beada4af46bde5/shapely-2.1.2-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:a1fd0ea855b2cf7c9cddaf25543e914dd75af9de08785f20ca3085f2c9ca60b0", size = 3094931, upload-time = "2025-09-24T13:51:32.699Z" }, + { url = "https://files.pythonhosted.org/packages/9c/16/82e65e21070e473f0ed6451224ed9fa0be85033d17e0c6e7213a12f59d12/shapely-2.1.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:df90e2db118c3671a0754f38e36802db75fe0920d211a27481daf50a711fdf26", size = 4030406, upload-time = "2025-09-24T13:51:34.189Z" }, + { url = "https://files.pythonhosted.org/packages/7c/75/c24ed871c576d7e2b64b04b1fe3d075157f6eb54e59670d3f5ffb36e25c7/shapely-2.1.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:361b6d45030b4ac64ddd0a26046906c8202eb60d0f9f53085f5179f1d23021a0", size = 4169511, upload-time = "2025-09-24T13:51:36.297Z" }, + { url = "https://files.pythonhosted.org/packages/b1/f7/b3d1d6d18ebf55236eec1c681ce5e665742aab3c0b7b232720a7d43df7b6/shapely-2.1.2-cp314-cp314t-win32.whl", hash = "sha256:b54df60f1fbdecc8ebc2c5b11870461a6417b3d617f555e5033f1505d36e5735", size = 1602607, upload-time = "2025-09-24T13:51:37.757Z" }, + { url = "https://files.pythonhosted.org/packages/9a/f6/f09272a71976dfc138129b8faf435d064a811ae2f708cb147dccdf7aacdb/shapely-2.1.2-cp314-cp314t-win_amd64.whl", hash = "sha256:0036ac886e0923417932c2e6369b6c52e38e0ff5d9120b90eef5cd9a5fc5cae9", size = 1796682, upload-time = "2025-09-24T13:51:39.233Z" }, ] [[package]] @@ -5940,14 +6341,14 @@ wheels = [ [[package]] name = "sphinx-autodoc-typehints" -version = "3.5.1" +version = "3.2.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "sphinx" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/04/da/40d1ac3a657d967a8d7024d730eb5e29057a2f998f8c5f3df9c2e33917f0/sphinx_autodoc_typehints-3.5.1.tar.gz", hash = 
"sha256:6114bc788d7b5118712b4f163e0c693e3828c552296007ff1a60ba1606b04718", size = 37620, upload-time = "2025-10-09T18:38:29.378Z" } +sdist = { url = "https://files.pythonhosted.org/packages/93/68/a388a9b8f066cd865d9daa65af589d097efbfab9a8c302d2cb2daa43b52e/sphinx_autodoc_typehints-3.2.0.tar.gz", hash = "sha256:107ac98bc8b4837202c88c0736d59d6da44076e65a0d7d7d543a78631f662a9b", size = 36724, upload-time = "2025-04-25T16:53:25.872Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/51/7e/a391c2217fb842e9bd67854311342f298907ffbc809d712f48d2a2938f56/sphinx_autodoc_typehints-3.5.1-py3-none-any.whl", hash = "sha256:e5c61ae50d0b311129a1f0091c9c25075a5734d575549cbd1b9d9f8a3fc35cfb", size = 21087, upload-time = "2025-10-09T18:38:27.843Z" }, + { url = "https://files.pythonhosted.org/packages/f7/c7/8aab362e86cbf887e58be749a78d20ad743e1eb2c73c2b13d4761f39a104/sphinx_autodoc_typehints-3.2.0-py3-none-any.whl", hash = "sha256:884b39be23b1d884dcc825d4680c9c6357a476936e3b381a67ae80091984eb49", size = 20563, upload-time = "2025-04-25T16:53:24.492Z" }, ] [[package]] @@ -6082,7 +6483,7 @@ wheels = [ [[package]] name = "sphinx-toolbox" -version = "3.10.0" +version = "4.0.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "apeye" }, @@ -6103,9 +6504,9 @@ dependencies = [ { name = "tabulate" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/f4/2d/d916dc5a70bc7b006af8a31bba1a2767e99cdb884f3dfa47aa79a60cc1e9/sphinx_toolbox-3.10.0.tar.gz", hash = "sha256:6afea9ac9afabe76bd5bd4d2b01edfdad81d653a1a34768e776e6a56d5a6f572", size = 113656, upload-time = "2025-05-06T17:36:50.926Z" } +sdist = { url = "https://files.pythonhosted.org/packages/60/d2/fd68940102a02cbff392b91317618e0f87458e98a9684c0f74b1c58d4e49/sphinx_toolbox-4.0.0.tar.gz", hash = "sha256:48c31451db2e2d8c71c03939e72a19ef7bc92ca7850a62db63fc7bb8395b6785", size = 113819, upload-time = "2025-05-12T17:11:39.104Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/12/ec/d09521ae2059fe89d8b59b2b34f5a1713b82a14e70a9a018fca8d3d514be/sphinx_toolbox-3.10.0-py3-none-any.whl", hash = "sha256:675e5978eaee31adf21701054fa75bacf820459d56e93ac30ad01eaee047a6ef", size = 195622, upload-time = "2025-05-06T17:36:48.81Z" }, + { url = "https://files.pythonhosted.org/packages/77/fd/5f03a1ad6623b3533a4b7ce8156f0b6a185f4a276d12567e434b855040d1/sphinx_toolbox-4.0.0-py3-none-any.whl", hash = "sha256:c700937baee505e440d44d46bc47ccd036ec282ae61b04e40342944128721117", size = 195781, upload-time = "2025-05-12T17:11:37.45Z" }, ] [[package]] @@ -6211,21 +6612,21 @@ wheels = [ [[package]] name = "starlette" -version = "0.49.1" +version = "0.48.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "anyio", version = "3.7.1", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.12'" }, { name = "anyio", version = "4.11.0", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.12'" }, { name = "typing-extensions", marker = "python_full_version < '3.13'" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/1b/3f/507c21db33b66fb027a332f2cb3abbbe924cc3a79ced12f01ed8645955c9/starlette-0.49.1.tar.gz", hash = "sha256:481a43b71e24ed8c43b11ea02f5353d77840e01480881b8cb5a26b8cae64a8cb", size = 2654703, upload-time = "2025-10-28T17:34:10.928Z" } +sdist = { url = "https://files.pythonhosted.org/packages/a7/a5/d6f429d43394057b67a6b5bbe6eae2f77a6bf7459d961fdb224bf206eee6/starlette-0.48.0.tar.gz", hash = 
"sha256:7e8cee469a8ab2352911528110ce9088fdc6a37d9876926e73da7ce4aa4c7a46", size = 2652949, upload-time = "2025-09-13T08:41:05.699Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/51/da/545b75d420bb23b5d494b0517757b351963e974e79933f01e05c929f20a6/starlette-0.49.1-py3-none-any.whl", hash = "sha256:d92ce9f07e4a3caa3ac13a79523bd18e3bc0042bb8ff2d759a8e7dd0e1859875", size = 74175, upload-time = "2025-10-28T17:34:09.13Z" }, + { url = "https://files.pythonhosted.org/packages/be/72/2db2f49247d0a18b4f1bb9a5a39a0162869acf235f3a96418363947b3d46/starlette-0.48.0-py3-none-any.whl", hash = "sha256:0764ca97b097582558ecb498132ed0c7d942f233f365b86ba37770e026510659", size = 73736, upload-time = "2025-09-13T08:41:03.869Z" }, ] [[package]] name = "swagger-plugin-for-sphinx" -version = "5.2.0" +version = "5.1.3" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "docutils" }, @@ -6233,9 +6634,9 @@ dependencies = [ { name = "sphinx" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/c8/04/4e09392f114fa4e1ef07a05c73dc69f32e1cae0db0ddbf3870439e0ae974/swagger_plugin_for_sphinx-5.2.0.tar.gz", hash = "sha256:1d68c0a62d10e6f814ebc6741df8876d74c444490e27206deb23bd76ad17434d", size = 15967, upload-time = "2025-10-10T05:33:20.635Z" } +sdist = { url = "https://files.pythonhosted.org/packages/55/b5/266fc5fb22b87f1829fd3912b2e7f6c93f4a2dbc7a955f446fe3bb5c6d0b/swagger_plugin_for_sphinx-5.1.3.tar.gz", hash = "sha256:941e5b9ecb7275b616500f890bdfa6299fe2d77c2977c1759691fe31b5979a3c", size = 15862, upload-time = "2025-08-12T05:44:07.778Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/dc/6f/14be3aea9402e5a4c39ce3068a3b40cd3ea56fa5fa1aa0afaca77f47654a/swagger_plugin_for_sphinx-5.2.0-py3-none-any.whl", hash = "sha256:8a0070e13a02d66c8443757906fb73c224d385000c3dcf4822e7b82dd690c34c", size = 11259, upload-time = "2025-10-10T05:33:19.439Z" }, + { url = "https://files.pythonhosted.org/packages/0f/60/52bcc4779b9ba83cdf9e14b342c595d2c346acb081570fa8fa94892d3600/swagger_plugin_for_sphinx-5.1.3-py3-none-any.whl", hash = "sha256:4e2dfb8e551e675f8a5ee264e9b13fac76aa33c953700391037f294b78d44fb0", size = 11402, upload-time = "2025-08-12T05:44:05.554Z" }, ] [[package]] @@ -6315,35 +6716,41 @@ wheels = [ [[package]] name = "tomli" -version = "2.3.0" +version = "2.2.1" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/52/ed/3f73f72945444548f33eba9a87fc7a6e969915e7b1acc8260b30e1f76a2f/tomli-2.3.0.tar.gz", hash = "sha256:64be704a875d2a59753d80ee8a533c3fe183e3f06807ff7dc2232938ccb01549", size = 17392, upload-time = "2025-10-08T22:01:47.119Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/b3/2e/299f62b401438d5fe1624119c723f5d877acc86a4c2492da405626665f12/tomli-2.3.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:88bd15eb972f3664f5ed4b57c1634a97153b4bac4479dcb6a495f41921eb7f45", size = 153236, upload-time = "2025-10-08T22:01:00.137Z" }, - { url = "https://files.pythonhosted.org/packages/86/7f/d8fffe6a7aefdb61bced88fcb5e280cfd71e08939da5894161bd71bea022/tomli-2.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:883b1c0d6398a6a9d29b508c331fa56adbcdff647f6ace4dfca0f50e90dfd0ba", size = 148084, upload-time = "2025-10-08T22:01:01.63Z" }, - { url = "https://files.pythonhosted.org/packages/47/5c/24935fb6a2ee63e86d80e4d3b58b222dafaf438c416752c8b58537c8b89a/tomli-2.3.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = 
"sha256:d1381caf13ab9f300e30dd8feadb3de072aeb86f1d34a8569453ff32a7dea4bf", size = 234832, upload-time = "2025-10-08T22:01:02.543Z" }, - { url = "https://files.pythonhosted.org/packages/89/da/75dfd804fc11e6612846758a23f13271b76d577e299592b4371a4ca4cd09/tomli-2.3.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a0e285d2649b78c0d9027570d4da3425bdb49830a6156121360b3f8511ea3441", size = 242052, upload-time = "2025-10-08T22:01:03.836Z" }, - { url = "https://files.pythonhosted.org/packages/70/8c/f48ac899f7b3ca7eb13af73bacbc93aec37f9c954df3c08ad96991c8c373/tomli-2.3.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:0a154a9ae14bfcf5d8917a59b51ffd5a3ac1fd149b71b47a3a104ca4edcfa845", size = 239555, upload-time = "2025-10-08T22:01:04.834Z" }, - { url = "https://files.pythonhosted.org/packages/ba/28/72f8afd73f1d0e7829bfc093f4cb98ce0a40ffc0cc997009ee1ed94ba705/tomli-2.3.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:74bf8464ff93e413514fefd2be591c3b0b23231a77f901db1eb30d6f712fc42c", size = 245128, upload-time = "2025-10-08T22:01:05.84Z" }, - { url = "https://files.pythonhosted.org/packages/b6/eb/a7679c8ac85208706d27436e8d421dfa39d4c914dcf5fa8083a9305f58d9/tomli-2.3.0-cp311-cp311-win32.whl", hash = "sha256:00b5f5d95bbfc7d12f91ad8c593a1659b6387b43f054104cda404be6bda62456", size = 96445, upload-time = "2025-10-08T22:01:06.896Z" }, - { url = "https://files.pythonhosted.org/packages/0a/fe/3d3420c4cb1ad9cb462fb52967080575f15898da97e21cb6f1361d505383/tomli-2.3.0-cp311-cp311-win_amd64.whl", hash = "sha256:4dc4ce8483a5d429ab602f111a93a6ab1ed425eae3122032db7e9acf449451be", size = 107165, upload-time = "2025-10-08T22:01:08.107Z" }, - { url = "https://files.pythonhosted.org/packages/ff/b7/40f36368fcabc518bb11c8f06379a0fd631985046c038aca08c6d6a43c6e/tomli-2.3.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:d7d86942e56ded512a594786a5ba0a5e521d02529b3826e7761a05138341a2ac", size = 154891, upload-time = "2025-10-08T22:01:09.082Z" }, - { url = "https://files.pythonhosted.org/packages/f9/3f/d9dd692199e3b3aab2e4e4dd948abd0f790d9ded8cd10cbaae276a898434/tomli-2.3.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:73ee0b47d4dad1c5e996e3cd33b8a76a50167ae5f96a2607cbe8cc773506ab22", size = 148796, upload-time = "2025-10-08T22:01:10.266Z" }, - { url = "https://files.pythonhosted.org/packages/60/83/59bff4996c2cf9f9387a0f5a3394629c7efa5ef16142076a23a90f1955fa/tomli-2.3.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:792262b94d5d0a466afb5bc63c7daa9d75520110971ee269152083270998316f", size = 242121, upload-time = "2025-10-08T22:01:11.332Z" }, - { url = "https://files.pythonhosted.org/packages/45/e5/7c5119ff39de8693d6baab6c0b6dcb556d192c165596e9fc231ea1052041/tomli-2.3.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4f195fe57ecceac95a66a75ac24d9d5fbc98ef0962e09b2eddec5d39375aae52", size = 250070, upload-time = "2025-10-08T22:01:12.498Z" }, - { url = "https://files.pythonhosted.org/packages/45/12/ad5126d3a278f27e6701abde51d342aa78d06e27ce2bb596a01f7709a5a2/tomli-2.3.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e31d432427dcbf4d86958c184b9bfd1e96b5b71f8eb17e6d02531f434fd335b8", size = 245859, upload-time = "2025-10-08T22:01:13.551Z" }, - { url = "https://files.pythonhosted.org/packages/fb/a1/4d6865da6a71c603cfe6ad0e6556c73c76548557a8d658f9e3b142df245f/tomli-2.3.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = 
"sha256:7b0882799624980785240ab732537fcfc372601015c00f7fc367c55308c186f6", size = 250296, upload-time = "2025-10-08T22:01:14.614Z" }, - { url = "https://files.pythonhosted.org/packages/a0/b7/a7a7042715d55c9ba6e8b196d65d2cb662578b4d8cd17d882d45322b0d78/tomli-2.3.0-cp312-cp312-win32.whl", hash = "sha256:ff72b71b5d10d22ecb084d345fc26f42b5143c5533db5e2eaba7d2d335358876", size = 97124, upload-time = "2025-10-08T22:01:15.629Z" }, - { url = "https://files.pythonhosted.org/packages/06/1e/f22f100db15a68b520664eb3328fb0ae4e90530887928558112c8d1f4515/tomli-2.3.0-cp312-cp312-win_amd64.whl", hash = "sha256:1cb4ed918939151a03f33d4242ccd0aa5f11b3547d0cf30f7c74a408a5b99878", size = 107698, upload-time = "2025-10-08T22:01:16.51Z" }, - { url = "https://files.pythonhosted.org/packages/89/48/06ee6eabe4fdd9ecd48bf488f4ac783844fd777f547b8d1b61c11939974e/tomli-2.3.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:5192f562738228945d7b13d4930baffda67b69425a7f0da96d360b0a3888136b", size = 154819, upload-time = "2025-10-08T22:01:17.964Z" }, - { url = "https://files.pythonhosted.org/packages/f1/01/88793757d54d8937015c75dcdfb673c65471945f6be98e6a0410fba167ed/tomli-2.3.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:be71c93a63d738597996be9528f4abe628d1adf5e6eb11607bc8fe1a510b5dae", size = 148766, upload-time = "2025-10-08T22:01:18.959Z" }, - { url = "https://files.pythonhosted.org/packages/42/17/5e2c956f0144b812e7e107f94f1cc54af734eb17b5191c0bbfb72de5e93e/tomli-2.3.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c4665508bcbac83a31ff8ab08f424b665200c0e1e645d2bd9ab3d3e557b6185b", size = 240771, upload-time = "2025-10-08T22:01:20.106Z" }, - { url = "https://files.pythonhosted.org/packages/d5/f4/0fbd014909748706c01d16824eadb0307115f9562a15cbb012cd9b3512c5/tomli-2.3.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4021923f97266babc6ccab9f5068642a0095faa0a51a246a6a02fccbb3514eaf", size = 248586, upload-time = "2025-10-08T22:01:21.164Z" }, - { url = "https://files.pythonhosted.org/packages/30/77/fed85e114bde5e81ecf9bc5da0cc69f2914b38f4708c80ae67d0c10180c5/tomli-2.3.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a4ea38c40145a357d513bffad0ed869f13c1773716cf71ccaa83b0fa0cc4e42f", size = 244792, upload-time = "2025-10-08T22:01:22.417Z" }, - { url = "https://files.pythonhosted.org/packages/55/92/afed3d497f7c186dc71e6ee6d4fcb0acfa5f7d0a1a2878f8beae379ae0cc/tomli-2.3.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:ad805ea85eda330dbad64c7ea7a4556259665bdf9d2672f5dccc740eb9d3ca05", size = 248909, upload-time = "2025-10-08T22:01:23.859Z" }, - { url = "https://files.pythonhosted.org/packages/f8/84/ef50c51b5a9472e7265ce1ffc7f24cd4023d289e109f669bdb1553f6a7c2/tomli-2.3.0-cp313-cp313-win32.whl", hash = "sha256:97d5eec30149fd3294270e889b4234023f2c69747e555a27bd708828353ab606", size = 96946, upload-time = "2025-10-08T22:01:24.893Z" }, - { url = "https://files.pythonhosted.org/packages/b2/b7/718cd1da0884f281f95ccfa3a6cc572d30053cba64603f79d431d3c9b61b/tomli-2.3.0-cp313-cp313-win_amd64.whl", hash = "sha256:0c95ca56fbe89e065c6ead5b593ee64b84a26fca063b5d71a1122bf26e533999", size = 107705, upload-time = "2025-10-08T22:01:26.153Z" }, - { url = "https://files.pythonhosted.org/packages/77/b8/0135fadc89e73be292b473cb820b4f5a08197779206b33191e801feeae40/tomli-2.3.0-py3-none-any.whl", hash = "sha256:e95b1af3c5b07d9e643909b5abbec77cd9f1217e6d0bca72b0234736b9fb1f1b", size = 14408, upload-time = "2025-10-08T22:01:46.04Z" }, 
+sdist = { url = "https://files.pythonhosted.org/packages/18/87/302344fed471e44a87289cf4967697d07e532f2421fdaf868a303cbae4ff/tomli-2.2.1.tar.gz", hash = "sha256:cd45e1dc79c835ce60f7404ec8119f2eb06d38b1deba146f07ced3bbc44505ff", size = 17175, upload-time = "2024-11-27T22:38:36.873Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/43/ca/75707e6efa2b37c77dadb324ae7d9571cb424e61ea73fad7c56c2d14527f/tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249", size = 131077, upload-time = "2024-11-27T22:37:54.956Z" }, + { url = "https://files.pythonhosted.org/packages/c7/16/51ae563a8615d472fdbffc43a3f3d46588c264ac4f024f63f01283becfbb/tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6", size = 123429, upload-time = "2024-11-27T22:37:56.698Z" }, + { url = "https://files.pythonhosted.org/packages/f1/dd/4f6cd1e7b160041db83c694abc78e100473c15d54620083dbd5aae7b990e/tomli-2.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ece47d672db52ac607a3d9599a9d48dcb2f2f735c6c2d1f34130085bb12b112a", size = 226067, upload-time = "2024-11-27T22:37:57.63Z" }, + { url = "https://files.pythonhosted.org/packages/a9/6b/c54ede5dc70d648cc6361eaf429304b02f2871a345bbdd51e993d6cdf550/tomli-2.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6972ca9c9cc9f0acaa56a8ca1ff51e7af152a9f87fb64623e31d5c83700080ee", size = 236030, upload-time = "2024-11-27T22:37:59.344Z" }, + { url = "https://files.pythonhosted.org/packages/1f/47/999514fa49cfaf7a92c805a86c3c43f4215621855d151b61c602abb38091/tomli-2.2.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c954d2250168d28797dd4e3ac5cf812a406cd5a92674ee4c8f123c889786aa8e", size = 240898, upload-time = "2024-11-27T22:38:00.429Z" }, + { url = "https://files.pythonhosted.org/packages/73/41/0a01279a7ae09ee1573b423318e7934674ce06eb33f50936655071d81a24/tomli-2.2.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8dd28b3e155b80f4d54beb40a441d366adcfe740969820caf156c019fb5c7ec4", size = 229894, upload-time = "2024-11-27T22:38:02.094Z" }, + { url = "https://files.pythonhosted.org/packages/55/18/5d8bc5b0a0362311ce4d18830a5d28943667599a60d20118074ea1b01bb7/tomli-2.2.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:e59e304978767a54663af13c07b3d1af22ddee3bb2fb0618ca1593e4f593a106", size = 245319, upload-time = "2024-11-27T22:38:03.206Z" }, + { url = "https://files.pythonhosted.org/packages/92/a3/7ade0576d17f3cdf5ff44d61390d4b3febb8a9fc2b480c75c47ea048c646/tomli-2.2.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:33580bccab0338d00994d7f16f4c4ec25b776af3ffaac1ed74e0b3fc95e885a8", size = 238273, upload-time = "2024-11-27T22:38:04.217Z" }, + { url = "https://files.pythonhosted.org/packages/72/6f/fa64ef058ac1446a1e51110c375339b3ec6be245af9d14c87c4a6412dd32/tomli-2.2.1-cp311-cp311-win32.whl", hash = "sha256:465af0e0875402f1d226519c9904f37254b3045fc5084697cefb9bdde1ff99ff", size = 98310, upload-time = "2024-11-27T22:38:05.908Z" }, + { url = "https://files.pythonhosted.org/packages/6a/1c/4a2dcde4a51b81be3530565e92eda625d94dafb46dbeb15069df4caffc34/tomli-2.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:2d0f2fdd22b02c6d81637a3c95f8cd77f995846af7414c5c4b8d0545afa1bc4b", size = 108309, upload-time = "2024-11-27T22:38:06.812Z" }, + { url = 
"https://files.pythonhosted.org/packages/52/e1/f8af4c2fcde17500422858155aeb0d7e93477a0d59a98e56cbfe75070fd0/tomli-2.2.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:4a8f6e44de52d5e6c657c9fe83b562f5f4256d8ebbfe4ff922c495620a7f6cea", size = 132762, upload-time = "2024-11-27T22:38:07.731Z" }, + { url = "https://files.pythonhosted.org/packages/03/b8/152c68bb84fc00396b83e7bbddd5ec0bd3dd409db4195e2a9b3e398ad2e3/tomli-2.2.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8d57ca8095a641b8237d5b079147646153d22552f1c637fd3ba7f4b0b29167a8", size = 123453, upload-time = "2024-11-27T22:38:09.384Z" }, + { url = "https://files.pythonhosted.org/packages/c8/d6/fc9267af9166f79ac528ff7e8c55c8181ded34eb4b0e93daa767b8841573/tomli-2.2.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e340144ad7ae1533cb897d406382b4b6fede8890a03738ff1683af800d54192", size = 233486, upload-time = "2024-11-27T22:38:10.329Z" }, + { url = "https://files.pythonhosted.org/packages/5c/51/51c3f2884d7bab89af25f678447ea7d297b53b5a3b5730a7cb2ef6069f07/tomli-2.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:db2b95f9de79181805df90bedc5a5ab4c165e6ec3fe99f970d0e302f384ad222", size = 242349, upload-time = "2024-11-27T22:38:11.443Z" }, + { url = "https://files.pythonhosted.org/packages/ab/df/bfa89627d13a5cc22402e441e8a931ef2108403db390ff3345c05253935e/tomli-2.2.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40741994320b232529c802f8bc86da4e1aa9f413db394617b9a256ae0f9a7f77", size = 252159, upload-time = "2024-11-27T22:38:13.099Z" }, + { url = "https://files.pythonhosted.org/packages/9e/6e/fa2b916dced65763a5168c6ccb91066f7639bdc88b48adda990db10c8c0b/tomli-2.2.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:400e720fe168c0f8521520190686ef8ef033fb19fc493da09779e592861b78c6", size = 237243, upload-time = "2024-11-27T22:38:14.766Z" }, + { url = "https://files.pythonhosted.org/packages/b4/04/885d3b1f650e1153cbb93a6a9782c58a972b94ea4483ae4ac5cedd5e4a09/tomli-2.2.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:02abe224de6ae62c19f090f68da4e27b10af2b93213d36cf44e6e1c5abd19fdd", size = 259645, upload-time = "2024-11-27T22:38:15.843Z" }, + { url = "https://files.pythonhosted.org/packages/9c/de/6b432d66e986e501586da298e28ebeefd3edc2c780f3ad73d22566034239/tomli-2.2.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:b82ebccc8c8a36f2094e969560a1b836758481f3dc360ce9a3277c65f374285e", size = 244584, upload-time = "2024-11-27T22:38:17.645Z" }, + { url = "https://files.pythonhosted.org/packages/1c/9a/47c0449b98e6e7d1be6cbac02f93dd79003234ddc4aaab6ba07a9a7482e2/tomli-2.2.1-cp312-cp312-win32.whl", hash = "sha256:889f80ef92701b9dbb224e49ec87c645ce5df3fa2cc548664eb8a25e03127a98", size = 98875, upload-time = "2024-11-27T22:38:19.159Z" }, + { url = "https://files.pythonhosted.org/packages/ef/60/9b9638f081c6f1261e2688bd487625cd1e660d0a85bd469e91d8db969734/tomli-2.2.1-cp312-cp312-win_amd64.whl", hash = "sha256:7fc04e92e1d624a4a63c76474610238576942d6b8950a2d7f908a340494e67e4", size = 109418, upload-time = "2024-11-27T22:38:20.064Z" }, + { url = "https://files.pythonhosted.org/packages/04/90/2ee5f2e0362cb8a0b6499dc44f4d7d48f8fff06d28ba46e6f1eaa61a1388/tomli-2.2.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f4039b9cbc3048b2416cc57ab3bda989a6fcf9b36cf8937f01a6e731b64f80d7", size = 132708, upload-time = "2024-11-27T22:38:21.659Z" }, + { url = 
"https://files.pythonhosted.org/packages/c0/ec/46b4108816de6b385141f082ba99e315501ccd0a2ea23db4a100dd3990ea/tomli-2.2.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:286f0ca2ffeeb5b9bd4fcc8d6c330534323ec51b2f52da063b11c502da16f30c", size = 123582, upload-time = "2024-11-27T22:38:22.693Z" }, + { url = "https://files.pythonhosted.org/packages/a0/bd/b470466d0137b37b68d24556c38a0cc819e8febe392d5b199dcd7f578365/tomli-2.2.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a92ef1a44547e894e2a17d24e7557a5e85a9e1d0048b0b5e7541f76c5032cb13", size = 232543, upload-time = "2024-11-27T22:38:24.367Z" }, + { url = "https://files.pythonhosted.org/packages/d9/e5/82e80ff3b751373f7cead2815bcbe2d51c895b3c990686741a8e56ec42ab/tomli-2.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9316dc65bed1684c9a98ee68759ceaed29d229e985297003e494aa825ebb0281", size = 241691, upload-time = "2024-11-27T22:38:26.081Z" }, + { url = "https://files.pythonhosted.org/packages/05/7e/2a110bc2713557d6a1bfb06af23dd01e7dde52b6ee7dadc589868f9abfac/tomli-2.2.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e85e99945e688e32d5a35c1ff38ed0b3f41f43fad8df0bdf79f72b2ba7bc5272", size = 251170, upload-time = "2024-11-27T22:38:27.921Z" }, + { url = "https://files.pythonhosted.org/packages/64/7b/22d713946efe00e0adbcdfd6d1aa119ae03fd0b60ebed51ebb3fa9f5a2e5/tomli-2.2.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ac065718db92ca818f8d6141b5f66369833d4a80a9d74435a268c52bdfa73140", size = 236530, upload-time = "2024-11-27T22:38:29.591Z" }, + { url = "https://files.pythonhosted.org/packages/38/31/3a76f67da4b0cf37b742ca76beaf819dca0ebef26d78fc794a576e08accf/tomli-2.2.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:d920f33822747519673ee656a4b6ac33e382eca9d331c87770faa3eef562aeb2", size = 258666, upload-time = "2024-11-27T22:38:30.639Z" }, + { url = "https://files.pythonhosted.org/packages/07/10/5af1293da642aded87e8a988753945d0cf7e00a9452d3911dd3bb354c9e2/tomli-2.2.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a198f10c4d1b1375d7687bc25294306e551bf1abfa4eace6650070a5c1ae2744", size = 243954, upload-time = "2024-11-27T22:38:31.702Z" }, + { url = "https://files.pythonhosted.org/packages/5b/b9/1ed31d167be802da0fc95020d04cd27b7d7065cc6fbefdd2f9186f60d7bd/tomli-2.2.1-cp313-cp313-win32.whl", hash = "sha256:d3f5614314d758649ab2ab3a62d4f2004c825922f9e370b29416484086b264ec", size = 98724, upload-time = "2024-11-27T22:38:32.837Z" }, + { url = "https://files.pythonhosted.org/packages/c7/32/b0963458706accd9afcfeb867c0f9175a741bf7b19cd424230714d722198/tomli-2.2.1-cp313-cp313-win_amd64.whl", hash = "sha256:a38aa0308e754b0e3c67e344754dff64999ff9b513e691d0e786265c93583c69", size = 109383, upload-time = "2024-11-27T22:38:34.455Z" }, + { url = "https://files.pythonhosted.org/packages/6e/c2/61d3e0f47e2b74ef40a68b9e6ad5984f6241a942f7cd3bbfbdbd03861ea9/tomli-2.2.1-py3-none-any.whl", hash = "sha256:cb55c73c5f4408779d0cf3eef9f762b9c9f147a77de7b258bef0a5628adc85cc", size = 14257, upload-time = "2024-11-27T22:38:35.385Z" }, ] [[package]] @@ -6456,7 +6863,7 @@ datetime = [ [[package]] name = "typer" -version = "0.20.0" +version = "0.19.2" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "click" }, @@ -6464,18 +6871,18 @@ dependencies = [ { name = "shellingham" }, { name = "typing-extensions" }, ] -sdist = { url = 
"https://files.pythonhosted.org/packages/8f/28/7c85c8032b91dbe79725b6f17d2fffc595dff06a35c7a30a37bef73a1ab4/typer-0.20.0.tar.gz", hash = "sha256:1aaf6494031793e4876fb0bacfa6a912b551cf43c1e63c800df8b1a866720c37", size = 106492, upload-time = "2025-10-20T17:03:49.445Z" } +sdist = { url = "https://files.pythonhosted.org/packages/21/ca/950278884e2ca20547ff3eb109478c6baf6b8cf219318e6bc4f666fad8e8/typer-0.19.2.tar.gz", hash = "sha256:9ad824308ded0ad06cc716434705f691d4ee0bfd0fb081839d2e426860e7fdca", size = 104755, upload-time = "2025-09-23T09:47:48.256Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/78/64/7713ffe4b5983314e9d436a90d5bd4f63b6054e2aca783a3cfc44cb95bbf/typer-0.20.0-py3-none-any.whl", hash = "sha256:5b463df6793ec1dca6213a3cf4c0f03bc6e322ac5e16e13ddd622a889489784a", size = 47028, upload-time = "2025-10-20T17:03:47.617Z" }, + { url = "https://files.pythonhosted.org/packages/00/22/35617eee79080a5d071d0f14ad698d325ee6b3bf824fc0467c03b30e7fa8/typer-0.19.2-py3-none-any.whl", hash = "sha256:755e7e19670ffad8283db353267cb81ef252f595aa6834a0d1ca9312d9326cb9", size = 46748, upload-time = "2025-09-23T09:47:46.777Z" }, ] [[package]] name = "types-python-dateutil" -version = "2.9.0.20251008" +version = "2.9.0.20250822" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/fc/83/24ed25dd0c6277a1a170c180ad9eef5879ecc9a4745b58d7905a4588c80d/types_python_dateutil-2.9.0.20251008.tar.gz", hash = "sha256:c3826289c170c93ebd8360c3485311187df740166dbab9dd3b792e69f2bc1f9c", size = 16128, upload-time = "2025-10-08T02:51:34.93Z" } +sdist = { url = "https://files.pythonhosted.org/packages/0c/0a/775f8551665992204c756be326f3575abba58c4a3a52eef9909ef4536428/types_python_dateutil-2.9.0.20250822.tar.gz", hash = "sha256:84c92c34bd8e68b117bff742bc00b692a1e8531262d4507b33afcc9f7716cd53", size = 16084, upload-time = "2025-08-22T03:02:00.613Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/da/af/5d24b8d49ef358468ecfdff5c556adf37f4fd28e336b96f923661a808329/types_python_dateutil-2.9.0.20251008-py3-none-any.whl", hash = "sha256:b9a5232c8921cf7661b29c163ccc56055c418ab2c6eabe8f917cbcc73a4c4157", size = 17934, upload-time = "2025-10-08T02:51:33.55Z" }, + { url = "https://files.pythonhosted.org/packages/ab/d9/a29dfa84363e88b053bf85a8b7f212a04f0d7343a4d24933baa45c06e08b/types_python_dateutil-2.9.0.20250822-py3-none-any.whl", hash = "sha256:849d52b737e10a6dc6621d2bd7940ec7c65fcb69e6aa2882acf4e56b2b508ddc", size = 17892, upload-time = "2025-08-22T03:01:59.436Z" }, ] [[package]] @@ -6568,6 +6975,28 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/d8/50/8856e24bec5e2fc7f775d867aeb7a3f137359356200ac44658f1f2c834b2/ujson-5.11.0-cp313-cp313-win32.whl", hash = "sha256:8fa2af7c1459204b7a42e98263b069bd535ea0cd978b4d6982f35af5a04a4241", size = 39753, upload-time = "2025-08-20T11:56:01.345Z" }, { url = "https://files.pythonhosted.org/packages/5b/d8/1baee0f4179a4d0f5ce086832147b6cc9b7731c24ca08e14a3fdb8d39c32/ujson-5.11.0-cp313-cp313-win_amd64.whl", hash = "sha256:34032aeca4510a7c7102bd5933f59a37f63891f30a0706fb46487ab6f0edf8f0", size = 43866, upload-time = "2025-08-20T11:56:02.552Z" }, { url = "https://files.pythonhosted.org/packages/a9/8c/6d85ef5be82c6d66adced3ec5ef23353ed710a11f70b0b6a836878396334/ujson-5.11.0-cp313-cp313-win_arm64.whl", hash = "sha256:ce076f2df2e1aa62b685086fbad67f2b1d3048369664b4cdccc50707325401f9", size = 38363, upload-time = "2025-08-20T11:56:03.688Z" }, + { url = 
"https://files.pythonhosted.org/packages/28/08/4518146f4984d112764b1dfa6fb7bad691c44a401adadaa5e23ccd930053/ujson-5.11.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:65724738c73645db88f70ba1f2e6fb678f913281804d5da2fd02c8c5839af302", size = 55462, upload-time = "2025-08-20T11:56:04.873Z" }, + { url = "https://files.pythonhosted.org/packages/29/37/2107b9a62168867a692654d8766b81bd2fd1e1ba13e2ec90555861e02b0c/ujson-5.11.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:29113c003ca33ab71b1b480bde952fbab2a0b6b03a4ee4c3d71687cdcbd1a29d", size = 53246, upload-time = "2025-08-20T11:56:06.054Z" }, + { url = "https://files.pythonhosted.org/packages/9b/f8/25583c70f83788edbe3ca62ce6c1b79eff465d78dec5eb2b2b56b3e98b33/ujson-5.11.0-cp314-cp314-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c44c703842024d796b4c78542a6fcd5c3cb948b9fc2a73ee65b9c86a22ee3638", size = 57631, upload-time = "2025-08-20T11:56:07.374Z" }, + { url = "https://files.pythonhosted.org/packages/ed/ca/19b3a632933a09d696f10dc1b0dfa1d692e65ad507d12340116ce4f67967/ujson-5.11.0-cp314-cp314-manylinux_2_24_i686.manylinux_2_28_i686.whl", hash = "sha256:e750c436fb90edf85585f5c62a35b35082502383840962c6983403d1bd96a02c", size = 59877, upload-time = "2025-08-20T11:56:08.534Z" }, + { url = "https://files.pythonhosted.org/packages/55/7a/4572af5324ad4b2bfdd2321e898a527050290147b4ea337a79a0e4e87ec7/ujson-5.11.0-cp314-cp314-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:f278b31a7c52eb0947b2db55a5133fbc46b6f0ef49972cd1a80843b72e135aba", size = 57363, upload-time = "2025-08-20T11:56:09.758Z" }, + { url = "https://files.pythonhosted.org/packages/7b/71/a2b8c19cf4e1efe53cf439cdf7198ac60ae15471d2f1040b490c1f0f831f/ujson-5.11.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:ab2cb8351d976e788669c8281465d44d4e94413718af497b4e7342d7b2f78018", size = 1036394, upload-time = "2025-08-20T11:56:11.168Z" }, + { url = "https://files.pythonhosted.org/packages/7a/3e/7b98668cba3bb3735929c31b999b374ebc02c19dfa98dfebaeeb5c8597ca/ujson-5.11.0-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:090b4d11b380ae25453100b722d0609d5051ffe98f80ec52853ccf8249dfd840", size = 1195837, upload-time = "2025-08-20T11:56:12.6Z" }, + { url = "https://files.pythonhosted.org/packages/a1/ea/8870f208c20b43571a5c409ebb2fe9b9dba5f494e9e60f9314ac01ea8f78/ujson-5.11.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:80017e870d882d5517d28995b62e4e518a894f932f1e242cbc802a2fd64d365c", size = 1088837, upload-time = "2025-08-20T11:56:14.15Z" }, + { url = "https://files.pythonhosted.org/packages/63/b6/c0e6607e37fa47929920a685a968c6b990a802dec65e9c5181e97845985d/ujson-5.11.0-cp314-cp314-win32.whl", hash = "sha256:1d663b96eb34c93392e9caae19c099ec4133ba21654b081956613327f0e973ac", size = 41022, upload-time = "2025-08-20T11:56:15.509Z" }, + { url = "https://files.pythonhosted.org/packages/4e/56/f4fe86b4c9000affd63e9219e59b222dc48b01c534533093e798bf617a7e/ujson-5.11.0-cp314-cp314-win_amd64.whl", hash = "sha256:849e65b696f0d242833f1df4182096cedc50d414215d1371fca85c541fbff629", size = 45111, upload-time = "2025-08-20T11:56:16.597Z" }, + { url = "https://files.pythonhosted.org/packages/0a/f3/669437f0280308db4783b12a6d88c00730b394327d8334cc7a32ef218e64/ujson-5.11.0-cp314-cp314-win_arm64.whl", hash = "sha256:e73df8648c9470af2b6a6bf5250d4744ad2cf3d774dcf8c6e31f018bdd04d764", size = 39682, upload-time = "2025-08-20T11:56:17.763Z" }, + { url = 
"https://files.pythonhosted.org/packages/6e/cd/e9809b064a89fe5c4184649adeb13c1b98652db3f8518980b04227358574/ujson-5.11.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:de6e88f62796372fba1de973c11138f197d3e0e1d80bcb2b8aae1e826096d433", size = 55759, upload-time = "2025-08-20T11:56:18.882Z" }, + { url = "https://files.pythonhosted.org/packages/1b/be/ae26a6321179ebbb3a2e2685b9007c71bcda41ad7a77bbbe164005e956fc/ujson-5.11.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:49e56ef8066f11b80d620985ae36869a3ff7e4b74c3b6129182ec5d1df0255f3", size = 53634, upload-time = "2025-08-20T11:56:20.012Z" }, + { url = "https://files.pythonhosted.org/packages/ae/e9/fb4a220ee6939db099f4cfeeae796ecb91e7584ad4d445d4ca7f994a9135/ujson-5.11.0-cp314-cp314t-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1a325fd2c3a056cf6c8e023f74a0c478dd282a93141356ae7f16d5309f5ff823", size = 58547, upload-time = "2025-08-20T11:56:21.175Z" }, + { url = "https://files.pythonhosted.org/packages/bd/f8/fc4b952b8f5fea09ea3397a0bd0ad019e474b204cabcb947cead5d4d1ffc/ujson-5.11.0-cp314-cp314t-manylinux_2_24_i686.manylinux_2_28_i686.whl", hash = "sha256:a0af6574fc1d9d53f4ff371f58c96673e6d988ed2b5bf666a6143c782fa007e9", size = 60489, upload-time = "2025-08-20T11:56:22.342Z" }, + { url = "https://files.pythonhosted.org/packages/2e/e5/af5491dfda4f8b77e24cf3da68ee0d1552f99a13e5c622f4cef1380925c3/ujson-5.11.0-cp314-cp314t-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:10f29e71ecf4ecd93a6610bd8efa8e7b6467454a363c3d6416db65de883eb076", size = 58035, upload-time = "2025-08-20T11:56:23.92Z" }, + { url = "https://files.pythonhosted.org/packages/c4/09/0945349dd41f25cc8c38d78ace49f14c5052c5bbb7257d2f466fa7bdb533/ujson-5.11.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:1a0a9b76a89827a592656fe12e000cf4f12da9692f51a841a4a07aa4c7ecc41c", size = 1037212, upload-time = "2025-08-20T11:56:25.274Z" }, + { url = "https://files.pythonhosted.org/packages/49/44/8e04496acb3d5a1cbee3a54828d9652f67a37523efa3d3b18a347339680a/ujson-5.11.0-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:b16930f6a0753cdc7d637b33b4e8f10d5e351e1fb83872ba6375f1e87be39746", size = 1196500, upload-time = "2025-08-20T11:56:27.517Z" }, + { url = "https://files.pythonhosted.org/packages/64/ae/4bc825860d679a0f208a19af2f39206dfd804ace2403330fdc3170334a2f/ujson-5.11.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:04c41afc195fd477a59db3a84d5b83a871bd648ef371cf8c6f43072d89144eef", size = 1089487, upload-time = "2025-08-20T11:56:29.07Z" }, + { url = "https://files.pythonhosted.org/packages/30/ed/5a057199fb0a5deabe0957073a1c1c1c02a3e99476cd03daee98ea21fa57/ujson-5.11.0-cp314-cp314t-win32.whl", hash = "sha256:aa6d7a5e09217ff93234e050e3e380da62b084e26b9f2e277d2606406a2fc2e5", size = 41859, upload-time = "2025-08-20T11:56:30.495Z" }, + { url = "https://files.pythonhosted.org/packages/aa/03/b19c6176bdf1dc13ed84b886e99677a52764861b6cc023d5e7b6ebda249d/ujson-5.11.0-cp314-cp314t-win_amd64.whl", hash = "sha256:48055e1061c1bb1f79e75b4ac39e821f3f35a9b82de17fce92c3140149009bec", size = 46183, upload-time = "2025-08-20T11:56:31.574Z" }, + { url = "https://files.pythonhosted.org/packages/5d/ca/a0413a3874b2dc1708b8796ca895bf363292f9c70b2e8ca482b7dbc0259d/ujson-5.11.0-cp314-cp314t-win_arm64.whl", hash = "sha256:1194b943e951092db611011cb8dbdb6cf94a3b816ed07906e14d3bc6ce0e90ab", size = 40264, upload-time = "2025-08-20T11:56:32.773Z" }, { url = 
"https://files.pythonhosted.org/packages/50/17/30275aa2933430d8c0c4ead951cc4fdb922f575a349aa0b48a6f35449e97/ujson-5.11.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:abae0fb58cc820092a0e9e8ba0051ac4583958495bfa5262a12f628249e3b362", size = 51206, upload-time = "2025-08-20T11:56:48.797Z" }, { url = "https://files.pythonhosted.org/packages/c3/15/42b3924258eac2551f8f33fa4e35da20a06a53857ccf3d4deb5e5d7c0b6c/ujson-5.11.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:fac6c0649d6b7c3682a0a6e18d3de6857977378dce8d419f57a0b20e3d775b39", size = 48907, upload-time = "2025-08-20T11:56:50.136Z" }, { url = "https://files.pythonhosted.org/packages/94/7e/0519ff7955aba581d1fe1fb1ca0e452471250455d182f686db5ac9e46119/ujson-5.11.0-pp311-pypy311_pp73-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4b42c115c7c6012506e8168315150d1e3f76e7ba0f4f95616f4ee599a1372bbc", size = 50319, upload-time = "2025-08-20T11:56:51.63Z" }, @@ -6614,28 +7043,28 @@ wheels = [ [[package]] name = "uv" -version = "0.9.7" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/cc/f6/9914f57d152cfcb85f3a26f8fbac3c88e4eb9cbe88639076241e16819334/uv-0.9.7.tar.gz", hash = "sha256:555ee72146b8782c73d755e4a21c9885c6bfc81db0ffca2220d52dddae007eb7", size = 3705596, upload-time = "2025-10-30T22:17:18.652Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/58/38/cee64a9dcefd46f83a922c4e31d9cd9d91ce0d27a594192f7df677151eb4/uv-0.9.7-py3-none-linux_armv6l.whl", hash = "sha256:134e0daac56f9e399ccdfc9e4635bc0a13c234cad9224994c67bae462e07399a", size = 20614967, upload-time = "2025-10-30T22:16:31.274Z" }, - { url = "https://files.pythonhosted.org/packages/6f/b7/1b1ff8dfde05e9d27abf29ebf22da48428fe1e16f0b4d65a839bd2211303/uv-0.9.7-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:1aaf79b4234400e9e2fbf5b50b091726ccbb0b6d4d032edd3dfd4c9673d89dca", size = 19692886, upload-time = "2025-10-30T22:16:35.893Z" }, - { url = "https://files.pythonhosted.org/packages/f5/7d/b618174d8a8216af350398ace03805b2b2df6267b1745abf45556c2fda58/uv-0.9.7-py3-none-macosx_11_0_arm64.whl", hash = "sha256:0fdbfad5b367e7a3968264af6da5bbfffd4944a90319042f166e8df1a2d9de09", size = 18345022, upload-time = "2025-10-30T22:16:38.45Z" }, - { url = "https://files.pythonhosted.org/packages/13/4c/03fafb7d28289d54ac7a34507f1e97e527971f8b0ee2c5e957045966a1a6/uv-0.9.7-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:635e82c2d0d8b001618af82e4f2724350f15814f6462a71b3ebd44adec21f03c", size = 20170427, upload-time = "2025-10-30T22:16:41.099Z" }, - { url = "https://files.pythonhosted.org/packages/35/0e/f1316da150453755bb88cf4232e8934de71a0091eb274a8b69d948535453/uv-0.9.7-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:56a440ccde7624a7bc070e1c2492b358c67aea9b8f17bc243ea27c5871c8d02c", size = 20234277, upload-time = "2025-10-30T22:16:43.521Z" }, - { url = "https://files.pythonhosted.org/packages/37/b8/cb62cd78151b235c5da9290f0e3fb032b36706f2922208a691678aa0f2df/uv-0.9.7-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b5f1fb8203a77853db176000e8f30d5815ab175dc46199db059f97a72fc51110", size = 21180078, upload-time = "2025-10-30T22:16:45.857Z" }, - { url = "https://files.pythonhosted.org/packages/be/e5/6107249d23f06fa1739496e89699e76169037b4643144b28b324efc3075d/uv-0.9.7-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = 
"sha256:bb8bfcc2897f7653522abc2cae80233af756ad857bfbbbbe176f79460cbba417", size = 22743896, upload-time = "2025-10-30T22:16:48.487Z" }, - { url = "https://files.pythonhosted.org/packages/df/94/69d8e0bb29c140305e7677bc8c98c765468a55cb10966e77bb8c69bf815d/uv-0.9.7-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:89697fa0d7384ba047daf75df844ee7800235105e41d08e0c876861a2b4aa90e", size = 22361126, upload-time = "2025-10-30T22:16:51.366Z" }, - { url = "https://files.pythonhosted.org/packages/c0/0d/d186456cd0d7972ed026e5977b8a12e1f94c923fc3d6e86c7826c6f0d1fe/uv-0.9.7-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c9810ee8173dce129c49b338d5e97f3d7c7e9435f73e0b9b26c2f37743d3bb9e", size = 21477489, upload-time = "2025-10-30T22:16:53.757Z" }, - { url = "https://files.pythonhosted.org/packages/c7/59/61d8e9f1734069049abe9e593961de602397c7194712346906c075fec65f/uv-0.9.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8cf6bc2482d1293cc630f66b862b494c09acda9b7faff7307ef52667a2b3ad49", size = 21382006, upload-time = "2025-10-30T22:16:56.117Z" }, - { url = "https://files.pythonhosted.org/packages/74/ac/090dbde63abb56001190392d29ca2aa654eebc146a693b5dda68da0df2fb/uv-0.9.7-py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:7019f4416925f4091b9d28c1cf3e8444cf910c4ede76bdf1f6b9a56ca5f97985", size = 20255103, upload-time = "2025-10-30T22:16:58.434Z" }, - { url = "https://files.pythonhosted.org/packages/56/e7/ca2d99a4ce86366731547a84b5a2c946528b8d6d28c74ac659c925955a0c/uv-0.9.7-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:edd768f6730bba06aa10fdbd80ee064569f7236806f636bf65b68136a430aad0", size = 21311768, upload-time = "2025-10-30T22:17:01.259Z" }, - { url = "https://files.pythonhosted.org/packages/d8/1a/c5d9e57f52aa30bfee703e6b9e5b5072102cfc706f3444377bb0de79eac7/uv-0.9.7-py3-none-musllinux_1_1_armv7l.whl", hash = "sha256:d6e5fe28ca05a4b576c0e8da5f69251dc187a67054829cfc4afb2bfa1767114b", size = 20239129, upload-time = "2025-10-30T22:17:03.815Z" }, - { url = "https://files.pythonhosted.org/packages/aa/ab/16110ca6b1c4aaad79b4f2c6bc102c416a906e5d29947d0dc774f6ef4365/uv-0.9.7-py3-none-musllinux_1_1_i686.whl", hash = "sha256:34fe0af83fcafb9e2b786f4bd633a06c878d548a7c479594ffb5607db8778471", size = 20647326, upload-time = "2025-10-30T22:17:06.33Z" }, - { url = "https://files.pythonhosted.org/packages/89/a9/2a8129c796831279cc0c53ffdd19dd6133d514805e52b1ef8a2aa0ff8912/uv-0.9.7-py3-none-musllinux_1_1_x86_64.whl", hash = "sha256:777bb1de174319245a35e4f805d3b4484d006ebedae71d3546f95e7c28a5f436", size = 21604958, upload-time = "2025-10-30T22:17:09.046Z" }, - { url = "https://files.pythonhosted.org/packages/73/97/616650cb4dd5fbaabf8237469e1bc84710ae878095d359999982e1bc8ecf/uv-0.9.7-py3-none-win32.whl", hash = "sha256:bcf878528bd079fe8ae15928b5dfa232fac8b0e1854a2102da6ae1a833c31276", size = 19418913, upload-time = "2025-10-30T22:17:11.384Z" }, - { url = "https://files.pythonhosted.org/packages/de/7f/e3cdaffac70852f5ff933b04c7b8a06c0f91f41e563f04b689caa65b71bd/uv-0.9.7-py3-none-win_amd64.whl", hash = "sha256:62b315f62669899076a1953fba6baf50bd2b57f66f656280491331dcedd7e6c6", size = 21443513, upload-time = "2025-10-30T22:17:13.785Z" }, - { url = "https://files.pythonhosted.org/packages/89/79/8278452acae2fe96829485d32e1a2363829c9e42674704562ffcfc06b140/uv-0.9.7-py3-none-win_arm64.whl", hash = "sha256:d13da6521d4e841b1e0a9fda82e793dcf8458a323a9e8955f50903479d0bfa97", size = 19946729, upload-time = "2025-10-30T22:17:16.669Z" }, +version = "0.8.23" 
+source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/dc/85/6ae7e6a003bf815a1774c11e08de56f2037e1dad0bbbe050c7d3bd57be14/uv-0.8.23.tar.gz", hash = "sha256:1d3ee6f88b77429454172048a9672b8058607abcdf66cb8229707f0312a6752c", size = 3667341, upload-time = "2025-10-04T18:23:53.47Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/06/50/19a48639b2f61bf3f42d92e556494e3b39ccb58287b8b3244236b9d9df45/uv-0.8.23-py3-none-linux_armv6l.whl", hash = "sha256:58879ab3544ed0d7996dc5d9f87ce6a9770bd8f7886d8504298f62c481ecd9fd", size = 20599279, upload-time = "2025-10-04T18:23:06.64Z" }, + { url = "https://files.pythonhosted.org/packages/d7/26/36b3b37ca79bfff6998d7e9567465e6e3b4acf3fe1c7b226302272369240/uv-0.8.23-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:3a5c6cfad0a7b92e2a2ddf121e86171c7b850ccbdbc82b49afb8014e60d7dd84", size = 19580214, upload-time = "2025-10-04T18:23:10.551Z" }, + { url = "https://files.pythonhosted.org/packages/97/a1/e64b4c9a4db6c6ce6ee286991ce98e9cbf472402a5d09f216b85f2287465/uv-0.8.23-py3-none-macosx_11_0_arm64.whl", hash = "sha256:092404eb361f2f6cddf2c0a195c3f4bd2bc8baae60ed8b43409f93f672992b40", size = 18193303, upload-time = "2025-10-04T18:23:12.955Z" }, + { url = "https://files.pythonhosted.org/packages/f4/d7/8dfd344ca878b4de2d6e43636792ecef9d4870dd3735b3cb4951cfc22b0b/uv-0.8.23-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:8d891aa0f82d67ed32811e1f3e6f314c6b0759a01b5b7676b4969196caf74295", size = 19968823, upload-time = "2025-10-04T18:23:15.861Z" }, + { url = "https://files.pythonhosted.org/packages/02/91/cbf2ebd1642577af2054842fb22cf21f7fa71d59a05ef5bda8f065ea8cc0/uv-0.8.23-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f59f3823750da8187d9b3c65b87eb2436c398f385befa0ec48a1118d2accea51", size = 20178276, upload-time = "2025-10-04T18:23:19.056Z" }, + { url = "https://files.pythonhosted.org/packages/9d/d3/fefd0589f235c4c1d9b66baaad1472705c4621edc2eb7dabb0d98fc84d72/uv-0.8.23-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96c1abfcd2c44c3ac8dec5079e1e51497f371a04c97978fed2a16d1c7343e471", size = 21059931, upload-time = "2025-10-04T18:23:21.316Z" }, + { url = "https://files.pythonhosted.org/packages/f7/98/7c7237d891c5d8a350bb9d59593fc103add12269ff983e13bd18bbb52b3e/uv-0.8.23-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:042c2e8b701a55078e0eb6f164c7bf550b6040437036dc572db39115cdaa5a23", size = 22547863, upload-time = "2025-10-04T18:23:24.108Z" }, + { url = "https://files.pythonhosted.org/packages/03/79/c5043180fc6c1f68e4752e0067ffbb8273fea6bafc3ff4e3e1be9c69d63c/uv-0.8.23-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:db02664898449af91a60b4caedc4a18944efc6375d814aa2e424204e975a43d7", size = 22172574, upload-time = "2025-10-04T18:23:26.775Z" }, + { url = "https://files.pythonhosted.org/packages/f6/73/2c472e40fc31f0fd61a41499bf1559af4d662ffa884b4d575f09c695b52e/uv-0.8.23-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e310bd2b77f0e1cf5d8d31f918292e36d884eb7eed76561198a195baee72f277", size = 21272611, upload-time = "2025-10-04T18:23:28.991Z" }, + { url = "https://files.pythonhosted.org/packages/c4/2f/4f4e49dd04a90d982e76abbe0b6747187006a42b04f611042b525bb05c4b/uv-0.8.23-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:126ca80b6f72859998e2036679ce21c9b5024df0a09d8112698adc4153c3a1a7", size = 21239185, upload-time = 
"2025-10-04T18:23:31.258Z" }, + { url = "https://files.pythonhosted.org/packages/2b/35/2932f49ab0c6991e51e870bbf9fdef2a82f77cb5ed038a15b05dc9ce2c3e/uv-0.8.23-py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:1a695477c367cb5568a33c8cf075280853eebbdec387c861e27e7d9201985d5f", size = 20097560, upload-time = "2025-10-04T18:23:33.975Z" }, + { url = "https://files.pythonhosted.org/packages/53/ef/d34f514d759b3a2068c50d9dd68672fc5b6be9c8cd585eb311ded73a2b20/uv-0.8.23-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:cf4b8b167ff38ebcdc2dbd26b9918a2b3d937f4f2811280cbebdd937a838184e", size = 21187580, upload-time = "2025-10-04T18:23:36.365Z" }, + { url = "https://files.pythonhosted.org/packages/b0/a9/52ef1b04419d1bb9ba870fc6fae4fa7b217e5dea079fb3dd7f212965fb89/uv-0.8.23-py3-none-musllinux_1_1_armv7l.whl", hash = "sha256:2884ab47ac18dcd24e366544ab93938ce8ab1aea38d896e87d92c44f08bb0bc1", size = 20136752, upload-time = "2025-10-04T18:23:38.68Z" }, + { url = "https://files.pythonhosted.org/packages/15/07/6ab974393935d37c4f6fa05ee27096ba5fd28850ae8ae049fad6f11febf8/uv-0.8.23-py3-none-musllinux_1_1_i686.whl", hash = "sha256:287430978458afbeab22aa2aafbfe3f5ec90f1054e7d4faec4156282930c44cb", size = 20494797, upload-time = "2025-10-04T18:23:40.947Z" }, + { url = "https://files.pythonhosted.org/packages/1b/f6/250531420babcd2e121c0998611e785335b7766989496ad405d42ef5f580/uv-0.8.23-py3-none-musllinux_1_1_x86_64.whl", hash = "sha256:16cbae67231acdd704e5e589d6fd37e16222d9fff32ca117a0cb3213be65fdf8", size = 21432784, upload-time = "2025-10-04T18:23:43.671Z" }, + { url = "https://files.pythonhosted.org/packages/7a/38/0c65b9c2305cd07e938b19d074dd6db5a7fd0a9dac86813b17d0d53e1759/uv-0.8.23-py3-none-win32.whl", hash = "sha256:f52c8068569d50e1b6d7670f709f5894f0e2f09c094d578d7d12cff0166924ad", size = 19331051, upload-time = "2025-10-04T18:23:46.268Z" }, + { url = "https://files.pythonhosted.org/packages/f1/1e/46f242f974e4480b157ee8276d8c21fb2a975b842e321b72497e27889f8f/uv-0.8.23-py3-none-win_amd64.whl", hash = "sha256:39bc5cd9310ef7a4f567885ba48fd4f6174029986351321fcfa5887076f82380", size = 21380950, upload-time = "2025-10-04T18:23:48.959Z" }, + { url = "https://files.pythonhosted.org/packages/82/6b/37f0cfa325bb4a4a462aee8e0e2d1d1f409b7f4dcfa84b19022f28be8a5b/uv-0.8.23-py3-none-win_arm64.whl", hash = "sha256:cc1725b546edae8d66d9b10aa2616ac5f93c3fa62c1ec72087afcb4b4b802e99", size = 19821125, upload-time = "2025-10-04T18:23:51.584Z" }, ] [[package]] @@ -6688,18 +7117,30 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/63/9a/0962b05b308494e3202d3f794a6e85abe471fe3cafdbcf95c2e8c713aabd/uvloop-0.21.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a5c39f217ab3c663dc699c04cbd50c13813e31d917642d459fdcec07555cc553", size = 4660018, upload-time = "2024-10-14T23:38:10.888Z" }, ] +[[package]] +name = "vbuild" +version = "0.8.2" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "pscript" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/22/be/f0c6204a36440bbcc086bfa25964d009b7391c5a3c74d6e73188efd47adb/vbuild-0.8.2.tar.gz", hash = "sha256:270cd9078349d907dfae6c0e6364a5a5e74cb86183bb5093613f12a18b435fa9", size = 8937, upload-time = "2023-08-03T09:26:36.196Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/a6/3d/7b22abbdb059d551507275a2815bc2b1974e3b9f6a13781c1eac9e858965/vbuild-0.8.2-py2.py3-none-any.whl", hash = "sha256:d76bcc976a1c53b6a5776ac947606f9e7786c25df33a587ebe33ed09dd8a1076", size = 9371, upload-time = "2023-08-03T09:26:35.023Z" }, +] + 
[[package]] name = "virtualenv" -version = "20.35.3" +version = "20.34.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "distlib" }, { name = "filelock" }, { name = "platformdirs" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/a4/d5/b0ccd381d55c8f45d46f77df6ae59fbc23d19e901e2d523395598e5f4c93/virtualenv-20.35.3.tar.gz", hash = "sha256:4f1a845d131133bdff10590489610c98c168ff99dc75d6c96853801f7f67af44", size = 6002907, upload-time = "2025-10-10T21:23:33.178Z" } +sdist = { url = "https://files.pythonhosted.org/packages/1c/14/37fcdba2808a6c615681cd216fecae00413c9dab44fb2e57805ecf3eaee3/virtualenv-20.34.0.tar.gz", hash = "sha256:44815b2c9dee7ed86e387b842a84f20b93f7f417f95886ca1996a72a4138eb1a", size = 6003808, upload-time = "2025-08-13T14:24:07.464Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/27/73/d9a94da0e9d470a543c1b9d3ccbceb0f59455983088e727b8a1824ed90fb/virtualenv-20.35.3-py3-none-any.whl", hash = "sha256:63d106565078d8c8d0b206d48080f938a8b25361e19432d2c9db40d2899c810a", size = 5981061, upload-time = "2025-10-10T21:23:30.433Z" }, + { url = "https://files.pythonhosted.org/packages/76/06/04c8e804f813cf972e3262f3f8584c232de64f0cde9f703b46cf53a45090/virtualenv-20.34.0-py3-none-any.whl", hash = "sha256:341f5afa7eee943e4984a9207c025feedd768baff6753cd660c857ceb3e36026", size = 5983279, upload-time = "2025-08-13T14:24:05.111Z" }, ] [[package]] @@ -6731,67 +7172,87 @@ wheels = [ [[package]] name = "watchfiles" -version = "1.1.1" +version = "1.1.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "anyio", version = "3.7.1", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.12'" }, { name = "anyio", version = "4.11.0", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.12'" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/c2/c9/8869df9b2a2d6c59d79220a4db37679e74f807c559ffe5265e08b227a210/watchfiles-1.1.1.tar.gz", hash = "sha256:a173cb5c16c4f40ab19cecf48a534c409f7ea983ab8fed0741304a1c0a31b3f2", size = 94440, upload-time = "2025-10-14T15:06:21.08Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/1f/f8/2c5f479fb531ce2f0564eda479faecf253d886b1ab3630a39b7bf7362d46/watchfiles-1.1.1-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:f57b396167a2565a4e8b5e56a5a1c537571733992b226f4f1197d79e94cf0ae5", size = 406529, upload-time = "2025-10-14T15:04:32.899Z" }, - { url = "https://files.pythonhosted.org/packages/fe/cd/f515660b1f32f65df671ddf6f85bfaca621aee177712874dc30a97397977/watchfiles-1.1.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:421e29339983e1bebc281fab40d812742268ad057db4aee8c4d2bce0af43b741", size = 394384, upload-time = "2025-10-14T15:04:33.761Z" }, - { url = "https://files.pythonhosted.org/packages/7b/c3/28b7dc99733eab43fca2d10f55c86e03bd6ab11ca31b802abac26b23d161/watchfiles-1.1.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6e43d39a741e972bab5d8100b5cdacf69db64e34eb19b6e9af162bccf63c5cc6", size = 448789, upload-time = "2025-10-14T15:04:34.679Z" }, - { url = "https://files.pythonhosted.org/packages/4a/24/33e71113b320030011c8e4316ccca04194bf0cbbaeee207f00cbc7d6b9f5/watchfiles-1.1.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f537afb3276d12814082a2e9b242bdcf416c2e8fd9f799a737990a1dbe906e5b", size = 460521, upload-time = "2025-10-14T15:04:35.963Z" }, - { url = 
"https://files.pythonhosted.org/packages/f4/c3/3c9a55f255aa57b91579ae9e98c88704955fa9dac3e5614fb378291155df/watchfiles-1.1.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b2cd9e04277e756a2e2d2543d65d1e2166d6fd4c9b183f8808634fda23f17b14", size = 488722, upload-time = "2025-10-14T15:04:37.091Z" }, - { url = "https://files.pythonhosted.org/packages/49/36/506447b73eb46c120169dc1717fe2eff07c234bb3232a7200b5f5bd816e9/watchfiles-1.1.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5f3f58818dc0b07f7d9aa7fe9eb1037aecb9700e63e1f6acfed13e9fef648f5d", size = 596088, upload-time = "2025-10-14T15:04:38.39Z" }, - { url = "https://files.pythonhosted.org/packages/82/ab/5f39e752a9838ec4d52e9b87c1e80f1ee3ccdbe92e183c15b6577ab9de16/watchfiles-1.1.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9bb9f66367023ae783551042d31b1d7fd422e8289eedd91f26754a66f44d5cff", size = 472923, upload-time = "2025-10-14T15:04:39.666Z" }, - { url = "https://files.pythonhosted.org/packages/af/b9/a419292f05e302dea372fa7e6fda5178a92998411f8581b9830d28fb9edb/watchfiles-1.1.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aebfd0861a83e6c3d1110b78ad54704486555246e542be3e2bb94195eabb2606", size = 456080, upload-time = "2025-10-14T15:04:40.643Z" }, - { url = "https://files.pythonhosted.org/packages/b0/c3/d5932fd62bde1a30c36e10c409dc5d54506726f08cb3e1d8d0ba5e2bc8db/watchfiles-1.1.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:5fac835b4ab3c6487b5dbad78c4b3724e26bcc468e886f8ba8cc4306f68f6701", size = 629432, upload-time = "2025-10-14T15:04:41.789Z" }, - { url = "https://files.pythonhosted.org/packages/f7/77/16bddd9779fafb795f1a94319dc965209c5641db5bf1edbbccace6d1b3c0/watchfiles-1.1.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:399600947b170270e80134ac854e21b3ccdefa11a9529a3decc1327088180f10", size = 623046, upload-time = "2025-10-14T15:04:42.718Z" }, - { url = "https://files.pythonhosted.org/packages/46/ef/f2ecb9a0f342b4bfad13a2787155c6ee7ce792140eac63a34676a2feeef2/watchfiles-1.1.1-cp311-cp311-win32.whl", hash = "sha256:de6da501c883f58ad50db3a32ad397b09ad29865b5f26f64c24d3e3281685849", size = 271473, upload-time = "2025-10-14T15:04:43.624Z" }, - { url = "https://files.pythonhosted.org/packages/94/bc/f42d71125f19731ea435c3948cad148d31a64fccde3867e5ba4edee901f9/watchfiles-1.1.1-cp311-cp311-win_amd64.whl", hash = "sha256:35c53bd62a0b885bf653ebf6b700d1bf05debb78ad9292cf2a942b23513dc4c4", size = 287598, upload-time = "2025-10-14T15:04:44.516Z" }, - { url = "https://files.pythonhosted.org/packages/57/c9/a30f897351f95bbbfb6abcadafbaca711ce1162f4db95fc908c98a9165f3/watchfiles-1.1.1-cp311-cp311-win_arm64.whl", hash = "sha256:57ca5281a8b5e27593cb7d82c2ac927ad88a96ed406aa446f6344e4328208e9e", size = 277210, upload-time = "2025-10-14T15:04:45.883Z" }, - { url = "https://files.pythonhosted.org/packages/74/d5/f039e7e3c639d9b1d09b07ea412a6806d38123f0508e5f9b48a87b0a76cc/watchfiles-1.1.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:8c89f9f2f740a6b7dcc753140dd5e1ab9215966f7a3530d0c0705c83b401bd7d", size = 404745, upload-time = "2025-10-14T15:04:46.731Z" }, - { url = "https://files.pythonhosted.org/packages/a5/96/a881a13aa1349827490dab2d363c8039527060cfcc2c92cc6d13d1b1049e/watchfiles-1.1.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:bd404be08018c37350f0d6e34676bd1e2889990117a2b90070b3007f172d0610", size = 391769, upload-time = "2025-10-14T15:04:48.003Z" }, - { url = 
"https://files.pythonhosted.org/packages/4b/5b/d3b460364aeb8da471c1989238ea0e56bec24b6042a68046adf3d9ddb01c/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8526e8f916bb5b9a0a777c8317c23ce65de259422bba5b31325a6fa6029d33af", size = 449374, upload-time = "2025-10-14T15:04:49.179Z" }, - { url = "https://files.pythonhosted.org/packages/b9/44/5769cb62d4ed055cb17417c0a109a92f007114a4e07f30812a73a4efdb11/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2edc3553362b1c38d9f06242416a5d8e9fe235c204a4072e988ce2e5bb1f69f6", size = 459485, upload-time = "2025-10-14T15:04:50.155Z" }, - { url = "https://files.pythonhosted.org/packages/19/0c/286b6301ded2eccd4ffd0041a1b726afda999926cf720aab63adb68a1e36/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:30f7da3fb3f2844259cba4720c3fc7138eb0f7b659c38f3bfa65084c7fc7abce", size = 488813, upload-time = "2025-10-14T15:04:51.059Z" }, - { url = "https://files.pythonhosted.org/packages/c7/2b/8530ed41112dd4a22f4dcfdb5ccf6a1baad1ff6eed8dc5a5f09e7e8c41c7/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f8979280bdafff686ba5e4d8f97840f929a87ed9cdf133cbbd42f7766774d2aa", size = 594816, upload-time = "2025-10-14T15:04:52.031Z" }, - { url = "https://files.pythonhosted.org/packages/ce/d2/f5f9fb49489f184f18470d4f99f4e862a4b3e9ac2865688eb2099e3d837a/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dcc5c24523771db3a294c77d94771abcfcb82a0e0ee8efd910c37c59ec1b31bb", size = 475186, upload-time = "2025-10-14T15:04:53.064Z" }, - { url = "https://files.pythonhosted.org/packages/cf/68/5707da262a119fb06fbe214d82dd1fe4a6f4af32d2d14de368d0349eb52a/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1db5d7ae38ff20153d542460752ff397fcf5c96090c1230803713cf3147a6803", size = 456812, upload-time = "2025-10-14T15:04:55.174Z" }, - { url = "https://files.pythonhosted.org/packages/66/ab/3cbb8756323e8f9b6f9acb9ef4ec26d42b2109bce830cc1f3468df20511d/watchfiles-1.1.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:28475ddbde92df1874b6c5c8aaeb24ad5be47a11f87cde5a28ef3835932e3e94", size = 630196, upload-time = "2025-10-14T15:04:56.22Z" }, - { url = "https://files.pythonhosted.org/packages/78/46/7152ec29b8335f80167928944a94955015a345440f524d2dfe63fc2f437b/watchfiles-1.1.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:36193ed342f5b9842edd3532729a2ad55c4160ffcfa3700e0d54be496b70dd43", size = 622657, upload-time = "2025-10-14T15:04:57.521Z" }, - { url = "https://files.pythonhosted.org/packages/0a/bf/95895e78dd75efe9a7f31733607f384b42eb5feb54bd2eb6ed57cc2e94f4/watchfiles-1.1.1-cp312-cp312-win32.whl", hash = "sha256:859e43a1951717cc8de7f4c77674a6d389b106361585951d9e69572823f311d9", size = 272042, upload-time = "2025-10-14T15:04:59.046Z" }, - { url = "https://files.pythonhosted.org/packages/87/0a/90eb755f568de2688cb220171c4191df932232c20946966c27a59c400850/watchfiles-1.1.1-cp312-cp312-win_amd64.whl", hash = "sha256:91d4c9a823a8c987cce8fa2690923b069966dabb196dd8d137ea2cede885fde9", size = 288410, upload-time = "2025-10-14T15:05:00.081Z" }, - { url = "https://files.pythonhosted.org/packages/36/76/f322701530586922fbd6723c4f91ace21364924822a8772c549483abed13/watchfiles-1.1.1-cp312-cp312-win_arm64.whl", hash = "sha256:a625815d4a2bdca61953dbba5a39d60164451ef34c88d751f6c368c3ea73d404", size = 278209, upload-time = "2025-10-14T15:05:01.168Z" }, - { 
url = "https://files.pythonhosted.org/packages/bb/f4/f750b29225fe77139f7ae5de89d4949f5a99f934c65a1f1c0b248f26f747/watchfiles-1.1.1-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:130e4876309e8686a5e37dba7d5e9bc77e6ed908266996ca26572437a5271e18", size = 404321, upload-time = "2025-10-14T15:05:02.063Z" }, - { url = "https://files.pythonhosted.org/packages/2b/f9/f07a295cde762644aa4c4bb0f88921d2d141af45e735b965fb2e87858328/watchfiles-1.1.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:5f3bde70f157f84ece3765b42b4a52c6ac1a50334903c6eaf765362f6ccca88a", size = 391783, upload-time = "2025-10-14T15:05:03.052Z" }, - { url = "https://files.pythonhosted.org/packages/bc/11/fc2502457e0bea39a5c958d86d2cb69e407a4d00b85735ca724bfa6e0d1a/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:14e0b1fe858430fc0251737ef3824c54027bedb8c37c38114488b8e131cf8219", size = 449279, upload-time = "2025-10-14T15:05:04.004Z" }, - { url = "https://files.pythonhosted.org/packages/e3/1f/d66bc15ea0b728df3ed96a539c777acfcad0eb78555ad9efcaa1274688f0/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f27db948078f3823a6bb3b465180db8ebecf26dd5dae6f6180bd87383b6b4428", size = 459405, upload-time = "2025-10-14T15:05:04.942Z" }, - { url = "https://files.pythonhosted.org/packages/be/90/9f4a65c0aec3ccf032703e6db02d89a157462fbb2cf20dd415128251cac0/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:059098c3a429f62fc98e8ec62b982230ef2c8df68c79e826e37b895bc359a9c0", size = 488976, upload-time = "2025-10-14T15:05:05.905Z" }, - { url = "https://files.pythonhosted.org/packages/37/57/ee347af605d867f712be7029bb94c8c071732a4b44792e3176fa3c612d39/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:bfb5862016acc9b869bb57284e6cb35fdf8e22fe59f7548858e2f971d045f150", size = 595506, upload-time = "2025-10-14T15:05:06.906Z" }, - { url = "https://files.pythonhosted.org/packages/a8/78/cc5ab0b86c122047f75e8fc471c67a04dee395daf847d3e59381996c8707/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:319b27255aacd9923b8a276bb14d21a5f7ff82564c744235fc5eae58d95422ae", size = 474936, upload-time = "2025-10-14T15:05:07.906Z" }, - { url = "https://files.pythonhosted.org/packages/62/da/def65b170a3815af7bd40a3e7010bf6ab53089ef1b75d05dd5385b87cf08/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c755367e51db90e75b19454b680903631d41f9e3607fbd941d296a020c2d752d", size = 456147, upload-time = "2025-10-14T15:05:09.138Z" }, - { url = "https://files.pythonhosted.org/packages/57/99/da6573ba71166e82d288d4df0839128004c67d2778d3b566c138695f5c0b/watchfiles-1.1.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c22c776292a23bfc7237a98f791b9ad3144b02116ff10d820829ce62dff46d0b", size = 630007, upload-time = "2025-10-14T15:05:10.117Z" }, - { url = "https://files.pythonhosted.org/packages/a8/51/7439c4dd39511368849eb1e53279cd3454b4a4dbace80bab88feeb83c6b5/watchfiles-1.1.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:3a476189be23c3686bc2f4321dd501cb329c0a0469e77b7b534ee10129ae6374", size = 622280, upload-time = "2025-10-14T15:05:11.146Z" }, - { url = "https://files.pythonhosted.org/packages/95/9c/8ed97d4bba5db6fdcdb2b298d3898f2dd5c20f6b73aee04eabe56c59677e/watchfiles-1.1.1-cp313-cp313-win32.whl", hash = "sha256:bf0a91bfb5574a2f7fc223cf95eeea79abfefa404bf1ea5e339c0c1560ae99a0", size = 272056, upload-time = 
"2025-10-14T15:05:12.156Z" }, - { url = "https://files.pythonhosted.org/packages/1f/f3/c14e28429f744a260d8ceae18bf58c1d5fa56b50d006a7a9f80e1882cb0d/watchfiles-1.1.1-cp313-cp313-win_amd64.whl", hash = "sha256:52e06553899e11e8074503c8e716d574adeeb7e68913115c4b3653c53f9bae42", size = 288162, upload-time = "2025-10-14T15:05:13.208Z" }, - { url = "https://files.pythonhosted.org/packages/dc/61/fe0e56c40d5cd29523e398d31153218718c5786b5e636d9ae8ae79453d27/watchfiles-1.1.1-cp313-cp313-win_arm64.whl", hash = "sha256:ac3cc5759570cd02662b15fbcd9d917f7ecd47efe0d6b40474eafd246f91ea18", size = 277909, upload-time = "2025-10-14T15:05:14.49Z" }, - { url = "https://files.pythonhosted.org/packages/79/42/e0a7d749626f1e28c7108a99fb9bf524b501bbbeb9b261ceecde644d5a07/watchfiles-1.1.1-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:563b116874a9a7ce6f96f87cd0b94f7faf92d08d0021e837796f0a14318ef8da", size = 403389, upload-time = "2025-10-14T15:05:15.777Z" }, - { url = "https://files.pythonhosted.org/packages/15/49/08732f90ce0fbbc13913f9f215c689cfc9ced345fb1bcd8829a50007cc8d/watchfiles-1.1.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:3ad9fe1dae4ab4212d8c91e80b832425e24f421703b5a42ef2e4a1e215aff051", size = 389964, upload-time = "2025-10-14T15:05:16.85Z" }, - { url = "https://files.pythonhosted.org/packages/27/0d/7c315d4bd5f2538910491a0393c56bf70d333d51bc5b34bee8e68e8cea19/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ce70f96a46b894b36eba678f153f052967a0d06d5b5a19b336ab0dbbd029f73e", size = 448114, upload-time = "2025-10-14T15:05:17.876Z" }, - { url = "https://files.pythonhosted.org/packages/c3/24/9e096de47a4d11bc4df41e9d1e61776393eac4cb6eb11b3e23315b78b2cc/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:cb467c999c2eff23a6417e58d75e5828716f42ed8289fe6b77a7e5a91036ca70", size = 460264, upload-time = "2025-10-14T15:05:18.962Z" }, - { url = "https://files.pythonhosted.org/packages/cc/0f/e8dea6375f1d3ba5fcb0b3583e2b493e77379834c74fd5a22d66d85d6540/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:836398932192dae4146c8f6f737d74baeac8b70ce14831a239bdb1ca882fc261", size = 487877, upload-time = "2025-10-14T15:05:20.094Z" }, - { url = "https://files.pythonhosted.org/packages/ac/5b/df24cfc6424a12deb41503b64d42fbea6b8cb357ec62ca84a5a3476f654a/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:743185e7372b7bc7c389e1badcc606931a827112fbbd37f14c537320fca08620", size = 595176, upload-time = "2025-10-14T15:05:21.134Z" }, - { url = "https://files.pythonhosted.org/packages/8f/b5/853b6757f7347de4e9b37e8cc3289283fb983cba1ab4d2d7144694871d9c/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:afaeff7696e0ad9f02cbb8f56365ff4686ab205fcf9c4c5b6fdfaaa16549dd04", size = 473577, upload-time = "2025-10-14T15:05:22.306Z" }, - { url = "https://files.pythonhosted.org/packages/e1/f7/0a4467be0a56e80447c8529c9fce5b38eab4f513cb3d9bf82e7392a5696b/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3f7eb7da0eb23aa2ba036d4f616d46906013a68caf61b7fdbe42fc8b25132e77", size = 455425, upload-time = "2025-10-14T15:05:23.348Z" }, - { url = "https://files.pythonhosted.org/packages/8e/e0/82583485ea00137ddf69bc84a2db88bd92ab4a6e3c405e5fb878ead8d0e7/watchfiles-1.1.1-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = "sha256:831a62658609f0e5c64178211c942ace999517f5770fe9436be4c2faeba0c0ef", size = 
628826, upload-time = "2025-10-14T15:05:24.398Z" }, - { url = "https://files.pythonhosted.org/packages/28/9a/a785356fccf9fae84c0cc90570f11702ae9571036fb25932f1242c82191c/watchfiles-1.1.1-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:f9a2ae5c91cecc9edd47e041a930490c31c3afb1f5e6d71de3dc671bfaca02bf", size = 622208, upload-time = "2025-10-14T15:05:25.45Z" }, - { url = "https://files.pythonhosted.org/packages/d3/8e/e500f8b0b77be4ff753ac94dc06b33d8f0d839377fee1b78e8c8d8f031bf/watchfiles-1.1.1-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:db476ab59b6765134de1d4fe96a1a9c96ddf091683599be0f26147ea1b2e4b88", size = 408250, upload-time = "2025-10-14T15:06:10.264Z" }, - { url = "https://files.pythonhosted.org/packages/bd/95/615e72cd27b85b61eec764a5ca51bd94d40b5adea5ff47567d9ebc4d275a/watchfiles-1.1.1-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:89eef07eee5e9d1fda06e38822ad167a044153457e6fd997f8a858ab7564a336", size = 396117, upload-time = "2025-10-14T15:06:11.28Z" }, - { url = "https://files.pythonhosted.org/packages/c9/81/e7fe958ce8a7fb5c73cc9fb07f5aeaf755e6aa72498c57d760af760c91f8/watchfiles-1.1.1-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ce19e06cbda693e9e7686358af9cd6f5d61312ab8b00488bc36f5aabbaf77e24", size = 450493, upload-time = "2025-10-14T15:06:12.321Z" }, - { url = "https://files.pythonhosted.org/packages/6e/d4/ed38dd3b1767193de971e694aa544356e63353c33a85d948166b5ff58b9e/watchfiles-1.1.1-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3e6f39af2eab0118338902798b5aa6664f46ff66bc0280de76fca67a7f262a49", size = 457546, upload-time = "2025-10-14T15:06:13.372Z" }, +sdist = { url = "https://files.pythonhosted.org/packages/2a/9a/d451fcc97d029f5812e898fd30a53fd8c15c7bbd058fd75cfc6beb9bd761/watchfiles-1.1.0.tar.gz", hash = "sha256:693ed7ec72cbfcee399e92c895362b6e66d63dac6b91e2c11ae03d10d503e575", size = 94406, upload-time = "2025-06-15T19:06:59.42Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/8b/78/7401154b78ab484ccaaeef970dc2af0cb88b5ba8a1b415383da444cdd8d3/watchfiles-1.1.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:c9649dfc57cc1f9835551deb17689e8d44666315f2e82d337b9f07bd76ae3aa2", size = 405751, upload-time = "2025-06-15T19:05:07.679Z" }, + { url = "https://files.pythonhosted.org/packages/76/63/e6c3dbc1f78d001589b75e56a288c47723de28c580ad715eb116639152b5/watchfiles-1.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:406520216186b99374cdb58bc48e34bb74535adec160c8459894884c983a149c", size = 397313, upload-time = "2025-06-15T19:05:08.764Z" }, + { url = "https://files.pythonhosted.org/packages/6c/a2/8afa359ff52e99af1632f90cbf359da46184207e893a5f179301b0c8d6df/watchfiles-1.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb45350fd1dc75cd68d3d72c47f5b513cb0578da716df5fba02fff31c69d5f2d", size = 450792, upload-time = "2025-06-15T19:05:09.869Z" }, + { url = "https://files.pythonhosted.org/packages/1d/bf/7446b401667f5c64972a57a0233be1104157fc3abf72c4ef2666c1bd09b2/watchfiles-1.1.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:11ee4444250fcbeb47459a877e5e80ed994ce8e8d20283857fc128be1715dac7", size = 458196, upload-time = "2025-06-15T19:05:11.91Z" }, + { url = "https://files.pythonhosted.org/packages/58/2f/501ddbdfa3fa874ea5597c77eeea3d413579c29af26c1091b08d0c792280/watchfiles-1.1.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = 
"sha256:bda8136e6a80bdea23e5e74e09df0362744d24ffb8cd59c4a95a6ce3d142f79c", size = 484788, upload-time = "2025-06-15T19:05:13.373Z" }, + { url = "https://files.pythonhosted.org/packages/61/1e/9c18eb2eb5c953c96bc0e5f626f0e53cfef4bd19bd50d71d1a049c63a575/watchfiles-1.1.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b915daeb2d8c1f5cee4b970f2e2c988ce6514aace3c9296e58dd64dc9aa5d575", size = 597879, upload-time = "2025-06-15T19:05:14.725Z" }, + { url = "https://files.pythonhosted.org/packages/8b/6c/1467402e5185d89388b4486745af1e0325007af0017c3384cc786fff0542/watchfiles-1.1.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ed8fc66786de8d0376f9f913c09e963c66e90ced9aa11997f93bdb30f7c872a8", size = 477447, upload-time = "2025-06-15T19:05:15.775Z" }, + { url = "https://files.pythonhosted.org/packages/2b/a1/ec0a606bde4853d6c4a578f9391eeb3684a9aea736a8eb217e3e00aa89a1/watchfiles-1.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fe4371595edf78c41ef8ac8df20df3943e13defd0efcb732b2e393b5a8a7a71f", size = 453145, upload-time = "2025-06-15T19:05:17.17Z" }, + { url = "https://files.pythonhosted.org/packages/90/b9/ef6f0c247a6a35d689fc970dc7f6734f9257451aefb30def5d100d6246a5/watchfiles-1.1.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:b7c5f6fe273291f4d414d55b2c80d33c457b8a42677ad14b4b47ff025d0893e4", size = 626539, upload-time = "2025-06-15T19:05:18.557Z" }, + { url = "https://files.pythonhosted.org/packages/34/44/6ffda5537085106ff5aaa762b0d130ac6c75a08015dd1621376f708c94de/watchfiles-1.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:7738027989881e70e3723c75921f1efa45225084228788fc59ea8c6d732eb30d", size = 624472, upload-time = "2025-06-15T19:05:19.588Z" }, + { url = "https://files.pythonhosted.org/packages/c3/e3/71170985c48028fa3f0a50946916a14055e741db11c2e7bc2f3b61f4d0e3/watchfiles-1.1.0-cp311-cp311-win32.whl", hash = "sha256:622d6b2c06be19f6e89b1d951485a232e3b59618def88dbeda575ed8f0d8dbf2", size = 279348, upload-time = "2025-06-15T19:05:20.856Z" }, + { url = "https://files.pythonhosted.org/packages/89/1b/3e39c68b68a7a171070f81fc2561d23ce8d6859659406842a0e4bebf3bba/watchfiles-1.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:48aa25e5992b61debc908a61ab4d3f216b64f44fdaa71eb082d8b2de846b7d12", size = 292607, upload-time = "2025-06-15T19:05:21.937Z" }, + { url = "https://files.pythonhosted.org/packages/61/9f/2973b7539f2bdb6ea86d2c87f70f615a71a1fc2dba2911795cea25968aea/watchfiles-1.1.0-cp311-cp311-win_arm64.whl", hash = "sha256:00645eb79a3faa70d9cb15c8d4187bb72970b2470e938670240c7998dad9f13a", size = 285056, upload-time = "2025-06-15T19:05:23.12Z" }, + { url = "https://files.pythonhosted.org/packages/f6/b8/858957045a38a4079203a33aaa7d23ea9269ca7761c8a074af3524fbb240/watchfiles-1.1.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:9dc001c3e10de4725c749d4c2f2bdc6ae24de5a88a339c4bce32300a31ede179", size = 402339, upload-time = "2025-06-15T19:05:24.516Z" }, + { url = "https://files.pythonhosted.org/packages/80/28/98b222cca751ba68e88521fabd79a4fab64005fc5976ea49b53fa205d1fa/watchfiles-1.1.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:d9ba68ec283153dead62cbe81872d28e053745f12335d037de9cbd14bd1877f5", size = 394409, upload-time = "2025-06-15T19:05:25.469Z" }, + { url = "https://files.pythonhosted.org/packages/86/50/dee79968566c03190677c26f7f47960aff738d32087087bdf63a5473e7df/watchfiles-1.1.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:130fc497b8ee68dce163e4254d9b0356411d1490e868bd8790028bc46c5cc297", size = 450939, upload-time = "2025-06-15T19:05:26.494Z" }, + { url = "https://files.pythonhosted.org/packages/40/45/a7b56fb129700f3cfe2594a01aa38d033b92a33dddce86c8dfdfc1247b72/watchfiles-1.1.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:50a51a90610d0845a5931a780d8e51d7bd7f309ebc25132ba975aca016b576a0", size = 457270, upload-time = "2025-06-15T19:05:27.466Z" }, + { url = "https://files.pythonhosted.org/packages/b5/c8/fa5ef9476b1d02dc6b5e258f515fcaaecf559037edf8b6feffcbc097c4b8/watchfiles-1.1.0-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dc44678a72ac0910bac46fa6a0de6af9ba1355669b3dfaf1ce5f05ca7a74364e", size = 483370, upload-time = "2025-06-15T19:05:28.548Z" }, + { url = "https://files.pythonhosted.org/packages/98/68/42cfcdd6533ec94f0a7aab83f759ec11280f70b11bfba0b0f885e298f9bd/watchfiles-1.1.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a543492513a93b001975ae283a51f4b67973662a375a403ae82f420d2c7205ee", size = 598654, upload-time = "2025-06-15T19:05:29.997Z" }, + { url = "https://files.pythonhosted.org/packages/d3/74/b2a1544224118cc28df7e59008a929e711f9c68ce7d554e171b2dc531352/watchfiles-1.1.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8ac164e20d17cc285f2b94dc31c384bc3aa3dd5e7490473b3db043dd70fbccfd", size = 478667, upload-time = "2025-06-15T19:05:31.172Z" }, + { url = "https://files.pythonhosted.org/packages/8c/77/e3362fe308358dc9f8588102481e599c83e1b91c2ae843780a7ded939a35/watchfiles-1.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f7590d5a455321e53857892ab8879dce62d1f4b04748769f5adf2e707afb9d4f", size = 452213, upload-time = "2025-06-15T19:05:32.299Z" }, + { url = "https://files.pythonhosted.org/packages/6e/17/c8f1a36540c9a1558d4faf08e909399e8133599fa359bf52ec8fcee5be6f/watchfiles-1.1.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:37d3d3f7defb13f62ece99e9be912afe9dd8a0077b7c45ee5a57c74811d581a4", size = 626718, upload-time = "2025-06-15T19:05:33.415Z" }, + { url = "https://files.pythonhosted.org/packages/26/45/fb599be38b4bd38032643783d7496a26a6f9ae05dea1a42e58229a20ac13/watchfiles-1.1.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:7080c4bb3efd70a07b1cc2df99a7aa51d98685be56be6038c3169199d0a1c69f", size = 623098, upload-time = "2025-06-15T19:05:34.534Z" }, + { url = "https://files.pythonhosted.org/packages/a1/e7/fdf40e038475498e160cd167333c946e45d8563ae4dd65caf757e9ffe6b4/watchfiles-1.1.0-cp312-cp312-win32.whl", hash = "sha256:cbcf8630ef4afb05dc30107bfa17f16c0896bb30ee48fc24bf64c1f970f3b1fd", size = 279209, upload-time = "2025-06-15T19:05:35.577Z" }, + { url = "https://files.pythonhosted.org/packages/3f/d3/3ae9d5124ec75143bdf088d436cba39812122edc47709cd2caafeac3266f/watchfiles-1.1.0-cp312-cp312-win_amd64.whl", hash = "sha256:cbd949bdd87567b0ad183d7676feb98136cde5bb9025403794a4c0db28ed3a47", size = 292786, upload-time = "2025-06-15T19:05:36.559Z" }, + { url = "https://files.pythonhosted.org/packages/26/2f/7dd4fc8b5f2b34b545e19629b4a018bfb1de23b3a496766a2c1165ca890d/watchfiles-1.1.0-cp312-cp312-win_arm64.whl", hash = "sha256:0a7d40b77f07be87c6faa93d0951a0fcd8cbca1ddff60a1b65d741bac6f3a9f6", size = 284343, upload-time = "2025-06-15T19:05:37.5Z" }, + { url = "https://files.pythonhosted.org/packages/d3/42/fae874df96595556a9089ade83be34a2e04f0f11eb53a8dbf8a8a5e562b4/watchfiles-1.1.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = 
"sha256:5007f860c7f1f8df471e4e04aaa8c43673429047d63205d1630880f7637bca30", size = 402004, upload-time = "2025-06-15T19:05:38.499Z" }, + { url = "https://files.pythonhosted.org/packages/fa/55/a77e533e59c3003d9803c09c44c3651224067cbe7fb5d574ddbaa31e11ca/watchfiles-1.1.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:20ecc8abbd957046f1fe9562757903f5eaf57c3bce70929fda6c7711bb58074a", size = 393671, upload-time = "2025-06-15T19:05:39.52Z" }, + { url = "https://files.pythonhosted.org/packages/05/68/b0afb3f79c8e832e6571022611adbdc36e35a44e14f129ba09709aa4bb7a/watchfiles-1.1.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f2f0498b7d2a3c072766dba3274fe22a183dbea1f99d188f1c6c72209a1063dc", size = 449772, upload-time = "2025-06-15T19:05:40.897Z" }, + { url = "https://files.pythonhosted.org/packages/ff/05/46dd1f6879bc40e1e74c6c39a1b9ab9e790bf1f5a2fe6c08b463d9a807f4/watchfiles-1.1.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:239736577e848678e13b201bba14e89718f5c2133dfd6b1f7846fa1b58a8532b", size = 456789, upload-time = "2025-06-15T19:05:42.045Z" }, + { url = "https://files.pythonhosted.org/packages/8b/ca/0eeb2c06227ca7f12e50a47a3679df0cd1ba487ea19cf844a905920f8e95/watchfiles-1.1.0-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eff4b8d89f444f7e49136dc695599a591ff769300734446c0a86cba2eb2f9895", size = 482551, upload-time = "2025-06-15T19:05:43.781Z" }, + { url = "https://files.pythonhosted.org/packages/31/47/2cecbd8694095647406645f822781008cc524320466ea393f55fe70eed3b/watchfiles-1.1.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:12b0a02a91762c08f7264e2e79542f76870c3040bbc847fb67410ab81474932a", size = 597420, upload-time = "2025-06-15T19:05:45.244Z" }, + { url = "https://files.pythonhosted.org/packages/d9/7e/82abc4240e0806846548559d70f0b1a6dfdca75c1b4f9fa62b504ae9b083/watchfiles-1.1.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:29e7bc2eee15cbb339c68445959108803dc14ee0c7b4eea556400131a8de462b", size = 477950, upload-time = "2025-06-15T19:05:46.332Z" }, + { url = "https://files.pythonhosted.org/packages/25/0d/4d564798a49bf5482a4fa9416dea6b6c0733a3b5700cb8a5a503c4b15853/watchfiles-1.1.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9481174d3ed982e269c090f780122fb59cee6c3796f74efe74e70f7780ed94c", size = 451706, upload-time = "2025-06-15T19:05:47.459Z" }, + { url = "https://files.pythonhosted.org/packages/81/b5/5516cf46b033192d544102ea07c65b6f770f10ed1d0a6d388f5d3874f6e4/watchfiles-1.1.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:80f811146831c8c86ab17b640801c25dc0a88c630e855e2bef3568f30434d52b", size = 625814, upload-time = "2025-06-15T19:05:48.654Z" }, + { url = "https://files.pythonhosted.org/packages/0c/dd/7c1331f902f30669ac3e754680b6edb9a0dd06dea5438e61128111fadd2c/watchfiles-1.1.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:60022527e71d1d1fda67a33150ee42869042bce3d0fcc9cc49be009a9cded3fb", size = 622820, upload-time = "2025-06-15T19:05:50.088Z" }, + { url = "https://files.pythonhosted.org/packages/1b/14/36d7a8e27cd128d7b1009e7715a7c02f6c131be9d4ce1e5c3b73d0e342d8/watchfiles-1.1.0-cp313-cp313-win32.whl", hash = "sha256:32d6d4e583593cb8576e129879ea0991660b935177c0f93c6681359b3654bfa9", size = 279194, upload-time = "2025-06-15T19:05:51.186Z" }, + { url = 
"https://files.pythonhosted.org/packages/25/41/2dd88054b849aa546dbeef5696019c58f8e0774f4d1c42123273304cdb2e/watchfiles-1.1.0-cp313-cp313-win_amd64.whl", hash = "sha256:f21af781a4a6fbad54f03c598ab620e3a77032c5878f3d780448421a6e1818c7", size = 292349, upload-time = "2025-06-15T19:05:52.201Z" }, + { url = "https://files.pythonhosted.org/packages/c8/cf/421d659de88285eb13941cf11a81f875c176f76a6d99342599be88e08d03/watchfiles-1.1.0-cp313-cp313-win_arm64.whl", hash = "sha256:5366164391873ed76bfdf618818c82084c9db7fac82b64a20c44d335eec9ced5", size = 283836, upload-time = "2025-06-15T19:05:53.265Z" }, + { url = "https://files.pythonhosted.org/packages/45/10/6faf6858d527e3599cc50ec9fcae73590fbddc1420bd4fdccfebffeedbc6/watchfiles-1.1.0-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:17ab167cca6339c2b830b744eaf10803d2a5b6683be4d79d8475d88b4a8a4be1", size = 400343, upload-time = "2025-06-15T19:05:54.252Z" }, + { url = "https://files.pythonhosted.org/packages/03/20/5cb7d3966f5e8c718006d0e97dfe379a82f16fecd3caa7810f634412047a/watchfiles-1.1.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:328dbc9bff7205c215a7807da7c18dce37da7da718e798356212d22696404339", size = 392916, upload-time = "2025-06-15T19:05:55.264Z" }, + { url = "https://files.pythonhosted.org/packages/8c/07/d8f1176328fa9e9581b6f120b017e286d2a2d22ae3f554efd9515c8e1b49/watchfiles-1.1.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f7208ab6e009c627b7557ce55c465c98967e8caa8b11833531fdf95799372633", size = 449582, upload-time = "2025-06-15T19:05:56.317Z" }, + { url = "https://files.pythonhosted.org/packages/66/e8/80a14a453cf6038e81d072a86c05276692a1826471fef91df7537dba8b46/watchfiles-1.1.0-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a8f6f72974a19efead54195bc9bed4d850fc047bb7aa971268fd9a8387c89011", size = 456752, upload-time = "2025-06-15T19:05:57.359Z" }, + { url = "https://files.pythonhosted.org/packages/5a/25/0853b3fe0e3c2f5af9ea60eb2e781eade939760239a72c2d38fc4cc335f6/watchfiles-1.1.0-cp313-cp313t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d181ef50923c29cf0450c3cd47e2f0557b62218c50b2ab8ce2ecaa02bd97e670", size = 481436, upload-time = "2025-06-15T19:05:58.447Z" }, + { url = "https://files.pythonhosted.org/packages/fe/9e/4af0056c258b861fbb29dcb36258de1e2b857be4a9509e6298abcf31e5c9/watchfiles-1.1.0-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:adb4167043d3a78280d5d05ce0ba22055c266cf8655ce942f2fb881262ff3cdf", size = 596016, upload-time = "2025-06-15T19:05:59.59Z" }, + { url = "https://files.pythonhosted.org/packages/c5/fa/95d604b58aa375e781daf350897aaaa089cff59d84147e9ccff2447c8294/watchfiles-1.1.0-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8c5701dc474b041e2934a26d31d39f90fac8a3dee2322b39f7729867f932b1d4", size = 476727, upload-time = "2025-06-15T19:06:01.086Z" }, + { url = "https://files.pythonhosted.org/packages/65/95/fe479b2664f19be4cf5ceeb21be05afd491d95f142e72d26a42f41b7c4f8/watchfiles-1.1.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b067915e3c3936966a8607f6fe5487df0c9c4afb85226613b520890049deea20", size = 451864, upload-time = "2025-06-15T19:06:02.144Z" }, + { url = "https://files.pythonhosted.org/packages/d3/8a/3c4af14b93a15ce55901cd7a92e1a4701910f1768c78fb30f61d2b79785b/watchfiles-1.1.0-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = "sha256:9c733cda03b6d636b4219625a4acb5c6ffb10803338e437fb614fef9516825ef", size = 625626, upload-time = 
"2025-06-15T19:06:03.578Z" }, + { url = "https://files.pythonhosted.org/packages/da/f5/cf6aa047d4d9e128f4b7cde615236a915673775ef171ff85971d698f3c2c/watchfiles-1.1.0-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:cc08ef8b90d78bfac66f0def80240b0197008e4852c9f285907377b2947ffdcb", size = 622744, upload-time = "2025-06-15T19:06:05.066Z" }, + { url = "https://files.pythonhosted.org/packages/2c/00/70f75c47f05dea6fd30df90f047765f6fc2d6eb8b5a3921379b0b04defa2/watchfiles-1.1.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:9974d2f7dc561cce3bb88dfa8eb309dab64c729de85fba32e98d75cf24b66297", size = 402114, upload-time = "2025-06-15T19:06:06.186Z" }, + { url = "https://files.pythonhosted.org/packages/53/03/acd69c48db4a1ed1de26b349d94077cca2238ff98fd64393f3e97484cae6/watchfiles-1.1.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c68e9f1fcb4d43798ad8814c4c1b61547b014b667216cb754e606bfade587018", size = 393879, upload-time = "2025-06-15T19:06:07.369Z" }, + { url = "https://files.pythonhosted.org/packages/2f/c8/a9a2a6f9c8baa4eceae5887fecd421e1b7ce86802bcfc8b6a942e2add834/watchfiles-1.1.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:95ab1594377effac17110e1352989bdd7bdfca9ff0e5eeccd8c69c5389b826d0", size = 450026, upload-time = "2025-06-15T19:06:08.476Z" }, + { url = "https://files.pythonhosted.org/packages/fe/51/d572260d98388e6e2b967425c985e07d47ee6f62e6455cefb46a6e06eda5/watchfiles-1.1.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fba9b62da882c1be1280a7584ec4515d0a6006a94d6e5819730ec2eab60ffe12", size = 457917, upload-time = "2025-06-15T19:06:09.988Z" }, + { url = "https://files.pythonhosted.org/packages/c6/2d/4258e52917bf9f12909b6ec314ff9636276f3542f9d3807d143f27309104/watchfiles-1.1.0-cp314-cp314-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3434e401f3ce0ed6b42569128b3d1e3af773d7ec18751b918b89cd49c14eaafb", size = 483602, upload-time = "2025-06-15T19:06:11.088Z" }, + { url = "https://files.pythonhosted.org/packages/84/99/bee17a5f341a4345fe7b7972a475809af9e528deba056f8963d61ea49f75/watchfiles-1.1.0-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fa257a4d0d21fcbca5b5fcba9dca5a78011cb93c0323fb8855c6d2dfbc76eb77", size = 596758, upload-time = "2025-06-15T19:06:12.197Z" }, + { url = "https://files.pythonhosted.org/packages/40/76/e4bec1d59b25b89d2b0716b41b461ed655a9a53c60dc78ad5771fda5b3e6/watchfiles-1.1.0-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7fd1b3879a578a8ec2076c7961076df540b9af317123f84569f5a9ddee64ce92", size = 477601, upload-time = "2025-06-15T19:06:13.391Z" }, + { url = "https://files.pythonhosted.org/packages/1f/fa/a514292956f4a9ce3c567ec0c13cce427c158e9f272062685a8a727d08fc/watchfiles-1.1.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:62cc7a30eeb0e20ecc5f4bd113cd69dcdb745a07c68c0370cea919f373f65d9e", size = 451936, upload-time = "2025-06-15T19:06:14.656Z" }, + { url = "https://files.pythonhosted.org/packages/32/5d/c3bf927ec3bbeb4566984eba8dd7a8eb69569400f5509904545576741f88/watchfiles-1.1.0-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:891c69e027748b4a73847335d208e374ce54ca3c335907d381fde4e41661b13b", size = 626243, upload-time = "2025-06-15T19:06:16.232Z" }, + { url = "https://files.pythonhosted.org/packages/e6/65/6e12c042f1a68c556802a84d54bb06d35577c81e29fba14019562479159c/watchfiles-1.1.0-cp314-cp314-musllinux_1_1_x86_64.whl", hash = 
"sha256:12fe8eaffaf0faa7906895b4f8bb88264035b3f0243275e0bf24af0436b27259", size = 623073, upload-time = "2025-06-15T19:06:17.457Z" }, + { url = "https://files.pythonhosted.org/packages/89/ab/7f79d9bf57329e7cbb0a6fd4c7bd7d0cee1e4a8ef0041459f5409da3506c/watchfiles-1.1.0-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:bfe3c517c283e484843cb2e357dd57ba009cff351edf45fb455b5fbd1f45b15f", size = 400872, upload-time = "2025-06-15T19:06:18.57Z" }, + { url = "https://files.pythonhosted.org/packages/df/d5/3f7bf9912798e9e6c516094db6b8932df53b223660c781ee37607030b6d3/watchfiles-1.1.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:a9ccbf1f129480ed3044f540c0fdbc4ee556f7175e5ab40fe077ff6baf286d4e", size = 392877, upload-time = "2025-06-15T19:06:19.55Z" }, + { url = "https://files.pythonhosted.org/packages/0d/c5/54ec7601a2798604e01c75294770dbee8150e81c6e471445d7601610b495/watchfiles-1.1.0-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba0e3255b0396cac3cc7bbace76404dd72b5438bf0d8e7cefa2f79a7f3649caa", size = 449645, upload-time = "2025-06-15T19:06:20.66Z" }, + { url = "https://files.pythonhosted.org/packages/0a/04/c2f44afc3b2fce21ca0b7802cbd37ed90a29874f96069ed30a36dfe57c2b/watchfiles-1.1.0-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:4281cd9fce9fc0a9dbf0fc1217f39bf9cf2b4d315d9626ef1d4e87b84699e7e8", size = 457424, upload-time = "2025-06-15T19:06:21.712Z" }, + { url = "https://files.pythonhosted.org/packages/9f/b0/eec32cb6c14d248095261a04f290636da3df3119d4040ef91a4a50b29fa5/watchfiles-1.1.0-cp314-cp314t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6d2404af8db1329f9a3c9b79ff63e0ae7131986446901582067d9304ae8aaf7f", size = 481584, upload-time = "2025-06-15T19:06:22.777Z" }, + { url = "https://files.pythonhosted.org/packages/d1/e2/ca4bb71c68a937d7145aa25709e4f5d68eb7698a25ce266e84b55d591bbd/watchfiles-1.1.0-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e78b6ed8165996013165eeabd875c5dfc19d41b54f94b40e9fff0eb3193e5e8e", size = 596675, upload-time = "2025-06-15T19:06:24.226Z" }, + { url = "https://files.pythonhosted.org/packages/a1/dd/b0e4b7fb5acf783816bc950180a6cd7c6c1d2cf7e9372c0ea634e722712b/watchfiles-1.1.0-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:249590eb75ccc117f488e2fabd1bfa33c580e24b96f00658ad88e38844a040bb", size = 477363, upload-time = "2025-06-15T19:06:25.42Z" }, + { url = "https://files.pythonhosted.org/packages/69/c4/088825b75489cb5b6a761a4542645718893d395d8c530b38734f19da44d2/watchfiles-1.1.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d05686b5487cfa2e2c28ff1aa370ea3e6c5accfe6435944ddea1e10d93872147", size = 452240, upload-time = "2025-06-15T19:06:26.552Z" }, + { url = "https://files.pythonhosted.org/packages/10/8c/22b074814970eeef43b7c44df98c3e9667c1f7bf5b83e0ff0201b0bd43f9/watchfiles-1.1.0-cp314-cp314t-musllinux_1_1_aarch64.whl", hash = "sha256:d0e10e6f8f6dc5762adee7dece33b722282e1f59aa6a55da5d493a97282fedd8", size = 625607, upload-time = "2025-06-15T19:06:27.606Z" }, + { url = "https://files.pythonhosted.org/packages/32/fa/a4f5c2046385492b2273213ef815bf71a0d4c1943b784fb904e184e30201/watchfiles-1.1.0-cp314-cp314t-musllinux_1_1_x86_64.whl", hash = "sha256:af06c863f152005c7592df1d6a7009c836a247c9d8adb78fef8575a5a98699db", size = 623315, upload-time = "2025-06-15T19:06:29.076Z" }, + { url = 
"https://files.pythonhosted.org/packages/8c/6b/686dcf5d3525ad17b384fd94708e95193529b460a1b7bf40851f1328ec6e/watchfiles-1.1.0-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:0ece16b563b17ab26eaa2d52230c9a7ae46cf01759621f4fbbca280e438267b3", size = 406910, upload-time = "2025-06-15T19:06:49.335Z" }, + { url = "https://files.pythonhosted.org/packages/f3/d3/71c2dcf81dc1edcf8af9f4d8d63b1316fb0a2dd90cbfd427e8d9dd584a90/watchfiles-1.1.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:51b81e55d40c4b4aa8658427a3ee7ea847c591ae9e8b81ef94a90b668999353c", size = 398816, upload-time = "2025-06-15T19:06:50.433Z" }, + { url = "https://files.pythonhosted.org/packages/b8/fa/12269467b2fc006f8fce4cd6c3acfa77491dd0777d2a747415f28ccc8c60/watchfiles-1.1.0-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f2bcdc54ea267fe72bfc7d83c041e4eb58d7d8dc6f578dfddb52f037ce62f432", size = 451584, upload-time = "2025-06-15T19:06:51.834Z" }, + { url = "https://files.pythonhosted.org/packages/bd/d3/254cea30f918f489db09d6a8435a7de7047f8cb68584477a515f160541d6/watchfiles-1.1.0-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:923fec6e5461c42bd7e3fd5ec37492c6f3468be0499bc0707b4bbbc16ac21792", size = 454009, upload-time = "2025-06-15T19:06:52.896Z" }, ] [[package]] @@ -6835,11 +7296,11 @@ wheels = [ [[package]] name = "websocket-client" -version = "1.9.0" +version = "1.8.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/2c/41/aa4bf9664e4cda14c3b39865b12251e8e7d239f4cd0e3cc1b6c2ccde25c1/websocket_client-1.9.0.tar.gz", hash = "sha256:9e813624b6eb619999a97dc7958469217c3176312b3a16a4bd1bc7e08a46ec98", size = 70576, upload-time = "2025-10-07T21:16:36.495Z" } +sdist = { url = "https://files.pythonhosted.org/packages/e6/30/fba0d96b4b5fbf5948ed3f4681f7da2f9f64512e1d303f94b4cc174c24a5/websocket_client-1.8.0.tar.gz", hash = "sha256:3239df9f44da632f96012472805d40a23281a991027ce11d2f45a6f24ac4c3da", size = 54648, upload-time = "2024-04-23T22:16:16.976Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/34/db/b10e48aa8fff7407e67470363eac595018441cf32d5e1001567a7aeba5d2/websocket_client-1.9.0-py3-none-any.whl", hash = "sha256:af248a825037ef591efbf6ed20cc5faa03d3b47b9e5a2230a529eeee1c1fc3ef", size = 82616, upload-time = "2025-10-07T21:16:34.951Z" }, + { url = "https://files.pythonhosted.org/packages/5a/84/44687a29792a70e111c5c477230a72c4b957d88d16141199bf9acb7537a3/websocket_client-1.8.0-py3-none-any.whl", hash = "sha256:17b44cc997f5c498e809b22cdf2d9c7a9e71c02c8cc2b6c56e7c2d1239bfa526", size = 58826, upload-time = "2024-04-23T22:16:14.422Z" }, ] [[package]] @@ -6929,6 +7390,26 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/e8/cf/7d848740203c7b4b27eb55dbfede11aca974a51c3d894f6cc4b865f42f58/wrapt-1.17.3-cp313-cp313-win32.whl", hash = "sha256:53e5e39ff71b3fc484df8a522c933ea2b7cdd0d5d15ae82e5b23fde87d44cbd8", size = 36711, upload-time = "2025-08-12T05:53:10.074Z" }, { url = "https://files.pythonhosted.org/packages/57/54/35a84d0a4d23ea675994104e667ceff49227ce473ba6a59ba2c84f250b74/wrapt-1.17.3-cp313-cp313-win_amd64.whl", hash = "sha256:1f0b2f40cf341ee8cc1a97d51ff50dddb9fcc73241b9143ec74b30fc4f44f6cb", size = 38885, upload-time = "2025-08-12T05:53:08.695Z" }, { url = "https://files.pythonhosted.org/packages/01/77/66e54407c59d7b02a3c4e0af3783168fff8e5d61def52cda8728439d86bc/wrapt-1.17.3-cp313-cp313-win_arm64.whl", hash = 
"sha256:7425ac3c54430f5fc5e7b6f41d41e704db073309acfc09305816bc6a0b26bb16", size = 36896, upload-time = "2025-08-12T05:52:55.34Z" }, + { url = "https://files.pythonhosted.org/packages/02/a2/cd864b2a14f20d14f4c496fab97802001560f9f41554eef6df201cd7f76c/wrapt-1.17.3-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:cf30f6e3c077c8e6a9a7809c94551203c8843e74ba0c960f4a98cd80d4665d39", size = 54132, upload-time = "2025-08-12T05:51:49.864Z" }, + { url = "https://files.pythonhosted.org/packages/d5/46/d011725b0c89e853dc44cceb738a307cde5d240d023d6d40a82d1b4e1182/wrapt-1.17.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:e228514a06843cae89621384cfe3a80418f3c04aadf8a3b14e46a7be704e4235", size = 39091, upload-time = "2025-08-12T05:51:38.935Z" }, + { url = "https://files.pythonhosted.org/packages/2e/9e/3ad852d77c35aae7ddebdbc3b6d35ec8013af7d7dddad0ad911f3d891dae/wrapt-1.17.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:5ea5eb3c0c071862997d6f3e02af1d055f381b1d25b286b9d6644b79db77657c", size = 39172, upload-time = "2025-08-12T05:51:59.365Z" }, + { url = "https://files.pythonhosted.org/packages/c3/f7/c983d2762bcce2326c317c26a6a1e7016f7eb039c27cdf5c4e30f4160f31/wrapt-1.17.3-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:281262213373b6d5e4bb4353bc36d1ba4084e6d6b5d242863721ef2bf2c2930b", size = 87163, upload-time = "2025-08-12T05:52:40.965Z" }, + { url = "https://files.pythonhosted.org/packages/e4/0f/f673f75d489c7f22d17fe0193e84b41540d962f75fce579cf6873167c29b/wrapt-1.17.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:dc4a8d2b25efb6681ecacad42fca8859f88092d8732b170de6a5dddd80a1c8fa", size = 87963, upload-time = "2025-08-12T05:52:20.326Z" }, + { url = "https://files.pythonhosted.org/packages/df/61/515ad6caca68995da2fac7a6af97faab8f78ebe3bf4f761e1b77efbc47b5/wrapt-1.17.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:373342dd05b1d07d752cecbec0c41817231f29f3a89aa8b8843f7b95992ed0c7", size = 86945, upload-time = "2025-08-12T05:52:21.581Z" }, + { url = "https://files.pythonhosted.org/packages/d3/bd/4e70162ce398462a467bc09e768bee112f1412e563620adc353de9055d33/wrapt-1.17.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:d40770d7c0fd5cbed9d84b2c3f2e156431a12c9a37dc6284060fb4bec0b7ffd4", size = 86857, upload-time = "2025-08-12T05:52:43.043Z" }, + { url = "https://files.pythonhosted.org/packages/2b/b8/da8560695e9284810b8d3df8a19396a6e40e7518059584a1a394a2b35e0a/wrapt-1.17.3-cp314-cp314-win32.whl", hash = "sha256:fbd3c8319de8e1dc79d346929cd71d523622da527cca14e0c1d257e31c2b8b10", size = 37178, upload-time = "2025-08-12T05:53:12.605Z" }, + { url = "https://files.pythonhosted.org/packages/db/c8/b71eeb192c440d67a5a0449aaee2310a1a1e8eca41676046f99ed2487e9f/wrapt-1.17.3-cp314-cp314-win_amd64.whl", hash = "sha256:e1a4120ae5705f673727d3253de3ed0e016f7cd78dc463db1b31e2463e1f3cf6", size = 39310, upload-time = "2025-08-12T05:53:11.106Z" }, + { url = "https://files.pythonhosted.org/packages/45/20/2cda20fd4865fa40f86f6c46ed37a2a8356a7a2fde0773269311f2af56c7/wrapt-1.17.3-cp314-cp314-win_arm64.whl", hash = "sha256:507553480670cab08a800b9463bdb881b2edeed77dc677b0a5915e6106e91a58", size = 37266, upload-time = "2025-08-12T05:52:56.531Z" }, + { url = "https://files.pythonhosted.org/packages/77/ed/dd5cf21aec36c80443c6f900449260b80e2a65cf963668eaef3b9accce36/wrapt-1.17.3-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:ed7c635ae45cfbc1a7371f708727bf74690daedc49b4dba310590ca0bd28aa8a", size = 56544, 
upload-time = "2025-08-12T05:51:51.109Z" }, + { url = "https://files.pythonhosted.org/packages/8d/96/450c651cc753877ad100c7949ab4d2e2ecc4d97157e00fa8f45df682456a/wrapt-1.17.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:249f88ed15503f6492a71f01442abddd73856a0032ae860de6d75ca62eed8067", size = 40283, upload-time = "2025-08-12T05:51:39.912Z" }, + { url = "https://files.pythonhosted.org/packages/d1/86/2fcad95994d9b572db57632acb6f900695a648c3e063f2cd344b3f5c5a37/wrapt-1.17.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:5a03a38adec8066d5a37bea22f2ba6bbf39fcdefbe2d91419ab864c3fb515454", size = 40366, upload-time = "2025-08-12T05:52:00.693Z" }, + { url = "https://files.pythonhosted.org/packages/64/0e/f4472f2fdde2d4617975144311f8800ef73677a159be7fe61fa50997d6c0/wrapt-1.17.3-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:5d4478d72eb61c36e5b446e375bbc49ed002430d17cdec3cecb36993398e1a9e", size = 108571, upload-time = "2025-08-12T05:52:44.521Z" }, + { url = "https://files.pythonhosted.org/packages/cc/01/9b85a99996b0a97c8a17484684f206cbb6ba73c1ce6890ac668bcf3838fb/wrapt-1.17.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:223db574bb38637e8230eb14b185565023ab624474df94d2af18f1cdb625216f", size = 113094, upload-time = "2025-08-12T05:52:22.618Z" }, + { url = "https://files.pythonhosted.org/packages/25/02/78926c1efddcc7b3aa0bc3d6b33a822f7d898059f7cd9ace8c8318e559ef/wrapt-1.17.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:e405adefb53a435f01efa7ccdec012c016b5a1d3f35459990afc39b6be4d5056", size = 110659, upload-time = "2025-08-12T05:52:24.057Z" }, + { url = "https://files.pythonhosted.org/packages/dc/ee/c414501ad518ac3e6fe184753632fe5e5ecacdcf0effc23f31c1e4f7bfcf/wrapt-1.17.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:88547535b787a6c9ce4086917b6e1d291aa8ed914fdd3a838b3539dc95c12804", size = 106946, upload-time = "2025-08-12T05:52:45.976Z" }, + { url = "https://files.pythonhosted.org/packages/be/44/a1bd64b723d13bb151d6cc91b986146a1952385e0392a78567e12149c7b4/wrapt-1.17.3-cp314-cp314t-win32.whl", hash = "sha256:41b1d2bc74c2cac6f9074df52b2efbef2b30bdfe5f40cb78f8ca22963bc62977", size = 38717, upload-time = "2025-08-12T05:53:15.214Z" }, + { url = "https://files.pythonhosted.org/packages/79/d9/7cfd5a312760ac4dd8bf0184a6ee9e43c33e47f3dadc303032ce012b8fa3/wrapt-1.17.3-cp314-cp314t-win_amd64.whl", hash = "sha256:73d496de46cd2cdbdbcce4ae4bcdb4afb6a11234a1df9c085249d55166b95116", size = 41334, upload-time = "2025-08-12T05:53:14.178Z" }, + { url = "https://files.pythonhosted.org/packages/46/78/10ad9781128ed2f99dbc474f43283b13fea8ba58723e98844367531c18e9/wrapt-1.17.3-cp314-cp314t-win_arm64.whl", hash = "sha256:f38e60678850c42461d4202739f9bf1e3a737c7ad283638251e79cc49effb6b6", size = 38471, upload-time = "2025-08-12T05:52:57.784Z" }, { url = "https://files.pythonhosted.org/packages/1f/f6/a933bd70f98e9cf3e08167fc5cd7aaaca49147e48411c0bd5ae701bb2194/wrapt-1.17.3-py3-none-any.whl", hash = "sha256:7171ae35d2c33d326ac19dd8facb1e82e5fd04ef8c6c0e394d7af55a55051c22", size = 23591, upload-time = "2025-08-12T05:53:20.674Z" }, ] @@ -6964,80 +7445,84 @@ wheels = [ [[package]] name = "yarl" -version = "1.22.0" +version = "1.20.1" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "idna" }, { name = "multidict" }, { name = "propcache" }, ] -sdist = { url = 
"https://files.pythonhosted.org/packages/57/63/0c6ebca57330cd313f6102b16dd57ffaf3ec4c83403dcb45dbd15c6f3ea1/yarl-1.22.0.tar.gz", hash = "sha256:bebf8557577d4401ba8bd9ff33906f1376c877aa78d1fe216ad01b4d6745af71", size = 187169, upload-time = "2025-10-06T14:12:55.963Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/4d/27/5ab13fc84c76a0250afd3d26d5936349a35be56ce5785447d6c423b26d92/yarl-1.22.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:1ab72135b1f2db3fed3997d7e7dc1b80573c67138023852b6efb336a5eae6511", size = 141607, upload-time = "2025-10-06T14:09:16.298Z" }, - { url = "https://files.pythonhosted.org/packages/6a/a1/d065d51d02dc02ce81501d476b9ed2229d9a990818332242a882d5d60340/yarl-1.22.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:669930400e375570189492dc8d8341301578e8493aec04aebc20d4717f899dd6", size = 94027, upload-time = "2025-10-06T14:09:17.786Z" }, - { url = "https://files.pythonhosted.org/packages/c1/da/8da9f6a53f67b5106ffe902c6fa0164e10398d4e150d85838b82f424072a/yarl-1.22.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:792a2af6d58177ef7c19cbf0097aba92ca1b9cb3ffdd9c7470e156c8f9b5e028", size = 94963, upload-time = "2025-10-06T14:09:19.662Z" }, - { url = "https://files.pythonhosted.org/packages/68/fe/2c1f674960c376e29cb0bec1249b117d11738db92a6ccc4a530b972648db/yarl-1.22.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3ea66b1c11c9150f1372f69afb6b8116f2dd7286f38e14ea71a44eee9ec51b9d", size = 368406, upload-time = "2025-10-06T14:09:21.402Z" }, - { url = "https://files.pythonhosted.org/packages/95/26/812a540e1c3c6418fec60e9bbd38e871eaba9545e94fa5eff8f4a8e28e1e/yarl-1.22.0-cp311-cp311-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:3e2daa88dc91870215961e96a039ec73e4937da13cf77ce17f9cad0c18df3503", size = 336581, upload-time = "2025-10-06T14:09:22.98Z" }, - { url = "https://files.pythonhosted.org/packages/0b/f5/5777b19e26fdf98563985e481f8be3d8a39f8734147a6ebf459d0dab5a6b/yarl-1.22.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:ba440ae430c00eee41509353628600212112cd5018d5def7e9b05ea7ac34eb65", size = 388924, upload-time = "2025-10-06T14:09:24.655Z" }, - { url = "https://files.pythonhosted.org/packages/86/08/24bd2477bd59c0bbd994fe1d93b126e0472e4e3df5a96a277b0a55309e89/yarl-1.22.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:e6438cc8f23a9c1478633d216b16104a586b9761db62bfacb6425bac0a36679e", size = 392890, upload-time = "2025-10-06T14:09:26.617Z" }, - { url = "https://files.pythonhosted.org/packages/46/00/71b90ed48e895667ecfb1eaab27c1523ee2fa217433ed77a73b13205ca4b/yarl-1.22.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4c52a6e78aef5cf47a98ef8e934755abf53953379b7d53e68b15ff4420e6683d", size = 365819, upload-time = "2025-10-06T14:09:28.544Z" }, - { url = "https://files.pythonhosted.org/packages/30/2d/f715501cae832651d3282387c6a9236cd26bd00d0ff1e404b3dc52447884/yarl-1.22.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:3b06bcadaac49c70f4c88af4ffcfbe3dc155aab3163e75777818092478bcbbe7", size = 363601, upload-time = "2025-10-06T14:09:30.568Z" }, - { url = "https://files.pythonhosted.org/packages/f8/f9/a678c992d78e394e7126ee0b0e4e71bd2775e4334d00a9278c06a6cce96a/yarl-1.22.0-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:6944b2dc72c4d7f7052683487e3677456050ff77fcf5e6204e98caf785ad1967", size = 358072, 
upload-time = "2025-10-06T14:09:32.528Z" }, - { url = "https://files.pythonhosted.org/packages/2c/d1/b49454411a60edb6fefdcad4f8e6dbba7d8019e3a508a1c5836cba6d0781/yarl-1.22.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:d5372ca1df0f91a86b047d1277c2aaf1edb32d78bbcefffc81b40ffd18f027ed", size = 385311, upload-time = "2025-10-06T14:09:34.634Z" }, - { url = "https://files.pythonhosted.org/packages/87/e5/40d7a94debb8448c7771a916d1861d6609dddf7958dc381117e7ba36d9e8/yarl-1.22.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:51af598701f5299012b8416486b40fceef8c26fc87dc6d7d1f6fc30609ea0aa6", size = 381094, upload-time = "2025-10-06T14:09:36.268Z" }, - { url = "https://files.pythonhosted.org/packages/35/d8/611cc282502381ad855448643e1ad0538957fc82ae83dfe7762c14069e14/yarl-1.22.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:b266bd01fedeffeeac01a79ae181719ff848a5a13ce10075adbefc8f1daee70e", size = 370944, upload-time = "2025-10-06T14:09:37.872Z" }, - { url = "https://files.pythonhosted.org/packages/2d/df/fadd00fb1c90e1a5a8bd731fa3d3de2e165e5a3666a095b04e31b04d9cb6/yarl-1.22.0-cp311-cp311-win32.whl", hash = "sha256:a9b1ba5610a4e20f655258d5a1fdc7ebe3d837bb0e45b581398b99eb98b1f5ca", size = 81804, upload-time = "2025-10-06T14:09:39.359Z" }, - { url = "https://files.pythonhosted.org/packages/b5/f7/149bb6f45f267cb5c074ac40c01c6b3ea6d8a620d34b337f6321928a1b4d/yarl-1.22.0-cp311-cp311-win_amd64.whl", hash = "sha256:078278b9b0b11568937d9509b589ee83ef98ed6d561dfe2020e24a9fd08eaa2b", size = 86858, upload-time = "2025-10-06T14:09:41.068Z" }, - { url = "https://files.pythonhosted.org/packages/2b/13/88b78b93ad3f2f0b78e13bfaaa24d11cbc746e93fe76d8c06bf139615646/yarl-1.22.0-cp311-cp311-win_arm64.whl", hash = "sha256:b6a6f620cfe13ccec221fa312139135166e47ae169f8253f72a0abc0dae94376", size = 81637, upload-time = "2025-10-06T14:09:42.712Z" }, - { url = "https://files.pythonhosted.org/packages/75/ff/46736024fee3429b80a165a732e38e5d5a238721e634ab41b040d49f8738/yarl-1.22.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:e340382d1afa5d32b892b3ff062436d592ec3d692aeea3bef3a5cfe11bbf8c6f", size = 142000, upload-time = "2025-10-06T14:09:44.631Z" }, - { url = "https://files.pythonhosted.org/packages/5a/9a/b312ed670df903145598914770eb12de1bac44599549b3360acc96878df8/yarl-1.22.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f1e09112a2c31ffe8d80be1b0988fa6a18c5d5cad92a9ffbb1c04c91bfe52ad2", size = 94338, upload-time = "2025-10-06T14:09:46.372Z" }, - { url = "https://files.pythonhosted.org/packages/ba/f5/0601483296f09c3c65e303d60c070a5c19fcdbc72daa061e96170785bc7d/yarl-1.22.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:939fe60db294c786f6b7c2d2e121576628468f65453d86b0fe36cb52f987bd74", size = 94909, upload-time = "2025-10-06T14:09:48.648Z" }, - { url = "https://files.pythonhosted.org/packages/60/41/9a1fe0b73dbcefce72e46cf149b0e0a67612d60bfc90fb59c2b2efdfbd86/yarl-1.22.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e1651bf8e0398574646744c1885a41198eba53dc8a9312b954073f845c90a8df", size = 372940, upload-time = "2025-10-06T14:09:50.089Z" }, - { url = "https://files.pythonhosted.org/packages/17/7a/795cb6dfee561961c30b800f0ed616b923a2ec6258b5def2a00bf8231334/yarl-1.22.0-cp312-cp312-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:b8a0588521a26bf92a57a1705b77b8b59044cdceccac7151bd8d229e66b8dedb", size = 345825, upload-time = "2025-10-06T14:09:52.142Z" }, - { url = 
"https://files.pythonhosted.org/packages/d7/93/a58f4d596d2be2ae7bab1a5846c4d270b894958845753b2c606d666744d3/yarl-1.22.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:42188e6a615c1a75bcaa6e150c3fe8f3e8680471a6b10150c5f7e83f47cc34d2", size = 386705, upload-time = "2025-10-06T14:09:54.128Z" }, - { url = "https://files.pythonhosted.org/packages/61/92/682279d0e099d0e14d7fd2e176bd04f48de1484f56546a3e1313cd6c8e7c/yarl-1.22.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:f6d2cb59377d99718913ad9a151030d6f83ef420a2b8f521d94609ecc106ee82", size = 396518, upload-time = "2025-10-06T14:09:55.762Z" }, - { url = "https://files.pythonhosted.org/packages/db/0f/0d52c98b8a885aeda831224b78f3be7ec2e1aa4a62091f9f9188c3c65b56/yarl-1.22.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:50678a3b71c751d58d7908edc96d332af328839eea883bb554a43f539101277a", size = 377267, upload-time = "2025-10-06T14:09:57.958Z" }, - { url = "https://files.pythonhosted.org/packages/22/42/d2685e35908cbeaa6532c1fc73e89e7f2efb5d8a7df3959ea8e37177c5a3/yarl-1.22.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:1e8fbaa7cec507aa24ea27a01456e8dd4b6fab829059b69844bd348f2d467124", size = 365797, upload-time = "2025-10-06T14:09:59.527Z" }, - { url = "https://files.pythonhosted.org/packages/a2/83/cf8c7bcc6355631762f7d8bdab920ad09b82efa6b722999dfb05afa6cfac/yarl-1.22.0-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:433885ab5431bc3d3d4f2f9bd15bfa1614c522b0f1405d62c4f926ccd69d04fa", size = 365535, upload-time = "2025-10-06T14:10:01.139Z" }, - { url = "https://files.pythonhosted.org/packages/25/e1/5302ff9b28f0c59cac913b91fe3f16c59a033887e57ce9ca5d41a3a94737/yarl-1.22.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:b790b39c7e9a4192dc2e201a282109ed2985a1ddbd5ac08dc56d0e121400a8f7", size = 382324, upload-time = "2025-10-06T14:10:02.756Z" }, - { url = "https://files.pythonhosted.org/packages/bf/cd/4617eb60f032f19ae3a688dc990d8f0d89ee0ea378b61cac81ede3e52fae/yarl-1.22.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:31f0b53913220599446872d757257be5898019c85e7971599065bc55065dc99d", size = 383803, upload-time = "2025-10-06T14:10:04.552Z" }, - { url = "https://files.pythonhosted.org/packages/59/65/afc6e62bb506a319ea67b694551dab4a7e6fb7bf604e9bd9f3e11d575fec/yarl-1.22.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:a49370e8f711daec68d09b821a34e1167792ee2d24d405cbc2387be4f158b520", size = 374220, upload-time = "2025-10-06T14:10:06.489Z" }, - { url = "https://files.pythonhosted.org/packages/e7/3d/68bf18d50dc674b942daec86a9ba922d3113d8399b0e52b9897530442da2/yarl-1.22.0-cp312-cp312-win32.whl", hash = "sha256:70dfd4f241c04bd9239d53b17f11e6ab672b9f1420364af63e8531198e3f5fe8", size = 81589, upload-time = "2025-10-06T14:10:09.254Z" }, - { url = "https://files.pythonhosted.org/packages/c8/9a/6ad1a9b37c2f72874f93e691b2e7ecb6137fb2b899983125db4204e47575/yarl-1.22.0-cp312-cp312-win_amd64.whl", hash = "sha256:8884d8b332a5e9b88e23f60bb166890009429391864c685e17bd73a9eda9105c", size = 87213, upload-time = "2025-10-06T14:10:11.369Z" }, - { url = "https://files.pythonhosted.org/packages/44/c5/c21b562d1680a77634d748e30c653c3ca918beb35555cff24986fff54598/yarl-1.22.0-cp312-cp312-win_arm64.whl", hash = "sha256:ea70f61a47f3cc93bdf8b2f368ed359ef02a01ca6393916bc8ff877427181e74", size = 81330, upload-time = "2025-10-06T14:10:13.112Z" }, - { url = 
"https://files.pythonhosted.org/packages/ea/f3/d67de7260456ee105dc1d162d43a019ecad6b91e2f51809d6cddaa56690e/yarl-1.22.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:8dee9c25c74997f6a750cd317b8ca63545169c098faee42c84aa5e506c819b53", size = 139980, upload-time = "2025-10-06T14:10:14.601Z" }, - { url = "https://files.pythonhosted.org/packages/01/88/04d98af0b47e0ef42597b9b28863b9060bb515524da0a65d5f4db160b2d5/yarl-1.22.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:01e73b85a5434f89fc4fe27dcda2aff08ddf35e4d47bbbea3bdcd25321af538a", size = 93424, upload-time = "2025-10-06T14:10:16.115Z" }, - { url = "https://files.pythonhosted.org/packages/18/91/3274b215fd8442a03975ce6bee5fe6aa57a8326b29b9d3d56234a1dca244/yarl-1.22.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:22965c2af250d20c873cdbee8ff958fb809940aeb2e74ba5f20aaf6b7ac8c70c", size = 93821, upload-time = "2025-10-06T14:10:17.993Z" }, - { url = "https://files.pythonhosted.org/packages/61/3a/caf4e25036db0f2da4ca22a353dfeb3c9d3c95d2761ebe9b14df8fc16eb0/yarl-1.22.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b4f15793aa49793ec8d1c708ab7f9eded1aa72edc5174cae703651555ed1b601", size = 373243, upload-time = "2025-10-06T14:10:19.44Z" }, - { url = "https://files.pythonhosted.org/packages/6e/9e/51a77ac7516e8e7803b06e01f74e78649c24ee1021eca3d6a739cb6ea49c/yarl-1.22.0-cp313-cp313-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:e5542339dcf2747135c5c85f68680353d5cb9ffd741c0f2e8d832d054d41f35a", size = 342361, upload-time = "2025-10-06T14:10:21.124Z" }, - { url = "https://files.pythonhosted.org/packages/d4/f8/33b92454789dde8407f156c00303e9a891f1f51a0330b0fad7c909f87692/yarl-1.22.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:5c401e05ad47a75869c3ab3e35137f8468b846770587e70d71e11de797d113df", size = 387036, upload-time = "2025-10-06T14:10:22.902Z" }, - { url = "https://files.pythonhosted.org/packages/d9/9a/c5db84ea024f76838220280f732970aa4ee154015d7f5c1bfb60a267af6f/yarl-1.22.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:243dda95d901c733f5b59214d28b0120893d91777cb8aa043e6ef059d3cddfe2", size = 397671, upload-time = "2025-10-06T14:10:24.523Z" }, - { url = "https://files.pythonhosted.org/packages/11/c9/cd8538dc2e7727095e0c1d867bad1e40c98f37763e6d995c1939f5fdc7b1/yarl-1.22.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:bec03d0d388060058f5d291a813f21c011041938a441c593374da6077fe21b1b", size = 377059, upload-time = "2025-10-06T14:10:26.406Z" }, - { url = "https://files.pythonhosted.org/packages/a1/b9/ab437b261702ced75122ed78a876a6dec0a1b0f5e17a4ac7a9a2482d8abe/yarl-1.22.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:b0748275abb8c1e1e09301ee3cf90c8a99678a4e92e4373705f2a2570d581273", size = 365356, upload-time = "2025-10-06T14:10:28.461Z" }, - { url = "https://files.pythonhosted.org/packages/b2/9d/8e1ae6d1d008a9567877b08f0ce4077a29974c04c062dabdb923ed98e6fe/yarl-1.22.0-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:47fdb18187e2a4e18fda2c25c05d8251a9e4a521edaed757fef033e7d8498d9a", size = 361331, upload-time = "2025-10-06T14:10:30.541Z" }, - { url = "https://files.pythonhosted.org/packages/ca/5a/09b7be3905962f145b73beb468cdd53db8aa171cf18c80400a54c5b82846/yarl-1.22.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = 
"sha256:c7044802eec4524fde550afc28edda0dd5784c4c45f0be151a2d3ba017daca7d", size = 382590, upload-time = "2025-10-06T14:10:33.352Z" }, - { url = "https://files.pythonhosted.org/packages/aa/7f/59ec509abf90eda5048b0bc3e2d7b5099dffdb3e6b127019895ab9d5ef44/yarl-1.22.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:139718f35149ff544caba20fce6e8a2f71f1e39b92c700d8438a0b1d2a631a02", size = 385316, upload-time = "2025-10-06T14:10:35.034Z" }, - { url = "https://files.pythonhosted.org/packages/e5/84/891158426bc8036bfdfd862fabd0e0fa25df4176ec793e447f4b85cf1be4/yarl-1.22.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:e1b51bebd221006d3d2f95fbe124b22b247136647ae5dcc8c7acafba66e5ee67", size = 374431, upload-time = "2025-10-06T14:10:37.76Z" }, - { url = "https://files.pythonhosted.org/packages/bb/49/03da1580665baa8bef5e8ed34c6df2c2aca0a2f28bf397ed238cc1bbc6f2/yarl-1.22.0-cp313-cp313-win32.whl", hash = "sha256:d3e32536234a95f513bd374e93d717cf6b2231a791758de6c509e3653f234c95", size = 81555, upload-time = "2025-10-06T14:10:39.649Z" }, - { url = "https://files.pythonhosted.org/packages/9a/ee/450914ae11b419eadd067c6183ae08381cfdfcb9798b90b2b713bbebddda/yarl-1.22.0-cp313-cp313-win_amd64.whl", hash = "sha256:47743b82b76d89a1d20b83e60d5c20314cbd5ba2befc9cda8f28300c4a08ed4d", size = 86965, upload-time = "2025-10-06T14:10:41.313Z" }, - { url = "https://files.pythonhosted.org/packages/98/4d/264a01eae03b6cf629ad69bae94e3b0e5344741e929073678e84bf7a3e3b/yarl-1.22.0-cp313-cp313-win_arm64.whl", hash = "sha256:5d0fcda9608875f7d052eff120c7a5da474a6796fe4d83e152e0e4d42f6d1a9b", size = 81205, upload-time = "2025-10-06T14:10:43.167Z" }, - { url = "https://files.pythonhosted.org/packages/88/fc/6908f062a2f77b5f9f6d69cecb1747260831ff206adcbc5b510aff88df91/yarl-1.22.0-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:719ae08b6972befcba4310e49edb1161a88cdd331e3a694b84466bd938a6ab10", size = 146209, upload-time = "2025-10-06T14:10:44.643Z" }, - { url = "https://files.pythonhosted.org/packages/65/47/76594ae8eab26210b4867be6f49129861ad33da1f1ebdf7051e98492bf62/yarl-1.22.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:47d8a5c446df1c4db9d21b49619ffdba90e77c89ec6e283f453856c74b50b9e3", size = 95966, upload-time = "2025-10-06T14:10:46.554Z" }, - { url = "https://files.pythonhosted.org/packages/ab/ce/05e9828a49271ba6b5b038b15b3934e996980dd78abdfeb52a04cfb9467e/yarl-1.22.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:cfebc0ac8333520d2d0423cbbe43ae43c8838862ddb898f5ca68565e395516e9", size = 97312, upload-time = "2025-10-06T14:10:48.007Z" }, - { url = "https://files.pythonhosted.org/packages/d1/c5/7dffad5e4f2265b29c9d7ec869c369e4223166e4f9206fc2243ee9eea727/yarl-1.22.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4398557cbf484207df000309235979c79c4356518fd5c99158c7d38203c4da4f", size = 361967, upload-time = "2025-10-06T14:10:49.997Z" }, - { url = "https://files.pythonhosted.org/packages/50/b2/375b933c93a54bff7fc041e1a6ad2c0f6f733ffb0c6e642ce56ee3b39970/yarl-1.22.0-cp313-cp313t-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:2ca6fd72a8cd803be290d42f2dec5cdcd5299eeb93c2d929bf060ad9efaf5de0", size = 323949, upload-time = "2025-10-06T14:10:52.004Z" }, - { url = "https://files.pythonhosted.org/packages/66/50/bfc2a29a1d78644c5a7220ce2f304f38248dc94124a326794e677634b6cf/yarl-1.22.0-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = 
"sha256:ca1f59c4e1ab6e72f0a23c13fca5430f889634166be85dbf1013683e49e3278e", size = 361818, upload-time = "2025-10-06T14:10:54.078Z" }, - { url = "https://files.pythonhosted.org/packages/46/96/f3941a46af7d5d0f0498f86d71275696800ddcdd20426298e572b19b91ff/yarl-1.22.0-cp313-cp313t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:6c5010a52015e7c70f86eb967db0f37f3c8bd503a695a49f8d45700144667708", size = 372626, upload-time = "2025-10-06T14:10:55.767Z" }, - { url = "https://files.pythonhosted.org/packages/c1/42/8b27c83bb875cd89448e42cd627e0fb971fa1675c9ec546393d18826cb50/yarl-1.22.0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9d7672ecf7557476642c88497c2f8d8542f8e36596e928e9bcba0e42e1e7d71f", size = 341129, upload-time = "2025-10-06T14:10:57.985Z" }, - { url = "https://files.pythonhosted.org/packages/49/36/99ca3122201b382a3cf7cc937b95235b0ac944f7e9f2d5331d50821ed352/yarl-1.22.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:3b7c88eeef021579d600e50363e0b6ee4f7f6f728cd3486b9d0f3ee7b946398d", size = 346776, upload-time = "2025-10-06T14:10:59.633Z" }, - { url = "https://files.pythonhosted.org/packages/85/b4/47328bf996acd01a4c16ef9dcd2f59c969f495073616586f78cd5f2efb99/yarl-1.22.0-cp313-cp313t-musllinux_1_2_armv7l.whl", hash = "sha256:f4afb5c34f2c6fecdcc182dfcfc6af6cccf1aa923eed4d6a12e9d96904e1a0d8", size = 334879, upload-time = "2025-10-06T14:11:01.454Z" }, - { url = "https://files.pythonhosted.org/packages/c2/ad/b77d7b3f14a4283bffb8e92c6026496f6de49751c2f97d4352242bba3990/yarl-1.22.0-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:59c189e3e99a59cf8d83cbb31d4db02d66cda5a1a4374e8a012b51255341abf5", size = 350996, upload-time = "2025-10-06T14:11:03.452Z" }, - { url = "https://files.pythonhosted.org/packages/81/c8/06e1d69295792ba54d556f06686cbd6a7ce39c22307100e3fb4a2c0b0a1d/yarl-1.22.0-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:5a3bf7f62a289fa90f1990422dc8dff5a458469ea71d1624585ec3a4c8d6960f", size = 356047, upload-time = "2025-10-06T14:11:05.115Z" }, - { url = "https://files.pythonhosted.org/packages/4b/b8/4c0e9e9f597074b208d18cef227d83aac36184bfbc6eab204ea55783dbc5/yarl-1.22.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:de6b9a04c606978fdfe72666fa216ffcf2d1a9f6a381058d4378f8d7b1e5de62", size = 342947, upload-time = "2025-10-06T14:11:08.137Z" }, - { url = "https://files.pythonhosted.org/packages/e0/e5/11f140a58bf4c6ad7aca69a892bff0ee638c31bea4206748fc0df4ebcb3a/yarl-1.22.0-cp313-cp313t-win32.whl", hash = "sha256:1834bb90991cc2999f10f97f5f01317f99b143284766d197e43cd5b45eb18d03", size = 86943, upload-time = "2025-10-06T14:11:10.284Z" }, - { url = "https://files.pythonhosted.org/packages/31/74/8b74bae38ed7fe6793d0c15a0c8207bbb819cf287788459e5ed230996cdd/yarl-1.22.0-cp313-cp313t-win_amd64.whl", hash = "sha256:ff86011bd159a9d2dfc89c34cfd8aff12875980e3bd6a39ff097887520e60249", size = 93715, upload-time = "2025-10-06T14:11:11.739Z" }, - { url = "https://files.pythonhosted.org/packages/69/66/991858aa4b5892d57aef7ee1ba6b4d01ec3b7eb3060795d34090a3ca3278/yarl-1.22.0-cp313-cp313t-win_arm64.whl", hash = "sha256:7861058d0582b847bc4e3a4a4c46828a410bca738673f35a29ba3ca5db0b473b", size = 83857, upload-time = "2025-10-06T14:11:13.586Z" }, - { url = "https://files.pythonhosted.org/packages/73/ae/b48f95715333080afb75a4504487cbe142cae1268afc482d06692d605ae6/yarl-1.22.0-py3-none-any.whl", hash = "sha256:1380560bdba02b6b6c90de54133c81c9f2a453dee9912fe58c1dcced1edb7cff", size = 46814, upload-time = 
"2025-10-06T14:12:53.872Z" }, +sdist = { url = "https://files.pythonhosted.org/packages/3c/fb/efaa23fa4e45537b827620f04cf8f3cd658b76642205162e072703a5b963/yarl-1.20.1.tar.gz", hash = "sha256:d017a4997ee50c91fd5466cef416231bb82177b93b029906cefc542ce14c35ac", size = 186428, upload-time = "2025-06-10T00:46:09.923Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b1/18/893b50efc2350e47a874c5c2d67e55a0ea5df91186b2a6f5ac52eff887cd/yarl-1.20.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:47ee6188fea634bdfaeb2cc420f5b3b17332e6225ce88149a17c413c77ff269e", size = 133833, upload-time = "2025-06-10T00:43:07.393Z" }, + { url = "https://files.pythonhosted.org/packages/89/ed/b8773448030e6fc47fa797f099ab9eab151a43a25717f9ac043844ad5ea3/yarl-1.20.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d0f6500f69e8402d513e5eedb77a4e1818691e8f45e6b687147963514d84b44b", size = 91070, upload-time = "2025-06-10T00:43:09.538Z" }, + { url = "https://files.pythonhosted.org/packages/e3/e3/409bd17b1e42619bf69f60e4f031ce1ccb29bd7380117a55529e76933464/yarl-1.20.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7a8900a42fcdaad568de58887c7b2f602962356908eedb7628eaf6021a6e435b", size = 89818, upload-time = "2025-06-10T00:43:11.575Z" }, + { url = "https://files.pythonhosted.org/packages/f8/77/64d8431a4d77c856eb2d82aa3de2ad6741365245a29b3a9543cd598ed8c5/yarl-1.20.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bad6d131fda8ef508b36be3ece16d0902e80b88ea7200f030a0f6c11d9e508d4", size = 347003, upload-time = "2025-06-10T00:43:14.088Z" }, + { url = "https://files.pythonhosted.org/packages/8d/d2/0c7e4def093dcef0bd9fa22d4d24b023788b0a33b8d0088b51aa51e21e99/yarl-1.20.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:df018d92fe22aaebb679a7f89fe0c0f368ec497e3dda6cb81a567610f04501f1", size = 336537, upload-time = "2025-06-10T00:43:16.431Z" }, + { url = "https://files.pythonhosted.org/packages/f0/f3/fc514f4b2cf02cb59d10cbfe228691d25929ce8f72a38db07d3febc3f706/yarl-1.20.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8f969afbb0a9b63c18d0feecf0db09d164b7a44a053e78a7d05f5df163e43833", size = 362358, upload-time = "2025-06-10T00:43:18.704Z" }, + { url = "https://files.pythonhosted.org/packages/ea/6d/a313ac8d8391381ff9006ac05f1d4331cee3b1efaa833a53d12253733255/yarl-1.20.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:812303eb4aa98e302886ccda58d6b099e3576b1b9276161469c25803a8db277d", size = 357362, upload-time = "2025-06-10T00:43:20.888Z" }, + { url = "https://files.pythonhosted.org/packages/00/70/8f78a95d6935a70263d46caa3dd18e1f223cf2f2ff2037baa01a22bc5b22/yarl-1.20.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:98c4a7d166635147924aa0bf9bfe8d8abad6fffa6102de9c99ea04a1376f91e8", size = 348979, upload-time = "2025-06-10T00:43:23.169Z" }, + { url = "https://files.pythonhosted.org/packages/cb/05/42773027968968f4f15143553970ee36ead27038d627f457cc44bbbeecf3/yarl-1.20.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:12e768f966538e81e6e7550f9086a6236b16e26cd964cf4df35349970f3551cf", size = 337274, upload-time = "2025-06-10T00:43:27.111Z" }, + { url = "https://files.pythonhosted.org/packages/05/be/665634aa196954156741ea591d2f946f1b78ceee8bb8f28488bf28c0dd62/yarl-1.20.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:fe41919b9d899661c5c28a8b4b0acf704510b88f27f0934ac7a7bebdd8938d5e", 
size = 363294, upload-time = "2025-06-10T00:43:28.96Z" }, + { url = "https://files.pythonhosted.org/packages/eb/90/73448401d36fa4e210ece5579895731f190d5119c4b66b43b52182e88cd5/yarl-1.20.1-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:8601bc010d1d7780592f3fc1bdc6c72e2b6466ea34569778422943e1a1f3c389", size = 358169, upload-time = "2025-06-10T00:43:30.701Z" }, + { url = "https://files.pythonhosted.org/packages/c3/b0/fce922d46dc1eb43c811f1889f7daa6001b27a4005587e94878570300881/yarl-1.20.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:daadbdc1f2a9033a2399c42646fbd46da7992e868a5fe9513860122d7fe7a73f", size = 362776, upload-time = "2025-06-10T00:43:32.51Z" }, + { url = "https://files.pythonhosted.org/packages/f1/0d/b172628fce039dae8977fd22caeff3eeebffd52e86060413f5673767c427/yarl-1.20.1-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:03aa1e041727cb438ca762628109ef1333498b122e4c76dd858d186a37cec845", size = 381341, upload-time = "2025-06-10T00:43:34.543Z" }, + { url = "https://files.pythonhosted.org/packages/6b/9b/5b886d7671f4580209e855974fe1cecec409aa4a89ea58b8f0560dc529b1/yarl-1.20.1-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:642980ef5e0fa1de5fa96d905c7e00cb2c47cb468bfcac5a18c58e27dbf8d8d1", size = 379988, upload-time = "2025-06-10T00:43:36.489Z" }, + { url = "https://files.pythonhosted.org/packages/73/be/75ef5fd0fcd8f083a5d13f78fd3f009528132a1f2a1d7c925c39fa20aa79/yarl-1.20.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:86971e2795584fe8c002356d3b97ef6c61862720eeff03db2a7c86b678d85b3e", size = 371113, upload-time = "2025-06-10T00:43:38.592Z" }, + { url = "https://files.pythonhosted.org/packages/50/4f/62faab3b479dfdcb741fe9e3f0323e2a7d5cd1ab2edc73221d57ad4834b2/yarl-1.20.1-cp311-cp311-win32.whl", hash = "sha256:597f40615b8d25812f14562699e287f0dcc035d25eb74da72cae043bb884d773", size = 81485, upload-time = "2025-06-10T00:43:41.038Z" }, + { url = "https://files.pythonhosted.org/packages/f0/09/d9c7942f8f05c32ec72cd5c8e041c8b29b5807328b68b4801ff2511d4d5e/yarl-1.20.1-cp311-cp311-win_amd64.whl", hash = "sha256:26ef53a9e726e61e9cd1cda6b478f17e350fb5800b4bd1cd9fe81c4d91cfeb2e", size = 86686, upload-time = "2025-06-10T00:43:42.692Z" }, + { url = "https://files.pythonhosted.org/packages/5f/9a/cb7fad7d73c69f296eda6815e4a2c7ed53fc70c2f136479a91c8e5fbdb6d/yarl-1.20.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:bdcc4cd244e58593a4379fe60fdee5ac0331f8eb70320a24d591a3be197b94a9", size = 133667, upload-time = "2025-06-10T00:43:44.369Z" }, + { url = "https://files.pythonhosted.org/packages/67/38/688577a1cb1e656e3971fb66a3492501c5a5df56d99722e57c98249e5b8a/yarl-1.20.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:b29a2c385a5f5b9c7d9347e5812b6f7ab267193c62d282a540b4fc528c8a9d2a", size = 91025, upload-time = "2025-06-10T00:43:46.295Z" }, + { url = "https://files.pythonhosted.org/packages/50/ec/72991ae51febeb11a42813fc259f0d4c8e0507f2b74b5514618d8b640365/yarl-1.20.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1112ae8154186dfe2de4732197f59c05a83dc814849a5ced892b708033f40dc2", size = 89709, upload-time = "2025-06-10T00:43:48.22Z" }, + { url = "https://files.pythonhosted.org/packages/99/da/4d798025490e89426e9f976702e5f9482005c548c579bdae792a4c37769e/yarl-1.20.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:90bbd29c4fe234233f7fa2b9b121fb63c321830e5d05b45153a2ca68f7d310ee", size = 352287, upload-time = "2025-06-10T00:43:49.924Z" }, + { url = 
"https://files.pythonhosted.org/packages/1a/26/54a15c6a567aac1c61b18aa0f4b8aa2e285a52d547d1be8bf48abe2b3991/yarl-1.20.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:680e19c7ce3710ac4cd964e90dad99bf9b5029372ba0c7cbfcd55e54d90ea819", size = 345429, upload-time = "2025-06-10T00:43:51.7Z" }, + { url = "https://files.pythonhosted.org/packages/d6/95/9dcf2386cb875b234353b93ec43e40219e14900e046bf6ac118f94b1e353/yarl-1.20.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4a979218c1fdb4246a05efc2cc23859d47c89af463a90b99b7c56094daf25a16", size = 365429, upload-time = "2025-06-10T00:43:53.494Z" }, + { url = "https://files.pythonhosted.org/packages/91/b2/33a8750f6a4bc224242a635f5f2cff6d6ad5ba651f6edcccf721992c21a0/yarl-1.20.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:255b468adf57b4a7b65d8aad5b5138dce6a0752c139965711bdcb81bc370e1b6", size = 363862, upload-time = "2025-06-10T00:43:55.766Z" }, + { url = "https://files.pythonhosted.org/packages/98/28/3ab7acc5b51f4434b181b0cee8f1f4b77a65919700a355fb3617f9488874/yarl-1.20.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a97d67108e79cfe22e2b430d80d7571ae57d19f17cda8bb967057ca8a7bf5bfd", size = 355616, upload-time = "2025-06-10T00:43:58.056Z" }, + { url = "https://files.pythonhosted.org/packages/36/a3/f666894aa947a371724ec7cd2e5daa78ee8a777b21509b4252dd7bd15e29/yarl-1.20.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8570d998db4ddbfb9a590b185a0a33dbf8aafb831d07a5257b4ec9948df9cb0a", size = 339954, upload-time = "2025-06-10T00:43:59.773Z" }, + { url = "https://files.pythonhosted.org/packages/f1/81/5f466427e09773c04219d3450d7a1256138a010b6c9f0af2d48565e9ad13/yarl-1.20.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:97c75596019baae7c71ccf1d8cc4738bc08134060d0adfcbe5642f778d1dca38", size = 365575, upload-time = "2025-06-10T00:44:02.051Z" }, + { url = "https://files.pythonhosted.org/packages/2e/e3/e4b0ad8403e97e6c9972dd587388940a032f030ebec196ab81a3b8e94d31/yarl-1.20.1-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:1c48912653e63aef91ff988c5432832692ac5a1d8f0fb8a33091520b5bbe19ef", size = 365061, upload-time = "2025-06-10T00:44:04.196Z" }, + { url = "https://files.pythonhosted.org/packages/ac/99/b8a142e79eb86c926f9f06452eb13ecb1bb5713bd01dc0038faf5452e544/yarl-1.20.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:4c3ae28f3ae1563c50f3d37f064ddb1511ecc1d5584e88c6b7c63cf7702a6d5f", size = 364142, upload-time = "2025-06-10T00:44:06.527Z" }, + { url = "https://files.pythonhosted.org/packages/34/f2/08ed34a4a506d82a1a3e5bab99ccd930a040f9b6449e9fd050320e45845c/yarl-1.20.1-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:c5e9642f27036283550f5f57dc6156c51084b458570b9d0d96100c8bebb186a8", size = 381894, upload-time = "2025-06-10T00:44:08.379Z" }, + { url = "https://files.pythonhosted.org/packages/92/f8/9a3fbf0968eac704f681726eff595dce9b49c8a25cd92bf83df209668285/yarl-1.20.1-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:2c26b0c49220d5799f7b22c6838409ee9bc58ee5c95361a4d7831f03cc225b5a", size = 383378, upload-time = "2025-06-10T00:44:10.51Z" }, + { url = "https://files.pythonhosted.org/packages/af/85/9363f77bdfa1e4d690957cd39d192c4cacd1c58965df0470a4905253b54f/yarl-1.20.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:564ab3d517e3d01c408c67f2e5247aad4019dcf1969982aba3974b4093279004", size = 374069, upload-time = 
"2025-06-10T00:44:12.834Z" }, + { url = "https://files.pythonhosted.org/packages/35/99/9918c8739ba271dcd935400cff8b32e3cd319eaf02fcd023d5dcd487a7c8/yarl-1.20.1-cp312-cp312-win32.whl", hash = "sha256:daea0d313868da1cf2fac6b2d3a25c6e3a9e879483244be38c8e6a41f1d876a5", size = 81249, upload-time = "2025-06-10T00:44:14.731Z" }, + { url = "https://files.pythonhosted.org/packages/eb/83/5d9092950565481b413b31a23e75dd3418ff0a277d6e0abf3729d4d1ce25/yarl-1.20.1-cp312-cp312-win_amd64.whl", hash = "sha256:48ea7d7f9be0487339828a4de0360d7ce0efc06524a48e1810f945c45b813698", size = 86710, upload-time = "2025-06-10T00:44:16.716Z" }, + { url = "https://files.pythonhosted.org/packages/8a/e1/2411b6d7f769a07687acee88a062af5833cf1966b7266f3d8dfb3d3dc7d3/yarl-1.20.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:0b5ff0fbb7c9f1b1b5ab53330acbfc5247893069e7716840c8e7d5bb7355038a", size = 131811, upload-time = "2025-06-10T00:44:18.933Z" }, + { url = "https://files.pythonhosted.org/packages/b2/27/584394e1cb76fb771371770eccad35de400e7b434ce3142c2dd27392c968/yarl-1.20.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:14f326acd845c2b2e2eb38fb1346c94f7f3b01a4f5c788f8144f9b630bfff9a3", size = 90078, upload-time = "2025-06-10T00:44:20.635Z" }, + { url = "https://files.pythonhosted.org/packages/bf/9a/3246ae92d4049099f52d9b0fe3486e3b500e29b7ea872d0f152966fc209d/yarl-1.20.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f60e4ad5db23f0b96e49c018596707c3ae89f5d0bd97f0ad3684bcbad899f1e7", size = 88748, upload-time = "2025-06-10T00:44:22.34Z" }, + { url = "https://files.pythonhosted.org/packages/a3/25/35afe384e31115a1a801fbcf84012d7a066d89035befae7c5d4284df1e03/yarl-1.20.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:49bdd1b8e00ce57e68ba51916e4bb04461746e794e7c4d4bbc42ba2f18297691", size = 349595, upload-time = "2025-06-10T00:44:24.314Z" }, + { url = "https://files.pythonhosted.org/packages/28/2d/8aca6cb2cabc8f12efcb82749b9cefecbccfc7b0384e56cd71058ccee433/yarl-1.20.1-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:66252d780b45189975abfed839616e8fd2dbacbdc262105ad7742c6ae58f3e31", size = 342616, upload-time = "2025-06-10T00:44:26.167Z" }, + { url = "https://files.pythonhosted.org/packages/0b/e9/1312633d16b31acf0098d30440ca855e3492d66623dafb8e25b03d00c3da/yarl-1.20.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:59174e7332f5d153d8f7452a102b103e2e74035ad085f404df2e40e663a22b28", size = 361324, upload-time = "2025-06-10T00:44:27.915Z" }, + { url = "https://files.pythonhosted.org/packages/bc/a0/688cc99463f12f7669eec7c8acc71ef56a1521b99eab7cd3abb75af887b0/yarl-1.20.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e3968ec7d92a0c0f9ac34d5ecfd03869ec0cab0697c91a45db3fbbd95fe1b653", size = 359676, upload-time = "2025-06-10T00:44:30.041Z" }, + { url = "https://files.pythonhosted.org/packages/af/44/46407d7f7a56e9a85a4c207724c9f2c545c060380718eea9088f222ba697/yarl-1.20.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d1a4fbb50e14396ba3d375f68bfe02215d8e7bc3ec49da8341fe3157f59d2ff5", size = 352614, upload-time = "2025-06-10T00:44:32.171Z" }, + { url = "https://files.pythonhosted.org/packages/b1/91/31163295e82b8d5485d31d9cf7754d973d41915cadce070491778d9c9825/yarl-1.20.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:11a62c839c3a8eac2410e951301309426f368388ff2f33799052787035793b02", size = 336766, 
upload-time = "2025-06-10T00:44:34.494Z" }, + { url = "https://files.pythonhosted.org/packages/b4/8e/c41a5bc482121f51c083c4c2bcd16b9e01e1cf8729e380273a952513a21f/yarl-1.20.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:041eaa14f73ff5a8986b4388ac6bb43a77f2ea09bf1913df7a35d4646db69e53", size = 364615, upload-time = "2025-06-10T00:44:36.856Z" }, + { url = "https://files.pythonhosted.org/packages/e3/5b/61a3b054238d33d70ea06ebba7e58597891b71c699e247df35cc984ab393/yarl-1.20.1-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:377fae2fef158e8fd9d60b4c8751387b8d1fb121d3d0b8e9b0be07d1b41e83dc", size = 360982, upload-time = "2025-06-10T00:44:39.141Z" }, + { url = "https://files.pythonhosted.org/packages/df/a3/6a72fb83f8d478cb201d14927bc8040af901811a88e0ff2da7842dd0ed19/yarl-1.20.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:1c92f4390e407513f619d49319023664643d3339bd5e5a56a3bebe01bc67ec04", size = 369792, upload-time = "2025-06-10T00:44:40.934Z" }, + { url = "https://files.pythonhosted.org/packages/7c/af/4cc3c36dfc7c077f8dedb561eb21f69e1e9f2456b91b593882b0b18c19dc/yarl-1.20.1-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:d25ddcf954df1754ab0f86bb696af765c5bfaba39b74095f27eececa049ef9a4", size = 382049, upload-time = "2025-06-10T00:44:42.854Z" }, + { url = "https://files.pythonhosted.org/packages/19/3a/e54e2c4752160115183a66dc9ee75a153f81f3ab2ba4bf79c3c53b33de34/yarl-1.20.1-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:909313577e9619dcff8c31a0ea2aa0a2a828341d92673015456b3ae492e7317b", size = 384774, upload-time = "2025-06-10T00:44:45.275Z" }, + { url = "https://files.pythonhosted.org/packages/9c/20/200ae86dabfca89060ec6447649f219b4cbd94531e425e50d57e5f5ac330/yarl-1.20.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:793fd0580cb9664548c6b83c63b43c477212c0260891ddf86809e1c06c8b08f1", size = 374252, upload-time = "2025-06-10T00:44:47.31Z" }, + { url = "https://files.pythonhosted.org/packages/83/75/11ee332f2f516b3d094e89448da73d557687f7d137d5a0f48c40ff211487/yarl-1.20.1-cp313-cp313-win32.whl", hash = "sha256:468f6e40285de5a5b3c44981ca3a319a4b208ccc07d526b20b12aeedcfa654b7", size = 81198, upload-time = "2025-06-10T00:44:49.164Z" }, + { url = "https://files.pythonhosted.org/packages/ba/ba/39b1ecbf51620b40ab402b0fc817f0ff750f6d92712b44689c2c215be89d/yarl-1.20.1-cp313-cp313-win_amd64.whl", hash = "sha256:495b4ef2fea40596bfc0affe3837411d6aa3371abcf31aac0ccc4bdd64d4ef5c", size = 86346, upload-time = "2025-06-10T00:44:51.182Z" }, + { url = "https://files.pythonhosted.org/packages/43/c7/669c52519dca4c95153c8ad96dd123c79f354a376346b198f438e56ffeb4/yarl-1.20.1-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:f60233b98423aab21d249a30eb27c389c14929f47be8430efa7dbd91493a729d", size = 138826, upload-time = "2025-06-10T00:44:52.883Z" }, + { url = "https://files.pythonhosted.org/packages/6a/42/fc0053719b44f6ad04a75d7f05e0e9674d45ef62f2d9ad2c1163e5c05827/yarl-1.20.1-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:6f3eff4cc3f03d650d8755c6eefc844edde99d641d0dcf4da3ab27141a5f8ddf", size = 93217, upload-time = "2025-06-10T00:44:54.658Z" }, + { url = "https://files.pythonhosted.org/packages/4f/7f/fa59c4c27e2a076bba0d959386e26eba77eb52ea4a0aac48e3515c186b4c/yarl-1.20.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:69ff8439d8ba832d6bed88af2c2b3445977eba9a4588b787b32945871c2444e3", size = 92700, upload-time = "2025-06-10T00:44:56.784Z" }, + { url = 
"https://files.pythonhosted.org/packages/2f/d4/062b2f48e7c93481e88eff97a6312dca15ea200e959f23e96d8ab898c5b8/yarl-1.20.1-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3cf34efa60eb81dd2645a2e13e00bb98b76c35ab5061a3989c7a70f78c85006d", size = 347644, upload-time = "2025-06-10T00:44:59.071Z" }, + { url = "https://files.pythonhosted.org/packages/89/47/78b7f40d13c8f62b499cc702fdf69e090455518ae544c00a3bf4afc9fc77/yarl-1.20.1-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:8e0fe9364ad0fddab2688ce72cb7a8e61ea42eff3c7caeeb83874a5d479c896c", size = 323452, upload-time = "2025-06-10T00:45:01.605Z" }, + { url = "https://files.pythonhosted.org/packages/eb/2b/490d3b2dc66f52987d4ee0d3090a147ea67732ce6b4d61e362c1846d0d32/yarl-1.20.1-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8f64fbf81878ba914562c672024089e3401974a39767747691c65080a67b18c1", size = 346378, upload-time = "2025-06-10T00:45:03.946Z" }, + { url = "https://files.pythonhosted.org/packages/66/ad/775da9c8a94ce925d1537f939a4f17d782efef1f973039d821cbe4bcc211/yarl-1.20.1-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f6342d643bf9a1de97e512e45e4b9560a043347e779a173250824f8b254bd5ce", size = 353261, upload-time = "2025-06-10T00:45:05.992Z" }, + { url = "https://files.pythonhosted.org/packages/4b/23/0ed0922b47a4f5c6eb9065d5ff1e459747226ddce5c6a4c111e728c9f701/yarl-1.20.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56dac5f452ed25eef0f6e3c6a066c6ab68971d96a9fb441791cad0efba6140d3", size = 335987, upload-time = "2025-06-10T00:45:08.227Z" }, + { url = "https://files.pythonhosted.org/packages/3e/49/bc728a7fe7d0e9336e2b78f0958a2d6b288ba89f25a1762407a222bf53c3/yarl-1.20.1-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c7d7f497126d65e2cad8dc5f97d34c27b19199b6414a40cb36b52f41b79014be", size = 329361, upload-time = "2025-06-10T00:45:10.11Z" }, + { url = "https://files.pythonhosted.org/packages/93/8f/b811b9d1f617c83c907e7082a76e2b92b655400e61730cd61a1f67178393/yarl-1.20.1-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:67e708dfb8e78d8a19169818eeb5c7a80717562de9051bf2413aca8e3696bf16", size = 346460, upload-time = "2025-06-10T00:45:12.055Z" }, + { url = "https://files.pythonhosted.org/packages/70/fd/af94f04f275f95da2c3b8b5e1d49e3e79f1ed8b6ceb0f1664cbd902773ff/yarl-1.20.1-cp313-cp313t-musllinux_1_2_armv7l.whl", hash = "sha256:595c07bc79af2494365cc96ddeb772f76272364ef7c80fb892ef9d0649586513", size = 334486, upload-time = "2025-06-10T00:45:13.995Z" }, + { url = "https://files.pythonhosted.org/packages/84/65/04c62e82704e7dd0a9b3f61dbaa8447f8507655fd16c51da0637b39b2910/yarl-1.20.1-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:7bdd2f80f4a7df852ab9ab49484a4dee8030023aa536df41f2d922fd57bf023f", size = 342219, upload-time = "2025-06-10T00:45:16.479Z" }, + { url = "https://files.pythonhosted.org/packages/91/95/459ca62eb958381b342d94ab9a4b6aec1ddec1f7057c487e926f03c06d30/yarl-1.20.1-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:c03bfebc4ae8d862f853a9757199677ab74ec25424d0ebd68a0027e9c639a390", size = 350693, upload-time = "2025-06-10T00:45:18.399Z" }, + { url = "https://files.pythonhosted.org/packages/a6/00/d393e82dd955ad20617abc546a8f1aee40534d599ff555ea053d0ec9bf03/yarl-1.20.1-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:344d1103e9c1523f32a5ed704d576172d2cabed3122ea90b1d4e11fe17c66458", size = 355803, 
upload-time = "2025-06-10T00:45:20.677Z" }, + { url = "https://files.pythonhosted.org/packages/9e/ed/c5fb04869b99b717985e244fd93029c7a8e8febdfcffa06093e32d7d44e7/yarl-1.20.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:88cab98aa4e13e1ade8c141daeedd300a4603b7132819c484841bb7af3edce9e", size = 341709, upload-time = "2025-06-10T00:45:23.221Z" }, + { url = "https://files.pythonhosted.org/packages/24/fd/725b8e73ac2a50e78a4534ac43c6addf5c1c2d65380dd48a9169cc6739a9/yarl-1.20.1-cp313-cp313t-win32.whl", hash = "sha256:b121ff6a7cbd4abc28985b6028235491941b9fe8fe226e6fdc539c977ea1739d", size = 86591, upload-time = "2025-06-10T00:45:25.793Z" }, + { url = "https://files.pythonhosted.org/packages/94/c3/b2e9f38bc3e11191981d57ea08cab2166e74ea770024a646617c9cddd9f6/yarl-1.20.1-cp313-cp313t-win_amd64.whl", hash = "sha256:541d050a355bbbc27e55d906bc91cb6fe42f96c01413dd0f4ed5a5240513874f", size = 93003, upload-time = "2025-06-10T00:45:27.752Z" }, + { url = "https://files.pythonhosted.org/packages/b4/2d/2345fce04cfd4bee161bf1e7d9cdc702e3e16109021035dbb24db654a622/yarl-1.20.1-py3-none-any.whl", hash = "sha256:83b8eb083fe4683c6115795d9fc1cfaf2cbbefb19b3a1cb68f6527460f483a77", size = 46542, upload-time = "2025-06-10T00:46:07.521Z" }, ] [[package]]