diff --git a/workflows/cve-fixer/.claude/commands/cve.find.md b/workflows/cve-fixer/.claude/commands/cve.find.md index f30e71b..688284b 100644 --- a/workflows/cve-fixer/.claude/commands/cve.find.md +++ b/workflows/cve-fixer/.claude/commands/cve.find.md @@ -33,7 +33,8 @@ Report: artifacts/cve-fixer/find/cve-issues-20260226-145018.md 1. **Parse Arguments and Flags** - Parse the command arguments for the component name, optional subcomponent, and optional flags - **Supported flags:** - - `--ignore-resolved` — Exclude issues with Jira status "Resolved" from results + - `--ignore-resolved` — Exclude issues with status "Resolved" from results + - `--ignore-vex` — Exclude issues already closed as "Not a Bug" with a VEX justification - The component name is the first argument that is not a flag - The subcomponent is the second positional argument that is not a flag (optional) - If component is not provided, ask the user to type the component name @@ -51,62 +52,52 @@ Report: artifacts/cve-fixer/find/cve-issues-20260226-145018.md /cve.find "AI Evaluations" trustyai-ragas ``` -2. **Check JIRA API Token (REQUIRED - User Setup)** - - **This is the ONLY thing the user must configure manually before proceeding** +2. 
**Verify Jira Access** - - Check if JIRA_API_TOKEN and JIRA_EMAIL are set: - ```bash - if [ -z "$JIRA_API_TOKEN" ]; then - echo "ERROR: JIRA_API_TOKEN is not set" - else - echo "JIRA_API_TOKEN is set" - fi - if [ -z "$JIRA_EMAIL" ]; then - echo "ERROR: JIRA_EMAIL is not set" - else - echo "JIRA_EMAIL is set" - fi - ``` - - - **If JIRA_API_TOKEN or JIRA_EMAIL is NOT set or empty**: - - **STOP here and inform the user they need to set up both variables first** - - Provide instructions: - - **Step 1: Generate a Jira API Token** - - Go to https://id.atlassian.com/manage-profile/security/api-tokens - - Click "Create API token" - - Give it a name and copy the token - - **Step 2: Export both environment variables** - ```bash - export JIRA_API_TOKEN="your-token-here" - export JIRA_EMAIL="your-email@redhat.com" - ``` - To make it persistent, add to `~/.bashrc` or `~/.zshrc`: - ```bash - echo 'export JIRA_API_TOKEN="your-token-here"' >> ~/.bashrc - echo 'export JIRA_EMAIL="your-email@redhat.com"' >> ~/.bashrc - source ~/.bashrc - ``` - - - **After user sets the variables, verify they're exported correctly** using the check script above - - Should output: "JIRA_API_TOKEN is set" and "JIRA_EMAIL is set" - - - **Only proceed to the next steps if both JIRA_API_TOKEN and JIRA_EMAIL are set** + Secrets may be injected by the Ambient session, a secrets manager, or an MCP server — do NOT rely solely on bash env var checks. Instead, attempt a lightweight test API call and let the response determine whether credentials are available. 
+ + ```bash + JIRA_BASE_URL="https://redhat.atlassian.net" + # -w0: disable base64 line wrapping (GNU base64 wraps at 76 chars, which corrupts the Authorization header for long tokens) + AUTH=$(echo -n "${JIRA_EMAIL}:${JIRA_API_TOKEN}" | base64 -w0) + + # Retry once on network failure (curl exit code 000 = timeout/no response) + for ATTEMPT in 1 2; do + TEST_RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" -X GET \ + --connect-timeout 10 --max-time 15 \ + -H "Authorization: Basic ${AUTH}" \ + -H "Content-Type: application/json" \ + "${JIRA_BASE_URL}/rest/api/3/myself") + [ "$TEST_RESPONSE" != "000" ] && break + echo "⚠️ Network timeout on attempt ${ATTEMPT}, retrying..." + sleep 3 + done + ``` + + - **HTTP 200** → credentials valid, proceed + - **HTTP 401** → credentials missing or invalid. Note: `/rest/api/3/myself` returns 401 for all authentication failures — there is no separate 403 for this endpoint. Only now inform the user: + - Check if `JIRA_API_TOKEN` and `JIRA_EMAIL` are configured as Ambient session secrets + - If not, generate a token at https://id.atlassian.com/manage-profile/security/api-tokens and export: + + ```bash + export JIRA_API_TOKEN="your-token-here" + export JIRA_EMAIL="your-email@redhat.com" + ``` + - **HTTP 000 after retry** → persistent network issue — inform user and stop + + **Do NOT pre-check env vars with `[ -z "$JIRA_API_TOKEN" ]` and stop.** The variables may be available to the API call even if not visible to the shell check (e.g. Ambient secrets injection). 3. **Query Jira for CVE Issues** - a. Set up variables: + a. Set up variables (AUTH already set from Step 2): + ```bash COMPONENT_NAME="[from step 1]" JIRA_BASE_URL="https://redhat.atlassian.net" - JIRA_EMAIL="${JIRA_EMAIL}" - JIRA_API_TOKEN="${JIRA_API_TOKEN}" - # Jira Cloud uses Basic Auth: base64(email:api-token) - AUTH=$(echo -n "${JIRA_EMAIL}:${JIRA_API_TOKEN}" | base64) + # AUTH already constructed in Step 2 — reuse it ``` b.
Construct JQL query and execute API call: + ```bash # Normalize component name with case-insensitive lookup against mapping file # Try relative to cwd (workflow root), then repo-relative fallback @@ -156,6 +147,12 @@ Report: artifacts/cve-fixer/find/cve-issues-20260226-145018.md JQL="${JQL} AND status not in (\"Resolved\")" fi + # Append VEX filter if --ignore-vex flag was provided + # Excludes issues closed as "Not a Bug" (VEX justified) or "Obsolete" or "Won't Fix" + if [ "$IGNORE_VEX" = "true" ]; then + JQL="${JQL} AND NOT (status = \"Closed\" AND resolution in (\"Not a Bug\", \"Obsolete\", \"Won't Fix\"))" + fi + # URL-encode the JQL query for the GET request ENCODED_JQL=$(python3 -c "import urllib.parse; print(urllib.parse.quote('''${JQL}'''))") diff --git a/workflows/cve-fixer/.claude/commands/cve.fix.md b/workflows/cve-fixer/.claude/commands/cve.fix.md index d2bd3de..98e828c 100644 --- a/workflows/cve-fixer/.claude/commands/cve.fix.md +++ b/workflows/cve-fixer/.claude/commands/cve.fix.md @@ -140,34 +140,141 @@ Summary: - The CVE scan in Step 5 acts as the safety net — it will skip repos where the CVE doesn't exist - Log a warning: "⚠️ Could not extract container from summary — processing all component repos" - **3.3: For each target repo, gather:** - - Repository name (e.g., "opendatahub-io/odh-dashboard") - - Default branch (e.g., "main") - - Active release branches (e.g., ["v2.29.0-fixes", "v2.28.0-fixes", "rhoai-3.0"]) - - Primary target branch for CVE fixes (from `cve_fix_workflow.primary_target`) - - Backport targets from `cve_fix_workflow` - - Repository type (monorepo vs single package) - - Repo type: upstream or downstream (from `repo_type` field, defaults to upstream if absent) - - **Multi-repo strategy**: When a container chain has upstream, midstream, and downstream repos: - - Fix upstream first, then apply the same fix to midstream and downstream - - Each repo gets its own clone, branch, PR, and verification cycle - - Steps 4 through 11 are 
repeated for EACH repository in the list + **3.3: For each target repo, determine target branches:** + + The branches to fix depend on `repo_type`: + + - **`upstream` or `midstream`**: target `default_branch` only (e.g., `main`) + - Fixes flow forward from there — no backports needed at this level + - **`downstream`**: target `default_branch` AND every branch in `active_release_branches` + - Each branch gets its own separate PR — never combine multiple branches in one PR + - If `active_release_branches` is empty, target `default_branch` only + + ```bash + # Determine target branches per repo — deduplicate to avoid processing DEFAULT_BRANCH twice + if [ "$REPO_TYPE" = "downstream" ]; then + ALL_BRANCHES=("$DEFAULT_BRANCH" "${ACTIVE_RELEASE_BRANCHES[@]}") + TARGET_BRANCHES=($(printf '%s\n' "${ALL_BRANCHES[@]}" | awk '!seen[$0]++')) + else + TARGET_BRANCHES=("$DEFAULT_BRANCH") + fi + ``` + + **Example for llm-d inference-scheduler:** + ``` + upstream llm-d/llm-d-inference-scheduler → PR against: main + midstream opendatahub-io/llm-d-inference-scheduler → PR against: main + downstream red-hat-data-services/llm-d-inference-scheduler → PRs against: + - main + - rhoai-3.3 + - rhoai-3.4 + - rhoai-3.4-ea.1 + - rhoai-3.4-ea.2 + ``` + + **Multi-repo + multi-branch strategy**: + - Fix upstream repos first, then midstream, then downstream + - For downstream: Steps 4 through 11 repeat for EACH branch independently + - Each branch produces its own fix branch including the target branch name to avoid collisions: + `fix/cve-YYYY-XXXXX-<package>-<target-branch>-attempt-1` + e.g. `fix/cve-2025-66418-urllib3-rhoai-3.4-attempt-1` + - Never combine fixes for multiple branches into a single PR 4.
**Clone or Use Existing Repository** - - Always use `/tmp` for repository operations with unique dirs per repo - - For each repo, extract `REPO_ORG` and `REPO_NAME` from `github_url`, set `REPO_DIR="/tmp/${REPO_ORG}/${REPO_NAME}"` - - If `$REPO_DIR` exists, `cd` into it; otherwise `mkdir -p "/tmp/${REPO_ORG}"`, `git clone` the URL, `cd` in, and `git checkout` the target branch - - **Configure git credentials** immediately after clone (needed for push): - 1. `gh auth setup-git` (if `gh` is authenticated) - 2. Else set `credential.helper` using `$GITHUB_TOKEN` or `$GH_TOKEN` - 3. Else switch remote to SSH if `~/.ssh/id_rsa` or `id_ed25519` exists - 4. Else warn: no credentials configured, push will fail - - **Multi-repo example**: - ```bash - # Upstream: /tmp/opendatahub-io/models-as-a-service (branch: main) - # Downstream: /tmp/red-hat-data-services/models-as-a-service (branch: rhoai-3.0) - ``` + + **4.0: GitHub Authentication Setup** + + Use a **Classic Personal Access Token (PAT)** with `repo` scope. This is the most reliable option across all repo types (upstream external orgs, midstream ODH, downstream RHDS): + + ```bash + # Recommended: Classic PAT with repo scope + # Generate at: https://github.com/settings/tokens (classic) + # Required scopes: repo (full control of private repositories) + export GITHUB_TOKEN="ghp_your_classic_pat_here" + # gh reads GITHUB_TOKEN from the environment automatically; `gh auth login` + # refuses to run while GITHUB_TOKEN is set, so only wire git up as a credential helper: + gh auth setup-git + ``` + + **Why a Classic PAT?** + - Fine-grained PATs require org-level approval for `red-hat-data-services`, `opendatahub-io`, etc.
- The Ambient Code GitHub App only covers repos where it is installed — it will NOT work for upstream repos like `llm-d/*`, `eval-hub/*`, `trustyai-explainability/*` + - Classic PAT with `repo` scope works immediately for any repo you are a member of + + **4.1: Clone and detect write access** + + For each repo: + ```bash + REPO_ORG=$(echo "$GITHUB_URL" | sed 's|https://github.com/||' | cut -d/ -f1) + # Strip an optional trailing .git from the repo name + REPO_NAME=$(echo "$GITHUB_URL" | sed 's|https://github.com/||' | cut -d/ -f2 | sed 's/\.git$//') + REPO_FULL="${REPO_ORG}/${REPO_NAME}" + REPO_DIR="/tmp/${REPO_ORG}/${REPO_NAME}" + + # Clone the repo (full clone: the per-branch worktrees in Step 4.3 need the + # release branches, and a shallow --depth=1 clone only fetches the default branch) + mkdir -p "/tmp/${REPO_ORG}" + gh repo clone "$REPO_FULL" "$REPO_DIR" 2>/dev/null || \ + git clone "https://github.com/${REPO_FULL}.git" "$REPO_DIR" + + # Check if user has write (push) access + PUSH_ACCESS=$(gh api repos/${REPO_FULL} --jq '.permissions.push' 2>/dev/null) + ``` + + **4.2: Fork fallback if no write access** + + If `PUSH_ACCESS` is `false` or the push fails: + ```bash + # Create a fork under the authenticated user's account + gh repo fork "$REPO_FULL" --clone=false + + FORK_USER=$(gh api user --jq '.login') + FORK_REPO="${FORK_USER}/${REPO_NAME}" + + # Add fork as a remote + cd "$REPO_DIR" + git remote add fork "https://github.com/${FORK_REPO}.git" + + # Push fix branch to fork, PR targets the original repo + git push fork "$FIX_BRANCH" + gh pr create --repo "$REPO_FULL" --head "${FORK_USER}:${FIX_BRANCH}" \ + --base "$TARGET_BRANCH" --title "..." --body "..." + ``` + + This is common for upstream repos (`llm-d/*`, `eval-hub/*`, `trustyai-explainability/*`) where the user doesn't have direct write access.
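Only the `--head` value passed to `gh pr create` differs between the direct-push and fork paths. A minimal sketch of that decision, using a hypothetical `pr_head` helper (not part of the workflow files):

```bash
# Hypothetical helper: compute the --head value for `gh pr create`.
# Direct push: head is just the branch name.
# Fork push: gh needs the "owner:branch" form so the PR still targets the original repo.
pr_head() {
  push_access="$1"   # "true" when the token has push access (from .permissions.push)
  fork_user="$2"     # authenticated user's login, i.e. the fork owner
  branch="$3"        # fix branch name
  if [ "$push_access" = "true" ]; then
    printf '%s\n' "$branch"
  else
    printf '%s:%s\n' "$fork_user" "$branch"
  fi
}

pr_head true  alice fix/cve-2025-66418-urllib3-main-attempt-1
# → fix/cve-2025-66418-urllib3-main-attempt-1
pr_head false alice fix/cve-2025-66418-urllib3-main-attempt-1
# → alice:fix/cve-2025-66418-urllib3-main-attempt-1
```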
+ + **4.3: Branch loop — isolated worktree per branch** + + To prevent cross-branch contamination (uncommitted files, lockfile drift, tool artifacts), + use a separate worktree for each target branch rather than switching branches in the same dir: + + ```bash + for TARGET_BRANCH in "${TARGET_BRANCHES[@]}"; do + BRANCH_DIR="/tmp/${REPO_ORG}/${REPO_NAME}-${TARGET_BRANCH//\//-}" + git -C "$REPO_DIR" worktree add "$BRANCH_DIR" "$TARGET_BRANCH" + cd "$BRANCH_DIR" + git pull origin "$TARGET_BRANCH" + # Steps 5–11 run here — fix, test, push, PR + FIX_BRANCH="fix/cve-${CVE_ID}-${PACKAGE}-${TARGET_BRANCH//\//-}-attempt-1" + git -C "$REPO_DIR" worktree remove "$BRANCH_DIR" --force # cleanup after PR + done + ``` + + Each worktree is fully isolated — no shared index or working tree state between branches. + + **Example output for downstream with write access (5 separate PRs):** + + ``` + Repo: red-hat-data-services/llm-d-inference-scheduler + ├── worktree: main → fix branch: fix/cve-2025-66418-urllib3-main-attempt-1 + ├── worktree: rhoai-3.3 → fix branch: fix/cve-2025-66418-urllib3-rhoai-3.3-attempt-1 + ├── worktree: rhoai-3.4 → fix branch: fix/cve-2025-66418-urllib3-rhoai-3.4-attempt-1 + ├── worktree: rhoai-3.4-ea.1 → fix branch: fix/cve-2025-66418-urllib3-rhoai-3.4-ea.1-attempt-1 + └── worktree: rhoai-3.4-ea.2 → fix branch: fix/cve-2025-66418-urllib3-rhoai-3.4-ea.2-attempt-1 + ``` + + **Example output for upstream with no write access (fork, 1 PR):** + + ``` + Repo: llm-d/llm-d-inference-scheduler (no write access → fork) + └── worktree: main → push to /llm-d-inference-scheduler → PR targeting llm-d/... main + ``` 4.5. **Load Global Fix Guidance from `.cve-fix/` Folder** - Runs ONCE after all repos are cloned, BEFORE any fixes. Builds a global knowledge base from `.cve-fix/` folders across all cloned repos. 
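The fix-branch naming scheme used in Steps 3.3 and 4.3 can be factored into one helper. A POSIX-sh sketch (the `fix_branch_name` function is illustrative, not part of the workflow files); mapping `/` to `-` keeps a branch like `release/1.2` usable as both a ref component and a worktree directory suffix:

```bash
# Illustrative helper mirroring the naming scheme above.
fix_branch_name() {
  cve="$1"; pkg="$2"; target="$3"; attempt="${4:-1}"
  # Map "/" in the target branch to "-" (same effect as bash's ${target//\//-})
  safe_target=$(printf '%s' "$target" | tr '/' '-')
  printf 'fix/cve-%s-%s-%s-attempt-%s\n' "$cve" "$pkg" "$safe_target" "$attempt"
}

fix_branch_name 2025-66418 urllib3 rhoai-3.4
# → fix/cve-2025-66418-urllib3-rhoai-3.4-attempt-1
fix_branch_name 2025-66418 urllib3 release/1.2 2
# → fix/cve-2025-66418-urllib3-release-1.2-attempt-2
```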
@@ -305,18 +412,166 @@ Summary: - **Package found at a version** → compare against CVE affected version range - If version is in affected range → proceed with fix - If version is already patched → mark as already fixed (see below) - - **Package not found in any manifest** → it may be transitive or RPM-installed + - **Package not found in any manifest** → check whether it comes from the base image (Step 5.2.1b below) before falling back to transitive/RPM handling + + **5.2.1b: Base image check (when package not found in application manifests)** + + The package may be pre-installed in the container's base image rather than declared by the application. + + ```bash + # Find Dockerfile.konflux (or Dockerfile) in the repo root + DOCKERFILE=$(ls Dockerfile.konflux Dockerfile.konflux.* Dockerfile 2>/dev/null | head -1) + + if [ -n "$DOCKERFILE" ]; then + # Extract FROM line(s) — may be multiple stages + BASE_IMAGES=$(grep -E '^FROM ' "$DOCKERFILE" | awk '{print $2}') + echo "Base images in use: $BASE_IMAGES" + fi + ``` + + **For each base image found:** + ```bash + for BASE_IMAGE in $BASE_IMAGES; do + REGISTRY=$(echo "$BASE_IMAGE" | cut -d/ -f1) + IMAGE_REF=$(echo "$BASE_IMAGE" | sed 's/:.*$//') # strip tag + CURRENT_TAG=$(echo "$BASE_IMAGE" | grep -oP '(?<=:)[^@]+' || echo "latest") + + echo "Checking for newer tags of: $IMAGE_REF (current: $CURRENT_TAG)" + + # List available tags (works for quay.io, registry.access.redhat.com, etc.) + SKOPEO_OUTPUT=$(skopeo list-tags "docker://${IMAGE_REF}" 2>&1) + SKOPEO_EXIT=$? 
+ if [ $SKOPEO_EXIT -ne 0 ]; then + echo "⚠️ skopeo list-tags failed for ${IMAGE_REF}: ${SKOPEO_OUTPUT}" + echo "⚠️ Skipping base image update check — treating as no newer tag available" + continue + fi + AVAILABLE_TAGS=$(echo "$SKOPEO_OUTPUT" | jq -r '.Tags[]' 2>/dev/null | sort -V) + + # Find tags newer than current using sort -V (semantic version aware) + NEWER_TAGS=$(printf '%s\n' "$AVAILABLE_TAGS" | sort -V | \ + awk -v cur="$CURRENT_TAG" 'found{print} $0==cur{found=1}') + done + ``` + + **Interpret base image check results:** + + - **Newer base image tag available** → the fix may already be in a newer tag. Update the `FROM` line in the Dockerfile: + ```bash + LATEST_TAG=$(echo "$AVAILABLE_TAGS" | tail -1) + sed -i "s|${BASE_IMAGE}|${IMAGE_REF}:${LATEST_TAG}|g" "$DOCKERFILE" + ``` + - Create a PR with this change + - PR title: `fix(cve): CVE-YYYY-XXXXX — update base image to ${LATEST_TAG}` + - PR note: "⚠️ This CVE is in the base image layer, not application code. Updated base image from `${CURRENT_TAG}` to `${LATEST_TAG}`. Verify the new tag includes the fix for `${PACKAGE}` before merging." + - **Stop here — do not attempt application manifest fixes** + + - **No newer base image tag available** → base image hasn't been updated yet: + - Do NOT create a PR (no code change to make) + - Add Jira comment: "CVE is in the base image layer (`${BASE_IMAGE}`). No updated base image tag is currently available. The base image team (e.g. AIPCC for `quay.io/aipcc/*`) needs to release an updated image before this can be resolved." + - Document in `artifacts/cve-fixer/fixes/base-image-pending-CVE-YYYY-XXXXX.md` + - Print: "⚠️ CVE-YYYY-XXXXX is in base image ${BASE_IMAGE} — no fix available yet. Jira comment added." + - **Stop here — skip VEX justification and PR creation** + + - **No Dockerfile found** → package may be a transitive or RPM dependency. 
Fall back to original guidance: - **Do NOT blindly add a direct dependency** — this can cause version conflicts or unnecessary bloat - Instead, document the situation and create PR with guidance: - **Go**: transitive deps require a `replace` directive in go.mod — add it only if intentional - - **Python**: adding to requirements.txt may conflict with what pip resolves transitively; prefer updating the parent package that pulls it in + - **Python**: prefer updating the parent package that pulls it in; use `pip-compile` to trace the dependency - **Node**: use npm `overrides` to force a safe version without adding a direct dep - - Include note in PR: "⚠️ Package not found directly in manifests — may be a transitive or RPM-installed dependency. Manual review required to confirm the right fix approach." - - **Both scan AND version check find nothing** → mark as already fixed: - - **DO NOT create a PR** - - **Print to stdout**: "✅ CVE-YYYY-XXXXX is already fixed in [repository] ([branch]). No action needed." - - **Document in artifacts**: `artifacts/cve-fixer/fixes/already-fixed-CVE-YYYY-XXXXX.md` - - **Note**: Jira ticket may need manual closure + - Include note in PR: "⚠️ Package not found in application manifests — may be a transitive or RPM-installed dependency. Manual review required." + + - **Both scan AND version check find nothing AND no base image issue** → CVE not present in this repo. Do NOT create a PR. + Determine the appropriate VEX "Not Affected" justification and add it to the Jira issue: + + **5.2.2: Determine VEX justification** + + The following justifications can be auto-determined by the workflow: + + | # | Justification | Auto-detectable? 
| How | + |---|---|---|---| + | 1 | **Component not Present** | ✅ Yes | `PACKAGE` not found in any manifest (requirements.txt, go.mod, package.json) | + | 2 | **Vulnerable Code not Present** | ✅ Yes | Package found in manifest at a non-vulnerable version | + | 3 | **Vulnerable Code not in Execute Path** | ✅ Yes (Go only) | govulncheck finds the module but reports the vulnerable symbol is not called | + | 4 | **Vulnerable Code cannot be Controlled by Adversary** | ❌ Human judgment | Requires understanding the attack surface | + | 5 | **Inline Mitigations already Exist** | ❌ Human judgment | Requires knowing codebase protections | + + ```bash + VEX_JUSTIFICATION="" + VEX_EVIDENCE="" + + # Check 1: Component not Present + if [ -z "$(grep -ri "${PACKAGE}" requirements*.txt setup.py pyproject.toml go.mod package.json 2>/dev/null)" ]; then + VEX_JUSTIFICATION="Component not Present" + VEX_EVIDENCE="Package '${PACKAGE}' not found in any dependency manifest (requirements.txt, go.mod, package.json)" + + # Check 2: Vulnerable Code not Present — package present but at a non-vulnerable version. + # PACKAGE_VERSION is set during the manifest check in Step 5.2.1. + # Compare using sort -V: if the installed version sorts at or above the fixed version, it is safe + # (equality counts as safe, since the fixed version itself contains the fix). + elif [ -n "$PACKAGE_VERSION" ] && [ -n "$CVE_FIXED_VERSION" ]; then + HIGHER=$(printf '%s\n' "$PACKAGE_VERSION" "$CVE_FIXED_VERSION" | sort -V | tail -1) + if [ "$HIGHER" = "$PACKAGE_VERSION" ]; then + VEX_JUSTIFICATION="Vulnerable Code not Present" + VEX_EVIDENCE="Package '${PACKAGE}' present at version ${PACKAGE_VERSION} which is >= fixed version ${CVE_FIXED_VERSION}" + fi + + # Check 3: Vulnerable Code not in Execute Path (Go only — govulncheck call graph analysis) + # govulncheck prints an "Informational" block when a module is in the dep tree but the + # vulnerable symbol is not reachable. Look for the package name in Informational output.
+ elif echo "$SCAN_OUTPUT" | grep -q "Informational" && \ + echo "$SCAN_OUTPUT" | grep -A5 "Informational" | grep -qi "${PACKAGE}"; then + VEX_JUSTIFICATION="Vulnerable Code not in Execute Path" + VEX_EVIDENCE="govulncheck found module ${PACKAGE} in dependency tree but reported it as Informational — vulnerable symbol is not called in the code path" + fi + ``` + + **If justification auto-determined (cases 1, 2, 3):** + - Add a comment to the Jira issue with the justification and evidence + - Do NOT auto-close the issue — leave closing to the human reviewer + - Document in `artifacts/cve-fixer/fixes/vex-justified-CVE-YYYY-XXXXX.md` + - Print: "✅ CVE-YYYY-XXXXX not present in [repo]. VEX justification added to [JIRA-KEY]: [justification]" + + ```bash + # Add Jira comment with VEX justification — use jq to safely build JSON (avoids injection) + COMMENT_TEXT="VEX Justification (auto-detected by CVE fixer workflow) + +Justification: ${VEX_JUSTIFICATION} +Evidence: ${VEX_EVIDENCE} +Repository: ${REPO_FULL} +Branch: ${TARGET_BRANCH} +Scan date: $(date -u +%Y-%m-%dT%H:%M:%SZ) + +This issue can be closed as 'Not a Bug / ${VEX_JUSTIFICATION}' if the above evidence is satisfactory." 
+ + COMMENT_JSON=$(jq -n --arg body "$COMMENT_TEXT" '{"body": $body}') + + # Post comment via the Jira API. Use the v2 endpoint: v3 requires the comment + # body in Atlassian Document Format (ADF), while v2 accepts a plain string. + # Reuse AUTH from Step 2 if still set; -w0 prevents base64 line wrapping. + AUTH=$(echo -n "${JIRA_EMAIL}:${JIRA_API_TOKEN}" | base64 -w0) + curl -s -X POST \ + -H "Authorization: Basic ${AUTH}" \ + -H "Content-Type: application/json" \ + -d "$COMMENT_JSON" \ + "${JIRA_BASE_URL}/rest/api/2/issue/${JIRA_KEY}/comment" + ``` + + **If justification cannot be auto-determined (cases 4, 5):** + - Do NOT add a Jira comment automatically + - Document in `artifacts/cve-fixer/fixes/vex-needs-human-review-CVE-YYYY-XXXXX.md` with: + - CVE ID, Jira key, repo, branch + - Scan output showing CVE not found + - Note: "Requires human judgment — select one of: 'Vulnerable Code cannot be Controlled by Adversary' or 'Inline Mitigations already Exist'" + - Print: "⚠️ CVE-YYYY-XXXXX not found in [repo] but justification requires human judgment — see artifacts" + + **In interactive mode:** when justification can't be auto-determined, prompt the user: + ``` + CVE-YYYY-XXXXX not found in scan. Select VEX justification for [JIRA-KEY]: + 1. Component not Present (auto-detected: not applicable here) + 2. Vulnerable Code not Present (auto-detected: not applicable here) + 3. Vulnerable Code not in Execute Path (auto-detected: not applicable here) + 4. Vulnerable Code cannot be Controlled by Adversary ← human judgment required + 5. Inline Mitigations already Exist ← human judgment required + 6.
Skip — don't add justification yet + ``` - Only skip the CVE entirely when BOTH the scan AND the direct package check find no evidence of the vulnerability @@ -918,8 +1173,17 @@ EOF - Jira issue references - PR URL for the created pull request -- **Already Fixed Report**: `artifacts/cve-fixer/fixes/already-fixed-CVE-YYYY-XXXXX.md` (if CVE was already fixed) - - CVE ID and repository checked +- **Already Fixed Report**: `artifacts/cve-fixer/fixes/already-fixed-CVE-YYYY-XXXXX.md` (if CVE confirmed not present via both scan and package check) + - CVE ID, repository, and scan evidence + +- **VEX Justified Report**: `artifacts/cve-fixer/fixes/vex-justified-CVE-YYYY-XXXXX.md` (if auto-detected VEX justification added to Jira) + - CVE ID, Jira key, justification type, evidence, scan output + +- **VEX Human Review Report**: `artifacts/cve-fixer/fixes/vex-needs-human-review-CVE-YYYY-XXXXX.md` (if VEX justification requires human judgment) + - CVE ID, Jira key, scan output, and recommended justification options (4 or 5) + +- **Base Image Pending Report**: `artifacts/cve-fixer/fixes/base-image-pending-CVE-YYYY-XXXXX.md` (if CVE is in base image and no newer tag available) + - CVE ID, base image reference, Jira comment added - Scan results showing CVE is not present - Timestamp of verification - Note about Jira ticket requiring manual closure
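The `sort -V` comparison behind the "Vulnerable Code not Present" check (Step 5.2.2, Check 2) can be isolated into a tiny helper for clarity. A sketch with a hypothetical `version_is_fixed` function; note that GNU `sort -V` orders a pre-release suffix like `2.2.2rc1` *after* `2.2.2`, so pre-release versions deserve manual review:

```bash
# Hypothetical helper: a version is treated as fixed when it sorts
# at or above the fixed version (equality means the fix is included).
version_is_fixed() {
  installed="$1"; fixed="$2"
  highest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | tail -1)
  [ "$highest" = "$installed" ]
}

version_is_fixed 2.2.2 2.2.2 && echo safe          # prints: safe (equality)
version_is_fixed 2.5.0 2.2.2 && echo safe          # prints: safe (newer than fixed)
version_is_fixed 2.0.7 2.2.2 || echo vulnerable    # prints: vulnerable (older)
```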