
feat: wire blast-radius to /v1/analysis/impact endpoint #25

Merged
jonathanpopham merged 6 commits into supermodeltools:main from jonathanpopham:feat/impact-api-endpoint
Apr 3, 2026

Conversation

@jonathanpopham
Contributor

@jonathanpopham jonathanpopham commented Apr 2, 2026

Closes #24

Summary

Replaces naive import-edge BFS with the API's dedicated impact analysis endpoint (POST /v1/analysis/impact), which provides risk scoring, affected functions, entry points, and diff-based analysis.

Before (main)

AFFECTED FILE   HOPS
AuthController  1
SecurityConfig  1
...
25 file(s) affected.

Just file names and hop count from import edges.

After (this PR)

$ supermodel blast-radius AuthConstants.java

Target: src/control-plane/.../AuthConstants.java
Risk: critical  |  Direct: 110  |  Transitive: 280  |  Files: 25
  → High fan-in (110 direct callers)
  → Affects 2 entry point(s)
  → Broad blast radius (25 files)
  → Crosses 3 domain boundaries

AFFECTED FILE                              DIRECT  TRANSITIVE
KeyIssuanceServiceTest.java                33      0
DataPlaneApiDelegateImplTest.java          0       30
...

Entry points affected:
AuthController       route_handler
DashboardOAuth2...   route_handler

Three modes

  • supermodel blast-radius <file> — analyze specific target(s)
  • supermodel blast-radius --diff changes.diff — analyze from git diff
  • supermodel blast-radius — global coupling map

Note

Should also be wirable from supermodel audit as one of the quality checks.

Test plan

  • go test ./... — all pass (4 unit tests + integration tests updated)
  • Live: blast-radius AuthConstants.java → risk=critical, 25 files, 2 entry points
  • JSON output works

Summary by CodeRabbit

  • New Features

    • Command accepts multiple targets and an additional alias for "impact"
    • Added --diff to analyze impact from a unified diff
    • Impact analysis is now API-backed and returns full JSON (risk scores, affected files/entry points, global metrics)
  • Improvements

    • Human output shows per-target risk scores, affected-file and entry-point tables, and a "Most critical files" view
    • Removed --depth and replaced local traversal with API-driven workflow
  • Tests

    • Updated tests to validate new JSON/human output and API integration

@coderabbitai

coderabbitai bot commented Apr 2, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

The blast-radius CLI is rewired to call the /v1/analysis/impact API: it packages the repo into a ZIP (optionally with a diff), posts an async impact job (with targets and idempotency key), polls until completion, and renders the API-provided ImpactResult (human or JSON). Local graph BFS, path-normalisation helpers, and depth traversal were removed.

Changes

Cohort / File(s) Summary
API Client & Types
internal/api/client.go, internal/api/types.go
Added Client.Impact(...) which posts ZIP + optional diff to /v1/analysis/impact with async polling, plus new exported types (ImpactResult, ImpactMetadata, ImpactTarget, ImpactTargetInfo, BlastRadius, AffectedFunction, AffectedFile, AffectedEntryPoint, ImpactGlobalMetrics, CriticalFileMetric, CrossDomainDependency).
CLI Command
cmd/blastradius.go
Cobra command updated to accept one-or-more positional args ([file...]), added alias impact, removed --depth, added --diff, and now passes full args slice to the handler.
Blast-radius Handler
internal/blastradius/handler.go
Rewrote Run to accept []string targets and Diff option; removed local graph traversal and Result model; creates repo ZIP, computes cache hash/idempotency key, calls client.Impact, and prints *api.ImpactResult (human/JSON). Options updated (removed Depth, added Diff).
Repo Archiving
internal/blastradius/zip.go
New zip utility: createZip, isGitRepo, gitArchive, walkZip, with skipDirs and rules to skip dotfiles and >10MB files; prefers git archive, falls back to filesystem walk.
Tests
internal/blastradius/handler_test.go, internal/blastradius/integration_test.go, internal/blastradius/normalise_test.go
Removed older graph/path-matching and normalise tests; added printResults human/JSON unit tests and integration test calling client.Impact. Integration tests changed to pass explicit target slices and include a global-coupling run. Entire normalise_test.go deleted.
Other internal changes
internal/blastradius/...
Deleted graph-related helpers (pathMatches, findBlastRadius, normalise), removed local Result type and traversal code, updated imports and tests to reflect API-driven flow.

Sequence Diagram

sequenceDiagram
    rect rgba(200,220,255,0.5)
    actor User
    end
    participant CLI as CLI Command
    participant Handler as BlastRadius Handler
    participant Zipper as Zip Utility
    participant Client as API Client
    participant API as /v1/analysis/impact

    User->>CLI: supermodel blast-radius main.go
    CLI->>Handler: Run(ctx, cfg, dir, [targets], opts)
    Handler->>Zipper: createZip(dir)
    Zipper->>Zipper: isGitRepo()?
    alt Git repo
        Zipper->>Zipper: gitArchive()
    else Fallback
        Zipper->>Zipper: walkZip() + filters
    end
    Zipper-->>Handler: zipPath
    Handler->>Handler: HashFile(zipPath) -> idempotencyKey
    Handler->>Client: Impact(ctx, zipPath, idempKey, targets, diffPath)
    loop Poll until done
        Client->>API: POST /v1/analysis/impact (ZIP + optional diff, targets)
        API-->>Client: {status: pending|processing|complete, ...}
    end
    Client-->>Handler: ImpactResult{Impacts, GlobalMetrics, Metadata}
    Handler->>Handler: printResults(ImpactResult, format)
    Handler-->>User: Formatted output (human/JSON)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • greynewell

Poem

Zip the tree, post the call,
Polling waits while servers scrawl.
Targets set, the API sings,
Files and entry points take wings.
A new blast-radius — bells and things.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 42.11%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Title check ✅ Passed: the title accurately describes the main change, wiring the blast-radius command to the API's /v1/analysis/impact endpoint instead of the prior local BFS approach.
  • Description check ✅ Passed: the description is comprehensive and well structured, with a summary of the changes, before/after examples of the improved output, the three modes, a test plan with checkmarks, and a note on future supermodel audit integration.
  • Linked Issues check ✅ Passed: all coding requirements from issue #24 are met; the Impact() method is added to api.Client with async polling, response types (ImpactResult, BlastRadius, AffectedFunction, AffectedFile, etc.) are defined, handler.go is rewritten to call the endpoint, the --diff flag is supported, and output surfaces risk scores, affected functions, and entry points across three modes.
  • Out of Scope Changes check ✅ Passed: all changes are directly aligned with the linked issue; the PR modifies only the blast-radius command and its supporting infrastructure (API client, types, handlers, tests), with no unrelated files or features altered.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (2)
internal/api/client.go (1)

210-224: Polling logic is duplicated — optional DRY refactor

This polling loop is pretty much identical to the one in DeadCode() (lines 137-152) and pollUntilComplete() (lines 86-100). Totally works, but if you ever need to tweak the polling behavior (like adding exponential backoff), you'd have to change it in three places.

Not blocking — just something to consider for a future cleanup pass. You could extract a generic pollJob helper that takes a "post" function and returns the completed job.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/api/client.go` around lines 210 - 224, The polling loop in
postImpact (the for loop checking job.Status and waiting using job.RetryAfter)
is duplicated across DeadCode() and pollUntilComplete(); extract a reusable
helper (e.g., pollJob) that accepts a context and a callback function (the
"post" function such as c.postImpact or the equivalent used by
DeadCode/pollUntilComplete) and returns the completed job or error, then replace
the duplicated loops in postImpact, DeadCode, and pollUntilComplete to call
pollJob(ctx, postFunc, initialJob) so all retry/wait logic (including the
RetryAfter default) is centralized.
internal/blastradius/zip.go (1)

44-48: Silent fallback when gitArchive fails — consider logging

Hey, so when gitArchive fails, the error just gets swallowed and we fall through to walkZip. This is fine behavior-wise (the fallback is the right call), but it might be helpful to log a debug message when this happens. Otherwise, if someone's wondering why their archive is huge or missing .gitignore-ignored files, there's no breadcrumb.

Totally optional — just a heads up for future debugging.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/blastradius/zip.go` around lines 44 - 48, When isGitRepo(dir) is
true and gitArchive(dir, dest) returns an error, currently the error is
swallowed and we fall back to walkZip; change this to log a debug-level message
including the error and the dir/dest context before falling back. Update the
gitArchive call path in zip.go (the block referencing isGitRepo and gitArchive)
to capture the returned error, call the package logger (e.g., debug or log) with
a concise message plus the error and repository path, then continue to the
existing walkZip fallback.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/api/client.go`:
- Around line 199-203: The Impact method is building the request URL by
concatenating the targets query param raw, which breaks on spaces/special chars;
update Client.Impact to URL-encode the targets (e.g., use url.QueryEscape or
build query with url.Values) before appending so endpoint becomes impactEndpoint
+ "?" + encoded query; ensure you reference the targets string and the Impact
function when making the change and preserve existing logic when targets == "".

In `@internal/api/types.go`:
- Around line 220-227: Struct field alignment is inconsistent in the
ImpactTarget definition (and other structs such as AffectedFile and
ImpactGlobalMetrics); fix it by formatting the file with gofmt (run gofmt -w on
the source file containing ImpactTarget) so types and json tags align
consistently, then verify ImpactTarget, AffectedFile, and ImpactGlobalMetrics
fields are properly aligned and commit the formatted file.

In `@internal/blastradius/handler.go`:
- Around line 38-44: The handler currently passes targets as a comma-joined
string to client.Impact which will break for filenames containing commas (see
the call in handler.go: result, err := client.Impact(..., strings.Join(targets,
","), ...)). Fix this by validating targets before calling client.Impact:
iterate over the targets slice and if any target contains a comma return a clear
error (e.g., "target contains comma; use API array-style or rename file") so
callers fail fast; alternatively, if the api package can be changed, update
client.Impact to accept a []string and encode it as repeated query params
(targets[]=...) instead of joining with commas.
- Around line 69-105: The loop over result.Impacts copies a large (216-byte)
Impact struct each iteration; change the iteration to use the index and take a
pointer to the slice element to avoid copying. Replace "for _, impact := range
result.Impacts {" with something like "for i := range result.Impacts { impact :=
&result.Impacts[i]" and keep the existing references (impact.Target,
impact.AffectedFiles, impact.EntryPointsAffected, etc.) so the code operates on
the pointer without allocating large copies each iteration.

In `@internal/blastradius/zip.go`:
- Around line 82-110: The directory walk in the anonymous func passed to
filepath.Walk should skip symbolic links to avoid directory traversal and
infinite loops; add a symlink check (info.Mode()&os.ModeSymlink != 0) after the
info.IsDir() branch in the Walk callback so symlinks are treated like hidden
files (return nil) and not descended into or archived; mirror the same pattern
used in internal/factory/zip.go to locate the exact implementation to copy.

---

Nitpick comments:
In `@internal/api/client.go`:
- Around line 210-224: The polling loop in postImpact (the for loop checking
job.Status and waiting using job.RetryAfter) is duplicated across DeadCode() and
pollUntilComplete(); extract a reusable helper (e.g., pollJob) that accepts a
context and a callback function (the "post" function such as c.postImpact or the
equivalent used by DeadCode/pollUntilComplete) and returns the completed job or
error, then replace the duplicated loops in postImpact, DeadCode, and
pollUntilComplete to call pollJob(ctx, postFunc, initialJob) so all retry/wait
logic (including the RetryAfter default) is centralized.

In `@internal/blastradius/zip.go`:
- Around line 44-48: When isGitRepo(dir) is true and gitArchive(dir, dest)
returns an error, currently the error is swallowed and we fall back to walkZip;
change this to log a debug-level message including the error and the dir/dest
context before falling back. Update the gitArchive call path in zip.go (the
block referencing isGitRepo and gitArchive) to capture the returned error, call
the package logger (e.g., debug or log) with a concise message plus the error
and repository path, then continue to the existing walkZip fallback.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4f84d868-0b14-4a26-8607-e34fa6f6b7c3

📥 Commits

Reviewing files that changed from the base of the PR and between 1a30bcc and e88165c.

📒 Files selected for processing (8)
  • cmd/blastradius.go
  • internal/api/client.go
  • internal/api/types.go
  • internal/blastradius/handler.go
  • internal/blastradius/handler_test.go
  • internal/blastradius/integration_test.go
  • internal/blastradius/normalise_test.go
  • internal/blastradius/zip.go
💤 Files with no reviewable changes (1)
  • internal/blastradius/normalise_test.go

Comment on lines +199 to +203
func (c *Client) Impact(ctx context.Context, zipPath, idempotencyKey, targets, diffPath string) (*ImpactResult, error) {
	endpoint := impactEndpoint
	if targets != "" {
		endpoint += "?targets=" + targets
	}

⚠️ Potential issue | 🟠 Major

URL encoding missing for targets query param — could break with special characters

Hey, so if someone passes a file path with spaces or special characters (like src/my file.ts or src/foo&bar.ts), this will produce a malformed URL. The targets string is concatenated directly without escaping.

Quick example of what goes wrong:

  • Input: targets = "src/my file.ts"
  • Result: /v1/analysis/impact?targets=src/my file.ts (broken URL)

You'll want to use url.QueryEscape(targets) or build the URL properly with url.Values.

🐛 Proposed fix
+import "net/url"
+
 func (c *Client) Impact(ctx context.Context, zipPath, idempotencyKey, targets, diffPath string) (*ImpactResult, error) {
 	endpoint := impactEndpoint
 	if targets != "" {
-		endpoint += "?targets=" + targets
+		endpoint += "?targets=" + url.QueryEscape(targets)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/api/client.go` around lines 199 - 203, The Impact method is building
the request URL by concatenating the targets query param raw, which breaks on
spaces/special chars; update Client.Impact to URL-encode the targets (e.g., use
url.QueryEscape or build query with url.Values) before appending so endpoint
becomes impactEndpoint + "?" + encoded query; ensure you reference the targets
string and the Impact function when making the change and preserve existing
logic when targets == "".

Comment on lines +38 to 44
	client := api.New(cfg)
	spin = ui.Start("Analyzing impact…")
	result, err := client.Impact(ctx, zipPath, "impact-"+hash[:16], strings.Join(targets, ","), opts.Diff)
	spin.Stop()
	if err != nil {
		return err
	}

⚠️ Potential issue | 🟡 Minor

Edge case: filenames with commas would be misinterpreted

When you strings.Join(targets, ","), a single target like src/foo,bar.ts becomes indistinguishable from two targets src/foo and bar.ts on the API side.

This is probably super rare, but if the API supports an array-style query param (like ?targets[]=a&targets[]=b) or a different delimiter, that would be safer. Otherwise, at minimum, you could validate that no target contains a comma and error early with a helpful message.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/blastradius/handler.go` around lines 38 - 44, The handler currently
passes targets as a comma-joined string to client.Impact which will break for
filenames containing commas (see the call in handler.go: result, err :=
client.Impact(..., strings.Join(targets, ","), ...)). Fix this by validating
targets before calling client.Impact: iterate over the targets slice and if any
target contains a comma return a clear error (e.g., "target contains comma; use
API array-style or rename file") so callers fail fast; alternatively, if the api
package can be changed, update client.Impact to accept a []string and encode it
as repeated query params (targets[]=...) instead of joining with commas.
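The fail-fast option could be as small as this; the error wording is illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// validateTargets rejects targets that would become ambiguous once
// comma-joined into a single query parameter.
func validateTargets(targets []string) error {
	for _, t := range targets {
		if strings.Contains(t, ",") {
			return fmt.Errorf("target %q contains a comma, which the comma-delimited targets parameter cannot represent", t)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateTargets([]string{"src/foo.go", "src/bar.go"})) // <nil>
	fmt.Println(validateTargets([]string{"src/foo,bar.ts"}))          // non-nil error
}
```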

Comment on lines +82 to +110
return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
	if err != nil {
		return err
	}
	rel, err := filepath.Rel(dir, path)
	if err != nil {
		return err
	}
	if info.IsDir() {
		if skipDirs[info.Name()] {
			return filepath.SkipDir
		}
		return nil
	}
	if strings.HasPrefix(info.Name(), ".") || info.Size() > 10<<20 {
		return nil
	}
	w, err := zw.Create(filepath.ToSlash(rel))
	if err != nil {
		return err
	}
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(w, f)
	return err
})

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check what Go version is being used to see if filepath.WalkDir is available
cat go.mod | head -5

Repository: supermodeltools/cli

Length of output: 122


🏁 Script executed:

# Find and examine the full walkZip function
rg -B 5 "filepath\.Walk" internal/blastradius/zip.go -A 40

Repository: supermodeltools/cli

Length of output: 710


🏁 Script executed:

# Search for skipDirs to see what directories are being skipped
rg "skipDirs" internal/blastradius/zip.go -B 10 -A 2

Repository: supermodeltools/cli

Length of output: 949


🏁 Script executed:

# Check if there are any symlink-related checks elsewhere in the file
rg -i "symlink|readlink|mode" internal/blastradius/zip.go

Repository: supermodeltools/cli

Length of output: 111


🏁 Script executed:

# Let me verify the actual behavior of filepath.Walk with symlinks in Go
# Check Go documentation or test behavior
cat > /tmp/test_walk.go << 'EOF'
package main

import "fmt"

func main() {
	// Per the Go docs, filepath.Walk uses Lstat and does NOT descend
	// into symlinked directories. The archive loop's os.Open, however,
	// does follow file symlinks, so skipping them is still worthwhile.
	fmt.Println("filepath.Walk behavior with symlinks:")
	fmt.Println("- Walk does NOT descend into symlinked directories")
	fmt.Println("- os.Open on a file symlink still follows it")
}
EOF
go run /tmp/test_walk.go

Repository: supermodeltools/cli

Length of output: 198


🏁 Script executed:

# Check if there are any tests or other code in the repo that deal with symlinks
rg -i "symlink" . --type go

Repository: supermodeltools/cli

Length of output: 538


🏁 Script executed:

# Look at the full function definition to understand the complete context
cat -n internal/blastradius/zip.go

Repository: supermodeltools/cli

Length of output: 3271


Add a symlink check to keep archive contents inside the repo

filepath.Walk uses Lstat and does not itself descend into symlinked directories, but the os.Open in the archive loop does follow file symlinks. A link like ln -s /etc/passwd creds.txt would pull content from outside the repo into the uploaded archive.

Check info.Mode()&os.ModeSymlink != 0 and skip symlinks, a simple one-liner. (Your codebase already does this exact pattern in internal/factory/zip.go, so you can copy that approach.)

if info.Mode()&os.ModeSymlink != 0 {
    return nil
}

Add this right after the info.IsDir() check (around line 90) so symlinks are skipped just like you do with hidden files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/blastradius/zip.go` around lines 82 - 110, The directory walk in the
anonymous func passed to filepath.Walk should skip symbolic links to avoid
directory traversal and infinite loops; add a symlink check
(info.Mode()&os.ModeSymlink != 0) after the info.IsDir() branch in the Walk
callback so symlinks are treated like hidden files (return nil) and not
descended into or archived; mirror the same pattern used in
internal/factory/zip.go to locate the exact implementation to copy.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/blastradius/handler.go`:
- Around line 24-40: The request key used when calling client.Impact in Run is
too coarse ("impact-"+hash[:16]) and ignores targets, opts.Diff and opts.Force,
making --force a no-op and allowing stale cached responses; fix by deriving the
idempotency key from the full request shape (e.g., include zip hash +
strings.Join(targets, ",") + opts.Diff and any relevant options) so different
targets/diffs produce different keys, and if opts.Force is true either append a
salt (timestamp/UUID) or skip using the derived key to force a fresh run before
calling client.Impact; update the code around Run, client.Impact and the key
construction to use this composed key.
- Around line 77-103: Add rendering for impact.AffectedFunctions so CLI output
includes the distance/relationship info: after the
AffectedFiles/EntryPointsAffected blocks, check len(impact.AffectedFunctions) >
0, print a blank line and a header like "Affected functions:" then build rows
from impact.AffectedFunctions (use the fields on each item such as
Function/name, File, DirectDependencies, TransitiveDependencies,
Relationship/distance) and call ui.Table(w,
[]string{"FUNCTION","FILE","DIRECT","TRANSITIVE","RELATION"}, rows) to mirror
how AffectedFiles and EntryPointsAffected are rendered; reference
impact.AffectedFunctions and ui.Table to locate where to add this block.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: aab12bf0-bfb6-4541-82b2-6b6bd4618ced

📥 Commits

Reviewing files that changed from the base of the PR and between e88165c and e9ef4d6.

📒 Files selected for processing (2)
  • internal/api/types.go
  • internal/blastradius/handler.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • internal/api/types.go

Comment on lines +24 to +40
func Run(ctx context.Context, cfg *config.Config, dir string, targets []string, opts Options) error {
	spin := ui.Start("Creating repository archive…")
	zipPath, err := createZip(dir)
	spin.Stop()
	if err != nil {
		return fmt.Errorf("create archive: %w", err)
	}
	defer os.Remove(zipPath)

	hash, err := cache.HashFile(zipPath)
	if err != nil {
		return err
	}

	client := api.New(cfg)
	spin = ui.Start("Analyzing impact…")
	result, err := client.Impact(ctx, zipPath, "impact-"+hash[:16], strings.Join(targets, ","), opts.Diff)

⚠️ Potential issue | 🟠 Major

--force is a no-op, and the request key is too coarse.

Run always sends the same "impact-"+hash[:16] key for a given repo ZIP, so the fingerprint does not change for different targets or different --diff files. That means runs like blast-radius foo.go and blast-radius bar.go against the same checkout reuse the same key, and opts.Force never changes that. If the backend dedupes on idempotency, this can hand back a stale result and makes cache bypass impossible from the CLI. Please derive the key from the full request shape, or skip/salt it when force is set.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/blastradius/handler.go` around lines 24 - 40, The request key used
when calling client.Impact in Run is too coarse ("impact-"+hash[:16]) and
ignores targets, opts.Diff and opts.Force, making --force a no-op and allowing
stale cached responses; fix by deriving the idempotency key from the full
request shape (e.g., include zip hash + strings.Join(targets, ",") + opts.Diff
and any relevant options) so different targets/diffs produce different keys, and
if opts.Force is true either append a salt (timestamp/UUID) or skip using the
derived key to force a fresh run before calling client.Impact; update the code
around Run, client.Impact and the key construction to use this composed key.
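A composed key along the lines the comment suggests might look like this; which fields to fold in (targets, diff path, a force salt) is an assumption about the request shape:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
	"time"
)

// impactKey derives an idempotency key from everything that changes the
// request's meaning: the repo ZIP hash, the target list, and the diff
// path. When force is set, a timestamp salt guarantees a fresh key.
func impactKey(zipHash string, targets []string, diffPath string, force bool) string {
	h := sha256.New()
	h.Write([]byte(zipHash))
	h.Write([]byte{0})
	h.Write([]byte(strings.Join(targets, "\x00"))) // NUL-separated to avoid delimiter collisions
	h.Write([]byte{0})
	h.Write([]byte(diffPath))
	if force {
		fmt.Fprintf(h, "force-%d", time.Now().UnixNano())
	}
	return "impact-" + hex.EncodeToString(h.Sum(nil))[:16]
}

func main() {
	a := impactKey("abc123", []string{"foo.go"}, "", false)
	b := impactKey("abc123", []string{"bar.go"}, "", false)
	fmt.Println(a != b) // different targets now yield different keys
}
```

Joining targets with a NUL byte also sidesteps the comma-ambiguity raised in the earlier comment.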

Comment on lines +77 to 103
	fmt.Fprintf(w, "Risk: %s | Direct: %d | Transitive: %d | Files: %d\n",
		br.RiskScore, br.DirectDependents, br.TransitiveDependents, br.AffectedFiles)

	if len(br.RiskFactors) > 0 {
		for _, rf := range br.RiskFactors {
			fmt.Fprintf(w, " → %s\n", rf)
		}
	}

	if len(impact.AffectedFiles) > 0 {
		fmt.Fprintln(w)
		rows := make([][]string, len(impact.AffectedFiles))
		for i, f := range impact.AffectedFiles {
			rows[i] = []string{f.File, fmt.Sprintf("%d", f.DirectDependencies), fmt.Sprintf("%d", f.TransitiveDependencies)}
		}
		ui.Table(w, []string{"AFFECTED FILE", "DIRECT", "TRANSITIVE"}, rows)
	}

	if len(impact.EntryPointsAffected) > 0 {
		fmt.Fprintln(w)
		fmt.Fprintln(w, "Entry points affected:")
		rows := make([][]string, len(impact.EntryPointsAffected))
		for i, ep := range impact.EntryPointsAffected {
			rows[i] = []string{ep.File, ep.Name, ep.Type}
		}
		ui.Table(w, []string{"FILE", "NAME", "TYPE"}, rows)
	}

⚠️ Potential issue | 🟠 Major

Human output still drops the affected-function data.

This formatter prints risk, files, and entry points, but never shows impact.AffectedFunctions. So the default CLI output still hides the distance/relationship info the new endpoint adds unless people switch to JSON. Please render that list here too.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/blastradius/handler.go` around lines 77 - 103, Add rendering for
impact.AffectedFunctions so CLI output includes the distance/relationship info:
after the AffectedFiles/EntryPointsAffected blocks, check
len(impact.AffectedFunctions) > 0, print a blank line and a header like
"Affected functions:" then build rows from impact.AffectedFunctions (use the
fields on each item such as Function/name, File, DirectDependencies,
TransitiveDependencies, Relationship/distance) and call ui.Table(w,
[]string{"FUNCTION","FILE","DIRECT","TRANSITIVE","RELATION"}, rows) to mirror
how AffectedFiles and EntryPointsAffected are rendered; reference
impact.AffectedFunctions and ui.Table to locate where to add this block.
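The missing block could mirror the existing tables. The AffectedFunction field names here are guesses based on the type list in the walkthrough, and the local table helper stands in for the repo's ui.Table:

```go
package main

import (
	"fmt"
	"os"
	"strings"
	"text/tabwriter"
)

// AffectedFunction approximates the API type from the walkthrough;
// the field names here are assumptions.
type AffectedFunction struct {
	Name     string
	File     string
	Distance int
}

// functionRows builds table rows from the affected-function list.
func functionRows(fns []AffectedFunction) [][]string {
	rows := make([][]string, len(fns))
	for i, f := range fns {
		rows[i] = []string{f.Name, f.File, fmt.Sprintf("%d", f.Distance)}
	}
	return rows
}

// table stands in for the repo's ui.Table helper.
func table(headers []string, rows [][]string) {
	w := tabwriter.NewWriter(os.Stdout, 0, 4, 2, ' ', 0)
	fmt.Fprintln(w, strings.Join(headers, "\t"))
	for _, r := range rows {
		fmt.Fprintln(w, strings.Join(r, "\t"))
	}
	w.Flush()
}

func main() {
	fns := []AffectedFunction{
		{Name: "issueKey", File: "KeyIssuanceService.java", Distance: 1},
		{Name: "rotateKeys", File: "KeyRotationJob.java", Distance: 2},
	}
	if len(fns) > 0 {
		fmt.Println()
		fmt.Println("Affected functions:")
		table([]string{"FUNCTION", "FILE", "DISTANCE"}, functionRows(fns))
	}
}
```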

@jonathanpopham
Contributor Author

Test results

Unit tests

All pass across macOS, Ubuntu, Windows. CI green.

Live API tests (supermodel-public-api repo)

Target mode, via supermodel blast-radius AuthConstants.java:

Target: src/control-plane/.../AuthConstants.java
Risk: critical  |  Direct: 110  |  Transitive: 280  |  Files: 25
  → High fan-in (110 direct callers)
  → Affects 2 entry point(s)
  → Broad blast radius (25 files)
  → Crosses 3 domain boundaries

JSON mode — valid structured output:

risk=critical direct=110 files=25 entries=2

Global mode, via supermodel blast-radius (no args): returns the full coupling map

Audit command, via supermodel audit: runs clean and produces a HEALTHY report. No regressions.

Before (main) vs After (this PR)

Before: "25 file(s) affected" — just file names and hop counts from import edges
After:  risk=critical, 110 direct/280 transitive dependents, 2 entry points, 4 risk factors

@jonathanpopham jonathanpopham marked this pull request as ready for review April 2, 2026 21:11

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
internal/blastradius/integration_test.go (2)

84-91: Consider adding a couple more assertions.

Right now you're only checking TotalFiles > 0, which is a reasonable smoke check. But since you're already logging TargetsAnalyzed, TotalFunctions, and len(result.Impacts), you could easily promote some of those to actual assertions.

For example, since you're explicitly passing "main.go" as a target, you'd expect TargetsAnalyzed >= 1. This would catch regressions where the API silently ignores your target.

💡 Optional: add more assertions
 	if result.Metadata.TotalFiles == 0 {
 		t.Error("expected totalFiles > 0")
 	}
+	if result.Metadata.TargetsAnalyzed == 0 {
+		t.Error("expected targetsAnalyzed > 0 since we passed a target")
+	}
 	t.Logf("targets=%d files=%d functions=%d impacts=%d",
 		result.Metadata.TargetsAnalyzed,
 		result.Metadata.TotalFiles,
 		result.Metadata.TotalFunctions,
 		len(result.Impacts))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/blastradius/integration_test.go` around lines 84 - 91, Add
assertions to the test to fail on regressions by checking result.Metadata fields
instead of only logging them: assert that result.Metadata.TargetsAnalyzed >= 1
(since you passed "main.go"), and also assert result.Metadata.TotalFunctions > 0
and len(result.Impacts) > 0 to ensure functions were discovered and impacts
computed; update the test in integration_test.go around the existing checks that
reference result.Metadata.TargetsAnalyzed, result.Metadata.TotalFunctions, and
len(result.Impacts).

52-68: Missing negative test case — consider adding one.

I noticed you removed TestIntegration_Run_UnknownFile (the old negative test). All your tests are now happy-path. It might be worth having at least one test that verifies error handling — like what happens when the API returns an error, or when the zip creation fails.

That said, if error handling is covered in unit tests elsewhere, this is totally fine to skip here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/blastradius/integration_test.go` around lines 52 - 68, Add a
negative integration test (e.g., recreate TestIntegration_Run_UnknownFile) that
calls blastradius.Run and asserts it returns an error: set up the same test
harness (use testutil.IntegrationConfig and testutil.MinimalGoDir, set HOME to
t.TempDir(), create ctx with timeout), but inject a failing condition such as a
mock API server that returns an error or intentionally make zip creation fail
(e.g., by making dir unreadable) before calling blastradius.Run with the same
blastradius.Options; then expect a non-nil error and fail the test if Run
succeeds.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@internal/blastradius/integration_test.go`:
- Around line 84-91: Add assertions to the test to fail on regressions by
checking result.Metadata fields instead of only logging them: assert that
result.Metadata.TargetsAnalyzed >= 1 (since you passed "main.go"), and also
assert result.Metadata.TotalFunctions > 0 and len(result.Impacts) > 0 to ensure
functions were discovered and impacts computed; update the test in
integration_test.go around the existing checks that reference
result.Metadata.TargetsAnalyzed, result.Metadata.TotalFunctions, and
len(result.Impacts).
- Around line 52-68: Add a negative integration test (e.g., recreate
TestIntegration_Run_UnknownFile) that calls blastradius.Run and asserts it
returns an error: set up the same test harness (use testutil.IntegrationConfig
and testutil.MinimalGoDir, set HOME to t.TempDir(), create ctx with timeout),
but inject a failing condition such as a mock API server that returns an error
or intentionally make zip creation fail (e.g., by making dir unreadable) before
calling blastradius.Run with the same blastradius.Options; then expect a non-nil
error and fail the test if Run succeeds.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 656a046a-d28c-4532-ba81-36234e6033e2

📥 Commits

Reviewing files that changed from the base of the PR and between e9ef4d6 and d1f3f95.

📒 Files selected for processing (2)
  • internal/blastradius/handler_test.go
  • internal/blastradius/integration_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • internal/blastradius/handler_test.go


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (3)
internal/blastradius/handler.go (3)

93-110: ⚠️ Potential issue | 🟠 Major

Human output still skips AffectedFunctions, so key impact detail is hidden by default.

The endpoint returns function-level distance/relationship data, but human mode currently prints only files and entry points. This misses one of the core objectives unless users switch to JSON.

🛠️ Suggested rendering block
 		if len(impact.EntryPointsAffected) > 0 {
 			fmt.Fprintln(w)
 			fmt.Fprintln(w, "Entry points affected:")
 			rows := make([][]string, len(impact.EntryPointsAffected))
 			for i, ep := range impact.EntryPointsAffected {
 				rows[i] = []string{ep.File, ep.Name, ep.Type}
 			}
 			ui.Table(w, []string{"FILE", "NAME", "TYPE"}, rows)
 		}
+
+		if len(impact.AffectedFunctions) > 0 {
+			fmt.Fprintln(w)
+			fmt.Fprintln(w, "Affected functions:")
+			rows := make([][]string, len(impact.AffectedFunctions))
+			for i, fn := range impact.AffectedFunctions {
+				rows[i] = []string{
+					fn.Function,
+					fn.File,
+					fmt.Sprintf("%d", fn.Distance),
+					fn.Relationship,
+				}
+			}
+			ui.Table(w, []string{"FUNCTION", "FILE", "DISTANCE", "RELATIONSHIP"}, rows)
+		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/blastradius/handler.go` around lines 93 - 110, The human-readable
output omits impact.AffectedFunctions so users miss function-level details;
update the handler to render impact.AffectedFunctions similar to
AffectedFiles/EntryPointsAffected: after the EntryPoints block (or appropriate
place) check if len(impact.AffectedFunctions) > 0, write a separating newline
and a heading, build rows from impact.AffectedFunctions (use unique fields
present on that struct, e.g. function name, file, direct/transitive distances or
relationship fields), and call ui.Table(w, headers, rows) (same ui.Table used
for AffectedFiles/EntryPointsAffected) so function-level impact is shown by
default in human mode.

39-42: ⚠️ Potential issue | 🟡 Minor

Comma-joining targets is ambiguous for filenames that contain commas.

A single target like src/foo,bar.ts is indistinguishable from two targets once joined. Safer to validate and fail early unless the API supports repeated params.

🛠️ Suggested guard
  targetStr := strings.Join(targets, ",")
+ for _, t := range targets {
+   if strings.Contains(t, ",") {
+     return fmt.Errorf("target %q contains a comma; please pass targets without commas", t)
+   }
+ }
  if targetStr != "" {
    idempotencyKey += "-" + targetStr
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/blastradius/handler.go` around lines 39 - 42, The current
idempotency key construction joins targets with commas which makes filenames
containing commas ambiguous; update the handler to validate the targets slice
before joining: iterate over targets and if any element contains ',' return a
validation error (400/bad request) instead of proceeding, and only then build
targetStr and append to idempotencyKey; reference the targets variable,
targetStr construction, and idempotencyKey so you modify the same assembly logic
to fail early on comma-containing targets.

38-42: ⚠️ Potential issue | 🟠 Major

--force still behaves like a no-op, and diff runs can still collide in cache.

Right now the idempotency key only varies by ZIP hash + targets. Two runs with different --diff inputs can reuse the same key, and opts.Force doesn’t alter behavior. Example: same checkout + same target, but different diff files => potentially stale reused result.

🛠️ Suggested fix
  hash, err := cache.HashFile(zipPath)
  if err != nil {
    return err
  }

- idempotencyKey := "impact-" + hash[:16]
+ idempotencyKey := "impact-" + hash[:16]
  targetStr := strings.Join(targets, ",")
  if targetStr != "" {
    idempotencyKey += "-" + targetStr
  }
+ if opts.Diff != "" {
+   diffHash, err := cache.HashFile(opts.Diff)
+   if err != nil {
+     return fmt.Errorf("hash diff: %w", err)
+   }
+   idempotencyKey += "-diff-" + diffHash[:16]
+ }
+ if opts.Force {
+   idempotencyKey += fmt.Sprintf("-force-%d", time.Now().UnixNano())
+ }

  client := api.New(cfg)

Also applies to: 46-46

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/blastradius/handler.go` around lines 38 - 42, The idempotency key
currently uses only hash and targets (idempotencyKey, hash, targets) which
causes different --diff inputs or the --force flag (opts.Force) to collide;
update the key construction in handler.go to include a deterministic
representation of the diff input(s) (e.g., join or hash opts.Diff or
opts.DiffFiles) and the opts.Force flag so runs with different diff files or
force=true produce distinct idempotencyKey values.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@internal/blastradius/handler.go`:
- Around line 93-110: The human-readable output omits impact.AffectedFunctions
so users miss function-level details; update the handler to render
impact.AffectedFunctions similar to AffectedFiles/EntryPointsAffected: after the
EntryPoints block (or appropriate place) check if len(impact.AffectedFunctions)
> 0, write a separating newline and a heading, build rows from
impact.AffectedFunctions (use unique fields present on that struct, e.g.
function name, file, direct/transitive distances or relationship fields), and
call ui.Table(w, headers, rows) (same ui.Table used for
AffectedFiles/EntryPointsAffected) so function-level impact is shown by default
in human mode.
- Around line 39-42: The current idempotency key construction joins targets with
commas which makes filenames containing commas ambiguous; update the handler to
validate the targets slice before joining: iterate over targets and if any
element contains ',' return a validation error (400/bad request) instead of
proceeding, and only then build targetStr and append to idempotencyKey;
reference the targets variable, targetStr construction, and idempotencyKey so
you modify the same assembly logic to fail early on comma-containing targets.
- Around line 38-42: The idempotency key currently uses only hash and targets
(idempotencyKey, hash, targets) which causes different --diff inputs or the
--force flag (opts.Force) to collide; update the key construction in handler.go
to include a deterministic representation of the diff input(s) (e.g., join or
hash opts.Diff or opts.DiffFiles) and the opts.Force flag so runs with different
diff files or force=true produce distinct idempotencyKey values.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a8a64fac-2f1f-44fe-885d-c62e3c0e0c85

📥 Commits

Reviewing files that changed from the base of the PR and between 06af21e and 993bc23.

📒 Files selected for processing (1)
  • internal/blastradius/handler.go

Replace naive import-edge BFS with the API's dedicated impact analysis
endpoint which provides risk scoring, affected functions with distance,
entry points, direct/transitive dependency counts, and risk factors.

Three modes:
  supermodel blast-radius <file>              # target specific file(s)
  supermodel blast-radius --diff changes.diff # analyze from git diff
  supermodel blast-radius                     # global coupling map

New flags: --diff
Removed: --depth (handled by API)
Added alias: impact

Closes supermodeltools#24
