feat: wire blast-radius to /v1/analysis/impact endpoint #25
Conversation
Walkthrough

The blast-radius CLI is rewired to call the /v1/analysis/impact API: it packages the repo into a ZIP (optionally with a diff), posts an async impact job (with targets and an idempotency key), polls until completion, and renders the API-provided ImpactResult (human or JSON). Local graph BFS, path-normalisation helpers, and depth traversal were removed.
Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant CLI as CLI Command
    participant Handler as BlastRadius Handler
    participant Zipper as Zip Utility
    participant Client as API Client
    participant API as /v1/analysis/impact
    User->>CLI: supermodel blast-radius main.go
    CLI->>Handler: Run(ctx, cfg, dir, [targets], opts)
    Handler->>Zipper: createZip(dir)
    Zipper->>Zipper: isGitRepo()?
    alt Git repo
        Zipper->>Zipper: gitArchive()
    else Fallback
        Zipper->>Zipper: walkZip() + filters
    end
    Zipper-->>Handler: zipPath
    Handler->>Handler: HashFile(zipPath) -> idempotencyKey
    Handler->>Client: Impact(ctx, zipPath, idempKey, targets, diffPath)
    loop Poll until done
        Client->>API: POST /v1/analysis/impact (ZIP + optional diff, targets)
        API-->>Client: {status: pending|processing|complete, ...}
    end
    Client-->>Handler: ImpactResult{Impacts, GlobalMetrics, Metadata}
    Handler->>Handler: printResults(ImpactResult, format)
    Handler-->>User: Formatted output (human/JSON)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (warning)
Actionable comments posted: 5
🧹 Nitpick comments (2)
internal/api/client.go (1)
210-224: Polling logic is duplicated — optional DRY refactor

This polling loop is pretty much identical to the one in `DeadCode()` (lines 137-152) and `pollUntilComplete()` (lines 86-100). Totally works, but if you ever need to tweak the polling behavior (like adding exponential backoff), you'd have to change it in three places.

Not blocking — just something to consider for a future cleanup pass. You could extract a generic `pollJob` helper that takes a "post" function and returns the completed job.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/api/client.go` around lines 210 - 224, The polling loop in postImpact (the for loop checking job.Status and waiting using job.RetryAfter) is duplicated across DeadCode() and pollUntilComplete(); extract a reusable helper (e.g., pollJob) that accepts a context and a callback function (the "post" function such as c.postImpact or the equivalent used by DeadCode/pollUntilComplete) and returns the completed job or error, then replace the duplicated loops in postImpact, DeadCode, and pollUntilComplete to call pollJob(ctx, postFunc, initialJob) so all retry/wait logic (including the RetryAfter default) is centralized.

internal/blastradius/zip.go (1)
44-48: Silent fallback when `gitArchive` fails — consider logging

Hey, so when `gitArchive` fails, the error just gets swallowed and we fall through to `walkZip`. This is fine behavior-wise (the fallback is the right call), but it might be helpful to log a debug message when this happens. Otherwise, if someone's wondering why their archive is huge or missing `.gitignore`-ignored files, there's no breadcrumb.

Totally optional — just a heads up for future debugging.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/blastradius/zip.go` around lines 44 - 48, When isGitRepo(dir) is true and gitArchive(dir, dest) returns an error, currently the error is swallowed and we fall back to walkZip; change this to log a debug-level message including the error and the dir/dest context before falling back. Update the gitArchive call path in zip.go (the block referencing isGitRepo and gitArchive) to capture the returned error, call the package logger (e.g., debug or log) with a concise message plus the error and repository path, then continue to the existing walkZip fallback.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/api/client.go`:
- Around line 199-203: The Impact method is building the request URL by
concatenating the targets query param raw, which breaks on spaces/special chars;
update Client.Impact to URL-encode the targets (e.g., use url.QueryEscape or
build query with url.Values) before appending so endpoint becomes impactEndpoint
+ "?" + encoded query; ensure you reference the targets string and the Impact
function when making the change and preserve existing logic when targets == "".
In `@internal/api/types.go`:
- Around line 220-227: Struct field alignment is inconsistent in the
ImpactTarget definition (and other structs such as AffectedFile and
ImpactGlobalMetrics); fix it by formatting the file with gofmt (run gofmt -w on
the source file containing ImpactTarget) so types and json tags align
consistently, then verify ImpactTarget, AffectedFile, and ImpactGlobalMetrics
fields are properly aligned and commit the formatted file.
In `@internal/blastradius/handler.go`:
- Around line 38-44: The handler currently passes targets as a comma-joined
string to client.Impact which will break for filenames containing commas (see
the call in handler.go: result, err := client.Impact(..., strings.Join(targets,
","), ...)). Fix this by validating targets before calling client.Impact:
iterate over the targets slice and if any target contains a comma return a clear
error (e.g., "target contains comma; use API array-style or rename file") so
callers fail fast; alternatively, if the api package can be changed, update
client.Impact to accept a []string and encode it as repeated query params
(targets[]=...) instead of joining with commas.
- Around line 69-105: The loop over result.Impacts copies a large (216-byte)
Impact struct each iteration; change the iteration to use the index and take a
pointer to the slice element to avoid copying. Replace "for _, impact := range
result.Impacts {" with something like "for i := range result.Impacts { impact :=
&result.Impacts[i]" and keep the existing references (impact.Target,
impact.AffectedFiles, impact.EntryPointsAffected, etc.) so the code operates on
the pointer without allocating large copies each iteration.
In `@internal/blastradius/zip.go`:
- Around line 82-110: The directory walk in the anonymous func passed to
filepath.Walk should skip symbolic links so that symlinked files, whose
targets may live outside the walked directory, are not archived; add a
symlink check (info.Mode()&os.ModeSymlink != 0) after the info.IsDir()
branch in the Walk callback so symlinks are treated like hidden files
(return nil) and not archived; mirror the same pattern used in
internal/factory/zip.go to locate the exact implementation to copy.
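The pointer-iteration fix recommended for `handler.go` above can be demonstrated in isolation. The `Impact` struct here is a stand-in for the real ~216-byte API type, not the actual definition:

```go
package main

import "fmt"

// Impact stands in for the large API struct the review mentions.
type Impact struct {
	Target string
	pad    [208]byte // bulk that makes a per-iteration copy costly
}

func main() {
	impacts := []Impact{{Target: "a"}, {Target: "b"}}

	// `for _, impact := range impacts` would copy the whole struct on
	// every pass; indexing and taking a pointer aliases the slice
	// element instead, so no copy is made and mutations stick.
	for i := range impacts {
		impact := &impacts[i]
		impact.Target += "-seen"
	}
	fmt.Println(impacts[0].Target, impacts[1].Target)
}
```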
---
Nitpick comments:
In `@internal/api/client.go`:
- Around line 210-224: The polling loop in postImpact (the for loop checking
job.Status and waiting using job.RetryAfter) is duplicated across DeadCode() and
pollUntilComplete(); extract a reusable helper (e.g., pollJob) that accepts a
context and a callback function (the "post" function such as c.postImpact or the
equivalent used by DeadCode/pollUntilComplete) and returns the completed job or
error, then replace the duplicated loops in postImpact, DeadCode, and
pollUntilComplete to call pollJob(ctx, postFunc, initialJob) so all retry/wait
logic (including the RetryAfter default) is centralized.
In `@internal/blastradius/zip.go`:
- Around line 44-48: When isGitRepo(dir) is true and gitArchive(dir, dest)
returns an error, currently the error is swallowed and we fall back to walkZip;
change this to log a debug-level message including the error and the dir/dest
context before falling back. Update the gitArchive call path in zip.go (the
block referencing isGitRepo and gitArchive) to capture the returned error, call
the package logger (e.g., debug or log) with a concise message plus the error
and repository path, then continue to the existing walkZip fallback.
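The debug-logging fallback described in the zip.go nitpick above might look roughly like this. `isGitRepo`, `gitArchive`, and `walkZip` are hypothetical stand-ins; the real helpers in `internal/blastradius/zip.go` may have different signatures:

```go
package main

import (
	"errors"
	"log"
)

// Hypothetical stand-ins for the helpers in internal/blastradius/zip.go.
func isGitRepo(dir string) bool         { return true }
func gitArchive(dir, dest string) error { return errors.New("git: command not found") }
func walkZip(dir, dest string) error    { return nil }

func createZip(dir, dest string) (string, error) {
	if isGitRepo(dir) {
		if err := gitArchive(dir, dest); err != nil {
			// Leave a breadcrumb instead of silently swallowing the error.
			log.Printf("debug: git archive failed for %s: %v; falling back to walkZip", dir, err)
		} else {
			return dest, nil
		}
	}
	if err := walkZip(dir, dest); err != nil {
		return "", err
	}
	return dest, nil
}

func main() {
	if _, err := createZip(".", "repo.zip"); err != nil {
		log.Fatal(err)
	}
}
```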
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4f84d868-0b14-4a26-8607-e34fa6f6b7c3
📒 Files selected for processing (8)
- cmd/blastradius.go
- internal/api/client.go
- internal/api/types.go
- internal/blastradius/handler.go
- internal/blastradius/handler_test.go
- internal/blastradius/integration_test.go
- internal/blastradius/normalise_test.go
- internal/blastradius/zip.go
💤 Files with no reviewable changes (1)
- internal/blastradius/normalise_test.go
```go
func (c *Client) Impact(ctx context.Context, zipPath, idempotencyKey, targets, diffPath string) (*ImpactResult, error) {
	endpoint := impactEndpoint
	if targets != "" {
		endpoint += "?targets=" + targets
	}
```
URL encoding missing for targets query param — could break with special characters
Hey, so if someone passes a file path with spaces or special characters (like src/my file.ts or src/foo&bar.ts), this will produce a malformed URL. The targets string is concatenated directly without escaping.
Quick example of what goes wrong:

- Input: `targets = "src/my file.ts"`
- Result: `/v1/analysis/impact?targets=src/my file.ts` (broken URL)

You'll want to use url.QueryEscape(targets) or build the URL properly with url.Values.
🐛 Proposed fix

```diff
+import "net/url"
+
 func (c *Client) Impact(ctx context.Context, zipPath, idempotencyKey, targets, diffPath string) (*ImpactResult, error) {
 	endpoint := impactEndpoint
 	if targets != "" {
-		endpoint += "?targets=" + targets
+		endpoint += "?targets=" + url.QueryEscape(targets)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/api/client.go` around lines 199 - 203, The Impact method is building
the request URL by concatenating the targets query param raw, which breaks on
spaces/special chars; update Client.Impact to URL-encode the targets (e.g., use
url.QueryEscape or build query with url.Values) before appending so endpoint
becomes impactEndpoint + "?" + encoded query; ensure you reference the targets
string and the Impact function when making the change and preserve existing
logic when targets == "".
```go
	client := api.New(cfg)
	spin = ui.Start("Analyzing impact…")
	result, err := client.Impact(ctx, zipPath, "impact-"+hash[:16], strings.Join(targets, ","), opts.Diff)
	spin.Stop()
	if err != nil {
		return err
	}
```
Edge case: filenames with commas would be misinterpreted
When you strings.Join(targets, ","), a single target like src/foo,bar.ts becomes indistinguishable from two targets src/foo and bar.ts on the API side.
This is probably super rare, but if the API supports an array-style query param (like ?targets[]=a&targets[]=b) or a different delimiter, that would be safer. Otherwise, at minimum, you could validate that no target contains a comma and error early with a helpful message.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/blastradius/handler.go` around lines 38 - 44, The handler currently
passes targets as a comma-joined string to client.Impact which will break for
filenames containing commas (see the call in handler.go: result, err :=
client.Impact(..., strings.Join(targets, ","), ...)). Fix this by validating
targets before calling client.Impact: iterate over the targets slice and if any
target contains a comma return a clear error (e.g., "target contains comma; use
API array-style or rename file") so callers fail fast; alternatively, if the api
package can be changed, update client.Impact to accept a []string and encode it
as repeated query params (targets[]=...) instead of joining with commas.
```go
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		if info.IsDir() {
			if skipDirs[info.Name()] {
				return filepath.SkipDir
			}
			return nil
		}
		if strings.HasPrefix(info.Name(), ".") || info.Size() > 10<<20 {
			return nil
		}
		w, err := zw.Create(filepath.ToSlash(rel))
		if err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(w, f)
		return err
	})
```
🧩 Analysis chain

🏁 Scripts executed against the repository (supermodeltools/cli):

```shell
# Check which Go version is in use (whether filepath.WalkDir is available)
cat go.mod | head -5

# Find and examine the full walkZip function
rg -B 5 "filepath\.Walk" internal/blastradius/zip.go -A 40

# Inspect skipDirs to see which directories are skipped
rg "skipDirs" internal/blastradius/zip.go -B 10 -A 2

# Check for symlink-related handling elsewhere in the file
rg -i "symlink|readlink|mode" internal/blastradius/zip.go

# Check for symlink handling elsewhere in the repo
rg -i "symlink" . --type go

# Review the full file for context
cat -n internal/blastradius/zip.go
```

Per the Go documentation, filepath.Walk does not follow symbolic links during traversal, but symlink entries are still reported to the callback — and walkZip opens each reported path with os.Open, which does follow links when reading.
Add symlink check to keep linked content out of the archive

filepath.Walk does not follow symbolic links during traversal, but it still reports symlink entries to the callback, and this code then calls os.Open(path) — which does follow the link when reading. So a symlink pointing outside the repository (for example, into the user's home directory) would get its target's contents archived.

Check info.Mode()&os.ModeSymlink != 0 and skip symlinks — it's a simple one-liner. (Interestingly, your codebase already does this exact pattern in internal/factory/zip.go, so you can copy that approach.)

```go
if info.Mode()&os.ModeSymlink != 0 {
	return nil
}
```

Add this right after the info.IsDir() check (around line 90) so symlinks are skipped just like hidden files.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/blastradius/zip.go` around lines 82 - 110, The directory walk in the
anonymous func passed to filepath.Walk should skip symbolic links so that
symlinked files, whose targets may live outside the walked directory, are not
archived; add a symlink check (info.Mode()&os.ModeSymlink != 0) after the
info.IsDir() branch in the Walk callback so symlinks are treated like hidden
files (return nil) and not archived; mirror the same pattern used in
internal/factory/zip.go to locate the exact implementation to copy.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/blastradius/handler.go`:
- Around line 24-40: The request key used when calling client.Impact in Run is
too coarse ("impact-"+hash[:16]) and ignores targets, opts.Diff and opts.Force,
making --force a no-op and allowing stale cached responses; fix by deriving the
idempotency key from the full request shape (e.g., include zip hash +
strings.Join(targets, ",") + opts.Diff and any relevant options) so different
targets/diffs produce different keys, and if opts.Force is true either append a
salt (timestamp/UUID) or skip using the derived key to force a fresh run before
calling client.Impact; update the code around Run, client.Impact and the key
construction to use this composed key.
- Around line 77-103: Add rendering for impact.AffectedFunctions so CLI output
includes the distance/relationship info: after the
AffectedFiles/EntryPointsAffected blocks, check len(impact.AffectedFunctions) >
0, print a blank line and a header like "Affected functions:" then build rows
from impact.AffectedFunctions (use the fields on each item such as
Function/name, File, DirectDependencies, TransitiveDependencies,
Relationship/distance) and call ui.Table(w,
[]string{"FUNCTION","FILE","DIRECT","TRANSITIVE","RELATION"}, rows) to mirror
how AffectedFiles and EntryPointsAffected are rendered; reference
impact.AffectedFunctions and ui.Table to locate where to add this block.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: aab12bf0-bfb6-4541-82b2-6b6bd4618ced
📒 Files selected for processing (2)
- internal/api/types.go
- internal/blastradius/handler.go
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/api/types.go
internal/blastradius/handler.go (Outdated)
```diff
-// Run finds all files transitively affected by a change to target and prints them.
-func Run(ctx context.Context, cfg *config.Config, dir, target string, opts Options) error {
-	g, _, err := analyze.GetGraph(ctx, cfg, dir, opts.Force)
+func Run(ctx context.Context, cfg *config.Config, dir string, targets []string, opts Options) error {
+	spin := ui.Start("Creating repository archive…")
+	zipPath, err := createZip(dir)
+	spin.Stop()
+	if err != nil {
+		return fmt.Errorf("create archive: %w", err)
+	}
+	defer os.Remove(zipPath)
+
+	hash, err := cache.HashFile(zipPath)
 	if err != nil {
 		return err
 	}
-	results, err := findBlastRadius(g, dir, target, opts.Depth)
+
+	client := api.New(cfg)
+	spin = ui.Start("Analyzing impact…")
+	result, err := client.Impact(ctx, zipPath, "impact-"+hash[:16], strings.Join(targets, ","), opts.Diff)
```
--force is a no-op, and the request key is too coarse.
Run always sends the same "impact-"+hash[:16] key for a given repo ZIP, so the fingerprint does not change for different targets or different --diff files. That means runs like blast-radius foo.go and blast-radius bar.go against the same checkout reuse the same key, and opts.Force never changes that. If the backend dedupes on idempotency, this can hand back a stale result and makes cache bypass impossible from the CLI. Please derive the key from the full request shape, or skip/salt it when force is set.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/blastradius/handler.go` around lines 24 - 40, The request key used
when calling client.Impact in Run is too coarse ("impact-"+hash[:16]) and
ignores targets, opts.Diff and opts.Force, making --force a no-op and allowing
stale cached responses; fix by deriving the idempotency key from the full
request shape (e.g., include zip hash + strings.Join(targets, ",") + opts.Diff
and any relevant options) so different targets/diffs produce different keys, and
if opts.Force is true either append a salt (timestamp/UUID) or skip using the
derived key to force a fresh run before calling client.Impact; update the code
around Run, client.Impact and the key construction to use this composed key.
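One way the composed key could be built is to hash the whole request shape. `impactKey` is hypothetical — the real construction lives in `internal/blastradius/handler.go` and differs:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"time"
)

// impactKey derives an idempotency key from the full request shape
// (zip hash, targets, diff hash) so distinct requests never collide.
func impactKey(zipHash string, targets []string, diffHash string, force bool) string {
	h := sha256.New()
	io.WriteString(h, zipHash)
	for _, t := range targets {
		io.WriteString(h, "\x00t:"+t) // NUL separator: commas in names stay unambiguous
	}
	io.WriteString(h, "\x00d:"+diffHash)
	if force {
		// A per-run salt guarantees a cache miss under --force.
		fmt.Fprintf(h, "\x00f:%d", time.Now().UnixNano())
	}
	return "impact-" + hex.EncodeToString(h.Sum(nil))[:16]
}

func main() {
	a := impactKey("abc", []string{"foo.go"}, "", false)
	b := impactKey("abc", []string{"bar.go"}, "", false)
	fmt.Println(a == b) // different targets yield different keys
}
```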
```diff
-	for _, parent := range importedBy[cur] {
-		if _, seen := visited[parent]; seen {
-			continue
-		}
-		visited[parent] = depth + 1
-		queue = append(queue, parent)
-
-		n, ok := g.NodeByID(parent)
-		if !ok {
-			continue
-		}
-		file := n.Prop("path", "name", "file")
-		if file != "" && !pathMatches(file, targetRel) {
-			results = append(results, Result{File: file, Depth: depth + 1})
-		}
-	}
-
-	sort.Slice(results, func(i, j int) bool {
-		if results[i].Depth != results[j].Depth {
-			return results[i].Depth < results[j].Depth
+	fmt.Fprintf(w, "Risk: %s | Direct: %d | Transitive: %d | Files: %d\n",
+		br.RiskScore, br.DirectDependents, br.TransitiveDependents, br.AffectedFiles)
+
+	if len(br.RiskFactors) > 0 {
+		for _, rf := range br.RiskFactors {
+			fmt.Fprintf(w, " → %s\n", rf)
+		}
+	}
+
+	if len(impact.AffectedFiles) > 0 {
+		fmt.Fprintln(w)
+		rows := make([][]string, len(impact.AffectedFiles))
+		for i, f := range impact.AffectedFiles {
+			rows[i] = []string{f.File, fmt.Sprintf("%d", f.DirectDependencies), fmt.Sprintf("%d", f.TransitiveDependencies)}
+		}
+		ui.Table(w, []string{"AFFECTED FILE", "DIRECT", "TRANSITIVE"}, rows)
+	}
+
+	if len(impact.EntryPointsAffected) > 0 {
+		fmt.Fprintln(w)
+		fmt.Fprintln(w, "Entry points affected:")
+		rows := make([][]string, len(impact.EntryPointsAffected))
+		for i, ep := range impact.EntryPointsAffected {
+			rows[i] = []string{ep.File, ep.Name, ep.Type}
+		}
+		ui.Table(w, []string{"FILE", "NAME", "TYPE"}, rows)
+	}
```
Human output still drops the affected-function data.
This formatter prints risk, files, and entry points, but never shows impact.AffectedFunctions. So the default CLI output still hides the distance/relationship info the new endpoint adds unless people switch to JSON. Please render that list here too.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/blastradius/handler.go` around lines 77 - 103, Add rendering for
impact.AffectedFunctions so CLI output includes the distance/relationship info:
after the AffectedFiles/EntryPointsAffected blocks, check
len(impact.AffectedFunctions) > 0, print a blank line and a header like
"Affected functions:" then build rows from impact.AffectedFunctions (use the
fields on each item such as Function/name, File, DirectDependencies,
TransitiveDependencies, Relationship/distance) and call ui.Table(w,
[]string{"FUNCTION","FILE","DIRECT","TRANSITIVE","RELATION"}, rows) to mirror
how AffectedFiles and EntryPointsAffected are rendered; reference
impact.AffectedFunctions and ui.Table to locate where to add this block.
Test results

Unit tests: all pass across macOS, Ubuntu, Windows. CI green.

Live API tests (supermodel-public-api repo):
- Target mode —
- JSON mode — valid structured output:
- Global mode —
- Audit command —
- Before (main) vs After (this PR)
🧹 Nitpick comments (2)
internal/blastradius/integration_test.go (2)
84-91: Consider adding a couple more assertions.

Right now you're only checking `TotalFiles > 0`, which is a reasonable smoke check. But since you're already logging `TargetsAnalyzed`, `TotalFunctions`, and `len(result.Impacts)`, you could easily promote some of those to actual assertions.

For example, since you're explicitly passing `"main.go"` as a target, you'd expect `TargetsAnalyzed >= 1`. This would catch regressions where the API silently ignores your target.

💡 Optional: add more assertions

```diff
 if result.Metadata.TotalFiles == 0 {
 	t.Error("expected totalFiles > 0")
 }
+if result.Metadata.TargetsAnalyzed == 0 {
+	t.Error("expected targetsAnalyzed > 0 since we passed a target")
+}
 t.Logf("targets=%d files=%d functions=%d impacts=%d",
 	result.Metadata.TargetsAnalyzed, result.Metadata.TotalFiles,
 	result.Metadata.TotalFunctions, len(result.Impacts))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/blastradius/integration_test.go` around lines 84 - 91, Add assertions to the test to fail on regressions by checking result.Metadata fields instead of only logging them: assert that result.Metadata.TargetsAnalyzed >= 1 (since you passed "main.go"), and also assert result.Metadata.TotalFunctions > 0 and len(result.Impacts) > 0 to ensure functions were discovered and impacts computed; update the test in integration_test.go around the existing checks that reference result.Metadata.TargetsAnalyzed, result.Metadata.TotalFunctions, and len(result.Impacts).
52-68: Missing negative test case — consider adding one.

I noticed you removed `TestIntegration_Run_UnknownFile` (the old negative test). All your tests are now happy-path. It might be worth having at least one test that verifies error handling — like what happens when the API returns an error, or when the zip creation fails.

That said, if error handling is covered in unit tests elsewhere, this is totally fine to skip here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/blastradius/integration_test.go` around lines 52 - 68, Add a negative integration test (e.g., recreate TestIntegration_Run_UnknownFile) that calls blastradius.Run and asserts it returns an error: set up the same test harness (use testutil.IntegrationConfig and testutil.MinimalGoDir, set HOME to t.TempDir(), create ctx with timeout), but inject a failing condition such as a mock API server that returns an error or intentionally make zip creation fail (e.g., by making dir unreadable) before calling blastradius.Run with the same blastradius.Options; then expect a non-nil error and fail the test if Run succeeds.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@internal/blastradius/integration_test.go`:
- Around line 84-91: Add assertions to the test to fail on regressions by
checking result.Metadata fields instead of only logging them: assert that
result.Metadata.TargetsAnalyzed >= 1 (since you passed "main.go"), and also
assert result.Metadata.TotalFunctions > 0 and len(result.Impacts) > 0 to ensure
functions were discovered and impacts computed; update the test in
integration_test.go around the existing checks that reference
result.Metadata.TargetsAnalyzed, result.Metadata.TotalFunctions, and
len(result.Impacts).
- Around line 52-68: Add a negative integration test (e.g., recreate
TestIntegration_Run_UnknownFile) that calls blastradius.Run and asserts it
returns an error: set up the same test harness (use testutil.IntegrationConfig
and testutil.MinimalGoDir, set HOME to t.TempDir(), create ctx with timeout),
but inject a failing condition such as a mock API server that returns an error
or intentionally make zip creation fail (e.g., by making dir unreadable) before
calling blastradius.Run with the same blastradius.Options; then expect a non-nil
error and fail the test if Run succeeds.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 656a046a-d28c-4532-ba81-36234e6033e2
📒 Files selected for processing (2)
- internal/blastradius/handler_test.go
- internal/blastradius/integration_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/blastradius/handler_test.go
♻️ Duplicate comments (3)
internal/blastradius/handler.go (3)
93-110: ⚠️ Potential issue | 🟠 Major

Human output still skips `AffectedFunctions`, so key impact detail is hidden by default.

The endpoint returns function-level distance/relationship data, but human mode currently prints only files and entry points. This misses one of the core objectives unless users switch to JSON.

🛠️ Suggested rendering block

```diff
 if len(impact.EntryPointsAffected) > 0 {
 	fmt.Fprintln(w)
 	fmt.Fprintln(w, "Entry points affected:")
 	rows := make([][]string, len(impact.EntryPointsAffected))
 	for i, ep := range impact.EntryPointsAffected {
 		rows[i] = []string{ep.File, ep.Name, ep.Type}
 	}
 	ui.Table(w, []string{"FILE", "NAME", "TYPE"}, rows)
 }
+
+if len(impact.AffectedFunctions) > 0 {
+	fmt.Fprintln(w)
+	fmt.Fprintln(w, "Affected functions:")
+	rows := make([][]string, len(impact.AffectedFunctions))
+	for i, fn := range impact.AffectedFunctions {
+		rows[i] = []string{
+			fn.Function,
+			fn.File,
+			fmt.Sprintf("%d", fn.Distance),
+			fn.Relationship,
+		}
+	}
+	ui.Table(w, []string{"FUNCTION", "FILE", "DISTANCE", "RELATIONSHIP"}, rows)
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/blastradius/handler.go` around lines 93 - 110, The human-readable output omits impact.AffectedFunctions so users miss function-level details; update the handler to render impact.AffectedFunctions similar to AffectedFiles/EntryPointsAffected: after the EntryPoints block (or appropriate place) check if len(impact.AffectedFunctions) > 0, write a separating newline and a heading, build rows from impact.AffectedFunctions (use unique fields present on that struct, e.g. function name, file, direct/transitive distances or relationship fields), and call ui.Table(w, headers, rows) (same ui.Table used for AffectedFiles/EntryPointsAffected) so function-level impact is shown by default in human mode.
39-42: ⚠️ Potential issue | 🟡 Minor

Comma-joining targets is ambiguous for filenames that contain commas.

A single target like `src/foo,bar.ts` is indistinguishable from two targets once joined. Safer to validate and fail early unless the API supports repeated params.

🛠️ Suggested guard

```diff
 targetStr := strings.Join(targets, ",")
+for _, t := range targets {
+	if strings.Contains(t, ",") {
+		return fmt.Errorf("target %q contains a comma; please pass targets without commas", t)
+	}
+}
 if targetStr != "" {
 	idempotencyKey += "-" + targetStr
 }
```

🤖 Prompt for AI Agents
targetStr := strings.Join(targets, ",") + for _, t := range targets { + if strings.Contains(t, ",") { + return fmt.Errorf("target %q contains a comma; please pass targets without commas", t) + } + } if targetStr != "" { idempotencyKey += "-" + targetStr }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/blastradius/handler.go` around lines 39 - 42, The current idempotency key construction joins targets with commas which makes filenames containing commas ambiguous; update the handler to validate the targets slice before joining: iterate over targets and if any element contains ',' return a validation error (400/bad request) instead of proceeding, and only then build targetStr and append to idempotencyKey; reference the targets variable, targetStr construction, and idempotencyKey so you modify the same assembly logic to fail early on comma-containing targets.
38-42: ⚠️ Potential issue | 🟠 Major

`--force` still behaves like a no-op, and diff runs can still collide in cache.

Right now the idempotency key only varies by ZIP hash + targets. Two runs with different `--diff` inputs can reuse the same key, and `opts.Force` doesn't alter behavior. Example: same checkout + same target, but different diff files => potentially stale reused result.

🛠️ Suggested fix

```diff
 hash, err := cache.HashFile(zipPath)
 if err != nil {
 	return err
 }
 idempotencyKey := "impact-" + hash[:16]
 targetStr := strings.Join(targets, ",")
 if targetStr != "" {
 	idempotencyKey += "-" + targetStr
 }
+if opts.Diff != "" {
+	diffHash, err := cache.HashFile(opts.Diff)
+	if err != nil {
+		return fmt.Errorf("hash diff: %w", err)
+	}
+	idempotencyKey += "-diff-" + diffHash[:16]
+}
+if opts.Force {
+	idempotencyKey += fmt.Sprintf("-force-%d", time.Now().UnixNano())
+}
 client := api.New(cfg)
```

Also applies to: 46-46
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/blastradius/handler.go` around lines 38 - 42, The idempotency key currently uses only hash and targets (idempotencyKey, hash, targets) which causes different --diff inputs or the --force flag (opts.Force) to collide; update the key construction in handler.go to include a deterministic representation of the diff input(s) (e.g., join or hash opts.Diff or opts.DiffFiles) and the opts.Force flag so runs with different diff files or force=true produce distinct idempotencyKey values.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@internal/blastradius/handler.go`:
- Around line 93-110: The human-readable output omits impact.AffectedFunctions
so users miss function-level details; update the handler to render
impact.AffectedFunctions similar to AffectedFiles/EntryPointsAffected: after the
EntryPoints block (or appropriate place) check if len(impact.AffectedFunctions)
> 0, write a separating newline and a heading, build rows from
impact.AffectedFunctions (use unique fields present on that struct, e.g.
function name, file, direct/transitive distances or relationship fields), and
call ui.Table(w, headers, rows) (same ui.Table used for
AffectedFiles/EntryPointsAffected) so function-level impact is shown by default
in human mode.
- Around line 39-42: The current idempotency key construction joins targets with
commas which makes filenames containing commas ambiguous; update the handler to
validate the targets slice before joining: iterate over targets and if any
element contains ',' return a validation error (400/bad request) instead of
proceeding, and only then build targetStr and append to idempotencyKey;
reference the targets variable, targetStr construction, and idempotencyKey so
you modify the same assembly logic to fail early on comma-containing targets.
- Around line 38-42: The idempotency key currently uses only hash and targets
(idempotencyKey, hash, targets) which causes different --diff inputs or the
--force flag (opts.Force) to collide; update the key construction in handler.go
to include a deterministic representation of the diff input(s) (e.g., join or
hash opts.Diff or opts.DiffFiles) and the opts.Force flag so runs with different
diff files or force=true produce distinct idempotencyKey values.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a8a64fac-2f1f-44fe-885d-c62e3c0e0c85
📒 Files selected for processing (1)
internal/blastradius/handler.go
Replace naive import-edge BFS with the API's dedicated impact analysis endpoint, which provides risk scoring, affected functions with distance, entry points, direct/transitive dependency counts, and risk factors.

Three modes:

```
supermodel blast-radius <file>               # target specific file(s)
supermodel blast-radius --diff changes.diff  # analyze from git diff
supermodel blast-radius                      # global coupling map
```

New flags: --diff
Removed: --depth (handled by API)
Added alias: impact

Closes supermodeltools#24
e6c2707 to dbfb150
Closes #24
Summary
Replaces naive import-edge BFS with the API's dedicated impact analysis endpoint (`POST /v1/analysis/impact`), which provides risk scoring, affected functions, entry points, and diff-based analysis.

Before (main)
Just file names and hop count from import edges.
After (this PR)

Three modes:

- `supermodel blast-radius <file>` — analyze specific target(s)
- `supermodel blast-radius --diff changes.diff` — analyze from git diff
- `supermodel blast-radius` — global coupling map

Note
Should also be wirable from `supermodel audit` as one of the quality checks.

Test plan
- `go test ./...` — all pass (4 unit tests + integration tests updated)
- `blast-radius AuthConstants.java` → risk=critical, 25 files, 2 entry points

Summary by CodeRabbit
New Features
Improvements
Tests