feat: wire impact analysis into audit health report #30
Conversation
Runs global impact analysis alongside domain analysis during audit. Adds Impact Analysis section with risk scores and dependent counts. Critical blast-radius files now trigger DEGRADED status and generate recommendations. Closes supermodeltools#26
Walkthrough

The audit command now runs an impact analysis after domain analysis, enriches the HealthReport with per-file blast-radius metrics, treats any Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Audit as Audit Command
    participant ImpactRunner as runImpactForAudit
    participant API as Impact API
    participant Factory as Health Factory
    participant Renderer as Renderer
    Audit->>Factory: factory.Analyze(rootDir)
    Audit->>ImpactRunner: runImpactForAudit(rootDir)
    ImpactRunner->>ImpactRunner: create ZIP of rootDir
    ImpactRunner->>API: POST /v1/analysis/impact (ZIP) [10m timeout]
    API-->>ImpactRunner: ImpactResult (BlastRadius + GlobalMetrics)
    ImpactRunner-->>Audit: return ImpactResult
    Audit->>Factory: factory.EnrichWithImpact(report, impact)
    Factory->>Factory: populate ImpactFiles, sort & cap to top 10, update status/recommendations
    Audit->>Renderer: RenderHealth(report)
    Renderer->>Renderer: renderImpactSection (if ImpactFiles non-empty)
    Renderer-->>Audit: Markdown report
```
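The flow in the diagram can be sketched as a tiny standalone Go program. All names here (`analyze`, `runImpactForAudit`, `enrichWithImpact`, `renderHealth`, the `report` type) are simplified stand-ins for illustration, not the real signatures in `cmd/audit.go` or `internal/factory`:

```go
package main

import "fmt"

// report is a simplified stand-in for the real HealthReport.
type report struct{ impactFiles []string }

// analyze stands in for factory.Analyze (domain analysis).
func analyze(root string) *report { return &report{} }

// runImpactForAudit would ZIP root and POST it to /v1/analysis/impact;
// here it just returns a canned result.
func runImpactForAudit(root string) []string { return []string{"src/shared.ts"} }

// enrichWithImpact stands in for factory.EnrichWithImpact.
func enrichWithImpact(r *report, files []string) {
	r.impactFiles = append(r.impactFiles, files...)
}

// renderHealth only emits the impact section when there is data,
// mirroring renderImpactSection's non-empty check.
func renderHealth(r *report) string {
	if len(r.impactFiles) == 0 {
		return "(no impact section)"
	}
	return fmt.Sprintf("## Impact Analysis (%d files)", len(r.impactFiles))
}

func main() {
	r := analyze(".")
	enrichWithImpact(r, runImpactForAudit("."))
	fmt.Println(renderHealth(r))
}
```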
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 4 passed
Actionable comments posted: 1
🧹 Nitpick comments (3)
cmd/audit.go (2)
89-99: Minor: Hash slicing assumes minimum length.
`hash[:16]` will panic if the hash is somehow shorter than 16 characters. In practice, `cache.HashFile` probably returns a SHA-256 hex string (64 chars), so this is fine. But if you want to be extra defensive:

```go
idempotencyKey := "audit-impact-"
if len(hash) >= 16 {
	idempotencyKey += hash[:16]
} else {
	idempotencyKey += hash
}
```

Totally optional — just something to consider if this code path could ever receive unexpected input.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/audit.go` around lines 89 - 99, The code slices hash[:16] which can panic if hash is shorter; update the logic after hash, err := cache.HashFile(zipPath) to build an idempotency key safely (e.g., produce idempotencyKey := "audit-impact-" + hash if len(hash) < 16 else "audit-impact-"+hash[:16]) and pass that idempotencyKey into client.Impact instead of the direct "audit-impact-"+hash[:16]; ensure references to hash, cache.HashFile and the client.Impact call are updated accordingly.
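The defensive slicing suggested above can be exercised in isolation. The `buildIdempotencyKey` helper name is hypothetical; the real code builds the key inline after `cache.HashFile`:

```go
package main

import "fmt"

// buildIdempotencyKey truncates the hash to 16 chars when possible and
// falls back to the whole hash when it is shorter. Helper name is
// illustrative; cmd/audit.go builds the key inline.
func buildIdempotencyKey(hash string) string {
	if len(hash) >= 16 {
		return "audit-impact-" + hash[:16]
	}
	return "audit-impact-" + hash
}

func main() {
	fmt.Println(buildIdempotencyKey("0123456789abcdef0123456789abcdef")) // long hash: truncated
	fmt.Println(buildIdempotencyKey("short"))                            // short hash: used as-is
}
```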
94-96: Consider using `cmd.Context()` as the parent context.

Right now you're doing:

```go
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
```

This means if the user hits Ctrl+C, the impact analysis won't respond — they'd have to wait for the timeout or kill the process. Compare this to `blastradius.Run`, which uses `cmd.Context()` directly (see `internal/blastradius/handler.go:23`).

Consider wrapping the command's context instead:

```go
ctx, cancel := context.WithTimeout(cmd.Context(), 10*time.Minute)
```

That way, user cancellation still works, but you also have the 10-minute ceiling.
🔧 Suggested fix
```diff
- ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
+ ctx, cancel := context.WithTimeout(cmd.Context(), 10*time.Minute)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/audit.go` around lines 94 - 96, The context created for the analysis currently uses context.Background(), which ignores user cancellation; update the timeout context to wrap the command's context by using cmd.Context() as the parent (i.e., call context.WithTimeout(cmd.Context(), 10*time.Minute)) so Ctrl+C cancels immediately while still enforcing the 10-minute ceiling; keep the existing cancel() defer. This change applies where ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute) is declared in cmd/audit.go near the client := api.New(cfg) logic.

internal/factory/render.go (1)
65-78: Rendering logic looks good, but heads up on the complexity warning.

The impact analysis table rendering is clean and follows the same pattern as the other sections. The empty risk score fallback to `"-"` is a nice touch.

However, the pipeline is flagging that `RenderHealth` now has cyclomatic complexity of 22 (threshold is 20). This is a minor lint issue — the function has grown with all these conditional sections.

A couple of options to consider (not blocking, just food for thought):

- Extract each section into a small helper like `renderImpactAnalysis(w, r.ImpactFiles)`
- Or suppress the lint warning if the team agrees the function is readable as-is

Since this is a rendering function that's basically a sequence of conditional writes, I'd lean toward keeping it as-is and bumping the threshold or suppressing it for this function. Up to you!
💡 Optional: Extract to helper function
```diff
+func renderImpactAnalysis(w io.Writer, files []ImpactFile) {
+	if len(files) == 0 {
+		return
+	}
+	fmt.Fprint(w, "\n## Impact Analysis\n\n")
+	fmt.Fprintln(w, "| File | Risk | Direct | Transitive | Affected Files |")
+	fmt.Fprintln(w, "|------|------|--------|------------|----------------|")
+	for i := range files {
+		f := &files[i]
+		risk := f.RiskScore
+		if risk == "" {
+			risk = "-"
+		}
+		fmt.Fprintf(w, "| %s | %s | %d | %d | %d |\n", f.Path, risk, f.Direct, f.Transitive, f.Files)
+	}
+}
```

Then in `RenderHealth`:

```diff
-	// Impact analysis
-	if len(r.ImpactFiles) > 0 {
-		fmt.Fprint(w, "\n## Impact Analysis\n\n")
-		fmt.Fprintln(w, "| File | Risk | Direct | Transitive | Affected Files |")
-		fmt.Fprintln(w, "|------|------|--------|------------|----------------|")
-		for i := range r.ImpactFiles {
-			f := &r.ImpactFiles[i]
-			risk := f.RiskScore
-			if risk == "" {
-				risk = "-"
-			}
-			fmt.Fprintf(w, "| %s | %s | %d | %d | %d |\n", f.Path, risk, f.Direct, f.Transitive, f.Files)
-		}
-	}
+	renderImpactAnalysis(w, r.ImpactFiles)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/factory/render.go` around lines 65 - 78, RenderHealth exceeds the cyclomatic complexity threshold (22 > 20) because the inline Impact Analysis block increases branching; extract the block that writes the table into a small helper like renderImpactAnalysis(w, r.ImpactFiles) (accepting io.Writer and the slice type used for ImpactFiles) and replace the inline loop in RenderHealth with a single call to renderImpactAnalysis(w, r.ImpactFiles); alternatively, if you prefer to keep the logic inline, add a linter suppression/comment just above RenderHealth to exempt it from the complexity rule.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@cmd/audit.go`:
- Around line 89-99: The code slices hash[:16] which can panic if hash is
shorter; update the logic after hash, err := cache.HashFile(zipPath) to build an
idempotency key safely (e.g., produce idempotencyKey := "audit-impact-" + hash
if len(hash) < 16 else "audit-impact-"+hash[:16]) and pass that idempotencyKey
into client.Impact instead of the direct "audit-impact-"+hash[:16]; ensure
references to hash, cache.HashFile and the client.Impact call are updated
accordingly.
- Around line 94-96: The context created for the analysis currently uses
context.Background(), which ignores user cancellation; update the timeout
context to wrap the command's context by using cmd.Context() as the parent
(i.e., call context.WithTimeout(cmd.Context(), 10*time.Minute)) so Ctrl+C
cancels immediately while still enforcing the 10-minute ceiling; keep the
existing cancel() defer. This change applies where ctx, cancel :=
context.WithTimeout(context.Background(), 10*time.Minute) is declared in
cmd/audit.go near the client := api.New(cfg) logic.
In `@internal/factory/render.go`:
- Around line 65-78: RenderHealth exceeds the cyclomatic complexity threshold
(22 > 20) because the inline Impact Analysis block increases branching; extract
the block that writes the table into a small helper like renderImpactAnalysis(w,
r.ImpactFiles) (accepting io.Writer and the slice type used for ImpactFiles) and
replace the inline loop in RenderHealth with a single call to
renderImpactAnalysis(w, r.ImpactFiles); alternatively, if you prefer to keep the
logic inline, add a linter suppression/comment just above RenderHealth to exempt
it from the complexity rule.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 46b9a448-9b32-49c9-b066-c219a182ef8f
📒 Files selected for processing (4)
- cmd/audit.go
- internal/factory/health.go
- internal/factory/render.go
- internal/factory/types.go
```go
func EnrichWithImpact(r *HealthReport, impact *api.ImpactResult) {
	for i := range impact.Impacts {
		imp := &impact.Impacts[i]
		r.ImpactFiles = append(r.ImpactFiles, ImpactFile{
			Path:       imp.Target.File,
			RiskScore:  imp.BlastRadius.RiskScore,
			Direct:     imp.BlastRadius.DirectDependents,
			Transitive: imp.BlastRadius.TransitiveDependents,
			Files:      imp.BlastRadius.AffectedFiles,
		})
	}
	// Also pull in global critical files if the API returned them.
	for i := range impact.GlobalMetrics.MostCriticalFiles {
		cf := &impact.GlobalMetrics.MostCriticalFiles[i]
		r.ImpactFiles = append(r.ImpactFiles, ImpactFile{
			Path:   cf.File,
			Direct: cf.DependentCount,
		})
	}
	// Cap to top 10 by direct dependents.
	sort.Slice(r.ImpactFiles, func(i, j int) bool {
		return r.ImpactFiles[i].Direct > r.ImpactFiles[j].Direct
	})
	if len(r.ImpactFiles) > 10 {
		r.ImpactFiles = r.ImpactFiles[:10]
	}
```
Potential duplicate files if the same path appears in both sources.
So here's what's happening: you're pulling files from two places:
- `impact.Impacts` (lines 43-51) — blast radius results
- `impact.GlobalMetrics.MostCriticalFiles` (lines 54-60) — global critical files
If the same file path appears in both lists, you'll end up with duplicates in r.ImpactFiles. After sorting by Direct count, both entries could make it into the top 10, which might confuse users seeing the same file twice in the report.
Example scenario:
- `foo.go` appears in `Impacts` with Direct=50
- `foo.go` also appears in `MostCriticalFiles` with DependentCount=50
- Both get added → duplicate rows in the rendered table
Consider deduplicating by path:
🔧 Suggested fix using a map
```diff
 func EnrichWithImpact(r *HealthReport, impact *api.ImpactResult) {
+	seen := make(map[string]bool)
 	for i := range impact.Impacts {
 		imp := &impact.Impacts[i]
+		if seen[imp.Target.File] {
+			continue
+		}
+		seen[imp.Target.File] = true
 		r.ImpactFiles = append(r.ImpactFiles, ImpactFile{
 			Path:       imp.Target.File,
 			RiskScore:  imp.BlastRadius.RiskScore,
 			Direct:     imp.BlastRadius.DirectDependents,
 			Transitive: imp.BlastRadius.TransitiveDependents,
 			Files:      imp.BlastRadius.AffectedFiles,
 		})
 	}
 	// Also pull in global critical files if the API returned them.
 	for i := range impact.GlobalMetrics.MostCriticalFiles {
 		cf := &impact.GlobalMetrics.MostCriticalFiles[i]
+		if seen[cf.File] {
+			continue
+		}
+		seen[cf.File] = true
 		r.ImpactFiles = append(r.ImpactFiles, ImpactFile{
 			Path:   cf.File,
 			Direct: cf.DependentCount,
 		})
 	}
```
Before / After (live test against supermodel-public-api)

Before (main): No impact analysis. No blast radius data. No critical file warnings.

After (this PR): Status correctly escalated from HEALTHY → DEGRADED due to critical blast-radius files.

Unit tests: all pass. Impact failure is non-blocking (audit continues with warning).
Actionable comments posted: 1
🧹 Nitpick comments (2)
internal/factory/factory_test.go (2)
1030-1043: Consider verifying the ordering, not just the count.

Right now you're checking that 10 files are kept, which is great. But the implementation sorts by `Direct` descending before capping — so the top 10 by direct dependents should survive.

For example, `src/file0.ts` has `Direct: 100` and `src/file14.ts` has `Direct: 86`. After the cap, `file0` through `file9` should be present (the ones with the highest `Direct` values). You could add a quick check like:

```go
// Verify the highest Direct value is first (should be file0 with 100)
if r.ImpactFiles[0].Direct != 100 {
	t.Errorf("expected top file to have Direct=100, got %d", r.ImpactFiles[0].Direct)
}
```

This would catch bugs where the sort or cap logic is wrong. Not critical, just nice-to-have.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/factory/factory_test.go` around lines 1030 - 1043, Update the TestEnrichWithImpact_CapsAt10 unit test to also assert that the preserved ImpactFiles are the top 10 by Direct dependents (i.e., sorted descending) rather than only checking the count; inside TestEnrichWithImpact_CapsAt10 after calling EnrichWithImpact, add assertions on r.ImpactFiles (for example check r.ImpactFiles[0].Direct == 100 and that the sequence of Direct values decreases for the first 10 entries or that the last kept entry matches the expected 10th-highest value) so the test verifies both ordering and capping behavior of EnrichWithImpact.
973-1077: Missing test for `GlobalMetrics.MostCriticalFiles` path.

Looking at `EnrichWithImpact` in `health.go`, there's a second loop that pulls in files from `impact.GlobalMetrics.MostCriticalFiles`:

```go
for i := range impact.GlobalMetrics.MostCriticalFiles {
	cf := &impact.GlobalMetrics.MostCriticalFiles[i]
	r.ImpactFiles = append(r.ImpactFiles, ImpactFile{
		Path:   cf.File,
		Direct: cf.DependentCount,
	})
}
```

None of the current tests exercise this path. It would be good to add something like:

```go
func TestEnrichWithImpact_IncludesGlobalCriticalFiles(t *testing.T) {
	r := &HealthReport{Status: StatusHealthy}
	impact := &api.ImpactResult{
		GlobalMetrics: api.GlobalMetrics{
			MostCriticalFiles: []api.CriticalFile{
				{File: "src/shared.ts", DependentCount: 50},
			},
		},
	}
	EnrichWithImpact(r, impact)
	if len(r.ImpactFiles) != 1 || r.ImpactFiles[0].Path != "src/shared.ts" {
		t.Errorf("expected GlobalMetrics file, got %v", r.ImpactFiles)
	}
}
```

This ensures both sources of impact files are working correctly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/factory/factory_test.go` around lines 973 - 1077, Add a unit test that exercises the GlobalMetrics.MostCriticalFiles branch in EnrichWithImpact: create a HealthReport, build an api.ImpactResult with GlobalMetrics.MostCriticalFiles containing one api.CriticalFile (e.g., File:"src/shared.ts", DependentCount:50), call EnrichWithImpact(r, impact) and assert that r.ImpactFiles has one entry whose Path == "src/shared.ts" and Direct == 50; name the test TestEnrichWithImpact_IncludesGlobalCriticalFiles and use the existing ImpactFile/HealthReport types for assertions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/factory/factory_test.go`:
- Around line 1068-1077: EnrichWithImpact currently dereferences impact.Impacts
and impact.GlobalMetrics without guarding for a nil impact; add a nil-check at
the top of EnrichWithImpact (e.g., if impact == nil { return }) so passing nil
won't panic, and update/add a unit test (e.g., extend
TestEnrichWithImpact_EmptyImpact or add TestEnrichWithImpact_NilImpact) to call
EnrichWithImpact(r, nil) and assert the HealthReport (Status and ImpactFiles)
remains unchanged; alternatively, if you prefer a contract, add a clear comment
on EnrichWithImpact requiring non-nil api.ImpactResult and ensure callers adhere
to it, but prefer the nil-guard for robustness.
---
Nitpick comments:
In `@internal/factory/factory_test.go`:
- Around line 1030-1043: Update the TestEnrichWithImpact_CapsAt10 unit test to
also assert that the preserved ImpactFiles are the top 10 by Direct dependents
(i.e., sorted descending) rather than only checking the count; inside
TestEnrichWithImpact_CapsAt10 after calling EnrichWithImpact, add assertions on
r.ImpactFiles (for example check r.ImpactFiles[0].Direct == 100 and that the
sequence of Direct values decreases for the first 10 entries or that the last
kept entry matches the expected 10th-highest value) so the test verifies both
ordering and capping behavior of EnrichWithImpact.
- Around line 973-1077: Add a unit test that exercises the
GlobalMetrics.MostCriticalFiles branch in EnrichWithImpact: create a
HealthReport, build an api.ImpactResult with GlobalMetrics.MostCriticalFiles
containing one api.CriticalFile (e.g., File:"src/shared.ts", DependentCount:50),
call EnrichWithImpact(r, impact) and assert that r.ImpactFiles has one entry
whose Path == "src/shared.ts" and Direct == 50; name the test
TestEnrichWithImpact_IncludesGlobalCriticalFiles and use the existing
ImpactFile/HealthReport types for assertions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b6714517-c767-4a07-a3bf-39f058986507
📒 Files selected for processing (1)
internal/factory/factory_test.go
```go
func TestEnrichWithImpact_EmptyImpact(t *testing.T) {
	r := &HealthReport{Status: StatusHealthy}
	EnrichWithImpact(r, &api.ImpactResult{})
	if r.Status != StatusHealthy {
		t.Errorf("expected HEALTHY with empty impact, got %s", r.Status)
	}
	if len(r.ImpactFiles) != 0 {
		t.Errorf("expected 0 impact files, got %d", len(r.ImpactFiles))
	}
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check if EnrichWithImpact has any nil-guard for the impact parameter
ast-grep --pattern $'func EnrichWithImpact($_, $_) {
  $$$
}'
```

Repository: supermodeltools/cli
Length of output: 2016
Add a nil guard or clarify the contract.
The EnrichWithImpact function accesses impact.Impacts and impact.GlobalMetrics without checking if impact is nil first. If someone calls EnrichWithImpact(r, nil), it'll panic right away on that first for-loop.
Either add a simple nil check at the start (like if impact == nil { return }), or add a comment explaining that the caller must always pass a non-nil api.ImpactResult. Also worth adding a test that documents the actual behavior—right now the test only covers the empty-but-valid case.
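A minimal sketch of the guard and the behavior it buys, using simplified stand-in types (the real `HealthReport` and `api.ImpactResult` live in `internal/factory` and the `api` package):

```go
package main

import "fmt"

// Simplified stand-ins for api.ImpactResult and HealthReport.
type ImpactResult struct{ Impacts []string }

type HealthReport struct {
	Status      string
	ImpactFiles []string
}

// EnrichWithImpact sketch with the suggested nil guard: a nil result is
// treated the same as an empty one, leaving the report untouched.
func EnrichWithImpact(r *HealthReport, impact *ImpactResult) {
	if impact == nil {
		return
	}
	for _, f := range impact.Impacts {
		r.ImpactFiles = append(r.ImpactFiles, f)
	}
}

func main() {
	r := &HealthReport{Status: "HEALTHY"}
	EnrichWithImpact(r, nil) // would panic on impact.Impacts without the guard
	fmt.Println(r.Status, len(r.ImpactFiles))
}
```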
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/factory/factory_test.go` around lines 1068 - 1077, EnrichWithImpact
currently dereferences impact.Impacts and impact.GlobalMetrics without guarding
for a nil impact; add a nil-check at the top of EnrichWithImpact (e.g., if
impact == nil { return }) so passing nil won't panic, and update/add a unit test
(e.g., extend TestEnrichWithImpact_EmptyImpact or add
TestEnrichWithImpact_NilImpact) to call EnrichWithImpact(r, nil) and assert the
HealthReport (Status and ImpactFiles) remains unchanged; alternatively, if you
prefer a contract, add a clear comment on EnrichWithImpact requiring non-nil
api.ImpactResult and ensure callers adhere to it, but prefer the nil-guard for
robustness.
♻️ Duplicate comments (1)
internal/factory/health.go (1)
46-63: ⚠️ Potential issue | 🟡 Minor

Deduplicate impact entries by file path before append.

If the same file appears in both `impact.Impacts` and `impact.GlobalMetrics.MostCriticalFiles`, it gets added twice. That can duplicate table rows and duplicate recommendations.

🔧 Proposed fix (dedupe + merge by path)
```diff
 func EnrichWithImpact(r *HealthReport, impact *api.ImpactResult) {
 	if impact == nil {
 		return
 	}
+	byPath := make(map[string]int, len(r.ImpactFiles))
+	for i := range r.ImpactFiles {
+		byPath[r.ImpactFiles[i].Path] = i
+	}
+
+	upsert := func(next ImpactFile) {
+		if idx, ok := byPath[next.Path]; ok {
+			cur := &r.ImpactFiles[idx]
+			if next.RiskScore != "" {
+				cur.RiskScore = next.RiskScore
+			}
+			if next.Direct > cur.Direct {
+				cur.Direct = next.Direct
+			}
+			if next.Transitive > cur.Transitive {
+				cur.Transitive = next.Transitive
+			}
+			if next.Files > cur.Files {
+				cur.Files = next.Files
+			}
+			return
+		}
+		byPath[next.Path] = len(r.ImpactFiles)
+		r.ImpactFiles = append(r.ImpactFiles, next)
+	}
+
 	for i := range impact.Impacts {
 		imp := &impact.Impacts[i]
-		r.ImpactFiles = append(r.ImpactFiles, ImpactFile{
+		upsert(ImpactFile{
 			Path:       imp.Target.File,
 			RiskScore:  imp.BlastRadius.RiskScore,
 			Direct:     imp.BlastRadius.DirectDependents,
 			Transitive: imp.BlastRadius.TransitiveDependents,
 			Files:      imp.BlastRadius.AffectedFiles,
 		})
 	}
 	// Also pull in global critical files if the API returned them.
 	for i := range impact.GlobalMetrics.MostCriticalFiles {
 		cf := &impact.GlobalMetrics.MostCriticalFiles[i]
-		r.ImpactFiles = append(r.ImpactFiles, ImpactFile{
+		upsert(ImpactFile{
 			Path:   cf.File,
 			Direct: cf.DependentCount,
 		})
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/factory/health.go` around lines 46 - 63, The loop currently appends entries from impact.Impacts and impact.GlobalMetrics.MostCriticalFiles without deduplication; create a map keyed by file path to de-duplicate and merge metrics before assigning to r.ImpactFiles: when iterating impact.Impacts, insert ImpactFile into map[path] with RiskScore/Direct/Transitive/Files; when iterating impact.GlobalMetrics.MostCriticalFiles, check map[path] and if present merge/update the existing entry (e.g., set Direct to the max or sum of existing.Direct and cf.DependentCount and fill missing fields such as RiskScore), otherwise insert a new ImpactFile; finally replace r.ImpactFiles with the map values (preserving desired ordering if needed). Use the existing symbols impact.Impacts, impact.GlobalMetrics.MostCriticalFiles, r.ImpactFiles and ImpactFile to locate and implement the fix.
🧹 Nitpick comments (1)
internal/factory/health.go (1)
65-67: Use a deterministic tie-breaker in sort.

Sorting only by `Direct` can shuffle equal values between runs. Add a `Path` tie-breaker to keep report output stable.

♻️ Suggested tweak

```diff
 sort.Slice(r.ImpactFiles, func(i, j int) bool {
-	return r.ImpactFiles[i].Direct > r.ImpactFiles[j].Direct
+	if r.ImpactFiles[i].Direct != r.ImpactFiles[j].Direct {
+		return r.ImpactFiles[i].Direct > r.ImpactFiles[j].Direct
+	}
+	return r.ImpactFiles[i].Path < r.ImpactFiles[j].Path
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/factory/health.go` around lines 65 - 67, The current sort.Slice comparator only compares r.ImpactFiles[i].Direct which can reorder equal Direct values; update the comparator used on r.ImpactFiles so it first compares Direct and, when equal, uses the Path field as a deterministic tie-breaker (e.g., return a comparison of r.ImpactFiles[i].Path < r.ImpactFiles[j].Path). Keep the primary sort order by Direct and add Path as the secondary key to ensure stable, reproducible output.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@internal/factory/health.go`:
- Around line 46-63: The loop currently appends entries from impact.Impacts and
impact.GlobalMetrics.MostCriticalFiles without deduplication; create a map keyed
by file path to de-duplicate and merge metrics before assigning to
r.ImpactFiles: when iterating impact.Impacts, insert ImpactFile into map[path]
with RiskScore/Direct/Transitive/Files; when iterating
impact.GlobalMetrics.MostCriticalFiles, check map[path] and if present
merge/update the existing entry (e.g., set Direct to the max or sum of
existing.Direct and cf.DependentCount and fill missing fields such as
RiskScore), otherwise insert a new ImpactFile; finally replace r.ImpactFiles
with the map values (preserving desired ordering if needed). Use the existing
symbols impact.Impacts, impact.GlobalMetrics.MostCriticalFiles, r.ImpactFiles
and ImpactFile to locate and implement the fix.
---
Nitpick comments:
In `@internal/factory/health.go`:
- Around line 65-67: The current sort.Slice comparator only compares
r.ImpactFiles[i].Direct which can reorder equal Direct values; update the
comparator used on r.ImpactFiles so it first compares Direct and, when equal,
uses the Path field as a deterministic tie-breaker (e.g., return a comparison of
r.ImpactFiles[i].Path < r.ImpactFiles[j].Path). Keep the primary sort order by
Direct and add Path as the secondary key to ensure stable, reproducible output.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1cd2c290-cecd-49de-b13e-fa345c8e7b07
📒 Files selected for processing (1)
internal/factory/health.go
Closes #26
Summary
Runs `/v1/analysis/impact` (global mode) during `supermodel audit` and adds the results to the health report. Critical blast-radius files now affect the health status and generate recommendations.

What changed

- `cmd/audit.go` — runs impact analysis after domain analysis, calls `factory.EnrichWithImpact`
- `internal/factory/health.go` — new `EnrichWithImpact()`, updated `scoreStatus` and `generateRecommendations` to factor in critical impact files
- `internal/factory/types.go` — new `ImpactFile` type, added `ImpactFiles` to `HealthReport`
- `internal/factory/render.go` — renders Impact Analysis table

Before
After
Test plan
- `go test ./...` — all pass
- `supermodel audit` shows impact table and critical recommendations

Summary by CodeRabbit
New Features
Bug Fixes / Reliability
Tests