feat: wire MCP server to dedicated dead-code and impact API endpoints (#32)
Conversation
Replace inline graph traversal (findDeadFunctions, findAffected) with calls to client.DeadCode() and client.Impact(). MCP tools now return the same rich results as the CLI commands: confidence levels, risk scores, line numbers, reasons, entry points. Removed ~100 lines of inline helpers (findDeadFunctions, findAffected, isEntryPoint, pathMatches, and the deadFn and affected types). Updated tool schemas to match the new params (min_confidence and limit for dead_code; file optional for blast_radius). Closes supermodeltools#27
Walkthrough
Rewires MCP tool dispatch to dedicated handlers, removes local graph-analysis helpers, and calls the analysis APIs via the API client.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant MCP as MCP Server
    participant FS as Repo/Filesystem
    participant API as Analysis API
    participant Cache as Graph Cache
    MCP->>FS: ensureZip() — create repo zip, compute hash
    FS-->>MCP: zipPath, zipHash
    MCP->>API: client.DeadCode(zipPath, idempotencyKey=zipHash, min_confidence, limit)
    API-->>MCP: DeadCodeResults (candidates: file, line, confidence, reason)
    MCP->>API: client.Impact(zipPath, idempotencyKey=zipHash+file?, file?)
    API-->>MCP: ImpactResults (risk, affected files, entry points, factors)
    MCP->>Cache: use/reuse cached graph for analyze/get_graph requests
    MCP->>MCP: formatDeadCode / formatImpact -> return tool response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/mcp/server.go`:
- Around line 248-255: toolDeadCode and toolBlastRadius recreate the repo
zip/hash independently of the cached in-memory graph (served by toolGetGraph),
causing snapshot drift; fix by having these handlers call s.ensureZip() and then
compare the returned hash to the server's cached graph snapshot (e.g., a new
s.graphHash field) and if they differ, invalidate s.graph (set to nil) or update
s.graphHash to the new hash so all handlers use the same snapshot; apply the
same fix to the other affected handlers (the functions around the noted ranges
and any use of ensureZip()/s.graph) so any change in working-tree hash either
resets s.graph or synchronizes it to the new hash before using it.
- Around line 95-100: The MCP endpoint currently only accepts and forwards a
"file" argument and formats file-level output; update it to accept a generic
target (file or function), forward that target to Impact(...targets...) instead
of only args["file"], and format responses using ImpactTargetInfo{Name, Line,
Type} and AffectedFunctions so function-level results are preserved. Concretely:
change the request parsing to accept a "target" (or reuse args["target"] /
args["file_or_function"]) and pass that value into the call to Impact(...),
update any marshal/response builders that previously assumed file-only output to
iterate/serialize ImpactTargetInfo and AffectedFunctions, and ensure callers
that construct the request populate the new field. Use the existing types
Impact(...targets...), ImpactTargetInfo{Name, Line, Type}, and AffectedFunctions
to implement the forwarding and response formatting.
- Around line 256-257: The parsers currently silently coerce bad/missing entries
from the args map (e.g., min_confidence via min_confidence, limit via intArg,
and file/blast_radius/dead_code uses) to zero values which widens requests;
instead validate types and fail fast: update the code paths that read args
(referencing min_confidence, intArg, and the args map) to check the type
assertion results and return an explicit error/HTTP 400 when an arg is the wrong
type or missing, and modify intArg to surface parsing/validation errors rather
than returning 0; apply the same pattern to the other occurrences noted (around
the sections you highlighted at 287-294 and 452-455) so bad client input is
rejected rather than treated as default values.
- Around line 406-421: The linter flags an unnamedResult; fix by naming the
return values on ensureZip (e.g. change signature to func (s *server)
ensureZip() (zipPath string, hash string, err error)) and then assign to those
named variables (replace the short-declarations like zipPath, err :=
createZip(s.dir) and hash, err := cache.HashFile(zipPath) with simple
assignments zipPath, err = createZip(s.dir) and hash, err =
cache.HashFile(zipPath)) so the function uses the named returns consistently and
returns the named variables.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c9b0c528-9ea6-48ff-8819-96d4c7f5c8a7
📒 Files selected for processing (1)
internal/mcp/server.go
```diff
 Description: "Analyze the impact of changing a file or function. Returns risk score, affected files and functions, entry points impacted, and risk factors.",
 InputSchema: toolSchema{
-	Type:     "object",
-	Required: []string{"file"},
+	Type: "object",
 	Properties: map[string]schemaProp{
-		"file":  {Type: "string", Description: "Repo-relative path to the file (e.g. internal/api/client.go)."},
-		"depth": {Type: "integer", Description: "Maximum traversal depth. 0 = unlimited."},
 		"force": {Type: "boolean", Description: "Re-analyze even if a cached result exists."},
+		"file":  {Type: "string", Description: "Repo-relative path to the file (e.g. internal/api/client.go). Omit for global coupling map."},
 	},
```
Blast radius still doesn’t expose function-level analysis.
The tool now advertises “file or function,” but the MCP surface only accepts file, forwards only args["file"], and formats only file-level output. The underlying API/result types are already generic (Impact(...targets...), ImpactTargetInfo{Name, Line, Type}, AffectedFunctions), so MCP clients still miss the function-level parity this change is aiming for.
Also applies to: 287-294, 316-327
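One way to unblock function-level targeting on the MCP surface, sketched under assumed arg shapes — the `target`/`file` key names and the `pickTarget` helper are hypothetical, not code from the PR:

```go
package main

import "fmt"

// pickTarget accepts a generic "target" argument with "file" as a
// backward-compatible fallback, and fails fast on non-string values
// instead of silently treating them as "no target".
func pickTarget(args map[string]any) (string, error) {
	for _, key := range []string{"target", "file"} {
		v, ok := args[key]
		if !ok {
			continue
		}
		s, ok := v.(string)
		if !ok {
			return "", fmt.Errorf("%s must be a string, got %T", key, v)
		}
		return s, nil
	}
	// Absent target is allowed: the caller can fall back to the
	// global coupling map, matching the optional-file schema.
	return "", nil
}

func main() {
	t, err := pickTarget(map[string]any{"target": "internal/api/client.go:NewClient"})
	fmt.Println(t, err) // internal/api/client.go:NewClient <nil>
}
```

With a generic target string in hand, the handler can pass it straight through to Impact(...targets...) and let the API decide whether it names a file or a function.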
```go
// toolDeadCode calls the dedicated /v1/analysis/dead-code endpoint.
func (s *server) toolDeadCode(ctx context.Context, args map[string]any) (string, error) {
	zipPath, hash, err := s.ensureZip()
	if err != nil {
		return "", err
	}
	defer os.Remove(zipPath)
```
These tools can drift onto a different repo snapshot than get_graph.
toolDeadCode and toolBlastRadius always zip/hash the working tree again, while toolGetGraph can keep serving the cached in-memory graph. After the repo changes mid-session, the server can return impact/dead-code data for one snapshot and graph data for another. Please make them share one snapshot/hash or invalidate s.graph when the working tree hash changes.
Also applies to: 281-285, 344-347, 404-421
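The suggested invalidation can be sketched in isolation — the `graphHash` field and `syncSnapshot` method are hypothetical names illustrating the fix, not code from this PR:

```go
package main

import "fmt"

// graph stands in for the server's parsed in-memory graph.
type graph struct{ files int }

type server struct {
	graph     *graph
	graphHash string
}

// syncSnapshot drops the cached graph whenever the working-tree hash
// returned by ensureZip() no longer matches the hash the graph was
// built from, so every handler answers from the same repo snapshot.
func (s *server) syncSnapshot(newHash string) {
	if s.graphHash != newHash {
		s.graph = nil // force get_graph to rebuild from the new zip
		s.graphHash = newHash
	}
}

func main() {
	s := &server{graph: &graph{files: 10}, graphHash: "abc"}
	s.syncSnapshot("abc")
	fmt.Println(s.graph != nil) // true: hash unchanged, cache kept
	s.syncSnapshot("def")
	fmt.Println(s.graph == nil) // true: tree changed, cache dropped
}
```

If each handler calls this right after ensureZip(), dead-code, blast-radius, and graph answers can no longer come from different snapshots within one session.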
```go
minConfidence, _ := args["min_confidence"].(string)
limit := intArg(args, "limit")
```
Fail fast on bad args instead of widening the request.
These parsers silently fall back to zero values. A non-string file turns blast_radius into a global coupling-map call, and a bad limit turns dead_code into “return everything.” That makes client bugs hard to spot and can trigger much broader API work than the caller asked for.
Also applies to: 287-294, 452-455
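A minimal fail-fast sketch — the `limitArg` helper name and error wording are assumptions, not the PR's code. Since encoding/json decodes all JSON numbers into float64, a strict parser can reject wrong types and non-integral values instead of coercing them to 0 ("return everything"):

```go
package main

import (
	"fmt"
	"math"
)

// limitArg validates an optional positive-integer argument instead of
// silently defaulting to 0 on bad input.
func limitArg(args map[string]any, key string) (int, error) {
	v, ok := args[key]
	if !ok {
		return 0, nil // absent is fine; 0 means "no limit"
	}
	f, ok := v.(float64) // encoding/json decodes every number as float64
	if !ok {
		return 0, fmt.Errorf("%s must be a number, got %T", key, v)
	}
	if f != math.Trunc(f) || f < 1 {
		return 0, fmt.Errorf("%s must be a positive integer, got %v", key, f)
	}
	return int(f), nil
}

func main() {
	n, err := limitArg(map[string]any{"limit": 25.0}, "limit")
	fmt.Println(n, err) // 25 <nil>
	_, err = limitArg(map[string]any{"limit": "ten"}, "limit")
	fmt.Println(err != nil) // true
}
```

The same present-but-wrong-type check applies to the string args (file, min_confidence), so a malformed request is rejected rather than widened.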
Actionable comments posted: 2
♻️ Duplicate comments (3)
internal/mcp/server.go (3)
250-251: ⚠️ Potential issue | 🟠 Major
These tools can still answer from different repo snapshots. toolDeadCode and toolBlastRadius always zip/hash the current tree, but toolGetGraph can keep serving s.graph from an earlier analysis. If the working tree changes mid-session, MCP can return contradictory answers for the same repo. Please sync them through one snapshot/hash or invalidate s.graph when the hash changes.
Also applies to: 281-282, 344-346
95-100: ⚠️ Potential issue | 🟠 Major
blast_radius still stops at file-level. The tool text says “file or function,” but the schema only accepts file, the request only forwards args["file"], and the formatter never renders affected functions. So MCP still can’t target a function, and it falls short of the output it advertises.
Also applies to: 287-294, 312-337
256-257: ⚠️ Potential issue | 🟠 Major
Malformed args still widen the analysis instead of failing fast. A bad min_confidence just removes filtering, a non-numeric limit becomes 0 via intArg, and a non-string file turns blast_radius into a repo-wide call. That makes client bugs both expensive and easy to miss. Please validate these inputs and return a clear error instead of defaulting.
Also applies to: 287-287, 452-454
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/mcp/server.go`:
- Around line 299-307: The current branch in the function that checks if
len(result.Impacts) == 0 incorrectly falls back to
result.GlobalMetrics.MostCriticalFiles even when a specific target was
requested; change the logic in that branch to check the request's target (e.g.,
the variable target or request.Target) and only build/return the
MostCriticalFiles string when the caller omitted the target, otherwise return a
target-specific "no impact detected" message (or an error) so repo-wide hot
spots are not returned for targeted queries; update the conditional around
result.GlobalMetrics.MostCriticalFiles and the return path of that block
accordingly.
- Line 406: The function signature for ensureZip currently spells out types for
consecutive string params individually; update the signature in the ensureZip
declaration (function ensureZip) to combine identical types using Go shorthand
so it reads (zipPath, hash string, err error) instead of (zipPath string, hash
string, err error) to satisfy the paramTypeCombine lint rule.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b4aa3819-91cb-4b1d-891a-1e6fa07cc40b
📒 Files selected for processing (1)
internal/mcp/server.go
internal/mcp/server.go
Outdated
```go
if len(result.Impacts) == 0 {
	if len(result.GlobalMetrics.MostCriticalFiles) > 0 {
		var sb strings.Builder
		sb.WriteString("Most critical files (by dependent count):\n\n")
		for i := range result.GlobalMetrics.MostCriticalFiles {
			f := &result.GlobalMetrics.MostCriticalFiles[i]
			fmt.Fprintf(&sb, "- %s (%d dependents)\n", f.File, f.DependentCount)
		}
		return sb.String(), nil
```
Don’t return repo-wide hot spots for a targeted query.
If target is set and the API returns no impacts, this branch still falls back to MostCriticalFiles, which is repo-wide data. That can look like a valid answer for the requested file/function even though it’s unrelated. Only use this fallback when the caller omitted target; otherwise return a target-specific “no impact detected.”
Suggested fix

```diff
 if len(result.Impacts) == 0 {
-	if len(result.GlobalMetrics.MostCriticalFiles) > 0 {
+	if target == "" && len(result.GlobalMetrics.MostCriticalFiles) > 0 {
 		var sb strings.Builder
 		sb.WriteString("Most critical files (by dependent count):\n\n")
 		for i := range result.GlobalMetrics.MostCriticalFiles {
 			f := &result.GlobalMetrics.MostCriticalFiles[i]
 			fmt.Fprintf(&sb, "- %s (%d dependents)\n", f.File, f.DependentCount)
 		}
 		return sb.String(), nil
 	}
-	return "No impact detected.", nil
+	if target != "" {
+		return fmt.Sprintf("No impact detected for %s.", target), nil
+	}
+	return "No impact detected.", nil
 }
```
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/mcp/server.go` around lines 299 - 307, The current branch in the
function that checks if len(result.Impacts) == 0 incorrectly falls back to
result.GlobalMetrics.MostCriticalFiles even when a specific target was
requested; change the logic in that branch to check the request's target (e.g.,
the variable target or request.Target) and only build/return the
MostCriticalFiles string when the caller omitted the target, otherwise return a
target-specific "no impact detected" message (or an error) so repo-wide hot
spots are not returned for targeted queries; update the conditional around
result.GlobalMetrics.MostCriticalFiles and the return path of that block
accordingly.
♻️ Duplicate comments (4)
internal/mcp/server.go (4)
248-251: ⚠️ Potential issue | 🟠 Major
These tools can still drift onto different repo snapshots. toolDeadCode and toolBlastRadius always zip the current working tree, while toolGetGraph can keep serving the cached s.graph. After the repo changes mid-session, one MCP session can mix stale graph data with fresh impact/dead-code results. Please tie ensureZip() to s.hash or invalidate s.graph when the hash changes.
Also applies to: 281-285, 344-347, 404-421
256-257: ⚠️ Potential issue | 🟠 Major
Bad arg types silently broaden the API call. A non-string file turns blast_radius into a repo-wide query, a bad min_confidence removes filtering, and a non-integer limit falls back to 0/all results. intArg() also truncates non-integral JSON numbers. This should fail fast instead of quietly widening the request.
Also applies to: 287-287, 452-455
94-100: ⚠️ Potential issue | 🟠 Major
blast_radius is still file-only on the MCP surface. Line 95 says “file or function”, but Line 99 only exposes file, Line 287 only reads that key, and Lines 312-327 still serialize file-level fields while dropping AffectedFunctions. MCP clients still can’t request or receive the function-level impact data the API already returns.
Also applies to: 287-295, 312-327
299-309: ⚠️ Potential issue | 🟠 Major
Don’t return repo-wide hot spots for a targeted miss. When the caller provides a specific file/target and the API returns no impacts, this branch still falls back to MostCriticalFiles, which is unrelated repo-wide data. That makes “no impact for X” look like a valid answer for X. Only use this fallback when no target was supplied.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 22d5e07c-96a4-4354-b567-6fdc25e87c53
📒 Files selected for processing (1)
internal/mcp/server.go
🧹 Nitpick comments (2)
internal/mcp/server_test.go (1)
99-113: Consider adding a test for the "no affected files" case. You've got TestFormatImpact_NoEntryPoints, which is great. For symmetry, you might want a TestFormatImpact_NoAffectedFiles to verify that section is also omitted correctly. Same pattern:

```go
func TestFormatImpact_NoAffectedFiles(t *testing.T) {
	result := &api.ImpactResult{
		Metadata: api.ImpactMetadata{TargetsAnalyzed: 1, TotalFiles: 50, TotalFunctions: 200},
		Impacts: []api.ImpactTarget{
			{
				Target:      api.ImpactTargetInfo{File: "src/util.ts", Type: "file"},
				BlastRadius: api.BlastRadius{DirectDependents: 2, RiskScore: "low"},
				// AffectedFiles intentionally empty
			},
		},
	}
	got := formatImpact(result)
	if strings.Contains(got, "Affected files") {
		t.Error("should not contain affected files section when none")
	}
}
```
internal/mcp/server.go (1)
463-466: Helper enables silent fallback - consider adding a validation variant. The helper itself is fine for JSON number handling (numbers come in as float64). But as mentioned earlier, it silently returns 0 when the type assertion fails. If you want to support strict validation, you could add a sibling function:

```go
func intArgStrict(args map[string]any, key string) (int, error) {
	v, ok := args[key]
	if !ok {
		return 0, nil // key not present is OK
	}
	f, ok := v.(float64)
	if !ok {
		return 0, fmt.Errorf("%s must be a number, got %T", key, v)
	}
	return int(f), nil
}
```
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4ffd1626-2193-462a-ae39-6534e09bb73f
📒 Files selected for processing (2)
internal/mcp/server.go
internal/mcp/server_test.go
MCP dead_code: Before vs After

Before (main): inline graph traversal. After (this PR): real API endpoint. 717 false-positive-heavy results → 5 high-confidence results with file:line and reasons. Same repo, same MCP protocol. Unit tests: 6 new (MCP had 0 before). All 14 packages pass.
Closes #27
Summary
Replace inline graph traversal in the MCP server with calls to the real API endpoints. The MCP dead_code and blast_radius tools now use client.DeadCode() and client.Impact(), with the same data quality as the CLI commands.

What changed
1 file: internal/mcp/server.go — net -4 lines (157 added, 161 removed)

Removed: findDeadFunctions, findAffected, isEntryPoint, pathMatches, deadFn, affected — ~100 lines of inline graph traversal

Added: toolDeadCode → client.DeadCode(), toolBlastRadius → client.Impact(), an ensureZip helper, an intArg helper

Updated tool schemas:
- dead_code: removed include_exports, added min_confidence and limit
- blast_radius: removed depth; file is now optional (omit for global coupling map)

Before (inline graph traversal)
After (real API)
Test plan
go test ./... — all pass

Summary by CodeRabbit
Updates
New Features
Refactor
Tests