feat: file mode — write graph files from analyze, add watch/clean/hook#45
greynewell merged 4 commits into main
Conversation
Ports the supermodel-sidecars CLI from graph-fusion into the supermodel CLI as a new `sidecars` subcommand group. Sidecars are lightweight .graph.* files placed next to each source file containing dependency, call graph, and blast-radius data for AI coding agents.

New commands:
- supermodel sidecars generate [path]: zip + upload, build cache, render all
- supermodel sidecars watch [path]: generate on start, then daemon mode
- supermodel sidecars clean [path]: remove all .graph.* sidecar files
- supermodel sidecars hook: forward Claude Code PostToolUse events
- supermodel sidecars render [path]: render from existing cache (offline)
- supermodel sidecars setup: print quick-start instructions

Architecture:
- All business logic in internal/sidecars/ (package sidecars)
- Only imports internal/api/, internal/config/, internal/ui/
- Cobra wiring in cmd/sidecars.go following cmd/analyze.go pattern

API additions to internal/api/:
- SidecarIR / SidecarGraph / SidecarDomain / SidecarSubdomain types that preserve full Node/Relationship data (IDs, labels, properties) needed for graph indexing — unlike the simplified IRNode/IRRelationship stubs
- Client.AnalyzeSidecars() — full graph fetch returning SidecarIR
- Client.AnalyzeIncremental() — incremental update with changedFiles field

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Walkthrough

Adds a file-watch daemon and CLI commands to generate/watch/clean/hook sidecar files; implements secure repo zipping, an incremental upload + polling API (AnalyzeSidecars / AnalyzeIncremental), an in-memory graph cache, and selective rendering of affected sidecar files.

Changes
Sequence Diagram(s)

sequenceDiagram
participant IDE as IDE / File System
participant Hook as hook CLI (stdin)
participant Daemon as Watch Daemon
participant Watcher as Git Watcher
participant Client as API Client
participant Cache as In-memory Cache / Renderer
participant FS as Filesystem (.graph files)
IDE->>Hook: emit change event (JSON stdin)
Hook->>Daemon: UDP notify (file path)
Watcher->>Daemon: batch of changed paths (poll)
Daemon->>Daemon: debounce & collect changed files
Daemon->>Daemon: create incremental ZIP (only changed files)
Daemon->>Client: AnalyzeIncremental(zip, changedFiles, idempotency)
Client->>Client: POST multipart (zip + changedFiles) and pollLoop
Client-->>Daemon: returns incremental SidecarIR
Daemon->>Daemon: mergeGraph(incremental IR)
Daemon->>Cache: rebuild/update indices
Daemon->>Cache: compute affected files
Daemon->>Cache: RenderAll(affected files)
Cache->>FS: WriteSidecar (atomic writes)
Daemon->>Daemon: persist merged IR to cache file
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 5
🧹 Nitpick comments (9)
internal/sidecars/daemon.go (3)
474-475: Bug: `append` results assigned to wrong variables — data loss.

This is a real bug! Look at these lines:

```go
d.ir.Graph.Nodes = append(keptNodes, newNodes...)
d.ir.Graph.Relationships = append(keptRels, newRels...)
```

The linter flagged `appendAssign`: append result not assigned to the same slice. Here's what's happening: you're appending `newNodes` to `keptNodes`, but assigning the result to `d.ir.Graph.Nodes`. If `keptNodes` has enough capacity, this works by accident; but if the underlying array needs to grow, you get a new slice and the original `keptNodes` is unchanged.

Actually wait — looking more carefully, I think this IS intentional. You're building `keptNodes` from scratch, then combining it with `newNodes` into `d.ir.Graph.Nodes`. The lint rule is being overly pedantic here, since `keptNodes` is a local variable you're done with.

But to silence the lint and make intent clearer:

♻️ Clearer way to express the merge

```diff
-	d.ir.Graph.Nodes = append(keptNodes, newNodes...)
-	d.ir.Graph.Relationships = append(keptRels, newRels...)
+	// Merge kept nodes/rels with new ones
+	keptNodes = append(keptNodes, newNodes...)
+	d.ir.Graph.Nodes = keptNodes
+	keptRels = append(keptRels, newRels...)
+	d.ir.Graph.Relationships = keptRels
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/sidecars/daemon.go` around lines 474 - 475, The append results are being assigned to d.ir.Graph.Nodes/Relationships while appending onto local slices keptNodes/keptRels, which trips the linter and can be confusing; change the code to append into the same local slices (keptNodes = append(keptNodes, newNodes...) and keptRels = append(keptRels, newRels...)) and then assign those locals to d.ir.Graph.Nodes and d.ir.Graph.Relationships (d.ir.Graph.Nodes = keptNodes; d.ir.Graph.Relationships = keptRels) so intent is explicit and the append-assignment lint rule is satisfied.
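The slice semantics behind the `appendAssign` warning can be demonstrated in isolation. This is a standalone sketch (names are illustrative): assigning the result of `append` to a different variable still yields a complete merged slice; the only hazard is continuing to use the stale source slice afterwards.

```go
package main

import "fmt"

func main() {
	kept := make([]string, 0, 1) // capacity 1 forces a reallocation below
	kept = append(kept, "f1")
	newNodes := []string{"f2", "f3"}

	merged := append(kept, newNodes...) // result goes to a different variable
	fmt.Println(merged)                 // [f1 f2 f3]
	fmt.Println(kept)                   // [f1] (kept is stale now, fine if no longer used)
}
```

This is why the reviewed code works despite the lint: the merged result is always assigned somewhere, and the local `keptNodes` is never read again.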
44-63: Consider passing `DaemonConfig` by pointer.

The linter says `DaemonConfig` is 80 bytes. Passing by pointer avoids copying:

```go
func NewDaemon(cfg DaemonConfig, ...)  // copies 80 bytes
func NewDaemon(cfg *DaemonConfig, ...) // copies 8 bytes (pointer)
```

Not a huge deal for a function called once at startup, but it's a good habit.

♻️ Proposed fix

```diff
-func NewDaemon(cfg DaemonConfig, client *api.Client) *Daemon {
+func NewDaemon(cfg *DaemonConfig, client *api.Client) *Daemon {
+	if cfg == nil {
+		cfg = &DaemonConfig{}
+	}
 	if cfg.Debounce <= 0 {
 		cfg.Debounce = 2 * time.Second
 	}
 	// ... rest unchanged but now modifies the caller's config
```

Note: If you go this route, be careful about mutating the caller's config (the defaulting logic). You might want to copy it first.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/sidecars/daemon.go` around lines 44 - 63, The NewDaemon function currently takes DaemonConfig by value which copies ~80 bytes; change the signature to accept a pointer (NewDaemon(cfg *DaemonConfig, client *api.Client) *Daemon) to avoid the copy, but avoid mutating the caller's struct by making a local copy (e.g., copy := *cfg) and apply defaulting to that local copy (use the local variable in place of cfg for Debounce, NotifyPort, LogFunc, cache, logf, notifyCh, etc.); update all call sites to pass a pointer to DaemonConfig.
285-484: `mergeGraph` has cyclomatic complexity of 54 — consider breaking it up.

The linter flagged this function as having complexity 54 (threshold is 20). Looking at it, I can see why — it's doing a lot:
- Building sets of changed files and new node IDs
- Mapping existing files by path
- Resolving external dependencies to internal nodes
- Building old-to-new ID mappings
- Filtering kept nodes and relationships
- Remapping relationship endpoints
- Merging everything together
Each of those could be a separate helper function. For example:

- `buildExtRemap(incremental, existingFilePaths) map[string]string`
- `filterKeptNodes(nodes, changedSet, newNodeIDs) []api.Node`
- `remapRelationships(rels, extRemap, oldToNew) []api.Relationship`

Not blocking, but this function will be hard to maintain as-is.
Would you like me to help break this down into smaller functions?
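One of the proposed helpers can be sketched standalone. The `Node` shape here is a hypothetical stand-in for `api.Node` (only the fields needed for the illustration), and the filtering rule is an assumption about `mergeGraph`'s intent: drop nodes whose file changed or whose ID is superseded by the incremental result.

```go
package main

import "fmt"

// Hypothetical node shape standing in for api.Node in this sketch.
type Node struct {
	ID   string
	Path string
}

// filterKeptNodes keeps nodes that are neither in a changed file nor
// replaced by an incoming node from the incremental update.
func filterKeptNodes(existing []Node, changedSet, newNodeIDs map[string]bool) []Node {
	kept := make([]Node, 0, len(existing))
	for _, n := range existing {
		if changedSet[n.Path] || newNodeIDs[n.ID] {
			continue // will be replaced by the incremental graph
		}
		kept = append(kept, n)
	}
	return kept
}

func main() {
	existing := []Node{{"f1", "a.go"}, {"f2", "b.go"}, {"f3", "c.go"}}
	changed := map[string]bool{"b.go": true} // b.go was edited
	incoming := map[string]bool{"f3": true}  // f3 re-arrives in the update
	fmt.Println(filterKeptNodes(existing, changed, incoming)) // [{f1 a.go}]
}
```

Each extracted helper stays small and independently testable, which is the main payoff of the decomposition.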
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/sidecars/daemon.go` around lines 285 - 484, mergeGraph is too large and should be split into focused helper functions: extract building changedSet/newNodeIDs into buildChangedAndNewSets(changedFiles []string, incremental *api.SidecarIR) (map[string]bool, map[string]bool), extracting existing file index into indexExistingFiles(nodes []api.Node) (map[string]string, []struct{path, id string}), extracting external dependency resolution into buildExtRemap(incremental *api.SidecarIR, existingFilePaths []struct{path,id string}) (map[string]string, map[string]bool), extracting incremental file/function indices into indexIncrementalFilesAndFns(nodes []api.Node) (map[string]string, map[string]string), extracting old-to-new mapping into buildOldToNew(dNodes []api.Node, incFileByPath, incFnByKey map[string]string) map[string]string, extracting node filtering into filterKeptNodes(existingNodes []api.Node, changedSet, newNodeIDs map[string]bool, incFileByPath map[string]string, incFnByKey map[string]string) []api.Node, and extracting relationship remapping/merging into remapAndMergeRelationships(oldRels, incRels []api.Relationship, newNodeIDs map[string]bool, removedOldIDs map[string]bool, oldToNew, extRemap map[string]string) []api.Relationship; then simplify mergeGraph to call these helpers and set d.ir.Graph.Nodes/Relationships and Domains and logging accordingly. Ensure helper names (buildExtRemap, filterKeptNodes, remapAndMergeRelationships, buildOldToNew, indexExistingFiles, indexIncrementalFilesAndFns, buildChangedAndNewSets) match the suggested callers to make locating code easy.

internal/sidecars/render.go (1)
173-183: Lint: Combine chained `append` calls.

The linter flagged this — you can combine these sequential appends into single calls for slightly cleaner code.
♻️ Proposed fix

```diff
 	var lines []string
-	lines = append(lines, fmt.Sprintf("%s [impact]", prefix))
-	lines = append(lines, fmt.Sprintf("%s risk %s", prefix, risk))
+	lines = append(lines,
+		fmt.Sprintf("%s [impact]", prefix),
+		fmt.Sprintf("%s risk %s", prefix, risk))
 	if len(domains) > 0 {
 		lines = append(lines, fmt.Sprintf("%s domains %s", prefix, strings.Join(sortedBoolKeys(domains), " · ")))
 	}
-	lines = append(lines, fmt.Sprintf("%s direct %d", prefix, directCount))
-	lines = append(lines, fmt.Sprintf("%s transitive %d", prefix, transitiveCount))
+	lines = append(lines,
+		fmt.Sprintf("%s direct %d", prefix, directCount),
+		fmt.Sprintf("%s transitive %d", prefix, transitiveCount))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/sidecars/render.go` around lines 173 - 183, The repeated chained appends to the lines slice should be consolidated into fewer append calls: replace multiple lines = append(lines, fmt.Sprintf(...)) statements with a single append that includes multiple formatted strings (e.g., append(lines, fmt.Sprintf(...), fmt.Sprintf(...), ...)) while preserving the conditional for domains (keep the if len(domains) > 0 block and append its formatted string there), and ensure entries for directCount, transitiveCount, and the conditional directFiles (if directCount > 0) still use fmt.Sprintf with the same prefixes and sortedBoolKeys(directFiles) so the output semantics remain identical.

internal/sidecars/graph.go (2)
268-279: `firstString` fallback returns a key name, not a value — might be confusing.

Look at line 278:

```go
return keys[len(keys)-1]
```

If none of the property keys match, this returns the last key name (like `"name"` or `n.ID`), not an actual value. The docstring says "Returns the last key as a fallback string" which is technically accurate but easy to misuse.

For example, at line 74:

```go
path := firstString(props, "filePath", "path", "name", n.ID)
```

If none of those props exist, you get `n.ID` back — which happens to work here because the last argument IS the actual ID value. But this pattern relies on the caller always putting the fallback value as the last "key".

Not a bug per se, but the API is a bit footgun-y.

📝 Consider explicit fallback parameter

```diff
-func firstString(props map[string]any, keys ...string) string {
+func firstString(props map[string]any, fallback string, keys ...string) string {
 	for _, k := range keys {
 		if v, ok := props[k]; ok {
 			if s, ok := v.(string); ok && s != "" {
 				return s
 			}
 		}
 	}
-	return keys[len(keys)-1]
+	return fallback
 }
```

Then call sites become clearer:

```go
path := firstString(props, n.ID, "filePath", "path", "name")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/sidecars/graph.go` around lines 268 - 279, The helper firstString currently returns the last key name when no prop matches, which is confusing and fragile; change its signature to accept an explicit fallback (e.g. firstString(props map[string]any, fallback string, keys ...string) string), update its behavior to return the provided fallback when no matching string is found, and update its docstring accordingly; then update all callers (e.g. replace calls like firstString(props, "filePath","path","name", n.ID) with firstString(props, n.ID, "filePath","path","name")) so the fallback value is explicit and not implicitly treated as a key.
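The suggested explicit-fallback version can be exercised standalone. This sketch implements the diff above verbatim (the property values are made up for the demo):

```go
package main

import "fmt"

// firstString with an explicit fallback, per the review suggestion:
// the fallback is returned only when no key holds a non-empty string.
func firstString(props map[string]any, fallback string, keys ...string) string {
	for _, k := range keys {
		if v, ok := props[k]; ok {
			if s, ok := v.(string); ok && s != "" {
				return s
			}
		}
	}
	return fallback
}

func main() {
	props := map[string]any{"path": "internal/sidecars/graph.go"}
	// A matching key wins over the fallback.
	fmt.Println(firstString(props, "node-42", "filePath", "path", "name"))
	// No matching key: the explicit fallback is returned, never a key name.
	fmt.Println(firstString(map[string]any{}, "node-42", "filePath", "path"))
}
```

With this shape, a caller can no longer accidentally receive a literal key name like `"name"` as a path.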
73-85: Lint: Rewrite if-else chain as switch statement.

The linter flagged this if-else chain. A switch would be cleaner:

♻️ Proposed fix

```diff
-	if n.HasLabel("File") {
+	switch {
+	case n.HasLabel("File"):
 		path := firstString(props, "filePath", "path", "name", n.ID)
 		c.IDToPath[n.ID] = path
-	} else if n.HasLabel("LocalDependency") {
+	case n.HasLabel("LocalDependency"):
 		path := firstString(props, "filePath", "name", n.ID)
 		c.IDToPath[n.ID] = path
-	} else if n.HasLabel("ExternalDependency") {
+	case n.HasLabel("ExternalDependency"):
 		name := n.Prop("name")
 		if name == "" {
 			name = n.ID
 		}
 		c.IDToPath[n.ID] = "[ext]" + name
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/sidecars/graph.go` around lines 73 - 85, Replace the if-else chain that checks n.HasLabel("File"), "LocalDependency", and "ExternalDependency" with an equivalent switch statement (e.g., switch { case n.HasLabel("File"): ... case n.HasLabel("LocalDependency"): ... case n.HasLabel("ExternalDependency"): ... }) in the same block so the behavior is unchanged: call firstString(props, ...) for File and LocalDependency cases and assign to c.IDToPath[n.ID], and for ExternalDependency use n.Prop("name") with the fallback to n.ID and prefix "[ext]" before assigning to c.IDToPath[n.ID]; keep the exact variable names (n, props, c.IDToPath, firstString, n.Prop) to locate the code.internal/sidecars/zip.go (2)
internal/sidecars/zip.go (2)

75-108: `matchPattern` comment mentions `?` but it's not actually handled.

The function comment says "simple glob matching (*, ?)" but looking at line 80, you only check for `*` and `?` to decide if it's a pattern, and then the actual matching logic only handles `*` (via `strings.Split(pattern, "*")`).

If someone uses a pattern like `secret?.json`, the `?` won't match a single character — it'll just be treated as a literal `?`.

Not a big deal since your current `hardBlockedPatterns` don't use `?`, but the comment is misleading.

📝 Option: Either fix the comment or implement `?` support

Quick fix for the comment:

```diff
-// matchPattern does simple glob matching (*, ?).
+// matchPattern does simple glob matching with * wildcards.
+// Note: ? is not implemented as a single-character wildcard.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/sidecars/zip.go` around lines 75 - 108, The comment for matchPattern is misleading because it claims support for "*, ?" but the implementation only treats '?' as part of a literal string; either update the comment to only claim "*" support or implement '?' semantics: modify matchPattern to treat '?' as a single-character wildcard (preserving the current case-insensitive behavior) by changing the matching loop so each non-empty segment (from splitting by '*') is matched against the remaining name using a character-by-character comparison that treats '?' in the segment as matching any single character (advance remaining by one for a '?' and by equality for other chars); keep the existing prefix/suffix checks (parts[0], last) but apply the '?'-aware comparison when locating and consuming parts.
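If real `?` support is wanted, a third option (not mentioned in the review) is to delegate to the standard library: `path.Match` already implements both `*` and `?` glob semantics, and lowering both sides preserves the case-insensitive behavior described above. This is a sketch, not the project's actual matcher; note that `path.Match`'s `*` does not cross `/`, which differs from the hand-rolled version if patterns are applied to multi-segment paths.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// matchPattern via the stdlib: '*' and '?' both work, case-insensitively.
func matchPattern(pattern, name string) bool {
	ok, err := path.Match(strings.ToLower(pattern), strings.ToLower(name))
	return err == nil && ok
}

func main() {
	fmt.Println(matchPattern("secret?.json", "secret1.json"))  // true: ? matches one char
	fmt.Println(matchPattern("secret?.json", "secrets.JSON"))  // true: case-insensitive
	fmt.Println(matchPattern("*.pem", "server.pem"))           // true
	fmt.Println(matchPattern("secret?.json", "secret12.json")) // false: ? matches exactly one
}
```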
190-198: Silently ignoring `filepath.Rel` error could hide issues.

Line 195 discards the error from `filepath.Rel`. While this is unlikely to fail in practice (you're walking from `repoDir` so paths should always be relative), silently ignoring errors can make debugging harder if something weird happens.

```go
rel, _ := filepath.Rel(repoDir, path) // error ignored
```

At minimum, you could log it or skip the file if it fails.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/sidecars/zip.go` around lines 190 - 198, The filepath.Rel error is being ignored in the filepath.Walk callback (inside the repoDir walk where variables repoDir, path and rel are used); update the callback to check the returned error from filepath.Rel and handle it instead of discarding it — either log the error and skip that entry (e.g., log with the package logger and return nil to continue) or return the error from the Walk callback to propagate and abort the walk so the caller can handle it; modify the code around the rel, _ := filepath.Rel(repoDir, path) line to handle the error appropriately.internal/sidecars/watcher.go (1)
internal/sidecars/watcher.go (1)

138-143: Git command errors are silently swallowed.

`runGit` ignores any errors from the git command:

```go
out, _ := cmd.Output()
return string(out)
```

This is probably fine for a polling watcher (if git fails, you just get no changes), but it makes debugging harder. If someone's repo has a weird git state, they'll see no file changes detected with no indication why.
Consider at least debug-logging the error:
📝 Optional: Log git errors

```diff
 func (w *Watcher) runGit(args ...string) string {
 	cmd := exec.Command("git", args...)
 	cmd.Dir = w.repoDir
-	out, _ := cmd.Output()
+	out, err := cmd.Output()
+	if err != nil {
+		// Could add debug logging here if needed
+		return ""
+	}
 	return string(out)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/sidecars/watcher.go` around lines 138 - 143, The runGit method currently discards git command errors; change it to capture the error from cmd.Output() and log it for debugging (include args, error, and any stdout/stderr) before returning the output string. In the Watcher.runGit function, call out, err := cmd.Output(); if err != nil, emit a debug/error log using the Watcher logger (e.g., w.logger.Errorf/Debugf) with context including args, err and out (or fallback to the package log if no logger exists), and still return string(out). Ensure you keep cmd.Dir = w.repoDir and reference the same runGit and Watcher symbols.
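A useful detail when adding that logging: `cmd.Output()` captures the child's stderr inside `*exec.ExitError`, so git's own error message can be surfaced. This standalone sketch (names and log format are illustrative, and `fmt.Printf` stands in for whatever logger the watcher uses) keeps the original contract of returning an empty string on failure:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runGit surfaces failures for debugging instead of swallowing them.
func runGit(dir string, args ...string) string {
	cmd := exec.Command("git", args...)
	cmd.Dir = dir
	out, err := cmd.Output()
	if err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// git ran but failed: its stderr explains why
			fmt.Printf("debug: git %v failed: %v (stderr: %q)\n", args, err, exitErr.Stderr)
		} else {
			// git missing, bad dir, etc.
			fmt.Printf("debug: git %v failed: %v\n", args, err)
		}
		return "" // same contract as before: empty output on failure
	}
	return string(out)
}

func main() {
	// An unknown subcommand fails regardless of repo state, so the
	// debug line (or a "not found" error) is printed.
	_ = runGit(".", "definitely-not-a-subcommand")
}
```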
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/api/client.go`:
- Around line 161-177: AnalyzeIncremental currently polls by calling postZipTo
without including the changedFiles multipart field, relying on idempotencyKey to
preserve that data; confirm with the server team whether idempotencyKey
guarantees server-side reuse of changedFiles and, if not, modify postZipTo (or
the polling call in AnalyzeIncremental) to include the changedFiles field on
each poll so the server receives consistent payloads. Also refactor the
duplicated loop into a shared helper (extract the polling behavior used in
AnalyzeIncremental and pollUntilComplete into a single function) and have
AnalyzeIncremental call that helper to avoid drift between the two
implementations.
In `@internal/sidecars/daemon.go`:
- Around line 522-525: The goroutine that waits on ctx.Done() calls
conn.SetReadDeadline(time.Now()) without checking the returned error; update the
anonymous goroutine (the function using ctx.Done() and conn.SetReadDeadline) to
capture the error returned by conn.SetReadDeadline and handle it (at minimum log
or ignore explicitly) so the unchecked error is removed (e.g., err :=
conn.SetReadDeadline(time.Now()); if err != nil { /* log or ignore */ }).
In `@internal/sidecars/handler.go`:
- Around line 298-299: Check and handle the error returned by
conn.SetWriteDeadline in the sidecar hook sender code: update the
SetWriteDeadline call (the one using conn.SetWriteDeadline(time.Now().Add(500 *
time.Millisecond))) so the returned error is assigned and explicitly handled
(e.g., if err != nil { /* swallow or optional debug log */ }) to satisfy the
linter while keeping the hook silent; leave the subsequent
conn.Write([]byte(filePath)) error ignored as-is per UDP fire-and-forget
behavior.
- Line 365: Replace the non-idiomatic length check in the condition that pads
content before adding a trailing newline: change the left side of the if in the
block using the variable `content` (currently `len(content) > 0 &&
!strings.HasSuffix(content, "\n")`) to use the idiomatic string emptiness check
`content != "" && !strings.HasSuffix(content, "\n")`, leaving the rest of the
condition and logic in the handler (around the `strings.HasSuffix` check)
unchanged.
In `@internal/sidecars/zip.go`:
- Around line 49-73: loadCustomExclusions currently mutates package-level maps
zipSkipDirs and zipSkipExtensions, causing races and state pollution; change
loadCustomExclusions to not touch globals but instead return the exclusion sets
(e.g., return two maps or a struct of maps) or accept maps to populate, then in
CreateZipFile and DryRunList create local copies of the global defaults, call
the revised loadCustomExclusions to merge repo-specific excludes into those
locals, and use the locals for zip traversal; ensure no direct writes to the
package-level zipSkipDirs/zipSkipExtensions and that callers use the
returned/updated local maps so concurrent calls remain isolated and thread-safe.
---
Nitpick comments:
In `@internal/sidecars/daemon.go`:
- Around line 474-475: The append results are being assigned to
d.ir.Graph.Nodes/Relationships while appending onto local slices
keptNodes/keptRels, which trips the linter and can be confusing; change the code
to append into the same local slices (keptNodes = append(keptNodes, newNodes...)
and keptRels = append(keptRels, newRels...)) and then assign those locals to
d.ir.Graph.Nodes and d.ir.Graph.Relationships (d.ir.Graph.Nodes = keptNodes;
d.ir.Graph.Relationships = keptRels) so intent is explicit and the
append-assignment lint rule is satisfied.
- Around line 44-63: The NewDaemon function currently takes DaemonConfig by
value which copies ~80 bytes; change the signature to accept a pointer
(NewDaemon(cfg *DaemonConfig, client *api.Client) *Daemon) to avoid the copy,
but avoid mutating the caller's struct by making a local copy (e.g., copy :=
*cfg) and apply defaulting to that local copy (use the local variable in place
of cfg for Debounce, NotifyPort, LogFunc, cache, logf, notifyCh, etc.); update
all call sites to pass a pointer to DaemonConfig.
- Around line 285-484: mergeGraph is too large and should be split into focused
helper functions: extract building changedSet/newNodeIDs into
buildChangedAndNewSets(changedFiles []string, incremental *api.SidecarIR)
(map[string]bool, map[string]bool), extracting existing file index into
indexExistingFiles(nodes []api.Node) (map[string]string, []struct{path, id
string}), extracting external dependency resolution into
buildExtRemap(incremental *api.SidecarIR, existingFilePaths []struct{path,id
string}) (map[string]string, map[string]bool), extracting incremental
file/function indices into indexIncrementalFilesAndFns(nodes []api.Node)
(map[string]string, map[string]string), extracting old-to-new mapping into
buildOldToNew(dNodes []api.Node, incFileByPath, incFnByKey map[string]string)
map[string]string, extracting node filtering into filterKeptNodes(existingNodes
[]api.Node, changedSet, newNodeIDs map[string]bool, incFileByPath
map[string]string, incFnByKey map[string]string) []api.Node, and extracting
relationship remapping/merging into remapAndMergeRelationships(oldRels, incRels
[]api.Relationship, newNodeIDs map[string]bool, removedOldIDs map[string]bool,
oldToNew, extRemap map[string]string) []api.Relationship; then simplify
mergeGraph to call these helpers and set d.ir.Graph.Nodes/Relationships and
Domains and logging accordingly. Ensure helper names (buildExtRemap,
filterKeptNodes, remapAndMergeRelationships, buildOldToNew, indexExistingFiles,
indexIncrementalFilesAndFns, buildChangedAndNewSets) match the suggested callers
to make locating code easy.
In `@internal/sidecars/graph.go`:
- Around line 268-279: The helper firstString currently returns the last key
name when no prop matches, which is confusing and fragile; change its signature
to accept an explicit fallback (e.g. firstString(props map[string]any, fallback
string, keys ...string) string), update its behavior to return the provided
fallback when no matching string is found, and update its docstring accordingly;
then update all callers (e.g. replace calls like firstString(props,
"filePath","path","name", n.ID) with firstString(props, n.ID,
"filePath","path","name")) so the fallback value is explicit and not implicitly
treated as a key.
- Around line 73-85: Replace the if-else chain that checks n.HasLabel("File"),
"LocalDependency", and "ExternalDependency" with an equivalent switch statement
(e.g., switch { case n.HasLabel("File"): ... case n.HasLabel("LocalDependency"):
... case n.HasLabel("ExternalDependency"): ... }) in the same block so the
behavior is unchanged: call firstString(props, ...) for File and LocalDependency
cases and assign to c.IDToPath[n.ID], and for ExternalDependency use
n.Prop("name") with the fallback to n.ID and prefix "[ext]" before assigning to
c.IDToPath[n.ID]; keep the exact variable names (n, props, c.IDToPath,
firstString, n.Prop) to locate the code.
In `@internal/sidecars/render.go`:
- Around line 173-183: The repeated chained appends to the lines slice should be
consolidated into fewer append calls: replace multiple lines = append(lines,
fmt.Sprintf(...)) statements with a single append that includes multiple
formatted strings (e.g., append(lines, fmt.Sprintf(...), fmt.Sprintf(...), ...))
while preserving the conditional for domains (keep the if len(domains) > 0 block
and append its formatted string there), and ensure entries for directCount,
transitiveCount, and the conditional directFiles (if directCount > 0) still use
fmt.Sprintf with the same prefixes and sortedBoolKeys(directFiles) so the output
semantics remain identical.
In `@internal/sidecars/watcher.go`:
- Around line 138-143: The runGit method currently discards git command errors;
change it to capture the error from cmd.Output() and log it for debugging
(include args, error, and any stdout/stderr) before returning the output string.
In the Watcher.runGit function, call out, err := cmd.Output(); if err != nil,
emit a debug/error log using the Watcher logger (e.g., w.logger.Errorf/Debugf)
with context including args, err and out (or fallback to the package log if no
logger exists), and still return string(out). Ensure you keep cmd.Dir =
w.repoDir and reference the same runGit and Watcher symbols.
In `@internal/sidecars/zip.go`:
- Around line 75-108: The comment for matchPattern is misleading because it
claims support for "*, ?" but the implementation only treats '?' as part of a
literal string; either update the comment to only claim "*" support or implement
'?' semantics: modify matchPattern to treat '?' as a single-character wildcard
(preserving the current case-insensitive behavior) by changing the matching loop
so each non-empty segment (from splitting by '*') is matched against the
remaining name using a character-by-character comparison that treats '?' in the
segment as matching any single character (advance remaining by one for a '?' and
by equality for other chars); keep the existing prefix/suffix checks (parts[0],
last) but apply the '?'-aware comparison when locating and consuming parts.
- Around line 190-198: The filepath.Rel error is being ignored in the
filepath.Walk callback (inside the repoDir walk where variables repoDir, path
and rel are used); update the callback to check the returned error from
filepath.Rel and handle it instead of discarding it — either log the error and
skip that entry (e.g., log with the package logger and return nil to continue)
or return the error from the Walk callback to propagate and abort the walk so
the caller can handle it; modify the code around the rel, _ :=
filepath.Rel(repoDir, path) line to handle the error appropriately.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0317b265-5046-4c84-bbe7-b86d0a84f5e7
📒 Files selected for processing (9)
cmd/sidecars.go
internal/api/client.go
internal/api/types.go
internal/sidecars/daemon.go
internal/sidecars/graph.go
internal/sidecars/handler.go
internal/sidecars/render.go
internal/sidecars/watcher.go
internal/sidecars/zip.go
internal/files/handler.go
Outdated
```go
conn.SetWriteDeadline(time.Now().Add(500 * time.Millisecond))
_, _ = conn.Write([]byte(filePath))
```
Check SetWriteDeadline error per static analysis.
While I totally get that hooks should fail silently (line 261 comment is spot on), the linter is flagging line 298 for unchecked error. Even though it's unlikely to fail, checking it keeps the linter happy and is good hygiene.
The Write error being ignored on line 299 is fine — UDP is fire-and-forget by design.
🔧 Proposed fix

```diff
-	conn.SetWriteDeadline(time.Now().Add(500 * time.Millisecond))
+	if err := conn.SetWriteDeadline(time.Now().Add(500 * time.Millisecond)); err != nil {
+		return nil // silently exit on connection issues
+	}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
if err := conn.SetWriteDeadline(time.Now().Add(500 * time.Millisecond)); err != nil {
	return nil // silently exit on connection issues
}
_, _ = conn.Write([]byte(filePath))
```
🧰 Tools
🪛 GitHub Check: golangci-lint
[failure] 298-298:
Error return value of conn.SetWriteDeadline is not checked (errcheck)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/sidecars/handler.go` around lines 298 - 299, Check and handle the
error returned by conn.SetWriteDeadline in the sidecar hook sender code: update
the SetWriteDeadline call (the one using
conn.SetWriteDeadline(time.Now().Add(500 * time.Millisecond))) so the returned
error is assigned and explicitly handled (e.g., if err != nil { /* swallow or
optional debug log */ }) to satisfy the linter while keeping the hook silent;
leave the subsequent conn.Write([]byte(filePath)) error ignored as-is per UDP
fire-and-forget behavior.
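The whole fire-and-forget sender can be sketched end to end. This is illustrative, not the hook's actual code: the function name and port are made up, and `net.Dial("udp", ...)` succeeds even with no listener because UDP has no handshake, which is exactly why the hook can stay silent.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// notifyDaemon sends a changed path over UDP, checking the deadline
// error (per the fix above) while keeping every failure silent.
func notifyDaemon(addr, filePath string) {
	conn, err := net.Dial("udp", addr)
	if err != nil {
		return // hooks must never fail loudly
	}
	defer conn.Close()
	if err := conn.SetWriteDeadline(time.Now().Add(500 * time.Millisecond)); err != nil {
		return // checked to satisfy errcheck, still silent
	}
	_, _ = conn.Write([]byte(filePath)) // UDP write errors stay ignored by design
}

func main() {
	notifyDaemon("127.0.0.1:49152", "internal/sidecars/daemon.go")
	fmt.Println("hook done") // reached even with no daemon listening
}
```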
```go
func loadCustomExclusions(repoDir string) {
	cfgPath := filepath.Join(repoDir, ".supermodel.json")
	if abs, err := filepath.Abs(cfgPath); err == nil {
		cfgPath = abs
	}
	data, err := os.ReadFile(cfgPath)
	if err != nil {
		return
	}
	var cfg struct {
		ExcludeDirs []string `json:"exclude_dirs"`
		ExcludeExts []string `json:"exclude_exts"`
	}
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintf(os.Stderr, "warning: failed to parse %s: %v — custom exclusions will be ignored\n",
			cfgPath, err)
		return
	}
	for _, d := range cfg.ExcludeDirs {
		zipSkipDirs[d] = true
	}
	for _, e := range cfg.ExcludeExts {
		zipSkipExtensions[e] = true
	}
}
```
Global state mutation is not thread-safe and pollutes subsequent calls.
Hey, so `loadCustomExclusions` modifies the package-level maps `zipSkipDirs` and `zipSkipExtensions`. This causes two problems:

1. Not thread-safe: If two goroutines call `CreateZipFile` or `DryRunList` for different repos simultaneously, they'll race on these maps.
2. State pollution: Once you call this for repo A (which has `exclude_dirs: ["custom_dir"]`), that exclusion sticks around for repo B even if B doesn't have that config.

Simple example of the bug:

```go
// First call for repo with custom exclusions
CreateZipFile("/path/to/repoA", nil) // adds "custom_dir" to zipSkipDirs

// Second call for a different repo
CreateZipFile("/path/to/repoB", nil) // still skips "custom_dir" even if repoB doesn't want that
```

🔧 Proposed fix: Use local copies of the skip maps
```diff
-func loadCustomExclusions(repoDir string) {
+func loadCustomExclusions(repoDir string, skipDirs, skipExts map[string]bool) {
 	cfgPath := filepath.Join(repoDir, ".supermodel.json")
 	if abs, err := filepath.Abs(cfgPath); err == nil {
 		cfgPath = abs
 	}
 	data, err := os.ReadFile(cfgPath)
 	if err != nil {
 		return
 	}
 	var cfg struct {
 		ExcludeDirs []string `json:"exclude_dirs"`
 		ExcludeExts []string `json:"exclude_exts"`
 	}
 	if err := json.Unmarshal(data, &cfg); err != nil {
 		fmt.Fprintf(os.Stderr, "warning: failed to parse %s: %v — custom exclusions will be ignored\n",
 			cfgPath, err)
 		return
 	}
 	for _, d := range cfg.ExcludeDirs {
-		zipSkipDirs[d] = true
+		skipDirs[d] = true
 	}
 	for _, e := range cfg.ExcludeExts {
-		zipSkipExtensions[e] = true
+		skipExts[e] = true
 	}
 }
```

Then in `CreateZipFile` and `DryRunList`, create local copies of the global maps before calling `loadCustomExclusions`.
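As a rough sketch of that cloning step (the `cloneSet` helper name is illustrative, not from the PR; the real call sites are `CreateZipFile` and `DryRunList`):

```go
// cloneSet copies a default exclusion set so per-repo config from
// .supermodel.json can be merged in without mutating the package globals.
func cloneSet(defaults map[string]bool) map[string]bool {
	out := make(map[string]bool, len(defaults))
	for k, v := range defaults {
		out[k] = v
	}
	return out
}
```

Each call would then start with something like `skipDirs := cloneSet(zipSkipDirs)` and `skipExts := cloneSet(zipSkipExtensions)` before passing both into `loadCustomExclusions`, so concurrent zips stay isolated.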
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/sidecars/zip.go` around lines 49 - 73, loadCustomExclusions
currently mutates package-level maps zipSkipDirs and zipSkipExtensions, causing
races and state pollution; change loadCustomExclusions to not touch globals but
instead return the exclusion sets (e.g., return two maps or a struct of maps) or
accept maps to populate, then in CreateZipFile and DryRunList create local
copies of the global defaults, call the revised loadCustomExclusions to merge
repo-specific excludes into those locals, and use the locals for zip traversal;
ensure no direct writes to the package-level zipSkipDirs/zipSkipExtensions and
that callers use the returned/updated local maps so concurrent calls remain
isolated and thread-safe.
- `supermodel analyze` now writes .graph files by default (`--no-files` to opt out)
- new top-level commands: `watch`, `clean`, `hook`
- `internal/sidecars` → `internal/files`
- `files: false` in config disables file mode globally

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/config/config.go (1)
`103-112`: ⚠️ Potential issue | 🟡 Minor

Make
`SUPERMODEL_FILES` override in both directions. Right now only the literal
`"false"` changes `c.Files`. If the config has `files: false`, setting `SUPERMODEL_FILES=true` still leaves file mode off, which is pretty surprising for an env override.

🛠️ Quick fix
```diff
-	if os.Getenv("SUPERMODEL_FILES") == "false" {
-		c.Files = boolPtr(false)
+	if raw, ok := os.LookupEnv("SUPERMODEL_FILES"); ok {
+		if enabled, err := strconv.ParseBool(raw); err == nil {
+			c.Files = boolPtr(enabled)
+		}
 	}
```

Also add
`strconv` to the imports.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config/config.go` around lines 103 - 112, The applyEnv method only flips c.Files when SUPERMODEL_FILES == "false", so environment overrides cannot turn files back on; update Config.applyEnv to parse SUPERMODEL_FILES with strconv.ParseBool (or similar) and set c.Files = &parsedBool when the env var is present (handle parse error by ignoring or logging), and add the strconv import; reference applyEnv, c.Files, and the SUPERMODEL_FILES env var.
♻️ Duplicate comments (2)
internal/files/zip.go (1)
`49-73`: ⚠️ Potential issue | 🟠 Major

Keep custom exclusions out of the package globals.
Each call mutates
`zipSkipDirs` and `zipSkipExtensions`. That makes one repo's `.supermodel.json` leak into later calls, and it is not safe if two zips run at the same time.

♻️ Quick fix
```diff
-func loadCustomExclusions(repoDir string) {
+func loadCustomExclusions(repoDir string, skipDirs, skipExts map[string]bool) {
@@
 	for _, d := range cfg.ExcludeDirs {
-		zipSkipDirs[d] = true
+		skipDirs[d] = true
 	}
 	for _, e := range cfg.ExcludeExts {
-		zipSkipExtensions[e] = true
+		skipExts[e] = true
 	}
 }
```

Then clone the default maps inside
`CreateZipFile` and `DryRunList`, and pass those locals in.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/files/zip.go` around lines 49 - 73, loadCustomExclusions currently mutates package-level maps zipSkipDirs and zipSkipExtensions causing leakage between runs and race issues; change loadCustomExclusions to return the per-run exclusion maps (or accept maps as arguments) instead of writing globals, then in CreateZipFile and DryRunList clone the default maps into local maps, pass those locals into loadCustomExclusions (or merge the returned maps) and use the locals for zip filtering; update all call sites (CreateZipFile, DryRunList) to use the local maps rather than the package globals zipSkipDirs/zipSkipExtensions.internal/files/daemon.go (1)
`522-525`: ⚠️ Potential issue | 🟡 Minor

Still need to handle
SetReadDeadline's error.Line 524 is the same
errcheckfailure the pipeline is already complaining about, so CI stays red until this return value is handled explicitly.#!/bin/bash # Confirms the unchecked shutdown deadline call is still present. rg -n -C2 'SetReadDeadline\(time\.Now\(\)\)' internal/files/daemon.goSmall fix
go func() { <-ctx.Done() - conn.SetReadDeadline(time.Now()) + _ = conn.SetReadDeadline(time.Now()) }()🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/files/daemon.go` around lines 522 - 525, The goroutine that waits for ctx.Done() calls conn.SetReadDeadline(time.Now()) without handling its error; update that anonymous goroutine to capture the returned error (e.g., err := conn.SetReadDeadline(time.Now())) and handle it explicitly — for example, log the error or ignore it with a documented rationale (if using a logger call like logger.Warnf/Debugf or similar), or at minimum check if err != nil and return/handle accordingly — locate the call inside the goroutine that uses ctx.Done() and conn.SetReadDeadline and add the error check and handling there.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cmd/watch.go`:
- Around line 18-22: The shutdown help text in the watch command's Long
description incorrectly claims "Press Ctrl+C to stop and remove graph files";
update the Long string (in the Cobra command declaration, e.g., the watch
command variable that sets Long in cmd/watch.go) to accurately reflect actual
behavior—change it to say something like "Press Ctrl+C to stop the daemon." or
otherwise remove the "and remove graph files" clause so it no longer promises
cleanup that doesn't occur in the current shutdown path.
In `@internal/files/daemon.go`:
- Around line 432-471: The merge currently leaves relationships pointing at
removed node IDs; to fix, normalize relationship endpoints via resolveNodeID
before adding them to keptRels and newRels: when iterating
d.ir.Graph.Relationships (the kept loop) apply resolveNodeID to rel.StartNode
and rel.EndNode (after applying oldToNew) and skip the relationship if either
resolves to a removed/stubbed ID; likewise, when building newRels from
incremental.Graph.Relationships apply extRemap then run resolveNodeID on both
endpoints and skip relations whose resolved IDs are removed; this ensures both
keptRels and newRels contain only normalized, live node IDs (use variables rel,
oldToNew, extRemap, resolvedSet, removedOldIDs, resolveNodeID to locate spots to
change).
In `@internal/files/graph.go`:
- Around line 142-156: The belongsTo handling is order-dependent because it only
checks c.IDToPath[rel.StartNode] then fn.File, so if defines_function runs later
the file never gets assigned; update the belongsTo case to look up file
candidates from both c.IDToPath and any function node entries (check
c.IDToPath[rel.StartNode], c.IDToPath[rel.EndNode], and c.FnByID for
rel.StartNode/rel.EndNode) and use the first non-empty path, and if a fn exists
with empty File assign that resolved path to fn.File; keep the existing
defines_function assignment but make belongsTo robust to either edge order by
sourcing the file from both IDToPath and FnByID before skipping.
In `@internal/files/handler.go`:
- Line 69: The DryRun option currently allows previewing RenderAll(...) but
still mutates the repo by calling updateGitignore and persisting
.supermodel/graph.json; change the flow so all filesystem-mutating calls are
skipped when opts.DryRun is true. Specifically, wrap any calls to
updateGitignore and the code path that writes/persists the graph file (the
cold-start persistence) with a guard like if !opts.DryRun { ... }, and ensure
RenderAll continues to receive opts.DryRun for preview only; apply the same
guard to the other write sites referenced around the blocks at the
updateGitignore call and the sections corresponding to lines 92-105 and 122 so a
DryRun makes no on-disk changes.
- Around line 237-252: Replace the bufio.Scanner read loop and strings.Join with
a streaming JSON decode: use json.NewDecoder(os.Stdin).Decode(&ev) to populate
the existing postToolUseEvent variable (ev), remove the scanner.Err() check and
raw string handling, and preserve the current silent no-op behavior by returning
nil when decoding fails (i.e., if json.NewDecoder(...).Decode(&ev) returns an
error, return nil instead of propagating it); update any imports if needed to
ensure encoding/json is used.
In `@internal/files/watcher.go`:
- Around line 79-95: The current logic only emits a WatchEvent when a path
transitions from clean to dirty because w.lastKnownFiles is just a set of paths;
to fix, change w.lastKnownFiles to track per-file mtime or hash (e.g.,
map[string]time.Time or map[string]string) and compare that stored value against
the current file's mtime/hash in the loop that builds newEvents (use
currentDirty to carry the current mtime/hash too); when the stored value differs
emit a WatchEvent for that path (so repeated saves re-emit) and update
w.lastKnownFiles[path] to the new mtime/hash before assigning w.lastKnownFiles =
currentDirty. Ensure the symbols involved are WatchEvent, newEvents,
currentDirty, indexChanged and w.lastKnownFiles so this change re-emits on
mtime/hash changes rather than only on first dirty detection.
In `@internal/files/zip.go`:
- Around line 37-43: zipSkipExtensions contains multi-part suffixes (e.g.
".bundle.js") but the code currently uses filepath.Ext to check extensions, so
those keys never match; update the check to test full suffixes with
strings.HasSuffix instead of filepath.Ext. Locate uses of zipSkipExtensions and
any extension checks (the existing filepath.Ext(...) comparisons) and replace
them with a loop or a direct strings.HasSuffix(filename, ext) test against each
key in zipSkipExtensions (or normalize by converting the map into a slice of
suffixes and checking HasSuffix), ensuring both the declaration
zipSkipExtensions and the other extension-exclusion logic (the same check used
around the other occurrence) use the suffix-based match so multi-part names like
"app.bundle.js" and repo-level patterns like ".pb.go" are skipped correctly.
- Around line 169-182: Reject any onlyFiles entries that resolve outside the
repo before calling os.Lstat/addFileToZip: compute absolute/clean paths for
repoDir and the joined full path (use filepath.Clean/Abs) and verify absFull is
either equal to absRepo or has absRepo+path.Separator as a prefix; if it does
not (i.e., it escapes via ../ or absolute paths), skip and continue. Update the
loop that iterates over onlyFiles (the block using filepath.Join(repoDir, rel),
os.Lstat, shouldInclude, and addFileToZip) to perform this check first so only
in-repo files are processed.
---
Outside diff comments:
In `@internal/config/config.go`:
- Around line 103-112: The applyEnv method only flips c.Files when
SUPERMODEL_FILES == "false", so environment overrides cannot turn files back on;
update Config.applyEnv to parse SUPERMODEL_FILES with strconv.ParseBool (or
similar) and set c.Files = &parsedBool when the env var is present (handle parse
error by ignoring or logging), and add the strconv import; reference applyEnv,
c.Files, and the SUPERMODEL_FILES env var.
---
Duplicate comments:
In `@internal/files/daemon.go`:
- Around line 522-525: The goroutine that waits for ctx.Done() calls
conn.SetReadDeadline(time.Now()) without handling its error; update that
anonymous goroutine to capture the returned error (e.g., err :=
conn.SetReadDeadline(time.Now())) and handle it explicitly — for example, log
the error or ignore it with a documented rationale (if using a logger call like
logger.Warnf/Debugf or similar), or at minimum check if err != nil and
return/handle accordingly — locate the call inside the goroutine that uses
ctx.Done() and conn.SetReadDeadline and add the error check and handling there.
In `@internal/files/zip.go`:
- Around line 49-73: loadCustomExclusions currently mutates package-level maps
zipSkipDirs and zipSkipExtensions causing leakage between runs and race issues;
change loadCustomExclusions to return the per-run exclusion maps (or accept maps
as arguments) instead of writing globals, then in CreateZipFile and DryRunList
clone the default maps into local maps, pass those locals into
loadCustomExclusions (or merge the returned maps) and use the locals for zip
filtering; update all call sites (CreateZipFile, DryRunList) to use the local
maps rather than the package globals zipSkipDirs/zipSkipExtensions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 33e239c7-5721-4ec9-818f-438f4bfdb364
📒 Files selected for processing (11)
- cmd/analyze.go
- cmd/clean.go
- cmd/hook.go
- cmd/watch.go
- internal/config/config.go
- internal/files/daemon.go
- internal/files/graph.go
- internal/files/handler.go
- internal/files/render.go
- internal/files/watcher.go
- internal/files/zip.go
```go
	Long: `Runs a full generate on startup (using cached graph if available), then
enters daemon mode. Listens for file-change notifications from the
'supermodel hook' command and incrementally re-renders affected files.

Press Ctrl+C to stop and remove graph files.`,
```
Fix the shutdown help text.
Line 22 says Ctrl+C removes graph files, but the current shutdown path only stops the daemon. Cleanup is a separate flow, so this text is promising behavior that does not happen.
Suggested text

```diff
-Press Ctrl+C to stop and remove graph files.`,
+Press Ctrl+C to stop. Run 'supermodel clean' if you want to remove graph files.`,
```

📝 Committable suggestion

```go
	Long: `Runs a full generate on startup (using cached graph if available), then
enters daemon mode. Listens for file-change notifications from the
'supermodel hook' command and incrementally re-renders affected files.

Press Ctrl+C to stop. Run 'supermodel clean' if you want to remove graph files.`,
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cmd/watch.go` around lines 18 - 22, The shutdown help text in the watch
command's Long description incorrectly claims "Press Ctrl+C to stop and remove
graph files"; update the Long string (in the Cobra command declaration, e.g.,
the watch command variable that sets Long in cmd/watch.go) to accurately reflect
actual behavior—change it to say something like "Press Ctrl+C to stop the
daemon." or otherwise remove the "and remove graph files" clause so it no longer
promises cleanup that doesn't occur in the current shutdown path.
```go
	var keptRels []api.Relationship
	for _, r := range d.ir.Graph.Relationships {
		startIsNew := newNodeIDs[r.StartNode]
		endIsNew := newNodeIDs[r.EndNode]
		if startIsNew && endIsNew {
			continue
		}

		if startIsNew || removedOldIDs[r.StartNode] {
			continue
		}

		rel := r
		if newID, ok := oldToNew[rel.StartNode]; ok {
			rel.StartNode = newID
		}
		if newID, ok := oldToNew[rel.EndNode]; ok {
			rel.EndNode = newID
		}
		keptRels = append(keptRels, rel)
	}

	var newNodes []api.Node
	for _, n := range incremental.Graph.Nodes {
		if resolvedSet[n.ID] {
			continue
		}
		newNodes = append(newNodes, n)
	}

	var newRels []api.Relationship
	for _, r := range incremental.Graph.Relationships {
		rel := r
		if mapped, ok := extRemap[rel.StartNode]; ok {
			rel.StartNode = mapped
		}
		if mapped, ok := extRemap[rel.EndNode]; ok {
			rel.EndNode = mapped
		}
		newRels = append(newRels, rel)
```
Normalize relationship endpoints before keeping or appending them.
Right now this merge can leave edges pointing at nodes that were just removed. Simple case: if foo.ts deletes bar(), an unchanged caller can keep its old CALLS -> bar edge, and newRels can also remap to an old file ID that oldToNew already replaced. That leaves stale graph data in d.ir and will skew later cache/build passes.
One simple way to tighten this up

```diff
-	var keptNodes []api.Node
+	removedNodeIDs := make(map[string]bool)
+	var keptNodes []api.Node
 	for _, n := range d.ir.Graph.Nodes {
 		fp := n.Prop("filePath")
 		if fp == "" {
 			fp = n.Prop("path")
 		}
 		if changedSet[fp] || changedSet[n.ID] {
+			removedNodeIDs[n.ID] = true
 			continue
 		}
 		if newNodeIDs[n.ID] {
+			removedNodeIDs[n.ID] = true
 			continue
 		}
 		keptNodes = append(keptNodes, n)
 	}
+	resolveNodeID := func(id string) (string, bool) {
+		if mapped, ok := extRemap[id]; ok {
+			id = mapped
+		}
+		if mapped, ok := oldToNew[id]; ok {
+			id = mapped
+		}
+		if removedNodeIDs[id] {
+			return "", false
+		}
+		return id, true
+	}
```

Then run both `keptRels` and `newRels` through `resolveNodeID` before appending.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/files/daemon.go` around lines 432 - 471, The merge currently leaves
relationships pointing at removed node IDs; to fix, normalize relationship
endpoints via resolveNodeID before adding them to keptRels and newRels: when
iterating d.ir.Graph.Relationships (the kept loop) apply resolveNodeID to
rel.StartNode and rel.EndNode (after applying oldToNew) and skip the
relationship if either resolves to a removed/stubbed ID; likewise, when building
newRels from incremental.Graph.Relationships apply extRemap then run
resolveNodeID on both endpoints and skip relations whose resolved IDs are
removed; this ensures both keptRels and newRels contain only normalized, live
node IDs (use variables rel, oldToNew, extRemap, resolvedSet, removedOldIDs,
resolveNodeID to locate spots to change).
```go
	case "defines_function":
		filePath := c.IDToPath[rel.StartNode]
		if fn, ok := c.FnByID[rel.EndNode]; ok && fn.File == "" && filePath != "" {
			fn.File = filePath
		}

	case "belongsTo":
		nodePath := c.IDToPath[rel.StartNode]
		if nodePath == "" {
			if fn, ok := c.FnByID[rel.StartNode]; ok {
				nodePath = fn.File
			}
		}
		if nodePath == "" {
			continue
```
Don’t make domain assignment depend on relationship order.
For function nodes without a filePath property, belongsTo falls back to fn.File and then bails out if defines_function has not been seen yet. If those edges arrive in the opposite order, the file never gets a domain, so the impact section ends up undercounting domain spread.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/files/graph.go` around lines 142 - 156, The belongsTo handling is
order-dependent because it only checks c.IDToPath[rel.StartNode] then fn.File,
so if defines_function runs later the file never gets assigned; update the
belongsTo case to look up file candidates from both c.IDToPath and any function
node entries (check c.IDToPath[rel.StartNode], c.IDToPath[rel.EndNode], and
c.FnByID for rel.StartNode/rel.EndNode) and use the first non-empty path, and if
a fn exists with empty File assign that resolved path to fn.File; keep the
existing defines_function assignment but make belongsTo robust to either edge
order by sourcing the file from both IDToPath and FnByID before skipping.
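A minimal sketch of the order-independent lookup described above (simplified: `fnFiles` stands in for the `File` fields of `c.FnByID`, and the helper name is illustrative, not the PR's final code):

```go
// resolveFilePath returns the first non-empty file path found for any of the
// given node IDs, consulting both the file-path index and the function index,
// so the result no longer depends on whether defines_function or belongsTo
// edges were processed first.
func resolveFilePath(idToPath, fnFiles map[string]string, ids ...string) string {
	for _, id := range ids {
		if p := idToPath[id]; p != "" {
			return p
		}
		if p := fnFiles[id]; p != "" {
			return p
		}
	}
	return ""
}
```

In the `belongsTo` case this would be called with both `rel.StartNode` and `rel.EndNode`, and the resolved path written back to `fn.File` when that field is still empty.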
```go
		return err
	}
	ui.Success("Wrote %d sidecars for %d source files", written, len(files))
	return updateGitignore(repoDir)
```
Make DryRun non-mutating.
RenderAll(..., opts.DryRun) only previews sidecar writes, but this function still updates .gitignore on both paths and persists .supermodel/graph.json on a cold start. So a "preview" run can still dirty the repo.
One way to gate the writes

```diff
-	return updateGitignore(repoDir)
+	if opts.DryRun {
+		return nil
+	}
+	return updateGitignore(repoDir)
 ...
-	// Persist cache
-	if err := os.MkdirAll(filepath.Dir(cacheFile), 0o755); err != nil {
-		return fmt.Errorf("create cache dir: %w", err)
-	}
-	cacheJSON, err := json.MarshalIndent(ir, "", "  ")
-	if err != nil {
-		return fmt.Errorf("marshal cache: %w", err)
-	}
-	tmp := cacheFile + ".tmp"
-	if err := os.WriteFile(tmp, cacheJSON, 0o644); err != nil {
-		return fmt.Errorf("write cache: %w", err)
-	}
-	if err := os.Rename(tmp, cacheFile); err != nil {
-		return fmt.Errorf("finalize cache: %w", err)
-	}
+	if !opts.DryRun {
+		if err := os.MkdirAll(filepath.Dir(cacheFile), 0o755); err != nil {
+			return fmt.Errorf("create cache dir: %w", err)
+		}
+		cacheJSON, err := json.MarshalIndent(ir, "", "  ")
+		if err != nil {
+			return fmt.Errorf("marshal cache: %w", err)
+		}
+		tmp := cacheFile + ".tmp"
+		if err := os.WriteFile(tmp, cacheJSON, 0o644); err != nil {
+			return fmt.Errorf("write cache: %w", err)
+		}
+		if err := os.Rename(tmp, cacheFile); err != nil {
+			return fmt.Errorf("finalize cache: %w", err)
+		}
+	}
 ...
-	return updateGitignore(repoDir)
+	if opts.DryRun {
+		return nil
+	}
+	return updateGitignore(repoDir)
```

Also applies to: 92-105, 122-122
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/files/handler.go` at line 69, The DryRun option currently allows
previewing RenderAll(...) but still mutates the repo by calling updateGitignore
and persisting .supermodel/graph.json; change the flow so all
filesystem-mutating calls are skipped when opts.DryRun is true. Specifically,
wrap any calls to updateGitignore and the code path that writes/persists the
graph file (the cold-start persistence) with a guard like if !opts.DryRun { ...
}, and ensure RenderAll continues to receive opts.DryRun for preview only; apply
the same guard to the other write sites referenced around the blocks at the
updateGitignore call and the sections corresponding to lines 92-105 and 122 so a
DryRun makes no on-disk changes.
```go
	scanner := bufio.NewScanner(os.Stdin)
	var lines []string
	for scanner.Scan() {
		lines = append(lines, scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		return fmt.Errorf("reading stdin: %w", err)
	}

	raw := strings.Join(lines, "\n")

	var ev postToolUseEvent
	if err := json.Unmarshal([]byte(raw), &ev); err != nil {
		// Not a valid event — silently exit (hooks must not break the agent)
		return nil
	}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Shows the hook path that currently scans stdin line-by-line and propagates scanner.Err().
rg -n -C3 'bufio\.NewScanner|scanner\.Err\(|json\.Unmarshal\(' internal/files/handler.go
```

Repository: supermodeltools/cli
Length of output: 1209

🏁 Script executed:

```bash
#!/bin/bash
# Check if bufio is used elsewhere in handler.go after line 252
rg -n 'bufio' internal/files/handler.go
# Also check the imports at the top of the file
head -30 internal/files/handler.go | grep -n "import"
```

Repository: supermodeltools/cli
Length of output: 127

🏁 Script executed:

```bash
#!/bin/bash
# Check if there are any other Scanner usages in the file
rg -n 'Scanner' internal/files/handler.go
```

Repository: supermodeltools/cli
Length of output: 105

🏁 Script executed:

```bash
#!/bin/bash
# Look at the full context around postToolUseEvent to understand the struct
rg -n -B5 -A10 'type postToolUseEvent' internal/files/handler.go
```

Repository: supermodeltools/cli
Length of output: 545
Use json.Decoder to avoid hitting Scanner's token limit.
bufio.Scanner has a default 64KB token limit. When Claude Code sends a PostToolUse hook with a big tool_result in the JSON, a single-line payload can exceed this limit. The current code then returns an error instead of silently exiting—which breaks the hook contract that says hooks must never break the agent.
Replace the scanner loop and strings.Join with json.NewDecoder(os.Stdin).Decode(&ev). It's simpler, handles streaming input without size limits, and keeps the silent no-op behavior when JSON parsing fails.
Smaller and safer read path

```diff
-import "bufio"
 ...
-	scanner := bufio.NewScanner(os.Stdin)
-	var lines []string
-	for scanner.Scan() {
-		lines = append(lines, scanner.Text())
-	}
-	if err := scanner.Err(); err != nil {
-		return fmt.Errorf("reading stdin: %w", err)
-	}
-
-	raw := strings.Join(lines, "\n")
-
 	var ev postToolUseEvent
-	if err := json.Unmarshal([]byte(raw), &ev); err != nil {
+	if err := json.NewDecoder(os.Stdin).Decode(&ev); err != nil {
 		// Not a valid event — silently exit (hooks must not break the agent)
 		return nil
 	}
```

📝 Committable suggestion

```go
	var ev postToolUseEvent
	if err := json.NewDecoder(os.Stdin).Decode(&ev); err != nil {
		// Not a valid event — silently exit (hooks must not break the agent)
		return nil
	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/files/handler.go` around lines 237 - 252, Replace the bufio.Scanner
read loop and strings.Join with a streaming JSON decode: use
json.NewDecoder(os.Stdin).Decode(&ev) to populate the existing postToolUseEvent
variable (ev), remove the scanner.Err() check and raw string handling, and
preserve the current silent no-op behavior by returning nil when decoding fails
(i.e., if json.NewDecoder(...).Decode(&ev) returns an error, return nil instead
of propagating it); update any imports if needed to ensure encoding/json is
used.
```go
	var newEvents []WatchEvent
	now := time.Now()
	for f := range currentDirty {
		if _, known := w.lastKnownFiles[f]; !known {
			newEvents = append(newEvents, WatchEvent{Path: f, Time: now})
		}
	}

	if indexChanged {
		for f := range w.lastKnownFiles {
			if _, stillDirty := currentDirty[f]; !stillDirty {
				newEvents = append(newEvents, WatchEvent{Path: f, Time: now})
			}
		}
	}

	w.lastKnownFiles = currentDirty
```
This only fires on “clean → dirty”, not on later saves.
Edit foo.go twice without committing: the first save emits, the second one does not, because the file is already in lastKnownFiles. In watch mode that leaves sidecars stale while a file stays dirty. Track per-file mtime/hash for dirty files, or re-emit when a dirty file’s mtime changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/files/watcher.go` around lines 79 - 95, The current logic only emits
a WatchEvent when a path transitions from clean to dirty because
w.lastKnownFiles is just a set of paths; to fix, change w.lastKnownFiles to
track per-file mtime or hash (e.g., map[string]time.Time or map[string]string)
and compare that stored value against the current file's mtime/hash in the loop
that builds newEvents (use currentDirty to carry the current mtime/hash too);
when the stored value differs emit a WatchEvent for that path (so repeated saves
re-emit) and update w.lastKnownFiles[path] to the new mtime/hash before
assigning w.lastKnownFiles = currentDirty. Ensure the symbols involved are
WatchEvent, newEvents, currentDirty, indexChanged and w.lastKnownFiles so this
change re-emits on mtime/hash changes rather than only on first dirty detection.
```go
var zipSkipExtensions = map[string]bool{
	".min.js": true, ".min.css": true, ".bundle.js": true, ".map": true,
	".ico": true, ".woff": true, ".woff2": true, ".ttf": true,
	".eot": true, ".otf": true, ".mp4": true, ".mp3": true,
	".wav": true, ".png": true, ".jpg": true, ".jpeg": true,
	".gif": true, ".webp": true,
}
```
Multi-part suffixes never match this extension check.
filepath.Ext("app.bundle.js") is just .js, so .bundle.js in zipSkipExtensions is dead, and repo-level exclude_exts has the same problem for values like .pb.go or .generated.ts. Those files still get zipped.
🧩 Quick fix

```diff
-	ext := strings.ToLower(filepath.Ext(filename))
-	if zipSkipExtensions[ext] {
-		return false
-	}
+	lower := strings.ToLower(filename)
+	for suffix := range zipSkipExtensions {
+		if strings.HasSuffix(lower, suffix) {
+			return false
+		}
+	}
```

Also applies to: 138-145
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/files/zip.go` around lines 37 - 43, zipSkipExtensions contains
multi-part suffixes (e.g. ".bundle.js") but the code currently uses filepath.Ext
to check extensions, so those keys never match; update the check to test full
suffixes with strings.HasSuffix instead of filepath.Ext. Locate uses of
zipSkipExtensions and any extension checks (the existing filepath.Ext(...)
comparisons) and replace them with a loop or a direct
strings.HasSuffix(filename, ext) test against each key in zipSkipExtensions (or
normalize by converting the map into a slice of suffixes and checking
HasSuffix), ensuring both the declaration zipSkipExtensions and the other
extension-exclusion logic (the same check used around the other occurrence) use
the suffix-based match so multi-part names like "app.bundle.js" and repo-level
patterns like ".pb.go" are skipped correctly.
if onlyFiles != nil {
	for _, rel := range onlyFiles {
		full := filepath.Join(repoDir, rel)
		info, err := os.Lstat(full)
		if err != nil {
			continue
		}
		if info.Mode()&os.ModeSymlink != 0 {
			continue
		}
		if !shouldInclude(rel, info.Size()) {
			continue
		}
		if err := addFileToZip(zw, full, rel); err != nil {
Reject onlyFiles entries that escape the repo.
In incremental mode, rel is joined straight onto repoDir and read from disk. A value like ../../.ssh/id_rsa resolves outside the repo and will get uploaded if it passes the name filters.
🔒 Quick fix
func CreateZipFile(repoDir string, onlyFiles []string) (string, error) {
+ repoAbs, err := filepath.Abs(repoDir)
+ if err != nil {
+ return "", fmt.Errorf("resolve repo dir: %w", err)
+ }
loadCustomExclusions(repoDir)
@@
if onlyFiles != nil {
for _, rel := range onlyFiles {
- full := filepath.Join(repoDir, rel)
+ rel = filepath.Clean(rel)
+ if filepath.IsAbs(rel) || rel == ".." || strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
+ continue
+ }
+ full := filepath.Join(repoAbs, rel)
info, err := os.Lstat(full)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/files/zip.go` around lines 169 - 182, Reject any onlyFiles entries
that resolve outside the repo before calling os.Lstat/addFileToZip: compute
absolute/clean paths for repoDir and the joined full path (use
filepath.Clean/Abs) and verify absFull is either equal to absRepo or has
absRepo+path.Separator as a prefix; if it does not (i.e., it escapes via ../ or
absolute paths), skip and continue. Update the loop that iterates over onlyFiles
(the block using filepath.Join(repoDir, rel), os.Lstat, shouldInclude, and
addFileToZip) to perform this check first so only in-repo files are processed.
- nolint:gocyclo on Build, mergeGraph, renderImpactSection (graph algorithms)
- nolint:gocritic on NewDaemon hugeParam (value-semantic config struct)
- fix appendAssign: assign append back to source slice before assigning to field
- fix ifElseChain: rewrite label checks as switch statement
- fix appendCombine: use slice literal and multi-arg append in renderImpactSection
- fix errcheck: _ = conn.SetReadDeadline / SetWriteDeadline
- fix emptyStringTest: content != "" instead of len(content) > 0
- fix gosec G302: .gitignore opened with 0o600
- fix goimports: reformat daemon.go, types.go, hook.go

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
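Of these lint fixes, appendAssign is the least obvious; a standalone sketch of the pattern gocritic flags and the rewrite the commit describes (`report` is a made-up type standing in for the real struct):

```go
package main

import "fmt"

type report struct{ lines []string }

func main() {
	src := []string{"a", "b"}
	var r report

	// gocritic appendAssign flags appending one slice into an unrelated
	// variable or field in a single step:
	//
	//   r.lines = append(src, "c")
	//
	// because it obscures whether src's backing array ends up shared with
	// (and later mutated through) r.lines.

	// The fix: assign append back to the source slice first, then assign
	// the slice to the field.
	src = append(src, "c")
	r.lines = src

	fmt.Println(r.lines) // [a b c]
}
```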
… retries

- extract pollLoop(ctx, post func) so both full and incremental paths share identical polling logic without duplication
- add postIncrementalZip helper that always includes changedFiles in the multipart form; previously retries called postZipTo, which omitted the field
- AnalyzeIncremental now uses pollLoop with a closure over changedFiles so every request (initial + retries) sends the full payload

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
🧹 Nitpick comments (2)
internal/files/render.go (1)
239-247: Go sidecar wrapper is clever, but the silent skip might hide issues.

The `//go:build ignore` trick at Line 240 is smart: it keeps Go happy without polluting your build. However, Lines 245-247 silently skip path traversal errors. That's probably fine (you don't want one bad path to abort the whole render), but consider logging these so users know something was skipped:

  if err := WriteSidecar(repoDir, scPath, fullContent, dryRun); err != nil {
  	if strings.Contains(err.Error(), "path traversal") {
+ 		// Could log: fmt.Fprintf(os.Stderr, "skipped: %s (path traversal)\n", scPath)
  		continue
  	}
  	return written, err
  }

Not blocking, just a thought for debuggability.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/files/render.go` around lines 239 - 247, silent skipping of path traversal errors in the render loop hides skipped files; update the block around SidecarFilename/WriteSidecar so that when WriteSidecar returns an error and strings.Contains(err.Error(), "path traversal") is true you still emit a log (including the scPath, srcFile and err.Error()) before continuing, preserving the existing continue behavior and keeping repoDir, fullContent and dryRun handling unchanged.

internal/files/graph.go (1)
267-278: Heads up: `firstString` returns the last key as a literal fallback.

This is intentional (node ID as fallback path), but just making sure you know: if none of the property keys match, Line 277 returns `keys[len(keys)-1]`. So `firstString(props, "filePath", "name", n.ID)` returns the actual value of `n.ID` when no props match, not the string `"n.ID"`.

If `n.ID` looks like `file:src/foo.ts` and you use that as a path, downstream code might get confused. But since this is for File/LocalDependency nodes where IDs typically ARE paths, it's probably fine.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/files/graph.go` around lines 267 - 278, The function firstString currently returns the last keys[len(keys)-1] as a literal when no props match, which can unintentionally return an ID/path string (e.g., n.ID) as a fallback; change firstString so that if no matching property is found it returns an empty string (or an explicit safe fallback) instead of the last key literal, and update any callers that relied on the previous behavior; locate and modify the firstString function to implement this safer fallback.
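A minimal sketch of the safer fallback the prompt suggests, assuming the signature implied by the review text (callers that want the old behaviour would then pass their fallback explicitly, e.g. `n.ID`, after the lookup fails):

```go
package main

import "fmt"

// firstString returns the first non-empty string property matching keys.
// Safer fallback per the review: return "" when nothing matches, instead of
// returning keys[len(keys)-1] verbatim. (props/keys shapes are assumptions.)
func firstString(props map[string]any, keys ...string) string {
	for _, k := range keys {
		if v, ok := props[k].(string); ok && v != "" {
			return v
		}
	}
	return "" // no silent last-key fallback
}

func main() {
	props := map[string]any{"name": "src/foo.ts"}
	fmt.Println(firstString(props, "filePath", "name")) // src/foo.ts
	fmt.Println(firstString(props, "filePath") == "")   // true: caller decides the fallback
}
```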
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@internal/files/graph.go`:
- Around line 267-278: The function firstString currently returns the last
keys[len(keys)-1] as a literal when no props match, which can unintentionally
return an ID/path string (e.g., n.ID) as a fallback; change firstString so that
if no matching property is found it returns an empty string (or an explicit safe
fallback) instead of the last key literal, and update any callers that relied on
the previous behavior; locate and modify the firstString function to implement
this safer fallback.
In `@internal/files/render.go`:
- Around line 239-247: Silent skipping of path traversal errors in the render
loop hides skipped files; update the block around SidecarFilename/WriteSidecar
so that when WriteSidecar returns an error and strings.Contains(err.Error(),
"path traversal") is true you still emit a log (including the scPath, srcFile
and err.Error()) before continuing, preserving the existing continue behavior
and keeping repoDir, fullContent and dryRun handling unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: dea2f5df-64c3-4042-b066-56c6617b8e20
📒 Files selected for processing (6)
- cmd/hook.go
- internal/api/types.go
- internal/files/daemon.go
- internal/files/graph.go
- internal/files/handler.go
- internal/files/render.go
🚧 Files skipped from review as they are similar to previous changes (2)
- cmd/hook.go
- internal/api/types.go
Summary
- `supermodel analyze` now writes `.graph.*` sidecar files next to source files by default. Opt out with `--no-files` or set `files: false` in `~/.supermodel/config.yaml`.
- New top-level commands: `watch` (daemon mode), `clean` (remove `.graph.*` files), `hook` (Claude Code PostToolUse forwarder).
- `internal/sidecars` → `internal/files`: package renamed to reflect its role as a general output mode, not a separate feature.
- `cmd/sidecars.go` deleted: the `supermodel sidecars generate|watch|clean|hook|render|setup` subcommand group is gone.

What changed
| Before | After |
| --- | --- |
| `supermodel sidecars generate` | `supermodel analyze` (now also writes `.graph` files) |
| `supermodel sidecars watch` | `supermodel watch` |
| `supermodel sidecars clean` | `supermodel clean` |
| `supermodel sidecars hook` | `supermodel hook` |
| `supermodel sidecars render` | removed |
| `supermodel sidecars setup` | removed |

Config
Or per-invocation:
`supermodel analyze --no-files`

Or via env:

`SUPERMODEL_FILES=false`

Test plan
- `go run . --help` shows `watch`, `clean`, `hook` as top-level commands (no `sidecars`)
- `go run . analyze --help` shows `--no-files` flag
- `go build ./... && go vet ./...` pass clean
- `supermodel analyze` writes `.graph.*` files next to source files
- `supermodel analyze --no-files` skips writing files
- `supermodel watch` starts daemon and listens for UDP on port 7734
- `supermodel clean` removes `.graph.*` files
- `supermodel hook` reads stdin and sends UDP notification

🤖 Generated with Claude Code
Summary by CodeRabbit