PR #391 #1124

Triggered via dynamic May 4, 2026 18:13
Status Success
Total duration 1m 27s
codeql

on: dynamic
Matrix: analyze

Annotations

11 warnings
Analyze (python)
Starting April 2026, the CodeQL Action will skip computing file coverage information on pull requests to improve analysis performance. File coverage information will still be computed on non-PR analyses. To opt out of this change, create a custom repository property with the name `github-codeql-file-coverage-on-prs` and the type "True/false", then set this property to `true` in the repository's settings.
Analyze (javascript-typescript)
Starting April 2026, the CodeQL Action will skip computing file coverage information on pull requests to improve analysis performance. File coverage information will still be computed on non-PR analyses. To opt out of this change, create a custom repository property with the name `github-codeql-file-coverage-on-prs` and the type "True/false", then set this property to `true` in the repository's settings.
Discovered test total bumped from 52 to 57 without corresponding items list change: src/snapshot-tests/__fixtures__/json/device/test--failure.json#L34
The fixture changes the discovered test `total` from 52 to 57, but the visible context shows the same item entries with no new tests appended. If the items array length did not actually grow by 5, this fixture update is inconsistent and may indicate the snapshot was patched to make tests pass rather than reflecting an intentional behavior change. Per the skill guardrails, fixture updates must map to intentional behavior changes and JSON fixtures must preserve stable structured output envelopes.
Forced shutdown timer and server.close callback can run cleanup concurrently and both call process.exit: src/daemon.ts#L336
When the 5s forced shutdown timer fires, it sets `forcedShutdownTimer = null`, then asynchronously runs `cleanupArtifacts()` followed by `flushAndCloseSentry(1000)` and `process.exit(1)`. During that async window, `server.close()` may still invoke its callback; because `forcedShutdownTimer` is already null, the guarded `clearTimeout` is skipped and the callback starts its own concurrent `cleanupArtifacts()` and a second `flushAndCloseSentry`/`process.exit(exitCode)` chain. This can produce concurrent workspace filesystem cleanup (which the PR specifically aims to prevent for owned artifacts) and a non-deterministic exit code.
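
A minimal sketch of one way to serialize the two shutdown paths, assuming stand-ins for the daemon's `server`, `exitCode`, `cleanupArtifacts`, and `flushAndCloseSentry` (declared here only for the sketch); the idea is to funnel the forced-shutdown timer and the `server.close` callback through a single idempotent guard:

```typescript
import type { Server } from 'node:net';

// Stand-ins for the daemon's real state and helpers (sketch only).
declare const server: Server;
declare const exitCode: number;
declare function cleanupArtifacts(): Promise<void>;
declare function flushAndCloseSentry(timeoutMs: number): Promise<void>;

let forcedShutdownTimer: NodeJS.Timeout | null = null;
let shuttingDown = false;

// Both shutdown paths call this guard; only the first caller proceeds,
// so cleanup runs at most once and the exit code is deterministic.
async function shutdownOnce(code: number): Promise<void> {
  if (shuttingDown) return; // the losing path becomes a no-op
  shuttingDown = true;
  if (forcedShutdownTimer !== null) {
    clearTimeout(forcedShutdownTimer);
    forcedShutdownTimer = null;
  }
  await cleanupArtifacts(); // at most one concurrent cleanup
  await flushAndCloseSentry(1000);
  process.exit(code);
}

forcedShutdownTimer = setTimeout(() => void shutdownOnce(1), 5_000);
server.close(() => void shutdownOnce(exitCode));
```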
TOCTOU race between lstatSync check and chmodSync allows symlink-following on shared /tmp: src/daemon/socket-path.ts#L84
validateSocketDir uses lstatSync to verify the directory is not a symlink, then later calls chmodSync(dir, 0o700), which follows symlinks. Because daemonRunDir() now resolves to tmpdir() (a world-writable location), an attacker on the same host could replace the directory with a symlink to a sensitive path between the lstat check and the chmod call, causing chmodSync to apply 0o700 to the symlink target. The previous implementation kept these files under ~/.xcodebuildmcp, where this race was not exploitable. Use fchmod on an O_NOFOLLOW-opened file descriptor, or open the directory once and operate on the fd, to close the race.
[7XK-N3P] TOCTOU race between lstatSync check and chmodSync allows symlink-following on shared /tmp (additional location): src/daemon/socket-path.ts#L89
Same root cause and suggested fix as the preceding finding; this annotation marks the second lstat-then-chmod call site in src/daemon/socket-path.ts.
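
One way to close the race along the lines the finding suggests, sketched for POSIX (where `fs.constants.O_NOFOLLOW` and `O_DIRECTORY` are honored); `tightenDirPermissions` is a hypothetical name, not the PR's API:

```typescript
import { openSync, fstatSync, fchmodSync, closeSync, constants } from 'node:fs';

// Open the directory exactly once with O_NOFOLLOW, then validate and
// chmod through the file descriptor. A symlink swapped in after any
// earlier check makes openSync fail (ELOOP) instead of being followed.
function tightenDirPermissions(dir: string): void {
  const fd = openSync(
    dir,
    constants.O_RDONLY | constants.O_DIRECTORY | constants.O_NOFOLLOW,
  );
  try {
    if (!fstatSync(fd).isDirectory()) {
      throw new Error(`${dir} is not a directory`);
    }
    fchmodSync(fd, 0o700); // applies to the opened inode, never a symlink target
  } finally {
    closeSync(fd);
  }
}
```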
TOCTOU in stale-directory recovery can destroy a freshly created lock: src/utils/fs-lock-sync.ts#L70
In tryRecoverExpiredLockDir, when readLockOwnerSync returns null and the directory's mtime is older than leaseMs, recovery proceeds without any post-rename validation (the owner equality check is skipped because recovery.owner is null). Between the staleness check and the renameSync, another process can complete createLock by writing owner.json into the same directory; this implementation will then rename and rmSync that directory, deleting a live lock. The user-visible consequence is that two processes can briefly believe they hold the same lock, breaking the mutual-exclusion guarantee the lock is supposed to provide.
[MA8-VDH] TOCTOU in stale-directory recovery can destroy a freshly created lock (additional location): src/utils/fs-lock.ts#L116
Same root cause and suggested fix as the preceding finding; this annotation marks the parallel async implementation in src/utils/fs-lock.ts.
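
A sketch of the missing post-rename validation, with `readLockOwnerSync` and an `ownersEqual` comparison declared as stand-ins for the PR's helpers; re-checking the owner after the rename catches a lock that was completed between the staleness check and the rename:

```typescript
import { renameSync, rmSync } from 'node:fs';

// Stand-ins for the PR's helpers (sketch only).
declare function readLockOwnerSync(dir: string): { pid: number } | null;
declare function ownersEqual(
  a: { pid: number } | null,
  b: { pid: number } | null,
): boolean;

// staleOwner is the owner snapshot taken during the staleness check
// (null in the ownerless case this finding describes).
function recoverStaleLockDir(
  lockDir: string,
  trashDir: string,
  staleOwner: { pid: number } | null,
): boolean {
  renameSync(lockDir, trashDir);
  // Re-read the owner *after* the rename. If another process completed
  // createLock in the window, owner.json moved with the directory and
  // no longer matches the snapshot we judged stale: undo and back off.
  if (!ownersEqual(readLockOwnerSync(trashDir), staleOwner)) {
    renameSync(trashDir, lockDir); // restore the live lock
    return false;
  }
  rmSync(trashDir, { recursive: true, force: true });
  return true;
}
```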
Liveness check treats EPERM as 'process not alive', enabling premature lock recovery: src/utils/fs-lock.ts#L90
`shouldRecoverLockDir` (line 93) calls `isPidAlive(staleOwner.pid)` to decide whether the owner is gone. `isPidAlive` returns false on both ESRCH and EPERM. EPERM from `process.kill(pid, 0)` actually indicates the process exists but is owned by a different user — i.e. it IS alive. In multi-user/cross-uid scenarios (different MCP servers running under different accounts on the same host) this causes the recovery path to consider a live process dead and steal its lock, defeating the multi-process safety the PR claims to provide.
[4R3-XAL] Liveness check treats EPERM as 'process not alive', enabling premature lock recovery (additional location): src/utils/process-liveness.ts#L11
Same root cause and suggested fix as the preceding finding; this annotation marks the `isPidAlive` helper itself in src/utils/process-liveness.ts.
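
A sketch of an EPERM-aware probe; the only Node behavior assumed is that `process.kill(pid, 0)` throws ESRCH for a missing process and EPERM for a live process owned by another user:

```typescript
// Signal 0 performs an existence/permission check without delivering
// a signal. EPERM means the process exists but belongs to another uid,
// so it must be treated as alive, not as a recoverable stale owner.
function isPidAlive(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch (err) {
    const code = (err as NodeJS.ErrnoException).code;
    if (code === 'EPERM') return true; // alive, different user
    return false; // ESRCH: no such process
  }
}
```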
Schedule pre-key cooldown only updates on completion, allowing rapid re-scheduling between submission and completion: src/utils/workspace-filesystem-lifecycle.ts#L397
`scheduleWorkspaceFilesystemLifecycleSweep` checks `lastScheduledAtByPreKey` to short-circuit, but the map is only updated inside the `setTimeout` callback's `.then`, after the sweep completes (and only if it was not skipped). Between the moment a sweep is scheduled and the moment it completes, repeated calls within the cooldown window pass the pre-key check and fall through to the secondary `runningScheduledSweeps`/`lastScheduledAtByScope` check. `runningScheduledSweeps` offers some protection once a timer fires, but during the `WORKSPACE_FILESYSTEM_LIFECYCLE_SCHEDULE_DELAY_MS` window before it does, multiple timers can be queued because nothing records the scheduled-at time at submission. The result is redundant scheduling work and timers firing in rapid succession.
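
A sketch of stamping the pre-key at submission time rather than completion; the map name mirrors the annotation, while `COOLDOWN_MS`, `SCHEDULE_DELAY_MS`, and `runSweep` are stand-ins:

```typescript
const lastScheduledAtByPreKey = new Map<string, number>();
const SCHEDULE_DELAY_MS = 5_000; // stand-in for the real delay constant
const COOLDOWN_MS = 60_000;      // stand-in for the real cooldown window

declare function runSweep(preKey: string): Promise<void>;

function scheduleSweep(preKey: string): boolean {
  const now = Date.now();
  const last = lastScheduledAtByPreKey.get(preKey);
  if (last !== undefined && now - last < COOLDOWN_MS) {
    return false; // still cooling down; no second timer is queued
  }
  // Record at submission, before the timer exists, so repeated calls
  // during the delay window are rejected by the check above.
  lastScheduledAtByPreKey.set(preKey, now);
  setTimeout(() => {
    void runSweep(preKey).catch(() => {
      // A failed sweep could roll the stamp back to permit an earlier
      // retry; omitted to keep the sketch focused on the queuing race.
    });
  }, SCHEDULE_DELAY_MS);
  return true;
}
```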