Code Quality: PR #391 #1132

Triggered via dynamic May 5, 2026 00:20
Status Success
Total duration 1m 23s
Artifacts

codeql

on: dynamic
Matrix: analyze

Annotations

13 warnings
Analyze (python)
Starting April 2026, the CodeQL Action will skip computing file coverage information on pull requests to improve analysis performance. File coverage information will still be computed on non-PR analyses. To opt out of this change, create a custom repository property with the name `github-codeql-file-coverage-on-prs` and the type "True/false", then set this property to `true` in the repository's settings.
Snapshot normalizer silently drops suite-less passed simulator test cases: src/snapshot-tests/json-normalize.ts#L104
`normalizeSimulatorTestCases` filters out every simulator test-result entry whose `suite` is undefined and `status === 'passed'` before the envelope is serialized to a fixture. Unlike the existing normalizers in this file (which replace volatile values with stable placeholders like `<SIM_STATE>` or `<PATH_ENTRIES>`), this removes contract data outright. The skill requires snapshot updates to be reviewable contract changes and warns against adding fake snapshot state to force tests to pass; silently hiding a class of test cases from fixtures means future changes that add or drop suite-less passed cases will not show up in snapshot diffs, defeating the contract-review purpose of the snapshots.
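A fix in the spirit of the file's existing normalizers would map volatile fields to stable placeholders rather than filtering entries out. A minimal sketch, with a hypothetical entry shape and function name that do not match the real module:

```typescript
// Hypothetical sketch: normalize every entry instead of dropping suite-less
// passed cases, so contract changes still appear in snapshot diffs.
interface SimTestCase {
  suite?: string;
  name: string;
  status: string;
  durationMs?: number;
}

function normalizeSimCases(cases: SimTestCase[]): SimTestCase[] {
  // Map, never filter: adding or removing a suite-less passed case
  // remains visible in the serialized fixture.
  return cases.map((c) => ({
    suite: c.suite ?? '<NO_SUITE>',
    name: c.name,
    status: c.status,
    durationMs: c.durationMs === undefined ? undefined : 0, // stable placeholder
  }));
}
```

This keeps the snapshot a faithful contract while still removing run-to-run volatility.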
Snapshot normalizer hardcodes completed=='57' to mask test progress: src/snapshot-tests/normalize.ts#L143
The new normalization branch only collapses the simulator failure test progress block when `completed === '57'`. This embeds a fixture-specific magic value into the normalizer, so any change to the underlying test fixture's completed count (or reuse for another scenario) will silently bypass normalization and either leak environment-variable counts into snapshots or mask failures. Per the skill's guardrail to treat snapshot updates as contract changes and to avoid adding fake state to force tests to pass, this targeted-by-value short-circuit creates a brittle contract that hides drift rather than normalizing it.
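One way to remove the magic value is to match the progress block by shape rather than by a fixture-specific count. A minimal sketch with an illustrative function name and an assumed `completed:`/`total:` text format:

```typescript
// Hypothetical sketch: collapse any completed/total count, whatever its
// value, so a changed fixture cannot silently bypass normalization.
function normalizeProgress(block: string): string {
  return block
    .replace(/completed:\s*\d+/g, 'completed: <N>')
    .replace(/total:\s*\d+/g, 'total: <N>');
}
```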
Atomic write helper never fsyncs the parent directory, so the renamed file may be lost on crash: src/daemon/daemon-registry.ts#L158
writeFileAtomicSync writes to a temp file and then renames it into place. The temp filename embeds process.pid and randomUUID, so concurrent writers cannot collide, and if writeFileSync throws, the catch block's rmSync(tempPath, {force: true}) correctly cleans up the temp file. The remaining gap is durability: after a successful rename the function never fsyncs the parent directory, so if the process crashes before the directory entry is flushed, the renamed registry file may be lost.
[PGA-TRW] Atomic write helper never fsyncs the parent directory, so the renamed file may be lost on crash (additional location): src/daemon/socket-path.ts#L89
writeFileAtomicSync writes to a temp file and then renames it into place. The temp filename embeds process.pid and randomUUID, so concurrent writers cannot collide, and if writeFileSync throws, the catch block's rmSync(tempPath, {force: true}) correctly cleans up the temp file. The remaining gap is durability: after a successful rename the function never fsyncs the parent directory, so if the process crashes before the directory entry is flushed, the renamed registry file may be lost.
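A sketch of the write-temp-then-rename pattern with the missing directory fsync added. This is an illustrative reimplementation, not the project's actual helper, and directory-fd fsync is a POSIX technique (it does not work on Windows):

```typescript
import { openSync, writeFileSync, renameSync, fsyncSync, closeSync, rmSync } from 'node:fs';
import { dirname } from 'node:path';
import { randomUUID } from 'node:crypto';

// Hypothetical sketch: atomic write that also flushes the parent directory
// entry so the rename survives a crash (POSIX-only).
function writeFileAtomicSync(filePath: string, data: string): void {
  const tempPath = `${filePath}.${process.pid}.${randomUUID()}.tmp`;
  try {
    writeFileSync(tempPath, data);
    renameSync(tempPath, filePath);
    // fsync the directory so the new directory entry is durable.
    const dirFd = openSync(dirname(filePath), 'r');
    try {
      fsyncSync(dirFd);
    } finally {
      closeSync(dirFd);
    }
  } catch (err) {
    rmSync(tempPath, { force: true }); // clean up the temp file on any failure
    throw err;
  }
}
```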
Failed rename-back during stale lock recovery silently destroys a live lock: src/utils/fs-lock-sync.ts#L93
In tryRecoverExpiredLockDir, after quarantining the lockDir via renameSync, if the quarantined owner does not match the expected stale owner (indicating a race where another process replaced the lock), the code attempts to renameSync the quarantine back to lockDir. If that rename-back fails, the catch block silently swallows the error and returns false. The original (potentially still-live) lock directory is left in the quarantine path, so the legitimate lock holder's release() will no longer find its owner.json at lockDir, and future acquirers may proceed as if no lock exists. This can break the mutual exclusion guarantee the lock is meant to provide.
[QDL-Y6B] Failed rename-back during stale lock recovery silently destroys a live lock (additional location): src/utils/fs-lock-sync.ts#L113
In tryRecoverExpiredLockDir, after quarantining the lockDir via renameSync, if the quarantined owner does not match the expected stale owner (indicating a race where another process replaced the lock), the code attempts to renameSync the quarantine back to lockDir. If that rename-back fails, the catch block silently swallows the error and returns false. The original (potentially still-live) lock directory is left in the quarantine path, so the legitimate lock holder's release() will no longer find its owner.json at lockDir, and future acquirers may proceed as if no lock exists. This can break the mutual exclusion guarantee the lock is meant to provide.
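The core of the fix is that a failed rename-back must be surfaced, not swallowed. A minimal sketch with an illustrative function name (not the real module's API):

```typescript
import { renameSync } from 'node:fs';

// Hypothetical sketch: restoring a quarantined lock that turned out to be
// live must fail loudly if the rename-back does not succeed.
function restoreQuarantinedLock(quarantinePath: string, lockDir: string): void {
  try {
    renameSync(quarantinePath, lockDir);
  } catch (err) {
    // Surfacing the failure lets the caller escalate instead of leaving
    // a live lock stranded in quarantine and mutual exclusion broken.
    throw new Error(
      `failed to restore live lock from ${quarantinePath} to ${lockDir}: ${String(err)}`,
    );
  }
}
```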
Recovery without owner verification can delete a fresh, valid lock: src/utils/fs-lock.ts#L116
In tryRecoverExpiredLockDir, when shouldRecoverLockDir returns recover:true with owner:null (the lockDir existed but had no/invalid owner.json and was older than leaseMs), the function quarantines and deletes the lockDir without re-verifying contents post-rename. Between the initial readLockOwner and the rename, another process can legitimately create a fresh lock (writing a valid owner.json). Because recovery.owner is null, the post-rename check at lines 116-122 is skipped and the freshly-created lock is destroyed. The competing process holds an AcquiredFsLock handle pointing to a path that no longer exists, breaking mutual exclusion against subsequent acquirers.
[TLQ-3TK] Recovery without owner verification can delete a fresh, valid lock (additional location): src/utils/fs-lock-sync.ts#L45
In tryRecoverExpiredLockDir, when shouldRecoverLockDir returns recover:true with owner:null (the lockDir existed but had no/invalid owner.json and was older than leaseMs), the function quarantines and deletes the lockDir without re-verifying contents post-rename. Between the initial readLockOwner and the rename, another process can legitimately create a fresh lock (writing a valid owner.json). Because recovery.owner is null, the post-rename check at lines 116-122 is skipped and the freshly-created lock is destroyed. The competing process holds an AcquiredFsLock handle pointing to a path that no longer exists, breaking mutual exclusion against subsequent acquirers.
[TLQ-3TK] Recovery without owner verification can delete a fresh, valid lock (additional location): src/utils/fs-lock.ts#L84
In tryRecoverExpiredLockDir, when shouldRecoverLockDir returns recover:true with owner:null (the lockDir existed but had no/invalid owner.json and was older than leaseMs), the function quarantines and deletes the lockDir without re-verifying contents post-rename. Between the initial readLockOwner and the rename, another process can legitimately create a fresh lock (writing a valid owner.json). Because recovery.owner is null, the post-rename check at lines 116-122 is skipped and the freshly-created lock is destroyed. The competing process holds an AcquiredFsLock handle pointing to a path that no longer exists, breaking mutual exclusion against subsequent acquirers.
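The missing step is a post-rename re-verification that also covers the owner-less case. A minimal sketch, assuming an `owner.json` file inside the lock directory and a hypothetical owner shape; names are illustrative:

```typescript
import { readFileSync } from 'node:fs';
import { join } from 'node:path';

interface LockOwner {
  pid: number;
  token: string;
}

// Hypothetical sketch: after quarantining, re-read owner.json and only
// allow deletion when the contents still match what was judged stale.
// A fresh valid owner means another process raced in and the quarantined
// directory must be restored, not deleted.
function mayDeleteQuarantined(quarantinePath: string, expected: LockOwner | null): boolean {
  let current: LockOwner | null = null;
  try {
    current = JSON.parse(readFileSync(join(quarantinePath, 'owner.json'), 'utf8')) as LockOwner;
  } catch {
    current = null; // missing or invalid owner.json
  }
  if (expected === null) return current === null; // owner-less then AND now
  return current !== null && current.pid === expected.pid && current.token === expected.token;
}
```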
normalizeWorkspaceKey allows '..' and other dangerous segments: src/utils/log-paths.ts#L38
normalizeWorkspaceKey only rejects empty strings and forward/backward slashes. A workspace key of '..' (or containing null bytes, leading dots, colons on Windows, or other special characters) will pass validation and be joined into the workspace path, allowing path traversal outside the workspaces directory. If workspaceKey is ever derived from untrusted or semi-trusted input (e.g., a project path-derived identifier), an attacker could cause cleanup/lock/log operations to target arbitrary filesystem locations under the user's home directory.
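An allow-list is safer than deny-listing slashes. A minimal sketch with an illustrative function name (the character set shown is an assumption, not the project's policy):

```typescript
// Hypothetical sketch: validate workspace keys against an allow-list.
// First character must be alphanumeric, so '.', '..', '.hidden' all fail;
// null bytes, colons, and slashes are outside the class entirely.
function isSafeWorkspaceKey(key: string): boolean {
  return /^[A-Za-z0-9][A-Za-z0-9._-]{0,127}$/.test(key);
}
```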
TOCTOU race in validateSocketDir between lstat/stat/chmod on shared tmpdir: src/daemon/socket-path.ts#L68
validateSocketDir performs lstatSync, then statSync, then optionally chmodSync on a path under tmpdir() (a world-writable, multi-user directory). Because statSync and chmodSync follow symlinks, an attacker with write access to tmpdir could replace the validated directory with a symlink between the lstat check (line 69) and the subsequent stat/chmod calls (lines 74, 85), causing chmod 0700 to be applied to an attacker-chosen target. Using tmpdir for daemon sockets without O_NOFOLLOW-style atomic operations is a well-known privilege/permission tampering vector. Consider opening the directory with a file descriptor and using fstat/fchmod, or placing the daemon run dir under a user-private location (e.g. XDG_RUNTIME_DIR or ~/.xcodebuildmcp).
[GUJ-U3Z] TOCTOU race in validateSocketDir between lstat/stat/chmod on shared tmpdir (additional location): src/daemon/socket-path.ts#L89
validateSocketDir performs lstatSync, then statSync, then optionally chmodSync on a path under tmpdir() (a world-writable, multi-user directory). Because statSync and chmodSync follow symlinks, an attacker with write access to tmpdir could replace the validated directory with a symlink between the lstat check (line 69) and the subsequent stat/chmod calls (lines 74, 85), causing chmod 0700 to be applied to an attacker-chosen target. Using tmpdir for daemon sockets without O_NOFOLLOW-style atomic operations is a well-known privilege/permission tampering vector. Consider opening the directory with a file descriptor and using fstat/fchmod, or placing the daemon run dir under a user-private location (e.g. XDG_RUNTIME_DIR or ~/.xcodebuildmcp).
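The fd-based approach suggested above can be sketched as follows. This is a POSIX-only illustration with a hypothetical function name; O_NOFOLLOW makes the open itself refuse a symlink, and fstat/fchmod then operate on the already-opened inode, closing the check-to-use gap:

```typescript
import { openSync, fstatSync, fchmodSync, closeSync, constants } from 'node:fs';

// Hypothetical sketch: validate and tighten directory permissions through a
// single file descriptor so a symlink swap between check and chmod cannot
// redirect the chmod to an attacker-chosen target (POSIX-only).
function tightenDirPerms(dir: string): void {
  const fd = openSync(dir, constants.O_RDONLY | constants.O_NOFOLLOW | constants.O_DIRECTORY);
  try {
    const st = fstatSync(fd); // stats the opened fd, not a re-resolved path
    if (!st.isDirectory()) throw new Error(`${dir} is not a directory`);
    if (st.uid !== process.getuid?.()) throw new Error(`${dir} not owned by current user`);
    if ((st.mode & 0o077) !== 0) fchmodSync(fd, 0o700); // applies to the same inode
  } finally {
    closeSync(fd);
  }
}
```

Moving the socket directory out of the shared tmpdir (e.g. under XDG_RUNTIME_DIR) removes the attacker's write access in the first place and is the stronger fix.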