fix(telegram): add socket timeout and extract tgApi into shared module #1399

tommylin-signalpro wants to merge 7 commits into NVIDIA:main
Conversation
Move the Telegram API client from an inline function in telegram-bridge.js into bin/lib/telegram-api.js. The new module accepts the bot token as a parameter instead of reading it from module-level state, and includes a 60s socket timeout with req.destroy() to prevent the poll loop from hanging on dead TCP connections. telegram-bridge.js now imports and wraps the shared function. Tests in test/telegram-api.test.js import the same production module and verify timeout, recovery, and error handling against local TLS servers. Refs: NVIDIA#1398 Signed-off-by: Tommy Lin <tommylin@signalpro.com.tw>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adds a shared Telegram Bot API client and tests, switches bridge code to use it, adds an interactive preset selector used by the CLI, and replaces the top-level installer with a bootstrap plus a new version-aware installer script; tests updated to exercise these flows.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Bridge as Bridge (scripts/telegram-bridge.js)
    participant API as tgApi (bin/lib/telegram-api.js)
    participant Telegram as Telegram API (api.telegram.org / test server)
    Bridge->>API: tgApi(token, method, body, opts)
    API->>Telegram: POST /bot{token}/{method} (JSON) with opts (hostname/port/timeout)
    Telegram-->>API: TCP chunks / end / error / abort
    API-->>Bridge: Promise resolves with parsed JSON or rejects with error/timeout
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/lib/telegram-api.js`:
- Around line 47-57: The response handler for the HTTPS request only listens for
"data" and "end" which lets mid-stream errors or aborts hang the promise; update
the callback that receives the response (the res in the https.request callback)
to also attach res.on("error", ...) and res.on("aborted", ...) handlers that
immediately reject or resolve with an error object (consistent with the existing
resolve({ ok: false, error: ... })) so the outer Promise doesn't hang; apply the
same change to the other response handler instance referenced near line 61 to
ensure both places handle res errors/aborts.
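The settle-once guard the comment asks for can be sketched as below. `readResponse` is a hypothetical name for illustration, not the production function; the point is that whichever of `end`, `error`, or `aborted` fires first wins, so the promise can never hang.

```javascript
// Minimal illustration of the settle guard: resolve/reject at most once,
// regardless of which of "end", "error", or "aborted" fires first.
function readResponse(res) {
  return new Promise((resolve, reject) => {
    let settled = false;
    const settle = (fn, value) => {
      if (!settled) {
        settled = true;
        fn(value);
      }
    };
    let buf = "";
    if (typeof res.setEncoding === "function") res.setEncoding("utf8");
    res.on("data", (c) => (buf += c));
    res.on("end", () => settle(resolve, buf));
    res.on("aborted", () => settle(reject, new Error("response aborted")));
    res.on("error", (err) => settle(reject, err));
  });
}
```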
In `@test/telegram-api.test.js`:
- Around line 156-160: The test "handles connection refused (server down)" races
because tempServer.close() is not awaited; change the close call to be awaited
by wrapping it in a Promise (e.g., await new Promise((resolve, reject) =>
tempServer.close(err => err ? reject(err) : resolve()))) so the server listener
is fully closed before issuing the API request; update the tempServer.close()
call in that test to use this awaited pattern referencing tempServer and the
test name to locate the change.
- Around line 69-75: The afterEach teardown is not awaiting server shutdowns so
handles linger; make afterEach async and await each server's close completion by
wrapping s.close in a Promise (or using a provided closeAsync) and resolving in
the close callback, ensuring s.closeAllConnections() is called first where
present; then await Promise.all over servers.map(...) (or sequential awaits) to
guarantee all s.close() completions before the test exits, and only then clear
the servers array.
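Both teardown comments reduce to the same promisified-close idiom, sketched here. The helper name is illustrative, and the hook usage assumes vitest/jest-style `afterEach`.

```javascript
// The awaited-close pattern from the comments above: wrap callback-style
// server.close() in a Promise so teardown finishes before the next test runs.
function closeServer(server) {
  return new Promise((resolve, reject) => {
    // Drop keep-alive sockets first where supported (Node >= 18.2).
    if (typeof server.closeAllConnections === "function") {
      server.closeAllConnections();
    }
    server.close((err) => (err ? reject(err) : resolve()));
  });
}
```

In the tests this would look like `afterEach(async () => { await Promise.all(servers.map(closeServer)); servers.length = 0; });`.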
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 68bc192b-2545-4825-b290-7f896305a859
📒 Files selected for processing (3)
- bin/lib/telegram-api.js
- scripts/telegram-bridge.js
- test/telegram-api.test.js
Address CodeRabbit review feedback on NVIDIA#1399:

- Add res.on("error") and res.on("aborted") handlers with a settle guard to prevent promise hangs on mid-stream socket errors.
- Make afterEach async and await server.close() to prevent lingering handles and port conflicts between tests.
- Await tempServer.close() in the connection-refused test to eliminate a race condition.
- Update mid-response test to assert rejection instead of accepting any outcome, now that aborted/error handlers are in place.

Signed-off-by: Tommy Lin <tommylin@signalpro.com.tw>
🧹 Nitpick comments (1)
bin/lib/telegram-api.js (1)
55-56: Make response decoding UTF-8 safe before JSON parse.

Line 56 appends raw chunks directly to a string. If multibyte UTF-8 characters are split across chunks, content can be corrupted. Set encoding on res (or accumulate Buffers and Buffer.concat) before parsing.

Proposed patch:

```diff
 const req = https.request(reqOpts, (res) => {
   let buf = "";
+  res.setEncoding("utf8");
   res.on("data", (c) => (buf += c));
   res.on("aborted", () => settle(reject, new Error(`Telegram API ${method} response aborted`)));
   res.on("error", (err) => settle(reject, err));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/telegram-api.js` around lines 55 - 56, The response handling currently appends raw chunks to a string via res.on("data", ...) which can corrupt multibyte UTF-8 characters; update the handler in telegram-api.js around the res callbacks to decode safely by either calling res.setEncoding('utf8') before attaching data listeners or by accumulating Buffer objects (push each chunk in res.on("data", chunk) into an array, then Buffer.concat on 'end' and toString('utf8')) before JSON.parse; ensure the variable currently named buf and the res.on("end", ...) JSON.parse logic use the properly decoded string or buffer-to-string result.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@bin/lib/telegram-api.js`:
- Around line 55-56: The response handling currently appends raw chunks to a
string via res.on("data", ...) which can corrupt multibyte UTF-8 characters;
update the handler in telegram-api.js around the res callbacks to decode safely
by either calling res.setEncoding('utf8') before attaching data listeners or by
accumulating Buffer objects (push each chunk in res.on("data", chunk) into an
array, then Buffer.concat on 'end' and toString('utf8')) before JSON.parse;
ensure the variable currently named buf and the res.on("end", ...) JSON.parse
logic use the properly decoded string or buffer-to-string result.
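The Buffer-accumulation alternative mentioned in the nitpick could look like the sketch below. `collectJson` is a hypothetical helper, not code from the PR; it shows why decoding once at the end is safe even when a multibyte character is split across chunks.

```javascript
// Buffer-accumulation variant from the nitpick: collect raw chunks and
// decode once on "end", so a multibyte UTF-8 character split across two
// chunks cannot be corrupted.
function collectJson(res) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    res.on("data", (c) => chunks.push(c));
    res.on("end", () => {
      try {
        resolve(JSON.parse(Buffer.concat(chunks).toString("utf8")));
      } catch (err) {
        reject(err);
      }
    });
    res.on("error", reject);
  });
}
```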
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b85a7a1b-9e58-4577-ac97-7aae7be784f4
📒 Files selected for processing (2)
- bin/lib/telegram-api.js
- test/telegram-api.test.js
🚧 Files skipped from review as they are similar to previous changes (1)
- test/telegram-api.test.js
|
✨ Thanks for submitting this pull request, which proposes a way to fix a reliability issue in the Telegram bridge by adding socket timeouts.
Force-pushed from ee9cf0d to 8889744.
## Summary

- turn the repo-root `install.sh` into a thin bootstrap that selects one install ref and then executes installer logic from that same ref
- reuse `scripts/install.sh` as the versioned installer payload instead of adding another installer path
- preserve compatibility for older refs by only treating `scripts/install.sh` as the payload when it contains an explicit payload marker; otherwise fall back to the cloned root `install.sh`

## Validation

- `node_modules/.bin/vitest run test/install-preflight.test.js`
- `bash -n install.sh scripts/install.sh`
- `npx eslint test/install-preflight.test.js`

## Notes

- keeps `curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash` working
- keeps direct repo-checkout execution working for both `bash install.sh` and `bash scripts/install.sh`
- adds explicit coverage for `curl -fsSL https://www.nvidia.com/nemoclaw.sh | NEMOCLAW_INSTALL_TAG=v0.0.1 bash`
- pushed update at `46ad602`; repo pre-push still has an unrelated failure in `src/lib/preflight.test.ts`

<!-- This is an auto-generated comment: release notes by coderabbit.ai -->

## Summary by CodeRabbit

* **Refactor**
  * Reworked the installer into a lightweight bootstrap that selects and runs a versioned payload, simplifying controls and validation for a faster, more reliable install experience.
  * Streamlined error/output styling and download verification to reduce noise.
* **New Features**
  * Self-contained, versioned installer option with built-in Node.js setup, conditional GPU tooling/model handling, and improved onboarding/resume and PATH guidance.
* **Tests**
  * Updated end-to-end tests to cover bootstrap paths, payload execution, and non-clone install scenarios.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
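The payload-marker rule described above could be sketched roughly as follows. Both the marker string `NEMOCLAW_INSTALL_PAYLOAD` and the function name are assumptions for illustration; the real bootstrap may use a different sentinel.

```shell
# Sketch of the bootstrap's payload selection: prefer scripts/install.sh only
# when it carries an explicit payload marker, otherwise fall back to the
# cloned root install.sh (older refs). Marker name is an assumption.
select_installer_payload() {
  local repo_dir="$1"
  if grep -q "NEMOCLAW_INSTALL_PAYLOAD" "$repo_dir/scripts/install.sh" 2>/dev/null; then
    printf '%s\n' "$repo_dir/scripts/install.sh"  # new versioned payload
  else
    printf '%s\n' "$repo_dir/install.sh"          # older refs: root installer
  fi
}
```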
Add res.setEncoding("utf8") before attaching the data listener to
prevent multibyte UTF-8 characters from being corrupted when split
across chunks.
Signed-off-by: Tommy Lin <tommylin@signalpro.com.tw>
<!-- markdownlint-disable MD041 -->
## Summary
<!-- 1-3 sentences: what this PR does and why. -->
Allow `nemoclaw <sandbox> policy-add` to be selected by number instead
of typing preset names.
```bash
root@se7en:/tmp/NemoClaw# node bin/nemoclaw.js seven-demo policy-add
Available presets:
1) ○ discord — Discord API, gateway, and CDN access
2) ○ docker — Docker Hub and NVIDIA container registry access
3) ○ huggingface — Hugging Face Hub, LFS, and Inference API access
4) ○ jira — Jira and Atlassian Cloud access
5) ● npm — npm and Yarn registry access
6) ○ outlook — Microsoft Outlook and Graph API access
7) ○ pypi — Python Package Index (PyPI) access
8) ○ slack — Slack API, Socket Mode, and webhooks access
9) ○ telegram — Telegram Bot API access
● applied, ○ not applied
Choose preset [1]: 7
Apply 'pypi' to sandbox 'seven-demo'? [Y/n]: y
✓ Policy version 3 submitted (hash: 462a3f55b4da)
✓ Policy version 3 loaded (active version: 3)
Applied preset: pypi
```
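The selection rules shown above (empty input picks the first non-applied preset; non-numeric, out-of-range, and already-applied inputs are rejected) can be sketched as a small pure function. `pickPreset` is a hypothetical helper for illustration, not the actual CLI code.

```javascript
// Sketch of the numeric preset selection described in this PR.
// Returns the chosen preset name, or null for a rejected input.
function pickPreset(presets, applied, answer) {
  const trimmed = answer.trim();
  if (trimmed === "") {
    // Empty input: default to the first preset that is not applied yet.
    const def = presets.find((p) => !applied.includes(p.name));
    return def ? def.name : null;
  }
  if (!/^\d+$/.test(trimmed)) return null;          // non-numeric
  const idx = Number(trimmed);
  if (idx < 1 || idx > presets.length) return null; // out of range
  const chosen = presets[idx - 1].name;
  return applied.includes(chosen) ? null : chosen;  // already applied
}
```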
## Related Issue
<!-- Link to the issue: Fixes #NNN or Closes #NNN. Remove this section
if none. -->
Fixes NVIDIA#1164
## Changes
<!-- Bullet list of key changes. -->
## Type of Change
<!-- Check the one that applies. -->
- [x] Code change for a new feature, bug fix, or refactor.
- [ ] Code change with doc updates.
- [ ] Doc only. Prose changes without code sample modifications.
- [ ] Doc only. Includes code sample changes.
## Testing
<!-- What testing was done? -->
- [x] `npx prek run --all-files` passes (or equivalently `make check`).
- [x] `npm test` passes.
- [ ] `make docs` builds without warnings. (for doc-only changes)
## Checklist
### General
- [x] I have read and followed the [contributing
guide](https://github.com/NVIDIA/NemoClaw/blob/main/CONTRIBUTING.md).
- [x] I have read and followed the [style
guide](https://github.com/NVIDIA/NemoClaw/blob/main/docs/CONTRIBUTING.md).
(for doc-only changes)
### Code Changes
<!-- Skip if this is a doc-only PR. -->
- [x] Formatters applied — `npx prek run --all-files` auto-fixes
formatting (or `make format` for targeted runs).
- [x] Tests added or updated for new or changed behavior.
- [x] No secrets, API keys, or credentials committed.
- [ ] Doc pages updated for any user-facing behavior changes (new
commands, changed defaults, new features, bug fixes that contradict
existing docs).
### Doc Changes
<!-- Skip if this PR has no doc changes. -->
- [ ] Follows the [style
guide](https://github.com/NVIDIA/NemoClaw/blob/main/docs/CONTRIBUTING.md).
Try running the `update-docs` agent skill to draft changes while
complying with the style guide. For example, prompt your agent with
"`/update-docs` catch up the docs for the new changes I made in this
PR."
- [ ] New pages include SPDX license header and frontmatter, if creating
a new page.
- [ ] Cross-references and links verified.
---
<!-- DCO sign-off (required by CI). Replace with your real name and
email. -->
Signed-off-by: Seven Cheng <sevenc@nvidia.com>
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Interactive preset selection menu with visual status (● applied, ○ not
applied)
* Empty input selects the default (first non-applied) when available
* Policy-add CLI now invokes the interactive selector and asks for
confirmation before applying
* **Validation**
* Rejects invalid, non-numeric, out-of-range, or already-applied
selections and prints feedback
* **Tests**
* Expanded tests covering selection behavior, defaulting, rejection
cases, list rendering, and CLI confirmation flow
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Signed-off-by: Tommy Lin <tommylin@signalpro.com.tw>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace em-dash characters in JSDoc with hyphens and add inline type assertions for server.address() to satisfy jsconfig checkJs.

Signed-off-by: Tommy Lin <tommylin@signalpro.com.tw>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed from 8889744 to 9f51c01.
Actionable comments posted: 1
🧹 Nitpick comments (4)
test/telegram-api.test.js (1)
23-48: Consider moving cert generation to beforeAll for clearer failure handling.

Running openssl at module load means import failures produce cryptic errors if openssl is unavailable. Moving this to a beforeAll hook would allow for a descriptive skip or clearer error message. That said, this is fine for the current CI environment.

♻️ Optional: Move to beforeAll with error handling

```diff
-const tmpDir = fs.mkdtempSync("/tmp/tg-api-test-");
-const keyPath = path.join(tmpDir, "key.pem");
-const certPath = path.join(tmpDir, "cert.pem");
-execFileSync(
-  "openssl",
-  [
-    "req",
-    "-x509",
-    "-newkey",
-    "rsa:2048",
-    "-keyout",
-    keyPath,
-    "-out",
-    certPath,
-    "-days",
-    "1",
-    "-nodes",
-    "-subj",
-    "/CN=localhost",
-  ],
-  { stdio: "ignore" },
-);
-const key = fs.readFileSync(keyPath);
-const cert = fs.readFileSync(certPath);
-fs.rmSync(tmpDir, { recursive: true });
+let key, cert;
+
+beforeAll(() => {
+  const tmpDir = fs.mkdtempSync("/tmp/tg-api-test-");
+  const keyPath = path.join(tmpDir, "key.pem");
+  const certPath = path.join(tmpDir, "cert.pem");
+  try {
+    execFileSync(
+      "openssl",
+      [
+        "req", "-x509", "-newkey", "rsa:2048",
+        "-keyout", keyPath, "-out", certPath,
+        "-days", "1", "-nodes", "-subj", "/CN=localhost",
+      ],
+      { stdio: "ignore" },
+    );
+    key = fs.readFileSync(keyPath);
+    cert = fs.readFileSync(certPath);
+  } finally {
+    fs.rmSync(tmpDir, { recursive: true, force: true });
+  }
+});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/telegram-api.test.js` around lines 23 - 48, The module currently runs OpenSSL and creates temp files at import time (tmpDir, keyPath, certPath, execFileSync, fs.readFileSync), which can produce cryptic failures; move this setup into a jest beforeAll hook so the cert/key generation runs at test startup instead of module load, wrap the execFileSync call in try/catch to provide a clear skip or error message if openssl is missing, and ensure cleanup (fs.rmSync) runs in an afterAll or finally block to remove tmpDir; update usages of key/cert to read from variables initialized in beforeAll.

test/policies.test.js (1)

23-62: Clean up temporary test directories in runPolicyAdd.

runPolicyAdd creates a temp directory per invocation and never removes it. This can accumulate artifacts across local/CI runs.

Proposed fix:

```diff
 function runPolicyAdd(confirmAnswer) {
   const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "nemoclaw-policy-add-"));
   const scriptPath = path.join(tmpDir, "policy-add-check.js");
   const script = String.raw`
   ...
   `;
   fs.writeFileSync(scriptPath, script);
-  return spawnSync(process.execPath, [scriptPath], {
-    cwd: REPO_ROOT,
-    encoding: "utf-8",
-    env: {
-      ...process.env,
-      HOME: tmpDir,
-    },
-  });
+  try {
+    return spawnSync(process.execPath, [scriptPath], {
+      cwd: REPO_ROOT,
+      encoding: "utf-8",
+      env: {
+        ...process.env,
+        HOME: tmpDir,
+      },
+    });
+  } finally {
+    fs.rmSync(tmpDir, { recursive: true, force: true });
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/policies.test.js` around lines 23 - 62, The runPolicyAdd helper leaves temporary directories behind; after writing scriptPath and calling spawnSync, remove the tmpDir to clean up artifacts — locate the runPolicyAdd function and add a finally/cleanup step that deletes tmpDir (e.g., using fs.rmSync or fs.rmdirSync with recursive/force) after the spawnSync completes or on error, ensuring scriptPath is removed as well; preserve returning the spawnSync result or rethrow errors as appropriate.

scripts/install.sh (2)
601-604: SHA-256 bypass when no hashing tool is available.

When neither sha256sum nor shasum is available, the script warns but proceeds without verification. This is a reasonable compatibility tradeoff, but on systems without these tools (rare), the nvm installer could be tampered with.

Consider whether this fallback behavior is acceptable for your threat model, or whether it should fail instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/install.sh` around lines 601 - 604, The current fallback sets actual_hash="$NVM_SHA256" and continues when no SHA tool is found, which bypasses integrity verification; change this behavior to abort instead: in the block that currently calls warn "No SHA-256 tool found — skipping nvm integrity check" and sets actual_hash, replace that with a failing path that uses the script's error/exit mechanism (e.g., call error "No SHA-256 tool found — cannot verify installer" and exit 1) so the installer stops rather than proceeding with actual_hash="$NVM_SHA256".
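A fail-closed variant of the hashing-tool lookup could be sketched like this. `sha256_file` is an illustrative helper name, not from the script; the point is returning an error instead of skipping verification when no tool exists.

```shell
# Fail-closed SHA-256 helper: prints the hash of a file, or errors out
# when no hashing tool is available instead of skipping verification.
sha256_file() {
  if command -v sha256sum >/dev/null 2>&1; then
    sha256sum "$1" | awk '{print $1}'
  elif command -v shasum >/dev/null 2>&1; then
    shasum -a 256 "$1" | awk '{print $1}'
  else
    echo "error: no SHA-256 tool found - cannot verify installer" >&2
    return 1
  fi
}
```

A caller would then compare the output against the pinned hash (e.g. `NVM_SHA256`) and abort on mismatch or on a non-zero return.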
362-366: Array element removal leaves empty strings in the array.

The pattern ("${_cleanup_pids[@]/$pid/}") replaces the matching element with an empty string rather than removing it. While this works for the cleanup purposes (killing an empty PID is a no-op), repeated spin() calls will accumulate empty elements.

Consider using a proper removal pattern if this becomes a performance concern:

♻️ Suggested improvement

```diff
-  _cleanup_pids=("${_cleanup_pids[@]/$pid/}")
-  _cleanup_files=("${_cleanup_files[@]/$log/}")
+  local new_pids=() new_files=()
+  for p in "${_cleanup_pids[@]:-}"; do [[ "$p" != "$pid" && -n "$p" ]] && new_pids+=("$p"); done
+  for f in "${_cleanup_files[@]:-}"; do [[ "$f" != "$log" && -n "$f" ]] && new_files+=("$f"); done
+  _cleanup_pids=("${new_pids[@]+"${new_pids[@]}"}")
+  _cleanup_files=("${new_files[@]+"${new_files[@]}"}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/install.sh` around lines 362 - 366, The current in-place substitution _cleanup_pids=("${_cleanup_pids[@]/$pid/}") and _cleanup_files=("${_cleanup_files[@]/$log/}") leaves empty strings in the arrays; change these to rebuild each array by filtering out the matching element(s) so entries are actually removed (e.g. iterate over _cleanup_pids and only append values not equal to $pid, then assign the filtered list back to _cleanup_pids; do the same for _cleanup_files and $log). Update the cleanup code that runs after spin()/the EXIT trap to use the new filtered arrays so repeated spin() calls won’t accumulate empty entries.
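The rebuild-and-filter approach the prompt describes can be shown as a standalone sketch (function name is illustrative): copy every element that does not match, then assign the filtered list back, so no empty slots remain.

```shell
# Demonstration of filtering an element out of a bash array without
# leaving empty entries behind. Prints the surviving elements, one per line.
remove_from_array_demo() {
  local target="$1"
  shift
  local kept=()
  local item
  for item in "$@"; do
    if [ "$item" != "$target" ]; then
      kept+=("$item")
    fi
  done
  printf '%s\n' "${kept[@]}"
}
```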
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/install.sh`:
- Around line 666-668: The Ollama installer is being downloaded and executed
without cryptographic verification; update the install flow that fetches
"$tmpdir/install_ollama.sh" to verify a pinned SHA-256 (like the nvm pattern)
before executing: add an expected hash variable (e.g., OLLAMA_SHA256), compute
the downloaded file's SHA-256 using sha256sum or shasum -a 256, compare it to
OLLAMA_SHA256, and fail with a clear error if it doesn't match, then call
verify_downloaded_script and sh only after the hash check; apply the same change
to the other Ollama download block referenced in the comment.
---
Nitpick comments:
In `@scripts/install.sh`:
- Around line 601-604: The current fallback sets actual_hash="$NVM_SHA256" and
continues when no SHA tool is found, which bypasses integrity verification;
change this behavior to abort instead: in the block that currently calls warn
"No SHA-256 tool found — skipping nvm integrity check" and sets actual_hash,
replace that with a failing path that uses the script's error/exit mechanism
(e.g., call error "No SHA-256 tool found — cannot verify installer" and exit 1)
so the installer stops rather than proceeding with actual_hash="$NVM_SHA256".
- Around line 362-366: The current in-place substitution
_cleanup_pids=("${_cleanup_pids[@]/$pid/}") and
_cleanup_files=("${_cleanup_files[@]/$log/}") leaves empty strings in the
arrays; change these to rebuild each array by filtering out the matching
element(s) so entries are actually removed (e.g. iterate over _cleanup_pids and
only append values not equal to $pid, then assign the filtered list back to
_cleanup_pids; do the same for _cleanup_files and $log). Update the cleanup code
that runs after spin()/the EXIT trap to use the new filtered arrays so repeated
spin() calls won’t accumulate empty entries.
In `@test/policies.test.js`:
- Around line 23-62: The runPolicyAdd helper leaves temporary directories
behind; after writing scriptPath and calling spawnSync, remove the tmpDir to
clean up artifacts — locate the runPolicyAdd function and add a finally/cleanup
step that deletes tmpDir (e.g., using fs.rmSync or fs.rmdirSync with
recursive/force) after the spawnSync completes or on error, ensuring scriptPath
is removed as well; preserve returning the spawnSync result or rethrow errors as
appropriate.
In `@test/telegram-api.test.js`:
- Around line 23-48: The module currently runs OpenSSL and creates temp files at
import time (tmpDir, keyPath, certPath, execFileSync, fs.readFileSync), which
can produce cryptic failures; move this setup into a jest beforeAll hook so the
cert/key generation runs at test startup instead of module load, wrap the
execFileSync call in try/catch to provide a clear skip or error message if
openssl is missing, and ensure cleanup (fs.rmSync) runs in an afterAll or
finally block to remove tmpDir; update usages of key/cert to read from variables
initialized in beforeAll.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: cccfb5d1-8d65-404a-9b01-b59f26a99b11
📒 Files selected for processing (8)
- bin/lib/policies.js
- bin/lib/telegram-api.js
- bin/nemoclaw.js
- install.sh
- scripts/install.sh
- test/install-preflight.test.js
- test/policies.test.js
- test/telegram-api.test.js
🚧 Files skipped from review as they are similar to previous changes (1)
- bin/lib/telegram-api.js
```bash
curl -fsSL https://ollama.com/install.sh -o "$tmpdir/install_ollama.sh"
verify_downloaded_script "$tmpdir/install_ollama.sh" "Ollama"
sh "$tmpdir/install_ollama.sh"
```
Ollama installer downloaded without cryptographic verification.
Unlike the nvm installer, which verifies against a pinned SHA-256 hash, the Ollama installer is downloaded and executed after only checking for a valid shebang. This creates a supply-chain risk: a compromised ollama.com could serve a malicious installer.
Consider pinning the Ollama installer hash similar to nvm, or at minimum documenting this risk for users.
Also applies to: 679-681
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/install.sh` around lines 666 - 668, The Ollama installer is being
downloaded and executed without cryptographic verification; update the install
flow that fetches "$tmpdir/install_ollama.sh" to verify a pinned SHA-256 (like
the nvm pattern) before executing: add an expected hash variable (e.g.,
OLLAMA_SHA256), compute the downloaded file's SHA-256 using sha256sum or shasum
-a 256, compare it to OLLAMA_SHA256, and fail with a clear error if it doesn't
match, then call verify_downloaded_script and sh only after the hash check;
apply the same change to the other Ollama download block referenced in the
comment.
Actionable comments posted: 2
🧹 Nitpick comments (1)
scripts/install.sh (1)
364-365: Array element removal pattern does not actually remove elements.

The pattern ("${_cleanup_pids[@]/$pid/}") replaces occurrences of $pid within each element with empty string but leaves an empty element in the array. This can cause issues if the arrays grow during long-running installs.

🔧 Proposed fix for proper element removal

```diff
-  _cleanup_pids=("${_cleanup_pids[@]/$pid/}")
-  _cleanup_files=("${_cleanup_files[@]/$log/}")
+  local new_pids=() new_files=()
+  for p in "${_cleanup_pids[@]:-}"; do [[ "$p" != "$pid" ]] && new_pids+=("$p"); done
+  for f in "${_cleanup_files[@]:-}"; do [[ "$f" != "$log" ]] && new_files+=("$f"); done
+  _cleanup_pids=("${new_pids[@]}")
+  _cleanup_files=("${new_files[@]}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/install.sh` around lines 364 - 365, The current removal lines for _cleanup_pids and _cleanup_files use pattern substitution which leaves empty elements; instead rebuild each array by iterating its elements and copying only those not equal to the target (compare element != "$pid" for _cleanup_pids and element != "$log" for _cleanup_files), then assign the filtered list back to the original arrays (_cleanup_pids and _cleanup_files) so matching items are removed cleanly without leaving empty entries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/install.sh`:
- Around line 911-927: The inline Node.js validator in scripts/install.sh
currently only checks data.resumable and data.status and needs to also verify
the session schema version matches the expected SESSION_VERSION; update the
inline script that reads "${HOME}/.nemoclaw/onboard-session.json" to parse
data.version and require it equals 1 (the SESSION_VERSION from
src/lib/onboard-session.ts) before exiting 0, so only sessions with version ===
1 and resumable and non-"complete" status will cause --resume to be added.
In `@test/policies.test.js`:
- Around line 22-62: The runPolicyAdd helper creates a temp dir and calls
spawnSync without a timeout and never cleans up the temp dir; update
runPolicyAdd to pass a reasonable timeout (e.g., 5000 ms) in the spawnSync
options (the call that uses process.execPath and scriptPath) and ensure the
tmpDir (and the script file) are removed after spawnSync completes (use
fs.rmSync or fs.unlinkSync/fs.rmdirSync in a finally block) so the temp
directory is always cleaned up even on timeout or error; reference runPolicyAdd,
tmpDir, scriptPath, and the spawnSync invocation when making the change.
---
Nitpick comments:
In `@scripts/install.sh`:
- Around line 364-365: The current removal lines for _cleanup_pids and
_cleanup_files use pattern substitution which leaves empty elements; instead
rebuild each array by iterating its elements and copying only those not equal to
the target (compare element != "$pid" for _cleanup_pids and element != "$log"
for _cleanup_files), then assign the filtered list back to the original arrays
(_cleanup_pids and _cleanup_files) so matching items are removed cleanly without
leaving empty entries.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5d4958f4-d98c-4ab2-9c1e-c39abc4ad494
📒 Files selected for processing (8)
- bin/lib/policies.js
- bin/lib/telegram-api.js
- bin/nemoclaw.js
- install.sh
- scripts/install.sh
- test/install-preflight.test.js
- test/policies.test.js
- test/telegram-api.test.js
✅ Files skipped from review due to trivial changes (2)
- test/telegram-api.test.js
- test/install-preflight.test.js
🚧 Files skipped from review as they are similar to previous changes (2)
- bin/nemoclaw.js
- bin/lib/telegram-api.js
```bash
if command_exists node && [[ -f "${HOME}/.nemoclaw/onboard-session.json" ]]; then
  if node -e '
const fs = require("fs");
const file = process.argv[1];
try {
  const data = JSON.parse(fs.readFileSync(file, "utf8"));
  const resumable = data && data.resumable !== false;
  const status = data && data.status;
  process.exit(resumable && status && status !== "complete" ? 0 : 1);
} catch {
  process.exit(1);
}
' "${HOME}/.nemoclaw/onboard-session.json"; then
    info "Found an interrupted onboarding session — resuming it."
    onboard_cmd+=(--resume)
  fi
fi
```
Inline session validation does not check schema version.
The inline Node.js code checks resumable and status but does not validate data.version === 1 (the SESSION_VERSION constant from src/lib/onboard-session.ts). If the session schema changes in a future version, this could incorrectly detect an incompatible session as resumable.
The CLI's --resume handler will perform proper validation, so this only affects the decision to pass --resume, not data integrity.
🛡️ Proposed fix to add version check

```diff
   const data = JSON.parse(fs.readFileSync(file, "utf8"));
-  const resumable = data && data.resumable !== false;
+  const resumable = data && data.version === 1 && data.resumable !== false;
   const status = data && data.status;
```
const status = data && data.status;📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
if command_exists node && [[ -f "${HOME}/.nemoclaw/onboard-session.json" ]]; then
  if node -e '
const fs = require("fs");
const file = process.argv[1];
try {
  const data = JSON.parse(fs.readFileSync(file, "utf8"));
  const resumable = data && data.version === 1 && data.resumable !== false;
  const status = data && data.status;
  process.exit(resumable && status && status !== "complete" ? 0 : 1);
} catch {
  process.exit(1);
}
' "${HOME}/.nemoclaw/onboard-session.json"; then
    info "Found an interrupted onboarding session — resuming it."
    onboard_cmd+=(--resume)
  fi
fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/install.sh` around lines 911 - 927, The inline Node.js validator in
scripts/install.sh currently only checks data.resumable and data.status and
needs to also verify the session schema version matches the expected
SESSION_VERSION; update the inline script that reads
"${HOME}/.nemoclaw/onboard-session.json" to parse data.version and require it
equals 1 (the SESSION_VERSION from src/lib/onboard-session.ts) before exiting 0,
so only sessions with version === 1 and resumable and non-"complete" status will
cause --resume to be added.
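The version-gated check can be exercised in isolation before committing it. Below is a sketch that runs the same inline validator against sample session files; it assumes `node` is on `PATH`, and the file names are illustrative:

```shell
# Write sample session files and run the inline validator against each;
# exit 0 means "resumable" (pass --resume), exit 1 means "do not resume".
tmp=$(mktemp -d)
printf '{"version":1,"resumable":true,"status":"telegram"}' > "$tmp/ok.json"
printf '{"version":2,"resumable":true,"status":"telegram"}' > "$tmp/wrong-version.json"
check() {
  node -e '
    const fs = require("fs");
    const file = process.argv[1];
    try {
      const data = JSON.parse(fs.readFileSync(file, "utf8"));
      const resumable = data && data.version === 1 && data.resumable !== false;
      const status = data && data.status;
      process.exit(resumable && status && status !== "complete" ? 0 : 1);
    } catch {
      process.exit(1);
    }
  ' "$1"
}
check "$tmp/ok.json" && echo "ok.json: resumable"
check "$tmp/wrong-version.json" || echo "wrong-version.json: rejected"
rm -rf "$tmp"
```

Corrupt JSON falls into the `catch` and also exits 1, so a truncated session file can never trigger `--resume`.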
```js
function runPolicyAdd(confirmAnswer) {
  const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "nemoclaw-policy-add-"));
  const scriptPath = path.join(tmpDir, "policy-add-check.js");
  const script = String.raw`
    const registry = require(${REGISTRY_PATH});
    const policies = require(${POLICIES_PATH});
    const credentials = require(${CREDENTIALS_PATH});
    const calls = [];
    policies.selectFromList = async () => "pypi";
    credentials.prompt = async (message) => {
      calls.push({ type: "prompt", message });
      return ${JSON.stringify(confirmAnswer)};
    };
    registry.getSandbox = (name) => (name === "test-sandbox" ? { name } : null);
    registry.listSandboxes = () => ({ sandboxes: [{ name: "test-sandbox" }] });
    policies.listPresets = () => [
      { name: "npm", description: "npm and Yarn registry access" },
      { name: "pypi", description: "Python Package Index (PyPI) access" },
    ];
    policies.getAppliedPresets = () => [];
    policies.applyPreset = (sandboxName, presetName) => {
      calls.push({ type: "apply", sandboxName, presetName });
    };
    process.argv = ["node", "nemoclaw.js", "test-sandbox", "policy-add"];
    require(${CLI_PATH});
    setImmediate(() => {
      process.stdout.write(JSON.stringify(calls));
    });
  `;

  fs.writeFileSync(scriptPath, script);

  return spawnSync(process.execPath, [scriptPath], {
    cwd: REPO_ROOT,
    encoding: "utf-8",
    env: {
      ...process.env,
      HOME: tmpDir,
    },
  });
}
```
Add timeout and temp-dir cleanup in runPolicyAdd.
Line [54] calls spawnSync without a timeout, so a CLI hang can stall the entire test run, and the temp dir created at Line [23] is never cleaned up.
Proposed patch

```diff
 function runPolicyAdd(confirmAnswer) {
   const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "nemoclaw-policy-add-"));
   const scriptPath = path.join(tmpDir, "policy-add-check.js");
   const script = String.raw`
   ...
   `;
   fs.writeFileSync(scriptPath, script);
-
-  return spawnSync(process.execPath, [scriptPath], {
-    cwd: REPO_ROOT,
-    encoding: "utf-8",
-    env: {
-      ...process.env,
-      HOME: tmpDir,
-    },
-  });
+  try {
+    return spawnSync(process.execPath, [scriptPath], {
+      cwd: REPO_ROOT,
+      encoding: "utf-8",
+      timeout: 5000,
+      env: {
+        ...process.env,
+        HOME: tmpDir,
+      },
+    });
+  } finally {
+    fs.rmSync(tmpDir, { recursive: true, force: true });
+  }
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
function runPolicyAdd(confirmAnswer) {
  const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "nemoclaw-policy-add-"));
  const scriptPath = path.join(tmpDir, "policy-add-check.js");
  const script = String.raw`
    const registry = require(${REGISTRY_PATH});
    const policies = require(${POLICIES_PATH});
    const credentials = require(${CREDENTIALS_PATH});
    const calls = [];
    policies.selectFromList = async () => "pypi";
    credentials.prompt = async (message) => {
      calls.push({ type: "prompt", message });
      return ${JSON.stringify(confirmAnswer)};
    };
    registry.getSandbox = (name) => (name === "test-sandbox" ? { name } : null);
    registry.listSandboxes = () => ({ sandboxes: [{ name: "test-sandbox" }] });
    policies.listPresets = () => [
      { name: "npm", description: "npm and Yarn registry access" },
      { name: "pypi", description: "Python Package Index (PyPI) access" },
    ];
    policies.getAppliedPresets = () => [];
    policies.applyPreset = (sandboxName, presetName) => {
      calls.push({ type: "apply", sandboxName, presetName });
    };
    process.argv = ["node", "nemoclaw.js", "test-sandbox", "policy-add"];
    require(${CLI_PATH});
    setImmediate(() => {
      process.stdout.write(JSON.stringify(calls));
    });
  `;
  fs.writeFileSync(scriptPath, script);
  try {
    return spawnSync(process.execPath, [scriptPath], {
      cwd: REPO_ROOT,
      encoding: "utf-8",
      timeout: 5000,
      env: {
        ...process.env,
        HOME: tmpDir,
      },
    });
  } finally {
    fs.rmSync(tmpDir, { recursive: true, force: true });
  }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/policies.test.js` around lines 22 - 62, The runPolicyAdd helper creates
a temp dir and calls spawnSync without a timeout and never cleans up the temp
dir; update runPolicyAdd to pass a reasonable timeout (e.g., 5000 ms) in the
spawnSync options (the call that uses process.execPath and scriptPath) and
ensure the tmpDir (and the script file) are removed after spawnSync completes
(use fs.rmSync or fs.unlinkSync/fs.rmdirSync in a finally block) so the temp
directory is always cleaned up even on timeout or error; reference runPolicyAdd,
tmpDir, scriptPath, and the spawnSync invocation when making the change.
Closing — the host-side Telegram bridge (
Problem

`tgApi()` in `scripts/telegram-bridge.js` has no client-side socket timeout. When the TCP connection silently dies (e.g. NAT timeout, ISP routing change, network blip during the 30-second long-poll window), the HTTPS request hangs forever and the poll loop never recovers — the bot stops responding until the process is restarted.

Root cause: `https.request()` is called without a `timeout` option, and there is no `timeout` event handler. A dead TCP connection (no RST/FIN received) causes the request to wait indefinitely.

Solution

Bug fix — add a 60s socket timeout with `req.destroy()` on the timeout event. This is longer than Telegram's 30s server-side long-poll, so normal responses are unaffected. The existing `poll()` catch block already continues to the next iteration, so the loop self-heals.

Extract shared module — move `tgApi` from an inline closure in `telegram-bridge.js` into `bin/lib/telegram-api.js`. The token is now a parameter instead of being captured from module scope. This lets tests import the real production function instead of duplicating it.

Changes

- `bin/lib/telegram-api.js` — exports `tgApi(token, method, body, opts)` with the timeout, plus opts for test overrides
- `scripts/telegram-bridge.js` — replaced the inline `tgApi` with a one-line wrapper calling the shared module
- `test/telegram-api.test.js` — tests import the real `tgApi`, verified against local TLS servers

Test plan
Signed-off-by: Tommy Lin tommylin@signalpro.com.tw
Summary by CodeRabbit
Refactor
New Features
Tests