Commit 50dc270

fix: auto-install node-llama-cpp for local embeddings (memory_search)

OpenClaw's default memorySearch.provider is 'local', which requires node-llama-cpp. This optional peer dependency was never auto-installed, causing memory_search to silently fail for every EverClaw user.

Changes:
- diagnose.sh: Add check A10 (node-llama-cpp detection)
- setup.mjs: Add Stage 5 (auto-install with --skip-embeddings flag)
- install.sh: Add node-llama-cpp check on every install/upgrade
- Dockerfile: Install node-llama-cpp@3.18.1 in production image
- CHANGELOG.md, SKILL.md, README.md: Documentation updates

Detection uses NODE_PATH + require.resolve (CJS), since ESM import() ignores NODE_PATH. All installs are non-blocking with graceful fallback.

1 parent 074c150 commit 50dc270

File tree

7 files changed: +114 −0 lines changed

CHANGELOG.md

Lines changed: 17 additions & 0 deletions

@@ -2,6 +2,23 @@

All notable changes to EverClaw are documented here.

## [UNRELEASED] — Local Embeddings Fix (node-llama-cpp)

### Fixed

- **Silent `memory_search` failure on all fresh installs** — OpenClaw's default `memorySearch.provider` is `"local"`, which requires `node-llama-cpp`. This optional peer dependency was never auto-installed, causing `memory_search` to silently fail for every EverClaw user. It is now detected and installed automatically across all install paths (setup, install script, Docker).

### Added

- **`diagnose.sh` check A10** — New diagnostic check detects whether `node-llama-cpp` is installed and reports clear fix instructions if it is missing.
- **`setup.mjs` Stage 5 (Memory Search)** — Auto-installs `node-llama-cpp@3.18.1` globally during setup, showing real-time progress. Post-install verification confirms the module loads correctly.
- **`--skip-embeddings` flag** — New `setup.mjs` flag to skip the node-llama-cpp install (for headless/minimal installs).
- **`install.sh` node-llama-cpp check** — Runs on every install/upgrade, catching existing users who never re-run setup.
- **Dockerfile node-llama-cpp install** — Production Docker image now includes `node-llama-cpp@3.18.1` for container users.

### Technical Notes

- Detection uses `NODE_PATH="$(npm root -g)"` with `require.resolve()` (CJS). ESM `import()` ignores `NODE_PATH` in Node.js, so `require.resolve` is the reliable cross-platform approach for detecting globally installed modules.
- All installs are non-blocking: on failure, memory search gracefully degrades to a remote embedding provider or is disabled.
- Version pinned to `node-llama-cpp@3.18.1` (latest stable, matching OpenClaw's peer dependency).

## [2026.4.7.0355] - 2026-04-07 — Morpheus Agent Flavor + Docker Matrix Build

### Added

Dockerfile

Lines changed: 6 additions & 0 deletions

@@ -105,6 +105,12 @@ WORKDIR /app

# Copy built OpenClaw from stage 1
COPY --from=openclaw-builder --chown=node:node /openclaw /app

# Install node-llama-cpp for local embeddings (optional peer dep of OpenClaw).
# Without this, memory_search silently fails on all fresh installs.
# Uses --no-save to avoid modifying package.json; || true so build doesn't
# fail if native compilation fails on some architectures.
RUN cd /app && npm install node-llama-cpp@3.18.1 --no-save 2>&1 || true

# Copy EverClaw skill into the workspace
COPY --from=openclaw-builder --chown=node:node /everclaw-skill /home/node/.openclaw/workspace/skills/everclaw

README.md

Lines changed: 1 addition & 0 deletions

@@ -315,6 +315,7 @@ When a session ends, your MOR comes back. Open a new session with the same token

- **ETH or USDC on Base** — to swap for MOR tokens
- **macOS or Linux** — macOS Keychain or libsecret for native key storage; encrypted file fallback works everywhere
- **age, zstd, jq** — for backup/restore features (auto-installed by `install.sh`)
- **node-llama-cpp** — for local memory search embeddings (auto-installed by `setup.mjs` and `install.sh`)

That's it. No external accounts. No API keys. No subscriptions.

SKILL.md

Lines changed: 1 addition & 0 deletions

@@ -101,6 +101,7 @@ node ~/.openclaw/workspace/skills/everclaw/scripts/setup.mjs --key <API_KEY> --a

| `--restart` | Restart OpenClaw [REDACTED] after apply |
| `--with-ollama` | Also setup local Ollama inference as final fallback |
| `--ollama-model <model>` | Override auto-detected Ollama model (e.g. `gemma4:26b`) |
| `--skip-embeddings` | Skip node-llama-cpp install (local embeddings) |
| `--security-tier <tier>` | Set security tier: `low`, `recommended`, `maximum` |
| `--no-security` | Skip security tier setup |

scripts/diagnose.sh

Lines changed: 12 additions & 0 deletions

@@ -293,6 +293,18 @@ print(count)

  elif [[ "$non_streaming" -eq 0 ]]; then
    pass "All models have streaming enabled"
  fi

  # A10: Is node-llama-cpp installed for local embeddings (memory_search)?
  if command -v node &>/dev/null; then
    if NODE_PATH="$(npm root -g 2>/dev/null)" node -e "try { require.resolve('node-llama-cpp'); process.exit(0) } catch { process.exit(1) }" 2>/dev/null; then
      pass "Local embeddings available (node-llama-cpp installed)"
    else
      warn "node-llama-cpp not installed — memory_search (local embeddings) will not work"
      fix "Install it: npm install -g node-llama-cpp@3.18.1"
      fix "Or set memorySearch.provider to 'openai', 'gemini', or 'voyage' in openclaw.json"
      fix "Memory search works without this, but only with a remote embedding provider"
    fi
  fi
}

# ═════════════════════════════════════════════════════════════════════════════
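The remote-provider fallback mentioned in the fix hints above would be configured in `openclaw.json`. A minimal sketch of that fragment, assuming the `memorySearch.provider` key named in the hint text (the rest of the file is omitted, and the exact schema is not confirmed here):

```json
{
  "memorySearch": {
    "provider": "openai"
  }
}
```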

scripts/install.sh

Lines changed: 25 additions & 0 deletions

@@ -424,6 +424,31 @@ else

  echo "⚠️ npm not found. Install Node.js, then run: cd $EVERCLAW_ROOT && npm install"
fi

# --- Ensure node-llama-cpp for OpenClaw local embeddings ---
if command -v npm &>/dev/null && command -v node &>/dev/null; then
  echo "🧠 Checking local embeddings (node-llama-cpp)..."
  NPM_GLOBAL_ROOT="$(npm root -g 2>/dev/null)"
  if NODE_PATH="$NPM_GLOBAL_ROOT" node -e "try { require.resolve('node-llama-cpp'); process.exit(0) } catch { process.exit(1) }" 2>/dev/null; then
    echo "  ✅ node-llama-cpp already installed"
  else
    echo "  📦 Installing local embedding engine (node-llama-cpp@3.18.1)..."
    echo "     One-time install, ~30-90s. May need build tools on some systems."
    if npm install -g node-llama-cpp@3.18.1; then
      # Post-install verification
      if NODE_PATH="$NPM_GLOBAL_ROOT" node -e "try { require.resolve('node-llama-cpp'); process.exit(0) } catch { process.exit(1) }" 2>/dev/null; then
        echo "  ✅ node-llama-cpp installed — local memory search enabled"
      else
        echo "  ⚠️ node-llama-cpp installed but import failed"
        echo "     You may need build tools: Xcode CLT (macOS) or build-essential (Linux)"
      fi
    else
      echo "  ⚠️ node-llama-cpp install failed (not critical)"
      echo "     Install manually: npm install -g node-llama-cpp@3.18.1"
      echo "     If build fails, you may need: Xcode CLT (macOS) or build-essential + cmake (Linux)"
    fi
  fi
fi

# --- Bootstrap EverClaw Key (GLM-5 Starter Access) ---
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

scripts/setup.mjs

Lines changed: 52 additions & 0 deletions

@@ -521,6 +521,7 @@ Flags:

  --template          Override OS auto-detection
  --with-ollama       Also setup local Ollama inference fallback
  --ollama-model      Override auto-detected Ollama model (e.g. gemma4:26b)
  --skip-embeddings   Skip node-llama-cpp install (local embeddings)
  --security-tier     Set security tier (low|recommended|maximum)
  --no-security       Skip security tier prompt
`);

@@ -549,6 +550,7 @@ const withOllama = args.includes('--with-ollama');

const ollamaModel = getArg('--ollama-model');
const securityTierArg = getArg('--security-tier');
const noSecurity = args.includes('--no-security');
const skipEmbeddings = args.includes('--skip-embeddings');

// ─── Stage 1: Template Discovery ───────────────────────────────

@@ -776,6 +778,56 @@ if (applyMode) {

    console.log('   Or re-run with --restart to do it automatically.\n');
  }
}

// ─── Stage 5: Memory Search (Local Embeddings) ────────────────

if (skipEmbeddings) {
  console.log('\n ─── Memory Search (Local Embeddings) ────────────────────');
  console.log(' ⏭️ Skipped (--skip-embeddings)');
} else {
  console.log('\n ─── Memory Search (Local Embeddings) ────────────────────');
  let npmGlobalRoot = '';
  try {
    npmGlobalRoot = execSync('npm root -g', { encoding: 'utf-8', timeout: 5000 }).trim();
  } catch {
    // npm root -g failed — fall through to install path (safe)
  }
  try {
    execSync('node -e "try { require.resolve(\'node-llama-cpp\'); process.exit(0) } catch { process.exit(1) }"', {
      timeout: 10000,
      stdio: 'pipe',
      env: { ...process.env, NODE_PATH: npmGlobalRoot },
    });
    console.log(' ✅ node-llama-cpp already installed');
  } catch {
    console.log(' 📦 Installing local embedding engine (node-llama-cpp@3.18.1)...');
    console.log('    One-time install, ~30-90s. May need build tools on some systems.');
    try {
      execSync('npm install -g node-llama-cpp@3.18.1', {
        timeout: 120000,
        stdio: 'inherit',
      });
      // Post-install verification
      try {
        execSync('node -e "try { require.resolve(\'node-llama-cpp\'); process.exit(0) } catch { process.exit(1) }"', {
          timeout: 10000,
          stdio: 'pipe',
          env: { ...process.env, NODE_PATH: npmGlobalRoot },
        });
        console.log(' ✅ node-llama-cpp installed — local memory search enabled');
      } catch {
        console.log(' ⚠️ node-llama-cpp installed but import failed');
        console.log('    You may need build tools: Xcode CLT (macOS) or build-essential (Linux)');
        console.log('    Memory search will fall back to remote provider if configured');
      }
    } catch {
      console.log(' ⚠️ node-llama-cpp install failed (not critical)');
      console.log('    Memory search will use remote provider or be unavailable');
      console.log('    Install manually: npm install -g node-llama-cpp@3.18.1');
      console.log('    If build fails, you may need: Xcode CLT (macOS) or build-essential + cmake (Linux)');
    }
  }
}

} else {
  // Dry-run — but still allow --test
  if (testMode) {

0 commit comments