[NemoClaw][All platforms][Regression] None policy added but NPM install is able to execute #1458

@zNeill

Description

[Description]
Onboarded a sandbox without applying any policy presets; npm install still executes successfully, while pip install is rejected as expected.

[Environment]

  • Device: ubuntu 22.04
  • Node.js: v22.22.2
  • npm: 10.9.7
  • Docker: Docker Engine 29.1.3
  • OpenShell CLI: 0.0.21
  • NemoClaw: 0.0.4
  • OpenClaw: 2026.3.11 (29dc654)

[Steps to Reproduce]

local-lynnh@2u1g-b650-0235:~$ nemoclaw onboard

  NemoClaw Onboarding
  ===================

  [1/7] Preflight checks
  ──────────────────────────────────────────────────
  ✓ Docker is running
  ✓ Container runtime: docker
  ✓ openshell CLI: openshell 0.0.21
  ✓ Port 8080 already owned by healthy NemoClaw runtime (OpenShell gateway)
  ✓ Port 18789 already owned by healthy NemoClaw runtime (NemoClaw dashboard)
  ⓘ No GPU detected — will use cloud inference
  ✓ Memory OK: 127984 MB RAM + 0 MB swap

  [2/7] Starting OpenShell gateway
  ──────────────────────────────────────────────────
  [reuse] Skipping gateway (running)
  Reusing healthy NemoClaw gateway.

  [3/7] Configuring inference (NIM)
  ──────────────────────────────────────────────────

  Inference options:
    1) NVIDIA Endpoints
    2) OpenAI
    3) Other OpenAI-compatible endpoint
    4) Anthropic
    5) Other Anthropic-compatible endpoint
    6) Google Gemini

  Choose [1]: 1

  Cloud models:
    1) Nemotron 3 Super 120B (nvidia/nemotron-3-super-120b-a12b)
    2) Kimi K2.5 (moonshotai/kimi-k2.5)
    3) GLM-5 (z-ai/glm5)
    4) MiniMax M2.5 (minimaxai/minimax-m2.5)
    5) GPT-OSS 120B (openai/gpt-oss-120b)
    6) Other...

  Choose model [1]: 1
  Responses API available — OpenClaw will use openai-responses.
  Using NVIDIA Endpoints with model: nvidia/nemotron-3-super-120b-a12b

  [4/7] Setting up inference provider
  ──────────────────────────────────────────────────
✓ Active gateway set to 'nemoclaw'
Error:   × status: AlreadyExists, message: "provider already exists", details: [],
  │ metadata: MetadataMap { headers: {"content-type": "application/grpc",
  │ "date": "Fri, 03 Apr 2026 09:54:25 GMT"} }

✓ Updated provider nvidia-prod
✓ Updated provider nvidia-prod
Gateway inference configured:

  Route: inference.local
  Provider: nvidia-prod
  Model: nvidia/nemotron-3-super-120b-a12b
  Version: 7
  Timeout: 60s (default)
  ✓ Inference route set: nvidia-prod / nvidia/nemotron-3-super-120b-a12b

  [5/7] Creating sandbox
  ──────────────────────────────────────────────────
  Sandbox name (lowercase, numbers, hyphens) [my-assistant]: test0
  Creating sandbox 'test0' (this takes a few minutes on first run)...
  Building image openshell/sandbox-from:1775210070 from /tmp/nemoclaw-build-iKW07C/Dockerfile
  Step 1/36 : ARG BASE_IMAGE=ghcr.io/nvidia/nemoclaw/sandbox-base:latest
  Step 2/36 : FROM node:22-slim@sha256:4f77a690f2f8946ab16fe1e791a3ac0667ae1c3575c3e4d0d4589e9ed5bfaf3d AS builder
  Step 3/36 : COPY nemoclaw/package.json nemoclaw/tsconfig.json /opt/nemoclaw/
  Step 4/36 : COPY nemoclaw/src/ /opt/nemoclaw/src/
  Step 5/36 : WORKDIR /opt/nemoclaw
  Step 6/36 : RUN npm install && npm run build
  Step 7/36 : FROM ${BASE_IMAGE}
  Step 8/36 : RUN (apt-get remove --purge -y gcc gcc-12 g++ g++-12 cpp cpp-12 make         netcat-openbsd netcat-traditional ncat 2>/dev/null || true)     && apt-get autoremove --purge -y     && rm -rf /var/lib/apt/lists/*
  Step 9/36 : COPY --from=builder /opt/nemoclaw/dist/ /opt/nemoclaw/dist/
  Step 10/36 : COPY nemoclaw/openclaw.plugin.json /opt/nemoclaw/
  Step 11/36 : COPY nemoclaw/package.json nemoclaw/package-lock.json /opt/nemoclaw/
  Step 12/36 : COPY nemoclaw-blueprint/ /opt/nemoclaw-blueprint/
  Step 13/36 : WORKDIR /opt/nemoclaw
  Step 14/36 : RUN npm ci --omit=dev
  Step 15/36 : RUN mkdir -p /sandbox/.nemoclaw/blueprints/0.1.0     && cp -r /opt/nemoclaw-blueprint/* /sandbox/.nemoclaw/blueprints/0.1.0/
  Step 16/36 : COPY scripts/nemoclaw-start.sh /usr/local/bin/nemoclaw-start
  Step 17/36 : RUN chmod 755 /usr/local/bin/nemoclaw-start
  Step 18/36 : ARG NEMOCLAW_MODEL=nvidia/nemotron-3-super-120b-a12b
  Step 19/36 : ARG NEMOCLAW_PROVIDER_KEY=inference
  Step 20/36 : ARG NEMOCLAW_PRIMARY_MODEL_REF=inference/nvidia/nemotron-3-super-120b-a12b
  Step 21/36 : ARG CHAT_UI_URL=http://127.0.0.1:18789
  Step 22/36 : ARG NEMOCLAW_INFERENCE_BASE_URL=https://inference.local/v1
  Step 23/36 : ARG NEMOCLAW_INFERENCE_API=openai-responses
  Step 24/36 : ARG NEMOCLAW_INFERENCE_COMPAT_B64=e30=
  Step 25/36 : ARG NEMOCLAW_DISABLE_DEVICE_AUTH=1
  Step 26/36 : ARG NEMOCLAW_BUILD_ID=1775210070880
  Step 27/36 : ENV NEMOCLAW_MODEL=${NEMOCLAW_MODEL}     NEMOCLAW_PROVIDER_KEY=${NEMOCLAW_PROVIDER_KEY}     NEMOCLAW_PRIMARY_MODEL_REF=${NEMOCLAW_PRIMARY_MODEL_REF}     CHAT_UI_URL=${CHAT_UI_URL}     NEMOCLAW_INFERENCE_BASE_URL=${NEMOCLAW_INFERENCE_BASE_URL}     NEMOCLAW_INFERENCE_API=${NEMOCLAW_INFERENCE_API}     NEMOCLAW_INFERENCE_COMPAT_B64=${NEMOCLAW_INFERENCE_COMPAT_B64}     NEMOCLAW_DISABLE_DEVICE_AUTH=${NEMOCLAW_DISABLE_DEVICE_AUTH}
  Step 28/36 : WORKDIR /sandbox
  Step 29/36 : USER sandbox
  Step 30/36 : RUN python3 -c "import base64, json, os, secrets; from urllib.parse import urlparse; model = os.environ['NEMOCLAW_MODEL']; chat_ui_url = os.environ['CHAT_UI_URL']; provider_key = os.environ['NEMOCLAW_PROVIDER_KEY']; primary_model_ref = os.environ['NEMOCLAW_PRIMARY_MODEL_REF']; inference_base_url = os.environ['NEMOCLAW_INFERENCE_BASE_URL']; inference_api = os.environ['NEMOCLAW_INFERENCE_API']; inference_compat = json.loads(base64.b64decode(os.environ['NEMOCLAW_INFERENCE_COMPAT_B64']).decode('utf-8')); parsed = urlparse(chat_ui_url); chat_origin = f'{parsed.scheme}://{parsed.netloc}' if parsed.scheme and parsed.netloc else 'http://127.0.0.1:18789'; origins = ['http://127.0.0.1:18789']; origins = list(dict.fromkeys(origins + [chat_origin])); disable_device_auth = os.environ.get('NEMOCLAW_DISABLE_DEVICE_AUTH', '') == '1'; allow_insecure = parsed.scheme == 'http'; providers = {     provider_key: {         'baseUrl': inference_base_url,         'apiKey': 'unused',         'api': inference_api,         'models': [{**({'compat': inference_compat} if inference_compat else {}), 'id': model, 'name': primary_model_ref, 'reasoning': False, 'input': ['text'], 'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0}, 'contextWindow': 131072, 'maxTokens': 4096}]     } }; config = {     'agents': {'defaults': {'model': {'primary': primary_model_ref}}},     'models': {'mode': 'merge', 'providers': providers},     'channels': {'defaults': {'configWrites': False}},     'gateway': {         'mode': 'local',         'controlUi': {             'allowInsecureAuth': allow_insecure,             'dangerouslyDisableDeviceAuth': disable_device_auth,             'allowedOrigins': origins,         },         'trustedProxies': ['127.0.0.1', '::1'],         'auth': {'token': secrets.token_hex(32)}     } }; path = os.path.expanduser('~/.openclaw/openclaw.json'); json.dump(config, open(path, 'w'), indent=2); os.chmod(path, 0o600)"
  Step 31/36 : RUN openclaw doctor --fix > /dev/null 2>&1 || true     && openclaw plugins install /opt/nemoclaw > /dev/null 2>&1 || true
  Step 32/36 : USER root
  Step 33/36 : RUN chown root:root /sandbox/.openclaw     && find /sandbox/.openclaw -mindepth 1 -maxdepth 1 -exec chown -h root:root {} +     && chmod 755 /sandbox/.openclaw     && chmod 444 /sandbox/.openclaw/openclaw.json
  Step 34/36 : RUN sha256sum /sandbox/.openclaw/openclaw.json > /sandbox/.openclaw/.config-hash     && chmod 444 /sandbox/.openclaw/.config-hash     && chown root:root /sandbox/.openclaw/.config-hash
  Step 35/36 : ENTRYPOINT ["/usr/local/bin/nemoclaw-start"]
  Step 36/36 : CMD ["/bin/bash"]
  Built image openshell/sandbox-from:1775210070
  Uploading image into OpenShell gateway...
  Pushing image openshell/sandbox-from:1775210070 into gateway "nemoclaw"
  [progress] Exported 100 MiB
  [progress] Exported 200 MiB
  [progress] Exported 300 MiB
  [progress] Exported 400 MiB
  [progress] Exported 500 MiB
  [progress] Exported 600 MiB
  [progress] Exported 700 MiB
  [progress] Exported 800 MiB
  [progress] Exported 900 MiB
  [progress] Exported 1000 MiB
  [progress] Exported 1100 MiB
  [progress] Exported 1200 MiB
  [progress] Exported 1300 MiB
  [progress] Exported 1400 MiB
  [progress] Exported 1500 MiB
  [progress] Exported 1600 MiB
  [progress] Exported 1700 MiB
  [progress] Exported 1800 MiB
  [progress] Exported 1900 MiB
  [progress] Exported 2000 MiB
  [progress] Exported 2100 MiB
  [progress] Exported 2200 MiB
  [progress] Exported 2300 MiB
  [progress] Exported 2359 MiB
  [progress] Uploaded to gateway
  Image openshell/sandbox-from:1775210070 is available in the gateway.
  Waiting for sandbox to become ready...
  Sandbox reported Ready before create stream exited; continuing.
  Waiting for sandbox to become ready...
  Waiting for NemoClaw dashboard to become ready...
  Dashboard taking longer than expected to start. Continuing...
→ Found forward on sandbox 'test-openai'
✓ Stopped forward of port 18789 for sandbox test-openai
  Setting up sandbox DNS proxy...
Setting up DNS proxy in pod 'test0' (10.200.0.1:53 -> 10.42.0.4)...
  [PASS] DNS forwarder running (pid=370): dns-proxy: 10.200.0.1:53 -> 10.42.0.4:53 pid=370
  [PASS] resolv.conf -> nameserver 10.200.0.1
  [PASS] iptables: UDP 10.200.0.1:53 ACCEPT rule present
  [PASS] getent hosts github.com -> 140.82.116.4    github.com
  DNS verification: 4 passed, 0 failed
  ✓ Sandbox 'test0' created

  [6/7] Setting up OpenClaw inside sandbox
  ──────────────────────────────────────────────────
  ✓ OpenClaw gateway launched inside sandbox

  [7/7] Policy presets
  ──────────────────────────────────────────────────

  Available policy presets:
    ○ discord — Discord API, gateway, and CDN access
    ○ docker — Docker Hub and NVIDIA container registry access
    ○ huggingface — Hugging Face Hub, LFS, and Inference API access
    ○ jira — Jira and Atlassian Cloud access
    ○ npm — npm and Yarn registry access (suggested)
    ○ outlook — Microsoft Outlook and Graph API access
    ○ pypi — Python Package Index (PyPI) access (suggested)
    ○ slack — Slack API, Socket Mode, and webhooks access
    ○ telegram — Telegram Bot API access

  Apply suggested presets (pypi, npm)? [Y/n/list]: n
  Skipping policy presets.

  ──────────────────────────────────────────────────
  Sandbox      test0 (Landlock + seccomp + netns)
  Model        nvidia/nemotron-3-super-120b-a12b (NVIDIA Endpoints)
  NIM          not running
  ──────────────────────────────────────────────────
  Run:         nemoclaw test0 connect
  Status:      nemoclaw test0 status
  Logs:        nemoclaw test0 logs --follow

  OpenClaw UI (tokenized URL; treat it like a password)
  Port 18789 must be forwarded before opening this URL.
  http://127.0.0.1:18789/#token=1138d774e93f74edca58c8335362a0be55c6a1813440d97d4f9ac9a0e84727d8
  ──────────────────────────────────────────────────

local-lynnh@2u1g-b650-0235:~$  nemoclaw test0 connect
sandbox@test0:~$ npm install lodash
(node:611) [UNDICI-EHPA] Warning: EnvHttpProxyAgent is experimental, expect them to change at any time.
(Use `node --trace-warnings ...` to show where the warning was created)

added 1 package in 409ms
npm notice
npm notice New major version of npm available! 10.9.4 -> 11.12.1
npm notice Changelog: https://github.com/npm/cli/releases/tag/v11.12.1
npm notice To update run: npm install -g npm@11.12.1
npm notice
sandbox@test0:~$ exit
exit
local-lynnh@2u1g-b650-0235:~$ nemoclaw list

  Sandboxes:
    test1 *
      model: unknown  provider: unknown  CPU  policies: pypi, npm
    test3
      model: unknown  provider: unknown  CPU  policies: pypi, npm
    test4
      model: unknown  provider: unknown  CPU  policies: pypi, npm
    test-openai
      model: unknown  provider: unknown  CPU  policies: pypi, npm
    test0
      model: unknown  provider: unknown  CPU  policies: none

  * = default sandbox

local-lynnh@2u1g-b650-0235:~$ nemoclaw test0 connect
sandbox@test0:~$ pip install requests
error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.

    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.

    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.

    See /usr/share/doc/python3.11/README.venv for more information.

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
sandbox@test0:~$ python3 -m venv .venv
sandbox@test0:~$ source .venv/bin/activate
(.venv) sandbox@test0:~$ pip install requests
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden'))': /simple/requests/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden'))': /simple/requests/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden'))': /simple/requests/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden'))': /simple/requests/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden'))': /simple/requests/
ERROR: Could not find a version that satisfies the requirement requests (from versions: none)
ERROR: No matching distribution found for requests
(.venv) sandbox@test0:~$  npm uninstall lodash
(node:4165) [UNDICI-EHPA] Warning: EnvHttpProxyAgent is experimental, expect them to change at any time.
(Use `node --trace-warnings ...` to show where the warning was created)

removed 1 package, and audited 1 package in 181ms

found 0 vulnerabilities
(.venv) sandbox@test0:~$ npm install lodash
(node:4225) [UNDICI-EHPA] Warning: EnvHttpProxyAgent is experimental, expect them to change at any time.
(Use `node --trace-warnings ...` to show where the warning was created)

added 1 package, and audited 2 packages in 411ms

found 0 vulnerabilities
(.venv) sandbox@test0:~$ exit
exit
local-lynnh@2u1g-b650-0235:~$ nemoclaw list

  Sandboxes:
    test1 *
      model: unknown  provider: unknown  CPU  policies: pypi, npm
    test3
      model: unknown  provider: unknown  CPU  policies: pypi, npm
    test4
      model: unknown  provider: unknown  CPU  policies: pypi, npm
    test-openai
      model: unknown  provider: unknown  CPU  policies: pypi, npm
    test0
      model: unknown  provider: unknown  CPU  policies: none

  * = default sandbox

[Expected Behavior]

With no policy presets applied (policies: none), npm install should be rejected, just as pip install is.

[Actual Behavior]
npm install and npm uninstall both succeed, while pip install is correctly rejected by the proxy (Tunnel connection failed: 403 Forbidden).
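One way to confirm the asymmetry independently of the package managers (a diagnostic sketch, assuming curl is available inside the sandbox and that HTTPS egress goes through the same proxy that returned the 403 to pip) is to compare the proxy's verdict for the two registries directly:

```shell
# Compare the egress proxy's verdict for the npm and PyPI registries.
# With no policy presets applied, both requests should be rejected;
# if only the npm registry request succeeds, npm traffic is leaking
# past the policy enforcement.
for url in https://registry.npmjs.org/lodash https://pypi.org/simple/requests/; do
  code=$(curl -sS -o /dev/null -w '%{http_code}' "$url" 2>/dev/null || echo connect-failed)
  echo "$url -> $code"
done
```

If both URLs come back blocked here but `npm install` still succeeds, the leak is more likely in how npm's proxy environment is wired inside the sandbox than in the policy rules themselves.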


[NVB#6045053]

    Labels

    NV QA (Bugs found by the NVIDIA QA Team), Platform: Ubuntu (Support for Linux Ubuntu), bug (Something isn't working)
