Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

nvidia-nemoclaw

NVIDIA NemoClaw plugin for secure sandboxed installation and orchestration of OpenClaw always-on AI assistants via OpenShell

MIT-0 · Free to use, modify, and redistribute. No attribution required.
1 current install · 1 all-time install
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The SKILL.md describes an installer/orchestrator that needs system-level components (Node.js, Docker, OpenShell) and an NVIDIA API key, which matches the declared functionality. However, the registry metadata lists no required environment variables or credentials, while the SKILL.md explicitly requires NVIDIA_API_KEY. The skill also presents itself as an 'NVIDIA' plugin, yet the registry owner and author (ara.so / owner kn7a...) are not NVIDIA; this suggests possible impersonation, or at minimum unclear provenance.
Instruction Scope
Runtime instructions direct the agent or user to run a remote installer via 'curl | bash', install system components (Node.js, Docker), run a guided onboarding that collects API keys, create sandboxes under /var, and start auxiliary services (Telegram bridge, tunnels). These are broad system-level actions; the instructions never explicitly require elevated privileges, but the actions imply them. The installer collects and uses an NVIDIA_API_KEY. That credential is consistent with the claimed cloud-inference purpose, but the skill's declared metadata omits it.
Install Mechanism
The one-line installer uses a direct download-and-execute pattern (curl -fsSL https://nvidia.com/nemoclaw.sh | bash). Even though the domain belongs to a major vendor (nvidia.com), piping a remote shell script to bash is high-risk because it executes arbitrary code on the host. The registry includes no local, reproducible install spec (no packaged artifact), and the SKILL.md offers both the remote installer and a manual install from a GitHub repo. The manual path is safer, but the one-liner remains a risky default.
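A safer pattern than piping to bash is to save the script, read it, and compare its digest before anything executes. A minimal sketch of that flow follows; the file here is a local stand-in for the download, and the published digest is a placeholder you would obtain from a source you have independently verified:

```shell
# Stand-in for: curl -fsSL https://nvidia.com/nemoclaw.sh -o nemoclaw.sh
# (download to a file instead of piping straight into bash)
printf '%s\n' 'echo "installer body"' > nemoclaw.sh

# 1. Read the script before running anything:
#    less nemoclaw.sh

# 2. Compare its digest against one published out-of-band, if available.
expected="$(sha256sum nemoclaw.sh | awk '{print $1}')"  # substitute the published value
actual="$(sha256sum nemoclaw.sh | awk '{print $1}')"

if [ "$actual" = "$expected" ]; then
  echo "digest ok: after review, run with: bash nemoclaw.sh"
else
  echo "digest mismatch: do not run" >&2
  exit 1
fi
```

Even with a matching digest, this only proves the file is the one the publisher shipped; it does not make the publisher trustworthy, so the provenance check above still applies.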
Credentials
The instructions require an NVIDIA_API_KEY (and optionally NEMOCLAW_MODEL and NEMOCLAW_SANDBOX_DIR). Requiring the cloud API key is proportionate to routing inference through NVIDIA cloud, but the registry metadata advertises no required environment variables, creating a mismatch. The onboarding wizard prompts for the API key and the library code examples read process.env directly; asking for a cloud API key is expected, but it must be declared up front in the metadata.
Persistence & Privilege
The skill does not request 'always: true' and allows user invocation. However, the installer and runtime will likely create persistent sandboxes, system services, and directories under /var, and may install or manage auxiliary services (tunnels, Telegram bridge). That implies lasting system changes and elevated privileges at install and run time, even though the skill metadata does not advertise such privileges.
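If you do trial the installer in a disposable VM, you can audit afterward what it left behind. A rough sketch; the paths come from this listing's SKILL.md and the service-name patterns are guesses, not verified values:

```shell
# Sandbox data directory mentioned in the SKILL.md (NEMOCLAW_SANDBOX_DIR default)
ls -la /var/nemoclaw/sandboxes 2>/dev/null || echo "no sandbox directory found"

# Services the installer may have registered (name patterns are assumptions)
if command -v systemctl >/dev/null 2>&1; then
  systemctl list-units --type=service | grep -iE 'nemoclaw|openshell' \
    || echo "no matching services"
fi

# Containers left running (the skill requires Docker)
if command -v docker >/dev/null 2>&1; then
  docker ps --format '{{.Names}}'
fi
```

Anything this turns up that the documentation did not mention is a reason to discard the VM and stop.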
What to consider before installing
Do not run the provided one-line 'curl | bash' installer without inspection. Before installing:

1. Verify provenance: the SKILL.md references NVIDIA projects, but the registry owner is not NVIDIA. Confirm the official repository and author.
2. Download the installer script and read it; prefer a manual install from the official GitHub repo or release.
3. Expect the installer to require root privileges, install Node.js and Docker, create sandboxes under /var, and open network tunnels. Run it in a disposable VM or isolated host first.
4. Treat your NVIDIA_API_KEY as a secret; provide it only to trusted, verified code.
5. If you need this functionality, prefer an official NVIDIA release or a build from source, and check cryptographic signatures or digests when available.

If you cannot verify the publisher or the contents of the installer, do not install this skill.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
latest: vk97bjm28czn6xqgtqt0d54hgv18326r4

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

NVIDIA NemoClaw

Skill by ara.so — Daily 2026 Skills collection.

NVIDIA NemoClaw is an open-source TypeScript CLI plugin that simplifies running OpenClaw always-on AI assistants securely. It installs and orchestrates the NVIDIA OpenShell runtime, creates policy-enforced sandboxes, and routes all inference through NVIDIA cloud (Nemotron models). Network egress, filesystem access, syscalls, and model API calls are all governed by declarative policy.

Status: Alpha — interfaces and APIs may change without notice.


Installation

Prerequisites

  • Linux: Ubuntu 22.04 LTS or later
  • Node.js 20+ and npm 10+ (Node.js 22 recommended)
  • Docker installed and running
  • NVIDIA OpenShell installed

One-Line Installer

curl -fsSL https://nvidia.com/nemoclaw.sh | bash

This installs Node.js (if absent), runs the guided onboarding wizard, creates a sandbox, configures inference, and applies security policies.

Manual Install (from source)

git clone https://github.com/NVIDIA/NemoClaw.git
cd NemoClaw
npm install
npm run build
npm link  # makes `nemoclaw` available globally

Environment Variables

# Required: NVIDIA cloud API key for Nemotron inference
export NVIDIA_API_KEY="nvapi-xxxxxxxxxxxx"

# Optional: override default model
export NEMOCLAW_MODEL="nvidia/nemotron-3-super-120b-a12b"

# Optional: custom sandbox data directory
export NEMOCLAW_SANDBOX_DIR="/var/nemoclaw/sandboxes"

Get an API key at build.nvidia.com.


Quick Start

1. Onboard a New Agent

nemoclaw onboard

The interactive wizard prompts for:

  • Sandbox name (e.g. my-assistant)
  • NVIDIA API key ($NVIDIA_API_KEY)
  • Inference model selection
  • Network and filesystem policy configuration

Expected output on success:

──────────────────────────────────────────────────
Sandbox      my-assistant (Landlock + seccomp + netns)
Model        nvidia/nemotron-3-super-120b-a12b (NVIDIA Cloud API)
──────────────────────────────────────────────────
Run:         nemoclaw my-assistant connect
Status:      nemoclaw my-assistant status
Logs:        nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────
[INFO]  === Installation complete ===

2. Connect to the Sandbox

nemoclaw my-assistant connect

3. Chat with the Agent (inside sandbox)

TUI (interactive chat):

sandbox@my-assistant:~$ openclaw tui

CLI (single message):

sandbox@my-assistant:~$ openclaw agent --agent main --local -m "hello" --session-id test

Key CLI Commands

Host Commands (nemoclaw)

| Command | Description |
| --- | --- |
| `nemoclaw onboard` | Interactive setup: gateway, providers, sandbox |
| `nemoclaw <name> connect` | Open interactive shell inside sandbox |
| `nemoclaw <name> status` | Show NemoClaw-level sandbox health |
| `nemoclaw <name> logs --follow` | Stream sandbox logs |
| `nemoclaw start` | Start auxiliary services (Telegram bridge, tunnel) |
| `nemoclaw stop` | Stop auxiliary services |
| `nemoclaw deploy <instance>` | Deploy to remote GPU instance via Brev |
| `openshell term` | Launch OpenShell TUI for monitoring and approvals |

Plugin Commands (openclaw nemoclaw, run inside sandbox)

Note: These are under active development — use nemoclaw host CLI as the primary interface.

| Command | Description |
| --- | --- |
| `openclaw nemoclaw launch [--profile ...]` | Bootstrap OpenClaw inside OpenShell sandbox |
| `openclaw nemoclaw status` | Show sandbox health, blueprint state, and inference |
| `openclaw nemoclaw logs [-f]` | Stream blueprint execution and sandbox logs |

OpenShell Inspection

# List all sandboxes at the OpenShell layer
openshell sandbox list

# Check specific sandbox
openshell sandbox inspect my-assistant

Architecture

NemoClaw orchestrates four components:

| Component | Role |
| --- | --- |
| Plugin | TypeScript CLI: launch, connect, status, logs |
| Blueprint | Versioned Python artifact: sandbox creation, policy, inference setup |
| Sandbox | Isolated OpenShell container running OpenClaw with policy-enforced egress/filesystem |
| Inference | NVIDIA cloud model calls routed through OpenShell gateway |

Blueprint lifecycle:

  1. Resolve artifact
  2. Verify digest
  3. Plan resources
  4. Apply through OpenShell CLI
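The four lifecycle steps above can be sketched as plain TypeScript. Everything below is illustrative (the Blueprint API itself is not published in this listing); the only step with real mechanics in the sketch is the digest verification:

```typescript
// Sketch of the blueprint lifecycle: resolve -> verify -> plan -> apply.
// Names and types here are assumptions for illustration only.
import { createHash } from "node:crypto";

type Phase = "resolved" | "verified" | "planned" | "applied";

interface Blueprint {
  name: string;
  artifact: Buffer; // the versioned Python artifact, already fetched
  digest: string;   // expected sha256 hex digest, published with the release
}

function runLifecycle(bp: Blueprint): Phase[] {
  const phases: Phase[] = [];

  // 1. Resolve artifact (here: already in memory).
  phases.push("resolved");

  // 2. Verify digest before anything from the artifact executes.
  const actual = createHash("sha256").update(bp.artifact).digest("hex");
  if (actual !== bp.digest) throw new Error(`digest mismatch for ${bp.name}`);
  phases.push("verified");

  // 3. Plan resources (sandbox dirs, network namespaces); elided in this sketch.
  phases.push("planned");

  // 4. Apply through the OpenShell CLI; the invocation is elided here.
  phases.push("applied");
  return phases;
}

const artifact = Buffer.from("blueprint-v1");
const digest = createHash("sha256").update(artifact).digest("hex");
console.log(runLifecycle({ name: "my-assistant", artifact, digest }));
// → [ 'resolved', 'verified', 'planned', 'applied' ]
```

The important design point is step 2: verification happens strictly before planning or applying, so a tampered artifact fails fast without side effects.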

TypeScript Plugin Usage

NemoClaw exposes a programmatic TypeScript API for building custom integrations.

Import and Initialize

import { NemoClawClient } from '@nvidia/nemoclaw';

const client = new NemoClawClient({
  apiKey: process.env.NVIDIA_API_KEY!,
  model: process.env.NEMOCLAW_MODEL ?? 'nvidia/nemotron-3-super-120b-a12b',
});

Create a Sandbox Programmatically

import { NemoClawClient, SandboxConfig } from '@nvidia/nemoclaw';

async function createSandbox() {
  const client = new NemoClawClient({
    apiKey: process.env.NVIDIA_API_KEY!,
  });

  const config: SandboxConfig = {
    name: 'my-assistant',
    model: 'nvidia/nemotron-3-super-120b-a12b',
    policy: {
      network: {
        allowedEgressHosts: ['build.nvidia.com'],
        blockUnlisted: true,
      },
      filesystem: {
        allowedPaths: ['/sandbox', '/tmp'],
        readOnly: false,
      },
    },
  };

  const sandbox = await client.sandbox.create(config);
  console.log(`Sandbox created: ${sandbox.id}`);
  return sandbox;
}

Connect and Send a Message

import { NemoClawClient } from '@nvidia/nemoclaw';

async function chatWithAgent(sandboxName: string, message: string) {
  const client = new NemoClawClient({
    apiKey: process.env.NVIDIA_API_KEY!,
  });

  const sandbox = await client.sandbox.get(sandboxName);
  const session = await sandbox.connect();

  const response = await session.agent.send({
    agentId: 'main',
    message,
    sessionId: `session-${Date.now()}`,
  });

  console.log('Agent response:', response.content);
  await session.disconnect();
}

chatWithAgent('my-assistant', 'Summarize the latest NVIDIA earnings report.').catch(console.error);

Check Sandbox Status

import { NemoClawClient } from '@nvidia/nemoclaw';

async function checkStatus(sandboxName: string) {
  const client = new NemoClawClient({
    apiKey: process.env.NVIDIA_API_KEY!,
  });

  const status = await client.sandbox.status(sandboxName);

  console.log({
    sandbox: status.name,
    healthy: status.healthy,
    blueprint: status.blueprintState,
    inference: status.inferenceProvider,
    policyVersion: status.policyVersion,
  });
}

Stream Logs

import { NemoClawClient } from '@nvidia/nemoclaw';

async function streamLogs(sandboxName: string) {
  const client = new NemoClawClient({
    apiKey: process.env.NVIDIA_API_KEY!,
  });

  const logStream = client.sandbox.logs(sandboxName, { follow: true });

  for await (const entry of logStream) {
    console.log(`[${entry.timestamp}] ${entry.level}: ${entry.message}`);
  }
}

Apply a Network Policy Update (Hot Reload)

import { NemoClawClient, NetworkPolicy } from '@nvidia/nemoclaw';

async function updateNetworkPolicy(sandboxName: string) {
  const client = new NemoClawClient({
    apiKey: process.env.NVIDIA_API_KEY!,
  });

  // Network policies are hot-reloadable at runtime
  const updatedPolicy: NetworkPolicy = {
    allowedEgressHosts: [
      'build.nvidia.com',
      'api.github.com',
    ],
    blockUnlisted: true,
  };

  await client.sandbox.updatePolicy(sandboxName, {
    network: updatedPolicy,
  });

  console.log('Network policy updated (hot reload applied).');
}

Security / Protection Layers

| Layer | What it protects | Hot-reloadable? |
| --- | --- | --- |
| Network | Blocks unauthorized outbound connections | ✅ Yes |
| Filesystem | Prevents reads/writes outside /sandbox and /tmp | ❌ Locked at creation |
| Process | Blocks privilege escalation and dangerous syscalls | ❌ Locked at creation |
| Inference | Reroutes model API calls to controlled backends | ✅ Yes |

When the agent attempts to reach an unlisted host, OpenShell blocks the request and surfaces it in the TUI for operator approval.
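The egress decision the Network layer makes can be sketched from the policy fields this SKILL.md uses (`allowedEgressHosts`, `blockUnlisted`). This is an illustrative model of the semantics, not the actual OpenShell enforcement code:

```typescript
// Sketch: how an allowlist-plus-mode network policy decides egress.
interface NetworkPolicy {
  allowedEgressHosts?: string[]; // hosts always permitted
  blockUnlisted: boolean;        // strict (true) vs permissive (false) mode
}

function egressAllowed(policy: NetworkPolicy, host: string): boolean {
  // Explicitly listed hosts always pass.
  if (policy.allowedEgressHosts?.includes(host)) return true;
  // Unlisted hosts pass only when the policy is permissive.
  return !policy.blockUnlisted;
}

const strict: NetworkPolicy = {
  allowedEgressHosts: ["build.nvidia.com"],
  blockUnlisted: true,
};

console.log(egressAllowed(strict, "build.nvidia.com")); // true
console.log(egressAllowed(strict, "evil.example.com")); // false: blocked, surfaced in TUI
```

In this model, the dev pattern below (`blockUnlisted: false`) simply flips the default for unlisted hosts from deny to allow, which is why it is unsuitable for production.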


Common Patterns

Pattern: Minimal Sandbox for Development

const config: SandboxConfig = {
  name: 'dev-sandbox',
  model: 'nvidia/nemotron-3-super-120b-a12b',
  policy: {
    network: { blockUnlisted: false },   // permissive for dev
    filesystem: { allowedPaths: ['/sandbox', '/tmp', '/home/dev'] },
  },
};

Pattern: Production Strict Sandbox

const config: SandboxConfig = {
  name: 'prod-assistant',
  model: 'nvidia/nemotron-3-super-120b-a12b',
  policy: {
    network: {
      allowedEgressHosts: ['build.nvidia.com'],
      blockUnlisted: true,
    },
    filesystem: {
      allowedPaths: ['/sandbox', '/tmp'],
      readOnly: false,
    },
  },
};

Pattern: Deploy to Remote GPU (Brev)

nemoclaw deploy my-gpu-instance --sandbox my-assistant

Or programmatically, through the TypeScript API:

await client.deploy({
  instance: 'my-gpu-instance',
  sandboxName: 'my-assistant',
  provider: 'brev',
});

Troubleshooting

Error: Sandbox not found

Error: Sandbox 'my-assistant' not found

Fix: Check at the OpenShell layer — NemoClaw errors and OpenShell errors are separate:

openshell sandbox list
nemoclaw my-assistant status

Error: NVIDIA API key missing or invalid

Error: Inference provider authentication failed

Fix:

export NVIDIA_API_KEY="nvapi-xxxxxxxxxxxx"
nemoclaw onboard  # re-run to reconfigure

Error: Docker not running

Error: Cannot connect to Docker daemon

Fix:

sudo systemctl start docker
sudo usermod -aG docker $USER  # add current user to docker group
newgrp docker

Error: OpenShell not installed

Error: 'openshell' command not found

Fix: Install NVIDIA OpenShell first, then re-run the NemoClaw installer.

Agent blocked on outbound request

When you see a blocked request notification in the TUI:

openshell term        # open TUI to approve/deny the request
# OR update policy to allow the host:
nemoclaw my-assistant policy update --allow-host api.example.com

View Full Debug Logs

nemoclaw my-assistant logs --follow
# or with verbose flag
nemoclaw my-assistant logs --follow --level debug
