Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Skill Auditor in Sandbox

v1.0.3

Launch a NovitaClaw (OpenClaw) sandbox, install a specified skill, and generate an installation & security audit report. Use when: (1) You want to test a com...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for freecodewu/skill-auditor-in-sandbox.

Prompt Preview: Install & Setup
Install the skill "Skill Auditor in Sandbox" (freecodewu/skill-auditor-in-sandbox) from ClawHub.
Skill page: https://clawhub.ai/freecodewu/skill-auditor-in-sandbox
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install skill-auditor-in-sandbox

ClawHub CLI


npx clawhub@latest install skill-auditor-in-sandbox
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)

Purpose & Capability
The SKILL.md and included scripts clearly require a Novita API key, a SANDBOX_ID, the novitaclaw CLI, and the novita-sandbox package — but the registry metadata lists no required env vars or binaries. Requesting a Novita API key is coherent with launching NovitaClaw sandboxes, but the metadata omission is an inconsistency that could mislead users about what access the skill needs.
Instruction Scope
The runtime instructions and scripts perform broad inspections: they grep for risky tokens, enumerate URLs, list external path references, and read and output full text contents of many file types from the installed skill. Emitting full fileContents in the report could reveal secrets embedded in the audited repo. The audit script also builds grep patterns by concatenating string fragments and documents that it does so to 'avoid triggering static scanners' — that deliberate obfuscation is unexpected for a security tool and is a red flag.
Install Mechanism
There is no registry install spec (instruction-only), but package.json declares a dependency on 'novita-sandbox' and SKILL.md suggests installing the novitaclaw CLI via curl | bash. The user-run curl|bash instruction pulls a script from a remote host (novitaclaw.novita.ai); downloading/executing a remote install script has higher risk and should be verified. The included scripts run git clones of arbitrary repos into the sandbox (expected for a tester), but that behavior amplifies the need for isolation and scrutiny.
Credentials
The scripts require SANDBOX_ID, NOVITA_API_KEY and SKILL_NAME (and SKILL.md asks users to set NOVITA_API_KEY), which are proportionate to launching and managing a Novita sandbox — however these env vars are not declared in the registry metadata. The audit script also reads and outputs package/requirements files and arbitrary text files from the installed skill, which can expose sensitive data if present in the scanned repo. The skill requests more sensitive inputs than the metadata indicates.
Persistence & Privilege
The skill does not request permanent presence (always:false) and does not modify other skills or system-wide agent settings. It executes its actions inside a Novita sandbox via the novita-sandbox API rather than on the host (as intended). Note: the skill can be invoked autonomously by the agent (default), which combined with other concerns increases blast radius — but autonomous invocation alone is not a reason to block.
What to consider before installing
This skill mostly does what it claims, but proceed cautiously. Before installing or running:

  1. Verify the upstream repository (https://github.com/freecodewu/skill-auditor-in-sandbox) and the contents of the novitaclaw install script.
  2. Confirm how NOVITA_API_KEY is used and limit its scope if possible.
  3. Review the two scripts locally — the audit script intentionally obfuscates patterns to avoid static scanners and will capture and include the full text of many files (which can leak secrets).
  4. Run the tool against untrusted skills only inside an isolated Novita sandbox; do not use a production account or a high-privilege API key.
  5. Consider modifying the audit script to avoid exporting sensitive files and to log findings without dumping entire file contents.
  6. Because the metadata does not declare required env vars, expect to supply SANDBOX_ID and NOVITA_API_KEY manually, and verify these prompts before use.

Like a lobster shell, security has layers — review code before you run it.

latest: vk979yt01634f9w3q3mm97jq3yn84rpde
76 downloads · 0 stars · 4 versions
Updated 2w ago
v1.0.3
MIT-0

Skill Auditor in Sandbox

Test and audit Claude Code skills in an isolated NovitaClaw (OpenClaw) sandbox before installing them locally. The skill launches a sandbox, installs the target skill, runs a security scan, and generates a structured risk report.

Quick Reference

| Situation | Action |
|-----------|--------|
| Test a ClawHub skill | `/skill-auditor-in-sandbox owner/skill-name` |
| Test a GitHub skill | `/skill-auditor-in-sandbox owner/repo-name` |
| Review the report | Check risk level, suspicious patterns, URLs, external paths |
| After review | Pause or stop the sandbox to save costs |

Prerequisites

Usage

You are given a skill name (or identifier) as $ARGUMENTS. Your job is to launch a sandbox, install the skill, run a security audit, and generate a report.

Step 1: Launch Sandbox

novitaclaw launch --json

Parse the JSON output and extract sandbox_id and webui. Save these for the report.

If launch fails, check error_code and remediation fields:

  • MISSING_API_KEY → ask user for API key
  • SANDBOX_TIMEOUT → retry with --timeout 300
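
Assuming the output shape described above (`sandbox_id` and `webui` on success; `error_code` and `remediation` on failure; field names are taken from this page, not from a verified CLI schema), the parsing step might look like this sketch:

```javascript
// Sketch: parse `novitaclaw launch --json` output and pull out the fields
// the later steps need. Field names (sandbox_id, webui, error_code,
// remediation) follow this page's description, not a verified CLI schema.
function parseLaunch(stdout) {
  const result = JSON.parse(stdout);
  if (result.error_code) {
    // Surface the CLI's own remediation hint alongside the error code.
    throw new Error(`${result.error_code}: ${result.remediation ?? "no remediation given"}`);
  }
  return { sandboxId: result.sandbox_id, webui: result.webui };
}

// Hypothetical successful payload:
const launched = parseLaunch('{"sandbox_id":"sb-123","webui":"https://example.invalid/ui"}');
console.log(launched.sandboxId); // prints "sb-123"
```

On `MISSING_API_KEY` the agent would prompt the user for a key; on `SANDBOX_TIMEOUT` it would retry with `--timeout 300`, as the bullets above describe.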

Step 2: Install Skill

Run the install script from the project root:

SANDBOX_ID=<sandbox_id> SKILL_NAME="$ARGUMENTS" node scripts/install-skill.mjs

The script outputs JSON: { success, method, skillDir, files, error? }.

  • If success is false, show the error and stop.
  • Note the method used (clawhub / git-github / git-clawhub) for the report.
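
A small sketch of how the agent could interpret that result (the `{ success, method, skillDir, files, error? }` shape is taken from this page, not from the script's source):

```javascript
// Sketch: interpret the install script's JSON result. The shape
// { success, method, skillDir, files, error? } is taken from this page,
// not from the script's source.
function summarizeInstall(stdout) {
  const r = JSON.parse(stdout);
  if (!r.success) {
    return { ok: false, message: `Install failed: ${r.error ?? "unknown error"}` };
  }
  // `method` is one of clawhub / git-github / git-clawhub per the docs.
  return { ok: true, message: `Installed via ${r.method} into ${r.skillDir} (${r.files.length} files)` };
}
```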

Step 3: Security Audit

Run the audit script:

SANDBOX_ID=<sandbox_id> SKILL_NAME="$ARGUMENTS" node scripts/audit-skill.mjs

The script outputs JSON:

  • suspicious[] — lines matching risky code patterns (dynamic execution, shell spawning, encoding, etc.)
  • urls[] — all URL references found in skill files
  • externalPaths[] — references to paths outside the skill directory (system dirs, dotfiles, temp dirs)
  • dependencies — contents of requirements.txt or package.json if present
  • fileContents[] — full contents of all text files for manual review
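
One way the agent could condense this output into headline counts for the report (field names follow the list above; this is not the audit script's actual code):

```javascript
// Sketch: condense the audit script's JSON output into headline counts
// for the report. Field names follow the list above, not the real script.
function auditSummary(stdout) {
  const a = JSON.parse(stdout);
  return {
    suspicious: (a.suspicious ?? []).length,
    urls: (a.urls ?? []).length,
    externalPaths: (a.externalPaths ?? []).length,
    hasDependencies: Boolean(a.dependencies),
    // fileContents holds full file dumps; count them here, but treat the
    // contents as potentially sensitive before sharing the report.
    filesCaptured: (a.fileContents ?? []).length,
  };
}
```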

Step 4: Assess Risk

Based on audit results, assign a risk level:

| Risk Level | Criteria |
|------------|----------|
| LOW | No suspicious patterns, URLs are legitimate (GitHub, docs), no external paths |
| MEDIUM | Some suspicious patterns but explainable (e.g., fetch() for legitimate API calls) |
| HIGH | Unexplained network calls, access to sensitive paths, obfuscated code |
| CRITICAL | Credential harvesting, mining indicators, command injection patterns |
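
As a rough illustration, the mapping could be mechanized like this; the thresholds are invented, and deciding whether a pattern is "explainable" or a URL "legitimate" still requires human review:

```javascript
// Sketch: a mechanical approximation of the risk table. Thresholds are
// invented for illustration; judging whether a pattern is "explainable"
// or a URL "legitimate" still requires human review.
function riskLevel({ suspicious = 0, externalPaths = 0, credentialHarvesting = false }) {
  if (credentialHarvesting) return "CRITICAL";
  if (externalPaths > 0 || suspicious > 3) return "HIGH";
  if (suspicious > 0) return "MEDIUM";
  return "LOW";
}
```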

Step 5: Generate Report

Output a structured report:

## Skill Installation Report

**Skill:** <skill-name>
**Sandbox ID:** <sandbox_id>
**Web UI:** <webui_url>
**Timestamp:** <current time>

### Installation Status
- **Result:** SUCCESS / FAILED
- **Method:** <clawhub / git-github / git-clawhub>
- **Files Installed:** <count> files

### Installed Files
<table of files and their purpose>

### Security Analysis
- **Risk Level:** LOW / MEDIUM / HIGH / CRITICAL

### Suspicious Patterns Found
| File | Line | Pattern | Severity |
|------|------|---------|----------|
(or "None found")

### URL References
| File | URL | Context |
|------|-----|---------|
(list all URLs and whether they look legitimate)

### External Path References
(list any, or "None found")

### Dependencies
(list any, or "No external dependencies")

### Recommendations
- <recommendation based on findings>

### Sandbox Management
- To access: <webui_url>
- To pause (save costs): `novitaclaw pause <sandbox_id>`
- To stop (permanent): `novitaclaw stop <sandbox_id>`

After generating the report, automatically pause the sandbox to save costs:

novitaclaw pause <sandbox_id> --json

Then inform the user that the sandbox has been paused and can be resumed or stopped:

  • To resume: novitaclaw resume <sandbox_id>
  • To stop (permanent): novitaclaw stop <sandbox_id>

What Gets Scanned

| Category | Patterns |
|----------|----------|
| Suspicious code | Shell spawning, dynamic code execution, encoding functions, mining indicators |
| Network calls | All URL references found in skill files |
| External paths | System directories, user home dotfiles, temp directories |
| Dependencies | requirements.txt, package.json |
| File contents | Full text of all .md, .txt, .json, .py, .js, .ts, .sh, .yaml, .yml files |
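
A minimal grep-style scanner over these categories might look like the following sketch; the regexes are illustrative only, since the audit script's actual patterns are deliberately obfuscated and not reproduced here:

```javascript
// Sketch: a minimal grep-style scanner over the categories above. These
// regexes are illustrative; the audit script's actual patterns are
// deliberately obfuscated and not reproduced here.
const PATTERNS = [
  { name: "dynamic execution", re: /\beval\s*\(|new\s+Function\s*\(/ },
  { name: "shell spawning", re: /child_process|subprocess|os\.system/ },
  { name: "encoding", re: /\batob\s*\(|base64/i },
  { name: "url reference", re: /https?:\/\/[^\s"')]+/ },
];

// Returns one finding per matching category for a single line of text.
function scanLine(file, lineNo, text) {
  return PATTERNS.filter(p => p.re.test(text))
    .map(p => ({ file, line: lineNo, pattern: p.name }));
}
```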

Important Notes

  • Always use --json flag with novitaclaw commands.
  • The sandbox auto-terminates based on keep_alive. Suggest pause to save costs.
  • Prefer pause over stop — stop is irreversible. Confirm before stopping.

Attribution
