suspicious.dangerous_exec
- Location: test-fixtures/evasive-12-multi-stage/plugins/init.js:22
- Finding: Shell command execution detected (child_process).
Advisory: audited by static analysis on May 10, 2026.
Detected: suspicious.dangerous_exec, suspicious.dynamic_code_execution, suspicious.env_credential_access (+3 more)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Accidentally executing fixture files could run harmful commands or leak data, even though normal scanner use should only read them as samples.
This is executable malicious-style fixture code. Its location under test-fixtures and the testing documentation indicate it is intended as scanner test data, but it would be dangerous if run directly.
```js
const payload = Buffer.from(cmd, 'base64').toString('utf-8');
run(payload);
```

Use the CLI scanner, not the fixture scripts. Maintainers should keep malicious fixtures clearly isolated and preferably inert or disabled by default.
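One way maintainers could enforce that inert-by-default advice is a repository test that fails whenever a fixture file is marked executable. This is a sketch assuming a pytest-style suite; the project's actual tests are not shown in the report:

```python
import stat
from pathlib import Path

def test_fixtures_are_inert():
    # Fail if any fixture file carries an executable bit, so payloads such
    # as init.js cannot be launched directly by accident.
    for path in Path("test-fixtures").rglob("*"):
        if path.is_file():
            mode = path.stat().st_mode
            assert not mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH), path
```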
Future skill installation requests may be delayed or blocked by this scanner’s result.
The skill deliberately changes the agent’s install workflow by requiring this scanner before ClawHub skill installation.
> BEFORE installing any skill from ClawHub, you MUST scan it with skill-scan first. This is non-negotiable — never skip the scan.
Choose automatic scanning only if you want this skill to gate future installs; otherwise use the documented manual/on-demand mode.
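In automatic mode, the gate presumably amounts to running the scanner before each install and blocking on a bad result. A minimal sketch, assuming a non-zero exit code signals a flagged skill (the report does not document the exit-code convention):

```python
import subprocess
import sys

def gate_install(skill_path: str) -> None:
    # Scan first and refuse to install on any non-zero exit.  Treating a
    # non-zero code as "flagged" is an assumption about the CLI's behavior.
    result = subprocess.run(["skill-scan", "scan", skill_path])
    if result.returncode != 0:
        sys.exit(f"skill-scan flagged {skill_path}; install blocked")
```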
The scanner can remain part of the agent’s default workflow until the AGENTS.md section is removed.
AGENTS.md changes are persistent agent instructions that continue to affect behavior after the initial install.
During installation, one of two sections was added to your workspace `AGENTS.md`.
Review the AGENTS.md section after installation and remove it if you no longer want automatic or on-demand scanning behavior.
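A quick way to review the injected section is to surface every line in `AGENTS.md` that mentions the scanner; a sketch assuming the string "skill-scan" appears verbatim in the section (the report does not show its exact wording):

```python
from pathlib import Path

def show_scan_sections(agents_md: str = "AGENTS.md") -> None:
    # Print lines mentioning the scanner for manual review; treating
    # "skill-scan" as the marker string is an assumption about how the
    # injected section is worded.
    path = Path(agents_md)
    if not path.exists():
        return
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        if "skill-scan" in line.lower():
            print(f"{lineno}: {line}")
```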
If you enable LLM or related integrations, the scanner may use the corresponding provider credentials.
Optional analysis modes rely on provider credentials from environment variables, even though static scanning requires no keys.
| Variable | Used for | Description |
| --- | --- | --- |
| `OPENAI_API_KEY` | LLM scanning | OpenAI API key |
| `ANTHROPIC_API_KEY` | LLM scanning | ... |
| `PROMPTINTEL_API_KEY` | MoltThreats integration | ... |
Provide only the keys needed for the modes you actually use, and prefer least-privilege or dedicated keys.
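One way to apply that least-privilege advice is to launch the scanner with a minimal environment instead of the full parent shell environment. A sketch, with the invocation and key handling assumed:

```python
import os
import subprocess

def scan_with_scoped_env(skill_path: str, openai_key: str | None = None):
    # Start from a minimal environment rather than inheriting every
    # credential in the parent shell; add a key only when an LLM mode
    # is actually requested.
    env = {"PATH": os.environ.get("PATH", "")}
    if openai_key is not None:
        env["OPENAI_API_KEY"] = openai_key
    return subprocess.run(["skill-scan", "scan", skill_path], env=env)
```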
Private local skill code or scan findings may leave your environment when LLM or alert features are enabled.
Enabling optional LLM analysis means scanned skill content is processed by external model providers.
| Mode | Purpose | Flag |
| --- | --- | --- |
| LLM deep analysis | Semantic threat understanding | `--llm` |
| ... | ... | ... |

Provider auto-detected from environment: `OPENAI_API_KEY` -> gpt-4o-mini; `ANTHROPIC_API_KEY` -> claude-sonnet-4-5.
Use static-only scanning for sensitive private code, or confirm your provider and alert-channel privacy requirements before enabling LLM/alert modes.
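A minimal sketch of what that auto-detection could look like, mirroring the documented key-to-model mapping; the precedence between the two keys is an assumption, since the report does not state the order:

```python
import os

def detect_llm_provider() -> tuple[str, str] | None:
    # Documented mapping: OPENAI_API_KEY -> gpt-4o-mini,
    # ANTHROPIC_API_KEY -> claude-sonnet-4-5.  Checking OpenAI first
    # is an assumption.
    if os.environ.get("OPENAI_API_KEY"):
        return ("openai", "gpt-4o-mini")
    if os.environ.get("ANTHROPIC_API_KEY"):
        return ("anthropic", "claude-sonnet-4-5")
    return None  # no key found: fall back to static-only scanning
```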
A LOW scan result should reduce risk but should not be treated as a guarantee that a skill is safe.
The project documents known missed threat categories, while the main workflow describes LOW risk as safe to proceed.
Three of the documented gap categories (path traversal, resource exhaustion, and SQL injection) currently score as LOW risk and are counted as false negatives.
Treat scan output as one review input; manually review high-impact skills and those known-gap areas.
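A rough supplemental spot check for the known-gap areas can be scripted. The patterns below are illustrative heuristics for two of the listed categories, not the scanner's own rules:

```python
import re
from pathlib import Path

# Illustrative heuristics for two documented gap categories; these are
# rough manual-review aids, not the scanner's detection rules.
GAP_PATTERNS = {
    "path_traversal": re.compile(r"\.\./"),
    "sql_injection": re.compile(r"\b(SELECT|INSERT|UPDATE|DELETE)\b.*[%+]", re.I),
}

def supplemental_review(root: str) -> dict[str, list[str]]:
    hits: dict[str, list[str]] = {name: [] for name in GAP_PATTERNS}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in {".py", ".js"}:
            text = path.read_text(errors="ignore")
            for name, pattern in GAP_PATTERNS.items():
                if pattern.search(text):
                    hits[name].append(str(path))
    return hits
```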
Users may need to manually verify how the CLI is installed and whether the package version matches the registry entry.
The skill is presented as a Python CLI package, while registry metadata says there is no install spec and the source/homepage are unknown.
```sh
pip install -e .
skill-scan scan /path/to/skill
```
Install from a trusted source, verify the package/version, and review dependencies before relying on it for security decisions.
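To compare the installed package against the registry entry, you can at least read the local version. A sketch assuming the distribution is named `skill-scan` after the CLI command; the report itself leaves the install spec and source unknown:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(dist: str = "skill-scan") -> str | None:
    # Report the locally installed version for manual comparison against
    # the registry entry; the distribution name is an assumption based on
    # the CLI command name.
    try:
        return version(dist)
    except PackageNotFoundError:
        return None
```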