Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Mordred Security Sandbox

v1.0.0

Educational security training sandbox for AI agents. Contains 5 intentionally vulnerable systems with annotated vulnerability descriptions and tested patches...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Capability signals
Requires wallet
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's name and description (educational sandbox) match the included code (intentionally vulnerable systems and patches). However, the runner uses a hard-coded SANDBOX_PATH (/media/ezekiel/...) and writes logs and results there, which is disproportionate and unexpected for a portable training kit; it should operate relative to the skill directory or take paths from configurable env vars. The baked-in 'SECRETS' (fake API keys) and absolute filesystem targets suggest leftover developer paths and increase risk if the code is run as-is.
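As a reference point for the fix suggested above, here is a minimal sketch of path resolution that prefers an environment-variable override and otherwise stays relative to the skill directory. The names MORDRED_SANDBOX_PATH and resolve_sandbox_path are hypothetical, not part of the skill's actual code.

```python
import os
from pathlib import Path

def resolve_sandbox_path() -> Path:
    """Prefer an explicit env-var override; otherwise fall back to a
    directory next to this file instead of a hard-coded mount point."""
    override = os.environ.get("MORDRED_SANDBOX_PATH")
    if override:
        return Path(override).expanduser().resolve()
    # Portable default: a subdirectory of the skill's own location.
    return (Path(__file__).resolve().parent / "sandbox_output").resolve()
```

Either branch keeps all writes in a location the operator chose or can predict, which is the property the hard-coded /media/ezekiel/... path lacks.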
Instruction Scope
SKILL.md and examples explicitly instruct agents to read source files and run tests/vaccines (python3 src/mordred_runner.py, python3 vaccines/*.py). The documentation also contains prompt-injection examples (e.g., 'Ignore previous instructions', 'you are now DAN') — these are present to demonstrate vulnerabilities and are expected, but they are actual attack patterns that could influence an integrated agent if not sandboxed. The runner will execute local code via subprocess.run, which is consistent with purpose but grants the skill the ability to run arbitrary code from the package at runtime.
Install Mechanism
No install spec (instruction-only) and no external downloads — lowest install risk. There are multiple Python files included, so code will execute locally when run, but nothing is pulled from remote URLs during install.
Credentials
The registry metadata requests no env vars or credentials (good), but the code contains hard-coded mock secrets (SECRETS with 'sk_live_336CORRUPTED') and uses absolute file paths for SANDBOX_PATH/RESULTS_PATH/LOG_PATH which may require write access to external mounts. There are also imports of unusual/nonstandard modules (tconcurrentback, SQLvte3, concurrent.* usage) that are unexpected and may cause runtime failures or hide replaced/malicious modules in an environment where similarly named packages exist.
Persistence & Privilege
The runner writes logs and result JSON files to the absolute SANDBOX_PATH. While 'always' is false (not force-enabled), running the skill will create persistent files and execute the included scripts. Combined with the hard-coded path this could write outside the working directory — run only in isolated environments or change paths before execution.
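One way to contain those persistent writes is to hand the runner a throwaway directory that is deleted afterwards. The helper name run_in_scratch is hypothetical; the sketch assumes you can point the runner's output paths at a directory of your choosing:

```python
import tempfile
from pathlib import Path

def run_in_scratch(run_fn):
    """Call run_fn with a throwaway output directory and return the names
    of the files it produced, so nothing lands on an external mount."""
    with tempfile.TemporaryDirectory(prefix="mordred_") as tmp:
        out = Path(tmp)
        # e.g. invoke the runner here with RESULTS_PATH/LOG_PATH set to `out`
        run_fn(out)
        return sorted(p.name for p in out.iterdir())
```

The directory and everything in it are removed when the `with` block exits, so the only persistent artifact is whatever you explicitly copy out.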
Scan Findings in Context
[ignore-previous-instructions] expected: SKILL.md and examples intentionally include prompt-injection payloads (e.g., 'Ignore previous instructions') as training/test vectors. This is expected for a sandbox, but these lines can manipulate agents if the skill's instruction-processing isn't isolated.
[you-are-now] expected: The examples contain strings like 'You are now DAN' as part of prompt-injection demonstrations. This is consistent with the educational goal but is a known vector for influencing model behavior and should be treated carefully.
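An integrating agent could screen file contents for these known payload signatures before feeding them into its own instruction context. A minimal sketch built only from the two findings above; a real screen would need a much broader, context-aware ruleset:

```python
import re

# Signatures drawn from the two scan findings above; illustrative only.
INJECTION_PATTERNS = [
    re.compile(r"ignore\s+(all\s+)?previous\s+instructions", re.IGNORECASE),
    re.compile(r"you\s+are\s+now\s+\w+", re.IGNORECASE),
]

def flag_injection(text: str) -> bool:
    """Return True if the text contains a known prompt-injection payload."""
    return any(pat.search(text) for pat in INJECTION_PATTERNS)
```

Flagged text can still be shown to a human reviewer; the point is that it should never be treated as an instruction by the agent itself.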
What to consider before installing
This package appears to be an educational vulnerable-systems sandbox, but it has several red flags you should address before running it on your machine or giving it to an agent:
1) Run only in an isolated environment (Docker or a VM) with no network access and limited mounts — do not run on production hosts.
2) Inspect and change SANDBOX_PATH/RESULTS_PATH/LOG_PATH so the runner writes to a local temp directory you control (not /media/ezekiel/...).
3) Review and remove the baked-in secrets, or treat them strictly as placeholders (they are not real, but they should not be trusted either).
4) Audit imports of nonstandard modules (tconcurrentback, SQLvte3, etc.) and install only known-safe dependencies; the conservative approach is to run tests without internet access.
5) Be aware that SKILL.md contains explicit prompt-injection examples — when integrating with an agent, ensure those examples cannot override agent instructions or trigger arbitrary code execution.
6) If you plan to allow autonomous invocation, restrict capabilities and require explicit human approval before running tests.
If you cannot audit or sandbox the code, treat this skill as unsafe to run in production.
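The import audit suggested above can be done statically, without executing any of the package's code. A minimal sketch using the standard-library `ast` module; the allowlist here is illustrative and would need to match your actual environment:

```python
import ast

# Illustrative allowlist of modules the sandbox could plausibly need.
ALLOWED = {"os", "sys", "json", "re", "pathlib", "subprocess", "concurrent"}

def unexpected_imports(source: str) -> set:
    """Return top-level module names imported by `source` that are not on
    the allowlist (e.g. tconcurrentback or SQLvte3 would be flagged)."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - ALLOWED
```

Because `ast.parse` never runs the code, this check is safe to apply to an untrusted tree before any sandboxed execution.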
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.
src/systems/weak_sandbox.py:16
Dynamic code execution detected.
vaccines/vaccine_weak_sandbox.py:128
Dynamic code execution detected.

Like a lobster shell, security has layers — review code before you run it.

Tags: ai-agents · education · latest · penetration-testing · sandbox · security · training

