Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Ctf Ai Ml
v1.0.0Provides AI and machine learning techniques for CTF challenges. Use when attacking ML models, crafting adversarial examples, performing model extraction, pro...
⭐ 0· 45·2 current·2 all-time
by@gandli
MIT-0
Download zip
LicenseMIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidencePurpose & Capability
Name/description (CTF AI/ML offensive techniques) aligns with the provided SKILL.md and the three large supporting documents (adversarial-ml.md, llm-attacks.md, model-attacks.md). There are no unexpected credentials, binaries, or install requirements declared in the registry that contradict the stated purpose.
Instruction Scope
SKILL.md contains concrete instructions that go beyond passive explanation: runnable curl/python examples that attempt prompt injection, model extraction, and scripts that could be pointed at live endpoints to retrieve system prompts or flags. These instructions legitimately belong to a CTF/attacker-training skill but also describe techniques that can be used to exfiltrate sensitive data if the operator points them at production services. The SKILL.md also contains a prompt-injection pattern (e.g., "Ignore previous instructions") which the static pre-scan flagged; while this is presented as an example, it could try to manipulate an agent or an automated evaluation if executed without safeguards.
Install Mechanism
This is an instruction-only skill with no install spec and no code files executed by the platform. The doc recommends pip installs, apt/brew commands for a working environment, but nothing is automatically downloaded or run by the skill installer — this is lower risk. Users should still avoid running suggested installs on production machines.
Credentials
The registry declares no required env vars or credentials (proportionate). However, the SKILL.md expects filesystem access (reading model files) and network access (curl/requests against target endpoints). The allowed tools list in SKILL.md (Bash, Read/Write/Edit/Glob/Grep, WebFetch/WebSearch, etc.) grants broad I/O capabilities which are appropriate for model analysis but increase risk if the agent has access to sensitive data or production networks.
Persistence & Privilege
The skill does not request always:true and has no install hook, so it does not demand forced permanent inclusion. There is a metadata inconsistency: registry flags indicate user-invocable: true while SKILL.md metadata contains user-invocable: "false" — this mismatch should be resolved by the publisher before trusting invocation behavior.
Scan Findings in Context
[ignore-previous-instructions] expected: The phrase is present because the skill documents prompt injection techniques and provides example payloads. That is expected for a CTF adversary-teaching skill, but it also means the skill contains literal instructions that could be used to extract secrets if executed against real systems.
What to consider before installing
This skill is coherent with its advertised purpose (CTF/offensive ML guidance) but contains runnable examples for prompt injection, model extraction, and other offensive techniques that can exfiltrate secrets if run against real systems. Before installing or using: 1) Run only in an isolated sandbox or offline VM with no access to production networks or sensitive files. 2) Do not point example curl/requests at real production endpoints; replace targets with local test services. 3) Review and restrict the agent's allowed tools/permissions (file read/write, web access) so the skill cannot access unrelated secrets. 4) Ask the publisher to resolve the metadata mismatch about user-invocable behavior (SKILL.md says user-invocable:false while registry metadata indicates true). 5) If you lack legal/ethical authorization for offensive testing, do not use these techniques on external systems. If you want a safer alternative, request a red-team or CTF sandbox specifically configured for adversarial ML experiments.adversarial-ml.md:331
Prompt-injection style instruction pattern detected.
llm-attacks.md:28
Prompt-injection style instruction pattern detected.
SKILL.md:76
Prompt-injection style instruction pattern detected.
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.Like a lobster shell, security has layers — review code before you run it.
latestvk97d4bfs25b0m2pxpqcn3ahne583xxdk
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
