Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Test Safety

v1.0.2

Security guard skill for OpenClaw - Analyzes user input for harmful content, risky commands, and security threats before invoking LLM

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill claims to analyze input for harmful content and references LLM provider API keys (OpenAI, Anthropic, xAI, Google); those requirements are proportionate to a safety-guard skill. However, the skill is instruction-only (no code provided) yet shows CLI usage examples for a 'safety-guard' binary that is not included, so it is unclear how the runtime behavior is supplied. That gap reduces confidence in the capability claims.
Instruction Scope
SKILL.md describes fetching content from URLs, local files, and YouTube and references optional external services (FIRECRAWL, APIFY). Those actions are plausible for content-extraction and safety checks, but the file also references a user config path (~/.safety-guard/config.json) that isn't declared in the registry metadata. Because the skill can instruct fetching external resources and reading/writing a local config, you should verify exactly what code will run and what data will be read or transmitted before use.
Install Mechanism
Registry metadata at the top level reported 'No install spec', yet SKILL.md includes embedded metadata that lists a pip install step for PyYAML and requires python3. Additionally, the bundled _meta.json file does not match the registry metadata (different slug/owner/version). These inconsistencies suggest packaging or provenance problems: either the skill is incomplete (instruction-only but referencing an external CLI that is not supplied), or files were copied or mislabelled. Either way, the mismatch increases risk because you cannot verify what will be installed or run.
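One way to surface the provenance mismatch described above is to diff the identity fields of the two metadata sources before installing. A minimal sketch in Python; the file contents below are hypothetical stand-ins (in practice you would load the registry record from ClawHub and the bundled _meta.json from the skill package, and the real field names may differ):

```python
import json

# Hypothetical examples of the two metadata records described above;
# replace with the actual registry response and bundled _meta.json.
registry_meta = json.loads('{"slug": "test-safety", "owner": "alice", "version": "1.0.2"}')
bundled_meta = json.loads('{"slug": "safety-guard", "owner": "bob", "version": "0.9.0"}')

def metadata_mismatches(registry: dict, bundled: dict,
                        fields=("slug", "owner", "version")) -> dict:
    """Return the identity fields on which the two metadata records disagree."""
    return {f: (registry.get(f), bundled.get(f))
            for f in fields
            if registry.get(f) != bundled.get(f)}

for field, (reg, bun) in metadata_mismatches(registry_meta, bundled_meta).items():
    print(f"{field}: registry={reg!r} bundled={bun!r}")
```

Any non-empty result is a provenance red flag worth resolving with the author before proceeding.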
Credentials
The skill does not declare any required environment variables and only suggests standard LLM provider API keys and optional API tokens for content-extraction services. Those are proportionate for a safety-guard that needs to call LLMs and optional crawlers. No unrelated credentials (cloud keys, SSH keys, etc.) are requested.
Persistence & Privilege
The skill is not always-enabled, is user-invocable, and does not request persistent privileges in the metadata. It mentions an optional local config path (~/.safety-guard/config.json) which would be typical for a CLI tool, but the presence of that path in SKILL.md without being declared is a packaging inconsistency to verify.
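To verify what the skill actually reads or writes under that config path, you can snapshot file hashes before and after a sandboxed test run and diff the snapshots. A minimal sketch, assuming you point it at the directory in question (e.g. ~/.safety-guard):

```python
import hashlib
from pathlib import Path

def snapshot(root: Path) -> dict:
    """Map each file under root to the SHA-256 hex digest of its contents."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def diff_snapshots(before: dict, after: dict) -> dict:
    """Report files added, removed, or modified between two snapshots."""
    return {
        "added": sorted(after.keys() - before.keys()),
        "removed": sorted(before.keys() - after.keys()),
        "modified": sorted(k for k in before.keys() & after.keys()
                           if before[k] != after[k]),
    }
```

Take one snapshot before invoking the skill and one after; any unexpected additions or modifications are worth inspecting before trusting it with real credentials.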
What to consider before installing
Do not install or provide API keys yet. Steps to take before trusting this skill:
1) Inspect the referenced GitHub repo and confirm the CLI or Python code actually exists and matches this SKILL.md.
2) Verify the author/owner identity (the registry metadata and _meta.json disagree on slug/owner/version).
3) If you plan to use it, run it in a sandboxed environment (no real API keys) and monitor network calls to see what endpoints it contacts.
4) Confirm what files it will read and write (it mentions ~/.safety-guard/config.json) and that it won't exfiltrate sensitive data.
5) Prefer least-privilege API keys (scoped, revocable) and rotate them after testing.
If you can't find source code that implements the CLI behavior described, treat the skill as incomplete and untrustworthy.
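The "no real API keys" part of the steps above can be approximated by stripping credential-like variables from the environment before launching the tool in a sandbox. A minimal sketch; the variable-name patterns are an assumption of common conventions, not a list the skill declares:

```python
import os
import re

# Patterns matching credential-style variable names (an assumption;
# extend to cover whatever providers you actually have configured).
SENSITIVE = re.compile(r"(API_KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)", re.IGNORECASE)

def scrubbed_env(env=None) -> dict:
    """Return a copy of the environment with credential-like variables removed."""
    env = dict(os.environ if env is None else env)
    return {k: v for k, v in env.items() if not SENSITIVE.search(k)}

# The scrubbed mapping can then be passed as the env= argument of
# subprocess.run(...) when invoking the skill's CLI inside a sandbox.
```

Combined with network monitoring, this lets you observe the skill's behavior without exposing anything revocable only after the fact.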

Like a lobster shell, security has layers — review code before you run it.

latest · vk9701xdtqhrw7kk3ahtwx1w2yn82zk29


Runtime requirements

🛡️ Clawdis
Bins: python3
