Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

safe-guard

v1.0.2

A security-protection tool for Claude Code / OpenClaw Skills. Three core capabilities: (1) an always-active PreToolUse hook that intercepts high-risk operations; (2) deep scanning that combines static regex rules with LLM semantic auditing; (3) running scripts in an isolated sandbox while monitoring their behavior. Supports scan-only, safe-run, sandbox-test...

by Igloos@igloomatics
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description claim a scanning + sandbox + hook product, and the repository contains matching components (quick_scan.py, sandbox_run.py, danger_guard.py, hooks.json, a checklist of known threats). The files and declared capabilities are largely coherent with the stated purpose. However, the code contains deliberate keyword segmentation and many `# noscan` markers (see quick_scan.py, danger_guard.py, sandbox_run.py), an uncommon pattern for legitimate tools because it resembles evasion of static detectors. The choice is explainable (it can prevent the scanner's own pattern strings from triggering false positives), but it is unusual and the author should justify it.
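To see why keyword segmentation defeats static detectors, consider this minimal sketch (the strings here are illustrative; the actual fragments in quick_scan.py may differ). A regex scanner inspects source text, so a keyword assembled at runtime never appears contiguously in that text:

```python
import re

# A naive static scanner looking for a sensitive path keyword.
PATTERN = re.compile(r"\.aws")

# What a static scanner sees is the *source text*, not runtime values.
source_literal = 'path = ".aws"'        # keyword visible in source
source_segmented = 'path = ".a" + "ws"' # keyword split across two literals

print(PATTERN.search(source_literal) is not None)    # True: flagged
print(PATTERN.search(source_segmented) is not None)  # False: evades the regex

# At runtime, both expressions produce the identical string.
print(".a" + "ws" == ".aws")                         # True
```

This is exactly why the technique cuts both ways: it stops a self-scan from flagging the scanner's own rule strings, but it is also a standard malware evasion trick.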
Instruction Scope
SKILL.md prescribes scanning by reading every file under the target skill directory, optionally cloning remote repos into a temp dir, running the static regex scanner, and optionally performing a sandbox run. That scope is consistent with a deep audit tool. Note that reading every file and running scripts in a sandbox means the tool will access potentially sensitive files in scanned repos (e.g., .env files or keys committed by mistake); this is expected for an auditor but is a privacy consideration.
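The "read every file" scope amounts to a recursive directory walk with pattern matching. A minimal sketch (not the skill's actual quick_scan.py; the two patterns below are illustrative):

```python
import re
from pathlib import Path

# Illustrative rules; the real scanner's rules live in quick_scan.py.
SUSPICIOUS = [
    re.compile(r"curl\s+.*\|\s*(ba)?sh"),  # pipe-to-shell download
    re.compile(r"eval\s*\("),              # dynamic code execution
]

def scan_dir(root: str):
    """Walk every file under root and return (path, line_no, pattern) hits."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="replace")
        except OSError:
            continue  # skip unreadable files (permissions, special files)
        for lineno, line in enumerate(text.splitlines(), 1):
            for pat in SUSPICIOUS:
                if pat.search(line):
                    hits.append((str(path), lineno, pat.pattern))
    return hits
```

Note that such a walk makes no distinction between code and data: a committed .env file gets read like anything else, which is the privacy point above.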
Install Mechanism
No external install/download steps are required; this is instruction+code in the skill bundle (no remote fetch or installer). The tool runs local Python scripts and a hook command via hooks.json. No high-risk download URLs or package installs were found.
Credentials
The skill does not request external credentials or environment variables. It performs file reads within target skill directories and may clone remote repos into a temporary directory for scanning; the sandbox uses Path.home() to construct deny-lists. That's proportionate for an auditor, but it means the tool will (by design) examine files that can contain secrets. Also review .claude/settings.local.json included in the package — it lists many permissive Bash invocation patterns which could influence what gets executed when the skill is loaded; understand platform permission semantics before enabling.
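The Path.home()-based deny-list mentioned above is presumably built along these lines (a sketch under assumption; the real sandbox_run.py logic and its list of protected paths were not reproduced here):

```python
from pathlib import Path

HOME = Path.home()

# Hypothetical deny-list: sensitive locations the sandbox refuses to expose.
DENY = [(HOME / name).resolve() for name in (".ssh", ".aws", ".gnupg", ".bashrc")]

def is_denied(target: str) -> bool:
    """True if target resolves to, or lies inside, any deny-listed path."""
    p = Path(target).expanduser().resolve()
    return any(p == d or d in p.parents for d in DENY)
```

Resolving both sides before comparing is the important detail: a sandboxed script that reaches for `~/.ssh/../.ssh/id_rsa` or a symlink into the home directory still hits the deny-list.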
Persistence & Privilege
The skill ships a PreToolUse hook (hooks/hooks.json) that will be registered by the platform and run automatically to intercept tool calls (Bash, Edit/Write matchers). That hook can block tool operations (exits with nonzero) and persists session state in a temp directory. This behavior is consistent with the claimed 'always-active interception' feature, but it is a high-impact capability: a malicious or buggy hook could block or tamper with other agent actions. The skill's registry metadata does not set platform-level 'always: true', but the hook registration itself grants it an always-running interception role when installed.
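For readers unfamiliar with the mechanism: the platform invokes a registered PreToolUse command before each matched tool call, and a nonzero exit blocks the call. A minimal gatekeeper in the spirit of danger_guard.py might look like this (the block-list and the exact JSON payload shape are assumptions; check the platform's hook documentation and the shipped hooks/hooks.json for the real contract):

```python
import json
import re
import sys

# Hypothetical block-list; the shipped danger_guard.py defines its own rules.
BLOCKED = [
    re.compile(r"\brm\s+-rf\s+/"),   # recursive delete from root
    re.compile(r"curl\s+.*\|\s*sh"), # pipe-to-shell download
]

def check(command: str) -> bool:
    """True if the command matches any blocked pattern."""
    return any(pat.search(command) for pat in BLOCKED)

def main() -> int:
    # The hook receives the pending tool call as JSON on stdin.
    payload = json.load(sys.stdin)
    command = payload.get("tool_input", {}).get("command", "")
    if check(command):
        print("blocked by safe-guard pattern", file=sys.stderr)
        return 2   # nonzero exit: the platform blocks the tool call
    return 0       # zero exit: allow the call

# When installed as the hook command, run: sys.exit(main())
```

The high-impact nature of the capability is visible in the sketch: a single buggy regex here would silently veto legitimate agent actions for every session in which the hook is registered.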
What to consider before installing
This skill appears to implement what it claims (static scanner, LLM checklist, sandbox, and a PreToolUse hook), but there are red flags that justify extra caution:

- Keyword segmentation and many `# noscan` markers: the code fragments keywords (e.g., '.a'+'ws', 'bash'+'rc') to avoid static detection. While the author may claim this prevents false positives, the same technique is commonly used by malware to evade scanners. Ask the author to justify why segmentation is needed and request a version without obfuscation for review.
- The PreToolUse hook is high-impact: installing the skill registers a hook that runs on tool calls and can block them. Only enable it in environments where you can tolerate a gatekeeper script; prefer the scan-only or sandbox-test modes first.
- Review permissions and local settings: inspect .claude/settings.local.json and hooks/hooks.json to ensure no platform-level permission escalations or overly permissive allowed commands are granted implicitly.
- Run tests in an isolated environment: before enabling the hook in your main agent, run the skill on non-sensitive sample skills in a disposable VM or container and observe its behavior. Use sandbox-run and quick-scan locally and verify the outputs.
- Validate source provenance: the package has no homepage and an unknown owner. Prefer code from auditable, known sources, or ask the publisher for a transparency statement and reproducible build steps.

A short checklist to proceed safely: (1) ask the author why obfuscation is used; (2) run quick_scan.py and sandbox_run.py locally on a copied sample; (3) do not enable hooks in production until reviewed; (4) consider limiting the hook to scan-only mode or requiring explicit user confirmation before allowing the hook to persist.
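Point (2) of the checklist, running quick_scan.py on a copied sample, can be scripted so the scan never touches the original files. A sketch assuming the script accepts a target directory as its first argument (verify the real CLI in SKILL.md before relying on this):

```python
import shutil
import subprocess
import sys
import tempfile
from pathlib import Path

def scan_copy(skill_dir: str, scanner: str = "scripts/quick_scan.py") -> int:
    """Copy the skill into a throwaway directory and run the scanner on the
    copy, so the scan can never modify the original files."""
    with tempfile.TemporaryDirectory() as tmp:
        sample = Path(tmp) / "sample"
        shutil.copytree(skill_dir, sample)
        result = subprocess.run(
            [sys.executable, scanner, str(sample)],
            capture_output=True, text=True,
        )
        print(result.stdout, end="")
        return result.returncode
```

Running the scanner in a child process with captured output also keeps any surprise behavior of the scanner itself away from your interactive session.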
scripts/quick_scan.py:224
File read combined with network send (possible exfiltration).
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.
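The finding at scripts/quick_scan.py:224 is a co-occurrence heuristic: it fires when file-read and network-send primitives both appear in the same source, which is exactly why skills that legitimately integrate with external APIs trigger it benignly. A minimal version of such a heuristic (illustrative patterns, not the scanner's actual rule):

```python
import re

# Illustrative primitives; a real rule set would be much broader.
READ = re.compile(r"\bopen\s*\(|read_text\s*\(")
SEND = re.compile(r"requests\.(post|get)\s*\(|urlopen\s*\(")

def possible_exfiltration(source: str) -> bool:
    """Flag source text where a file read and a network send co-occur.
    High recall, low precision: any API client that also reads a local
    config file will match, hence the 'may be normal' caveat above."""
    return bool(READ.search(source)) and bool(SEND.search(source))
```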

Like a lobster shell, security has layers — review code before you run it.

latest: vk9781a72xg71sdff1sp3qk4egx833gz7

