Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Agent Security Skill Scanner Gitee

v4.1.6

AI Agent security scanner - multi-language detection + AST analysis + intent recognition + LLM verification

MIT-0
Download zip
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description (AI Agent security scanner) match the included code (AST/static analysis, an intent detector, an LLM analyzer). However, the SKILL metadata/registry lists no required environment variables, while the runtime docs repeatedly instruct the user to set LLM_API_KEY, ENABLE_LLM_ANALYSIS, LLM_API_URL, FEISHU_WEBHOOK, and ALERT_EMAIL. That is a mismatch between the declared requirements and what the skill asks the user to provide. The code and docs also reference optional services (a Redis message bus, a Docker sandbox, external sample libraries) that are not declared in the registry metadata; these are plausible for a scanner but should be clearly declared.
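For context on what the "AST analysis" claim entails, here is a minimal, hypothetical sketch (not the skill's actual code) of how an AST-based detector can flag dynamic code execution, the kind of finding listed in the scan results below:

```python
import ast

# Builtins commonly flagged as "dynamic code execution" by static scanners.
DYNAMIC_EXEC = {"eval", "exec", "compile", "__import__"}

def flag_dynamic_exec(source: str) -> list[tuple[int, str]]:
    """Return (line_number, callee) pairs for calls to dynamic-execution builtins."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DYNAMIC_EXEC):
            findings.append((node.lineno, node.func.id))
    return findings

print(flag_dynamic_exec("x = 1\neval('x + 1')\n"))  # → [(2, 'eval')]
```

A real detector layers intent heuristics on top of raw pattern hits, which is why the report pairs the static findings with LLM verification.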
Instruction Scope
SKILL.md instructs running background daemons (nohup python3 lingshun_scanner_daemon.py ...), running shell scripts (lingshun_optimize.sh, orchestration scripts), and enabling LLM analysis via env vars. It also references scanning arbitrary filesystem paths and running dynamic/sandbox analyses. Those instructions allow long-running processes, network calls, and arbitrary system interactions: appropriate for a scanner, but they broaden the runtime scope (daemon persistence, filesystem and network access). SKILL.md also contains embedded unicode control characters (a prompt-injection signal) that could manipulate downstream processing or display; this should be investigated.
Install Mechanism
There is no network download/install spec in the registry (no install section). The package is delivered with its Python source files included, which lowers supply-chain risk compared to an installer that fetches arbitrary archives. The repository contains many scripts and sample tooling; running them still executes local code, so standard code-review and sandbox precautions apply.
Credentials
The registry metadata declares no required env vars or credentials, but SKILL.md instructs the user to set LLM_API_KEY, LLM_API_URL, ENABLE_LLM_ANALYSIS, FEISHU_WEBHOOK, and ALERT_EMAIL. Requesting an LLM API key and a webhook is plausible for an optional LLM-analysis and alerting feature, but the absence of these from the declared requires.env is an inconsistency. Additionally, the docs and code reference other services (a Redis URL, local sample libraries) and optional dependencies (openai, requests) that imply network and secret access not enumerated in the metadata. If you enable LLM analysis you must provide an API key; treat it as sensitive and do not reuse high-privilege keys without review.
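Since those env vars come from SKILL.md rather than declared metadata, a cautious wrapper can gate the feature explicitly before any key is used. A hypothetical sketch (the variable names are taken from the docs; the gating logic is ours, not the skill's):

```python
import os

# Env vars named in SKILL.md but absent from the registry's requires.env.
LLM_VARS = ("LLM_API_KEY", "LLM_API_URL", "ENABLE_LLM_ANALYSIS")

def llm_config(env=os.environ):
    """Return LLM settings only if analysis is explicitly enabled AND a key is set."""
    enabled = env.get("ENABLE_LLM_ANALYSIS", "").lower() in {"1", "true", "yes"}
    if not enabled:
        return None  # LLM analysis stays off unless explicitly opted in
    key = env.get("LLM_API_KEY")
    if not key:
        raise RuntimeError("ENABLE_LLM_ANALYSIS is set but LLM_API_KEY is missing")
    return {"key": key, "url": env.get("LLM_API_URL", "")}

print(llm_config({}))  # → None (disabled by default)
```

The point of the sketch is the default: with no env vars set, nothing is sent anywhere, which matches how an undeclared optional feature should behave.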
Persistence & Privilege
The skill metadata does not request always:true, and autonomous invocation is the default. However, SKILL.md encourages starting a background daemon (nohup ... &), which creates persistent processes on the host. That persistence is not enforced by the platform but is part of the runtime instructions; users should be aware that installing and running the skill per its docs may create long-lived processes and log files.
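Before and after a sandboxed test run, it helps to enumerate any processes the daemon instructions may have left behind. A Linux-only sketch that reads /proc directly (the "lingshun" substring is taken from the script names in the docs):

```python
import os
from pathlib import Path

def find_processes(substring: str) -> list[tuple[int, str]]:
    """List (pid, cmdline) for live processes whose command line contains substring."""
    matches = []
    for entry in Path("/proc").iterdir():
        if not entry.name.isdigit():
            continue  # skip non-process entries like /proc/meminfo
        try:
            raw = (entry / "cmdline").read_bytes()
        except OSError:
            continue  # process exited mid-scan, or permission denied
        cmdline = raw.replace(b"\0", b" ").decode(errors="replace").strip()
        if substring in cmdline:
            matches.append((int(entry.name), cmdline))
    return matches

# After a test run, confirm nothing from the skill is still alive:
for pid, cmd in find_processes("lingshun"):
    print(f"still running: {pid} {cmd}")
```

On other platforms, or for a managed setup, a systemd unit with a matching `systemctl stop` is the sturdier answer, as the checklist below suggests.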
Scan Findings in Context
[unicode-control-chars] Unexpected: unicode control characters were detected inside SKILL.md. This is a prompt-injection / obfuscation indicator: control characters can hide or change rendered instructions, or be used to manipulate downstream parsers. This is not expected in a normal README and should be examined; it reduces trust in the displayed instructions until verified.
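One way to examine the file yourself: Python's unicodedata module classifies control (Cc) and format (Cf) characters, which covers bidi overrides and zero-width characters commonly used for this kind of obfuscation. An illustrative sketch:

```python
import unicodedata

# Ordinary whitespace controls that are fine in a README.
SAFE = {"\n", "\r", "\t"}

def find_control_chars(text: str) -> list[tuple[int, str, str]]:
    """Return (index, codepoint, name) for suspicious control/format characters."""
    hits = []
    for i, ch in enumerate(text):
        if ch in SAFE:
            continue
        if unicodedata.category(ch) in ("Cc", "Cf"):
            hits.append((i, f"U+{ord(ch):04X}", unicodedata.name(ch, "<unnamed>")))
    return hits

# A right-to-left override hidden mid-sentence:
print(find_control_chars("run this\u202e safely"))  # → [(8, 'U+202E', 'RIGHT-TO-LEFT OVERRIDE')]
```

Run it over SKILL.md (`find_control_chars(open("SKILL.md", encoding="utf-8").read())`) and expect an empty list from a clean file.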
What to consider before installing
- Treat the LLM/API instructions as optional but sensitive. Do NOT provide a high-privilege LLM_API_KEY (or reuse an org-wide key) until you've code-reviewed llm_analyzer.py and any code that sends data externally. Prefer a low-privilege or rate-limited test key.
- Inspect SKILL.md and the repository for hidden/control characters and ensure the text hasn't been tampered with (the pre-scan flagged unicode-control-chars). Open the file in a hex-aware editor or run a script to reveal non-printable characters.
- Review the included scripts that the docs tell you to run (lingshun_scanner_daemon.py, lingshun_optimize.sh, lingshun_task_orchestration.sh). Run them only in an isolated sandbox (VM/container) first; they start daemons and may make network calls or write logs.
- Check any network endpoints the code calls (LLM_API_URL, webhooks, Redis URLs). Replace placeholder endpoints (api.example.com) with your own trusted endpoints if you intend to enable LLM analysis, or leave ENABLE_LLM_ANALYSIS unset entirely.
- Verify there are no unexpected hardcoded endpoints, credentials, or file paths in the source (search for 'http', 'ftp', 'api', 'token', 'password', '~/', '/etc', redis.from_url, requests.post, urllib, etc.). The docs reference a samples directory on a home path; confirm the release package does not include or exfiltrate local sample data.
- Because the skill may start persistent background processes, plan how to stop them (a systemd unit or kill scripts) and inspect logs after a test run.
- If you lack the capacity to audit the code yourself, run the skill only in a tightly isolated environment with no sensitive credentials, no access to production secrets, and network egress blocked (or routed through a proxy you control) until you are satisfied.
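The endpoint/credential search in that checklist can be scripted. A minimal sketch using the indicator strings listed above (the path in the usage comment is illustrative):

```python
import re
from pathlib import Path

# Indicator strings from the checklist; extend to taste.
PATTERNS = re.compile(
    r"https?://|ftp://|api[_-]?key|token|password|redis\.from_url|"
    r"requests\.post|urllib|~/|/etc/",
    re.IGNORECASE,
)

def scan_tree(root: str):
    """Yield (path, line_no, line) for source lines matching an indicator."""
    for path in sorted(Path(root).rglob("*.py")):
        try:
            text = path.read_text(errors="replace")
        except OSError:
            continue  # unreadable file: skip rather than abort the audit
        for no, line in enumerate(text.splitlines(), start=1):
            if PATTERNS.search(line):
                yield (str(path), no, line.strip())

# Usage, against an unpacked copy of the skill (path is illustrative):
# for hit in scan_tree("./agent-security-skill-scanner"):
#     print(*hit)
```

Expect hits: a scanner legitimately touches the network and filesystem. The audit question is whether each hit points at an endpoint and purpose you recognize.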
Reason for the 'suspicious' verdict: the project is coherent for its stated purpose, but the mismatch between declared metadata and runtime instructions, the hidden control characters in SKILL.md, and the instructions to run background processes and external LLM/webhook integrations create opportunities for misuse or unintended data exposure. Manual review or sandbox testing is recommended before granting it access to real credentials or production systems.
src/engine/smart_pattern_detector.py:21
Shell command execution detected (child_process).
src/engine/smart_pattern_detector.py:21
Dynamic code execution detected.
src/multi_language_scanner_v4.py:411
Dynamic code execution detected.
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

Like a lobster shell, security has layers — review code before you run it.

latest · vk978xnzx25m1qespsz3jq4d8rn84c900

