Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Li Python Sec Check

v0.0.2

Python security compliance checker - based on CloudBase conventions + Tencent security guidelines + LLM-assisted analysis (LLM disabled by default; local execution preferred)

by Terry S Fisher (@43622283)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (Python security checks + optional LLM) matches the included code and docs. The code implements static checks, privacy/data checks, and an optional LLM analyzer. No unrelated credentials or binaries are required.
Instruction Scope
SKILL.md and SECURITY_AND_PRIVACY.md clearly state that core checks run locally and that LLM analysis is opt-in via --llm. The LLM module sends code snippets and scan results to the configured API only when an API key is present and LLM analysis is enabled. You should still inspect scripts/python_sec_check.py to confirm that LLM calls are gated by the CLI flag before enabling networked analysis.
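The gating behavior described above can be sketched as follows. The --llm flag and the LLM_API_KEY variable name come from the skill's documentation; the parser itself is illustrative, not the skill's actual code:

```python
import argparse

# Illustrative sketch of the opt-in gating described above (NOT the skill's
# actual implementation): LLM analysis should run only when --llm is passed
# AND a key is configured -- never from the environment alone.
def llm_enabled(argv, env):
    parser = argparse.ArgumentParser()
    parser.add_argument("path")
    parser.add_argument("--llm", action="store_true")
    args = parser.parse_args(argv)
    return args.llm and bool(env.get("LLM_API_KEY"))
```

With this shape, an API key left in the environment is not by itself enough to trigger network calls, which is the property worth verifying in scripts/python_sec_check.py.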
Install Mechanism
No install spec; the package ships as plain code files, with no remote downloads at install time. This is low-risk. The only network use is in the optional LLM analyzer, which uses requests when an API key is provided.
Credentials
No required environment variables. Optional env vars (LLM_API_KEY, LLM_API_BASE) are reasonable and documented for the LLM feature. The skill does not request unrelated secrets or system config paths.
Persistence & Privilege
always:false and no special privileges are requested. Autonomous invocation is allowed by default (platform standard). If you enable LLM/networking and the agent is allowed to call the skill autonomously, that combination increases blast radius because code snippets can be sent to the configured endpoint — but the skill itself documents and requires explicit LLM usage.
Assessment
This skill is coherent with its purpose, but follow these precautions before use:

1. Do not enable --llm when scanning sensitive or private code unless you trust and control the configured API endpoint.
2. If you must use LLM analysis in an enterprise setting, point LLM_API_BASE at an internal/private LLM and provide a dedicated key.
3. Inspect scripts/python_sec_check.py and scripts/llm_analyzer.py to confirm that LLM calls are made only when the CLI flag is used, and that no API key is picked up silently from the environment.
4. Run scans in an isolated environment (container/VM) when first evaluating the tool, and ensure no accidental LLM_API_KEY is present in CI environment variables.
5. If you allow autonomous agents to invoke skills, be cautious about enabling the LLM feature, since it will transmit code snippets to the configured endpoint.
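The check for stray credentials can be automated with a small pre-flight helper. The variable names come from the skill's documentation; the helper itself is hypothetical:

```python
import os

# Hypothetical pre-flight check: refuse to scan if an LLM credential is
# already present in the environment (variable names from the skill's docs).
def assert_no_llm_creds(env=None):
    env = os.environ if env is None else env
    leaked = [k for k in ("LLM_API_KEY", "LLM_API_BASE") if k in env]
    if leaked:
        raise RuntimeError(f"Unset {leaked} before scanning sensitive code")
    return True
```

Running this before invoking the scanner in CI catches the accidental-key scenario described above.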
Patterns worth reviewing

These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

- examples/unsafe-example/app.py:36 · Dynamic code execution detected.
- scripts/python_sec_check.py:257 · Dynamic code execution detected.
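For context, "dynamic code execution" findings typically refer to eval/exec calls. A minimal sketch of such a check, illustrative only and not the skill's actual detector:

```python
import ast

# Minimal illustration of an eval/exec detector -- the kind of pattern the
# findings above flag. This is NOT the skill's implementation.
def find_dynamic_exec(source):
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in {"eval", "exec"}):
            hits.append(node.lineno)
    return hits
```

For example, find_dynamic_exec("result = eval(user_input)") reports a hit on line 1, while ordinary function calls are ignored.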

Like a lobster shell, security has layers — review code before you run it.

latest · vk971bqt3pbft95svk08stqhngs83a2m6

