Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

performance-mastery

v3.8.1

Full-stack performance engineer — a Linux system and programming-language performance analysis and tuning expert. Covers CPU, memory, disk I/O, networking, kernel parameters, compiler optimization, eBPF tracing, benchmarking, and containers/K8s. Trigger scenarios: (1) system slowdown / stutter / high load / high load average (2) low memory / OOM kill / high swap / memory leak (3) CPU...

by joeytao@husttsq
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
- VirusTotal: Pending
- OpenClaw: Suspicious (high confidence)
Purpose & Capability
The name/description and the included scripts (collect_snapshot.sh, perf_monitor.sh, bench-compare.sh, python-perf-test.py) align with a Linux performance-engineering skill: they collect /proc/sys data, run diagnostics, and offer tuning commands. This is expected for the stated purpose. One mismatch: scripts/run-evals.py is an LLM evaluation runner (calls OpenAI-compatible APIs) which is not mentioned in the skill manifest's requirements (no env vars declared). That file is plausibly for test/eval purposes, but its presence is unexpected relative to the declared manifest.
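The collection side of such a skill is typically read-only. A minimal sketch of the kind of probes collect_snapshot.sh plausibly runs (the script itself was not audited line by line here; these commands are stand-ins):

```shell
#!/bin/sh
# Read-only diagnostics of the kind a snapshot collector gathers.
# Safe to run unprivileged on any Linux box.
cat /proc/loadavg                              # 1/5/15-min load averages
grep -E 'MemTotal|MemAvailable' /proc/meminfo  # headline memory figures
cat /proc/sys/vm/swappiness                    # one sysctl knob it may read
```

Probes like these change nothing; the risk in this skill lies in the tuning commands, not the collection.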
Instruction Scope
SKILL.md instructs the agent/user to run local collection and monitoring scripts that read kernel logs, /proc, sysctl, dmesg, and other system state — all reasonable for performance analysis. The guidance also includes example one-liners that persist changes (echoing into /etc/sysctl.d, writing to /sys/kernel/mm/..., etc.). Those are coherent with tuning tasks but high-impact: they require root and modify system-wide configuration. The repo also contains an eval script that can send prompts/data to external LLM endpoints when run with an API key; SKILL.md never instructs uploading snapshots to external services, so this external-call capability is an unexpected side channel.
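The persistent style of change the guidance demonstrates amounts to dropping a file like the following under /etc/sysctl.d, which the kernel's sysctl machinery reapplies at every boot. The filename and values here are illustrative, not the skill's own recommendations:

```
# /etc/sysctl.d/99-perf-tuning.conf — hypothetical example.
# Loaded by `sysctl --system` at boot; persists until the file is removed.
vm.swappiness = 10
net.core.somaxconn = 4096
```

A transient `sysctl -w` reverts on reboot; a file like this does not, which is why the review flags these one-liners as high-impact.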
Install Mechanism
No install spec; this is an instruction-plus-scripts skill with no remote download or archive extraction. Files are local and nothing in the manifest attempts to fetch arbitrary code at install time — lower install risk.
Credentials
The declared requirements list no environment variables or primary credential, yet scripts/run-evals.py references OPENAI_API_KEY, OPENAI_BASE_URL, and EVAL_MODEL (and its comments show pip dependencies such as openai/pyyaml). That is a surprising, undeclared credential requirement. Beyond that, the scripts use standard runtime environment variables (TMPDIR) and require typical system tools; they do not request unrelated cloud or secret credentials. The undeclared OpenAI API usage is the main proportionality mismatch.
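Mismatches like this can be caught with a quick grep before trusting a skill: enumerate the environment variables a script actually reads and compare against the manifest. The sample file below stands in for scripts/run-evals.py (its real contents were not reproduced here):

```shell
#!/bin/sh
# Audit sketch: list environment variables a Python script reads.
# The sample file is a stand-in for scripts/run-evals.py.
cat > /tmp/sample-evals.py <<'EOF'
import os
key = os.environ["OPENAI_API_KEY"]
base = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
model = os.environ.get("EVAL_MODEL", "gpt-4o-mini")
EOF
# Match os.environ["NAME"] and os.environ.get("NAME"), keep just the names.
grep -oE 'environ\["[A-Z_]+|environ\.get\("[A-Z_]+' /tmp/sample-evals.py \
  | grep -oE '[A-Z_]+$' | sort -u
```

Any name this prints that the manifest does not declare is exactly the kind of surprise flagged above.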
Persistence & Privilege
always:false and no automatic model-disable flags — no forced-install privilege. However the content includes explicit example commands that persist kernel/sysctl changes and advice that writes to /etc/sysctl.d and other system paths; these require root and can change system behavior permanently. The skill itself does not declare it will autonomously make such changes, but a user or an automated agent running the provided commands/script could perform privileged modifications.
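Whether one of these privileged example commands has already been applied persistently can be checked read-only, without root. A sketch using vm.swappiness as the example tunable:

```shell
#!/bin/sh
# Read-only persistence check: compare the running value of a tunable
# with anything written into sysctl config files.
echo "running: $(cat /proc/sys/vm/swappiness)"
grep -rhs '^vm\.swappiness' /etc/sysctl.conf /etc/sysctl.d/ 2>/dev/null \
  || echo "vm.swappiness not persisted"
```

If the second command prints a config line, a reboot-surviving change exists; if not, any tuning done so far was transient.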
What to consider before installing
This skill appears to be a legitimate performance-troubleshooting toolkit, but review it before running:

1. Inspect the scripts locally (collect_snapshot.sh, perf_monitor.sh, bench-compare.sh) so you understand which system files they read and which commands they run. They intentionally read kernel logs and /proc, and some checks may require root.
2. Be cautious with example commands that echo into /etc/sysctl.d or write to /sys: these make persistent, system-wide changes. Test in staging and back up configs before applying.
3. The repository contains scripts/run-evals.py, which can call an OpenAI-compatible API if you provide OPENAI_API_KEY / OPENAI_BASE_URL; the skill manifest declares no required credentials. If you do not intend to send diagnostic data externally, do not run run-evals.py with an API key, and/or remove that script.
4. Avoid running any of these scripts unattended or under an autonomous agent with network access and secrets available; require explicit human confirmation before executing privileged or networked operations.
5. If you will use the eval/LLM integration, audit the data sent and consider sanitizing snapshots to avoid leaking sensitive information.
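Snapshot sanitization before any external send can be as simple as a sed pass. The sample snapshot and regexes below are illustrative, not exhaustive — real diagnostics leak hostnames, paths, and tokens in many more shapes:

```shell
#!/bin/sh
# Sanitization sketch: mask IPv4 addresses and OpenAI-style keys
# before a snapshot leaves the machine. Sample data is synthetic.
cat > /tmp/snapshot.txt <<'EOF'
host=db01 addr=10.0.3.17
auth sk-abc123def456ghi789
EOF
sed -E \
  -e 's/[0-9]{1,3}(\.[0-9]{1,3}){3}/<ip>/g' \
  -e 's/sk-[A-Za-z0-9_-]{8,}/<redacted-key>/g' \
  /tmp/snapshot.txt > /tmp/snapshot.sanitized.txt
cat /tmp/snapshot.sanitized.txt
```

Review the sanitized output by hand before handing it to any LLM endpoint; pattern-based redaction is a floor, not a guarantee.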

Like a lobster shell, security has layers — review code before you run it.

latest · vk970f1r5mtskdhxr2a3njdg231842q4s

