Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Aliyun Skill Creator

v1.0.0

Use when creating, migrating, or optimizing skills for this alicloud-skills repository. Use whenever users ask to add a new skill, import an external skill,...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Suspicious
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (Aliyun Skill Creator for the alicloud-skills repo) aligns with the provided scripts and SKILL.md: scaffolding, validation, smoke-test conventions, benchmarking and reviewer tooling. The included Python scripts (packaging, validation, eval runner, review generator) are coherent with the skill's stated purpose.
Instruction Scope
SKILL.md instructs repository operations (scaffolding under skills/**, updating README index, adding tests, running local validation and benchmark scripts). It also recommends copying external skill source trees when importing — this is expected for migration tasks but introduces risk: copying and running unreviewed external code (or its test scripts) into your workspace can execute arbitrary code. The review/serve tooling embeds and serves workspace files (including arbitrary outputs) — useful for evaluation but could expose sensitive files if run in a repo with secrets.
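The secrets caveat above can be checked mechanically before anything is served. A minimal pre-flight sketch; the file-name heuristics and the `find_sensitive` helper are assumptions for illustration, not part of the bundled tooling:

```python
from pathlib import Path

# Names and extensions that commonly hold credentials (heuristic, not exhaustive)
SENSITIVE_NAMES = {".env", ".netrc", "id_rsa", "credentials.json"}
SENSITIVE_SUFFIXES = {".pem", ".key"}

def find_sensitive(workspace: Path) -> list[Path]:
    """Return files worth reviewing before a server embeds the workspace."""
    return sorted(
        p for p in workspace.rglob("*")
        if p.is_file()
        and (p.name in SENSITIVE_NAMES or p.suffix in SENSITIVE_SUFFIXES)
    )
```

An empty result does not prove the workspace is clean, but a non-empty one is a clear signal to relocate those files first.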
Install Mechanism
No install spec or external downloads are declared. All bundled tooling is in-repo Python scripts (no installers or network fetches). This is low-risk from an install mechanism standpoint.
Credentials
The skill declares no required environment variables, credentials, or config paths. Runtime instructions reference local repo paths and tools (python, make) which are appropriate for repository tooling.
Persistence & Privilege
always:false and user-invocable:true (default). The skill does not request permanent platform-wide privileges. It does instruct updating repo files (README index) and writing validation/benchmark artifacts to output/, which is expected and scoped to the repository.
Scan Findings in Context
[subprocess_and_process_kill_usage] expected: generate_review.py uses subprocess.run to call lsof and then os.kill to free a port before starting a local server. This is expected for a local review/serve tool, but terminating processes or running lsof affects the host environment and should be run with care (prefer in an isolated environment).
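That port-freeing pattern, as described in the finding, looks roughly like the sketch below. The function names are illustrative, not the script's actual API:

```python
import os
import signal
import subprocess

def pids_from_lsof(stdout: str) -> list[int]:
    """Parse the PID-per-line output of `lsof -ti :PORT`."""
    return [int(p) for p in stdout.split()]

def free_port(port: int) -> None:
    """Terminate whatever is listening on `port`. This touches the host
    environment directly, which is why running it isolated is recommended."""
    result = subprocess.run(
        ["lsof", "-ti", f":{port}"],
        capture_output=True, text=True,
    )
    for pid in pids_from_lsof(result.stdout):
        os.kill(pid, signal.SIGTERM)  # polite shutdown rather than SIGKILL
```

Even with SIGTERM, this can kill an unrelated process that happens to hold the port, which is the concrete reason the finding suggests an isolated environment.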
[file_embedding_base64_and_serving] expected: The eval-viewer and generate_review.py embed arbitrary workspace files (including base64-encoded binaries and file contents) into a served HTML page and write feedback.json. This is expected behavior for an eval viewer, but it can expose sensitive files if the workspace contains secrets or private files—audit workspace contents before serving.
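The embed-and-serve behaviour amounts to inlining raw file bytes into HTML. A rough sketch; the data-URI approach and `embed_as_data_uri` name are assumptions about how such a viewer works, not the actual implementation:

```python
import base64
from pathlib import Path

def embed_as_data_uri(path: Path, mime: str = "application/octet-stream") -> str:
    """Base64-encode a file and wrap it in an HTML tag with a data: URI.
    Anything the server can read ends up verbatim in the served page."""
    payload = base64.b64encode(path.read_bytes()).decode("ascii")
    return f'<embed type="{mime}" src="data:{mime};base64,{payload}">'
```

Because the whole file body is copied into the page, a stray private key or token file in the workspace would be served to anyone who can reach the viewer.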
Assessment
This skill appears to do what it claims: repository scaffolding, validation, benchmarking, and result viewing. Before use:
1. Review any external skill source you plan to copy — do not blindly import untrusted repos or run their scripts without inspection.
2. Run the tooling in an isolated environment (container or VM) to avoid accidental process kills or leaking local files.
3. Be aware that the review server embeds and serves workspace files (including binaries) and writes feedback.json; remove or relocate any secrets before running.
4. If you will run scripts that call system tools (make, lsof, subprocess), inspect them first and prefer a read-only validation pass where possible.
Overall the package is internally coherent and proportionate to its purpose, but standard repository hygiene and isolation practices are recommended.
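A read-only validation pass can start as simply as grepping the copied tree for process-spawning or dynamic-execution calls before anything runs. A crude triage sketch; the pattern list is a heuristic assumption, not part of the repository's tooling:

```python
import re
from pathlib import Path

# APIs that spawn processes or execute dynamic code; worth eyeballing first
RISKY = re.compile(r"\b(subprocess\.\w+|os\.system|os\.popen|eval|exec)\s*\(")

def flag_risky_lines(root: Path) -> list[tuple[str, int, str]]:
    """Return (file, line number, line) for suspicious calls in *.py files.
    A triage aid only; it does not replace reading the code."""
    hits = []
    for py in sorted(root.rglob("*.py")):
        for n, line in enumerate(py.read_text(errors="replace").splitlines(), 1):
            if RISKY.search(line):
                hits.append((str(py), n, line.strip()))
    return hits
```

A hit is not proof of malice (the skill's own scripts legitimately use subprocess), but every hit should be read before the imported code is executed.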

Like a lobster shell, security has layers — review code before you run it.

latest: vk97f3wcxgk65ztssmm4def8kss842vc1

