Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Aliyun Skill Creator

v1.0.0

Use when creating, migrating, or optimizing skills for this alicloud-skills repository. Use whenever users ask to add a new skill, import an external skill,...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for cinience/aliyun-skill-creator.

Prompt preview: Install & Setup
Install the skill "Aliyun Skill Creator" (cinience/aliyun-skill-creator) from ClawHub.
Skill page: https://clawhub.ai/cinience/aliyun-skill-creator
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install aliyun-skill-creator

ClawHub CLI

npx clawhub@latest install aliyun-skill-creator
Security Scan
VirusTotal: Suspicious (see full report)
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (Aliyun Skill Creator for the alicloud-skills repo) aligns with the provided scripts and SKILL.md: scaffolding, validation, smoke-test conventions, benchmarking and reviewer tooling. The included Python scripts (packaging, validation, eval runner, review generator) are coherent with the skill's stated purpose.
Instruction Scope
SKILL.md instructs repository operations (scaffolding under skills/**, updating README index, adding tests, running local validation and benchmark scripts). It also recommends copying external skill source trees when importing — this is expected for migration tasks but introduces risk: copying and running unreviewed external code (or its test scripts) into your workspace can execute arbitrary code. The review/serve tooling embeds and serves workspace files (including arbitrary outputs) — useful for evaluation but could expose sensitive files if run in a repo with secrets.
Install Mechanism
No install spec or external downloads are declared. All bundled tooling is in-repo Python scripts (no installers or network fetches). This is low-risk from an install mechanism standpoint.
Credentials
The skill declares no required environment variables, credentials, or config paths. Runtime instructions reference local repo paths and tools (python, make) which are appropriate for repository tooling.
Persistence & Privilege
always:false and user-invocable:true (default). The skill does not request permanent platform-wide privileges. It does instruct updating repo files (README index) and writing validation/benchmark artifacts to output/, which is expected and scoped to the repository.
Scan Findings in Context
[subprocess_and_process_kill_usage] expected: generate_review.py uses subprocess.run to call lsof and then os.kill to free a port before starting a local server. This is expected for a local review/serve tool, but terminating processes or running lsof affects the host environment and should be run with care (prefer in an isolated environment).
[file_embedding_base64_and_serving] expected: The eval-viewer and generate_review.py embed arbitrary workspace files (including base64-encoded binaries and file contents) into a served HTML page and write feedback.json. This is expected behavior for an eval viewer, but it can expose sensitive files if the workspace contains secrets or private files—audit workspace contents before serving.
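A quick, illustrative pre-serve audit along those lines (the patterns and filenames below are assumptions, not an exhaustive check):
rg -li "api[_-]?key|secret|BEGIN .* PRIVATE KEY" .
find . -name ".env" -o -name "*.pem" -o -name "*credentials*"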
Assessment
This skill appears to do what it claims: repository scaffolding, validation, benchmarking, and result viewing. Before use:
  • Review any external skill source you plan to copy; do not blindly import untrusted repos or run their scripts without inspection.
  • Run the tooling in an isolated environment (container or VM) to avoid accidental process kills or leaking local files.
  • Be aware that the review server embeds and serves workspace files (including binaries) and writes feedback.json; remove or relocate any secrets before running.
  • If you will run scripts that call system tools (make, lsof, subprocess), inspect them first and prefer a read-only validation pass where possible.
Overall, the package is internally coherent and proportionate to its purpose, but standard repository hygiene and isolation practices are recommended.
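
If you want that isolation, a minimal sketch is to run the validation step in a throwaway container; the image tag and mount layout below are assumptions, not requirements of the skill:

# repository mounted read-only; only output/ is writable for the compile-check artifact
docker run --rm \
  -v "$PWD":/work:ro \
  -v "$PWD/output":/work/output \
  -w /work \
  python:3.12-slim \
  python3 tests/common/compile_skill_scripts.py \
    --skill-path skills/<domain>/<subdomain>/<skill-name> \
    --output output/<skill-name>-test/compile-check.json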

Like a lobster shell, security has layers — review code before you run it.

Latest version: vk97f3wcxgk65ztssmm4def8kss842vc1
75 downloads
0 stars
1 version
Updated 3w ago
v1.0.0
MIT-0

Category: tool

Alibaba Cloud Skill Creator

Repository-specific skill engineering workflow for alicloud-skills.

Use this skill when

  • Creating a new skill under skills/**.
  • Importing an external skill and adapting it to this repository.
  • Updating skill trigger quality (name and description in frontmatter).
  • Adding or fixing smoke tests under tests/**.
  • Running structured benchmark loops before merge.

Do not use this skill when

  • The user only needs to execute an existing product skill.
  • The task is purely application code under apps/ with no skill changes.

Repository constraints (must enforce)

  • Skills live under skills/<domain>/<subdomain>/<skill-name>/.
  • Skill folder names use kebab-case and should start with alicloud-.
  • Every skill must include SKILL.md frontmatter with name and description (see the example after this list).
  • skills/**/SKILL.md content must stay English-only.
  • Smoke tests must be in tests/<domain>/<subdomain>/<skill-name>-test/SKILL.md.
  • Generated evidence goes to output/<skill-or-test-skill>/ only.
  • If skill inventory changes, refresh README index with scripts/update_skill_index.sh.
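
For illustration only, a minimal SKILL.md frontmatter satisfying the naming and frontmatter constraints above could look like this (the skill name and description wording are placeholders, not part of this repository):

---
name: alicloud-example-skill
description: Use when the user asks to <explicit trigger> in this repository, e.g. creating or validating an example Alibaba Cloud resource skill.
---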

Standard deliverable layout

skills/<domain>/<subdomain>/<skill-name>/
├── SKILL.md
├── agents/openai.yaml
├── references/
│   └── sources.md
└── scripts/ (optional)

tests/<domain>/<subdomain>/<skill-name>-test/
└── SKILL.md
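
A minimal sketch for scaffolding this layout from the repository root, using the same placeholder path segments as above (scripts/ is optional and omitted here):

mkdir -p skills/<domain>/<subdomain>/<skill-name>/agents \
         skills/<domain>/<subdomain>/<skill-name>/references \
         tests/<domain>/<subdomain>/<skill-name>-test
touch skills/<domain>/<subdomain>/<skill-name>/SKILL.md \
      skills/<domain>/<subdomain>/<skill-name>/agents/openai.yaml \
      skills/<domain>/<subdomain>/<skill-name>/references/sources.md \
      tests/<domain>/<subdomain>/<skill-name>-test/SKILL.md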

Workflow

  1. Capture intent
  • Confirm domain/subdomain and target skill name.
  • Confirm whether this is new creation, migration, or refactor.
  • Confirm expected outputs and success criteria.
  2. Implement skill changes
  • For new skills: scaffold structure and draft SKILL.md + agents/openai.yaml.
  • For migration from external repo: copy full source tree first, then adapt.
  • Keep adaptation minimal but explicit:
    • Replace environment-specific instructions that do not match this repo.
    • Add repository validation and output discipline sections.
    • Keep reusable bundled resources (scripts/, references/, assets/).
  3. Add smoke test
  • Create or update tests/**/<skill-name>-test/SKILL.md.
  • Keep it minimal, reproducible, and low-risk.
  • Include exact pass criteria and evidence location.
  4. Validate locally

Run script compile validation for the skill:

python3 tests/common/compile_skill_scripts.py \
  --skill-path skills/<domain>/<subdomain>/<skill-name> \
  --output output/<skill-name>-test/compile-check.json

Refresh the skill index when the skill inventory has changed:

scripts/update_skill_index.sh

Confirm index presence:

rg -n "<skill-name>" README.md README.zh-CN.md README.zh-TW.md

Optional broader checks:

make test
make build-cli
  5. Benchmark loop (optional, for major skills)

If the user asks for quantitative skill evaluation, reuse bundled tooling:

  • scripts/run_eval.py
  • scripts/aggregate_benchmark.py
  • eval-viewer/generate_review.py

Prefer placing benchmark artifacts in a sibling workspace directory and keeping per-iteration outputs.
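
One possible layout, assuming a sibling directory named benchmark-workspace/ next to the repository checkout (the directory name and iteration scheme are illustrative, not prescribed by the tooling):

mkdir -p ../benchmark-workspace/<skill-name>/iteration-01
# run scripts/run_eval.py and scripts/aggregate_benchmark.py with their own options
# (inspect each script first), writing results into the iteration directory;
# create iteration-02, iteration-03, ... for subsequent benchmark loops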

Definition of done

  • Skill path and naming follow repository conventions.
  • Frontmatter is complete and trigger description is explicit.
  • Test skill exists and has objective pass criteria.
  • Validation artifacts are saved under output/.
  • README skill index is refreshed if inventory changed.
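
A minimal sketch that strings the local checks above into one pre-merge pass (paths are placeholders; drop steps that do not apply):

set -euo pipefail
python3 tests/common/compile_skill_scripts.py \
  --skill-path skills/<domain>/<subdomain>/<skill-name> \
  --output output/<skill-name>-test/compile-check.json
scripts/update_skill_index.sh
rg -n "<skill-name>" README.md README.zh-CN.md README.zh-TW.md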

References

  • references/schemas.md
  • references/sources.md
