Chinese Skill Publishing Workflow
Advisory. Audited by static analysis on May 8, 2026.
Overview
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes. No suspicious patterns were detected by the static scan; the items below are advisory findings.

Findings (3)
Description: The batch publish template contains a credential-looking ClawHub token rather than a placeholder. A public skill should not embed a reusable account token, especially for publish operations.

Evidence: TOKEN=clh_D_1J2_rsQs0XZbt_2Gf3Lo... ; clawhub publish ... --token $TOKEN

Impact: If the token is valid, it could expose or misuse ClawHub publishing authority; users who copy the template may also publish under the wrong account credential.

Recommendation: Revoke the embedded token, remove it from the skill, replace it with a placeholder, and require users to authenticate with their own scoped token through a secure ClawHub login or configuration flow.
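The recommendation can be sketched as follows. This is a minimal illustration, not ClawHub's documented flow: the CLAWHUB_TOKEN variable name is an assumption, and the clawhub function below is a local stub so the sketch runs standalone; a real workflow would use the actual CLI.

```shell
# Stub standing in for the real clawhub CLI (assumption, for illustration only).
clawhub() { echo "clawhub $*"; }

# Read the token from the user's environment instead of embedding it in the skill.
if [ -z "${CLAWHUB_TOKEN:-}" ]; then
  echo "Set CLAWHUB_TOKEN to your own scoped ClawHub token" >&2
else
  clawhub publish ./my-skill --token "$CLAWHUB_TOKEN"
fi
```

Because the token never appears in the published files, revoking or rotating it requires no change to the skill itself.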
Description: The documented batch script loops over multiple skills and publishes them to ClawHub. This is aligned with the stated batch-publishing purpose, but running it unchanged can make multiple public or account-level changes at once.

Evidence: for item in "${SKILLS[@]}"; do ... clawhub publish $SKILLS_DIR/$skill --slug chinese-$skill ... --version 1.0.0

Impact: Running the script could publish several skills publicly or update registry state without a separate confirmation for each item.

Recommendation: Review the skill list, names, slugs, changelogs, and account before running; add dry-run or per-skill confirmation steps for batch publishing.
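A safer batch loop along the lines of that recommendation might look like the sketch below. The skill names are invented examples, the slug scheme mirrors the documented script, and clawhub is again a local stub so the sketch runs standalone.

```shell
# Stub standing in for the real clawhub CLI (assumption, for illustration only).
clawhub() { echo "clawhub $*"; }

SKILLS_DIR="./skills"
SKILLS=(pinyin idioms)            # example skill names, not taken from the audit
DRY_RUN="${DRY_RUN:-1}"           # default to dry-run; set DRY_RUN=0 to publish

for skill in "${SKILLS[@]}"; do
  cmd=(clawhub publish "$SKILLS_DIR/$skill" --slug "chinese-$skill" --version 1.0.0)
  if [ "$DRY_RUN" = "1" ]; then
    echo "[dry-run] ${cmd[*]}"    # show exactly what would run, publish nothing
  else
    read -r -p "Publish $skill? [y/N] " answer
    if [ "$answer" = "y" ]; then
      "${cmd[@]}"
    else
      echo "skipped $skill"
    fi
  fi
done
```

Defaulting to dry-run means a user who runs the script unchanged sees the exact commands without publishing anything, and the per-skill prompt turns one bulk action into individually confirmed steps.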
Description: The workflow depends on external/local evaluator scripts that are not bundled or declared in the install metadata. This is disclosed and purpose-aligned, but users must verify the provenance of those scripts.

Evidence: python3 <axioma-skill-evaluator-path>/evaluator.py <skill-path> --verbose --improve ; python3 <axioma-skill-evaluator-path>/eval-skill.py <skill-path> --verbose

Impact: A user could run an unintended or modified local evaluator script, and the --improve option may alter skill files before publication.

Recommendation: Declare required tools and trusted sources, pin or document evaluator versions, and ask users to review diffs before publishing improved skills.
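One way to act on that recommendation is sketched below: pin the evaluator checkout to a commit that was actually reviewed, snapshot the skill, and diff it after the evaluator's --improve pass. EVALUATOR_DIR, PINNED_COMMIT, and ./my-skill are illustrative assumptions, not paths or values from the skill; the evaluator invocation itself is left commented out.

```shell
EVALUATOR_DIR="${EVALUATOR_DIR:-./axioma-skill-evaluator}"
PINNED_COMMIT="${PINNED_COMMIT:-abc1234}"   # record the commit hash you audited

# If the evaluator is a git checkout, warn when it has drifted from the pin.
if [ -d "$EVALUATOR_DIR/.git" ]; then
  actual="$(git -C "$EVALUATOR_DIR" rev-parse --short HEAD)"
  if [ "$actual" != "$PINNED_COMMIT" ]; then
    echo "warning: evaluator at $actual, expected $PINNED_COMMIT" >&2
  fi
fi

mkdir -p ./my-skill                          # demo skill directory for this sketch
cp -r ./my-skill ./my-skill.before           # snapshot before --improve can alter files
# python3 "$EVALUATOR_DIR/evaluator.py" ./my-skill --verbose --improve
diff -ru ./my-skill.before ./my-skill        # review every change before publishing
```

The diff step makes any file the evaluator rewrote visible before the skill is published, rather than trusting the --improve pass blindly.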
