Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
军棋暗棋摆阵 (Junqi hidden-piece layout generator)
v1.0.0 · Uses a large language model to generate two-player, 25-square hidden-piece (暗棋) formations for Chinese military chess (军棋), then uses Python for hard-rule validation and image rendering. Intended for users who want to lay out a Junqi hidden-piece board, generate more strategically styled formations, export image cards, or need a generate-first, strictly-validate-second workflow.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
Medium confidence
Purpose & Capability
The skill name and description match the included scripts: validate_layout.py and render_layout.py implement the promised hard-rule checks and image rendering. A generator script is present but documented as an older fallback; its presence is not necessarily harmful, but it is a minor mismatch with the SKILL.md claim that the model, not Python, should create layouts.
Instruction Scope
SKILL.md instructs the LLM to output JSON and says Python only validates and renders. However, validate_layout.py accepts either a JSON string or a filesystem path, and will read a local file if given a path. This expands the script's scope and could be misused to read arbitrary files. validate_layout.py also enforces an extra HQ-piece restriction (only the flag 军旗 or low-value pieces are allowed in HQ) that is not listed among the SKILL.md hard rules, which can cause unexpected validation failures and retries.
Install Mechanism
No install spec and no external downloads; everything included in the bundle. This is low-risk from an install/execution distribution perspective.
Credentials
No environment variables, credentials, or config paths requested. The skill does not ask for unrelated secrets or cloud credentials.
Persistence & Privilege
The `always` flag is false, and the skill does not request elevated or persistent platform privileges. Its file I/O is limited to reading layout input and writing image output, as expected.
What to consider before installing
This skill appears to implement the advertised features, but review two issues before installing or running it in a privileged environment:
1) Validator file-path handling: validate_layout.py accepts either a JSON string or a file path and will read the file if a path is supplied. Ensure the agent never forwards untrusted or model-supplied file paths to the validator (require the model to output JSON text and sanitize inputs). Otherwise an attacker or a confused model could cause the validator to read arbitrary local files.
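One way to enforce the "JSON text only" rule on the agent side is to reject any input that does not parse as JSON before it ever reaches validate_layout.py. This is a minimal sketch of such a guard; the function name is a hypothetical illustration, not part of the skill:

```python
import json


def safe_layout_input(raw: str) -> str:
    """Accept only JSON text, never a file path.

    A string like '/etc/passwd' does not parse as JSON, so it is
    rejected here instead of being treated as a readable path by
    the validator's path mode.
    """
    try:
        json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"layout input must be JSON text, not a path: {exc}") from exc
    return raw
```

Calling `safe_layout_input('{"a1": "军旗"}')` passes the text through unchanged, while `safe_layout_input("/etc/passwd")` raises `ValueError`.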
2) Extra validation rule: The validator enforces an additional 'HQ pieces' restriction (only 军旗 or 排长/连长 allowed in HQ) that is not documented in SKILL.md. This can make legitimate model outputs fail validation unexpectedly. Either update SKILL.md to document this rule or adjust the validator to match the documented rules.
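For reference, the undocumented restriction behaves roughly like the following sketch. The piece names come from the scan findings; the board-position keys and function shape are assumptions for illustration only:

```python
# Pieces the validator reportedly allows in the HQ (大本营) squares:
# the flag (军旗) and low-value pieces (排长, 连长).
HQ_ALLOWED = {"军旗", "排长", "连长"}


def check_hq_rule(layout: dict, hq_positions=("c1", "e1")) -> list:
    """Return violation messages for pieces illegally placed in HQ.

    `layout` maps position keys to piece names; `hq_positions` is a
    hypothetical pair of HQ squares, not the script's actual coordinates.
    """
    errors = []
    for pos in hq_positions:
        piece = layout.get(pos)
        if piece is not None and piece not in HQ_ALLOWED:
            errors.append(f"{pos}: {piece} is not allowed in HQ")
    return errors
```

A layout with 司令 (the field marshal) in an HQ square would fail this check even though SKILL.md's documented rules permit it, which is exactly the mismatch to resolve.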
Other suggestions: remove or clearly label the legacy generate_layout.py if it won't be used (to avoid confusion), and consider limiting the validator to parsing JSON only (dropping path mode) or adding explicit validation of any path input. Test the full workflow in a sandbox to confirm the LLM<->validator retry loop behaves as you expect. If you need higher assurance, provide runtime details (how the agent passes model output into the script) so the scan can be reassessed at higher confidence.
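The retry loop worth sandbox-testing can be sketched as below. `ask_model` and `validate` are hypothetical callables standing in for the agent's model call and the validator invocation; neither is the skill's actual API:

```python
import json


def generate_valid_layout(ask_model, validate, max_retries=3):
    """Ask the model for a layout, validate it, and retry with feedback.

    ask_model(feedback) -> str   : returns the model's layout as text
    validate(json_text) -> (ok, errors) : hard-rule check on JSON text
    Both callables are assumptions for this sketch.
    """
    feedback = ""
    for _ in range(max_retries):
        raw = ask_model(feedback)
        try:
            json.loads(raw)  # reject non-JSON before it reaches the validator
        except json.JSONDecodeError:
            feedback = "output must be JSON text, not a file path"
            continue
        ok, errors = validate(raw)
        if ok:
            return raw
        feedback = errors
    raise RuntimeError(f"no valid layout after {max_retries} attempts")
```

Running this against the real validator in a sandbox shows whether undocumented rules (like the HQ restriction) cause extra retries in practice.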
latest · vk971zncf6mym6rr4dmc089r69584y9bw
