Skill flagged: suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
t-label Automated Annotation Tool
v1.0.1 builds full deep-learning automation on top of the t-label tool, covering deployment, running, sample annotation, model training, management, and export end to end. This release adds support for Alibaba Cloud's Tongyi Qianwen qwen3-vl-plus model and includes built-in automatic coordinate conversion. Use this skill when the user mentions t-label operations (deployment, annotation, training, export, etc.), sample-data processing, or model-training needs.
by Venwell Chiang (@kumamon2019s)
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious (high confidence)

Purpose & Capability
The skill claims to provide end-to-end automation for t-label/xclabel (deploy, annotate, train, export), which justifies cloning the upstream repository and running deployment/annotation scripts. However, SKILL.md step 2 explicitly instructs traversing all files and deleting the original author names, signatures, copyright notices, and personal identifiers so as to "leave no trace" (不留任何痕迹). That is not required to deploy or run xclabel and is ethically and legally questionable. The package is also labeled "instruction-only" even though many code files (app.py, AiUtils.py, deploy.py, clean_author_info.py, tlabel_cli.py, etc.) are included; the mismatch is notable but not by itself malicious.
Instruction Scope
The runtime instructions tell the agent to clone https://github.com/beixiaocai/xclabel, scan for and remove all original copyright/author metadata from the project, "learn" the cleaned codebase, deploy it, and run the automation. Deleting authorship and license traces is outside the legitimate scope of deploying or operating the tool and constitutes destructive modification of third-party code. SKILL.md also uses broad language about "全量学习" ("full learning", i.e. read every file), which will cause the agent to read many files. The pre-scan reported unicode control characters in SKILL.md and the static JS, a prompt-injection/obfuscation indicator; the included script.js begins with many zero-width and control characters. Overall, the instructions are broader and riskier than the stated purpose requires.
Install Mechanism
There is no install spec (instruction-only), which lowers installer risk. However, the skill bundles many source files plus a requirements.txt that references packages including openai. Because there is no explicit install step, an agent or operator may run pip install -r requirements.txt or otherwise execute the scripts; the packaged code will write to disk (creating uploads/ and plugins/ directories) and perform network calls. No downloads from arbitrary URLs were detected, but the repo-clone step pulls code from GitHub, after which the included scripts may modify files.
Credentials
requires.env declares no secrets, but the code expects optional API keys for model backends (an OpenAI-compatible client configured to use Alibaba DashScope), and requirements.txt includes openai. That is appropriate for an AI annotation tool, but the skill does not declare or restrict how API keys will be provided. The missing explicit primaryEnv is a mismatch, though not necessarily malicious. The larger proportionality issue is the instruction to remove upstream authorship metadata, which requires no credentials and is unrelated to normal tool operation.
Persistence & Privilege
always is false and the skill does not request persistent platform privileges. The code and instructions will create folders (uploads/, plugins/, etc.) and modify files in the cloned repository, including deleting text. Writing to its own working directories is normal for this kind of tool, but the explicit instruction to permanently remove author/copyright traces from the cloned repo is a persistent, destructive modification of third-party content and therefore risky.
Scan Findings in Context
[unicode-control-chars] Unexpected: files such as static/script.js begin with many hidden unicode control characters, and the SKILL.md pre-scan flagged unicode-control-chars as well. Hidden control characters are not expected in a normal deployment tool and can be used to obfuscate content, attempt prompt injection, or evade simple scanners. This increases suspicion of the included files and of the clean_author_info behavior.
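A finding like this can be reproduced locally before trusting the files. The sketch below (file extensions and the per-file cap are illustrative assumptions, not part of the scanner) uses only the standard library to flag invisible format/control characters in a cloned tree:

```python
import unicodedata
from pathlib import Path

# Tab, newline, and carriage return are the only control characters
# expected in ordinary source text.
ALLOWED_CONTROLS = {"\t", "\n", "\r"}

def hidden_chars(text: str):
    """Return (index, codepoint) pairs for invisible characters:
    Cf = format chars (zero-width space/joiner, BOM, bidi overrides),
    Cc = control chars other than tab/newline/carriage return."""
    hits = []
    for i, ch in enumerate(text):
        cat = unicodedata.category(ch)
        if cat == "Cf" or (cat == "Cc" and ch not in ALLOWED_CONTROLS):
            hits.append((i, f"U+{ord(ch):04X}"))
    return hits

def scan_repo(root: str, exts=(".js", ".md", ".py")):
    """Scan a cloned repo for files containing hidden characters."""
    report = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            hits = hidden_chars(path.read_text(encoding="utf-8",
                                               errors="replace"))
            if hits:
                report[str(path)] = hits[:5]  # first few offenders per file
    return report
```

Running scan_repo over the cloned xclabel tree should surface the zero-width prefix in static/script.js described above; a clean deployment tool would produce an empty report.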
What to consider before installing
Do not run this skill unexamined. Before installing or executing:

1. Review scripts/clean_author_info.py and any code that modifies files. SKILL.md explicitly instructs removing original authorship/copyright, which is unethical and could violate licenses; remove or disable that behavior.
2. Run the code in an isolated environment (VM or container) and inspect which files it changes.
3. Inspect all network calls (the repo clone, model API endpoints) and do not supply production API keys until you understand which external services will receive your data.
4. Check the included requirements.txt and installed packages (the openai dependency), and prefer the official upstream xclabel sources if you only need the original tool.
5. Ask the publisher why the skill needs to strip attribution and why files contain hidden unicode control characters; the lack of a satisfactory explanation is a strong reason not to use it.
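The advice to run the code in isolation and inspect which files it changes can be mechanized with a before/after snapshot. This is a minimal standard-library sketch (function names are illustrative): hash every file before running the skill's scripts, hash again afterwards, and diff.

```python
import hashlib
from pathlib import Path

def snapshot(root: str) -> dict:
    """Map each file path under root to the SHA-256 of its contents."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def diff_snapshots(before: dict, after: dict) -> dict:
    """Report files that were added, removed, or modified in between."""
    return {
        "added": sorted(set(after) - set(before)),
        "removed": sorted(set(before) - set(after)),
        "modified": sorted(p for p in before.keys() & after.keys()
                           if before[p] != after[p]),
    }
```

Take a snapshot of the cloned xclabel directory, let the skill's scripts run inside the sandbox, snapshot again, and review the diff. Any entry touching LICENSE, README, or file headers would confirm the attribution-stripping behavior described above.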
Latest version: vk979ats3qpt3czdhn37ay4dmtx83yc3w
