Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Reptile Pet Health Diagnosis Tool | 爬行类宠物健康诊断分析工具

v1.0.2

Reptile pet health diagnosis and analysis tool. When a user provides a video URL or file of a reptile pet (lizard, snake, spider, etc.) for analysis, this skill is triggered to perform a reptile-pet health diagnosis. It supports uploading a local video or supplying a web video URL, calls a server-side API to run the health check, analyzes scales, skin, and overall body appearance, identifies potential diseases, and outputs a 宠安卫士 health report.

by 生命涌现 @raymond758
MIT-0
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's name and description (reptile health analysis) match the code: scripts/crawl_analysis.py and the Skill implementation call out to an API. However, the bundle also contains a large 'face_analysis' sub-skill and a shared 'smyx_common' library with many unrelated utilities and large requirements lists. Those extra modules are plausible (shared platform code), but the package declares no required env vars or install steps even though the code expects network APIs and may need many Python packages; this mismatch is unexpected.
Instruction Scope
SKILL.md directs the agent to: save uploaded attachments to an attachments directory, run python -m scripts.crawl_analysis from the skill root, and to fetch history strictly from a cloud API. It also enforces an 'open-id' retrieval flow that uses environment variables and message metadata. The instructions therefore read and write local files and access environment/message metadata; those accesses are not declared up front and broaden the skill's runtime scope beyond a simple API caller.
Install Mechanism
There is no install spec (instruction-only in registry), but the bundle includes many Python modules and requirements files with extensive dependency lists. Without an install step, the skill will either fail if executed in an environment lacking dependencies or rely on the host to already have many packages. The lack of a clear, minimal install makes operational expectations unclear.
Credentials
Registry lists no required environment variables, but both SKILL.md and the code read environment variables (e.g., OPENCLAW_SENDER_ID, sender_id, OPENCLAW_SENDER_OPEN_ID, OPENCLAW_SENDER_USERNAME, FEISHU_OPEN_ID). The codebase also includes dev config with a database URL (mysql+pymysql://root:root@localhost:3306/...) in config-dev.yaml. Requesting or accessing these envs/configs is plausible for a platform-integrated skill, but they are not declared and could expose identifiers or be used to reach further infrastructure — a proportionality mismatch.
Persistence & Privilege
The skill is not marked always:true and does not request elevated platform privileges. It does write uploaded attachments into a local attachments directory and expects to be executed from the skill root, so it will create/modify files under its own folder. Autonomous invocation is allowed (default) which increases blast radius if the skill misbehaves, but that is the platform default and not by itself unusual.
What to consider before installing
  • Undeclared environment reads: SKILL.md and the code expect to read OPENCLAW_* and sender_id environment values and message metadata, yet the registry lists no required env vars. Confirm you are comfortable with the skill reading those identifiers.
  • Local file writes: uploaded videos/attachments are saved into the skill directory. In a sensitive environment, run the skill in an isolated workspace or container.
  • No install spec despite many dependencies: the package includes large requirements files; make sure the runtime has the expected Python packages, or run in a controlled virtualenv. The missing install step is both a usability and a security ambiguity.
  • Included dev config contains an apparent DB URL (root:root). This looks like a development artifact, but it could be confusing or used unintentionally. Inspect the config-* files and ensure the skill is configured to point at trusted API endpoints before use.
  • Review network behavior: the code uses RequestUtil/ApiService to call backend APIs and build report URLs. Verify that the API base URLs (and API keys) you will use are trusted and do not leak data externally.
  • If you need to proceed: run the skill in a sandbox (container or VM), inspect the implementation of RequestUtil/http_post to confirm there is no unexpected exfiltration, and avoid providing sensitive credentials or broad environment variables until you have verified its behavior.
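Parts of this review can be automated. The sketch below lists HTTP endpoints and environment-variable reads found in a skill bundle's Python files; the regexes are illustrative and deliberately simple, not an exhaustive audit, and the scan scope (only `*.py` files) is an assumption.

```python
import re
from pathlib import Path

# Crude patterns: literal http(s) URLs, and env reads via
# os.environ.get(...), os.getenv(...), or os.environ[...].
URL_RE = re.compile(r'https?://[^\s"\']+')
ENV_RE = re.compile(r'(?:environ\.get|getenv|environ)\s*[\(\[]\s*["\']([A-Z_][A-Z0-9_]*)["\']')

def scan_skill(root):
    """Collect candidate URLs and env-var names from all .py files under root."""
    urls, envs = set(), set()
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        urls.update(URL_RE.findall(text))
        envs.update(ENV_RE.findall(text))
    return sorted(urls), sorted(envs)
```

Running this over the bundle before installing gives a quick list of endpoints and identifiers to compare against what the registry declares.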
skills/smyx_common/scripts/config-dev.yaml:3
Install source points to URL shortener or raw IP.
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.

Like a lobster shell, security has layers — review code before you run it.

latest vk97ahgmzppbbq6ba6htkeyy1fh8408k4

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Reptile Pet Health Diagnosis Tool | 爬行类宠物健康诊断分析工具

⚠️ Mandatory Memory Rules (Highest Priority)

This skill explicitly stipulates:

  • Never read any local memory files, including but not limited to memory/YYYY-MM-DD.md, MEMORY.md, and similar local files
  • Never retrieve information from LanceDB long-term memory
  • All historical report queries must be fetched from the cloud API; historical data from local memory must not be used
  • Even if the skill invocation fails or the API errors, do not fall back to summarizing from local memory

Task Objectives

  • Purpose: analyze reptile-pet videos to perform a health diagnosis and obtain a structured 宠安卫士 health report
  • Capabilities: video analysis, scale-feature recognition, skin-condition assessment, body-appearance analysis, common-disease early warning, and generation of health-care recommendations
  • Trigger conditions:
    1. Default trigger: when the user provides a reptile-pet video URL or file for analysis, this skill runs the reptile-pet health diagnosis by default
    2. When the user explicitly asks for a reptile-pet health check, mentions keywords such as reptile pet, lizard, snake, spider, reptile health, or reptile diagnosis, and has uploaded a video or image file
    3. When the user mentions keywords such as "view historical reptile reports", "historical 宠安 reports", "reptile diagnosis report list", "reptile report list", "query historical reports", "view reptile report list", "show all reptile reports", "show reptile diagnosis reports", or "query 宠安卫士 health reports", the historical-report query is triggered automatically
  • Automatic behavior:
    1. If the user uploads an attachment or a video/image file, save it automatically to the attachments directory under the skill root
    2. ⚠️ Mandatory data-source rule (second-highest priority): if the user triggers any historical-report query keyword (e.g. "show all reptile reports", "show all 宠安 reports", "view historical reports"), the agent must:
      • Call the API directly with python -m scripts.crawl_analysis --list --open-id {open-id obtained from the message context} to query historical report data from the cloud
      • Strictly forbidden: reading historical session information from the local memory directory, manually summarizing reports from local records, or extracting reports from long-term memory
      • Always fetch the latest complete data from the cloud API, then output the results as a Markdown table
      • If the user has not explicitly provided an open-id, first obtain the sender id from the OpenClaw message context (e.g. the id field in metadata), then try the environment variables OPENCLAW_SENDER_ID or sender_id from the current message context; if none of these is available, the user must supply a username or phone number as the open-id

Prerequisites

  • Dependencies: packages and versions required by the scripts
    requests>=2.28.0
    

Procedure

🔒 open-id Retrieval Flow Control (mandatory; do not skip)

Before running the reptile-pet health analysis, obtain the open-id in the following priority order:

Step 1: check whether the user explicitly provided an open-id in the message
        ↓ (not provided)
Step 2: read OPENCLAW_SENDER_ID from the environment variables of the current message context
        ↓ (unavailable)
Step 3: read sender_id from the environment variables of the current message context
        ↓ (unavailable)
Step 4: take an id field from the OpenClaw message metadata (e.g. id/session_id/user_id in metadata) as the open-id
        ↓ (unavailable)
Step 5: ❗ pause execution and explicitly ask the user to provide a username or phone number as the open-id

⚠️ Key constraints:

  • Never assume or fabricate an open-id value (e.g. petC113, pet123)
  • Never skip open-id validation and call the API directly
  • Only continue with the analysis once a valid open-id has been obtained
  • If the user declines to provide an open-id, explain its purpose (saving and querying reptile report records) and ask whether to continue
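The five-step flow above can be sketched in Python. Here `metadata` stands in for the OpenClaw message metadata dict, and returning `None` corresponds to step 5 (pause and ask the user); the exact metadata field names are taken from the examples in step 4.

```python
import os

# Sketch of the five-step open-id resolution order described above.
def resolve_open_id(user_supplied=None, metadata=None):
    if user_supplied:                                 # Step 1: explicit value from the user
        return user_supplied
    for var in ("OPENCLAW_SENDER_ID", "sender_id"):   # Steps 2-3: env vars, in order
        value = os.environ.get(var)
        if value:
            return value
    for key in ("id", "session_id", "user_id"):       # Step 4: message metadata fields
        if metadata and metadata.get(key):
            return metadata[key]
    return None                                       # Step 5: caller must ask the user
```

A `None` result must stop execution until the user supplies a username or phone number; the agent must never substitute a made-up value.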

  • Standard workflow:
    1. Prepare the video input
      • Provide a local video file path or a web video URL
      • Make sure the video clearly shows the pet's overall appearance, scales, and skin, with good lighting
    2. Obtain the open-id (mandatory)
      • Follow the flow control above to obtain the open-id
      • If it cannot be obtained, ask the user to provide a username or phone number
    3. Run the reptile-pet health analysis
      • Run python -m scripts.crawl_analysis on the video file (the script must be run from the skill root directory)
      • Parameters:
        • --input: local video file path (uploaded via multipart/form-data)
        • --url: web video URL (downloaded automatically by the API service)
        • --crawl-type: reptile pet type, one of lizard/snake/spider/turtle/gecko/chameleon/scorpion/iguana/crocodile/other, default other
        • --open-id: the current user's OpenID/UserId (required; obtained via the flow above)
        • --list: list historical analysis reports for reptile-pet videos (a start-date parameter can be given to filter the range)
        • --api-key: API access key (optional)
        • --api-url: API service address (optional; a default is used)
        • --detail: output verbosity (basic/standard/json, default json)
        • --output: result output file path (optional)
    4. Review the analysis results
      • Receive the structured 宠安卫士 health report
      • Includes: basic reptile-pet information, overall health status, scale analysis, skin features, potential disease warnings, and health-care recommendations
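As a reading aid, the documented flags correspond roughly to the argparse surface below. The real parser lives in scripts/crawl_analysis.py and may differ; this reconstruction is an assumption, including the choice to make --input and --url mutually exclusive.

```python
import argparse

# Hypothetical reconstruction of the CLI documented above.
def build_parser():
    p = argparse.ArgumentParser(prog="scripts.crawl_analysis")
    src = p.add_mutually_exclusive_group()
    src.add_argument("--input", help="local video file (multipart upload)")
    src.add_argument("--url", help="public video URL (downloaded server-side)")
    p.add_argument("--crawl-type", default="other",
                   choices=["lizard", "snake", "spider", "turtle", "gecko",
                            "chameleon", "scorpion", "iguana", "crocodile", "other"])
    p.add_argument("--open-id", required=True,
                   help="current user's OpenID/UserId")
    p.add_argument("--list", action="store_true",
                   help="list historical analysis reports")
    p.add_argument("--api-key", help="API access key (optional)")
    p.add_argument("--api-url", help="API service address (optional)")
    p.add_argument("--detail", default="json",
                   choices=["basic", "standard", "json"])
    p.add_argument("--output", help="result output file path (optional)")
    return p
```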

Resource Index

  • Required script: see scripts/crawl_analysis.py (purpose: calls the API for reptile-pet health analysis; local files are uploaded via multipart/form-data, web URLs are downloaded by the API service)
  • Configuration file: see scripts/config.py (purpose: configures the API address, default parameters, and video-format limits)
  • Domain reference: see references/api_doc.md (when to read: you need the detailed API specification and error codes)

Notes

  • Read reference documents only when needed, to keep the context lean
  • Video requirements: mp4/avi/mov formats, 100 MB maximum
  • The API key is optional; if one is passed as a parameter, authentication must succeed, otherwise authentication is skipped
  • The analysis results are for health reference only and cannot replace a professional veterinary diagnosis
  • Never generate ad-hoc scripts; only the skill's own scripts may be used
  • A URL passed as a parameter does not need to be downloaded locally; addresses are assumed to be public, and the API service downloads them automatically
  • When displaying the historical report list, extract the reportImageUrl field from the JSON data as the hyperlink target and output a Markdown table with four columns: "报告名称" (report name), "爬宠类型" (reptile type), "分析时间" (analysis time), and "点击查看" (view). The "报告名称" column is built as 爬宠健康分析报告-{record id}, and the "点击查看" column uses a hyperlink in the form [🔗 查看报告](reportImageUrl), so the user can click through to the full report page.
  • Table output example:

    | 报告名称 | 爬宠类型 | 分析时间 | 点击查看 |
    | --- | --- | --- | --- |
    | 爬宠健康分析报告-20260312172200001 | 蜥蜴 | 2026-03-12 17:22:00 | 🔗 查看报告 |
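The list-rendering rule can be sketched as a small helper. The reportImageUrl field and the record id come from the note above; the other JSON field names used here (crawlType, analyzedAt) are assumptions, and the Chinese column labels are kept literally because the report output is Chinese.

```python
# Render the historical-report list as the required four-column Markdown table.
def render_report_table(records):
    lines = [
        "| 报告名称 | 爬宠类型 | 分析时间 | 点击查看 |",
        "| --- | --- | --- | --- |",
    ]
    for r in records:
        name = f"爬宠健康分析报告-{r['id']}"            # report name, per the spec
        link = f"[🔗 查看报告]({r['reportImageUrl']})"  # clickable report link
        lines.append(f"| {name} | {r['crawlType']} | {r['analyzedAt']} | {link} |")
    return "\n".join(lines)
```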

Usage Examples

# Analyze a local lizard video (OpenClaw UI context, using the metadata id as the open-id)
python -m scripts.crawl_analysis --input /path/to/lizard_video.mp4 --crawl-type lizard --open-id openclaw-control-ui

# Analyze a web-hosted snake video (OpenClaw UI context, using the metadata id as the open-id)
python -m scripts.crawl_analysis --url https://example.com/snake_video.mp4 --crawl-type snake --open-id openclaw-control-ui

# Analyze a local spider video (OpenClaw UI context, using the metadata id as the open-id)
python -m scripts.crawl_analysis --input /path/to/spider_video.mp4 --crawl-type spider --open-id openclaw-control-ui

# Analyze a local turtle video (OpenClaw UI context, using the metadata id as the open-id)
python -m scripts.crawl_analysis --input /path/to/turtle_video.mp4 --crawl-type turtle --open-id openclaw-control-ui

# Analyze a local gecko video (OpenClaw UI context, using the metadata id as the open-id)
python -m scripts.crawl_analysis --input /path/to/gecko_video.mp4 --crawl-type gecko --open-id openclaw-control-ui

# List historical analysis reports (auto-trigger keywords: view historical reptile reports, historical reports, reptile report list, etc.)
python -m scripts.crawl_analysis --list --open-id openclaw-control-ui

# Output a condensed report
python -m scripts.crawl_analysis --input video.mp4 --crawl-type lizard --open-id your-open-id --detail basic

# Save the result to a file
python -m scripts.crawl_analysis --input video.mp4 --crawl-type snake --open-id your-open-id --output result.json

Files

31 total
