Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Reptile Pet Health Diagnosis Tool | 爬行类宠物健康诊断分析工具

v1.0.0

Analyzes uploaded reptile or arachnid videos to identify scale, skin, and body issues, then generates a detailed health diagnosis report.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for smyx-sunjinhui/smyx-crawl-analysis.

Prompt Preview: Install & Setup
Install the skill "Reptile Pet Health Diagnosis Tool | 爬行类宠物健康诊断分析工具" (smyx-sunjinhui/smyx-crawl-analysis) from ClawHub.
Skill page: https://clawhub.ai/smyx-sunjinhui/smyx-crawl-analysis
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install smyx-crawl-analysis

ClawHub CLI

Package manager switcher

npx clawhub@latest install smyx-crawl-analysis
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Pending
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The code implements video upload, remote API calls, and report listing/export, which fit the stated purpose. However, the repository bundles a large shared 'smyx_common' library and a separate 'face_analysis' skill, expanding its scope beyond a minimal reptile-analysis tool. That shared code reads/writes configuration and creates a local SQLite DB under a workspace data directory: capabilities somewhat beyond a simple 'call remote API and return report', but plausibly part of a production client that caches and queries reports.
Instruction Scope
SKILL.md explicitly forbids reading local memory files and LanceDB, and states that all history queries must come from the cloud. But code paths in skills/smyx_common and the dao/util modules (1) load YAML config files under skills/smyx_common/scripts/config.yaml (and related env-specific files), (2) may create those config files if missing, and (3) create and use a local SQLite DB under <workspace>/data via Dao.get_db_path. The skill also auto-saves uploaded attachments to an attachments directory. These file reads and writes contradict the 'absolute prohibition' in SKILL.md and expand the scope of data access and persistence.
Install Mechanism
No install spec is provided (instruction-only at manifest level), so nothing is auto-downloaded from external URLs. The code includes requirements.txt files with many dependencies (smyx_common lists many packages) which is expected for a non-trivial Python tool; absence of an install script reduces immediate remote-install risk but means dependency management is left to the operator.
Credentials
The skill manifest declares no required environment variables, but the code reads environment variables (e.g. OPENCLAW_WORKSPACE, OPENCLAW_SENDER_OPEN_ID, OPENCLAW_SENDER_USERNAME, FEISHU_OPEN_ID) inside ConstantEnum.init and Dao.get_db_path. SKILL.md prescribes an 'open-id' discovery process based on config files and user input but does not mention environment variables; this is an undeclared capability and mismatch. The skill also uses API keys/config stored in YAML under skills/smyx_common which may contain secrets if populated — again not declared in the manifest.
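For illustration, the undeclared environment-variable reads described above might look roughly like this. The variable names come from the scan report; the function name, fallback defaults, and DB filename are assumptions, not the skill's actual code:

```python
import os

def resolve_runtime_context() -> dict:
    """Hypothetical reconstruction of the env-var reads the scan attributes
    to ConstantEnum.init and Dao.get_db_path. Variable names are from the
    report; the defaults and the DB filename are assumptions."""
    workspace = os.environ.get("OPENCLAW_WORKSPACE", os.getcwd())
    return {
        "workspace": workspace,
        # Dao.get_db_path reportedly creates a SQLite DB under <workspace>/data
        "db_path": os.path.join(workspace, "data", "smyx.db"),
        "open_id": (os.environ.get("OPENCLAW_SENDER_OPEN_ID")
                    or os.environ.get("FEISHU_OPEN_ID")),
        "username": os.environ.get("OPENCLAW_SENDER_USERNAME"),
    }
```

Running the skill under a scrubbed environment (for example via `env -i`) is one way to observe which of these reads actually affect its behavior.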
Persistence & Privilege
Although 'always' is false, the code will create/read local files and a SQLite DB under the workspace data directory and may write attachments to an attachments folder. BaseEnum/YamlUtil will create config.yaml files if missing. This produces persistent storage on the host and can retain uploaded videos, config values, and report records. The SKILL.md forbids using local long-term memory for history queries, yet the code contains a local DAO and DB that can store records — an important inconsistency.
What to consider before installing
Key points to consider before installing or running this skill:

- Mismatch: SKILL.md forbids reading local memory and instructs that all history queries come from the cloud, but the code (skills/smyx_common and dao) will load and may create YAML config files, and reads/writes a local SQLite DB under a workspace data directory. Expect persistent files (attachments, DB) to be created.
- Undeclared env vars: the code reads OPENCLAW_WORKSPACE, OPENCLAW_SENDER_OPEN_ID / OPENCLAW_SENDER_USERNAME, and FEISHU_OPEN_ID to populate defaults; none of these are declared in the skill metadata. If you run this, be aware the skill can read those environment variables.
- Data handling: uploaded videos may be saved locally (attachments) and reports cached in a local DB. If you have privacy concerns about video files or report data being written locally, do not run this on a sensitive host, or review and modify the code first.
- Verify endpoints and RequestUtil: the skill delegates HTTP calls to RequestUtil and uses configured base URLs (skills/smyx_common/scripts/config.yaml and config-prod/test/dev). Confirm the API base URLs (e.g. the lifeemergence.com paths in the provided config) are expected and safe, and inspect the RequestUtil implementation to see how requests are authenticated and where data is sent.
- Open-id behavior: SKILL.md requires a strict open-id retrieval flow (checking specific config files and asking the user if missing), but the code also accepts environment variables and a command-line --open-id. Decide which method you trust, and do not supply sensitive identifiers if you don't want them stored or transmitted.
- Run in a sandbox first: if you want to test, run the skill in an isolated environment (container or throwaway VM) so you can observe what files it creates, which network endpoints it contacts, and whether it stores data locally.
- If you need stricter guarantees: ask the author to reconcile SKILL.md's rules with the code (explicitly list the env vars used, confirm the local DB is not used for history queries, or remove local persistence), or strip and inspect the smyx_common components before use.

Confidence note: the verdict is 'suspicious' with medium confidence because the repo clearly implements remote analysis but also contains local persistence and config behavior that conflicts with the written prohibitions. Nothing in the inspected code is an explicit sign of intentional malice, but the inconsistencies and undeclared environment/file access warrant caution.
skills/smyx_common/scripts/config-dev.yaml:2
Install source points to URL shortener or raw IP.
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97cyjc6k3278vhk051cw5gp4s84xy2r
66 downloads
0 stars
1 version
Updated 1w ago
v1.0.0
MIT-0

Reptile Pet Health Diagnosis Tool | 爬行类宠物健康诊断分析工具

Specifically designed for the health diagnosis of reptiles and arachnids—such as lizards, snakes, and spiders—this capability triggers an automated intelligent analysis workflow whenever users upload local videos or provide network URLs. By leveraging server-side APIs, the function performs deep visual parsing of the pets in the video, precisely identifying scale conditions, skin lesions, and physical characteristics. It then screens for potential disease risks and generates a detailed "Pet Safety Guardian Health Report," providing users with a scientific and convenient health management solution for their exotic companions.


Demo

⚠️ Mandatory Memory Rules (Highest Priority)

This skill explicitly stipulates:

  • Absolutely no reading of any local memory files, including but not limited to memory/YYYY-MM-DD.md, MEMORY.md, and similar local files
  • Absolutely no retrieval of information from LanceDB long-term memory
  • All history report queries must be fetched from the cloud API; historical data in local memory must not be used
  • Even if the skill invocation fails or the API errors, do not fall back to summarizing from local memory

Task Objectives

  • This skill is used to: run reptile pet health diagnosis on a reptile pet video and obtain a structured Pet Safety Guardian Health Report
  • Capabilities include: video analysis, scale feature recognition, skin condition assessment, body appearance analysis, common disease warnings, and generation of health-care recommendations
  • Trigger conditions:
    1. Default trigger: when the user provides a reptile pet video URL or file to analyze, this skill runs by default
    2. When the user explicitly requests a reptile health check, mentions keywords such as reptile pet, lizard, snake, spider, reptile health, or reptile diagnosis, and has uploaded a video or image file
    3. When the user mentions any of the following keywords, the history report query triggers automatically: view historical reptile reports, historical pet-safety reports, reptile diagnosis report list, reptile report list, query historical reports, view reptile report list, show all reptile reports, show reptile diagnosis reports, query Pet Safety Guardian health reports
  • Automatic behaviors:
    1. If the user uploads an attachment or a video/image file, save it automatically to the attachments directory under the skill directory
    2. ⚠️ Mandatory data-retrieval rule (second-highest priority): if the user triggers any history-report query keyword (such as "show all reptile reports", "show all pet-safety reports", or "view historical reports"), you must:
      • Call the API directly with python -m scripts.crawl_analysis --list --open-id to query the cloud history report data
      • Strictly forbidden: reading historical session information from the local memory directory, manually summarizing reports from local records, or extracting reports from long-term memory
      • Always fetch the latest complete data from the cloud API, then output the results as a Markdown table

Prerequisites

  • Dependencies: packages and versions required by the scripts
    requests>=2.28.0
    

Steps

🔒 open-id Acquisition Flow (mandatory, to prevent omission)

Before running a reptile pet health analysis, obtain the open-id in the following priority order:

Step 1: [Highest priority] Check the config file in the skill's own directory (preferred)
        Path: skills/smyx_common/scripts/config.yaml (relative to the skill root)
        Full path example: ${OPENCLAW_WORKSPACE}/skills/{current skill directory}/skills/smyx_common/scripts/config.yaml
        → If the file exists and the api-key field is set, read api-key as the open-id
        ↓ (not found / not configured / api-key empty)
Step 2: Check the config file in the shared workspace directory
        Path: ${OPENCLAW_WORKSPACE}/skills/smyx_common/scripts/config.yaml
        → If the file exists and the api-key field is set, read api-key as the open-id
        ↓ (not found / not configured)
Step 3: Check whether the user explicitly provided an open-id in the message
        ↓ (not provided)
Step 4: ❗ Pause execution and explicitly ask the user for a username or phone number to use as the open-id

⚠️ Key constraints:

  • Do not assume, derive, or fabricate an open-id value (e.g. openclaw-control-ui, default, petC113, pet123)
  • Do not skip open-id validation and call the API directly
  • Continue with the analysis only after a valid open-id has been obtained
  • If the user declines to provide an open-id, explain its purpose (saving and querying reptile report records) and ask whether to continue
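The four-step priority order above can be sketched as a small helper. This is a hypothetical illustration: the config paths and the api-key field come from the flow above, while the function names and the line-scan parsing (used instead of a full YAML load to keep the sketch dependency-free) are assumptions:

```python
from pathlib import Path
from typing import Optional

def _read_api_key(cfg: Path) -> Optional[str]:
    # The real code loads YAML via its own utilities; a plain line scan
    # for the api-key field is enough for this sketch.
    for line in cfg.read_text(encoding="utf-8").splitlines():
        if line.strip().startswith("api-key:"):
            value = line.split(":", 1)[1].strip().strip("'\"")
            return value or None
    return None

def resolve_open_id(skill_root: Path, workspace: Path,
                    user_supplied: Optional[str]) -> Optional[str]:
    """Resolve open-id in the mandated priority order (hypothetical helper)."""
    for cfg in (
        skill_root / "skills" / "smyx_common" / "scripts" / "config.yaml",  # step 1
        workspace / "skills" / "smyx_common" / "scripts" / "config.yaml",   # step 2
    ):
        if cfg.is_file():
            key = _read_api_key(cfg)
            if key:
                return key
    # Step 3: an open-id the user supplied in the message.
    # A None result corresponds to step 4: pause and ask the user.
    return user_supplied
```

A None return is the signal to stop and prompt the user rather than invent a value.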

  • Standard workflow:
    1. Prepare the video input
      • Provide a local video file path or a network video URL
      • Make sure the video clearly shows the pet's overall appearance, scales, and skin, with good lighting
    2. Obtain the open-id (mandatory)
      • Obtain the open-id following the flow above
      • If it cannot be obtained, prompt the user for a username or phone number
    3. Run the reptile health analysis
      • Call python -m scripts.crawl_analysis to process the video file (the script must be run from the skill root directory)
      • Parameters:
        • --input: local video file path (uploaded via multipart/form-data)
        • --url: network video URL (downloaded automatically by the API service)
        • --crawl-type: reptile pet type; one of lizard/snake/spider/turtle/gecko/chameleon/scorpion/iguana/crocodile/other, default other
        • --open-id: the current user's open-id (required; obtained per the flow above)
        • --list: show the list of historical reptile video analysis reports (an optional start-date argument filters the range)
        • --api-key: API access key (optional)
        • --api-url: API service address (optional; a default is used)
        • --detail: output verbosity (basic/standard/json, default json)
        • --output: result output file path (optional)
    4. Review the analysis results
      • Receive a structured Pet Safety Guardian Health Report
      • Includes: basic pet information, overall health status, scale analysis, skin characteristics, potential disease warnings, and health-care recommendations
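The workflow above can also be driven programmatically. A minimal sketch, assuming the script prints its JSON report to stdout; the helper names are hypothetical, and only the module path and flags come from the parameter list:

```python
import json
import subprocess
from typing import List

def build_command(video_path: str, crawl_type: str, open_id: str) -> List[str]:
    # Flags mirror the parameter list above; --detail json is the documented default.
    return ["python", "-m", "scripts.crawl_analysis",
            "--input", video_path,
            "--crawl-type", crawl_type,
            "--open-id", open_id,
            "--detail", "json"]

def analyze_video(video_path: str, crawl_type: str, open_id: str,
                  skill_root: str = ".") -> dict:
    """Run the analysis and parse its stdout as JSON. That the report is
    printed to stdout is an assumption; inspect the script to confirm."""
    result = subprocess.run(build_command(video_path, crawl_type, open_id),
                            cwd=skill_root,  # must run from the skill root
                            capture_output=True, text=True, check=True)
    return json.loads(result.stdout)
```

Note the `cwd=skill_root` argument: the `-m scripts.crawl_analysis` invocation only resolves when run from the skill root, as the instructions require.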

Resource Index

  • Required script: scripts/crawl_analysis.py (purpose: calls the API to run the reptile health analysis; local files are uploaded via multipart/form-data, network URLs are downloaded by the API service)
  • Config file: scripts/config.py (purpose: configures the API address, default parameters, and video format limits)
  • Domain reference: references/api_doc.md (read when you need the detailed API specification and error codes)

Notes

  • Read reference documents only when needed, to keep the context concise
  • Video requirements: mp4/avi/mov format, 100 MB maximum
  • The API key is optional; if passed as a parameter, authentication must succeed, otherwise authentication is skipped
  • Analysis results are for health reference only and cannot replace a professional veterinary diagnosis
  • Do not generate ad-hoc scripts; use only the skill's own scripts
  • Network URL arguments do not need to be downloaded locally; they are assumed to be public addresses, and the API service downloads them itself
  • When displaying the list of historical analysis reports, extract the reportImageUrl field from the JSON data as the hyperlink target and output a Markdown table with four columns: "Report Name", "Pet Type", "Analysis Time", and "View". The "Report Name" column uses the pattern Reptile Health Analysis Report-{record id}, and the "View" column uses a [🔗 View Report](reportImageUrl) hyperlink so the user can jump straight to the full report page.
  • Table output example:

    | Report Name | Pet Type | Analysis Time | View |
    | --- | --- | --- | --- |
    | Reptile Health Analysis Report-20260312172200001 | Lizard | 2026-03-12 17:22:00 | 🔗 View Report |
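The table-rendering rule above can be sketched as follows. Only reportImageUrl is a field name confirmed by the instructions; the id, crawlType, and analysisTime keys are assumptions about the API's JSON shape:

```python
from typing import List

def render_report_table(records: List[dict]) -> str:
    """Render the history list as the Markdown table described above.
    Only reportImageUrl is a confirmed field; id/crawlType/analysisTime
    are assumed key names."""
    lines = [
        "| Report Name | Pet Type | Analysis Time | View |",
        "| --- | --- | --- | --- |",
    ]
    for r in records:
        lines.append(
            "| Reptile Health Analysis Report-{id} | {t} | {ts} "
            "| [🔗 View Report]({url}) |".format(
                id=r["id"], t=r["crawlType"],
                ts=r["analysisTime"], url=r["reportImageUrl"]))
    return "\n".join(lines)
```

Verify the actual key names against references/api_doc.md before relying on this shape.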

Usage Examples

# Analyze a local lizard video (example only; do not use openclaw-control-ui as a real open-id)
python -m scripts.crawl_analysis --input /path/to/lizard_video.mp4 --crawl-type lizard --open-id openclaw-control-ui

# Analyze a snake video from a network URL (example only; do not use openclaw-control-ui as a real open-id)
python -m scripts.crawl_analysis --url https://example.com/snake_video.mp4 --crawl-type snake --open-id openclaw-control-ui

# Analyze a local spider video (example only; do not use openclaw-control-ui as a real open-id)
python -m scripts.crawl_analysis --input /path/to/spider_video.mp4 --crawl-type spider --open-id openclaw-control-ui

# Analyze a local turtle video (example only; do not use openclaw-control-ui as a real open-id)
python -m scripts.crawl_analysis --input /path/to/turtle_video.mp4 --crawl-type turtle --open-id openclaw-control-ui

# Analyze a local gecko video (example only; do not use openclaw-control-ui as a real open-id)
python -m scripts.crawl_analysis --input /path/to/gecko_video.mp4 --crawl-type gecko --open-id openclaw-control-ui

# Show historical analysis reports / report list / historical pet-safety reports (auto-trigger keywords: view historical reptile reports, historical reports, reptile report list, etc.)
python -m scripts.crawl_analysis --list --open-id openclaw-control-ui

# Output a condensed report
python -m scripts.crawl_analysis --input video.mp4 --crawl-type lizard --open-id your-open-id --detail basic

# Save results to a file
python -m scripts.crawl_analysis --input video.mp4 --crawl-type snake --open-id your-open-id --output result.json
