Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

process-data-monitor-claw

v1.0.0

过程数据监控虾 (Process Data Monitoring Shrimp) — monitors the full-chain data state of running business processes in real time, scanning every node of a process like radar and raising alerts the moment an anomaly appears. **Use this Skill when**: (1) you need to monitor business-process node status (order fulfillment, inventory sync, payment chains, data pipelines, etc.); (2) you need threshold alerts on data anomalies (values out of range, stalled states, deteriorating trends); (3) you need to ...

0 stars · 79 downloads · 0 current · 0 all-time
by Ricky (@tujinsama)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for tujinsama/process-data-monitor-claw.

Prompt Preview: Install & Setup
Install the skill "process-data-monitor-claw" (tujinsama/process-data-monitor-claw) from ClawHub.
Skill page: https://clawhub.ai/tujinsama/process-data-monitor-claw
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install process-data-monitor-claw

ClawHub CLI

Package manager switcher

npx clawhub@latest install process-data-monitor-claw
Security Scan

Capability signals

  • Crypto: Can make purchases

These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.

Scan verdicts

  • VirusTotal: Suspicious (View report →)
  • OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name and description match the included docs and scripts (a monitoring/alerting tool). However, the skill declares no required binaries or env vars, while its scripts clearly depend on curl, jq, and Feishu tokens, and the docs reference many credentials (DB passwords, cloud keys). The absence of declared requirements is incoherent.
Instruction Scope
SKILL.md stays within monitoring scope (databases, APIs, MQs, logs) and explains converting NL requirements to YAML configs. It explicitly instructs the agent to read config/monitor-config.yaml, query data sources, and send alerts via webhooks/token-backed APIs. Those actions are expected for a monitor but grant broad access to system files/credentials depending on the produced config.
Install Mechanism
No install spec (instruction-only), which minimizes automatic disk writes. There are shipped scripts (monitor-daemon.sh, alert-sender.sh) that will be run by users; nothing downloads arbitrary code during install.
Credentials
The skill requests no env vars in metadata but the docs and scripts require several sensitive variables (FEISHU_WEBHOOK, FEISHU_BOT_TOKEN, MYSQL_PASSWORD, PG_PASSWORD, cloud provider keys, etc.). This mismatch prevents the platform from prompting users for the right secrets and is a risk for accidental credential exposure or misuse.
Persistence & Privilege
always:false and no platform-level persistence requested. The scripts create PID/log files under /tmp and run a user-level daemon; they do not alter other skills or system-wide agent config.
What to consider before installing
This skill appears to be a legitimate monitoring/alerting helper, but there are mismatches you should address before installing or running it:

  1. Required binaries: the scripts call curl and jq (and imply cron); ensure those are available.
  2. Missing declared env vars: the metadata lists none, but the code/docs use FEISHU_WEBHOOK, FEISHU_BOT_TOKEN, and many example credentials (DB passwords, cloud keys). Treat those as sensitive; do not supply high-privilege credentials.
  3. Review config/monitor-config.yaml before starting the daemon; the monitor will execute queries and call external endpoints defined there.
  4. Limit network exposure: verify that any webhook URLs and bot tokens point to trusted receivers to avoid accidental data exfiltration.
  5. Run first in an isolated/test environment with read-only monitoring accounts and least privilege, and inspect the logs (in /tmp) and the code.

If the publisher/source is unknown, ask for provenance (source repo, homepage, or signed release), or request that the skill declare its required env vars and binaries explicitly before use.
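A quick pre-flight check along those lines might look like the sketch below. The helper name is hypothetical (not part of the skill); adjust the binary and variable lists to match your config:

```shell
# Pre-flight check before running the shipped scripts: report which of
# the undeclared binaries and env vars are actually missing.
missing_deps() {
  missing=""
  # The scripts are reported to call curl and jq without declaring them.
  for bin in curl jq; do
    command -v "$bin" >/dev/null 2>&1 || missing="$missing bin:$bin"
  done
  # Check each env var name passed as an argument via indirect expansion.
  for var in "$@"; do
    eval "val=\${$var:-}"
    [ -n "$val" ] || missing="$missing env:$var"
  done
  echo "$missing"
}

# Usage: prints anything still missing, e.g. " bin:jq env:FEISHU_WEBHOOK"
missing_deps FEISHU_WEBHOOK FEISHU_BOT_TOKEN
```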

Like a lobster shell, security has layers — review code before you run it.

latest: vk979xmkbxje821z5yzj72mcqqn84eg6x
79 downloads · 0 stars · 1 version
Updated 2w ago · v1.0.0 · MIT-0

过程数据监控虾 (Process Data Monitoring Shrimp)

A "sentinel" for business processes, responsible for spotting problems and raising alerts promptly.

Workflow

Step 1 — Define monitoring targets: identify the process nodes, data metrics, and business objects to monitor (order status, inventory levels, payment success rate, task execution status, etc.).

Step 2 — Set monitoring rules: for each target, configure its normal range, anomaly thresholds, check frequency, and alert level. See the rule templates in references/alert-rules.md.

Step 3 — Choose a data-collection method: pick the method that matches the data-source type; see references/data-sources.md

  • Database polling (MySQL/PostgreSQL/MongoDB)
  • API calls (REST/GraphQL)
  • Message-queue listening (Kafka/RabbitMQ)
  • Log-file parsing
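For the simplest of these sources, log-file parsing, a metric can be as plain as a match count over the monitored file. The helper below is a hypothetical sketch, not one of the shipped scripts:

```shell
# Minimal log-parsing collector: count ERROR lines in a log file.
count_errors() {
  logfile="$1"
  # grep -c prints the match count but exits 1 when the count is 0,
  # so mask that exit code to keep `set -e` scripts alive.
  grep -c 'ERROR' "$logfile" || true
}
```

The number this prints is what Step 4 would compare against a rule's threshold.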

Step 4 — Anomaly detection: compare real-time data against the rules to identify value, state, trend, and correlation anomalies.

Step 5 — Alert delivery: push alerts according to their level, using scripts/alert-sender.sh

  • Urgent: Feishu group message + direct message
  • Warning: Feishu direct message
  • Info: Feishu group message
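The group-message path can be sketched as a plain curl call. The payload below uses the text-message shape of Feishu's custom-bot webhook (verify against the Feishu docs before relying on it); the function names are hypothetical, and FEISHU_WEBHOOK is the env var the shipped scripts are reported to expect:

```shell
# Build the JSON payload for a Feishu custom-bot text message.
build_feishu_payload() {
  printf '{"msg_type":"text","content":{"text":"%s"}}' "$1"
}

# Post the alert to the group webhook; FEISHU_WEBHOOK must be exported.
send_feishu_alert() {
  curl -s -X POST -H 'Content-Type: application/json' \
    -d "$(build_feishu_payload "$1")" "$FEISHU_WEBHOOK"
}
```

Note that the message text is interpolated unescaped here; a real sender would JSON-escape it first.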

Step 6 — Generate reports: periodically summarize anomaly events, response times, and improvement suggestions.

Monitoring Configuration Format

When the user states requirements in natural language, convert them into this standard configuration structure:

monitor_name: "monitoring task name"
check_interval: 300  # seconds; default 5 minutes
data_source:
  type: "mysql"  # mysql/postgresql/api/log
  connection: "..."
rules:
  - name: "rule name"
    metric: "metric name or SQL query"
    operator: "gt"  # gt/lt/eq/ne/gte/lte
    threshold: 0
    alert_level: "urgent"  # urgent/warning/info
    message: "alert message template; supports variables such as {value} and {count}"
    cooldown: 300  # alert-suppression interval in seconds; default 5 minutes
notify:
  feishu_webhook: "https://open.feishu.cn/open-apis/bot/v2/hook/..."
  feishu_users: []  # list of open_id values, used for direct messages
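A rule's operator field maps naturally onto shell integer tests. The helper below is a hypothetical sketch of how one rule comparison could be evaluated, not the skill's actual implementation, and it assumes integer metrics:

```shell
# Evaluate one rule: compare a numeric value against a threshold using
# the operator names from the config (gt/lt/eq/ne/gte/lte).
# Returns 0 when the rule fires (i.e. the comparison holds), 1 otherwise.
check_rule() {
  value="$1"; op="$2"; threshold="$3"
  case "$op" in
    gt)  [ "$value" -gt "$threshold" ] ;;
    lt)  [ "$value" -lt "$threshold" ] ;;
    eq)  [ "$value" -eq "$threshold" ] ;;
    ne)  [ "$value" -ne "$threshold" ] ;;
    gte) [ "$value" -ge "$threshold" ] ;;
    lte) [ "$value" -le "$threshold" ] ;;
    *)   return 2 ;;  # unknown operator
  esac
}
```

For example, `check_rule 10 gt 5` succeeds (the alert would fire), while `check_rule 3 gt 5` fails.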

Key Design Principles

  • Alert noise reduction: the same issue alerts at most once per cooldown seconds, avoiding alert storms
  • Traceability: every alert is written to a log, supporting retrospective analysis
  • Adaptive thresholds: periodically tune thresholds against historical data to reduce false positives
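The noise-reduction principle can be sketched as a per-rule cooldown gate backed by timestamp files under /tmp (where the skill keeps its runtime state); the function name and file layout are hypothetical:

```shell
# Gate repeat alerts: return 0 (send) only if this rule has not alerted
# within the last $cooldown seconds; otherwise return 1 (suppress).
should_alert() {
  rule="$1"; cooldown="${2:-300}"
  stamp="/tmp/monitor-cooldown-${rule}"
  now=$(date +%s)
  if [ -f "$stamp" ]; then
    last=$(cat "$stamp")
    # Still inside the cooldown window: suppress this alert.
    [ $((now - last)) -lt "$cooldown" ] && return 1
  fi
  # Record the send time and allow the alert through.
  echo "$now" > "$stamp"
  return 0
}
```

The first call for a rule passes; an immediate second call for the same rule is suppressed until cooldown seconds have elapsed.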

References

  • references/metrics-library.md — predefined monitoring-metric systems for common business scenarios (e-commerce/SaaS/marketing)
  • references/alert-rules.md — alert-rule configuration templates (threshold/trend/correlation/time-window)
  • references/data-sources.md — integration guides for each type of data source

Scripts

  • scripts/monitor-daemon.sh — monitoring daemon (start/stop/status/reload)
  • scripts/alert-sender.sh — multi-channel alert delivery (Feishu Webhook)
