Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

universal-data-analyst

v1.0.3

Automatically identifies data types using a data-ontology approach, generates analysis plans and scripts, and outputs data-quality reports plus intelligent analysis reports in multiple formats. Supports many data formats.

★ 1 · 319 downloads · 1 current · 1 all-time
by yamaz49

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for yamaz49/universal-data-analyst.

Prompt preview (Install & Setup):
Install the skill "universal-data-analyst" (yamaz49/universal-data-analyst) from ClawHub.
Skill page: https://clawhub.ai/yamaz49/universal-data-analyst
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install universal-data-analyst

ClawHub CLI


npx clawhub@latest install universal-data-analyst
Security Scan
Capability signals
Crypto: can make purchases
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Suspicious (View report →)
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The SKILL.md emphasizes that every decision is made "by the LLM" with "no keyword hardcoding", but the code contains clear heuristics and hard-coded lists (e.g., the candidate_names in _detect_join_keys, default ontology values). The skill both generates LLM prompts and implements local heuristic processing. That is not itself malicious, but it contradicts the "no hardcoding" claim and users should understand the hybrid behavior.
Instruction Scope
The runtime instructions and code generate LLM prompts, obtain LLM responses (from the user or an autonomous flow), then generate and execute full Python analysis scripts. Executing code created by an LLM is inherently risky: a generated script could contain network calls, file-system operations, or other exfiltration logic. The SKILL.md and orchestrator allow "autonomous" modes that skip manual review, and the code includes subprocess and execution plumbing (truncated but referenced). The skill also supports SQL connection strings, implying database access, yet it does not declare or limit how credentials are used. There are explicit instructions/examples showing how to call an external LLM (Anthropic), but no enforced guardrails or required manual-review step.
Install Mechanism
No install spec; the package is delivered as code files and relies on standard Python dependencies. That lowers supply-chain risk compared to fetching arbitrary archives or binaries. Dependencies listed are normal data-analysis libs (pandas, numpy, matplotlib, etc.).
Credentials
The skill declares no required environment variables or primary credential, but the README and SKILL.md show examples of calling external LLM APIs (Anthropic) and mention SQL connection strings. In practice, using this skill will typically require API keys or DB credentials supplied by the user — the package does not request or document required env vars in its manifest, which is a mismatch the user should be aware of.
Persistence & Privilege
always:false (good), but the platform default allows autonomous invocation and the SKILL.md/code reference autonomous modes and automated end-to-end flows. Combined with the ability to auto-generate and execute LLM-produced scripts and to accept database connection strings/files, this increases blast radius: if invoked autonomously and paired with model access, the flow could run arbitrary code without human review. The skill does not include explicit sandboxing or network-restriction guidance.
What to consider before installing
Before installing or running this skill, consider the following:

  • Do not enable fully autonomous execution unless you trust the environment and have strict sandboxing (e.g., no outbound network, limited filesystem rights, an isolated VM/container).
  • The skill generates Python scripts from LLM output and can execute them. Always review generated scripts manually (use the step-by-step mode) before executing, especially if the data is sensitive or the runtime has network/database access.
  • The package doesn't declare required LLM API keys or DB credentials, but its examples expect you to use external LLMs or DB connection strings. Treat those credentials as sensitive: provide them only in controlled ways and prefer ephemeral/separated accounts.
  • If you need to run this in production, run it in a sandbox with egress blocked or restricted, and use least-privilege credentials for any database connections.
  • Note that the claimed "no hardcoding / always LLM-driven" promise is not strictly true: the code includes heuristics and defaults. Expect hybrid LLM+heuristic behavior.
  • If possible, audit the code (search for subprocess, os.system, exec, open(..., 'w'), network libraries) and add explicit guardrails (denylist network calls, require signed/approved scripts) before enabling autonomous use.
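The code audit suggested above can be scripted. A minimal sketch in Python (the pattern list and directory layout are illustrative assumptions, not the skill's own tooling):

```python
import re
from pathlib import Path

# Calls that warrant manual review before enabling autonomous execution.
RISKY = re.compile(r"\b(subprocess|os\.system|exec|eval|socket|requests|urllib)\b")

def audit(root: str) -> dict:
    """Return {file: [line numbers]} for lines matching a risky pattern."""
    hits = {}
    for path in Path(root).rglob("*.py"):
        lines = [
            i
            for i, line in enumerate(path.read_text(errors="ignore").splitlines(), 1)
            if RISKY.search(line)
        ]
        if lines:
            hits[str(path)] = lines
    return hits
```

Run it over the installed skill directory and inspect every hit before allowing any autonomous mode.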

Like a lobster shell, security has layers — review code before you run it.

latest: vk97a545bddkj4sktpbznb8esm1848kdz
319 downloads · 1 star · 4 versions
Updated 3w ago
v1.0.3 · MIT-0

Universal Data Analyst

Overview

An intelligent data-analysis skill based on data ontology. It does not rely on hard-coded keywords: every analysis is reasoned through by the LLM, which automatically identifies data types, chooses analysis methods, generates analysis scripts, and outputs reports.

It supports economic data (retail, subscriptions, finance, etc.) and non-economic data (scientific measurements, social networks, text, etc.), and can handle CSV, Excel, Parquet, JSON, and other formats.


Triggers

Upload a data file, or send a message like any of the following:

  • "Help me analyze this data"
  • "What patterns are in this CSV?"
  • "Explore this dataset"
  • "Check the data quality for me"
  • Or directly upload a .csv / .xlsx / .parquet / .json file

Core design: the four-layer analysis framework

Layer 1: Data Ontology
        ↓  What is this data about? What entities? What generating mechanism?
Layer 2: Problem Typology
        ↓  Descriptive / diagnostic / predictive / prescriptive / causal?
Layer 3: Methodology Mapping
        ↓  Match a domain-accepted analysis framework
Layer 4: Validation & Output
           Data-quality report + analysis script + HTML/MD report

Each layer calls the LLM for reasoning; no hard-coded rules are used.


Analysis pipeline (7 steps)

  1. Data loading: auto-detects the file format; supports many file types
  2. Ontology recognition: the LLM infers the data's entity types and generating mechanism
  3. Quality validation: auto-detects missing values, outliers, duplicate rows, etc., and outputs a quality score
  4. Plan generation: the LLM selects an analysis framework and path based on the user's goals
  5. Script generation: the LLM generates an executable Python analysis script
  6. Execution: runs the script to produce charts and numeric results
  7. Final report: outputs reports in both HTML and Markdown

Pipeline health monitoring (new)

Every step has status tracking and error handling:

  • Step dependency checks: if a prerequisite step fails, downstream steps are automatically blocked
  • Clear error messages: a failed step reports the cause and suggested fixes
  • Pipeline health report: a complete execution-status and issue summary at the end

If a step fails, you will see:

⚠️ Pipeline aborted!
   Reason: critical step 'Data loading' failed: encoding error

Suggested fixes:
  1. The file encoding may not be UTF-8; try specifying the encoding parameter manually
  2. Common Chinese encodings: gbk, gb2312, gb18030
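The dependency gating described here can be sketched in a few lines of Python (class and method names are illustrative, not the skill's actual API):

```python
class PipelineAborted(Exception):
    """Raised when a critical step fails or a prerequisite is unmet."""

class Pipeline:
    """Track per-step status and block steps whose prerequisites failed."""

    def __init__(self):
        self.status = {}  # step name -> "ok" | "failed"

    def run(self, name, func, requires=()):
        # Block this step if any prerequisite did not finish successfully.
        failed = [r for r in requires if self.status.get(r) != "ok"]
        if failed:
            raise PipelineAborted(
                f"Step '{name}' blocked; prerequisites failed: {failed}"
            )
        try:
            result = func()
            self.status[name] = "ok"
            return result
        except Exception as err:
            self.status[name] = "failed"
            raise PipelineAborted(f"Critical step '{name}' failed: {err}") from err
```

Each of the seven steps would be registered with its prerequisites, so a failed "Data loading" automatically aborts everything downstream.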

Supported data types

Economic data

Data signature → identified as: auto-matched frameworks

  • Orders + prices + SKUs → retail economy: value chain / ABC-XYZ / RFM
  • Users + subscription periods + churn → subscription economy: LTV / cohorts / retention curves
  • Click / add-to-cart / purchase event chains → attention economy: funnel analysis / AARRR
  • GMV + platform matchmaking → commission economy: two-sided network effects / unit economics
  • Jobs + skills + salaries → labor market: skill premium / experience elasticity
  • OHLCV price data → financial time series: technical analysis / volatility models

Non-economic data

Data type → auto-matched frameworks

  • Sensor / continuous time-series data → time-series decomposition, extreme-value analysis
  • Social / network relationship data → centrality analysis, community detection
  • Geographic / spatial data → spatial autocorrelation, hotspot analysis
  • Text corpora → topic models, sentiment analysis
  • Biomedical data → survival analysis, differential expression

Supported file formats

  • CSV / TSV (.csv, .tsv, .txt): automatic encoding detection; supports utf-8, gbk, latin1, and more
  • Excel (.xlsx, .xls)
  • Parquet (.parquet, .pq)
  • JSON (.json)
  • SQL databases (via connection string)
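A loader that dispatches on file extension, as the format list above describes, might look like this (a sketch; the function name and dispatch details are assumptions, not the skill's actual code):

```python
from pathlib import Path

import pandas as pd

def load_table(path: str, **kwargs) -> pd.DataFrame:
    """Dispatch to the right pandas reader based on file extension."""
    ext = Path(path).suffix.lower()
    if ext in (".csv", ".tsv", ".txt"):
        sep = "\t" if ext == ".tsv" else kwargs.pop("sep", ",")
        return pd.read_csv(path, sep=sep, **kwargs)
    if ext in (".xlsx", ".xls"):
        return pd.read_excel(path, **kwargs)    # requires openpyxl
    if ext in (".parquet", ".pq"):
        return pd.read_parquet(path, **kwargs)  # requires pyarrow
    if ext == ".json":
        return pd.read_json(path, **kwargs)
    raise ValueError(f"Unsupported file format: {ext}")
```

SQL sources would take a separate path through sqlalchemy rather than this extension-based dispatch.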

Encoding fault tolerance

When loading CSV files, multiple encodings are tried automatically:

  • Automatic encoding detection (if the chardet library is available)
  • Fallback encodings: utf-8, utf-8-sig, gbk, gb2312, gb18030, latin1, and more
  • Engine fallback: if the C engine fails, automatically switch to the Python engine and skip corrupted lines
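The fallback chain above can be sketched with pandas (the exact encoding order and error handling are assumptions based on this README, not the skill's actual code):

```python
import pandas as pd

FALLBACK_ENCODINGS = ("utf-8", "utf-8-sig", "gbk", "gb18030", "latin1")

def read_csv_tolerant(path, **kwargs):
    """Try several encodings; on a C-engine parse error, retry with the
    Python engine and skip malformed lines."""
    last_err = None
    for enc in FALLBACK_ENCODINGS:
        try:
            return pd.read_csv(path, encoding=enc, **kwargs)
        except UnicodeDecodeError as err:
            last_err = err  # wrong encoding: try the next candidate
        except pd.errors.ParserError:
            # Structural problem, not encoding: fall back to the Python engine.
            return pd.read_csv(path, encoding=enc, engine="python",
                               on_bad_lines="skip", **kwargs)
    raise last_err
```

Note that latin1 decodes any byte sequence, so it acts as a last resort that always "succeeds" even if the text comes out garbled.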

Output

Each analysis produces the following:

session_YYYYMMDD_HHMMSS/
├── step2_ontology_prompt.txt     # ontology-recognition prompt (reusable)
├── step3_validation_report.json  # data-quality report
├── step3_cleaning_report.txt     # data-cleaning suggestions
├── step4_planning_prompt.txt     # analysis-planning prompt (reusable)
├── step5_script_prompt.txt       # script-generation prompt (reusable)
├── analysis_report.html          # full HTML report (with charts)
├── analysis_report.md            # Markdown report
└── charts/                       # all analysis charts (PNG)

Usage examples

Example 1: analyzing e-commerce sales data

User: Help me analyze this sales data; I want to know which products sell well and which customers are high-value.

[uploads orders.csv]

The skill automatically:

  1. Identifies the data as "retail economy × transaction/event data"
  2. Selects the RFM customer-value analysis + ABC product-classification frameworks
  3. Generates and executes the analysis script
  4. Outputs customer-segment distribution charts, product sales rankings, an RFM heatmap, and an HTML report
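The RFM step can be sketched with pandas (the column names customer_id, order_date, and amount are hypothetical; the skill infers its own column mapping):

```python
import pandas as pd

def rfm_scores(orders: pd.DataFrame, now=None) -> pd.DataFrame:
    """Quartile Recency/Frequency/Monetary scores per customer."""
    now = now or orders["order_date"].max()
    g = orders.groupby("customer_id").agg(
        recency=("order_date", lambda d: (now - d.max()).days),
        frequency=("order_date", "count"),
        monetary=("amount", "sum"),
    )
    # Rank first so qcut always finds four distinct bins; low recency is good.
    g["R"] = pd.qcut(g["recency"].rank(method="first"), 4, labels=[4, 3, 2, 1])
    g["F"] = pd.qcut(g["frequency"].rank(method="first"), 4, labels=[1, 2, 3, 4])
    g["M"] = pd.qcut(g["monetary"].rank(method="first"), 4, labels=[1, 2, 3, 4])
    return g
```

Segments such as "champions" or "at risk" are then just groupings over the R/F/M score combinations.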

Example 2: analyzing user behavior logs

User: This is our app's user behavior log; I'd like to see the user conversion funnel.

[uploads events.csv]

The skill automatically:

  1. Identifies the data as "attention/conversion economy × event-sequence data"
  2. Selects funnel analysis + session-sequence mining frameworks
  3. Outputs per-step conversion rates, drop-off analysis, and a user-path Sankey diagram
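A simplified funnel computation might look as follows (the column names user_id and event are hypothetical, and this version intersects user sets per step, ignoring event ordering within a session):

```python
import pandas as pd

def funnel(events: pd.DataFrame, steps) -> pd.DataFrame:
    """Count users surviving each funnel step and step-to-step conversion."""
    rows = []
    reached = None
    for step in steps:
        users = set(events.loc[events["event"] == step, "user_id"])
        # A user counts only if they also performed every earlier step.
        reached = users if reached is None else reached & users
        rows.append({"step": step, "users": len(reached)})
    out = pd.DataFrame(rows)
    out["conversion"] = out["users"] / out["users"].shift(1)
    return out
```

A production funnel would additionally enforce event order and session boundaries, which sequence mining handles.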

Example 3: analyzing weather observation data

User: Help me analyze these weather-station records to understand temperature and precipitation patterns.

[uploads weather.csv]

The skill automatically:

  1. Identifies the data as "earth science × time-series/trajectory data × instrument-generated"
  2. Selects time-series decomposition + seasonality analysis + extreme-value statistics frameworks
  3. Outputs trend charts, seasonal-decomposition plots, and an outlier report
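A crude additive decomposition of the kind this example describes can be sketched with pandas alone (the skill may use richer methods such as STL; this rolling-mean version is only illustrative and assumes a regularly sampled series):

```python
import numpy as np
import pandas as pd

def decompose_simple(s: pd.Series, period: int = 12):
    """Additive decomposition: rolling-mean trend, mean-by-phase seasonal
    component, and residual; trend + seasonal + resid reconstructs s."""
    trend = s.rolling(period, center=True, min_periods=1).mean()
    detrended = s - trend
    phase = np.arange(len(s)) % period          # position within each cycle
    seasonal = detrended.groupby(phase).transform("mean")
    resid = detrended - seasonal
    return trend, seasonal, resid
```

Large residuals then flag candidate outliers for the anomaly report.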

Dependencies

pandas >= 1.3
numpy >= 1.21
matplotlib >= 3.4
seaborn >= 0.11
scipy >= 1.7
openpyxl >= 3.0   # Excel support
chardet >= 4.0    # automatic encoding detection (optional but recommended)
pyarrow >= 6.0    # Parquet support (optional)
sqlalchemy >= 1.4 # SQL support (optional)

Versions

v1.1.0 · Author: Claude · License: CC BY-NC-SA 4.0

v1.1.0 changes (2026-03-23)

  1. Pipeline health monitoring: added step-status tracking, dependency checks, and error messages
  2. Improved encoding fault tolerance: CSV/TSV loading automatically tries multiple encodings (utf-8, gbk, latin1, etc.)
  3. Engine fallback: automatically switches to the Python engine when the C engine fails, skipping corrupted lines

v1.0.0

  • Initial release: four-layer analysis framework + 7-step analysis pipeline
