data-parser-toolkit

v1.0.0

Intelligently parses CSV, JSON, XLSX, Parquet, and SQL files: auto-detects encoding, repairs common format and content problems, and extracts structured data.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for qiuwenxi416488212-ship-it/data-parser-toolkit.

Prompt preview: Install & Setup
Install the skill "data-parser-toolkit" (qiuwenxi416488212-ship-it/data-parser-toolkit) from ClawHub.
Skill page: https://clawhub.ai/qiuwenxi416488212-ship-it/data-parser-toolkit
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install data-parser-toolkit

ClawHub CLI


npx clawhub@latest install data-parser-toolkit
Security Scan
VirusTotal
Benign
OpenClaw
Benign
medium confidence
Purpose & Capability
Name/description (data parsing for CSV/JSON/XLSX/Parquet/SQL) match the included Python modules and README. Declared dependencies (pandas, openpyxl, chardet, pyarrow, xlrd) are appropriate for the stated functionality. The README and SKILL.md advertise many helper functions (convert_folder, clean_pipeline, parse_from_url, detect_corruption, etc.); the provided code implements large portions of parsing functionality but some advertised utilities are not obviously present or are truncated in the provided files. This is likely sloppy packaging or documentation drift rather than malicious mismatch.
Instruction Scope
SKILL.md instructs the agent to parse files, detect encoding, and install typical Python deps. The runtime instructions and code focus on reading local files and archive members and performing in-memory transformations. There are no explicit instructions to read unrelated system config files or to transmit data to external endpoints. However, the README references parse_from_url and URL-reading examples; URL reads (pandas.read_csv supports URLs) permit network fetches — the exact implementation of parse_from_url is not visible, so network behavior is possible but not proven.
Install Mechanism
No install spec provided in the skill registry; SKILL.md gives a pip install line for standard Python packages (pandas, openpyxl, chardet, pyarrow, xlrd). That is an expected, low-risk method for this type of library. No arbitrary downloads, obscure URLs, or extract operations were seen in the install metadata.
Credentials
The skill requests no environment variables, no credentials, and no config paths. This is proportionate for a local file-parsing toolkit. No hidden env access was detected in SKILL.md or in the visible source.
Persistence & Privilege
The "always" flag is false, and default agent invocation is allowed (the platform default). The package does not request persistent elevated privileges or permanent system-wide changes in the provided files. It exposes functions to read arbitrary files (expected for a parser) but does not attempt to modify other skills' configs or set global agent settings.
Assessment
This skill appears to be a normal data-file parser, and its declared dependencies match the code. Before installing or giving it broad access, consider:

  1. Review the full source: the SKILL.md and README mention functions such as parse_from_url and many helper utilities; verify they exist and inspect any network code.
  2. If you will run it in an environment with sensitive files, restrict the skill's file access (run in a sandbox or grant it only specific directories), because it can read arbitrary local files and archive members.
  3. Confirm whether it performs any network I/O (parse_from_url, or pandas.read_csv on a URL) and whether that behavior is acceptable; if you need to prevent exfiltration, run offline or in a restricted network environment.
  4. Because some documented features look out of sync with the code, run the tests/examples in an isolated environment to validate behavior before trusting it with production data.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97cqjv9k7gqrnx1jwbbtcjz5n8537nq
112 downloads
0 stars
1 version
Updated 1w ago
v1.0.0
MIT-0

Data File Parser

Skill Description

Intelligently parse common data file formats (CSV/JSON/XLSX/Parquet/SQL): auto-detect encoding, repair common problems, and extract structured data.

Supported Formats

1. CSV (Comma-Separated Values)

Common issues and fixes:

  • Encoding: automatically try UTF-8 → GBK → GB2312 → Latin1
  • Header rows: auto-detect the count (common cases: 1 row, 2 rows, merged-cell headers)
  • Number formats: handle comma thousands separators (e.g. "1,234.56") and Chinese numerals
  • Null values: handle "-", "—", "null", "None", and empty strings
  • Embedded newlines: handle line breaks inside CSV fields (must be quoted)

Automatic detection:

# Detect the number of header rows (i.e. how many lines precede the data)
def detect_header_lines(content):
    lines = content.split('\n')[:10]  # only the first 10 lines matter
    for i, line in enumerate(lines):
        # '合约代码'/'交易代码' are Chinese column names ("contract code"/"trading code")
        if '合约代码' in line or '交易代码' in line or 'symbol' in line.lower():
            return i
    return 1  # default: 1 header row
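The numeric and null-value fixes listed above can be sketched as a single cell cleaner. The helper name and the exact token set are ours, not from the toolkit:

```python
def clean_numeric(value):
    """Normalize a raw CSV cell: map common null placeholders to None
    and strip comma thousands separators before converting to float."""
    NULL_TOKENS = {"-", "—", "null", "None", ""}
    if value is None:
        return None
    s = str(value).strip()
    if s in NULL_TOKENS:
        return None
    return float(s.replace(",", ""))

print(clean_numeric("1,234.56"))  # 1234.56
print(clean_numeric("—"))         # None
```

In a real pipeline you would apply this per column (e.g. via DataFrame.map) only to columns expected to be numeric.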

2. JSON (JavaScript Object Notation)

Common issues and fixes:

  • BOM: strip the \ufeff prefix
  • Trailing commas: {"a": 1,} → {"a": 1}
  • Single quotes: {'a': 1} → {"a": 1}
  • Python-style comments: strip # comments
  • Numeric precision: handle scientific notation

Repair function:

import re

def fix_json(text):
    # Strip BOM
    text = text.replace('\ufeff', '')
    # Remove trailing commas before } or ]
    text = re.sub(r',(\s*[}\]])', r'\1', text)
    # Convert single quotes to double quotes
    text = re.sub(r"'([^']*)'", r'"\1"', text)
    # Strip // and # line comments
    text = re.sub(r'//.*$', '', text, flags=re.MULTILINE)
    text = re.sub(r'#.*$', '', text, flags=re.MULTILINE)
    return text
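A round-trip check of the repair function (the definition is repeated here so the snippet is self-contained; note the single-quote substitution is naive and can mangle strings that contain apostrophes):

```python
import json
import re

def fix_json(text):
    text = text.replace('\ufeff', '')                       # strip BOM
    text = re.sub(r',(\s*[}\]])', r'\1', text)              # drop trailing commas
    text = re.sub(r"'([^']*)'", r'"\1"', text)              # single -> double quotes
    text = re.sub(r'//.*$', '', text, flags=re.MULTILINE)   # strip // comments
    text = re.sub(r'#.*$', '', text, flags=re.MULTILINE)    # strip # comments
    return text

raw = "\ufeff{'a': 1, 'items': [1, 2,],}  # trailing note"
print(json.loads(fix_json(raw)))  # {'a': 1, 'items': [1, 2]}
```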

3. XLSX (Excel)

Common issues and fixes:

  • Corrupt files: "File is not a zip file" → an XLSX is a ZIP archive underneath; the file must be re-saved or re-fetched
  • Merged cells: handle merged_cells ranges when reading
  • Empty rows: skip rows where every cell is None
  • Date formats: convert to standard ISO format
  • Formulas: read computed values with data_only=True
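openpyxl reports a merged range's value only in its top-left cell, so the other cells read back as None. A sketch of flattening merged ranges before export (unmerge_fill is our illustrative helper, not a toolkit function):

```python
import openpyxl

def unmerge_fill(ws):
    """Copy each merged range's top-left value into every cell of the
    range, then unmerge, so downstream code sees a plain grid."""
    for rng in list(ws.merged_cells.ranges):
        value = ws.cell(rng.min_row, rng.min_col).value
        ws.unmerge_cells(str(rng))
        for row in ws.iter_rows(min_row=rng.min_row, max_row=rng.max_row,
                                min_col=rng.min_col, max_col=rng.max_col):
            for cell in row:
                cell.value = value
```

Iterating over a copy of merged_cells.ranges matters, because unmerge_cells mutates that set during the loop.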

Checking whether an XLSX file is corrupt:

import zipfile
import openpyxl

def is_valid_xlsx(path):
    try:
        # Check 1: the file must be a valid ZIP archive
        with zipfile.ZipFile(path, 'r'):
            pass
        # Check 2: openpyxl must be able to open it
        wb = openpyxl.load_workbook(path, data_only=True)
        wb.close()
        return True
    except Exception:
        return False

4. Parquet (columnar storage)

Highlights: high compression ratio; well suited to large-scale data analysis

import pyarrow.parquet as pq

def read_parquet(path):
    table = pq.read_table(path)
    return table.to_pandas()

5. SQL Scripts

Common issues:

  • Character-set declarations: CHARSET=utf8mb4
  • Batch inserts: handle INSERT INTO ... VALUES (...), (...), ...
  • Escape characters: handle \' and '' escaped quotes
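As an illustration of the batch-insert case above, a naive splitter that pulls the value tuples out of a one-statement INSERT. This is illustrative only: it assumes the values are Python-compatible literals and that "VALUES" does not appear inside a string; a real SQL parser (e.g. sqlparse) is more robust:

```python
import ast
import re

def split_insert_values(sql):
    """Extract the value tuples from a batch
    INSERT INTO ... VALUES (...), (...) statement."""
    match = re.search(r'VALUES\s*(.+)$', sql, flags=re.IGNORECASE | re.DOTALL)
    if not match:
        return []
    body = match.group(1).strip().rstrip(';')
    # ast.literal_eval safely parses the tuples as Python literals
    return list(ast.literal_eval(f"[{body}]"))

rows = split_insert_values("INSERT INTO t (a, b) VALUES (1, 'x'), (2, 'y');")
print(rows)  # [(1, 'x'), (2, 'y')]
```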

Core Utility Functions

Automatic encoding detection

import chardet

def detect_encoding(path):
    with open(path, 'rb') as f:
        raw = f.read(10000)  # sample the first 10 KB
    result = chardet.detect(raw)
    return result['encoding'] or 'utf-8'

Smart CSV reading

import pandas as pd

def smart_read_csv(path, **kwargs):
    # 1. Detect the encoding (detect_encoding is defined above)
    enc = detect_encoding(path)

    # 2. Try reading with the detected encoding
    try:
        return pd.read_csv(path, encoding=enc, **kwargs)
    except (UnicodeDecodeError, pd.errors.ParserError):
        pass

    # 3. Fall back to common alternative encodings
    for alt_enc in ['gbk', 'gb2312', 'utf-8-sig', 'latin1']:
        try:
            return pd.read_csv(path, encoding=alt_enc, **kwargs)
        except (UnicodeDecodeError, pd.errors.ParserError):
            continue

    raise ValueError(f"could not decode {path} with any of the known encodings")

Smart XLSX reading

def smart_read_xlsx(path):
    """Read an XLSX file, checking for corruption first."""

    # Skip files that fail the validity check
    if not is_valid_xlsx(path):
        print(f"Warning: {path} may be corrupt")
        return None

    wb = openpyxl.load_workbook(path, data_only=True)
    ws = wb.active

    # Collect rows as lists
    data = []
    for row in ws.iter_rows(values_only=True):
        # Skip rows where every cell is empty (note: any() also treats 0 and '' as empty)
        if not any(row):
            continue
        data.append(list(row))
    
    wb.close()
    return data

Usage Examples

Parse any data file

from data_parser import parse_file

# Auto-detect the format and parse
data = parse_file("data.csv")      # returns a DataFrame/list
data = parse_file("data.json")     # returns a dict/list
data = parse_file("data.xlsx")     # returns list[list]
data = parse_file("data.parquet")  # returns a DataFrame
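The source of parse_file is not shown in SKILL.md; one plausible shape for such a dispatcher is sketched below. The function name, extension map, and return types are assumptions based on the examples above, not the toolkit's actual code:

```python
import json
from pathlib import Path

def parse_file_sketch(path):
    """Hypothetical stand-in for the toolkit's parse_file: dispatch on
    the file extension (imports are deferred so each format only pulls
    in its own dependency)."""
    suffix = Path(path).suffix.lower()
    if suffix == ".csv":
        import pandas as pd
        return pd.read_csv(path)
    if suffix == ".json":
        # utf-8-sig transparently strips a BOM if present
        return json.loads(Path(path).read_text(encoding="utf-8-sig"))
    if suffix in (".xlsx", ".xls"):
        import pandas as pd
        return pd.read_excel(path).values.tolist()
    if suffix == ".parquet":
        import pandas as pd
        return pd.read_parquet(path)
    raise ValueError(f"unsupported format: {suffix}")
```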

Batch conversion

from data_parser import convert_folder

# Convert every XLSX file in the folder to CSV
convert_folder(
    input_dir="D:/data/xlsx",
    output_dir="D:/data/csv",
    output_format="csv"
)
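convert_folder's implementation is likewise not shown; a minimal sketch of what a batch converter with this signature might do (the behavior is assumed from the README example, not verified against the source):

```python
from pathlib import Path

import pandas as pd

def convert_folder_sketch(input_dir, output_dir, output_format="csv"):
    """Hypothetical take on convert_folder: convert every .xlsx file
    in input_dir to output_format under output_dir."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    for src in Path(input_dir).glob("*.xlsx"):
        df = pd.read_excel(src)  # requires openpyxl
        target = out / f"{src.stem}.{output_format}"
        if output_format == "csv":
            # utf-8-sig keeps Excel happy with non-ASCII content
            df.to_csv(target, index=False, encoding="utf-8-sig")
        elif output_format == "parquet":
            df.to_parquet(target)  # requires pyarrow
```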

Installing Dependencies

pip install pandas openpyxl chardet pyarrow

Notes

  1. If an XLSX file raises "File is not a zip file", the file is corrupt and must be re-obtained from the source
  2. CSV encoding problems are the most common issue; detect the encoding first
  3. Parquet is more efficient for large files
  4. Read XLSX with data_only=True to get computed values; otherwise you get the formulas
