A股数据获取 (A-share Data Fetching) — a specialized data collection tool for the Chinese A-share market
Pass. Audited by ClawScan on May 1, 2026.
Overview
The artifacts describe a coherent public stock-market data collector, with expected local storage, network fetching, manual Python setup, and optional scheduling that users should review before use.
This appears suitable if you want a local A-share data collector. Before installing or running it, use a virtual environment, review the fixed D:\xistock storage path, start with limited fetch commands, back up the database before reset/repair operations, and inspect any cron schedule before enabling automatic updates.
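Backing up the database before a reset/repair operation can be done with Python's built-in sqlite3 online backup API. This is a minimal sketch; the file names are illustrative assumptions, since the skill's actual database name is not documented here:

```python
import sqlite3
from pathlib import Path

def backup_db(src: str, dest: str) -> None:
    """Copy a SQLite database using the online backup API, which is
    safe even if another connection holds the source file open."""
    with sqlite3.connect(src) as conn, sqlite3.connect(dest) as out:
        conn.backup(out)

# Back up before any reset/repair run; "stock.db" is a hypothetical name
backup_db("stock.db", "stock.backup.db")
print(Path("stock.backup.db").exists())  # True if the copy succeeded
```

The backup API copies page-by-page inside SQLite itself, which avoids the corruption risk of copying a live database file at the filesystem level.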
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Running all-stock or parallel modes may consume bandwidth, take time, hit provider rate limits, and update or overwrite local market-data files.
The skill documents broad and parallel fetch commands that can make many external API calls and update local data files/databases. This is consistent with a stock data collector, but users should notice the bulk-operation scope.
python scripts/day.py get all --limit 10
...
# Traditional usage (fetch all active stocks)
...
python scripts/day_parallel.py
Start with limited commands, confirm the output directory, and only run all-stock or parallel modes when you intend a large batch update.
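One way to keep a bulk run within provider rate limits is an explicit symbol cap plus a delay between calls. This is a generic sketch: `fetch_daily` is a hypothetical stand-in for the skill's actual provider call (e.g. via akshare), not its real API:

```python
import time

def fetch_daily(code: str) -> dict:
    # Hypothetical stand-in for a real provider request
    return {"code": code, "rows": 0}

def fetch_batch(codes, limit=10, delay=0.5):
    """Fetch at most `limit` symbols, sleeping between calls to
    stay under provider rate limits."""
    results = []
    for code in codes[:limit]:
        results.append(fetch_daily(code))
        time.sleep(delay)
    return results

# Dry-run on a few sample codes instead of the full market
print(len(fetch_batch(["600519", "000001", "300750"], limit=2, delay=0)))  # 2
```

Capping the batch first and only then widening to all-stock mode makes the bandwidth and file-update footprint of a run predictable.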
Unpinned packages can change over time, and installing unnecessary or wrong packages can affect the user's Python environment.
The setup instructions rely on unpinned PyPI package installs, while the registry metadata declares no required binaries or environment requirements. These dependencies are plausible for the stated purpose but should be installed carefully.
pip install requests
pip install sqlite3
pip install pandas
pip install akshare
Use a virtual environment, pin dependency versions where possible, and note that sqlite3 is normally part of Python rather than a separate package.
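A pinned requirements file keeps installs reproducible inside the virtual environment. The version numbers below are illustrative assumptions, not versions verified against this skill:

```
# requirements.txt -- illustrative pins; adjust to versions you have tested
requests==2.31.0
pandas==2.2.2
akshare==1.14.0
# note: sqlite3 ships with CPython's standard library; do not list it here
```

Installing with `pip install -r requirements.txt` inside a fresh venv keeps the skill's dependencies out of the system Python.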
If scheduled, the skill could continue making API calls and updating local files on a recurring basis.
The file structure advertises a cron configuration, indicating the skill may support scheduled/background data collection. That is purpose-aligned for regular market-data updates, but scheduled persistence should be explicit to the user.
scripts/schedule_config.py # OpenClaw cron job configuration
Review the schedule configuration before enabling it, and disable or limit scheduled jobs if you only want manual data collection.
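For comparison, a conservative schedule in standard crontab syntax might look like the entry below. This is an illustrative sketch only; the skill's actual schedule lives in scripts/schedule_config.py and should be reviewed directly, and the path here is a placeholder:

```
# Run a limited fetch once per weekday evening after market close,
# logging output so unattended runs can be audited (path is illustrative)
30 18 * * 1-5 cd /path/to/skill && python scripts/day.py get all --limit 10 >> fetch.log 2>&1
```

Logging to a file and keeping the `--limit` flag in the scheduled command bounds what an unattended job can do between reviews.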
