A-Share Site Crawl
Pass. Audited by VirusTotal on May 11, 2026.
Overview
Type: OpenClaw Skill
Name: a-share-site-crawl
Version: 1.0.0

The skill bundle provides a structured, professional framework for crawling and normalizing Chinese A-share market data from five legitimate financial platforms (Eastmoney, CLS, CNInfo, Jiuyangongshe, and Xueqiu). It includes detailed instructions for handling anti-bot measures, data-quality risks, and multi-tier source verification, with no evidence of malicious intent, data exfiltration, or unauthorized execution. The instructions in SKILL.md and the reference files (entrypoints.md, risks.md) are strictly focused on the stated purpose: financial news aggregation and normalization.
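The normalization the overview describes could be sketched as a mapping from source-specific raw records into one common schema. The `NewsItem` fields and the raw key names below are illustrative assumptions, not the skill's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical common schema for a normalized A-share news item.
@dataclass
class NewsItem:
    source: str        # e.g. "eastmoney", "cls", "cninfo", "jiuyangongshe", "xueqiu"
    ticker: str        # A-share code such as "600519"
    title: str
    published_at: str  # ISO-8601 UTC timestamp

def normalize(source: str, raw: dict) -> NewsItem:
    """Map one raw record from a source-specific shape into the common schema.
    The per-source key names ("ts", "code", "title") are assumptions."""
    ts = datetime.fromtimestamp(raw["ts"], tz=timezone.utc)
    return NewsItem(
        source=source,
        ticker=str(raw["code"]).zfill(6),  # A-share codes are six digits
        title=raw["title"].strip(),
        published_at=ts.isoformat(),
    )

item = normalize("eastmoney", {"ts": 1715400000, "code": 600519, "title": " 贵州茅台公告 "})
```

Each platform would get its own key mapping; only the output shape is shared.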
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If used with a real logged-in profile or cookies, the agent may see account-specific site content and operate through the user's session.
The skill may request access through a user's logged-in browser session or cookies, but the metadata declares no credentials, and the artifacts do not define strict handling, isolation, or read-only boundaries.
Ask for stronger access only when the user explicitly wants better extraction from a restricted site, especially Xueqiu. Examples: an attached Chrome relay tab; a logged-in browser profile; cookies or an authenticated environment.
Use a separate, low-privilege browser profile or temporary session if authenticated access is needed; avoid sharing raw cookies; confirm the task is read-only and scoped to the named sites.
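One way to follow this guidance is to build the browser launch command around a throwaway profile directory, so the crawl never touches the user's main logged-in profile. The binary name and Chromium-style flags below are illustrative assumptions, not part of the skill.

```python
import tempfile

def isolated_launch_cmd(browser_bin: str = "chromium") -> list[str]:
    """Construct a launch command that uses a fresh, temporary
    user-data directory as a low-privilege, disposable profile."""
    profile_dir = tempfile.mkdtemp(prefix="crawl-profile-")
    return [
        browser_bin,
        f"--user-data-dir={profile_dir}",  # isolated profile, no shared cookies
        "--no-first-run",
    ]

cmd = isolated_launch_cmd()
```

Deleting the profile directory after the session ends discards any cookies or state the crawl accumulated.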
The skill may generate browsing or fetch traffic to the listed market-information sites and reveal the user's query interests to those sites.
Browser and fetch access to external financial sites is central to the skill's stated purpose, but users should understand that crawl requests are sent to third-party sites and may encounter access controls.
Prefer `browser` for page truth and `web_fetch` for cheap probing.
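A minimal sketch of that tiering, assuming a plain HTTP probe stands in for `web_fetch`: probe cheaply first, and escalate to the heavier `browser` tool only when the probe looks blocked or empty. The status codes, length threshold, and challenge marker are assumptions for illustration.

```python
def needs_browser(status: int, body: str) -> bool:
    """Decide whether to escalate from a cheap fetch to a full browser.
    Thresholds and markers here are illustrative assumptions."""
    if status in (403, 429):      # blocked or rate-limited
        return True
    if len(body) < 500:           # suspiciously thin page, likely JS-rendered
        return True
    if "verify" in body.lower():  # crude anti-bot challenge marker
        return True
    return False
```

A crawl loop would call the cheap probe first and invoke the browser only when `needs_browser` returns `True`.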
Keep crawls scoped to the requested sites and pages, respect login walls and anti-bot restrictions, and avoid bulk or scripted collection unless the user explicitly approves it.
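The scoping rule above can be enforced with a domain allowlist plus a simple delay between requests. The host list and the two-second interval are assumptions for illustration, not values from the skill.

```python
import time
from urllib.parse import urlparse

ALLOWED_HOSTS = {  # only the named platforms; anything else is refused
    "www.eastmoney.com", "www.cls.cn", "www.cninfo.com.cn",
    "www.jiuyangongshe.com", "xueqiu.com",
}

def in_scope(url: str) -> bool:
    """Refuse any URL whose host is not on the allowlist."""
    return urlparse(url).hostname in ALLOWED_HOSTS

class RateLimiter:
    """Sleep so consecutive requests are at least `min_interval` seconds apart."""
    def __init__(self, min_interval: float = 2.0):
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self) -> None:
        delta = time.monotonic() - self._last
        if delta < self.min_interval:
            time.sleep(self.min_interval - delta)
        self._last = time.monotonic()
```

Checking `in_scope` before every fetch and calling `wait` between fetches keeps the crawl polite and bounded to the named sites.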
If the user later wires this into a scheduler, it could repeatedly collect and retain public market records.
The skill discusses recurring cron-style workflows, but the artifacts do not include code that installs persistence or runs autonomously.
The skill's stated goals include building repeatable market-news collection, normalization, and cron workflows.
Only enable recurring jobs deliberately, with clear rate limits, retention rules, site scope, and a way to stop the job.
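A deliberately enabled recurring job along these lines would bound its own run count, prune records beyond a retention limit, and check a stop flag each cycle. The limits and the stop-flag file name below are assumptions for illustration.

```python
import os
import time

def run_recurring(collect, max_runs: int = 10, interval_s: float = 0.0,
                  retention: int = 100, stop_flag: str = "STOP"):
    """Run `collect()` repeatedly with explicit limits:
    - max_runs caps total iterations
    - retention caps how many records are kept
    - a stop-flag file halts the loop between runs
    """
    records: list = []
    for _ in range(max_runs):
        if os.path.exists(stop_flag):   # user-visible way to stop the job
            break
        records.extend(collect())
        records = records[-retention:]  # drop the oldest beyond retention
        time.sleep(interval_s)
    return records

out = run_recurring(lambda: [1], max_runs=3, retention=2)
```

Touching the stop-flag file (`touch STOP` in this sketch) halts the job at the next cycle, satisfying the "way to stop the job" requirement.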
