Skill v1.0.0
ClawScan security
ValueSider Superinvestor Data · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Mar 16, 2026, 1:18 PM
- Verdict: Benign
- Confidence: High
- Model: gpt-5-mini
- Summary: The skill's code, instructions, and requirements are coherent with its stated purpose (scraping and parsing ValueSider 13F pages); it requests no credentials or unusual installs and contains only straightforward parsing and optional HTTP fetch logic.
- Guidance: What to consider before installing:
  - This skill scrapes valuesider.com and parses the returned page text. It requires no API keys or secrets.
  - If invoked, the agent will make outbound HTTP requests to valuesider.com, either via the platform's web_fetch or by running the included fetch script. If you have network or privacy policies, verify that such scraping is acceptable.
  - The skill provides a local test mode (scripts/run_test.sh with sample files) so you can validate parsing without hitting the network; try that first.
  - The included scripts are small and readable, with no obfuscated code. If you require extra assurance, review scripts/fetch_valuesider.py and scripts/parse_fetched_content.py; they only request and parse HTML.
  - Consider ValueSider's terms of service and rate limits before heavy use; the parser is brittle by the nature of scraping and may mis-parse if the site's HTML changes.
  - If you do not want the agent to fetch pages autonomously, restrict the skill (disable autonomous invocation in your agent settings) or require manual fetch-and-paste into the parser.
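The local test mode means the parsing step can be exercised entirely offline. As a rough illustration of what a table-scraping parser does, here is a minimal sketch using Python's stdlib html.parser; the skill's actual scripts/parse_fetched_content.py uses beautifulsoup4 and targets a page structure not shown in this review, so the class name and sample HTML below are purely hypothetical:

```python
from html.parser import HTMLParser

class HoldingsParser(HTMLParser):
    """Hypothetical stand-in for the skill's parser: collects the
    text of every <td> cell encountered in the input HTML."""

    def __init__(self):
        super().__init__()
        self._in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_cell = False

    def handle_data(self, data):
        # Keep only non-empty text found inside table cells.
        if self._in_cell and data.strip():
            self.cells.append(data.strip())

# Sample stands in for saved page text from the skill's test fixtures.
sample = "<table><tr><td>AAPL</td><td>5.2%</td></tr></table>"
parser = HoldingsParser()
parser.feed(sample)
print(parser.cells)  # ['AAPL', '5.2%']
```

Running something like this against the bundled sample files is exactly the "validate without hitting the network" step the guidance recommends.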
Review Dimensions
- Purpose & Capability
- ok: The name/description (fetch ValueSider 13F holdings and activity) matches the included scripts and SKILL.md. The repo contains a parser and an optional fetcher; neither asks for unrelated credentials, unusual binaries, or out-of-scope platform access.
- Instruction Scope
- note: SKILL.md instructs the agent to fetch two ValueSider pages (portfolio and portfolio-activity) via web_fetch or to run the included fetch script, then parse the returned text with the provided parser. The instructions are focused on the task and only reference temporary files or stdin/stdout. Note: it explicitly advises using web_fetch to avoid 403 responses and expects full page text as input; this is appropriate for scraping, but means the agent will repeatedly retrieve external pages when invoked.
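The fetch path described above amounts to a plain HTTP GET carrying a browser-like User-Agent. A minimal sketch under stated assumptions: the review says only that the script "sets a benign User-Agent header", so the URL, header value, and function name here are illustrative, and the real scripts/fetch_valuesider.py uses the requests library rather than urllib:

```python
import urllib.request

# Illustrative value; the review does not disclose the exact header
# string the skill sends.
USER_AGENT = "Mozilla/5.0 (compatible; valuesider-skill-sketch)"

def build_request(url: str) -> urllib.request.Request:
    """Build a GET request that carries the custom User-Agent header."""
    return urllib.request.Request(url, headers={"User-Agent": USER_AGENT})

req = build_request("https://valuesider.com/")
# urllib stores header keys capitalized, hence "User-agent" here.
print(req.get_header("User-agent"))
```

Note that actually sending the request (urllib.request.urlopen(req)) is the outbound traffic the guidance warns about; building the request, as above, stays offline.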
- Install Mechanism
- note: There is no platform install spec (the skill is instruction-only), which is low-risk. The package includes Python scripts and a requirements.txt (requests, beautifulsoup4), so dependencies must be installed manually or already be present in the runtime. There are no downloads from untrusted URLs or archives, and no code obfuscation.
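Per the note above, the only third-party dependencies are two well-known packages; a requirements.txt matching that description would simply read:

```
requests
beautifulsoup4
```

These can be provided in the runtime ahead of time, for example with `python -m pip install -r requirements.txt`, since the skill itself performs no installation.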
- Credentials
- ok: No environment variables, credentials, or config paths are required. The code sets a benign User-Agent header for HTTP requests and does not attempt to read secrets or unrelated files. The requested access is proportional to scraping and parsing web pages.
- Persistence & Privilege
- ok: The skill's "always" flag is false, and the skill does not modify other skills or system-wide settings. It does not persist credentials or change agent configuration. Autonomous invocation is allowed by default (the platform default) but is not combined with other privileged behavior.
