Skill · v1.0.3
ClawScan security
Clawfeed Digest · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Mar 4, 2026, 11:33 AM
- Verdict
- benign
- Confidence
- high
- Model
- gpt-5-mini
- Summary
- The skill's code, docs, and runtime instructions are consistent with its stated purpose (fetching ClawFeed digests and writing them into an Obsidian directory); it makes a simple HTTP request and writes markdown files with no unexpected credentials or privileged installs.
- Guidance
- This skill appears to do exactly what it says: fetch public ClawFeed digests and write them as markdown files to an Obsidian vault. Before installing or scheduling it, consider these practical points:
  - Verify the output location: by default it writes to ~/OneDrive/文档/Obsidian Vault/AI新闻. If you don't want results synced to OneDrive/cloud storage, supply --output pointing to a local-only folder.
  - Run it manually first: execute python scripts/fetch_clawfeed.py locally to confirm the content and filenames meet your expectations before adding a cron job.
  - Review and trust the source: the package metadata has no homepage; the docs point to a GitHub repo (adminlove520/clawfeed-digest). If provenance matters, inspect that upstream repo for updates or tampering.
  - Data handling: the script overwrites files that share a generated filename; back up any important notes it might clobber.
  - Environment hygiene: install 'requests' in a virtualenv rather than system-wide if you prefer isolation.

  Overall, there are no code-level signs of credential harvesting, unexpected network endpoints, or privileged operations, so the skill is internally consistent with its stated purpose.
Review Dimensions
- Purpose & Capability
- ok · The name and description claim to fetch ClawFeed digests and save them to Obsidian; the included script fetches from https://clawfeed.kevinhe.io/api/digests and writes markdown files into an Obsidian directory. No unrelated credentials, binaries, or config paths are requested.
- Instruction Scope
- note · SKILL.md tells the agent to pip install requests and run scripts/fetch_clawfeed.py (matching the provided script). The script performs only an unauthenticated GET to the stated API and writes files to a user vault path (default: ~/OneDrive/文档/Obsidian Vault/AI新闻). Note: writing into a OneDrive-synced vault means the results will be uploaded to the user's cloud storage; this is expected but worth awareness.
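The GET-then-write flow described here can be sketched as below. Everything beyond the API URL is an assumption for illustration: the field names ("title", "date", "content") and the helper names are hypothetical, and while the skill itself installs the requests package, this sketch uses only the standard library so it runs without installs.

```python
import json
from pathlib import Path
from urllib.request import urlopen

# The only endpoint the script contacts, per the scan report.
API_URL = "https://clawfeed.kevinhe.io/api/digests"


def digest_filename(digest: dict) -> str:
    """Derive a markdown filename from a digest's date and title (assumed fields)."""
    safe_title = "".join(c if c.isalnum() else "-" for c in digest["title"])
    return f"{digest['date']}-{safe_title}.md"


def digest_to_markdown(digest: dict) -> str:
    """Render one digest as a plain markdown note."""
    return f"# {digest['title']}\n\n{digest['content']}\n"


def fetch_and_write(vault: Path) -> None:
    """Unauthenticated GET, then write one .md file per digest into the vault."""
    with urlopen(API_URL, timeout=30) as resp:  # no credentials sent
        digests = json.load(resp)
    vault.mkdir(parents=True, exist_ok=True)
    for digest in digests:
        # Same generated filename => silent overwrite, as flagged in Guidance.
        path = vault / digest_filename(digest)
        path.write_text(digest_to_markdown(digest), encoding="utf-8")
```

The point of the sketch is the proportionality the reviewers note: one network read, local file writes, nothing else.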
- Install Mechanism
- ok · No install spec is provided; the skill is instruction-only and only recommends installing the widely used 'requests' Python package. No downloads from arbitrary URLs or archive extraction are present in the skill itself.
- Credentials
- ok · The skill requests no environment variables, no credentials, and does not access other config paths. Its behavior (network GET and filesystem writes) is proportionate to the declared purpose.
- Persistence & Privilege
- ok · The skill is not always-enabled and is user-invocable; it does not request elevated privileges or modify other skills' configurations. The provided cron example runs the script on a schedule; scheduling is appropriate for the stated functionality.
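A cron entry of the kind referenced above might look like the following. This is a config fragment, not the skill's actual crontab line: the schedule, interpreter path, script path, and output directory are all placeholders you would adapt after verifying a manual run.

```shell
# Illustrative crontab entry (add via `crontab -e`): fetch digests daily at 08:00.
# All paths below are assumptions; --output is the flag mentioned in Guidance.
0 8 * * * /usr/bin/python3 /path/to/scripts/fetch_clawfeed.py --output "$HOME/Notes/clawfeed" >> "$HOME/.clawfeed-cron.log" 2>&1
```

Redirecting stdout and stderr to a log file keeps failed scheduled runs visible, which matters for a job that otherwise fails silently.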
