Skill v0.2.1

ClawScan security

hum · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Benign · Apr 12, 2026, 11:21 AM
Verdict
benign
Confidence
medium
Model
gpt-5-mini
Summary
The skill’s code, instructions, and required data are coherent with a social-media content-writing/automation tool — it asks for platform credentials and local data files that match its stated purpose, but those requests are sensitive and worth reviewing before use.
Guidance
This package is internally consistent with a social-media content writer, but it will access sensitive data and credentials. Before installing or running it:

1. Inspect it, or run it, in an isolated environment (VM/container).
2. Review credential storage: the skill expects X and LinkedIn tokens or JSON credential files under ~/.hum/credentials/ (and may ask you to paste the X session cookies CT0/AUTH_TOKEN). Prefer API tokens over copied session cookies; store files with restrictive permissions (chmod 600).
3. Review content-samples/ and VOICE.md, which the skill will read: it deliberately uses your real posts to imitate your voice.
4. Confirm how Telegram digest sending is configured and what token (if any) it will use.
5. Audit the third-party dependencies in requirements.txt (yt-dlp and web-scraping libraries) before running pip install.

If you are not comfortable providing social credentials or sharing local writing samples, skip installation, or run the skill on a copy of your data with a test account. If you want, provide the specific files or commands you plan to run and I can highlight the exact lines that read or transmit secrets.
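Step 1 of the checklist (inspect before running) can be sketched as a quick grep pass over the skill's checkout. This is a minimal illustration, not the scanner's method: the `./hum-skill` directory and the dummy `feeds.py` file stand in for a real checkout, and the pattern list is illustrative, not exhaustive.

```shell
# Hedged sketch: flag lines in a skill checkout that read credentials or
# make network calls, so each hit can be reviewed by hand before running.
# "./hum-skill" and the dummy file below are stand-ins for a real checkout.
SKILL_DIR="./hum-skill"
mkdir -p "$SKILL_DIR"
printf 'cookie = os.environ["AUTH_TOKEN"]\n' > "$SKILL_DIR/feeds.py"  # dummy file

# The pattern list is illustrative, not exhaustive.
hits="$(grep -rnE 'credentials|AUTH_TOKEN|CT0|requests\.(get|post)' "$SKILL_DIR" || true)"
printf '%s\n' "$hits"
```

Each hit is a file:line pair to read in context; an empty result does not prove the code is safe, only that none of the listed patterns appear.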

Review Dimensions

Purpose & Capability
ok · Name/description (content writer for X/LinkedIn) matches the included code: connectors for X and LinkedIn, feed crawlers (X, YouTube, HN, RSS), publishing, brainstorming, and engagement orchestration. The code reads and writes a local data directory for voice, samples, drafts, and feed data — all expected for this purpose.
Instruction Scope
note · Runtime docs and SKILL.md instruct the agent and operator to read the user's VOICE.md, content-samples/, knowledge/, and feed files. This is necessary for voice-matching, but it means the agent will process real user posts and local notes. The docs also instruct extracting X session cookies (AUTH_TOKEN and CT0) from browser devtools for some feed operations; that is sensitive but consistent with the stated approach of using direct APIs/Bird rather than a formal API key. The skill returns browser-scrape instructions (needs_browser) for profiles — again coherent, but broad in scope.
Install Mechanism
ok · No automated install spec in the registry; the skill is distributed as code files, and SKILL.md asks the operator to run setup.sh and pip install the dependencies. The dependencies are standard Python packages (feedparser, trafilatura, yt-dlp, requests, etc.). No remote arbitrary binary downloads or obscure install URLs were found in the manifest.
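The install audit above amounts to reading requirements.txt before handing it to pip. A minimal sketch, assuming the package names from this review (the skill's real requirements.txt may pin versions or list more):

```shell
# Hedged sketch: read the dependency list before installing it. The file
# written here mirrors the packages named in the review; it is not the
# skill's actual pin list.
cat > requirements.txt <<'EOF'
feedparser
trafilatura
yt-dlp
requests
EOF

deps="$(cat requirements.txt)"
printf '%s\n' "$deps"

# Installation (deliberately not run here) should target an isolated venv
# rather than the system interpreter, e.g.:
#   python3 -m venv .venv && .venv/bin/pip install -r requirements.txt
```

Reviewing the list first catches typosquatted or unexpected packages; the venv keeps anything you do install out of the system site-packages.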
Credentials
note · The skill expects credentials for X and LinkedIn (either env vars or JSON files under a credentials directory) and may request X session cookies (AUTH_TOKEN/CT0) for some feed fetching. Those credentials are proportionate to posting and fetching a user's feeds, but they are sensitive. gemini-extension.json exposes settings for CREDENTIALS_DIR and the X and LinkedIn tokens; SKILL.md and COMMANDS.md reference HUM_DATA_DIR and the credential file locations. No unrelated cloud credentials or excessive environment access were requested.
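The restrictive-permission layout the guidance recommends for those credential files can be sketched as follows. A local demo directory is used here; the skill's real location is ~/.hum/credentials/ (or whatever CREDENTIALS_DIR is set to), and the x.json filename is hypothetical.

```shell
# Hedged sketch of owner-only credential storage. "./credentials-demo"
# stands in for ~/.hum/credentials/; "x.json" is a hypothetical filename.
CRED_DIR="./credentials-demo"
mkdir -p "$CRED_DIR"
chmod 700 "$CRED_DIR"                                          # owner-only directory

printf '{"access_token": "REDACTED"}\n' > "$CRED_DIR/x.json"   # placeholder content
chmod 600 "$CRED_DIR/x.json"                                   # owner read/write only

ls -l "$CRED_DIR"
```

chmod 700 on the directory blocks other local users from even listing the filenames; chmod 600 on each file keeps the token contents owner-only.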
Persistence & Privilege
ok · The skill does not request forced always:true inclusion and does not modify other skills. It reads and writes files in a configurable data directory (HUM_DATA_DIR) and a credentials directory — expected for a local content-authoring tool. It may store tokens/credentials in ~/.hum/credentials/, which is normal but requires careful file-permission handling.
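The "careful file-permission handling" point can be spot-checked after the skill runs by listing anything under its data directory that other users can read. A minimal sketch, with a demo directory standing in for the real HUM_DATA_DIR:

```shell
# Hedged sketch: find files under the skill's data directory that are
# group- or other-readable. "./hum-data-demo" stands in for HUM_DATA_DIR.
DATA_DIR="./hum-data-demo"
mkdir -p "$DATA_DIR"
printf 'draft post\n' > "$DATA_DIR/draft.md"
chmod 644 "$DATA_DIR/draft.md"        # deliberately group/other-readable

# -perm -044 matches files with both the group-read and other-read bits
# set; anything listed here is a candidate for chmod 600.
loose="$(find "$DATA_DIR" -type f -perm -044)"
printf '%s\n' "$loose"
```

Running the same check against the credentials directory is the higher-priority audit, since tokens there grant posting access to your accounts.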