Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
social-reader
v1.0.0 · Social media content scraping and automation skill. Supports real-time single-post reading, as well as scheduled batch patrol, LLM distillation, and review n...
⭐ 0 · 346 · 0 current · 0 all-time
by AIWareTop@hacksing
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign (medium confidence)
Purpose & Capability
Name/description (social scraping, pipeline, LLM distillation) matches the included Python modules (fetcher, watcher, processor, notifier, run_pipeline). Network calls (fxtwitter, syndication CDN) and LLM calls are expected for this functionality. Declared dependency (requests) and environment variables (LLM_API_KEY, LLM_BASE_URL, LLM_MODEL) line up with the implementation.
Instruction Scope
SKILL.md keeps interactive usage scoped to fetcher.py (stateless) and warns against using the pipeline for interactive calls. The pipeline instructions read and write local JSON files and call an external LLM API. The notifier starts a local HTTP review server and opens a browser; that behavior is within the stated purpose but increases the runtime surface (see Persistence & Privilege). The instructions do not ask to read unrelated system files or unrelated credentials.
Install Mechanism
No automated install spec is provided; SKILL.md only asks to pip install requests. No downloads from unknown hosts or archive extraction are present in the package.
Credentials
Only the LLM-related environment variables are required for pipeline mode. No other credentials or secrets are requested. The declared primary credential (LLM_API_KEY) is necessary and proportionate to calling an external LLM for distillation.
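A minimal sketch of a pre-flight check for these variables, assuming (as the scan report states) that LLM_API_KEY is hard-required for pipeline mode while LLM_BASE_URL and LLM_MODEL configure the endpoint and model; the helper name check_llm_env is hypothetical, not part of the skill:

```python
import os

# Variable names taken from the scan report; only the key is treated as
# hard-required here (an assumption -- the others may have defaults).
REQUIRED = ["LLM_API_KEY"]
OPTIONAL = ["LLM_BASE_URL", "LLM_MODEL"]

def check_llm_env(env=None):
    """Return the list of missing required variables, so a wrapper can
    fail fast before any scraped content is sent to the LLM endpoint."""
    if env is None:
        env = os.environ
    return [name for name in REQUIRED if not env.get(name)]

print(check_llm_env({}))  # ['LLM_API_KEY']
```

Running such a check before invoking the pipeline avoids a half-completed run that has already written pending_tweets.json.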
Persistence & Privilege
The skill writes/updates local JSON files (seen_ids.json, pending_tweets.json, drafts.json, archive.json), which is expected for a pipeline, and its always flag is false. The notifier launches a local HTTP server (port 18923) and opens a browser review page; this is expected but raises operational concerns: depending on how the server is bound, the review endpoint could be reachable beyond the local machine. The code also sets Access-Control-Allow-Origin: * on responses, which enables cross-origin browser access and increases the attack surface if the server is not restricted to localhost.
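The binding concern can be illustrated with the standard library's HTTPServer, which is presumably what the notifier uses. This sketch uses port 0 (an ephemeral port) so it runs anywhere; the skill's report says the real server uses 18923:

```python
from http.server import HTTPServer, BaseHTTPRequestHandler

# Binding to "127.0.0.1" keeps the endpoint loopback-only; binding to ""
# (equivalent to "0.0.0.0") listens on every interface, where the wildcard
# CORS header would let any reachable browser script hit the review endpoints.
local_only = HTTPServer(("127.0.0.1", 0), BaseHTTPRequestHandler)
all_ifaces = HTTPServer(("", 0), BaseHTTPRequestHandler)

print(local_only.server_address[0])  # 127.0.0.1
print(all_ifaces.server_address[0])  # 0.0.0.0

local_only.server_close()
all_ifaces.server_close()
```

When auditing notifier.py, the first constructor argument to HTTPServer (or the address passed to server_bind) is the line to check.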
Assessment
This package appears to do what it says: fetch public social posts, optionally call an LLM to synthesize commentary, and present a local review UI. Before installing or running it:
- Treat pipeline mode as networked: it requires an LLM API key and will send scraped content to the configured LLM endpoint (default: OpenAI). Only use a trusted API key and be aware of any sensitive content you feed to the LLM.
- The notifier starts a local HTTP server and opens a browser. Verify the server binds only to localhost (127.0.0.1) if you do not want it reachable from the network. If the server binds to all interfaces, external actors could call /api/regenerate or /api/review if your machine/network is reachable.
- The skill writes local files (seen_ids.json, pending_tweets.json, drafts.json, archive.json) in the skill directory; run it in an environment/directory where this persistence is acceptable.
- Source and homepage are unknown, so exercise caution. If you need higher assurance, ask the maintainer for the origin, check how the notifier starts the HTTPServer (inspect whether it binds to 'localhost' or ''), and consider running the skill in an isolated container or VM with outbound network access restricted for the process.
- If you want lower risk for interactive usage, follow the SKILL.md guidance to call fetcher.get_tweet() only (stateless) rather than running the full pipeline.

Like a lobster shell, security has layers: review code before you run it.
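The exposure check suggested above can be done from outside the process with a plain TCP probe; the helper below (reachable is a name introduced here, not part of the skill) assumes the report's port 18923:

```python
import socket

def reachable(host, port, timeout=0.5):
    """Return True if a TCP connect to (host, port) succeeds,
    i.e. something is listening and reachable at that address."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# While the notifier's review UI is running, reachable("127.0.0.1", 18923)
# should be True. If reachable("<your-lan-ip>", 18923) is also True, the
# server is bound beyond loopback and exposed to your network.
```

Probing from a second machine on the same network gives the most honest answer, since a loopback-only bind can look identical to a firewalled all-interfaces bind when tested locally.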
latest: vk97fg2n0e82cawzdhvxyxppwy5821ct2
