Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

social-reader

Social media content scraping and automation skill. Supports real-time single post reading, as well as scheduled batch patrol, LLM distillation, and review n...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 262 · 0 current installs · 0 all-time installs
by AIWareTop@HackSing
Security Scan
VirusTotal: Suspicious (view report)
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description (social scraping, pipeline, LLM distillation) matches the included Python modules (fetcher, watcher, processor, notifier, run_pipeline). Network calls (fxtwitter, syndication CDN) and LLM calls are expected for this functionality. Declared dependency (requests) and environment variables (LLM_API_KEY, LLM_BASE_URL, LLM_MODEL) line up with the implementation.
Instruction Scope
SKILL.md keeps interactive usage scoped to fetcher.py (stateless) and warns about using the pipeline for interactive calls. Pipeline instructions cause reading/writing of local JSON files and will call an external LLM API. The notifier starts a local HTTP review server and opens a browser; that behavior is within the stated purpose but increases runtime surface (see persistence_privilege). The instructions do not ask to read unrelated system files or unrelated credentials.
Install Mechanism
No automated install spec is provided; SKILL.md only asks to pip install requests. No downloads from unknown hosts or archive extraction are present in the package.
Credentials
Only the LLM-related environment variables are required for pipeline mode. No other credentials or secrets are requested. The declared primary credential (LLM_API_KEY) is necessary and proportionate to calling an external LLM for distillation.
Persistence & Privilege
The skill writes/updates local JSON files (seen_ids.json, pending_tweets.json, drafts.json, archive.json), which is expected for a pipeline, and its `always` flag is false. The notifier launches a local HTTP server (port 18923) and opens a browser review page. This is within the stated purpose but raises operational concerns: depending on how the server is bound, the review endpoint could be reachable beyond the local machine. The code also sets Access-Control-Allow-Origin: * on responses, enabling cross-origin browser access, which widens the attack surface if the server is not restricted to localhost.
Assessment
This package appears to do what it says: fetch public social posts, optionally call an LLM to synthesize commentary, and present a local review UI. Before installing or running it:

  • Treat pipeline mode as networked: it requires an LLM API key and sends scraped content to the configured LLM endpoint (default: OpenAI). Use only a trusted API key and be aware of any sensitive content you feed to the LLM.
  • The notifier starts a local HTTP server and opens a browser. Verify the server binds only to localhost (127.0.0.1) if you do not want it reachable from the network; if it binds to all interfaces, external actors could call /api/regenerate or /api/review whenever your machine is reachable.
  • The skill writes local files (seen_ids.json, pending_tweets.json, drafts.json, archive.json) in the skill directory; run it in an environment and directory where this persistence is acceptable.
  • Source and homepage are unknown, so exercise caution. For higher assurance, ask the maintainer for the origin, inspect how notifier.py starts the HTTPServer (whether it binds to 'localhost' or ''), and consider running the process in an isolated container or VM with outbound network access restricted.
  • For lower-risk interactive usage, follow the SKILL.md guidance and call fetcher.get_tweet() only (stateless) rather than running the full pipeline.
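The binding concern flagged in the scan can be checked concretely. Below is a minimal sketch of the pattern to look for when inspecting notifier.py; the handler class, its behavior, and the CORS header are assumptions reconstructed from the scan notes, not the skill's actual code:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for the notifier's review handler.
class ReviewHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Wildcard CORS, as flagged in the scan: any origin may read responses.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(b"ok")

# Binding to 127.0.0.1 keeps the review server local-only;
# binding to "" or "0.0.0.0" would expose port 18923 on every interface.
server = HTTPServer(("127.0.0.1", 18923), ReviewHandler)
server.server_close()  # this sketch only demonstrates the bind address
```

If the skill's HTTPServer call passes '' (or '0.0.0.0') as the host, the wildcard CORS header compounds the exposure, since any web page could then script requests against the review endpoints.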

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
latest: vk97fg2n0e82cawzdhvxyxppwy5821ct2

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Social Reader Skill

This skill provides a social media content scraping and monitoring workflow. It offers two usage modes:

  • Interactive Mode: Agent fetches a single post in real-time for reading, discussion, or reply generation within a conversation.
  • Pipeline Mode: Background batch patrol of sources, with LLM distillation and review notifications.

Dependencies

pip install requests

Configuration Files

| File | Purpose |
| --- | --- |
| prompt.txt | LLM system prompt for the Processor node |
| sources.json | List of monitored accounts and fetch intervals (pipeline mode) |
| input_urls.txt | Manually entered post URLs (one per line, # for comments) |
| seen_ids.json | Deduplication cache for seen post IDs (pipeline mode only) |
| pending_tweets.json | Queue of unprocessed posts from the Watcher |
| drafts.json | LLM-distilled drafts from the Processor |
| archive.json | Archived history records |

Environment Variables (required only for Pipeline Mode Processor)

| Variable | Description | Default |
| --- | --- | --- |
| LLM_API_KEY | LLM API key (required) | None |
| LLM_BASE_URL | API endpoint | https://api.openai.com/v1 |
| LLM_MODEL | Model name | gpt-4o-mini |
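As a sketch of how the Processor presumably consumes these variables (an assumption based on the table above, not taken from processor.py):

```python
import os

# Defaults follow the documented table; LLM_API_KEY has no default and is
# required only when running the pipeline's Processor node.
llm_api_key = os.environ.get("LLM_API_KEY")  # None if unset
llm_base_url = os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1")
llm_model = os.environ.get("LLM_MODEL", "gpt-4o-mini")
```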

Mode 1: Agent Interactive Call (Recommended)

When a user sends a social media post link and asks you to "read and discuss" or "generate a quality reply", call fetcher.py directly — do NOT use run_pipeline.py.

run_pipeline.py triggers deduplication cache, fixed LLM distillation, and browser popups, which are unsuitable for interactive scenarios.

Usage Example

import sys

# Make the skill directory importable; adjust the path to your install location.
skill_dir = r"d:\AIWareTop\Agent\openclaw-skills\social-reader"
if skill_dir not in sys.path:
    sys.path.append(skill_dir)

from fetcher import get_tweet

result = get_tweet("https://x.com/user/status/123456")

if result.get("success"):
    content = result["content"]
    # Now you can discuss the content with the user or generate a reply

get_tweet() Return Structure

{
  "source": "fxtwitter",
  "success": true,
  "type": "tweet",
  "content": {
    "text": "Post body text",
    "author": "Display name",
    "username": "Username handle",
    "created_at": "Publish time",
    "likes": 123,
    "retweets": 45,
    "views": 6789,
    "replies": 10,
    "media": ["image_url_1", "image_url_2"]
  }
}

When type is "article" (long-form post), content additionally contains:

  • title: Article title
  • preview: Preview text
  • full_text: Full article body (Markdown format)
  • cover_image: Cover image URL

This call is completely stateless — it writes no cache files and triggers no notification services.
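A small helper for handling both return types might look like this; the function name is ours, and the field names follow the structures documented above:

```python
def render(result):
    """Format a get_tweet() result for display; returns None on failure."""
    if not result.get("success"):
        return None
    content = result["content"]
    if result.get("type") == "article":
        # Long-form posts carry title/preview/full_text/cover_image.
        return f'{content["title"]}\n{content["preview"]}'
    # Regular tweets carry text plus author/engagement fields.
    return f'@{content["username"]}: {content["text"]}'
```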


Mode 2: Background Pipeline Batch Processing

Use run_pipeline.py to chain Watcher → Processor → Action nodes. Suitable for scheduled tasks or batch processing.

Three Core Nodes

  1. Watcher (watcher.py)

    • Reads input_urls.txt or sources.json, deduplicates via seen_ids.json, writes new posts to pending_tweets.json.
  2. Processor (processor.py)

    • Reads pending_tweets.json, calls LLM to generate commentary, outputs to drafts.json.
    • Requires LLM_API_KEY environment variable.
  3. Action (notifier.py)

    • Starts a local HTTP review server (port 18923), opens a browser review page with approve/reject/rewrite/archive controls.
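The Watcher's deduplication step can be sketched as follows, assuming seen_ids.json holds a flat JSON list of post IDs (watcher.py's real cache format may differ):

```python
import json
from pathlib import Path

def filter_new(post_ids, cache_path="seen_ids.json"):
    """Return only unseen IDs and record them in the dedup cache."""
    path = Path(cache_path)
    seen = set(json.loads(path.read_text())) if path.exists() else set()
    fresh = [pid for pid in post_ids if pid not in seen]
    path.write_text(json.dumps(sorted(seen | set(fresh))))
    return fresh
```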

CLI Examples

# Full pipeline
python run_pipeline.py

# Specific URL
python run_pipeline.py https://x.com/elonmusk/status/123456

# Single node execution
python run_pipeline.py --watch-only
python run_pipeline.py --process-only
python run_pipeline.py --notify-only

Files

13 total
