Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results below before using it.

Reddit Pain Workflow

v1.0.0

Daily automated pipeline: Reddit scan → classify → generate report → push to GitHub → metrics tracking. Cron-friendly with short timeouts. Drives star growth...

by Maya Tao (@minirr890112-byte)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for minirr890112-byte/reddit-pain-workflow.

Previewing Install & Setup.
Prompt PreviewInstall & Setup
Install the skill "Reddit Pain Workflow" (minirr890112-byte/reddit-pain-workflow) from ClawHub.
Skill page: https://clawhub.ai/minirr890112-byte/reddit-pain-workflow
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install reddit-pain-workflow

ClawHub CLI


npx clawhub@latest install reddit-pain-workflow
Security Scan
VirusTotal: Suspicious (View report →)
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The stated purpose (scan Reddit, classify, generate a report, push to GitHub, track metrics) matches the SKILL.md content. However, the SKILL.md expects external CLIs and tokens (a GitHub token, a Feishu/lark-cli app token, 'hermes' cron tooling, and local scripts) that are not declared in the registry metadata or requirements, a mismatch between claimed and actual dependencies.
Instruction Scope
Runtime instructions tell the agent to: call Reddit JSON APIs (OK), walk comment trees, generate DAILY-REPORT.md, commit and push to GitHub using an environment token, create cron jobs via a 'hermes' CLI, and call Feishu's API via lark-cli. The SKILL.md references local paths (~/HermesMade/scripts), environment variables (GITHUB_TOKEN), and external CLIs, but the skill package supplies none of these scripts or binaries and declares no required env vars, so the agent would be expected to access local files and network endpoints not described in the registry metadata.
Install Mechanism
This is an instruction-only skill (no install spec) which is lower-risk in that nothing is written by the skill itself. However, it instructs use of multiple external CLIs ('hermes', 'lark-cli') and local Python scripts that are not provided or documented as required binaries; the absence of declared tooling is a discrepancy to be resolved by the user before running.
Credentials
Registry metadata declares no required env vars, yet the SKILL.md explicitly uses os.environ['GITHUB_TOKEN'] for commits and implies Feishu/lark-cli credentials and possibly 'hermes' credentials. A GitHub token with repo write and traffic/read scopes is high-privilege; those credentials are not requested or scoped in the registry, which is disproportionate and unexpected.
Persistence & Privilege
The skill is not flagged 'always:true' and does not modify other skills. It does instruct creating persistent cron jobs (via 'hermes cronjob create') which would schedule autonomous network activity and commits. This is expected for a cron-driven pipeline but increases operational risk and requires careful credential handling; the skill itself does not request persistent platform privileges in the registry.
What to consider before installing
This skill looks like a plausible Reddit→GitHub pipeline, but the runtime instructions rely on tools and credentials that are not declared in the registry. Before installing or running it:

  1. Don't provide broad tokens blindly: the SKILL.md uses GITHUB_TOKEN for commits and traffic API access. Create a least-privilege token limited to the specific repo and scopes required (repo:contents, repo:status, and traffic read if needed).
  2. Confirm or install the external CLIs it expects ('hermes', 'lark-cli') and make sure you understand their auth needs.
  3. Inspect the local scripts it references (~/HermesMade/scripts/daily-pipeline, scripts/github-metrics): the skill package doesn't include them, so you must supply and audit them yourself.
  4. Test on a throwaway repository first to validate behavior and cron setup.
  5. Be cautious about the GitHub search optimization guidance (topics/description): some suggested keywords (e.g., "censorship-bypass") may have policy or reputation implications.

The mismatch between declared requirements and actual instructions is a red flag; resolve these gaps and narrow credentials before proceeding.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9729dvvv3mdfekdpt8xe6sbrd85kk09
45 downloads
0 stars
1 version
Updated 2d ago
v1.0.0
MIT-0

Reddit Pain → GitHub Report Daily Pipeline

A fully automated cron-driven pipeline that scans Reddit for pain points, classifies them against existing tools, generates a daily report pushed to GitHub, and tracks repo metrics for growth.

When to use

  • You're building a data-driven open-source project that needs daily content
  • You want automated pain-point discovery with GitHub as the delivery surface
  • You need a growth engine: daily reports → discoverable on GitHub → drives stars

Architecture

Cron (8 AM daily)
  ↓
Reddit scan (5 subreddits, native .json API, 8s timeout)
  ↓
Pain classification (8 categories, matched against existing tools)
  ↓
DAILY-REPORT.md generation (markdown with quotes, links, tool candidates)
  ↓
Git commit + push to repo
  ↓
Metrics snapshot (stars, views, clones, search ranking)

Key Implementation Details

Reddit Scan

  • Use https://www.reddit.com/r/{sub}/hot.json (no auth needed)
  • 5 subreddits max for cron speed: ChatGPT, ClaudeAI, LocalLLaMA, programming, webdev
  • Timeout: 8s per request, 0.5s delay between requests
  • Walk comment tree to depth 2 for replies
  • Pushshift.io is dead (returns 403), and PRAW requires a client_id; stick to the public endpoints (sketch below)
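
A minimal sketch of the scan step using only the Python standard library. The subreddit list, timeouts, and delay follow the notes above; the limit parameter and User-Agent string are assumptions.

import json, time, urllib.request

SUBREDDITS = ["ChatGPT", "ClaudeAI", "LocalLLaMA", "programming", "webdev"]
HEADERS = {"User-Agent": "reddit-pain-workflow/1.0"}  # Reddit rejects default UAs

def fetch_json(url):
    req = urllib.request.Request(url, headers=HEADERS)
    with urllib.request.urlopen(req, timeout=8) as resp:  # 8s per request
        return json.loads(resp.read())

def walk_comments(children, depth=0, max_depth=2):
    # Yield comment bodies down to depth 2; "more" stubs have no body and are skipped
    for child in children:
        data = child.get("data", {})
        if data.get("body"):
            yield data["body"]
        replies = data.get("replies")
        if depth < max_depth and isinstance(replies, dict):  # empty string when no replies
            yield from walk_comments(replies["data"]["children"], depth + 1, max_depth)

posts = []
for sub in SUBREDDITS:
    listing = fetch_json(f"https://www.reddit.com/r/{sub}/hot.json?limit=25")
    posts.extend(p["data"] for p in listing["data"]["children"])
    time.sleep(0.5)  # 0.5s delay between requests

# A post's .json endpoint returns [post_listing, comment_tree]
thread = fetch_json(f"https://www.reddit.com{posts[0]['permalink'].rstrip('/')}.json")
comments = list(walk_comments(thread[1]["data"]["children"]))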

Pain Classification

Categories match our tool coverage:

Category                    Existing Tool
AI Censorship / Safety      prompt-inspector
AI Model Degradation        model-watch
AI API Pricing / Cost       api-cost
GitHub / CI-CD Issues       none yet
AI Code Quality             none yet
Local LLM / Deployment      none yet
Supply Chain Security       none yet
AI Detection / Deepfake     none yet

Threshold: ≥3 signals in a category with no existing tool → flagged as "New Tool Candidate" (sketch below)
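
Only the threshold rule above comes from this document; in the sketch below, the per-category keyword lists are invented for illustration.

from collections import Counter

CATEGORY_KEYWORDS = {  # illustrative keywords, not part of the skill
    "AI Censorship / Safety": ["refused", "censored", "filtered"],
    "AI Model Degradation": ["got worse", "dumber", "nerfed"],
    "AI API Pricing / Cost": ["expensive", "pricing", "cost"],
    "GitHub / CI-CD Issues": ["actions failing", "ci broken"],
}
EXISTING_TOOLS = {
    "AI Censorship / Safety": "prompt-inspector",
    "AI Model Degradation": "model-watch",
    "AI API Pricing / Cost": "api-cost",
}

def classify(texts):
    signals = Counter()
    for text in texts:
        lowered = text.lower()
        for category, keywords in CATEGORY_KEYWORDS.items():
            if any(kw in lowered for kw in keywords):
                signals[category] += 1
    # >=3 signals in a category with no existing tool -> "New Tool Candidate"
    candidates = [c for c, n in signals.items() if n >= 3 and c not in EXISTING_TOOLS]
    return signals, candidates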

GitHub Report Generation

  • Output: DAILY-REPORT.md in repo root
  • Format: summary → per-category top 3 quotes with permalinks → tool candidates → growth tip → metrics
  • Auto-committed with a timestamp and pushed to main (assembly sketch below)
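
A minimal sketch of assembling the report in that order (growth-tip and metrics sections omitted); the quotes_by_category shape, mapping each category to (quote, permalink) pairs, is an assumption.

from datetime import date

def render_report(signals, quotes_by_category, candidates):
    lines = [f"# Daily Pain Report {date.today().isoformat()}", ""]
    for category, count in sorted(signals.items(), key=lambda kv: -kv[1]):
        lines.append(f"## {category} ({count} signals)")
        for quote, permalink in quotes_by_category.get(category, [])[:3]:  # top 3
            lines.append(f"> {quote}")
            lines.append(f"> https://reddit.com{permalink}")
    if candidates:
        lines.append("## New Tool Candidates")
        lines.extend(f"- {c}" for c in candidates)
    return "\n".join(lines) + "\n"

# Written to the repo root, then committed (see the urllib fallback below):
# open("DAILY-REPORT.md", "w").write(render_report(signals, quotes, candidates))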

GitHub Search Optimization (Critical for Star Growth)

GitHub search indexes the repo description, the topics (20 max), and, more lightly, the README.

Recipe that worked (verified 2026-04):

  1. Description: keyword-dense, comma-separated: "AI CLI tools: prompt censorship checker & bypass, model quality watchdog & degradation monitor, API cost comparison for OpenAI Claude DeepSeek Gemini. Built from real Reddit user complaints."
  2. Topics (19): ai, python, cli, api, llm, openai, claude, deepseek, devtools, reddit, prompt-engineering, cost-optimization, benchmark, censorship, censorship-bypass, model-monitoring, model-degradation, cost-comparison, llm-pricing
  3. Result: repo ranks #1 for searches like "model degradation monitor cli", "prompt censorship bypass cli", "llm cost comparison"
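
Topics can also be set programmatically via the GitHub REST API (PUT /repos/{owner}/{repo}/topics). A sketch with a placeholder repo slug and a trimmed topic list:

import json, os, urllib.request

topics = ["ai", "python", "cli", "llm", "reddit"]  # keep within the 20-topic limit
req = urllib.request.Request(
    "https://api.github.com/repos/owner/repo/topics",
    data=json.dumps({"names": topics}).encode("utf-8"),
    method="PUT",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
        "User-Agent": "reddit-pain-workflow",  # required, or GitHub returns 403
    },
)
with urllib.request.urlopen(req, timeout=10) as resp:
    print(json.loads(resp.read())["names"])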

Metrics Tracking

A separate script, scripts/github-metrics, records (sketch after this list):

  • Stars, forks, watchers
  • Views (from traffic API)
  • Clones
  • Search ranking for 7 target keywords
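
The skill package doesn't include this script, so the sketch below only shows what a snapshot could look like. The routes are real GitHub REST endpoints (the traffic routes require push access); the repo slug is a placeholder, and the search-ranking checks are omitted.

import json, os, urllib.request

def gh_get(path):
    req = urllib.request.Request(
        f"https://api.github.com{path}",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
            "User-Agent": "reddit-pain-workflow",
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

repo = gh_get("/repos/owner/repo")
views = gh_get("/repos/owner/repo/traffic/views")    # last 14 days
clones = gh_get("/repos/owner/repo/traffic/clones")  # last 14 days
print(json.dumps({
    "stars": repo["stargazers_count"],
    "forks": repo["forks_count"],
    "watchers": repo["subscribers_count"],
    "views_14d": views["count"],
    "clones_14d": clones["count"],
}, indent=2))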

Cron Job Setup

# Daily pipeline (8 AM)
hermes cronjob create --name daily-reddit-pipeline --schedule "0 8 * * *" \
  --prompt "Run python3 ~/HermesMade/scripts/daily-pipeline run. Then present a 3-line summary."

# Daily metrics (9 AM)
hermes cronjob create --name github-metrics-daily --schedule "0 9 * * *" \
  --prompt "Run python3 ~/HermesMade/scripts/github-metrics snapshot then report."

Feishu Bitable Setup (one-time)

# Create app
lark-cli api POST /open-apis/bitable/v1/apps --data '{"name":"Pain Points"}'

# Create table with fields
lark-cli api POST /open-apis/bitable/v1/apps/{app_token}/tables --data '{
  "table": {
    "name": "痛点清单",
    "fields": [
      {"field_name": "序号", "type": 2},
      {"field_name": "分类", "type": 3},
      {"field_name": "痛点名称", "type": 1},
      {"field_name": "频次指数", "type": 3},
      {"field_name": "用户原声", "type": 1},
      {"field_name": "Hermes方案", "type": 1},
      {"field_name": "状态", "type": 3}
    ]
  }
}'
# Field names (Chinese): 痛点清单 = "Pain point list"; 序号 = "No."; 分类 = "Category";
# 痛点名称 = "Pain point name"; 频次指数 = "Frequency index"; 用户原声 = "User quote";
# Hermes方案 = "Hermes solution"; 状态 = "Status". Field types: 1 = text, 2 = number, 3 = single select.

# Batch insert (records.json shape sketched below)
lark-cli api POST "/open-apis/bitable/v1/apps/{token}/tables/{table}/records/batch_create" \
  --data "$(cat records.json)"

GitHub API via urllib (Fallback — No gh CLI Required)

When the github skill/tools aren't available but a GitHub token is, use Python stdlib urllib for file commits:

import json, base64, os, urllib.error, urllib.request

token = os.environ["GITHUB_TOKEN"]
repo = "owner/repo"
file_path = "path/in/repo.sh"

with open("/tmp/file.sh", "rb") as f:
    content = f.read()
encoded = base64.b64encode(content).decode()

# Step 1: Check if file exists (get SHA for update)
get_url = f"https://api.github.com/repos/{repo}/contents/{file_path}"
get_req = urllib.request.Request(get_url, headers={
    "Authorization": f"Bearer {token}",
    "Accept": "application/vnd.github+json",
    "User-Agent": "hermes-agent"
})

sha = None
try:
    with urllib.request.urlopen(get_req, timeout=10) as resp:
        sha = json.loads(resp.read()).get("sha")
except urllib.error.HTTPError as e:
    if e.code == 404:
        pass  # File doesn't exist, will create
    else:
        raise

# Step 2: PUT create or update
put_url = f"https://api.github.com/repos/{repo}/contents/{file_path}"
payload = {
    "message": "feat: auto-generated report [HERMES-N]",
    "content": encoded,
    "branch": "main"
}
if sha:
    payload["sha"] = sha

put_req = urllib.request.Request(put_url, 
    data=json.dumps(payload).encode("utf-8"),
    method="PUT",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
        "User-Agent": "hermes-agent"
    })

with urllib.request.urlopen(put_req, timeout=15) as resp:
    result = json.loads(resp.read())
    print(f"Committed: {result['commit']['sha'][:7]}")

Pitfalls:

  • User-Agent header is required by GitHub API, otherwise 403
  • Accept: application/vnd.github+json needed for newer API endpoints
  • For binary files: base64 encode the raw bytes (no text decode step)
  • For first commit on a new repo: file won't exist → 404 → omit sha

Pip Package Pattern (for individual tools)

Each tool is a standalone pip-installable package:

tool-name/
├── pyproject.toml       # build-backend = "setuptools.build_meta"
├── README.md
└── tool_name/
    ├── __init__.py
    └── cli.py

Install: pip install git+https://github.com/{user}/{repo}.git#subdirectory=tool-name

setuptools.backends._legacy:_Backend does NOT work. Use setuptools.build_meta.
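
For reference, a minimal pyproject.toml matching the layout above; the package name and the cli:main entry point are placeholder assumptions:

# pyproject.toml
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"   # the backend that works

[project]
name = "tool-name"
version = "0.1.0"
description = "Standalone CLI tool"

[project.scripts]
tool-name = "tool_name.cli:main"          # assumes cli.py defines main()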

Pitfalls

  • Reddit cloud-browser access is blocked by Reddit's security; use the native API only
  • gh search repos has a slower index than the web UI search; web results may show a repo that the API doesn't yet
  • Topic limit: 20 max. Remove generic topics (productivity, tools) to make room for search-critical ones
  • lark-cli is interactive on first run; set the LARK_LANGUAGE=zh env var before first use
  • Pip install in a sandbox: use --break-system-packages with macOS Homebrew Python
