Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Cf Publish

v1.1.0

Corpus-grounded Reddit comment engine. Generate natural replies that pass AI detection, powered by real comment corpus and 7-dimension QA scoring.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for aces1up/comment-forge.

Prompt preview (Install & Setup):
Install the skill "Cf Publish" (aces1up/comment-forge) from ClawHub.
Skill page: https://clawhub.ai/aces1up/comment-forge
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install comment-forge

ClawHub CLI


npx clawhub@latest install comment-forge
Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description (a Reddit comment generator that evades AI detection) align with the need for an LLM key (Gemini/OpenRouter) and optional fit scoring. However, the code and installer also request and handle additional optional APIs (Serper, TwitterAPI) and reference hosted corpus and analytics endpoints (clawagents.dev) that are not fully disclosed in SKILL.md's API Keys table. Those extras are plausible for 'intel' gathering but are not documented consistently.
Instruction Scope
SKILL.md instructs running setup.sh and the generator script, but does not call out that the runtime will: fetch corpus samples from a hosted API, post anonymous usage/registration telemetry to remote endpoints, read/write a home config (~/.comment-forge/config.json), and load any keys found there into the environment. The tool also includes deterministic anti-AI cleaning and typo injection to evade AI detection — consistent with the description but ethically notable. SKILL.md omitted disclosure of the default external host (clawagents.dev) and optional Serper/Twitter integrations that the code/setup actually use.
Install Mechanism
There is no package-manager install spec; setup.sh creates a Python venv and pip-installs the declared requirements (requests, python-dotenv), and the Python file will auto-pip-install those packages at runtime if missing. No remote arbitrary binary downloads or URL shorteners are used. The installer posts a registration payload to an analytics endpoint; the dependencies are proportionate, but the installer's behavior includes network registration/telemetry.
Credentials
SKILL.md documents GEMINI_API_KEY / OPENROUTER_API_KEY and optionally CEREBRAS_API_KEY, but the code and setup.sh also solicit SERPER_API_KEY and TWITTERAPI_KEY (and write them to .env and ~/.comment-forge/config.json). The script reads ~/.comment-forge/config.json and will set environment variables from it. Keys are stored in plaintext on disk and sent as boolean flags during registration. Requesting extra third-party API keys beyond the LLM providers is not well justified in the doc and increases the exfiltration surface.
Persistence & Privilege
The tool persists an install id and API keys in $SCRIPT_DIR/.env and ~/.comment-forge/config.json, and the installer performs a silent registration POST to a remote analytics endpoint. always:false and no cross-skill/system modifications mitigate some risk, but the persistent local config plus telemetry and runtime phone-home increases the blast radius if the remote service is untrusted.
What to consider before installing
This skill appears to implement what it says (generating Reddit-style replies using an LLM), but it also:

  • Contacts hosted endpoints (default: https://clawagents.dev) to fetch corpus samples and to register/report usage
  • Asks for and stores API keys (including the optional Serper/Twitter keys) in plaintext at ~/.comment-forge/config.json and .env
  • Posts telemetry on install and optionally on runs

Before installing:

  • Review and confirm that the external endpoints (CF_CORPUS_API, CF_ANALYTICS_URL) are trustworthy
  • Consider running in an isolated VM or container
  • Avoid supplying the extra non-LLM API keys unless needed
  • Inspect the full comment_forge.py (the sample here was truncated) for any additional network calls
  • Be aware that the tool's stated purpose (evading AI detection) may raise ethical/ToS concerns on the platforms you target

For lower risk, decline the optional telemetry/search API keys and run with a local-only corpus or with CF_CORPUS_API disabled.
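Before trusting the persisted config, you can list which key names the skill has stored without printing their values. This is a hypothetical audit helper, and it assumes ~/.comment-forge/config.json is a flat JSON object mapping key names to values; verify the actual layout against the real file first:

```python
import json
from pathlib import Path

def audit_config(path: Path = Path.home() / ".comment-forge" / "config.json") -> list[str]:
    """Return the names of keys persisted in the config, never their values.

    Assumes a flat JSON object (key name -> value); adjust if the real
    layout nests keys differently.
    """
    if not path.exists():
        return []
    data = json.loads(path.read_text())
    return sorted(data.keys())  # names only, secrets stay on disk

if __name__ == "__main__":
    for name in audit_config():
        print(name)
```

If any non-LLM keys show up that you never intended to provide, remove them from the file (and from .env) before running the tool again.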

Like a lobster shell, security has layers — review code before you run it.

latest: vk976es3rt03nc2nzg231t31rxx837yn3
177 downloads · 0 stars · 2 versions
Updated 1h ago · v1.1.0 · MIT-0

Comment Forge

Generate Reddit-native comments that sound like a real person wrote them. Powered by a real Reddit comment corpus and a 7-dimension QA pipeline that catches AI fingerprints.

What It Does

Feed it a post title, body, and existing comments. Get back a natural reply that:

  • Matches the thread tone using corpus-informed few-shot prompting
  • Passes AI detection via 7-dimension QA scoring (naturalness, value, subtlety, tone, detection risk, length, AI fingerprint)
  • Strips AI tells with deterministic anti-AI cleaning (em-dashes, smart quotes, 50+ AI vocabulary swaps)
  • Adds subtle humanness with smart typo injection (40% chance, max 1 per draft, never on product names)
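The cleaning and typo steps above are deterministic string operations. A minimal Python sketch of both, with an invented three-entry swap table standing in for the skill's 50+ entry vocabulary list:

```python
import random

# Illustrative only: the real skill ships a much larger swap table.
AI_VOCAB_SWAPS = {"utilize": "use", "delve": "dig", "furthermore": "also"}

def clean_draft(text: str) -> str:
    """Strip common AI tells: em-dashes, smart quotes, AI vocabulary."""
    text = text.replace("\u2014", ", ").replace("\u201c", '"').replace("\u201d", '"')
    for ai_word, plain in AI_VOCAB_SWAPS.items():
        text = text.replace(ai_word, plain)
    return text

def inject_typo(text: str, protected: set[str], rng: random.Random) -> str:
    """Drop one letter from one word: 40% chance, max one typo per draft,
    never touching protected words (e.g. product names)."""
    if rng.random() >= 0.40:
        return text
    words = text.split()
    candidates = [i for i, w in enumerate(words)
                  if len(w) > 4 and w not in protected]
    if not candidates:
        return text
    i = rng.choice(candidates)
    words[i] = words[i][:2] + words[i][3:]  # delete the third letter
    return " ".join(words)
```

The exact swap table, typo shape, and word-length cutoff here are assumptions for illustration; only the 40%/max-1/never-on-product-names rules come from the description above.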

Two Modes

Value-First: Pure tactical advice. No product mention. Great for building karma and credibility.

Product-Drop: Mention a product naturally in the reply. Auto-fit scoring determines whether the product fits the thread (1-10 score). If it doesn't fit naturally, the engine falls back to value-first.
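The fallback described above reduces to a small decision function. The threshold of 6 here is an assumption, since the listing only documents a 1-10 fit score:

```python
def choose_mode(fit_score: int, requested: str = "product_drop",
                threshold: int = 6) -> str:
    """Pick the generation mode: honor a product-drop request only when
    the product's fit score clears the (assumed) threshold."""
    if requested == "product_drop" and fit_score >= threshold:
        return "product_drop"
    return "value_first"
```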

Pipeline

  1. Corpus Sampling - Stratified, score-weighted real Reddit comment examples
  2. Fit Scoring - Classify thread intent, recommend mode (optional, for product-drop)
  3. Draft Generation - Corpus-informed few-shot prompting via Gemini or OpenRouter
  4. QA Pipeline - Score, revise, re-score loop (3 attempts for product-drop, 7 for value-first)
  5. Anti-AI Cleaning - Deterministic post-processing strips AI vocabulary, em-dashes, smart quotes
  6. Human Touch - Smart typo injection for believable imperfections
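Step 4's score-revise-re-score loop can be sketched as follows. `score_fn` and `revise_fn` are hypothetical stand-ins for the skill's LLM calls; the 7.0 threshold and the 3/7 attempt budgets come from the listing:

```python
from typing import Callable

def qa_loop(draft: str, mode: str,
            score_fn: Callable[[str], float],
            revise_fn: Callable[[str], str]) -> tuple[str, float]:
    """Score a draft, revise it, and re-score until it passes 7.0 or the
    attempt budget runs out; keep the best-scoring draft seen."""
    attempts = 3 if mode == "product_drop" else 7
    best, best_score = draft, score_fn(draft)
    for _ in range(attempts):
        if best_score >= 7.0:
            break
        draft = revise_fn(draft)
        score = score_fn(draft)
        if score > best_score:
            best, best_score = draft, score
    return best, best_score
```

Keeping the best draft (rather than the last one) guards against a revision that scores worse than its predecessor; whether the real pipeline does this is an assumption.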

Quick Start

bash setup.sh
source .venv/bin/activate

# Value-first (no product)
python3 comment_forge.py --post "Best CRM for small teams?"

# Product-drop
python3 comment_forge.py --post "What tools do you use for email?" \
  --product "Acme Mail" --product-desc "Email automation for small teams"

# With existing comments for tone matching
python3 comment_forge.py --post "How do you handle cold outreach?" \
  --comments "I use Apollo" "LinkedIn works best imo"

# From JSON file
python3 comment_forge.py --file post.json --json

# Skip QA (faster)
python3 comment_forge.py --post "..." --skip-qa

JSON File Format

{
  "title": "Best CRM for small teams?",
  "body": "Looking for something simple...",
  "comments": [
    "I use HubSpot free tier",
    "Notion works if you're small"
  ],
  "product": "Acme CRM",
  "product_url": "https://acme.com",
  "product_description": "Simple CRM for small teams",
  "category": "saas",
  "mode": "product_drop"
}

API Keys

Key                   Required              Purpose
GEMINI_API_KEY        Yes (or OpenRouter)   Primary LLM for generation + QA
OPENROUTER_API_KEY    Fallback              Alternative LLM provider
CEREBRAS_API_KEY      Optional              Fast fit scoring (free tier)

QA Dimensions

Dimension            Weight   What It Checks
naturalness          15%      Does it sound like a real person?
value_contribution   15%      Does it help the thread?
subtlety             20%      Is the product mention (if any) natural?
tone_match           10%      Does it match thread + corpus tone?
detection_risk       10%      Would redditors flag it as spam?
length_appropriate   10%      Right length for this thread type?
ai_fingerprint       20%      Em-dashes, AI vocab, perfect grammar?

Pass threshold: 7.0/10 composite score.
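Assuming the composite is a weighted sum of the seven dimension scores (the weights above total 100%), the pass check is simple arithmetic; a sketch:

```python
# Weights taken from the QA Dimensions table; the weighted-sum formula
# itself is an assumption about how the composite is computed.
WEIGHTS = {
    "naturalness": 0.15, "value_contribution": 0.15, "subtlety": 0.20,
    "tone_match": 0.10, "detection_risk": 0.10,
    "length_appropriate": 0.10, "ai_fingerprint": 0.20,
}

def composite(scores: dict[str, float]) -> float:
    """Weighted sum of per-dimension scores (each 0-10)."""
    return sum(scores[dim] * w for dim, w in WEIGHTS.items())

def passes(scores: dict[str, float]) -> bool:
    return composite(scores) >= 7.0
```

Because the weights sum to 1.0, a draft scoring 8 on every dimension gets a composite of exactly 8.0.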
