Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

ClawHub Skill Publisher

v1.0.1

Research, structure, and publish skills to ClawHub. Analyzes top listings for content patterns, generates gap reports against your draft, patches README/SKIL...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for ragesaq/lum-skill-publisher.

Prompt Preview: Install & Setup
Install the skill "ClawHub Skill Publisher" (ragesaq/lum-skill-publisher) from ClawHub.
Skill page: https://clawhub.ai/ragesaq/lum-skill-publisher
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: clawhub
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install lum-skill-publisher

ClawHub CLI


npx clawhub@latest install lum-skill-publisher
Security Scan

VirusTotal: Suspicious
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description match the runtime instructions: the skill uses the clawhub CLI to search, install, analyze marketplace SKILL.md/README.md files, patch drafts, and run clawhub publish. Requiring the clawhub binary (declared in metadata) is appropriate for this purpose.
Instruction Scope
Instructions tell the agent to install third‑party skills into /tmp/ch-research, read their SKILL.md and README.md, patch the user's README/SKILL.md, and run 'clawhub publish'. These steps are coherent with the stated goal, but they include destructive actions (patching files and publishing under the user's auth) without advising the agent to create backups, show diffs, or obtain explicit human confirmation before publishing.
Install Mechanism
This is an instruction-only skill with no install spec or external downloads. That minimizes supply‑chain risk. All runtime actions rely on the existing 'clawhub' CLI already declared as a required binary.
Credentials
No environment variables, credentials, or config paths are requested. The skill relies on the user's existing clawhub CLI auth (e.g., local token), which is proportional to publishing functionality.
Persistence & Privilege
The skill is not always-enabled and does not request persistent platform privileges. It does instruct actions that affect the user's workspace and account (file patches, 'clawhub publish'), but it does not modify other skills' configs or claim elevated persistent access.
Assessment
This skill appears coherent for preparing and publishing ClawHub skills, but take these precautions before use: (1) Ensure you trust the clawhub CLI on your machine and understand what 'clawhub install' does (does it run install scripts?). (2) Run the workflow in an isolated workspace or create backups of README.md and SKILL.md so you can inspect/undo patches. (3) Require a human review step: ask the agent to show diffs and obtain explicit approval before running 'clawhub publish' under your account. (4) Confirm authentication intentionally (run 'clawhub whoami' yourself) and be cautious when installing third‑party skill packages into your environment.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🏗️ Clawdis
OS: macOS · Linux · Windows
Bins: clawhub
Latest: vk979b6vdrbyjqmwzpgbzsw30dx823p18
416 downloads · 0 stars · 2 versions
Updated 14h ago
v1.0.1 · MIT-0 · macOS, Linux, Windows

ClawHub Skill Publisher v1

Turn a rough skill idea into a polished, publish-ready ClawHub listing — informed by what's actually working in the marketplace.

Use this skill when you want to:

  • Publish a new skill to ClawHub
  • Audit an existing skill draft against marketplace standards
  • Research what top-performing skills look like before writing yours

Workflow

Step 1 — Research Top Listings

Install the most relevant published skills in a temp directory and read their SKILL.md + README.md:

mkdir -p /tmp/ch-research

# Search for skills in your category
clawhub search "your-category-keyword"

# Install top 3-5 results for analysis
clawhub install <slug1> --dir /tmp/ch-research --force
clawhub install <slug2> --dir /tmp/ch-research --force
# (rate limit: add 3s sleep between installs)
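The install loop above can be sketched as a small script. The slugs are hypothetical placeholders, and `CLAWHUB` defaults to `echo clawhub` so the sketch runs safely anywhere; set `CLAWHUB=clawhub` to perform real installs with the flags shown above.

```shell
# Research-loop sketch. CLAWHUB is intentionally unquoted below so the
# "echo clawhub" default splits into command + argument.
CLAWHUB="${CLAWHUB:-echo clawhub}"
RESEARCH_DIR="/tmp/ch-research"
SLUGS="slug1 slug2 slug3"   # hypothetical slugs taken from `clawhub search` output

mkdir -p "$RESEARCH_DIR"
for slug in $SLUGS; do
  $CLAWHUB install "$slug" --dir "$RESEARCH_DIR" --force
  sleep 3   # respect the rate limit noted above
done
```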

What to capture per skill:

  • Description line: length, tone, value-first or feature-first?
  • First sentence of SKILL.md: does it state the use case immediately?
  • Structure: does it use tables, code blocks, headers?
  • Word count (target: 400–700 words for SKILL.md)
  • Sections present: commands, when-to-use, safety, version history
  • Trust signals: safety section, version history, explicit opt-outs
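Part of that capture step can be automated. A minimal sketch (the helper name `audit_skill_md` is hypothetical): report each researched file's word count and which of the trust-signal sections it contains.

```shell
# Print "<file>: N words" plus a marker for each section found.
audit_skill_md() {
  f="$1"
  words=$(wc -w < "$f" | tr -d ' ')   # tr strips BSD wc's leading spaces
  printf '%s: %s words' "$f" "$words"
  for section in "When to Use" "Safety" "Version History"; do
    if grep -qi "^## $section" "$f"; then
      printf ' | has "%s"' "$section"
    fi
  done
  printf '\n'
}
```

Typical usage: `for f in /tmp/ch-research/*/SKILL.md; do audit_skill_md "$f"; done`.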

Step 2 — Gap Analysis

Compare your draft against findings. Score each dimension:

| Dimension | Best Practice | Your Draft | Action |
|---|---|---|---|
| Description line | ≤160 chars, value-first, no buzzwords | ? | Patch or OK |
| "When to use" | Explicit trigger + do/don't | ? | Patch or OK |
| Commands/interface | Slash commands or trigger phrases | ? | Patch or OK |
| Word count (SKILL.md) | 400–700 words | ? | Trim or expand |
| Tables vs. prose | Tables preferred for comparisons | ? | Patch or OK |
| Version history | Present, at bottom | ? | Add or OK |
| Safety section | Explicit "never does X" list | ? | Add or OK |
| Examples | Concrete ✅/❌ pairs | ? | Add or OK |
| Attribution | Link back to openclaw.ai / clawhub.ai | ? | Add or OK |

Step 3 — Patch the Draft

Apply gap findings. Priority order:

  1. Description line (most visible — fix first)
  2. "When to use" section (drives installs)
  3. Trim word count if over 700 (cut prose, keep tables)
  4. Add missing sections (safety, version history)
  5. Convert prose comparisons to tables
  6. Add examples file if none exists
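The security scan on this listing recommends backups and visible diffs before patching. A minimal sketch of that precaution, using two hypothetical helpers around the patch step:

```shell
# Copy each existing file to <name>.bak before patching.
snapshot() {
  for f in "$@"; do
    if [ -f "$f" ]; then cp "$f" "$f.bak"; fi
  done
}

# Show a unified diff of each patched file against its snapshot.
review() {
  for f in "$@"; do
    if [ -f "$f.bak" ]; then diff -u "$f.bak" "$f"; fi
  done
}

# Typical flow:
#   snapshot README.md SKILL.md
#   ... apply the gap-analysis patches ...
#   review README.md SKILL.md
```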

Step 4 — Publish

# Verify auth
clawhub whoami

# Publish (run from workspace root or skill parent dir)
clawhub publish ./skills/<your-skill> \
  --slug <your-slug> \
  --name "Your Skill Name" \
  --version 1.0.0 \
  --changelog "Initial release"

Published URL: https://clawhub.ai/skills/<your-slug>
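Per the scan's advice, it is worth gating the publish behind an explicit confirmation. A sketch (the wrapper name is hypothetical; flags are the ones shown above): `CLAWHUB` defaults to `echo clawhub` so the sketch is safe to run, and `CLAWHUB=clawhub` makes it real.

```shell
CLAWHUB="${CLAWHUB:-echo clawhub}"   # unquoted below so the echo default splits

# Verify auth, show what will happen, and require a typed "y" to proceed.
publish_with_confirm() {
  skill_dir="$1"; slug="$2"; version="$3"
  $CLAWHUB whoami
  printf 'Publish %s as %s@%s? [y/N] ' "$skill_dir" "$slug" "$version"
  read -r answer
  case "$answer" in
    y|Y) $CLAWHUB publish "$skill_dir" --slug "$slug" --version "$version" ;;
    *)   echo "aborted" ;;
  esac
}
```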


ClawHub Listing Anatomy

Description Field (≤160 chars)

The most important text. Shows in search results and install prompts.

Formula: [What it does] + [how] + [key outcome].

✅ Good: "Reduce AI costs by batching related asks into fewer responses. ~30–50% fewer API calls, no quality loss."
❌ Bad: "ClawSaver — Combines Linked Asks into Well-structured Sets for Affordable, Verified, Efficient Responses"
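The 160-character limit is easy to check mechanically. A minimal sketch (the helper name is hypothetical):

```shell
# Report whether a description line fits the 160-char listing limit.
check_description() {
  desc="$1"
  len=${#desc}
  if [ "$len" -le 160 ]; then
    echo "OK ($len chars)"
  else
    echo "TOO LONG ($len chars, limit 160)"
  fi
}
```

Typical usage: `check_description "Reduce AI costs by batching related asks into fewer responses."`.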

SKILL.md Structure (what the agent reads)

---
name: skill-name
version: X.Y.Z
description: "Same as listing description"
metadata: {"openclaw":{"emoji":"🔧"}}
---

# Skill Name vX

> One-line positioning statement.

[One paragraph: what it does and why.]

## When to Use
[Use / Do not use — explicit conditions]

## Core Behavior / Commands
[Tables preferred. Trigger phrases, commands, decision rules.]

## Safety
[What it never does. Explicit opt-outs.]

## Installation
[clawhub install command]

## Version History
[- X.Y.Z — what changed]

README.md Structure (humans + listing body)

# Skill Name
> Tagline

## Why [Skill Name]?
[Problem → solution in 2-3 sentences]

## What It Does
[Numbered or bulleted feature list]

## [Key Decision Table or Usage Example]

## Safety Model

## Installation

## Version

Marketplace Patterns (Observed Feb 2026)

What top skills have in common

  • Value-first description (outcome before feature list)
  • "When to use" is explicit — most top skills have do/don't lists
  • Tables over prose for anything comparative
  • Safety section is a trust signal — include it even if short
  • Version history at the bottom — shows maintenance
  • Word count 400–700 for SKILL.md; README can be longer

What separates good from great

  • Great: examples file with concrete ✅/❌ pairs
  • Great: trigger phrase detection (tells agent when to activate)
  • Great: explicit opt-outs ("say X to disable")
  • Good but not great: long prose descriptions, missing opt-outs
  • Avoid: backronyms or clever names in the description line (save for README)

Category density (as of Feb 2026)

  • Cost/token tracking: saturated — need a differentiated angle
  • Batch/workflow: sparse — opportunity
  • Provider-specific tools: mixed — Kimi-heavy, OpenAI moderate
  • Productivity/meta-skills: sparse — opportunity

File Checklist Before Publishing

  • SKILL.md — frontmatter has name, version, description
  • SKILL.md — word count 400–700
  • SKILL.md — has "When to Use" section
  • SKILL.md — has Safety section
  • SKILL.md — has Version History
  • README.md — value-first, ≤600 words
  • README.md — installation command correct
  • examples/ — at least one example file (optional but recommended)
  • Description line — ≤160 chars, value-first
  • clawhub whoami — auth confirmed before publish
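Most of the checklist above can be scripted. A sketch of a hypothetical `precheck` helper covering the SKILL.md items (adjust the section names and limits to your draft):

```shell
# Run the mechanical SKILL.md checks; print FAILs and return nonzero on any.
precheck() {
  f="$1"; fails=0
  words=$(wc -w < "$f" | tr -d ' ')
  [ "$words" -ge 400 ] && [ "$words" -le 700 ] || { echo "FAIL: word count $words (want 400-700)"; fails=1; }
  for section in "When to Use" "Safety" "Version History"; do
    grep -qi "^## $section" "$f" || { echo "FAIL: missing \"$section\" section"; fails=1; }
  done
  grep -q '^description:' "$f" || { echo "FAIL: no description in frontmatter"; fails=1; }
  if [ "$fails" -eq 0 ]; then echo "PASS: $f"; fi
  return "$fails"
}
```

Auth confirmation (`clawhub whoami`) stays a manual step, as the scan advises.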

Skill Type: Behavior-Change vs. Active Tool

Most ClawHub skills are behavior-change skills — they work by shaping agent judgment through instructions, not by running code or intercepting requests at the system level. This is the same mechanism as execution-loop-breaker, token-saver, and most top listings.

When writing a behavior-change skill:

  • Be explicit in the description that it works through agent behavior, not automated interception
  • Use language like "trains your agent to..." or "gives your agent the judgment to..." — not "automatically detects" or "intercepts"
  • Don't overstate automation. "Teaches your agent to consolidate related asks" is honest. "Automatically batches requests" implies system-level routing that the skill doesn't do.
  • The benefit is still real — behavior change produces real cost and efficiency improvements

When a skill needs to be an active tool:

  • Requires pre-response hooks or middleware (OpenClaw doesn't currently expose these)
  • Requires script files (analyzer.js, optimizer.js) that actually run
  • Example: a real token optimizer that reads context size and trims it before sending

Bottom line: Instruction-based skills are legitimate and valuable. Just be honest about the scope. Users trust skills that set accurate expectations.


Version History

  • 1.0.1 — Added "Skill Type: Behavior-Change vs. Active Tool" lesson from ClawSaver development
  • 1.0.0 — Initial release. Research workflow, gap analysis framework, listing anatomy, marketplace patterns from Feb 2026 analysis.
