Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Auto-Skill Extractor

v1.0.2

Automatically learn from your AI's work and turn repeated subagent tasks into reusable skills

by Wahaj Ahmed (@wahajahmed010)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for wahajahmed010/auto-skill-extractor.

Prompt Preview: Install & Setup
Install the skill "Auto-Skill Extractor" (wahajahmed010/auto-skill-extractor) from ClawHub.
Skill page: https://clawhub.ai/wahajahmed010/auto-skill-extractor
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install auto-skill-extractor

ClawHub CLI


npx clawhub@latest install auto-skill-extractor
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The code and runtime instructions align with the description: it observes subagent results, scores complexity, creates draft skill directories, and promotes/archives drafts. No unrelated credentials, network calls, or external services are required.
Instruction Scope
SKILL.md and the scripts focus on ingesting a small JSON trigger (stdin or temp file) and operate only on a configurable workspace. The documentation explicitly warns to avoid placing secrets in transcript_summary. However, the tool asks to be wired into AGENTS.md so it will routinely receive summarized subagent outputs — you must avoid sending sensitive transcripts. generate_skill_name uses transcript_summary to create file and folder names (sanitized), so accidental sensitive tokens could end up partially reflected in filenames even if not persisted as raw transcripts.
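To make the filename-leak concern concrete, here is a hypothetical sketch of how a summary-derived name can retain tokens from the input even after sanitization. This is not the actual generate_skill_name implementation; the regex and word limit are assumptions for illustration only.

```python
import re

def generate_skill_name(transcript_summary: str) -> str:
    """Hypothetical sketch: derive a sanitized directory name from a summary.

    Even though the output contains only [a-z0-9-], tokens from the
    summary survive verbatim, so secrets in the summary can leak into
    file and folder names.
    """
    words = re.findall(r"[a-z0-9]+", transcript_summary.lower())[:4]
    return "-".join(words) or "unnamed-skill"
```

For example, a summary containing an access-key fragment would surface it in the draft directory name, which is why only brief, secret-free summaries should be passed in.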
Install Mechanism
No network downloads or external installers; this is an instruction-only skill with included Python scripts. install.json lists setup directories but there is no package download or extraction from untrusted URLs.
Credentials
No credentials or secrets are requested. Only an optional AUTO_SKILL_WORKSPACE path is used (default current directory). The requested environment access is proportional to the task.
Persistence & Privilege
The scripts write to your workspace (skills/auto-draft, skills/auto, etc.) and can promote drafts to an 'active' skills directory after meeting thresholds (default 3 invocations). If your OpenClaw runtime auto-loads skills from that directory, this gives the extractor the ability to add new skills to the agent without a human review step. While the current implementation creates mainly SKILL.md and small metadata files (no remote calls or arbitrary code downloads), automatic creation and promotion of skills has a notable privilege/persistence implication that requires operational controls.
What to consider before installing
  • Review the code yourself (or in a staging environment). The included Python scripts are the runtime behavior; they operate on the workspace and create/rename directories and files.
  • Set AUTO_SKILL_WORKSPACE to an isolated, dedicated directory (not the system root or a directory with other sensitive configs). This confines where drafts and promoted skills are written.
  • Do not pipe raw transcripts or secrets into the trigger. The author warns to send only a brief transcript_summary; follow that. generate_skill_name will derive filenames from that summary and could leak fragments.
  • Require human review before promotion: either increase PROMOTE_THRESHOLD or run skill-lifecycle.py process manually rather than letting automated invocations promote drafts. Consider removing/locking automated invocation in production.
  • Audit the skills/auto directory for new promotions and add monitoring/alerts when new files are added.
  • If your runtime auto-loads skills from the active directory, treat this tool as a capability that can expand agent behavior — run it with least privilege and in a sandboxed agent instance first.

Why 'suspicious' vs 'benign': the code is coherent and implements what's described, but the ability to autonomously create and promote skills into an active skills directory is a persistence/privilege risk that requires operational controls (workspace isolation, review steps) before use. If you can confirm you will run it only in an isolated workspace with manual promotion, confidence would increase.

Like a lobster shell, security has layers — review code before you run it.

latest: vk975f6q327j1ktxft7be33epeh850rz4
79 downloads
0 stars
3 versions
Updated 1w ago
v1.0.2
MIT-0

Auto-Skill Extractor

Turn your agent's work into reusable skills. Automatically.

When to Use

✅ You run complex subagent tasks repeatedly
✅ You want to build a skill library without manual authoring
✅ You run multi-domain tasks (files + system + web)
✅ You want your agent to learn from its own patterns

When NOT to Use

❌ Simple 1-2 tool tasks (not worth skilling)
❌ One-off exploratory work
❌ You prefer manually authoring every skill

Quick Start

1. Install

clawhub install auto-skill-extractor

2. Create Directories

mkdir -p skills/auto-draft skills/auto skills/manual

3. Wire Into Your Agent

Add to AGENTS.md after subagent completion:

# Auto-skill extraction trigger
import subprocess
import json
import os  # needed if you use the file-based alternative below

# Write trigger input
trigger_data = {
    "completion_status": "success",
    "tool_calls": tool_call_count,  # from subagent result
    "transcript_summary": brief_summary,  # Keep brief, avoid secrets
    "session_id": session_key,
    "multi_domain": True  # if applicable
}

# RECOMMENDED: Pipe via stdin (no file on disk)
result = subprocess.run(
    ["python3", "scripts/auto-skill-trigger.py"],
    input=json.dumps(trigger_data),
    capture_output=True,
    text=True
)

# ALTERNATIVE: File-based (delete immediately after)
# with open("/tmp/trigger.json", "w") as f:
#     json.dump(trigger_data, f)
# result = subprocess.run(
#     ["python3", "scripts/auto-skill-trigger.py", "/tmp/trigger.json"],
#     capture_output=True, text=True
# )
# os.remove("/tmp/trigger.json")  # SECURITY: Delete after use

output = json.loads(result.stdout)
if output.get("action") == "extract":
    print(f"🔄 Created DRAFT skill: {output['skill_name']}")

4. Use It

Run a subagent with complex work:

Spawn subagent to analyze codebase...
- Read 5 config files ✓
- Check processes ✓
- Write summary report ✓

Result: skills/auto-draft/codebase-analyzer-abc123/ created automatically

How It Works

Step 1: Trigger Evaluation

After every subagent completion:

Check        Must Pass
Status       success
Tool calls   ≥ 3
Complexity   ≥ 4
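The three gate checks above can be sketched as a small predicate. This is illustrative only; the function and field names are assumptions, not the actual auto-skill-trigger.py internals.

```python
# Thresholds from the documentation (defaults shown in Configuration)
COMPLEXITY_THRESHOLD = 4
MIN_TOOL_CALLS = 3

def should_extract(trigger: dict, complexity: float) -> bool:
    """All three checks must pass before a DRAFT skill is created."""
    return (
        trigger.get("completion_status") == "success"
        and trigger.get("tool_calls", 0) >= MIN_TOOL_CALLS
        and complexity >= COMPLEXITY_THRESHOLD
    )
```

A failed run, too few tool calls, or a sub-threshold complexity score each independently block extraction.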

Step 2: Complexity Scoring

Base:    tool_calls × 0.7  (max 5 pts)
          3 tools ≈ 2.1 pts, 5 tools ≈ 3.5 pts

Bonus:   +2  multi-domain (files + system + web)
         +2  error recovery (retry logic)
         +1  fail-then-succeed

Threshold: 4 points = extract
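The scoring rules above can be written out as a small function. This is a sketch of the documented formula, not the script's actual code; parameter names are assumptions.

```python
def score_complexity(tool_calls: int,
                     multi_domain: bool = False,
                     error_recovery: bool = False,
                     fail_then_succeed: bool = False) -> float:
    """Base points scale with tool calls (capped at 5), plus bonuses."""
    score = min(tool_calls * 0.7, 5.0)
    if multi_domain:
        score += 2  # files + system + web in one task
    if error_recovery:
        score += 2  # retry logic observed
    if fail_then_succeed:
        score += 1
    return score
```

A task with 3 tool calls scores about 2.1, so it only crosses the threshold of 4 with a bonus such as multi-domain work.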

Step 3: DRAFT Creation

skills/auto-draft/my-skill-abc123/
├── SKILL.md      ← Template with metadata
└── meta.json     ← Invocation tracking

Step 4: Evaluation Period

  • Use the DRAFT skill 3 times successfully
  • Each use logged in meta.json
  • After 3rd use → auto-promoted to skills/auto/

Step 5: ACTIVE Status

Promoted skills are:

  • Visible in /skills auto list
  • Ready for manual completion
  • Versioned and tracked

Configuration

Edit scripts/auto-skill-trigger.py:

COMPLEXITY_THRESHOLD = 4    # Lower = more drafts, more curation
MAX_QUEUE_SIZE = 50         # Pending extraction limit
PROMOTE_THRESHOLD = 3       # Invocations before promotion

Manual Control

Force Extraction

Ignore thresholds:

#skill: force

List Drafts

python3 scripts/skill-lifecycle.py drafts

Promote Early

python3 scripts/skill-lifecycle.py promote my-skill-name

Archive Stale Drafts

python3 scripts/skill-lifecycle.py process
# Removes drafts unused for 7+ days
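The 7-day staleness rule above might be checked like this. It is a sketch under the assumption that "unused" is judged by the meta.json modification time; the real script may track last use another way.

```python
import pathlib
import time

STALE_AFTER_DAYS = 7

def is_stale(draft_dir: pathlib.Path, now=None) -> bool:
    """True if the draft's meta.json was last touched 7+ days ago."""
    now = now or time.time()
    meta = draft_dir / "meta.json"
    last_touch = meta.stat().st_mtime if meta.exists() else draft_dir.stat().st_mtime
    return (now - last_touch) > STALE_AFTER_DAYS * 86400
```

Running the process command periodically (e.g. from a cron job) would archive any draft for which this check returns True.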

Safety

  • Collision detection — Won't overwrite existing skills
  • Path sanitization — ../../../etc paths are blocked
  • Atomic promotion — Copy → verify → move → delete
  • Queue limits — Max 50 pending extractions
  • Return value checks — Errors logged, not silent
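The path-sanitization idea can be illustrated with a containment check: resolve the candidate path and reject anything that escapes the workspace. This is a sketch of the technique, not the skill's actual sanitizer.

```python
import pathlib

def safe_skill_path(workspace: pathlib.Path, name: str) -> pathlib.Path:
    """Return the skill path under workspace, rejecting escapes like '../../../etc'."""
    candidate = (workspace / name).resolve()
    if workspace.resolve() not in candidate.parents:
        raise ValueError(f"unsafe skill name: {name!r}")
    return candidate
```

Resolving before comparing is what defeats traversal sequences; a plain string prefix check would miss `..` segments and symlinks.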

Verification

Check extraction worked:

# See recent DRAFTs
ls -la skills/auto-draft/

# Check extraction queue
cat scripts/skill-extraction-queue.json

# View specific skill
cat skills/auto-draft/my-skill-abc123/SKILL.md

Pitfalls

Problem                  Cause                 Fix
No DRAFTs created        Threshold too high    Lower COMPLEXITY_THRESHOLD
Too many DRAFTs          Threshold too low     Raise threshold, manually curate
Promotion never happens  Not using DRAFTs      Run /skills promote manually
Skills not useful        Noise in extraction   Tune thresholds, review DRAFTs weekly

Architecture

Subagent completes
    ↓
auto-skill-trigger.py
    ↓
Score complexity (0-10)
    ↓
If ≥ 4: Create DRAFT
    ↓
skill-lifecycle.py
    ↓
After 3 uses: PROMOTE → skills/auto/
    ↓
After 7 days: ARCHIVE
