Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Auto Skill Distiller

v1.0.0

Auto-distill successful workflows into reusable skills. Use after completing any multi-step task to evaluate if the workflow should be saved as a skill. Trig...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for christianye/auto-skill-distiller.

Prompt Preview: Install & Setup
Install the skill "Auto Skill Distiller" (christianye/auto-skill-distiller) from ClawHub.
Skill page: https://clawhub.ai/christianye/auto-skill-distiller
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install auto-skill-distiller

ClawHub CLI


npx clawhub@latest install auto-skill-distiller
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
The name and description match the runtime instructions: the document describes how to extract workflows, generalize them, and save SKILL.md files under ~/.openclaw/skills. References to trinity-harness Layer 3 integration are optional, not hidden requirements.
Instruction Scope
SKILL.md instructs the agent to inspect recent workflow steps, generalize them, run quality checks, and write files to ~/.openclaw/skills/<slug>/SKILL.md (and optional references/). It also suggests using commands like ls and read against the skills directory. There are no instructions to read unrelated system files, access credentials, or send data to external endpoints.
Install Mechanism
Instruction-only skill with no install spec, no code to download, and no required binaries — lowest-risk install surface.
Credentials
No environment variables, credentials, or config paths are required. The only filesystem access described is under the user's ~/.openclaw/skills and memory files, which is proportional to the stated purpose.
Persistence & Privilege
The skill directs writing files into ~/.openclaw/skills (its expected scope). It does not set always:true. However, if your agent/platform enables autonomous triggers (Layer 3 compound mode), the agent could create SKILL.md files automatically — the doc advises announcing creations, but you should confirm the platform enforces review before committing changes.
Assessment
This instruction-only skill is coherent and low-risk: it only creates/edits SKILL.md under ~/.openclaw/skills and performs local checks. Before installing or enabling automatic distillation: (1) ensure you want the agent to be able to write into ~/.openclaw/skills (backup that directory if needed); (2) confirm your platform enforces a review/approval step so created skills aren’t added silently (the SKILL.md itself says to announce creations); (3) review any generated SKILL.md for accidental inclusion of sensitive conversation content before saving; and (4) if you do not want autonomous creation, keep autonomous invocation constrained or disable any Layer 3 automatic distillation integration. Overall this skill appears to do what it claims.

Like a lobster shell, security has layers — review code before you run it.

Tags: automation · compound · distill · latest · learning · skill · workflow
86 downloads
0 stars
1 version
Updated 2w ago
v1.0.0
MIT-0

Skill Distiller

Turn successful workflows into reusable skills — automatically.

Inspired by Hermes Agent's learning loop, but with quality gates to prevent skill bloat.

When to Distill

Not every task deserves a skill. Evaluate these three criteria:

All three must be YES to proceed:

  1. Novel? — Did this task require a workflow you haven't done before? (If you already have a skill for this, update it instead of creating a new one)
  2. Successful? — Did the task complete with verified results? (Failed tasks produce lessons, not skills — write to memory/lessons-learned.md instead)
  3. Reusable? — Will this exact workflow likely be needed again? (One-off tasks don't need skills)

Quick scoring:

Novel + Successful + Reusable = CREATE SKILL
Novel + Successful + One-off  = WRITE TO MEMORY (lesson learned, not a skill)
Novel + Failed                = WRITE TO LESSONS-LEARNED
Not Novel                     = UPDATE EXISTING SKILL (or skip)
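
The scoring table above can be sketched as a small decision function. The helper name `distill_gate` is hypothetical, not part of the skill; it just encodes the same precedence as the table (novelty checked first, then success, then reusability):

```shell
# Hypothetical helper: map yes/no answers to the three gate
# questions onto the action named in the scoring table.
distill_gate() {
  local novel="$1" successful="$2" reusable="$3"
  if [ "$novel" != "yes" ]; then
    echo "UPDATE_EXISTING_SKILL"        # Not Novel
  elif [ "$successful" != "yes" ]; then
    echo "WRITE_TO_LESSONS_LEARNED"     # Novel + Failed
  elif [ "$reusable" != "yes" ]; then
    echo "WRITE_TO_MEMORY"              # Novel + Successful + One-off
  else
    echo "CREATE_SKILL"                 # All three YES
  fi
}
```

For example, `distill_gate yes yes no` routes a successful one-off task to memory rather than a new skill.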

Distillation Process

Step 1: Extract the Workflow

Look back at what you just did and identify:

  • Trigger: What kind of request started this? (pattern, not specific instance)
  • Steps: What were the key steps, in order?
  • Tools: Which tools were used and how?
  • Decisions: What non-obvious choices were made and why?
  • Gotchas: What almost went wrong or required retry?

Step 2: Generalize

Transform the specific instance into a reusable pattern:

  • Replace specific file names with <input_file>, <output_path> etc.
  • Replace specific content with descriptions of what goes there
  • Extract magic numbers into named parameters
  • Identify which steps are always needed vs. conditional

Bad (too specific):

1. Read ch10-multi-agent-comm-patterns.md
2. Convert markdown to docx using python-docx
3. Upload to feishu folder nodcnxdXVfsiCVDuiigFVpnCPoc

Good (generalized):

1. Read source markdown file(s)
2. Convert to docx using python-docx (see references/docx-patterns.md)
3. Upload to target feishu folder

Step 3: Write SKILL.md

Generate the skill following the standard format:

---
name: <slug>
description: "<when to use this skill — be specific about triggers>"
---

# <Skill Name>

## When to Use
<1-2 sentences on the trigger pattern>

## Workflow
<Numbered steps — the core of the skill>

## Key Decisions
<Non-obvious choices and their rationale>

## Gotchas
<Things that can go wrong and how to handle them>

## References
<Links to detailed docs if needed>

Size target: SKILL.md body should be under 200 lines. If longer, split into SKILL.md (workflow) + references/ (details).
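
The 200-line budget is easy to enforce mechanically before saving. A minimal sketch (the `check_skill_size` helper is an assumption for illustration, not part of the skill):

```shell
# Hypothetical helper: warn when a SKILL.md body exceeds the
# 200-line target, in which case details should move to references/.
check_skill_size() {
  local file="$1"
  local lines
  lines=$(wc -l < "$file")
  lines=$((lines))   # normalize any whitespace padding from wc
  if [ "$lines" -gt 200 ]; then
    echo "over-budget: $lines lines - split details into references/"
  else
    echo "ok: $lines lines"
  fi
}
```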

Step 4: Quality Check

Before saving, verify:

  • Description clearly states when this skill should trigger
  • Steps are ordered and each has a clear action
  • No hardcoded values that should be parameters
  • Gotchas are specific, not generic ("handle errors properly" = useless)
  • Doesn't duplicate an existing skill (check ls ~/.openclaw/skills/)

Step 5: Save and Register

Save to ~/.openclaw/skills/<slug>/SKILL.md.

If the skill has reference materials, save them to ~/.openclaw/skills/<slug>/references/.

After saving, verify the skill loads:

ls ~/.openclaw/skills/<slug>/SKILL.md
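
Step 5 can be sketched as a single helper that creates the directory, writes the file, and verifies it landed where the loader expects. `save_skill` is a hypothetical name, not a real OpenClaw command:

```shell
# Hypothetical helper: save a skill body to the standard location
# and verify the file exists afterwards.
save_skill() {
  local slug="$1" body="$2"
  local dir="$HOME/.openclaw/skills/$slug"
  mkdir -p "$dir"
  printf '%s\n' "$body" > "$dir/SKILL.md"
  ls "$dir/SKILL.md"   # verification step from the doc
}
```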

Automatic Distillation Mode

When integrated with trinity-harness's Layer 3 (Compound), distillation happens automatically:

  1. Task completes → Layer 3 Compound phase triggers
  2. Evaluate Novel + Successful + Reusable
  3. If all YES → run distillation process
  4. If NO → write lesson to memory instead
  5. Announce to user: "Distilled skill: <name>. Review with read ~/.openclaw/skills/<slug>/SKILL.md"

Never auto-distill silently. Always announce what was created so the user can review, edit, or delete.

Skill Maintenance

Update vs. Create

Before creating a new skill, check if a related one exists:

ls ~/.openclaw/skills/ | grep -i <keyword>

If a similar skill exists, update it (add the new pattern as a variant) rather than creating a near-duplicate.

Pruning

Periodically (during Dream Task), review skills:

  • Skills unused for 30+ days → candidate for archival
  • Skills with overlapping triggers → merge
  • Skills that have been superseded → mark deprecated
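
The 30-day archival rule can be approximated with `find -mtime`, using file modification time as a proxy for last use (a rough assumption; actual usage tracking would be more accurate). The `stale_skills` helper name is hypothetical:

```shell
# Hypothetical helper: list SKILL.md files not modified in 30+ days,
# as candidates for archival. Assumes find supports -mtime (POSIX).
stale_skills() {
  local dir="${1:-$HOME/.openclaw/skills}"
  find "$dir" -name SKILL.md -mtime +30 -print 2>/dev/null
}
```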

Anti-Patterns

| Don't | Why | Do Instead |
|---|---|---|
| Distill every task | Skill bloat, noise drowns signal | Apply the 3-question gate |
| Include conversation history | Wastes tokens, not reusable | Extract only the workflow pattern |
| Write vague gotchas | "Be careful" helps no one | Specific: "API X returns 429 after 3 concurrent requests" |
| Hardcode paths/names | Not portable | Use <parameter> placeholders |
| Skip quality check | Garbage skills waste future context | Always verify before saving |

Integration with Memory System

Distillation complements, not replaces, the memory system:

| Output | Goes to | When |
|---|---|---|
| Reusable workflow | ~/.openclaw/skills/<slug>/SKILL.md | Novel + Successful + Reusable |
| Lesson learned | memory/lessons-learned.md | Successful but one-off, or failed |
| Quick note | memory/YYYY-MM-DD.md | Routine observations |
| Core insight | MEMORY.md | Fundamental principle change |
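
The routing above can be sketched as a lookup from outcome to destination file. `route_output` is a hypothetical helper; `<slug>` is left as the placeholder this document uses:

```shell
# Hypothetical helper: map a distillation outcome to the
# destination file named in this document.
route_output() {
  case "$1" in
    reusable-workflow) echo "$HOME/.openclaw/skills/<slug>/SKILL.md" ;;
    lesson-learned)    echo "memory/lessons-learned.md" ;;
    quick-note)        echo "memory/$(date +%F).md" ;;   # YYYY-MM-DD
    core-insight)      echo "MEMORY.md" ;;
    *)                 return 1 ;;
  esac
}
```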
