procedural-distiller

v1.0.0

Distill successful multi-step OpenClaw sessions into reusable learned skills before compaction. Use when a task involved many tool calls, environment setup,...

by 曹广雨 (@xiaocaijic)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for xiaocaijic/procedural-distiller.

Prompt Preview: Install & Setup
Install the skill "procedural-distiller" (xiaocaijic/procedural-distiller) from ClawHub.
Skill page: https://clawhub.ai/xiaocaijic/procedural-distiller
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install procedural-distiller

ClawHub CLI


npx clawhub@latest install procedural-distiller
Security Scan

VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description match the included SKILL.md and distill_logic.py. Required capabilities (reading a trace, extracting events, writing skill files) align with the stated goal. No unrelated credentials, binaries, or install steps are requested.
Instruction Scope
The runtime instructions and the script read a user-supplied trace JSON and write a learned skill (SKILL.md, agents/openai.yaml, memory.json) under a skills/learned/ directory. This is expected, but the workflow explicitly preserves concrete commands, file paths, and code snippets from the trace without sanitization, which could capture and persist sensitive data (secrets, credentials, absolute paths).
Install Mechanism
No install spec; this is an instruction-only skill with an included Python script. That is low-risk—nothing is downloaded or executed automatically beyond running the provided script locally.
Credentials
The skill requests no environment variables or credentials (proportional). However, because it serializes whatever is in the trace into persistent files, it may store sensitive environment-derived data present in traces. The absence of declared credentials is appropriate, but users should verify trace contents before distillation.
Persistence & Privilege
The 'always' flag is false, and the skill only writes files into an output-root learned-skill directory (default 'skills/learned/...'). It does not modify other skills or system-wide agent settings, and it does not request permanent platform privileges.
Assessment
This skill does exactly what it says: it reads a provided trace JSON and writes a learned skill (SKILL.md, agents/openai.yaml, memory.json) containing concrete commands, file paths, and snippets. Before running: (1) inspect the trace for secrets, credentials, or sensitive paths and redact them if present; (2) run the script in a controlled environment and set --output-root to a directory you control; (3) consider lowering --max-events or raising --min-tool-calls to limit persisted data; (4) review generated files before sharing or committing them to version control. The tool does not exfiltrate data over the network, but it will persist whatever is in the trace—treat that output as potentially sensitive.
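The pre-run trace inspection recommended above can be partly automated with a pattern scan. A minimal sketch, not part of the skill itself; the patterns and the scan_trace_for_secrets helper are illustrative assumptions:

```python
import json
import re

# Illustrative patterns only (assumption); extend for your environment.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                   # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                # GitHub personal access token
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"), # PEM private key header
]

def scan_trace_for_secrets(trace: dict) -> list[str]:
    """Return suspicious substrings found anywhere in the serialized trace."""
    blob = json.dumps(trace)
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(blob))
    return hits
```

If the scan returns anything, redact those events before running the distiller.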

Like a lobster shell, security has layers — review code before you run it.

latest: vk97ehnshkmrgvzppg3h76n1wgn837x75
138 downloads
0 stars
1 version
Updated 1 mo ago
v1.0.0
MIT-0

Procedural Distiller

Use this skill after a task has succeeded and the session contains enough signal to preserve. The goal is to extract procedural knowledge, not to summarize the conversation.

Triggering Rules

Run the distillation flow when all of the following are true:

  1. The task is finished successfully.
  2. The session contains at least 5 relevant tool calls across exec, read, write, edit, or browser.
  3. At least one of these applies:
    • The user explicitly asks to remember the workflow.
    • The task involved environment setup, debugging, or a multi-step repair.
    • Compaction risk is high and the trace contains concrete parameters worth preserving.

Do not run while the task is still active. Skip trivial sessions dominated by a single read or a one-line answer.
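The three rules can be folded into a single gate function. A sketch, assuming a trace shape with a top-level success flag and an events list (these field names are assumptions, not the skill's actual schema):

```python
RELEVANT_TOOLS = {"exec", "read", "write", "edit", "browser"}

def should_distill(trace: dict,
                   user_asked: bool = False,
                   multi_step: bool = False,
                   compaction_risk: bool = False) -> bool:
    """Apply the triggering rules: finished task, at least 5 relevant
    tool calls, and at least one of the three situational conditions."""
    if not trace.get("success"):                         # rule 1
        return False
    relevant = [e for e in trace.get("events", [])
                if e.get("tool") in RELEVANT_TOOLS]
    if len(relevant) < 5:                                # rule 2
        return False
    return user_asked or multi_step or compaction_risk   # rule 3
```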

Distillation Workflow

  1. Read the recent trace and keep only relevant tool events.
  2. Separate successful steps from failed attempts.
  3. Preserve concrete commands, file paths, parameter values, and code snippets that materially contributed to the outcome.
  4. Convert the result into three sections:
    • Success Pattern
    • Failure Triggers
    • Snippets
  5. Write a learned skill under skills/learned/learned-<task-slug>/.
  6. Persist a utility score and source metadata in memory.json.

Read references/trace-format.md only if the incoming trace shape is unclear.
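Put together, the six steps reduce to a short pipeline. A hedged sketch only: the event fields (tool, status, command, error) and the exact file layout are assumptions about what distill_logic.py does, not a copy of it:

```python
import json
from pathlib import Path

RELEVANT_TOOLS = {"exec", "read", "write", "edit", "browser"}

def distill(trace: dict, task_slug: str, output_root: Path) -> Path:
    """Steps 1-6: filter events, split by outcome, render sections, write files."""
    events = [e for e in trace["events"] if e.get("tool") in RELEVANT_TOOLS]  # step 1
    ok = [e for e in events if e.get("status") == "success"]                 # step 2
    failed = [e for e in events if e.get("status") != "success"]

    sections = (                                                             # steps 3-4
        "## Success Pattern\n"
        + "\n".join(e.get("command", "") for e in ok)
        + "\n\n## Failure Triggers\n"
        + "\n".join(e.get("error", "") for e in failed)
        + "\n\n## Snippets\n"
    )

    skill_dir = output_root / "learned" / f"learned-{task_slug}"             # step 5
    (skill_dir / "agents").mkdir(parents=True, exist_ok=True)
    (skill_dir / "SKILL.md").write_text(
        f"---\nname: learned-{task_slug}\n"
        f"description: Distilled from a successful session\n---\n\n{sections}"
    )
    (skill_dir / "agents" / "openai.yaml").write_text(f"name: learned-{task_slug}\n")
    (skill_dir / "memory.json").write_text(                                  # step 6
        json.dumps({"utility_score": 3, "source_task": task_slug}, indent=2)
    )
    return skill_dir
```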

Execution Notes

  • Favor exact commands over abstract summaries.
  • Keep failed steps only when they teach a future agent what to avoid.
  • Collapse repetitive probes into one representative line.
  • If a write or edit step changed the final behavior, include the smallest useful snippet.
  • If a trace already contains a user rating, store it. Otherwise default to 3 and let a future caller update memory.json.
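Collapsing repetitive probes (the third note) can be as simple as deduplicating consecutive commands that share a leading token. A toy sketch, assuming non-empty command strings:

```python
def collapse_probes(commands: list[str]) -> list[str]:
    """Collapse runs of consecutive probes that start with the same tool
    (e.g. five `ls` variants) into one representative line: the last one."""
    collapsed: list[str] = []
    for cmd in commands:
        head = cmd.split()[0]
        if collapsed and collapsed[-1].split()[0] == head:
            collapsed[-1] = cmd  # keep the latest, usually the one that worked
        else:
            collapsed.append(cmd)
    return collapsed
```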

Local CLI

Run the bundled script directly:

python distill_logic.py --trace /path/to/trace.json --task "repair build cache" --output-root /path/to/skills

Useful flags:

  • --utility-score 4
  • --learned-root learned
  • --min-tool-calls 5
  • --max-events 20
  • --force
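The flags above map naturally onto an argparse parser. This mirror is a reconstruction for reference only; the defaults shown here are assumptions, and the real distill_logic.py may differ:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Parser mirroring the documented flags; defaults are assumptions."""
    p = argparse.ArgumentParser(prog="distill_logic.py")
    p.add_argument("--trace", required=True, help="path to the session trace JSON")
    p.add_argument("--task", required=True, help="short task label used for the slug")
    p.add_argument("--output-root", required=True, help="directory that holds learned skills")
    p.add_argument("--utility-score", type=int, default=3)
    p.add_argument("--learned-root", default="learned")
    p.add_argument("--min-tool-calls", type=int, default=5)
    p.add_argument("--max-events", type=int, default=20)
    p.add_argument("--force", action="store_true", help="overwrite an existing learned skill")
    return p
```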

Output Contract

The generated learned skill must contain:

  • SKILL.md with valid frontmatter (name, description) and procedural sections
  • agents/openai.yaml for UI metadata
  • memory.json with utility_score, source task details, and generation metadata
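The contract can be verified mechanically before a generated skill is shared or committed. A sketch; the required-key check beyond utility_score is kept deliberately minimal:

```python
import json
from pathlib import Path

REQUIRED_FILES = ("SKILL.md", "agents/openai.yaml", "memory.json")

def check_contract(skill_dir: Path) -> list[str]:
    """Return the missing pieces of the output contract, empty when satisfied."""
    problems = [rel for rel in REQUIRED_FILES if not (skill_dir / rel).exists()]
    memory = skill_dir / "memory.json"
    if memory.exists():
        data = json.loads(memory.read_text())
        if "utility_score" not in data:
            problems.append("memory.json: utility_score")
    return problems
```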

Stop Conditions

Stop and do not emit a learned skill when:

  • the trace is marked unsuccessful
  • there are too few relevant tool calls
  • the trace lacks enough detail to reconstruct a reusable procedure
