Knowledge Distillation

v1.0.1

Distill OpenClaw daily memory, session transcripts, and newly generated report files into new knowledge points and deeper knowledge leads. Use when the input...

by haidong@harrylabsj

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw to install harrylabsj/knowledge-distillation.

Prompt Preview: Install & Setup
Install the skill "Knowledge Distillation" (harrylabsj/knowledge-distillation) from ClawHub.
Skill page: https://clawhub.ai/harrylabsj/knowledge-distillation
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line


Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install harrylabsj/knowledge-distillation

ClawHub CLI


npx clawhub@latest install knowledge-distillation
Security Scan
VirusTotal
Benign
OpenClaw
Benign (high confidence)
Purpose & Capability
The skill's name/description match its behavior: it reads OpenClaw workspace memory and report files and creates a dated Markdown distillation. Minor oddity: the helper script uses a hard-coded default path (/Users/jianghaidong/.openclaw/...), which is author-specific but not inconsistent with the stated purpose.
Instruction Scope
SKILL.md explicitly instructs the agent to operate on agent-native materials (MEMORY.md, memory/*.md, session logs, reports) and to produce a dated Markdown file. It does not instruct network exfiltration or reading unrelated system files. Reading internal conversation logs is expected for this skill's purpose and should be considered sensitive but in-scope.
Install Mechanism
No install spec. The skill is instruction-first and includes two small shell scripts. No external downloads or package installs are requested.
Credentials
The skill declares no environment variables, no credentials, and no special config paths. The only file-access implication is reading workspace files (as intended).
Persistence & Privilege
The skill does not request always:true and does not attempt to modify other skills or system-wide settings. It writes outputs to a local dist/ directory by default (or to the provided output-dir).
Assessment
This skill appears coherent and safe for its stated job, but it does read internal workspace files (memory, session logs, reports). Before running:

  1. Inspect the included scripts (scripts/distill.sh and scripts/test.sh).
  2. When invoking the script, pass an explicit memory directory if you do not want it to default to the hard-coded /Users/jianghaidong/... path.
  3. Be mindful that session transcripts and memory can contain sensitive data; limit who can run the skill and where outputs are stored.
  4. Ensure the output directory is appropriate and not world-readable if the distilled file will contain private information.
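
For example, a run that overrides the default path. Treating the memory directory and output directory as positional arguments is an assumption here; check scripts/distill.sh for its actual interface:

  bash scripts/distill.sh "$HOME/.openclaw/memory" ./dist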


latest: vk97cp834dsqg1fbhek9q028n4582vdf5
347 downloads · 1 star · 2 versions · Updated 1 month ago
v1.0.1
MIT-0

Knowledge Distillation

Overview

This skill is an OpenClaw internal knowledge distiller.

Its job is not to summarize everything. Its job is to scan agent-native working materials, identify what is newly learned, and separate that from what should be investigated, connected, or strengthened next.

Input Scope

Use this skill when the source materials come from the OpenClaw environment, especially:

  • MEMORY.md
  • memory/*.md
  • session transcripts or conversation logs
  • newly generated report files
  • daily review notes
  • task summaries and execution reports

Treat these as raw internal learning material.
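
A minimal shell sketch of gathering that input set, assuming the workspace lives under ~/.openclaw and that reports land in a reports/ directory (the OPENCLAW_WORKSPACE variable and both paths are assumptions; adjust to your layout):

  WORKSPACE="${OPENCLAW_WORKSPACE:-$HOME/.openclaw}"
  # List whichever inputs exist; errors for missing paths are discarded
  ls -1 "$WORKSPACE/MEMORY.md" \
        "$WORKSPACE"/memory/*.md \
        "$WORKSPACE"/reports/*.md 2>/dev/null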

Core Objective

From the input set, produce two things:

  1. New Knowledge Points

    • information that now appears stable enough to retain
    • repeatable patterns, conclusions, heuristics, rules, or insights
    • decisions or lessons that deserve long-term reuse
  2. Knowledge Leads Worth Deepening

    • incomplete but promising patterns
    • recurring signals without enough confidence yet
    • tensions, contradictions, anomalies, or open questions
    • topics worth another round of observation, validation, or focused research

Workflow

1. Classify the source material

Identify what each input contributes:

  • long-term memory
  • recent memory
  • session/process evidence
  • generated report or analysis artifact

Do not treat all sources equally. Give more weight to repeated evidence across multiple sources.
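
A sketch of that classification as a small shell helper (the sessions/ and reports/ path conventions are assumptions about the workspace layout):

  # Map a file path to the kind of evidence it contributes
  classify() {
    case "$1" in
      */MEMORY.md)   echo "long-term memory" ;;
      */memory/*.md) echo "recent memory" ;;
      */sessions/*)  echo "session/process evidence" ;;
      */reports/*)   echo "report or analysis artifact" ;;
      *)             echo "unclassified" ;;
    esac
  }
  classify "$HOME/.openclaw/memory/2025-06-01.md"   # -> recent memory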

2. Extract candidate signals

Look for:

  • repeated observations
  • recurring user preferences
  • stable work rules
  • decision patterns
  • successful or failed workflows
  • bottlenecks that appear more than once
  • newly surfaced concepts or frameworks

Prefer signal over chronology.
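
One cheap recurrence check before promoting a signal is to count how many distinct source files mention it. A rough sketch, with a placeholder phrase and the same assumed workspace layout as above:

  WORKSPACE="${OPENCLAW_WORKSPACE:-$HOME/.openclaw}"
  PHRASE="prefers tables over prose"   # hypothetical candidate signal
  # -r recurse, -l list matching files, -i ignore case; count distinct files
  grep -rli "$PHRASE" "$WORKSPACE/memory" "$WORKSPACE/reports" 2>/dev/null | wc -l

Two or more distinct files across different days hints at stability; a single hit stays a lead, not knowledge.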

3. Distinguish stable knowledge from emerging leads

Promote something to New Knowledge Points only when at least one of these is true:

  • it appears repeatedly across days or sessions
  • it has already affected real decisions or behavior
  • it has clear reuse value
  • it is specific enough to guide future action

Keep something in Knowledge Leads Worth Deepening when:

  • evidence is partial
  • it shows potential but not enough stability
  • it conflicts with older observations
  • it needs targeted follow-up material

4. Merge duplicates and raise abstraction

Do not list near-duplicate observations separately.

Merge them upward into:

  • a principle
  • a rule of thumb
  • a workflow lesson
  • a reusable framework
  • a watchpoint for future review

5. Add explicit basis

Each knowledge point should include a short basis such as:

  • what kind of source supported it
  • whether it appeared once or repeatedly
  • whether it is high-confidence or tentative

Do not fabricate precision. Keep basis brief and honest. For example: "Basis: observed in three sessions this week; high confidence."

6. End with next-step deepening suggestions

For each deepen-able knowledge point, explain how to deepen it, for example:

  • keep observing for 3-7 more days
  • compare against older sessions
  • collect one more concrete case
  • convert into an explicit workflow rule
  • ask a targeted question next time
  • create a dedicated report around the topic

Output Requirements

The output must be a dated Markdown file.

Filename format:

  • knowledge-distillation-YYYY-MM-DD.md

If multiple runs happen on the same day, use one of:

  • knowledge-distillation-YYYY-MM-DD-01.md
  • knowledge-distillation-YYYY-MM-DD-02.md
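
A minimal sketch of choosing a non-colliding filename (writing into dist/ by default matches the behavior noted in the security review above):

  OUT_DIR="${OUT_DIR:-dist}"
  DATE="$(date +%F)"   # %F prints YYYY-MM-DD
  FILE="$OUT_DIR/knowledge-distillation-$DATE.md"
  n=1
  # If today's file already exists, try -01, -02, ... until a free name is found
  while [ -e "$FILE" ]; do
    FILE="$OUT_DIR/knowledge-distillation-$DATE-$(printf '%02d' "$n").md"
    n=$((n + 1))
  done
  echo "$FILE"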

Required Output Structure

Use this structure unless the user explicitly asks for another one:

# Knowledge Distillation - YYYY-MM-DD

## Input Summary
- Memory files:
- Session/log sources:
- Report files:

## New Knowledge Points
### 1. Title
- Conclusion:
- Basis:
- Value:
- Scope:

### 2. Title
- Conclusion:
- Basis:
- Value:
- Scope:

## Knowledge Leads Worth Deepening
### 1. Title
- Current observation:
- Why worth deepening:
- Current gaps:
- Next step suggestions:

### 2. Title
- Current observation:
- Why worth deepening:
- Current gaps:
- Next step suggestions:

## Distillation Conclusions This Round
- Most worth retaining (1-3 points):
- Most worth tracking (1-3 leads):

For reusable variants, read references/output-templates.md.

Quality Rules

  • Do not write a generic summary of the inputs.
  • Do not merely restate chronology.
  • Do not promote weak hints into firm knowledge.
  • Do not bury the “new knowledge” section under background detail.
  • Prefer fewer stronger points over many shallow points.
  • If nothing truly qualifies as new knowledge, say so honestly.

Good Trigger Examples

Use this skill for requests like:

  • “Distill the recent memory and session materials”
  • “Extract new knowledge points from the recent daily reports and conversations”
  • “Go through these report files and find what is worth retaining and deepening further”
  • “Distill OpenClaw's working materials from the past few days into knowledge”
  • “Output a knowledge-distillation md file for today”

Resources

references/

  • references/output-templates.md: dated Markdown output variants for standard runs, report-heavy runs, and follow-up runs
