Skill Distiller

v0.2.1

Fit more skills in your context window — compress without losing what matters.

by Lee Brown (@leegitw)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for leegitw/neon-skill-distiller.

Prompt preview (Install & Setup):
Install the skill "Skill Distiller" (leegitw/neon-skill-distiller) from ClawHub.
Skill page: https://clawhub.ai/leegitw/neon-skill-distiller
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install neon-skill-distiller

ClawHub CLI


npx clawhub@latest install neon-skill-distiller
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name and description (skill compression/distillation) align with the provided SKILL.md, reference docs, and test fixtures. Required capabilities (parsing markdown, scoring sections, producing compressed output) are consistent with what is present; no unrelated cloud or system access is demanded in metadata or files.
Instruction Scope
SKILL.md instructs reading/parsing skill markdown files provided by the user and producing compressed output — expected. It also documents provider detection (ollama, GEMINI_API_KEY, OPENAI_API_KEY), and describes writing calibration data to .learnings/skill-distiller/calibration.jsonl. These actions are within the skill's purpose but do include persistent local writes and runtime checks for local tooling (ollama).
Install Mechanism
This is an instruction-only skill (no install spec). There are no downloads, package installs, or extract operations. The only executable artifact is an included test_integration.sh for manual testing; it does not run automatically.
Credentials
Registry metadata declares no required env vars, but the SKILL.md and README describe probing for ollama and optionally using GEMINI_API_KEY or OPENAI_API_KEY for cloud inference. That is reasonable for an LLM-driven tool, but the skill did not declare those env vars in its registry metadata — users should be aware the runtime will check/use any provider credentials their agent has configured.
Persistence & Privilege
The skill documents saving calibration entries to .learnings/skill-distiller/calibration.jsonl (append, with rotation). This is reasonable for calibration but is a persistent write to the host filesystem. always:false and disable-model-invocation:true reduce autonomous risk; the skill does not request system-wide config changes or other skills' credentials.
Assessment
This skill appears to do what it says: compress skill documents. Before installing, consider these points:

- The skill will read skill markdown files you point it at (expected), and it documents writing calibration data to .learnings/skill-distiller/calibration.jsonl in the host environment. If you prefer no on-disk traces, plan to monitor or clean that path.
- The docs show it will prefer a local ollama model or fall back to GEMINI_API_KEY / OPENAI_API_KEY if set. Those provider env vars are not listed in the registry metadata; if you have cloud keys configured, the skill's runtime may use them for LLM calls. This is normal for LLM-based tools, but verify you are comfortable with the agent using your configured provider.
- There is no installer that fetches external code, and no unrelated credentials or network endpoints are embedded in the files, so risk from supply-chain downloads is low.
- disable-model-invocation is true (the skill is not allowed to autonomously invoke the model), which reduces autonomous-behavior risk. The included test script references ollama usage but is only for manual testing.

If you want extra caution: review or sandbox the first run, inspect .learnings after usage, and ensure any provider keys you use are scoped appropriately.

Like a lobster shell, security has layers — review code before you run it.

Tags: compression, context-window, formula, latest, metaglyph, openclaw, optimization, skills, token-reduction
82 downloads
0 stars
3 versions
Updated 1w ago
v0.2.1
MIT-0

Skill Distiller

Compress verbose skills to reduce context window usage. This skill is self-compressed using formula notation (~400 tokens, ~90% functionality, LLM-estimated). Full reference version: SKILL.reference.md.

Note: This skill uses formula notation — the LLM executes these operations directly. You don't need to understand the math. For prose explanation, see SKILL.reference.md.

Legend

S = {TRIGGER, CORE, CONSTRAINT, OUTPUT, EXAMPLE, EDGE, EXPLAIN, VERBOSE}
I(s) ∈ [0,1]        # importance score
P = {yaml.name, yaml.desc, N-count, task-create, checkpoint, BEFORE/AFTER}  # protected
θ ∈ [0,1]           # threshold (default 0.9)
n ∈ ℕ               # target tokens
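Read as code, the legend could correspond to a data model like the following. This is a hypothetical sketch for orientation only; the skill operates on these concepts via the LLM, not an actual schema.

```python
from dataclasses import dataclass

# The 8 section types in S.
SECTION_TYPES = {"TRIGGER", "CORE", "CONSTRAINT", "OUTPUT",
                 "EXAMPLE", "EDGE", "EXPLAIN", "VERBOSE"}

# Protected patterns P: sections matching these are never dropped.
PROTECTED = {"yaml.name", "yaml.desc", "N-count",
             "task-create", "checkpoint", "BEFORE/AFTER"}

@dataclass
class Section:
    text: str
    type: str          # one of SECTION_TYPES
    importance: float  # I(s) in [0, 1]
    protected: bool = False

THETA_DEFAULT = 0.9    # θ, the keep threshold
```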

Operations

compress(skill, θ)

∀s ∈ skill: type(s) → S, score(s) → I(s)
s ∈ P ⇒ I(s) := max(I(s), 0.85)
keep = {s | I(s) ≥ θ ∨ s ∈ P}
output = (skill[keep], Σ I(keep)/|S|, |skill| - |keep|)
# Score divides by |S| (8 types), not |keep| — rewards diverse section coverage
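The compress operation above can be sketched in Python. The dict keys ("importance", "protected", "type") are a hypothetical schema; the skill itself performs these steps through the LLM, not code.

```python
def compress(sections, theta=0.9):
    """Sketch of compress(skill, θ): protected sections get an
    importance floor of 0.85, then keep s where I(s) >= θ or s in P."""
    for s in sections:
        if s["protected"]:
            s["importance"] = max(s["importance"], 0.85)
    keep = [s for s in sections
            if s["importance"] >= theta or s["protected"]]
    # Coverage divides by the 8 section types, not |keep|, so output
    # spanning many types scores higher than many sections of one type.
    coverage = sum(s["importance"] for s in keep) / 8
    removed = len(sections) - len(keep)
    return keep, coverage, removed
```

For example, a VERBOSE section scored 0.3 is dropped at θ = 0.9, while a protected CORE section scored 0.7 is floored to 0.85 and kept regardless of the threshold.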

compress_tokens(skill, n)

min_tokens = |{s | type(s) ∈ {TRIGGER, CORE}}|
n < min_tokens ⇒ summarize(skill) → n
n ≥ min_tokens ⇒ compress(skill, θ) where |output| ≤ n
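A sketch of the token-budget variant, using the doc's own 4-chars-per-token heuristic. The threshold ladder and dict schema are illustrative assumptions, not the skill's actual mechanism.

```python
def approx_tokens(text):
    # 4 chars/token heuristic used elsewhere in this doc (±20%).
    return max(1, len(text) // 4)

def compress_tokens(sections, n):
    """Sketch of compress_tokens(skill, n): if n is below what the
    TRIGGER and CORE sections alone need, fall back to summarizing;
    otherwise raise the threshold θ until the kept set fits in n."""
    core = [s for s in sections if s["type"] in {"TRIGGER", "CORE"}]
    min_tokens = sum(approx_tokens(s["text"]) for s in core)
    if n < min_tokens:
        return "summarize"  # placeholder for summarize(skill) → n
    for theta in (0.5, 0.7, 0.9, 0.95, 1.01):
        kept = [s for s in sections
                if s["importance"] >= theta
                or s["type"] in {"TRIGGER", "CORE"}]
        if sum(approx_tokens(s["text"]) for s in kept) <= n:
            return kept
    return core
```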

oneliner(skill)

output = "TRIGGER: " + extract(skill, TRIGGER) +
         "\nACTION: " + extract(skill, CORE) +
         "\nRESULT: " + extract(skill, OUTPUT)

recomp(examples, coverage_target=0.8)

scored = [(e, pattern_coverage(e), uniqueness(e)) | e ∈ examples]
selected = top(scored, n=2, by=coverage × uniqueness)
coverage(selected) ≥ 0.8 ⇒ phase1
  output = selected ∪ {trigger(e) → result(e) | e ∈ examples \ selected}
coverage(selected) < 0.8 ⇒ phase2
  output = synthesize(examples) → single_example
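The two-phase example recompression can be sketched as follows. The example dicts (patterns, uniqueness, trigger, result) are a hypothetical representation of what pattern_coverage and uniqueness would score.

```python
def recomp(examples, coverage_target=0.8):
    """Sketch of recomp: rank examples by pattern coverage x uniqueness,
    keep the top 2; if they jointly reach the coverage target, reduce
    the rest to trigger -> result stubs (phase 1), else fall back to
    synthesizing a single example (phase 2)."""
    all_patterns = set().union(*(e["patterns"] for e in examples))
    def score(e):
        coverage = len(e["patterns"]) / len(all_patterns)
        return coverage * e["uniqueness"]
    ranked = sorted(examples, key=score, reverse=True)
    selected = ranked[:2]
    covered = set().union(*(e["patterns"] for e in selected))
    if len(covered) / len(all_patterns) >= coverage_target:
        stubs = [f"{e['trigger']} -> {e['result']}" for e in ranked[2:]]
        return {"phase": 1, "selected": selected, "stubs": stubs}
    return {"phase": 2, "action": "synthesize"}
```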

token_score(section) — for type ∈ {EXAMPLE, EDGE, EXPLAIN, VERBOSE}

∀phrase ∈ section:
  self_info(phrase) = -log(P(phrase|context))
  high_info ⇒ KEEP, low_info ⇒ PRUNE
prune while preserving sentence structure
>50% low_info ⇒ remove entire section
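The self-information pruning rule can be sketched like this. `prob` is a hypothetical callable estimating P(phrase | context), and the 3-bit keep threshold is an illustrative choice, not a value from the skill.

```python
import math

def token_score_prune(phrases, prob, threshold_bits=3.0):
    """Sketch of token_score for EXAMPLE/EDGE/EXPLAIN/VERBOSE sections:
    a phrase's self-information is -log2 P(phrase | context); high-info
    phrases are kept, low-info phrases pruned, and a section that is
    mostly low-information is removed entirely."""
    kept, pruned = [], []
    for p in phrases:
        self_info = -math.log2(prob(p))
        (kept if self_info >= threshold_bits else pruned).append(p)
    if len(pruned) > len(phrases) / 2:
        return []          # >50% low-info: drop the whole section
    return kept
```

A highly predictable phrase (P = 0.9 carries about 0.15 bits) is pruned; a surprising one (P = 0.01 carries about 6.6 bits) is kept.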

Symbols (MetaGlyph)

Symbol  Meaning
→       results in, maps to
⇒       implies, therefore
∈       element of, in
∀       for all
¬       not
∧       and
∨       or
:=      assign

Invocation

/skill-distiller path --threshold=0.9  →  compress(skill, 0.9)
/skill-distiller path --tokens=500     →  compress_tokens(skill, 500)
/skill-distiller path --mode=oneliner  →  oneliner(skill)
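The flag-to-operation mapping above could be parsed as follows; this dispatcher is an illustrative sketch, not the skill's actual implementation.

```python
def dispatch(args):
    """Map /skill-distiller arguments (after the command name) to
    an (operation, path, argument) tuple."""
    path, *flags = args
    for flag in flags:
        if flag.startswith("--threshold="):
            return ("compress", path, float(flag.split("=", 1)[1]))
        if flag.startswith("--tokens="):
            return ("compress_tokens", path, int(flag.split("=", 1)[1]))
        if flag == "--mode=oneliner":
            return ("oneliner", path, None)
    return ("compress", path, 0.9)  # default threshold
```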

Errors

Condition         Response
skill = ∅         "No content"
¬∃ yaml.name      "Add frontmatter"
n < min_tokens    "Summarizing..."

Variants

Variant       Tokens   Functionality
main (this)   ~400     ~90% (formula)
compressed    ~975     ~90% (prose)
oneliner      ~100     ~70%

Full reference: SKILL.reference.md (~2,500 tokens, ~90%)

Token counts use 4 chars/token heuristic (+/-20%). Functionality scores are LLM-estimated.
