Session Distiller
v0.5.1
Batch-distill completed and live OpenClaw session transcripts into structured daily memory files. Two components: distill.py (batch + live session distillati...
by @pjmorr
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign
medium confidence

Purpose & Capability
The name/description (session distillation) match the code and SKILL.md: scripts read OpenClaw session JSONL files, run local LLM distillation calls, append to daily memory files, and optionally send Telegram alerts and auto-distill live sessions. Required tools (trash CLI, openclaw CLI, LiteLLM endpoint) are consistent with the described functionality.
Instruction Scope
Instructions and code operate on ~/.openclaw session files, call a configured LLM endpoint (default http://localhost:4000), invoke the openclaw CLI for gateway status, and use curl to send Telegram alerts. These behaviors are expected for the stated purpose, but they mean session content is read and sent to an LLM endpoint, so verify that the endpoint is local or trusted. The context-gate also reads live JSONL files to estimate tokens (expected but privacy-relevant).
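Since session content is posted to whatever endpoint is configured, a quick pre-flight check can confirm the URL actually resolves to the local host. This is a minimal sketch; endpoint_is_local is a hypothetical helper for review purposes, not part of the skill's scripts:

```python
from urllib.parse import urlparse

# Hosts we treat as local; any other host means transcripts leave the machine.
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def endpoint_is_local(url: str) -> bool:
    """Return True if the LLM endpoint URL points at the local host."""
    host = urlparse(url).hostname
    return host in LOCAL_HOSTS

print(endpoint_is_local("http://localhost:4000"))        # → True (the skill's default)
print(endpoint_is_local("https://api.example.com/v1"))   # → False (off-host)
```

Running this against your configured endpoint before enabling the skill makes the privacy boundary explicit.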
Install Mechanism
No install spec; this is instruction-plus-scripts only. There are no remote downloads or package installs embedded in the skill bundle, so nothing arbitrary is fetched during install. Risk from install mechanism is low.
Credentials
SKILL.md documents CONTEXT_WARN_PCT, CONTEXT_HARD_PCT, BOT_TOKEN, and CHAT_ID. These are proportional to the stated functionality: the Telegram bot token and chat ID are required only for alerts. Registry metadata indicates no required env vars, a minor mismatch (BOT_TOKEN/CHAT_ID are optional at runtime but needed for alerting). No unrelated credentials (cloud keys, etc.) are requested.
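The opt-in nature of the credentials can be expressed as a simple gate: alerts fire only when both variables are set. This is a sketch of that check, assuming the scripts read the variables from the environment; the actual gating logic in distill.py may differ:

```python
import os

def alerts_enabled(env=os.environ) -> bool:
    """Alerts are opt-in: both Telegram credentials must be present."""
    return bool(env.get("BOT_TOKEN")) and bool(env.get("CHAT_ID"))

# With neither variable set, no alert traffic leaves the host.
print(alerts_enabled({}))                                          # → False
print(alerts_enabled({"BOT_TOKEN": "123:abc", "CHAT_ID": "42"}))   # → True
```

Leaving both variables unset is the simplest way to guarantee no outbound Telegram traffic.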
Persistence & Privilege
The skill writes local runtime state files (offsets.json, gate-state.json, ingested JSONs) and appends to memory/YYYY-MM-DD.md as intended. always:false (not force-installed). It spawns subprocesses (curl, openclaw, python) which is expected. Note: context-gate has a hardcoded openclaw path (/opt/homebrew/bin/openclaw) and requires macOS 'trash' CLI — brittle but not malicious.
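Because of the hardcoded paths noted above, a prerequisite check before first run avoids silent failures on non-macOS systems. A minimal sketch, assuming the two dependencies named in this report (the tool names and the /opt/homebrew path come from the scan, not from running the skill):

```python
import os
import shutil

# Tools the report flags as assumed; adjust paths on non-macOS systems.
REQUIRED = {
    "trash": lambda: shutil.which("trash") is not None,
    "openclaw": lambda: os.access("/opt/homebrew/bin/openclaw", os.X_OK),
}

def missing_tools(checks=REQUIRED) -> list:
    """Return the names of tools that are absent, so paths can be fixed first."""
    return [name for name, present in checks.items() if not present()]

print(missing_tools())  # e.g. ['trash'] on a Linux host
```

An empty list means the skill's external-tool assumptions hold on this machine.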
Assessment
Before installing, verify these operational and privacy points:
- LLM endpoint: By default the skill posts transcripts to http://localhost:4000. Confirm that endpoint is local and trusted. If you point it to a remote third-party LLM, session contents (possibly sensitive) will be transmitted off-host.
- Telegram alerts: BOT_TOKEN and CHAT_ID enable alerts; only set them if you want outbound notifications. Treat the bot token as a secret and rotate/revoke it if exposed. The skill uses curl to call api.telegram.org with the token in the URL — that's how the bot API works.
- Files touched: The scripts read ~/.openclaw/agents/main/sessions/*, may read active .jsonl files, append distilled content to memory/YYYY-MM-DD.md, and can move/trash processed session files. If you have sensitive content in sessions, expect it to be persisted into daily memory files.
- Live allowlist & offsets: Review LIVE_ALLOWLIST_KEYS to ensure only approved group sessions are processed live. The skill persists offsets.json and gate-state.json in the skill directory.
- Platform/tooling: The skill assumes macOS (trash CLI) and calls /opt/homebrew/bin/openclaw by default; update those paths or dependencies on other systems.
- Test in dry-run mode first: Use --dry-run for distill.py and context-gate.py to see what would be processed and what alerts would be sent without writing/trashing files or sending messages.
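The allowlist point above reduces to a membership check on session keys. LIVE_ALLOWLIST_KEYS is shown here as a set of strings, which is an assumed shape; the real format in the skill's configuration may differ:

```python
# Hypothetical shape of the skill's LIVE_ALLOWLIST_KEYS: a set of session keys.
LIVE_ALLOWLIST_KEYS = {"group:family", "group:ops"}

def should_distill_live(session_key: str, allowlist=LIVE_ALLOWLIST_KEYS) -> bool:
    """Only sessions explicitly on the allowlist are processed live."""
    return session_key in allowlist

print(should_distill_live("group:family"))   # → True
print(should_distill_live("dm:unknown"))     # → False
```

Reviewing the allowlist amounts to confirming every key in it is a session you are comfortable having read and distilled automatically.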
If you validate that the LLM endpoint is local and trusted, avoid setting BOT_TOKEN unless you need alerts, and review the allowlist, the skill appears coherent and appropriate for its stated purpose.
latest · vk978vfqd3mj7bg8dggny0dcqgx8321vh
