Chat Learnings Extractor

v1.0.2

Extract structured learnings (lessons, decisions, patterns, dead ends) from AI conversation exports using a local Ollama model or any OpenAI-compatible API.

by Deonte Cooper (@djc00p)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description match the included files and behavior: parsers for OpenAI/Anthropic exports, extraction logic, and optional use of a local Ollama model or OpenAI-compatible API. Required binary (python3) is appropriate. Nothing in the code or SKILL.md requests unrelated cloud provider credentials or unrelated system-level access.
Instruction Scope
Instructions and scripts operate on user-provided export files and write extracted summaries to the workspace (memory/semantic/learnings-from-exports.md) and maintain a .processed_ids dedupe file. The skill transmits conversation summaries to a model endpoint (local Ollama by default, or a remote OpenAI-compatible API if OPENAI_API_KEY/OPENAI_BASE_URL or custom OLLAMA_BASE_URL are set). That network behavior is expected for a summarization/extraction tool but is the primary privacy consideration: conversation contents will be sent to whichever model endpoint is used.
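The endpoint selection described above can be sketched as environment configuration. The variable names come from the listing; the default Ollama address and the precedence shown (OpenAI settings over Ollama) are assumptions, not verified behavior of the skill:

```shell
# Default: a local Ollama instance; no variables required.
# Redirect to a different Ollama server (assumed default port 11434):
export OLLAMA_BASE_URL="http://localhost:11434"

# Or use a remote OpenAI-compatible API instead (hypothetical values):
export OPENAI_API_KEY="sk-example"
export OPENAI_BASE_URL="https://api.example.com/v1"

# Whichever endpoint wins receives the conversation summaries.
echo "endpoint: ${OPENAI_BASE_URL:-${OLLAMA_BASE_URL:-http://localhost:11434}}"
```

Whatever address resolves here is where your conversation content goes, which is why the review treats it as the main privacy consideration.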
Install Mechanism
This is an instruction-only skill with Python scripts; there is no installer, third-party package download, or archive extraction. Risk from install mechanism is low.
Credentials
The registry lists no required env vars (none are mandatory), and the skill optionally uses OPENAI_API_KEY, OPENAI_BASE_URL, OLLAMA_BASE_URL, and OPENCLAW_WORKSPACE. Those variables are proportional to the feature set (choosing local vs remote model, API key for remote models, workspace path). One minor mismatch: the registry metadata declared no required env/config paths, while the skill reads OPENCLAW_WORKSPACE (optional) and writes to a workspace path — this is expected but worth noting.
Persistence & Privilege
always:false and normal autonomous invocation. The skill writes outputs and a .processed_ids file into the user's workspace (or ~/.openclaw/workspace by default). It does not request to modify other skills or system-wide settings.
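The dedupe behavior described above can be illustrated with a minimal sketch. The actual logic lives in scripts/extract.py; the one-ID-per-line file format and the sample conversation ID here are assumptions for illustration only:

```shell
# Workspace resolution mirrors the listing: OPENCLAW_WORKSPACE, else the default.
WORKSPACE="${OPENCLAW_WORKSPACE:-$HOME/.openclaw/workspace}"
PROCESSED="$WORKSPACE/.processed_ids"
mkdir -p "$WORKSPACE" && touch "$PROCESSED"

conv_id="conv-abc123"   # hypothetical conversation ID
if grep -qxF "$conv_id" "$PROCESSED"; then
  echo "skip: $conv_id"            # already summarized in a previous run
else
  # ...extraction would run here, appending to learnings-from-exports.md...
  echo "$conv_id" >> "$PROCESSED"  # record the ID so reruns skip it
  echo "processed: $conv_id"
fi
```

Deleting .processed_ids would cause previously seen conversations to be re-sent to the model endpoint on the next run.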
Assessment
This skill does what it says: it parses conversation export JSON files and sends condensed summaries to a model (local Ollama by default, or a remote OpenAI-compatible endpoint if you set OPENAI_API_KEY / OPENAI_BASE_URL or change OLLAMA_BASE_URL). Before installing or running it:

1. Review and trust the model endpoint you will use. Any endpoint you point it to will receive conversation excerpts, so don't set OLLAMA_BASE_URL or OPENAI_BASE_URL to an untrusted server.
2. Be aware it writes outputs and a .processed_ids dedupe file to your OpenClaw workspace (defaults to ~/.openclaw/workspace); make sure that location is acceptable.
3. The listing supplied for review truncated the tail of scripts/extract.py, so inspect the complete extract.py in the package (especially the load/save logic) before granting broad access or running it on sensitive conversations.

If you plan to use a remote API, limit the data you send (use --limit or --dry-run for testing) and avoid exporting chats that contain secrets or PII unless you trust the destination.
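The first recommendation above (verify the endpoint before a real run) can be turned into a quick pre-flight check. The fallback address is an assumption, and the local/remote split is a simple heuristic, not part of the skill itself:

```shell
# Resolve the endpoint the skill would use and flag anything non-local.
endpoint="${OPENAI_BASE_URL:-${OLLAMA_BASE_URL:-http://localhost:11434}}"
case "$endpoint" in
  http://localhost*|http://127.0.0.1*)
    echo "local endpoint: $endpoint" ;;
  *)
    echo "REMOTE endpoint: $endpoint (conversation excerpts will leave this machine)" ;;
esac
```

Running this before pointing the skill at sensitive exports makes it obvious whether summaries stay on your machine.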

Like a lobster shell, security has layers — review code before you run it.

latest: vk977bcy6zfbsbz6t3kjekq36j184ttvj


Runtime requirements

🧠 Clawdis
OS: Linux · macOS · Windows
Bins: python3
