Chat Learnings Extractor

v1.0.2

Extract structured learnings (lessons, decisions, patterns, dead ends) from AI conversation exports using a local Ollama model or any OpenAI-compatible API.

by Deonte Cooper (@djc00p)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw to install djc00p/chat-learnings-extractor.

Prompt preview: Install & Setup
Install the skill "Chat Learnings Extractor" (djc00p/chat-learnings-extractor) from ClawHub.
Skill page: https://clawhub.ai/djc00p/chat-learnings-extractor
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python3
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install chat-learnings-extractor

ClawHub CLI


npx clawhub@latest install chat-learnings-extractor
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description match the included files and behavior: parsers for OpenAI/Anthropic exports, extraction logic, and optional use of a local Ollama model or OpenAI-compatible API. Required binary (python3) is appropriate. Nothing in the code or SKILL.md requests unrelated cloud provider credentials or unrelated system-level access.
Instruction Scope
Instructions and scripts operate on user-provided export files and write extracted summaries to the workspace (memory/semantic/learnings-from-exports.md) and maintain a .processed_ids dedupe file. The skill transmits conversation summaries to a model endpoint (local Ollama by default, or a remote OpenAI-compatible API if OPENAI_API_KEY/OPENAI_BASE_URL or custom OLLAMA_BASE_URL are set). That network behavior is expected for a summarization/extraction tool but is the primary privacy consideration: conversation contents will be sent to whichever model endpoint is used.
Install Mechanism
This is an instruction-only skill with Python scripts; there is no installer, third-party package download, or archive extraction. Risk from install mechanism is low.
Credentials
The registry lists no required env vars (none are mandatory), and the skill optionally uses OPENAI_API_KEY, OPENAI_BASE_URL, OLLAMA_BASE_URL, and OPENCLAW_WORKSPACE. Those variables are proportional to the feature set (choosing local vs remote model, API key for remote models, workspace path). One minor mismatch: the registry metadata declared no required env/config paths, while the skill reads OPENCLAW_WORKSPACE (optional) and writes to a workspace path — this is expected but worth noting.
Persistence & Privilege
The skill declares always:false and runs via normal autonomous invocation. It writes outputs and a .processed_ids file into the user's workspace (or ~/.openclaw/workspace by default). It does not request to modify other skills or system-wide settings.
Assessment
This skill does what it says: it parses conversation export JSON files and sends condensed summaries to a model (local Ollama by default, or a remote OpenAI-compatible endpoint if you set OPENAI_API_KEY / OPENAI_BASE_URL or change OLLAMA_BASE_URL). Before installing or running it:

  1. Review and trust the model endpoint you will use — any endpoint you point it to will receive conversation excerpts, so don't set OLLAMA_BASE_URL or OPENAI_BASE_URL to an untrusted server.
  2. Be aware it writes outputs and a .processed_ids dedupe file to your OpenClaw workspace (defaults to ~/.openclaw/workspace); ensure that location is acceptable.
  3. The provided listing truncated the tail of scripts/extract.py in the materials you supplied, so inspect the complete extract.py in the package (especially the load/save logic) before granting broad access or running it on sensitive conversations.

If you plan to use a remote API, limit the data you send (use --limit or --dry-run for testing) and avoid exporting chats that contain secrets or PII unless you trust the destination.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🧠 Clawdis
OS: Linux · macOS · Windows
Bins: python3
Latest: vk977bcy6zfbsbz6t3kjekq36j184ttvj
79 downloads · 0 stars · 3 versions
Updated 1w ago
v1.0.2 · MIT-0

Conversation Learnings Extractor

Extract structured learnings (lessons, decisions, patterns, dead ends) from exported AI conversations using either a local Ollama model or any OpenAI-compatible API. This skill is designed to work with exports from OpenAI and Anthropic, and pairs well with the chat-history-importer skill for a complete conversation analysis workflow.

Quick Start

Using Ollama (default)

python3 scripts/extract.py --dir /path/to/exports --limit 3 --dry-run
python3 scripts/extract.py --file single-conversation.json
python3 scripts/extract.py --dir /path/to/exports --since 2026-04-01

Using OpenAI-compatible API (e.g., OpenAI, Amazon Bedrock)

export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.openai.com/v1  # optional, defaults to OpenAI
python3 scripts/extract.py --dir /path/to/exports --model gpt-4o-mini

How It Works

  1. Parse OpenAI/Anthropic JSON exports using bundled parsers (from the sibling chat-history-importer skill)
  2. Deduplicate via the .processed_ids file, skipping already-processed chats (see the sketch after this list)
  3. Summarize conversation to key excerpts (to fit model context)
  4. Extract structured learnings using your chosen model: lessons, decisions, patterns, dead ends
  5. Append results to memory/semantic/learnings-from-exports.md
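
For illustration, here is a minimal sketch of the dedupe step (step 2), assuming .processed_ids stores one conversation ID per line; the real load/save logic lives in scripts/extract.py and may differ:

```python
# Illustrative sketch only: assumes .processed_ids holds one chat ID per line.
from pathlib import Path

def load_processed_ids(state_file: Path) -> set:
    """Return the set of conversation IDs that were already processed."""
    if not state_file.exists():
        return set()
    return set(state_file.read_text().splitlines())

def mark_processed(state_file: Path, chat_id: str) -> None:
    """Record a newly processed conversation ID (append-only)."""
    with state_file.open("a") as f:
        f.write(chat_id + "\n")

# Skip any conversation whose ID has been seen before.
seen = load_processed_ids(Path(".processed_ids"))
```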

Integration with chat-history-importer

This skill pairs with chat-history-importer:

  1. First, run chat-history-importer to ingest raw conversations into episodic memory (memory/episodic/YYYY-MM-DD.md)
  2. Then, run this skill to extract structured learnings into semantic memory (memory/semantic/learnings-from-exports.md)

This workflow keeps raw conversation logs separate from actionable insights, enabling better knowledge organization.

Configuration

Using Ollama (Local)

Prerequisites: Ollama running at http://127.0.0.1:11434 (default)

# Use default model (gemma4:26b)
python3 scripts/extract.py --dir /path/to/exports

# Use a different local model
python3 scripts/extract.py --dir /path/to/exports --model llama2

# Custom Ollama endpoint
export OLLAMA_BASE_URL=http://ollama.example.com:11434
python3 scripts/extract.py --dir /path/to/exports

Environment Variables:

  • OLLAMA_BASE_URL — Ollama API endpoint (default: http://127.0.0.1:11434)
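
For reference, a call to the Ollama endpoint might look like the sketch below. It assumes the standard non-streaming /api/generate endpoint; the exact request extract.py builds may differ.

```python
# Sketch of a non-streaming Ollama call using only the standard library.
import json
import os
import urllib.request

base = os.environ.get("OLLAMA_BASE_URL", "http://127.0.0.1:11434")
payload = {
    "model": "gemma4:26b",       # default model per this README
    "prompt": "Summarize: ...",  # placeholder; the real prompt comes from references/prompt-template.md
    "stream": False,             # ask for a single JSON response
}
req = urllib.request.Request(
    f"{base}/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```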

Using OpenAI-compatible API

Any API that supports the OpenAI /chat/completions endpoint will work (OpenAI, Bedrock, LM Studio, etc.).

export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.openai.com/v1  # optional
python3 scripts/extract.py --dir /path/to/exports --model gpt-4o-mini

Environment Variables:

  • OPENAI_API_KEY — API key (required to enable this mode; if set, OpenAI mode is used instead of Ollama)
  • OPENAI_BASE_URL — API base URL (default: https://api.openai.com/v1)
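
For reference, a minimal stdlib-only sketch of a /chat/completions call using the variables above; the message content is a placeholder and the actual request in extract.py may differ:

```python
# Sketch of an OpenAI-compatible /chat/completions request; illustrative only.
import json
import os
import urllib.request

base = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
key = os.environ["OPENAI_API_KEY"]  # required in this mode
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Summarize: ..."}],  # placeholder
}
req = urllib.request.Request(
    f"{base}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {key}",
    },
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())
    print(body["choices"][0]["message"]["content"])
```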

Model auto-selection:

  • If OPENAI_API_KEY is set → defaults to gpt-4o-mini
  • If OPENAI_API_KEY is not set → defaults to gemma4:26b (Ollama)
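
In code, the rule above amounts to something like this (the function name is illustrative; the defaults are the ones documented here):

```python
# Sketch of the auto-selection rule described above.
import os

def default_model() -> str:
    """gpt-4o-mini when an OpenAI key is present, otherwise the Ollama default."""
    if os.environ.get("OPENAI_API_KEY"):
        return "gpt-4o-mini"
    return "gemma4:26b"
```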

Flags

  • --dir DIR — Process all JSON files in directory
  • --file FILE — Process single file
  • --limit N — Process only first N conversations (useful for testing or limiting API costs)
  • --since YYYY-MM-DD — Skip conversations before this date
  • --model MODEL — Override default model name
  • --dry-run — Print output without writing to disk or updating dedup state
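
A sketch of an argparse setup matching these flags (extract.py's actual parser may differ in defaults and help text):

```python
# Illustrative argparse mirror of the documented flags.
import argparse

parser = argparse.ArgumentParser(description="Extract learnings from chat exports")
parser.add_argument("--dir", help="Process all JSON files in a directory")
parser.add_argument("--file", help="Process a single export file")
parser.add_argument("--limit", type=int, help="Process only the first N conversations")
parser.add_argument("--since", help="Skip conversations before YYYY-MM-DD")
parser.add_argument("--model", help="Override the default model name")
parser.add_argument("--dry-run", action="store_true",
                    help="Print output without writing to disk or updating dedupe state")
args = parser.parse_args()
```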

Output Format

Results are appended to memory/semantic/learnings-from-exports.md with this structure:

## Chat Title (YYYY-MM-DD)

### Lessons Learned

- [bullet points]

### Decisions Made

- [bullet points]

### Patterns Noticed

- [bullet points]

### Dead Ends

- [bullet points]

Each category is optional — if a conversation doesn't have notable insights for a category, it will show "None".
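
For illustration, one appended section could be rendered as in the sketch below; the helper name and the exact "None" handling are assumptions, and only the heading layout comes from this README:

```python
# Illustrative renderer for one appended section; layout matches the README.
CATEGORIES = ("Lessons Learned", "Decisions Made", "Patterns Noticed", "Dead Ends")

def render_section(title, date, learnings):
    """learnings maps category name -> list of bullet strings (possibly empty)."""
    lines = [f"## {title} ({date})", ""]
    for category in CATEGORIES:
        lines.append(f"### {category}")
        lines.append("")
        bullets = learnings.get(category) or []
        if bullets:
            lines.extend(f"- {b}" for b in bullets)
        else:
            lines.append("None")
        lines.append("")
    return "\n".join(lines)
```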

References

  • references/prompt-template.md — The extraction prompt sent to the model
  • scripts/extract.py — Main script (reuses parsers from the sibling chat-history-importer skill)

Implementation Notes

  • Tracks processed chat IDs in .processed_ids to avoid re-processing
  • Workspace detection: checks the OPENCLAW_WORKSPACE env var, falls back to ~/.openclaw/workspace (sketched below)
  • Automatically detects OpenAI vs Anthropic export formats
  • Truncates long messages for context efficiency
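
The workspace detection noted above reduces to something like this sketch:

```python
# Sketch of workspace resolution: OPENCLAW_WORKSPACE wins, else the default path.
import os
from pathlib import Path

def workspace_root() -> Path:
    env = os.environ.get("OPENCLAW_WORKSPACE")
    return Path(env) if env else Path.home() / ".openclaw" / "workspace"

output_file = workspace_root() / "memory" / "semantic" / "learnings-from-exports.md"
```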
