ChatGPT Memory Extraction

v1.0.1

Extract structured personal memories from ChatGPT export data (conversations JSON). Produces organized timeline, people profiles, and thematic records by dee...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for cyresearch/chatgpt-memory-extraction.

Prompt preview: Install & Setup
Install the skill "Chatgpt Memory Extraction" (cyresearch/chatgpt-memory-extraction) from ClawHub.
Skill page: https://clawhub.ai/cyresearch/chatgpt-memory-extraction
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install chatgpt-memory-extraction

ClawHub CLI


npx clawhub@latest install chatgpt-memory-extraction
Security Scan
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
The name and description claim to convert ChatGPT export JSON into structured notes, and the included Python script and SKILL.md directly implement that. The required tooling (Python 3.8+) and local file inputs are proportionate; no unrelated credentials, binaries, or external services are requested.
Instruction Scope
SKILL.md directs the user to export data from ChatGPT and run the included script on local files, processing conversations quarter-by-quarter with human review. Instructions only reference user-provided export files and local output paths; they do not instruct the agent to read unrelated system files or to transmit data to third parties.
Install Mechanism
No install spec — instruction-only with a bundled Python script. README suggests optional git clone or npx clawhub install which is typical for OpenClaw skills. Nothing in the package downloads or extracts remote archives at runtime.
Credentials
The skill requires no environment variables, no credentials, and no config paths. The Python script only reads input_dir files and writes local output; it does not access environment secrets or networked tokens.
Persistence & Privilege
The skill is not always-enabled and does not request elevated or persistent privileges. It does not modify other skills or system-wide settings. Autonomous invocation defaults are standard and not concerning here.
Assessment
This skill appears to do exactly what it says: parse your ChatGPT export JSON into local, readable files. Before running it: (1) review the included script yourself (it's short and local) to confirm it matches expectations; (2) run it on a copy of your export in a trusted/local environment (not a public machine) because extracted files will contain personal data; (3) back up the original export first; (4) avoid uploading extracted archives to third-party services unless you trust them; and (5) if you intend to clone a GitHub repo mentioned in the README, verify the repo's authenticity and that its contents match the code included with the skill.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97a4x2w4pkz18q0wj820brhmn83k6qp
152 downloads
0 stars
2 versions
Updated 1mo ago
v1.0.1
MIT-0

ChatGPT Memory Extraction

Transform ChatGPT conversation exports into a structured personal memory archive.

⚠️ For Users

AI agents cut corners on large text volumes. Review each batch. Praise quality, not speed.

Read quality rules for ChatGPT-specific pitfalls and known AI failure modes.

Workflow

  1. Prepare: User exports ChatGPT data:
    • Go to ChatGPT → Settings → Data controls → Export data → Confirm export
    • OpenAI will send an email when the export is ready (may take hours to days depending on data size)
    • Download the zip file from the email link (requires being logged into ChatGPT)
    • Unzip to get conversations-*.json files and other data
  2. Extract: Run scripts/extract_conversations.py to convert JSON → readable text files + conversation index
  3. Read & Write: Process one quarter at a time. Read every conversation fully. Write timeline per output-format.md. User reviews before proceeding. Split into monthly batches for 100+ conversations.
  4. Extract Dimensions: Update people files and topic files. Every person mentioned → their file updated.
  5. Incremental: On new exports, compare IDs, process only new content.

Output Structure

See output-format.md.
