Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Apply Learnings
v1.0.0 · Analyze Claude Code session history to extract learnings that would have been helpful if provided earlier, then persist them for future sessions. Use when th...
by Ajit Singh (@ajitsingh25)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan (OpenClaw)
Verdict: Suspicious (medium confidence)
Purpose & Capability
Name and description match the provided SKILL.md and the included analysis script: both are designed to scan Claude session transcripts, extract 'learnings', and persist them to per-project or global files (e.g., ~/.claude/CLAUDE.md, ~/.claude/MEMORY.md, or skill-specific reference directories). No unrelated credentials or binaries are requested.
Instruction Scope
The runtime instructions and script operate over session histories and tool call inputs across scopes 'current', 'project', or 'all'. The parser explicitly extracts user messages and tool inputs/tool failures — these can contain secrets or sensitive context. The SKILL.md and script propose persisting learnings into global files and into other skills' reference directories. There are no explicit safeguards described for detecting or redacting secrets, for limiting what is collected, or for preventing writes to other skills' directories.
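Since the report notes there are no redaction safeguards, a minimal sketch of the kind of filter a cautious user could apply to extracted learnings before any write is shown below. The patterns and the `redact` helper are illustrative assumptions, not part of the skill's actual script:

```python
# Hypothetical redaction pass over extracted "learnings" text.
# The secret patterns below are a heuristic sample, not exhaustive.
import re

SECRET_PATTERNS = [
    # key=value style credentials (api_key, token, secret, password)
    re.compile(r"(?i)(api[_-]?key|token|secret|password)\s*[:=]\s*\S+"),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access tokens
]

def redact(text: str) -> str:
    """Replace anything matching a known secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Running extracted learnings through such a filter before persistence would not catch every secret, but it addresses the most common leakage shapes the scan warns about.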
Install Mechanism
There is no external install/download action: the skill is instruction-only and includes a local Python script. No URL downloads, package installs, or archive extraction are present in the provided metadata.
Credentials
The skill requests no environment variables, but it will read session transcripts and tool inputs (likely under ~/.claude and project directories) and then write learnings to global files and possibly other skill directories. This broad access to user data (including potential secrets typed in sessions or tool inputs) is not declared or limited, and no sanitization policy is described.
Persistence & Privilege
The skill does not declare always:true, but the SKILL.md instructs writing to shared/global locations (e.g., ~/.claude/MEMORY.md, ~/.claude/CLAUDE.md) and to append or create files in ~/.claude/skills/* (modifying other skills' reference files). Modifying other skills' directories and writing global, cross-machine files increases risk and should be explicitly consented to and audited.
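To illustrate the containment the report says is missing, here is a hedged sketch of a write guard that refuses any path outside an allow-listed project root, so learnings cannot land in ~/.claude/skills/* or other global locations. The `safe_write` function is a hypothetical wrapper, not something the skill provides:

```python
# Hypothetical guard: only allow writes under an explicit project root.
from pathlib import Path

def safe_write(target: Path, content: str, allowed_root: Path) -> None:
    """Write content to target, but only if it resolves inside allowed_root."""
    resolved = target.resolve()
    root = allowed_root.resolve()
    if root not in resolved.parents and resolved != root:
        raise PermissionError(f"refusing to write outside {root}: {resolved}")
    resolved.parent.mkdir(parents=True, exist_ok=True)
    resolved.write_text(content)
```

Resolving both paths first defeats `..` traversal and symlink tricks; a skill that wrote through a guard like this could not silently modify other skills' reference files.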
What to consider before installing
Before installing or running this skill, be aware that it will scan your conversation/session transcripts and tool inputs, and that it can write extracted "learnings" into global files (like ~/.claude/CLAUDE.md or ~/.claude/MEMORY.md) and into other skills' reference directories. That process can capture and persist sensitive data (passwords, API keys, secrets pasted into sessions, or other PII). Recommended steps:
- Inspect the full scripts locally (scripts/analyze_session.py) to verify exactly which paths are read and written and whether there is any network I/O. Confirm there are explicit redaction measures for secrets.
- Run initially with the narrowest scope (e.g., --scope current) and review all extracted learnings before allowing any writes.
- Back up the target files/directories you allow the skill to write (project CLAUDE.md, ~/.claude/*, and ~/.claude/skills/*) so you can revert unintended changes.
- Deny or carefully review suggestions to persist learnings to shared/global files or to append into other skills' directories; prefer storing only project-local, user-reviewed notes.
- If you use cross-machine syncing (TerraBlob or similar), consider disabling it while testing so learnings (which may include sensitive context) are not propagated automatically.
- If you are unsure about whether the script modifies other skills or performs network access, run it in an isolated environment (e.g., a throwaway account or VM) first and monitor filesystem changes.
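The first step above, checking scripts/analyze_session.py for network I/O, can be partly automated. A rough sketch, assuming the script is plain Python, uses the standard `ast` module to list network-capable imports; the module list is a heuristic, not exhaustive:

```python
# Heuristic static check: does a Python script import network-capable modules?
import ast

NETWORK_MODULES = {"socket", "http", "urllib", "requests", "httpx", "ftplib"}

def network_imports(source: str) -> set:
    """Return top-level names of imported modules that can do network I/O."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            top = name.split(".")[0]
            if top in NETWORK_MODULES:
                found.add(top)
    return found
```

An empty result is not proof of safety (code can import dynamically or shell out to curl), so treat this as a first-pass triage before the manual review and sandboxed run described above.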
Latest version: vk977scqqmy7qfbhahc8cxen9y583bn6e
