Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Retrospect

v1.4.0

Session retrospective that analyzes conversation history to produce structured feedback for both user and LLM. Use this skill whenever the user says '复盘', 'r...

0 stars · 84 downloads · 0 current · 0 all-time
by zhangbc (@zbc0315)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for zbc0315/retrospect.

Prompt Preview: Install & Setup
Install the skill "Retrospect" (zbc0315/retrospect) from ClawHub.
Skill page: https://clawhub.ai/zbc0315/retrospect
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: node
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install retrospect

ClawHub CLI


npx clawhub@latest install retrospect
Security Scan
Capability signals
Crypto: can make purchases
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Pending
OpenClaw
Suspicious (high confidence)
Purpose & Capability
Name/description: retrospective on conversation history for a project. The packaged Node parser and SKILL.md are consistent with that goal, but the parser also scans global session directories (~/.codex/sessions and OpenCode session paths) in addition to the project-specific Claude path. That global scan is not justified by the 'project' framing and is broader than a user would reasonably expect.
Instruction Scope
Runtime instructions tell the agent to run the bundled parser against the current working directory and then spawn a subagent with the full transcript. The parser reads session files from multiple home-directory locations (Claude, Codex, OpenCode) and will merge them into one transcript. This collects potentially unrelated session data and then passes the full transcript to another agent — increasing risk of sensitive data exposure. The SKILL.md claims the parser will find 'all session JSONL files for this project' but the code will include whole ~/.codex and opencode session folders without project filtering.
Install Mechanism
No install spec — instruction-only with a bundled script. No remote downloads or package installs are performed by the skill itself, which reduces supply-chain risk.
Credentials
The skill declares no required env vars, but its SKILL.md uses ${CLAUDE_SKILL_DIR} (not declared) to locate the script. More importantly, the parser inspects files in the user's home directories (e.g., ~/.claude, ~/.codex, OpenCode session dirs) and will process any session logs found there. Requesting no credentials is appropriate, but reading broad home-directory session logs is disproportionate to a strictly project-scoped retrospective and could expose unrelated or sensitive conversations.
Persistence & Privilege
always:false (no forced installation) and no system config changes. The skill writes a transcript to /tmp and the resulting feedback files to the project root, and instructs launching a subagent. Autonomous invocation is allowed (platform default) — combined with the broad file-read scope this increases blast radius, but there is no persistent/system-level privilege escalation requested.
What to consider before installing
This skill purposefully collects conversation logs and runs an analysis subagent. Before installing or invoking it, consider:

  1. It scans global session folders (~/.codex/sessions and OpenCode session paths) in addition to project-specific Claude paths — it may include transcripts from other projects or sessions you didn't intend to share.
  2. It spawns a subagent and passes the full merged transcript to that agent — review whether you want those transcripts handed to another agent/process.
  3. The SKILL.md references ${CLAUDE_SKILL_DIR} though no env var is declared; verify your runtime supplies it.

Recommended actions: inspect scripts/parse_session.js yourself (it is included), run the parser manually in a safe environment to see what files it finds, or modify the script to restrict scanning to only the intended project paths before allowing it to launch any subagent. If your session logs contain secrets or sensitive information, avoid running this skill until you confirm it will only read the intended files.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: node
latest: vk97b2r32cs4m5v4bgrxgmg552n84c7mv
84 downloads
0 stars
5 versions
Updated 3w ago
v1.4.0
MIT-0

Retrospect — Session Retrospective: Critique & Self-Critique

Perform a structured retrospective on all conversation sessions in the current project. This produces two deliverables:

  1. FEEDBACK_TO_HUMAN.md — Critique of the user's prompting behavior
  2. FEEDBACK_TO_LLM.md — Self-critique of the LLM's performance

Step 1: Locate and parse all session transcripts

Run the bundled parser, passing the current working directory. It will automatically find all session JSONL files for this project, merge them in chronological order, and output a unified transcript.

node ${CLAUDE_SKILL_DIR}/scripts/parse_session.js --project-dir "$(pwd)" > /tmp/session_transcript.md

The parser:

  • Derives the Claude Code project path from the working directory (e.g., /Users/tom/myproject → ~/.claude/projects/-Users-tom-myproject/)
  • Finds all .jsonl files in that directory (excluding subagent logs)
  • Sorts them by modification time (oldest first)
  • Merges them into one transcript with session boundaries marked
  • Auto-detects JSONL format (Claude Code, Codex, OpenCode)

If the transcript is very long, the parser automatically summarizes older sessions (keeping only user messages and key exchanges) while preserving full detail for the most recent sessions.
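The summarization behavior described above could look roughly like this (the session/message shape and the keep-two-recent cutoff are illustrative assumptions, not the parser's actual heuristic):

```javascript
// Keep the most recent sessions verbatim; for older sessions keep
// only the user's messages. Object shapes and the default cutoff
// are assumptions for illustration.
function condenseSessions(sessions, keepRecent = 2) {
  return sessions.map((session, i) => {
    if (i >= sessions.length - keepRecent) return session; // recent: full detail
    return {
      ...session,
      messages: session.messages.filter((m) => m.role === "user"),
    };
  });
}
```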

Step 2: Spawn the analysis subagent

Launch a single subagent (via the Agent tool, or equivalent in your platform) with the full transcript content. The subagent reads the transcript and writes both feedback files to the project root directory.

Pass the subagent this prompt structure (fill in the transcript and project root):


You are a session retrospective analyst. You have been given conversation transcripts from all sessions in a project between a user and an LLM. Your job is to produce two analysis documents.

Read the transcript carefully, then write both files to: <project-root>

The transcript is below:

<transcript> {content of /tmp/session_transcript.md} </transcript>

File 1: FEEDBACK_TO_HUMAN.md

Analyze the user's behavior across all sessions. Structure the document as:

Overall Assessment

A 2-3 sentence summary of how effectively the user communicated with the LLM across this project.

Round-by-Round Analysis

For each significant exchange (skip trivial ones like "ok" or tool confirmations), analyze:

  • What the user asked for
  • Whether the request was clear and specific enough
  • If the LLM did something the user didn't want — was it because the user's prompt was ambiguous, or because the LLM misunderstood a clear instruction?

When the user expresses frustration or rejection of LLM output, perform a responsibility analysis:

  • Was the user's previous instruction genuinely unclear or misleading? → The user shares responsibility
  • Was the user's instruction clear but the LLM made its own wrong assumption? → LLM's responsibility
  • Be honest and fair — sometimes the user IS at fault, sometimes the LLM is

Prompting Patterns

Identify recurring patterns across sessions (good and bad):

  • Does the user give enough context upfront, or drip-feed requirements?
  • Does the user specify constraints, or leave too much to LLM judgment?
  • Does the user correct effectively, or repeat the same vague correction?
  • Are there patterns that recur across multiple sessions?

Suggestions

Concrete, actionable advice for how the user could prompt more effectively in future sessions. Focus on what would save the most time and frustration.

File 2: FEEDBACK_TO_LLM.md

Analyze the LLM's behavior across all sessions. Structure the document as:

Overall Assessment

A 2-3 sentence summary of the LLM's performance across this project.

Mistakes & Errors

For each significant mistake the LLM made:

  • What went wrong
  • Root cause (wrong assumption, outdated knowledge, misread instruction, etc.)
  • How it was eventually resolved
  • What the correct approach should have been from the start

Pay special attention to:

  • Incorrect API/library usage that required multiple attempts to fix
  • Cases where the LLM confidently did the wrong thing
  • Unnecessary detours or wasted effort
  • Mistakes that recur across sessions (the LLM didn't learn from previous failures)

Counter-Intuitive Learnings

Information encountered in these sessions that a general-purpose LLM would NOT know or would likely get wrong. Examples:

  • Project-specific configurations that break standard assumptions
  • Library quirks, undocumented behavior, or version-specific API differences
  • Environment-specific gotchas

For each item, explain: what the intuitive assumption would be, what the reality is, and why this matters.

Self-Improvement Notes

What should the LLM do differently next time when facing similar tasks?


Important guidelines for the subagent:

  • Write in the same language the user primarily used in the conversations (Chinese if they spoke Chinese, English if English, etc.)
  • Be honest and balanced — the goal is genuine improvement, not flattery or self-flagellation
  • Use specific quotes or references from the transcript to support your analysis
  • When analyzing multiple sessions, note cross-session patterns (e.g., "the same mistake appeared in Session 3 and Session 7")
  • If the sessions were short or uneventful, say so — don't manufacture insights

Step 3: Report completion

After the subagent finishes, tell the user where the files are and give a one-line summary of each file's key finding.
