Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Self Reflection

v1.0.0

Periodic self-reflection on recent sessions. Analyzes what went well, what went wrong, and writes concise, actionable insights to the appropriate workspace files.

0 stars · 1.6k downloads · 14 current · 15 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for brennerspear/agent-self-reflection.

Prompt Preview: Install & Setup
Install the skill "Self Reflection" (brennerspear/agent-self-reflection) from ClawHub.
Skill page: https://clawhub.ai/brennerspear/agent-self-reflection
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install agent-self-reflection

ClawHub CLI


npx clawhub@latest install agent-self-reflection
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill legitimately needs access to recent session transcripts and to write workspace/memory files — the SKILL.md and the included script show exactly that. However, the skill metadata declares no required config paths or binaries even though the script reads ~/.openclaw/agents/main/sessions and invokes external commands (openclaw, tail, python3). These omissions create a mismatch between the skill's stated purpose and its declared requirements.
Instruction Scope
The runtime instructions and script stay within the stated purpose: they list active sessions, tail the last ~50 lines (explicitly warn not to load full transcripts), extract user/assistant/tool_result entries, and route concise insights to specific local files. There are no network exfiltration commands or hidden endpoints in the script. Note: session transcripts can contain sensitive user data; the instructions attempt to limit exposure by tailing, but that still reads user content and writes derived insights to workspace files.
Install Mechanism
There is no install spec (instruction-only plus a local script), which is the lower-risk model. The only code present is a small shell script; nothing is downloaded from the network or extracted. Risk is limited to local file reads/writes and execution of standard system tools.
Credentials
The skill requests no environment variables or credentials in its metadata, which is good, but the script requires access to ~/.openclaw/agents/main/sessions and calls the openclaw CLI, python3, and tail. Those config paths and binaries are not declared in the skill's 'required config paths' or 'required binaries' fields—this is a proportionality and disclosure gap. No external credentials are requested, which is appropriate for the task.
Persistence & Privilege
The skill does not request always:true and does not modify other skills or global agent settings. It writes to workspace files (AGENTS.md, TOOLS.md, memory/*, etc.) which is appropriate for a reflection task, but users should verify write targets are acceptable.
What to consider before installing
This skill appears to do what it says (periodically read recent sessions, extract lessons, and append them to local workspace files), but the packaging is sloppy: it fails to declare that it reads session files (~/.openclaw/agents/main/sessions) and that it expects the openclaw CLI, python3, and standard Unix tools (tail). Before installing, ask the author to:

  • Update the metadata to list the required config path(s) and required binaries
  • Confirm a privacy policy and consent model for reading transcripts
  • Limit which sessions are considered (e.g., opt-in only)
  • Consider anonymizing or redacting sensitive content before writing memories

If you proceed, inspect the script in your environment and run it in a sandbox or with a restricted account first to verify it only reads and writes the intended local files. Autonomous invocation is allowed by default but not enabled as always:true; consider whether you want the agent to run this cron-style behavior automatically.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97b71npt8e0pm2zmn7x4y7p8981trpt
1.6k downloads
0 stars
1 version
Updated 1h ago
v1.0.0
MIT-0

Self-Reflection Skill

Reflect on recent sessions and extract actionable insights. Runs hourly via cron.

Step 1: Gather Recent Sessions

# List sessions active in the last 2 hours
openclaw sessions --active 120 --json

Parse the output to get session keys and IDs. Skip subagent sessions (they're task workers, not interesting for reflection). Focus on:

  • Telegram group/topic sessions (real user interactions)
  • Direct sessions (1:1 with Brenner)
  • Cron-triggered sessions (how did automated tasks go?)
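The exact JSON schema of openclaw sessions --json isn't documented on this page, so the id and key fields below are assumptions for illustration. A minimal sketch of the parse-and-filter step in Python (python3 is already one of the script's dependencies) might look like:

```python
import json

# Stand-in for the output of `openclaw sessions --active 120 --json`;
# the "id" and "key" fields are hypothetical, not a documented schema.
raw = '''[
  {"id": "abc123", "key": "telegram:group:42"},
  {"id": "def456", "key": "subagent:worker-1"},
  {"id": "ghi789", "key": "cron:daily-digest"}
]'''

sessions = json.loads(raw)

# Skip subagent task workers, and this skill's own reflection runs
skip_prefixes = ("subagent:", "cron:self-reflection")
interesting = [s for s in sessions if not s["key"].startswith(skip_prefixes)]

for s in interesting:
    print(s["id"], s["key"])
```

Filtering by key prefix also covers the anti-pattern noted later: reflection sessions (cron:self-reflection) are excluded up front.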

Step 2: Read Session History

For each interesting session from Step 1, read the JSONL transcript:

# Read the last ~50 lines of each session file (keep it bounded!)
tail -50 ~/.openclaw/agents/main/sessions/<sessionId>.jsonl

⚠️ CRITICAL: Never load full session files. Use tail -50 or Read with offset/limit. Sessions can be 100k+ tokens.

Parse the JSONL to understand what happened. Look for:

  • type: "user" or type: "human" — what was asked
  • type: "assistant" — what you responded
  • type: "tool_use" / type: "tool_result" — what tools were called and results
  • Error patterns, retries, confusion
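Parsing those JSONL lines could look like the following sketch; the record fields are assumptions based on the type values listed above, and malformed lines are skipped defensively:

```python
import json

# Sample JSONL lines standing in for the tail of a session transcript;
# the field names besides "type" are illustrative assumptions.
lines = [
    '{"type": "user", "text": "deploy the site"}',
    '{"type": "assistant", "text": "Running the deploy script now."}',
    '{"type": "tool_result", "tool": "bash", "ok": false, "error": "permission denied"}',
    'not valid json',  # skip anything malformed rather than crashing
]

events = []
for line in lines:
    try:
        events.append(json.loads(line))
    except json.JSONDecodeError:
        continue  # defensive: ignore lines that don't parse

# Surface error patterns worth reflecting on
errors = [e for e in events if e.get("ok") is False]
print(len(events), len(errors))
```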

Step 3: Analyze & Extract Insights

For each session, ask yourself:

What went well?

  • Tasks completed smoothly on first try
  • Good tool usage patterns worth reinforcing
  • Efficient approaches to remember

What went wrong?

  • Errors, retries, wrong approaches
  • Misunderstandings of user intent
  • Tools that didn't work as expected
  • Context that was missing

Lessons learned?

  • "Next time, do X instead of Y"
  • "Remember that Z works this way"
  • "Tool A needs parameter B or it fails"
  • "When user says X, they usually mean Y"

Quality bar: Each insight must be:

  • Specific — not "be more careful" but "check if file exists before editing"
  • Actionable — something future-you can directly apply
  • Non-obvious — skip things any competent agent would know
  • New — don't repeat insights already captured
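The "new" criterion can be checked mechanically before writing. A sketch, assuming a simple exact-substring match against the target file (real deduplication may need fuzzier matching):

```python
from pathlib import Path

def is_new_insight(insight, target):
    """True if the insight text does not already appear in the target file."""
    if not target.exists():
        return True
    return insight.lower() not in target.read_text().lower()

# Demo against a throwaway file (a stand-in for TOOLS.md)
demo = Path("TOOLS-demo.md")
demo.write_text("- gog needs --json flag for parsing\n")
already_captured = is_new_insight("gog needs --json flag for parsing", demo)
novel = is_new_insight("check if file exists before editing", demo)
demo.unlink()
print(already_captured, novel)
```

An exact-substring check only catches verbatim repeats; rephrased duplicates still need a judgment call.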

Step 4: Route Insights to the Right Files

Each insight belongs somewhere specific. Route them:

AGENTS.md

  • Process improvements (how to handle sessions, memory, etc.)
  • New conventions or workflow rules
  • Safety lessons

TOOLS.md

  • Tool-specific gotchas ("gog needs --json flag for parsing")
  • Environment details (paths, configs, quirks)
  • New tool patterns discovered

memory/YYYY-MM-DD.md (today's date)

  • Session-specific context ("Brenner asked about X project")
  • Temporary facts that matter today but not forever
  • What happened today (events, decisions, requests)

memory/about-user.md

  • New preferences discovered
  • Communication style observations
  • Project/interest updates

skills/<skill-name>/SKILL.md

  • Improvements to specific skill instructions
  • Bug fixes in skill workflows
  • New parameters or approaches for a skill

MEMORY.md

  • Updates to the memory index if new memory files are created
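The routing rules above can be expressed as a small lookup table; the category labels here are illustrative, not part of the skill's schema:

```python
from datetime import date

# Hypothetical category labels mapped to the target files from Step 4
ROUTES = {
    "process": "AGENTS.md",
    "tool": "TOOLS.md",
    "daily": f"memory/{date.today().isoformat()}.md",
    "user": "memory/about-user.md",
}

def route(category, skill=None):
    """Return the file an insight of this category should be written to."""
    if category == "skill" and skill:
        return f"skills/{skill}/SKILL.md"
    return ROUTES[category]

print(route("tool"))              # TOOLS.md
print(route("skill", "weather"))  # skills/weather/SKILL.md
```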

Step 5: Write the Insights

For each insight, append or edit the appropriate file. Use the Edit tool for surgical changes to existing content. Use append (write to end) for daily memory files.

Format for daily memory files:

## Self-Reflection — HH:MM ET

### Insights
- [source: session-key] Lesson learned here
- [source: session-key] Another insight

### Tool Notes
- Discovered: tool X needs Y configuration

### User Context
- Brenner mentioned interest in Z
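Appending a section in that format could be sketched as follows; the timestamp handling is an assumption (rendering actual ET depends on the host's timezone configuration):

```python
from datetime import datetime
from pathlib import Path

def append_reflection(path, insights, tool_notes, user_context):
    """Append a self-reflection section in the daily-memory format above."""
    lines = [f"\n## Self-Reflection — {datetime.now():%H:%M} ET\n"]
    if insights:
        lines.append("\n### Insights\n")
        lines += [f"- [source: {src}] {text}\n" for src, text in insights]
    if tool_notes:
        lines.append("\n### Tool Notes\n")
        lines += [f"- {note}\n" for note in tool_notes]
    if user_context:
        lines.append("\n### User Context\n")
        lines += [f"- {item}\n" for item in user_context]
    with path.open("a") as f:  # append; never overwrite the daily file
        f.writelines(lines)

# Demo against a throwaway file (a stand-in for memory/YYYY-MM-DD.md)
demo = Path("memory-demo.md")
append_reflection(
    demo,
    insights=[("telegram:group:42", "Check file exists before editing")],
    tool_notes=["tool X needs Y configuration"],
    user_context=[],
)
content = demo.read_text()
demo.unlink()
print(content)
```

Empty subsections are skipped entirely, which keeps quiet reflection runs from adding noise to the daily file.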

Step 6: Summary

After writing all insights, produce a brief summary of what you reflected on and what you wrote. This is your output — keep it to 2-4 sentences max.

If there's nothing interesting to reflect on (quiet period, only heartbeats), just say so. Don't manufacture insights.

Quality Checklist

Before writing any insight:

  • Is this actually new? (Check existing files first)
  • Is this specific and actionable?
  • Am I routing it to the right file?
  • Am I keeping daily memory files concise (not dumping full transcripts)?
  • Did I respect the token budget (no huge file reads)?

Anti-Patterns (Don't Do These)

  • ❌ Don't summarize every session — only extract lessons
  • ❌ Don't read full JSONL files — tail/limit only
  • ❌ Don't write vague insights ("improve response quality")
  • ❌ Don't duplicate existing knowledge
  • ❌ Don't create new files when appending to existing ones works
  • ❌ Don't reflect on your own reflection sessions (skip cron:self-reflection sessions)
