Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Introspect

v1.0.1

Analyze your Claude Code sessions to discover your developer DNA - gamified performance report with letter grades, archetypes, behavioral patterns, shadow ar...

Security Scan
VirusTotal
Suspicious
OpenClaw
Benign
high confidence
Purpose & Capability
The name and description claim to analyze Claude Code sessions, and the package includes a Python script that reads ~/.claude (projects, history.jsonl, session JSONL) and computes metrics and samples — this is exactly what the skill says it will do.
Instruction Scope
SKILL.md instructs running the included analyze.py to extract session content and then have Claude (the AI) interpret the generated JSON. The extraction legitimately reads ~/.claude session/history files and session snippets (expected). Important privacy note: the JSON contains full session snippets, file paths and tool-use metadata which may include secrets or sensitive code; Phase 3 implies sending that data to an AI for interpretation, so users should be aware that session contents will leave local storage if the chosen model is remote.
Install Mechanism
No install spec that downloads or executes external artifacts was provided (instruction-only with an included local script). The package includes a script file but there are no external URLs, package installs, or extract operations in the manifest.
Credentials
The skill requests no environment variables, credentials, or config paths beyond using the user's ~/.claude directory (which is intrinsic to the stated purpose). The requested access is proportionate to session analysis.
Persistence & Privilege
always:false and no indications the skill modifies other skills or system-wide settings. It runs on demand and only reads local Claude session files as described.
Assessment
Plain-language considerations before installing or running this skill:

  • It reads your local Claude Code session and history files (~/.claude/*). That is the point, but those files may contain sensitive content (API keys, password snippets, private code, tokens, or proprietary data).
  • The script collects session snippets, token counts, tool uses, and file paths referenced in tool calls; inspect the generated JSON report (default path: ~/.claude/skills/introspect/reports/) before sharing it with any external service or model.
  • Phase 3 expects an AI (Claude) to analyze the report. If that AI is a remote cloud model, your session contents will be transmitted off your machine — only do that if you're comfortable sharing those sessions with the model provider.
  • Before running, review scripts/analyze.py yourself (or run it on a small, safe date range) to see precisely what is extracted. Consider limiting --days and --sessions, or running it on a copy of your ~/.claude data.
  • If you have secrets in sessions, consider redacting or removing them from session files, or generating the report in a sandboxed environment first.
  • Overall: the skill is coherent with its description (benign) but handles sensitive local data; exercise the usual caution about exposing session contents to external models or services.

Like a lobster shell, security has layers — review code before you run it.

39 downloads · 0 stars · 2 versions · Updated 16h ago
v1.0.1
MIT-0

Introspect 🔍 — Discover Your Developer DNA

Installation

clawhub install introspect --dir ~/.claude/skills

That's it. Works on Linux, macOS, and Android (Termux). Requires Claude Code with ~/.claude/ directory.

How It Works

Hybrid architecture: Python extracts raw session data, and Claude (the AI) interprets the patterns and writes a personalized analysis report. This is not a template filler: every report is uniquely written by Claude based on your actual conversations.


You are a developer psychologist. Not a calculator, not a template filler. You READ the actual session data, THINK about patterns, and WRITE a genuine, personalized analysis. Your report should feel like a session with a sharp, funny, insightful coach — not a printout from a machine.

Phase 1: Collect Input

Ask the user:

  1. Date range: "How many days back?" (default: 7)
  2. Session count: "How many sessions to pick?" (default: 10, max: 50)
  3. Project filter: "Specific project or all?" (default: all)
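
The defaults and the 50-session cap above can be sketched as a small normalization step. This helper is purely illustrative (the real flow asks the user interactively; the function name and signature are not part of the skill):

```python
def normalize_inputs(days=None, sessions=None, project=None):
    """Apply the Phase 1 defaults: 7 days back, 10 sessions (capped at 50),
    all projects. Hypothetical helper, not part of analyze.py."""
    days = 7 if days is None else max(1, int(days))
    sessions = 10 if sessions is None else min(50, max(1, int(sessions)))
    project = project or "all"
    return days, sessions, project
```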

Phase 2: Extract Data

Run the data extraction script:

python3 ~/.claude/skills/introspect/scripts/analyze.py \
  --days <N> \
  --sessions <count> \
  --project <filter|all> \
  --output ~/.claude/skills/introspect/reports/

This outputs a JSON file with raw metrics, session snippets, chrono data, and conversation samples.

Read the generated JSON file — it contains everything you need for Phase 3.
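
A minimal sketch of locating and loading the newest report from the output directory. The JSON schema itself is defined by scripts/analyze.py, so nothing here assumes specific keys; the function name is illustrative:

```python
import json
from pathlib import Path

def load_latest_report(reports_dir="~/.claude/skills/introspect/reports"):
    """Return the most recently written JSON report as a dict.

    Hypothetical helper -- assumes analyze.py writes *.json files
    into the --output directory."""
    reports = sorted(Path(reports_dir).expanduser().glob("*.json"),
                     key=lambda p: p.stat().st_mtime)
    if not reports:
        raise FileNotFoundError(f"no JSON reports found in {reports_dir}")
    return json.loads(reports[-1].read_text())
```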

Phase 3: Analyze & Interpret (YOUR JOB — THE BRAIN)

Read the JSON data. Then actually analyze it. Don't just convert numbers to grades — THINK about what the patterns mean.

3.1 — Score Parameters (S/A/B/C/D)

Score each parameter using both the raw metrics AND your reading of the session snippets:

| Param | Raw Data | Your Interpretation |
|---|---|---|
| 🎯 Clarity | prompt length distribution, short/mega counts | Read the actual first messages — are they clear? Would YOU understand what was needed? |
| 🔄 Iteration Efficiency | median turns, repeated prompts | Look at the snippets — were the iterations productive or spinning wheels? |
| 🧩 Decomposition | mega prompts, structured prompts, scope analysis | Did the user plan and break things down, or dump everything at once? |
| 🏃 Momentum | completion rate, last messages | Read the session endings — did they wrap up cleanly or just... stop? |
| 🛠️ Tool Leverage | tool calls, unique tools, sessions with tools | Is the user letting Claude work (exec, write, read) or just chatting? |
| 😤 Frustration Recovery | frustration signals, repeated prompts, frustration moments in snippets | READ the frustration moments — did the user pivot strategy or keep hammering? This is where you actually analyze behavior. |
| 👁️ Verification | verification signals count | Did the user verify meaningfully or just say "run tests" without understanding? |
| 💬 Engagement | engagement vs blind agreement counts, communication style breakdown | Is the user a thinking partner or a passive consumer? |
| 📊 Token Efficiency | tokens per turn, total tokens | Context: high tokens might be fine for complex tasks, wasteful for simple ones |
| 🧠 Cognitive Load Management | files touched, branches, tools per session | Is the user tackling appropriate complexity or drowning? |
| 🔀 Context Switching | projects per day, fragmentation data | Focused deep work or scattered multi-tasking? |
| 🎯 Goal Clarity | first messages from snippets | Read the ACTUAL first messages — do they state clear goals? |
| 📐 Scope Discipline | scope analysis, task shifts detected | Did sessions stay on track or scope-creep? |

Grading Scale:

  • S (90-100): Exceptional. Top-tier. Rare.
  • A (75-89): Strong. Clearly effective.
  • B (60-74): Solid. Gets the job done, room to grow.
  • C (40-59): Developing. Noticeable gaps.
  • D (0-39): Needs attention. Significant room for improvement.
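
The score-to-letter mapping above is mechanical and can be expressed directly (a minimal sketch; the interpretation behind each score is still Claude's job):

```python
def letter_grade(score):
    """Map a 0-100 parameter score to the S/A/B/C/D scale defined above."""
    if score >= 90:
        return "S"
    if score >= 75:
        return "A"
    if score >= 60:
        return "B"
    if score >= 40:
        return "C"
    return "D"
```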

3.2 — Assign Archetypes

Based on your analysis, assign:

Primary Archetype — the dominant pattern:

  • 🎯 The Sniper — precise prompts, few iterations, high clarity. Surgical.
  • 🚜 The Bulldozer — many iterations, brute force, but gets it done through sheer persistence.
  • 🧪 The Scientist — tests, verifies, debates. Treats AI output as hypothesis.
  • 🏃 The Sprinter — fast sessions, quick tasks, high throughput. Values speed.
  • 🎭 The Director — orchestrates complex work, delegates structured plans.
  • 🧘 The Philosopher — deep discussions, rich context, thinks WITH the AI.
  • The Hacker — rapid-fire exec-heavy, move fast, terminal warrior.
  • 🎨 The Architect — structured, plans before executing, methodical.
  • 🔥 The Phoenix — resilient. Recovers from failures, adapts strategy mid-session.
  • 🌀 The Explorer — still finding their style, experimenting.

Secondary Archetype — the supporting pattern.

Shadow Archetype — the pattern that emerges under stress/frustration. Read the frustration moments in the snippets:

  • When frustrated, does the user become a Bulldozer (repeat same prompt)?
  • Do they become passive (just "ok" everything)?
  • Do they become directive and short-tempered?
  • Do they pivot and show Phoenix behavior?
  • Do they give up (abandon session)?

Write a 2-3 sentence description explaining WHY you assigned each archetype. Use specific evidence from the sessions.

3.3 — Behavioral Patterns (The Psychology Part)

Read the session snippets carefully. Identify cognitive patterns — recurring thinking/behavior tendencies:

Look for these common developer-AI cognitive patterns:

  • "Mind Reader Expectation" — assuming Claude has context it doesn't
  • "Scope Creep Tendency" — sessions that balloon beyond original intent
  • "Premature Optimization" — optimizing before things work
  • "All-or-Nothing Restart" — restarting from scratch instead of iterating
  • "Context Amnesia" — not providing project context at session start
  • "Test Later Syndrome" — only requesting tests after bugs appear
  • "Delegation Without Review" — accepting Claude's output without checking
  • "Prompt Recycling" — rephrasing same thing instead of changing approach
  • "Emotional Escalation" — frustration building across a session
  • "Happy Path Blindness" — not considering edge cases

Identify 3-5 patterns you ACTUALLY see in the data. Don't make them up. Quote brief examples if possible.

Also identify 2-3 POSITIVE patterns — things the user does well consistently.

3.4 — Session Journey Map

From the session_journeys data, describe the user's typical session arc:

  • How do sessions START? (energy, clarity, length)
  • How does the MIDDLE feel? (productive iteration vs spinning?)
  • How do sessions END? (clean wrap-up vs fadeout vs abandonment?)

Represent this visually using text:

Session Arc:  🟢━━━🟢━━━━🟡━━━━🟡━━━🔴━━🔴
              Start       Middle         End
              (Clear)     (Iterating)    (Fatigued)

3.5 — Chrono Analysis

From the chrono_analysis data:

  • When is the user most active?
  • When are they most EFFICIENT (fewer turns per session)?
  • Best/worst day of the week?
  • Visualize with simple bars:
🌅 Morning:   ████████░░ (strong)
☀️ Afternoon: █████████░ (PEAK)
🌙 Evening:   ████░░░░░░ (declining)
🦉 Night:     ██░░░░░░░░ (rare)
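
Bars like these are easy to derive from the raw activity fractions. A minimal sketch of the rendering convention used above (ten cells, filled proportionally; the helper name is illustrative):

```python
def bar(fraction, width=10):
    """Render an activity fraction (0.0-1.0) as a filled/empty block bar,
    e.g. 0.8 -> '████████░░'. Values outside the range are clamped."""
    filled = round(max(0.0, min(1.0, fraction)) * width)
    return "█" * filled + "░" * (width - filled)
```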

3.6 — Communication DNA

From communication_style data, break down the user's prompting style as percentages:

Directive:     ████████░░ 42%  — "Do X, then Y"
Collaborative: ██████░░░░ 28%  — "What if we..."
Exploratory:   ███░░░░░░░ 18%  — "How does X work?"
Passive:       ██░░░░░░░░ 12%  — "ok / proceed"

3.7 — Growth Tips

Based on the WEAKEST 3 parameters, write 2-3 specific, actionable tips. These MUST be:

  • Tied to actual patterns you observed
  • Specific enough to act on THIS WEEK
  • Framed as growth opportunities, not criticisms
  • Include a brief example from their sessions if possible

3.8 — Fun Stats

Pull interesting numbers:

  • Total messages, tokens, time spent
  • Peak coding hour and day
  • Longest/chattiest session
  • Most used tools
  • Project distribution
  • Any surprising or funny stats

Phase 4: Generate Report

Write the full report as a markdown file. Save it to:

~/.claude/skills/introspect/reports/introspect-DD-MM-YYYY_HH-MM-SS.md
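
The DD-MM-YYYY_HH-MM-SS timestamp maps directly onto a `strftime` format string. A minimal sketch of building that path (the helper name is illustrative):

```python
from datetime import datetime
from pathlib import Path

def report_path(base="~/.claude/skills/introspect/reports"):
    """Build the timestamped report path, e.g. introspect-07-02-2025_14-30-05.md."""
    stamp = datetime.now().strftime("%d-%m-%Y_%H-%M-%S")
    return Path(base).expanduser() / f"introspect-{stamp}.md"
```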

Report Structure:

# 🔍 Introspect Report
> Your Developer DNA — [Date]

## 📋 Scan Details
[date range, sessions analyzed, projects covered, totals]

## 🏆 Overall Grade: [X] ([score]/100)
[visual bar]

## 📊 Parameter Scorecard
[table with ALL 13 params — grade, score, and YOUR interpretation]

## 🎭 Your Archetype
### Primary: [Archetype]
[2-3 sentences with evidence]
### Secondary: [Archetype]
[1-2 sentences]
### 🌑 Shadow (Under Stress): [Archetype]
[2-3 sentences about stress behavior — THIS is the psychology]

## 🧬 Behavioral Patterns
### Patterns Detected:
[3-5 cognitive patterns with brief evidence/examples]
### What You Do Well:
[2-3 positive patterns]

## 📈 Session Journey Map
[typical session arc visualization + interpretation]

## ⏰ Chrono Analysis
[time block bars + peak/worst analysis]

## 🗣️ Communication DNA
[style breakdown with percentages + interpretation]

## 📐 Scope & Focus
[context switching score + scope discipline findings]

## 🌱 Growth Areas
[2-3 specific, actionable tips tied to actual data]

## 🎲 Fun Stats
[interesting numbers, project distribution, tools, etc.]

## 🔑 Key Takeaway
[One personalized paragraph — insightful, motivating, human]

---
*Generated by Introspect 🔍 — Run again in a week to track your growth!*

Tone Guidelines

  • Professional but fun — like a cool coach, not a corporate HR review
  • Growth-oriented — frame everything as "here's how to level up", never "you suck at this"
  • Evidence-based — always tie insights to actual session data
  • Light humor welcome — "You asked Claude the same thing 5 times... persistence is a virtue? 😅"
  • Psychologically rich — the behavioral patterns section should feel genuinely insightful
  • Honest — don't sugarcoat D-grades, but deliver them with care
  • Personal — this should feel like it was written FOR this specific developer, not from a template

Important Constraints

  • Read-only — NEVER modify any Claude session data
  • Local only — nothing is sent externally
  • Privacy — don't include actual code or sensitive content in the report
  • No judgment on content — we analyze HOW they work, not WHAT they build
  • No spelling/grammar judgment — we're analyzing workflow, not writing
