clawlens

Analyze OpenClaw conversation history and generate a deep usage insights report covering usage stats, task classification, friction analysis, skills ecosystem, autonomous behavior audit, and multi-channel analysis.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description, declared inputs, and the included script all align: the skill reads OpenClaw session files and installed-skills metadata, extracts facets and aggregates stats, and sends session data to a configured LLM (via litellm) to generate reports. The requested environment variables (provider API keys) are appropriate for that purpose.
Instruction Scope
The SKILL.md and the script explicitly read ~/.openclaw/agents/{agentId}/sessions/* and ~/.openclaw/skills/ and send per-session transcripts (truncated to ~80K chars) to an external LLM for facet extraction and aggregated summaries. This is coherent with the stated analysis goal, but it does mean potentially sensitive chat content (which may include PII, credentials, tokens pasted by users, or other secrets) will be transmitted to a third-party LLM. The doc claims it does not read or store API keys or other OpenClaw auth files; the visible code supports the declared file reads and cache writes.
Install Mechanism
No install spec is present (instruction-only execution with an included Python script). That is low-risk in terms of automatic binary downloads or remote installers. The single Python script depends on litellm which the user must have available; no remote arbitrary archive downloads are specified.
Credentials
The skill requires one of the standard LLM API keys (DEEPSEEK_API_KEY, OPENAI_API_KEY, or ANTHROPIC_API_KEY) depending on --model, which matches the declared external LLM usage. No unrelated credentials, system-level secrets, or unusual environment variables are requested.
Persistence & Privilege
The skill is user-invocable (not always:true) and writes a local cache under the agent sessions directory (~/.openclaw/agents/{agentId}/sessions/.clawlens-cache/) to avoid re-sending the same sessions. It does not request permanent platform presence or modify unrelated skill configs according to the SKILL.md and the script contents shown.
Assessment
This skill does what it says: it will read your OpenClaw session logs and installed-skills directory and send session transcripts to whatever LLM provider you configure (DeepSeek/OpenAI/Anthropic) to extract facets and generate a Markdown report. Before installing/running:

  • Be aware transcripts may contain sensitive data (PII, passwords, API keys pasted in chats, secrets returned by tools). If that is a concern, sanitize or filter logs first, restrict --days/--max-sessions, or run against a local/on-prem model (if supported).
  • Verify you trust the configured LLM provider and understand its data retention/usage policy.
  • Review the script source (scripts/clawlens.py) yourself to confirm it matches the described behavior and that no additional endpoints are contacted.
  • Note the tool writes a local cache under the agent sessions path; if you remove results later, clear that cache as well.

If you need stricter privacy, prefer running with a local LLM or in an isolated environment.
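If you do want to sanitize transcripts before analysis, a minimal redaction pass over copies of the session logs might look like the sketch below. The regex patterns and the redaction approach are illustrative assumptions, not part of the skill itself; extend the pattern list for whatever secret shapes appear in your environment.

```python
import re

# Illustrative patterns for common secret shapes (assumed, not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{16,}"),   # OpenAI/DeepSeek-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),        # AWS access key IDs
    re.compile(r"ghp_[A-Za-z0-9]{36}"),     # GitHub personal access tokens
]

def redact(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace anything matching a known secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Run something like this over a copy of the sessions directory and point the script at the copy, rather than mutating the originals in place.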

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.3
latest: vk974sgfmxdrx1n9fc3ncw491qx8318nd


SKILL.md

Clawlens - OpenClaw Usage Insights

Generate a comprehensive usage insights report by analyzing conversation history.

When to Use

User says                              Action
"show me my usage report"              Run full report
"analyze my conversations"             Run full report
"how am I using Claw"                  Run full report
"clawlens" / "claw lens"               Run full report
"usage insights" / "usage analysis"    Run full report

How to Run

Execute the analysis script:

python3 scripts/clawlens.py [OPTIONS]

Options

Flag             Default     Description
--agent-id       main        Agent ID to analyze
--days           180         Analysis time window in days
--model          (required)  LLM model in litellm format (e.g. deepseek/deepseek-chat). API key must be set via env var.
--lang           zh          Report language: zh or en
--no-cache       false       Ignore cached facet extraction results
--max-sessions   2000        Maximum sessions to process
--concurrency    10          Max parallel LLM calls
--verbose        false       Print progress to stderr
-o / --output    stdout      Output file path
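The flag surface above can be sketched with argparse. This mirrors the documented defaults and is an assumption about the script's internals, not its actual source:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the documented CLI surface of scripts/clawlens.py (illustrative).
    p = argparse.ArgumentParser(prog="clawlens")
    p.add_argument("--agent-id", default="main", help="Agent ID to analyze")
    p.add_argument("--days", type=int, default=180, help="Analysis time window in days")
    p.add_argument("--model", required=True, help="LLM model in litellm format")
    p.add_argument("--lang", choices=["zh", "en"], default="zh", help="Report language")
    p.add_argument("--no-cache", action="store_true", help="Ignore cached facet results")
    p.add_argument("--max-sessions", type=int, default=2000, help="Maximum sessions to process")
    p.add_argument("--concurrency", type=int, default=10, help="Max parallel LLM calls")
    p.add_argument("--verbose", action="store_true", help="Print progress to stderr")
    p.add_argument("-o", "--output", default=None, help="Output file path (stdout if omitted)")
    return p
```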

Examples

# DeepSeek, 180 days, Chinese
DEEPSEEK_API_KEY=sk-xxx python3 scripts/clawlens.py --model deepseek/deepseek-chat

# OpenAI, English, last 7 days
OPENAI_API_KEY=sk-xxx python3 scripts/clawlens.py --model openai/gpt-4o --lang en --days 7

# Verbose, save to file
ANTHROPIC_API_KEY=sk-xxx python3 scripts/clawlens.py --model anthropic/claude-sonnet-4-20250514 --verbose -o /tmp/clawlens-report.md

Output

The script outputs a Markdown report to stdout (or to the file specified by -o). Progress messages go to stderr when --verbose is set.

The report includes all dimensions: usage overview, task classification, friction analysis, skills ecosystem, autonomous behavior audit, and multi-channel analysis.

Present the Markdown output directly to the user. Do not summarize or truncate it.

Model Configuration

--model is required. The model name and API key must follow litellm's provider format:

Provider            --model value                              Required env var
DeepSeek            deepseek/deepseek-chat                     DEEPSEEK_API_KEY
OpenAI              openai/gpt-4o                              OPENAI_API_KEY
Anthropic           anthropic/claude-sonnet-4-20250514         ANTHROPIC_API_KEY
OpenAI-compatible   openai/<model-id> + set OPENAI_API_BASE    OPENAI_API_KEY

The format is always <provider>/<model-id>. Refer to litellm docs for the full list of supported providers and their env var naming conventions.
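Validating the model string up front gives a clearer error than a failed LLM call later. The helper below is an assumed sketch, not taken from the script:

```python
def split_model(model: str) -> tuple[str, str]:
    """Split a litellm-style model string into (provider, model_id)."""
    if "/" not in model:
        raise ValueError(f"--model must be <provider>/<model-id>, got {model!r}")
    provider, model_id = model.split("/", 1)  # only split on the first slash
    return provider, model_id
```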

Data Source

The script reads conversation data from:

  • ~/.openclaw/agents/{agentId}/sessions/sessions.json (session index)
  • ~/.openclaw/agents/{agentId}/sessions/*.jsonl (per-session logs, including unindexed historical files)
  • ~/.openclaw/skills/ (installed skills directory for ecosystem analysis)

Cache is written to ~/.openclaw/agents/{agentId}/sessions/.clawlens-cache/facets/ to avoid re-analyzing the same sessions.

Privacy Notice

This skill sends conversation transcript data to an external LLM provider (specified by --model) for analysis. Specifically:

  • Stage 2 (Facet Extraction): Each session's conversation transcript (truncated to ~80K chars) is sent to the LLM to extract structured analysis (task categories, friction points, etc.). Results are cached locally so each session is only sent once.
  • Stage 4 (Report Generation): Aggregated statistics and session summaries (not raw transcripts) are sent to the LLM to generate the report sections.
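The ~80K-character cap in Stage 2 can be pictured as a simple head-truncation before the transcript is sent. The exact limit and strategy (head vs. tail, marker text) are assumptions for illustration:

```python
MAX_TRANSCRIPT_CHARS = 80_000  # documented ceiling; a hard cut is assumed here

def truncate_transcript(transcript: str, limit: int = MAX_TRANSCRIPT_CHARS) -> str:
    """Clip a transcript to the LLM payload limit, marking where the cut happened."""
    if len(transcript) <= limit:
        return transcript
    return transcript[:limit] + "\n[... truncated ...]"
```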

No API keys or credentials are read from or stored by this skill. The user must provide the LLM API key via environment variables before running the script. This skill does not access openclaw.json or auth-profiles.json.
