clawlens

Review

Audited by ClawScan on May 10, 2026.

Overview

Clawlens appears purpose-aligned, but it needs review because it reads broad OpenClaw conversation history, uses local model auth profiles, sends summaries to an LLM provider, and caches analysis results.

Install or run this only if you are comfortable with a usage-report tool reading your OpenClaw chat history and using your configured model credentials. Before running, choose the provider deliberately, limit the date/session scope, and check or delete the .clawlens-cache afterward if the conversations contain sensitive information.

Findings (3)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Private conversation history and derived summaries may leave the local machine and may also remain in a local cache after the report is generated.

Why it was flagged

The declared capabilities show that the skill processes historical chat logs, stores derived cache data, and sends conversation-derived summaries to an external model provider.

Skill content
reads: ~/.openclaw/agents/{agentId}/sessions/*.jsonl ... writes: ~/.openclaw/agents/{agentId}/sessions/.clawlens-cache/ ... external-api: LLM provider specified by --model via litellm (sends conversation transcript summaries for analysis)
Recommendation

Before running, reduce the scope with --days and --max-sessions, confirm the destination model/provider, avoid using it on highly sensitive sessions, and inspect or delete the .clawlens-cache when finished.
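The scope-limiting advice above can be sketched in code. The function below is a hypothetical illustration, not clawlens's actual implementation (which the scan does not show); it assumes `--days` filters session files by modification time and `--max-sessions` caps a newest-first list:

```python
from datetime import datetime, timedelta
from pathlib import Path

def select_sessions(sessions_dir: Path, days: int, max_sessions: int) -> list[Path]:
    """Keep only session transcripts modified within the last `days` days,
    newest first, capped at `max_sessions` files."""
    cutoff = datetime.now() - timedelta(days=days)
    candidates = sorted(
        sessions_dir.glob("*.jsonl"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,  # newest first
    )
    recent = [p for p in candidates
              if datetime.fromtimestamp(p.stat().st_mtime) >= cutoff]
    return recent[:max_sessions]
```

Shrinking either parameter directly shrinks the set of transcripts that can ever reach the model provider.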

What this means

Report generation can draw on the user's configured model account and quota, and it relies on sensitive local credential material.

Why it was flagged

The script reads the local OpenClaw auth profile and uses an API key/token for model calls.

Skill content
auth_path = openclaw_dir / "agents" / agent_id / "agent" / "auth-profiles.json" ... Returns: (litellm_model_string, api_base_url, api_key)
Recommendation

Only run it for an agent profile you trust, verify the selected provider before approving, and prefer a manually supplied model/API key if you do not want it to use OpenClaw’s saved auth profile.

What this means

Large histories may generate many model requests, especially with the documented defaults of a long analysis window and high session limit.

Why it was flagged

The implementation is designed to make per-session and parallel LLM calls; this is expected for report generation, but it can consume provider quota and incur cost.

Skill content
Stage 2: LLM Facet Extraction (per-session, cached) ... Stage 4: Report Generation (parallel LLM)
Recommendation

Start with a small --days value or lower --max-sessions, and use --verbose so you can monitor progress and stop if the run is larger than expected.
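The per-session caching named in Stage 2 is the main cost control here: a transcript whose content has not changed should not trigger a second model call. A minimal sketch of that idea (hypothetical; the cache key and file layout are assumptions, not clawlens's verified design):

```python
import hashlib
import json
from pathlib import Path
from typing import Callable

def cached_facets(session_path: Path, cache_dir: Path,
                  extract: Callable[[Path], dict]) -> dict:
    """Cache facet extraction per session, keyed by the transcript's
    content hash, so unchanged sessions are never re-sent to the provider."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    key = hashlib.sha256(session_path.read_bytes()).hexdigest()
    cache_file = cache_dir / f"{key}.json"
    if cache_file.exists():
        return json.loads(cache_file.read_text())  # cache hit: no model call
    result = extract(session_path)  # the costly LLM call
    cache_file.write_text(json.dumps(result))
    return result
```

Note the flip side flagged in the first finding: the same cache persists conversation-derived data on disk, so it should be inspected or deleted after runs over sensitive sessions.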