clawlens
Review
Audited by ClawScan on May 10, 2026.
Overview
Clawlens appears purpose-aligned, but it needs review because it reads broad OpenClaw conversation history, uses local model auth profiles, sends summaries to an LLM provider, and caches analysis results.
Install or run this only if you are comfortable with a usage-report tool reading your OpenClaw chat history and using your configured model credentials. Before running, choose the provider deliberately, limit the date/session scope, and check or delete the .clawlens-cache afterward if the conversations contain sensitive information.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Private conversation history and derived summaries may leave the local machine and may also remain in a local cache after the report is generated.
The evidence shows that the skill processes historical chat logs, stores derived cache data, and sends conversation-derived summaries to an external model provider.
reads: ~/.openclaw/agents/{agentId}/sessions/*.jsonl ... writes: ~/.openclaw/agents/{agentId}/sessions/.clawlens-cache/ ... external-api: LLM provider specified by --model via litellm (sends conversation transcript summaries for analysis)
Before running, reduce the scope with --days and --max-sessions, confirm the destination model/provider, avoid using it on highly sensitive sessions, and inspect or delete the .clawlens-cache when finished.
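If the analyzed conversations were sensitive, the cached analysis can be cleared after the report is generated. A minimal sketch, assuming the cache lives in a .clawlens-cache directory under the sessions folder as the evidence above indicates (the exact location may differ per install):

```python
import shutil
from pathlib import Path

def purge_clawlens_cache(sessions_dir: Path) -> int:
    """Delete the .clawlens-cache directory under a sessions folder.

    Returns the number of cached files removed. The cache location is
    taken from the review evidence and is not guaranteed for every
    version of the skill.
    """
    cache = sessions_dir / ".clawlens-cache"
    if not cache.is_dir():
        return 0
    # Count files first so the caller can see what was discarded.
    removed = sum(1 for p in cache.rglob("*") if p.is_file())
    shutil.rmtree(cache)
    return removed
```

Inspecting the directory contents before deleting (rather than removing it blindly) also lets you confirm what derived data the skill actually persisted.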
The report generation can use the user’s configured model account and quota, and it relies on sensitive local credential material.
The script reads the local OpenClaw auth profile and uses an API key/token for model calls.
auth_path = openclaw_dir / "agents" / agent_id / "agent" / "auth-profiles.json" ... Returns: (litellm_model_string, api_base_url, api_key)
Only run it for an agent profile you trust, verify the selected provider before approving, and prefer a manually supplied model/API key if you do not want it to use OpenClaw’s saved auth profile.
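The evidence suggests the skill resolves credentials from the agent's auth-profiles.json before making model calls. A minimal sketch of what that lookup likely resembles, assuming a flat JSON mapping of profile names to credential entries; the field names "model", "api_base", and "api_key" are hypothetical, chosen to match the documented return tuple:

```python
import json
from pathlib import Path

def load_auth_profile(openclaw_dir: Path, agent_id: str,
                      profile: str = "default") -> tuple[str, str, str]:
    """Return (litellm_model_string, api_base_url, api_key) for one agent.

    The file path matches the review evidence; the JSON structure and
    field names are assumptions for illustration, not the skill's
    verified schema.
    """
    auth_path = openclaw_dir / "agents" / agent_id / "agent" / "auth-profiles.json"
    profiles = json.loads(auth_path.read_text())
    entry = profiles[profile]
    return entry["model"], entry["api_base"], entry["api_key"]
```

Reading this file is why the finding flags credential exposure: whichever provider the resolved model string points at receives requests authenticated with the saved key, so verifying the selected profile before approving the run is the main control you have.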
Large histories may generate many model requests, especially with the documented defaults of a long analysis window and high session limit.
The implementation is designed to make per-session and parallel LLM calls; this is expected behavior for report generation, but it can consume provider quota and incur cost.
Stage 2: LLM Facet Extraction (per-session, cached) ... Stage 4: Report Generation (parallel LLM)
Start with a small --days value or lower --max-sessions, and use --verbose so you can monitor progress and stop if the run is larger than expected.
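To reason about quota before a large run, a back-of-envelope estimator can help; this sketch assumes one facet-extraction call per uncached session (Stage 2) plus a fixed batch of parallel report-generation calls (Stage 4). The per-stage counts are assumptions for illustration, not measured from the implementation:

```python
def estimate_llm_calls(num_sessions: int, cached_sessions: int = 0,
                       report_calls: int = 4) -> int:
    """Rough bound on model requests for one clawlens run.

    Assumes one extraction call per session not already in the cache,
    plus a fixed number of parallel report-generation calls; both
    figures are illustrative, not taken from the skill's source.
    """
    uncached = max(num_sessions - cached_sessions, 0)
    return uncached + report_calls

# e.g. 200 sessions with nothing cached:
# estimate_llm_calls(200) -> 204 requests against the provider quota.
```

Because the cache makes repeat runs much cheaper, the expensive run is the first one; shrinking --days or --max-sessions for that initial pass is where the recommendation above pays off.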
