ClawCache Free - LLM Cost Tracking & Caching

Pass. Audited by ClawScan on May 10, 2026.

Overview

This documentation-only skill appears consistent with its stated purpose of local LLM cost tracking and caching, but users should verify the external Python package it depends on and manage the local cache of prompts and responses it creates.

Before installing, confirm whether the intended package is `clawcache` or `clawcache-free`, review the external package source if possible, and remember that prompts and responses may be stored under `~/.clawcache` unless you configure another location.

Findings (2)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1 of 2

What this means

Installing the package will run code that was not included in these reviewed artifacts.

Why it was flagged

The skill itself includes no runnable code or install spec, so actual behavior depends on an external, unpinned Python package installed by the user.

Skill content
```bash
pip install clawcache
```
Recommendation

Verify the intended PyPI package, source repository, and version before installing; prefer pinning a reviewed version.
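One way to pin is a requirements file in pip's hash-checking mode; the version and hash below are illustrative placeholders, not verified values for any real `clawcache` release:

```text
# requirements.txt — version and hash are illustrative placeholders;
# substitute the reviewed release and its actual artifact hash.
clawcache==1.2.3 \
    --hash=sha256:<hash-of-the-reviewed-sdist-or-wheel>
```

Installing with `pip install --require-hashes -r requirements.txt` makes pip refuse any artifact whose hash differs from the pinned value, so a later upload under the same version cannot silently replace the code you reviewed.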

Finding 2 of 2

What this means

Sensitive prompts or responses may remain on disk and later be reused from cache instead of making a fresh model call.
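The cache-reuse behavior can be sketched as follows. This is an illustrative pattern, not the actual `clawcache` API: a cache keyed by prompt and model returns the stored response and skips the fresh model call entirely.

```python
import hashlib
import json

# In-memory stand-in for the persistent cache directory.
cache = {}

def cache_key(prompt: str, model: str) -> str:
    """Derive a stable key from the prompt/model pair."""
    return hashlib.sha256(json.dumps([prompt, model]).encode()).hexdigest()

def get_response(prompt: str, model: str, call_model) -> str:
    key = cache_key(prompt, model)
    if key in cache:
        return cache[key]          # cache hit: no fresh model call
    response = call_model(prompt)
    cache[key] = response          # prompt and response persist in the cache
    return response

calls = []
def fake_model(prompt: str) -> str:
    calls.append(prompt)
    return f"echo:{prompt}"

get_response("hi", "m1", fake_model)
get_response("hi", "m1", fake_model)  # served from cache; fake_model not called
```

Because the second lookup never reaches the model, a stale or sensitive cached response can be returned long after the original call, which is exactly why the cache's contents and location matter.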

Why it was flagged

The documented workflow saves prompts and LLM responses into a persistent local cache directory.

Skill content
```python
await cache.aset(prompt, response, model=model)
```
```bash
export CLAWCACHE_HOME=/path/to/cache  # Default: ~/.clawcache
```
Recommendation

Choose a protected cache location, avoid caching secrets, and periodically clear or manage the cache when working with sensitive data.
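A minimal hardening sketch, assuming `CLAWCACHE_HOME` is the environment variable documented by the skill; the directory here is a scratch location chosen for illustration:

```shell
# Create a cache directory readable only by the owner and point the
# skill's documented CLAWCACHE_HOME variable at it.
CACHE_DIR="$(mktemp -d)"
chmod 700 "$CACHE_DIR"             # owner-only access to cached prompts
export CLAWCACHE_HOME="$CACHE_DIR"

# Periodic cleanup: remove cached entries after working with sensitive data.
rm -rf "${CLAWCACHE_HOME:?}"/*
```

The `"${CLAWCACHE_HOME:?}"` expansion aborts if the variable is unset, guarding the `rm -rf` against accidentally operating on the filesystem root.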