Prompt Cache
SHA-256 prompt deduplication for LLM and TTS calls — hash normalized prompts, check the cache before calling APIs, store results for instant replay. Use when maki...
MIT-0 · Free to use, modify, and redistribute. No attribution required.
⭐ 0 · 267 · 2 current installs · 2 all-time installs
by Nissan Dookeran (@nissan)
MIT-0
Security Scan
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The stated purpose (local prompt cache for LLM/TTS) matches the code's intent, but required pieces are missing or inconsistent: the implementation imports a 'database' module (not provided or declared), and the SKILL.md shows a schema that does not match the INSERT used in code (INSERT references prompt_text which the schema does not define). These inconsistencies mean the skill cannot reliably provide the advertised capability without additional configuration or code changes.
Instruction Scope
SKILL.md promises local-only cache operations and lists a schema, usage, and supported backends, but it gives no instructions on how to supply or configure the 'database' module/connection (no connection string, no env vars, no adapter code). The code itself performs DB queries via db.execute — that could target local or remote DBs depending on the runtime binding, which is unspecified.
Install Mechanism
There is no install spec (instruction-only), which minimizes installer risk. However, the package includes a code file that depends on an external 'database' module/backend. The absence of an install or dependency declaration means the environment must already provide a compatible database binding, which is an operational gap rather than an install risk.
Credentials
The skill declares no required environment variables or primary credential, yet real use with Postgres/Turso/SQLite will require DB credentials or connection configuration. This mismatch (no declared env vars but DB dependency present) is disproportionate and leaves unclear how sensitive connection data should be provided or protected.
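One conventional way to close this gap is a single declared environment variable read at startup, failing fast when it is absent. The variable name below is hypothetical — the skill defines no such variable:

```python
import os

def require_db_url() -> str:
    """Read the (hypothetical) PROMPT_CACHE_DB_URL variable, failing fast
    with a clear message instead of a late connection error."""
    url = os.environ.get("PROMPT_CACHE_DB_URL")
    if not url:
        raise RuntimeError(
            "PROMPT_CACHE_DB_URL is not set; configure the cache database "
            "connection before using prompt_cache."
        )
    return url
```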
Persistence & Privilege
The skill does not request always-on installation and does not declare extra privileges. Autonomous invocation is allowed by default (not flagged). Nothing in the package requests system-wide persistence or modifies other skills.
What to consider before installing
Do not assume this will work out of the box. Before installing or using:
1. Confirm how the 'database' module is provided — the skill does not declare or install a DB adapter or connection details.
2. Fix the schema/SQL mismatch (SKILL.md's CREATE TABLE lacks prompt_text but the code INSERTs prompt_text), or adjust the INSERT to match the schema.
3. Ensure DB connection credentials/config are specified (and declared in requires.env) and stored securely.
4. Consider removing the broad exception swallow in set_cached so cache failures are visible.
5. Decide whether the truncated hash (first 32 hex chars) and inconsistent normalization (language not lowercased in hash) are acceptable for your collision/lookup needs.
If the author provides a corrected release addressing these points (clear DB adapter, declared env vars, matching schema), the skill would be coherent; in its current form it is unreliable and potentially misleading.
Like a lobster shell, security has layers — review code before you run it.
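Point (5) can be made concrete: a minimal key builder that normalizes every field the same way and keeps the full SHA-256 digest, rather than a truncated 32-character prefix. The function name is illustrative, not taken from the skill's code:

```python
import hashlib
import re

def cache_key(prompt: str, child_name: str, language: str) -> str:
    """Normalize every field identically (lowercase, collapse whitespace),
    then return the full SHA-256 hex digest (64 chars), not a prefix."""
    def norm(s: str) -> str:
        return re.sub(r"\s+", " ", s.strip().lower())
    combined = "|".join((norm(prompt), norm(child_name), norm(language)))
    return hashlib.sha256(combined.encode("utf-8")).hexdigest()
```

Because the language field is normalized too, "fr" and "FR" map to the same key — the inconsistency flagged above disappears.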
Current version: v1.0.0
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
Runtime requirements
💾 Clawdis
SKILL.md
Prompt Cache
A lightweight caching layer that prevents regenerating identical content. Saved approximately 60% of API quota in production by catching duplicate prompts before they hit the API.
How It Works
- Normalize the prompt (lowercase, collapse whitespace)
- Combine with context keys (user name, language, model)
- SHA-256 hash the combined key
- Check cache table for existing result
- On miss: call API, store result. On hit: return cached result instantly.
Usage
```python
import prompt_cache

# Check before calling expensive API
cached = await prompt_cache.get_cached(
    prompt="Tell me a story about clouds",
    child_name="Sophie",
    language="fr",
)
if cached:
    return cached  # Free! No API call needed.

# Cache miss — call the API
result = await generate_story(prompt, child_name, language)

# Store for next time
await prompt_cache.set_cached(prompt, child_name, language, result)
```
Schema
```sql
CREATE TABLE IF NOT EXISTS prompt_cache (
  prompt_hash TEXT NOT NULL,
  child_name TEXT NOT NULL,
  language TEXT NOT NULL,
  story_json TEXT,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (prompt_hash, child_name, language)
);
```
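Note that the security scan above flags a mismatch: the bundled code INSERTs a prompt_text column this schema does not define. One hedged reconciliation, shown here with stdlib sqlite3, is to add the missing column so the documented schema and the INSERT agree:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# The schema exactly as documented above.
db.execute(
    "CREATE TABLE IF NOT EXISTS prompt_cache ("
    " prompt_hash TEXT NOT NULL, child_name TEXT NOT NULL,"
    " language TEXT NOT NULL, story_json TEXT,"
    " created_at DATETIME DEFAULT CURRENT_TIMESTAMP,"
    " PRIMARY KEY (prompt_hash, child_name, language))"
)
# Hypothetical fix: add the column the bundled code's INSERT references.
db.execute("ALTER TABLE prompt_cache ADD COLUMN prompt_text TEXT")

# An INSERT naming prompt_text now succeeds against the documented schema.
db.execute(
    "INSERT INTO prompt_cache (prompt_hash, child_name, language,"
    " story_json, prompt_text) VALUES (?, ?, ?, ?, ?)",
    ("abc123", "Sophie", "fr", "{}", "tell me a story"),
)
```

The alternative fix — dropping prompt_text from the INSERT — avoids storing raw prompts at all, which may be preferable if prompts contain sensitive content.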
Adapt the Keys
The default implementation uses (prompt, child_name, language) as the cache key. Adapt to your domain:
- Chat completions: (system_prompt, user_message, model)
- TTS: (text, voice_id, model_id)
- Image gen: (prompt, seed, model, size)
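One way to support all of these variants with a single helper (an illustrative sketch — the function name is not from the skill) is to hash an arbitrary set of named key fields, sorted for determinism:

```python
import hashlib
import json

def make_cache_key(**fields) -> str:
    """Hash any set of key fields: sort keys for determinism and
    JSON-encode so values of basic types are unambiguous."""
    canonical = json.dumps(fields, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Chat completions
chat_key = make_cache_key(system_prompt="You are helpful",
                          user_message="hi", model="gpt-4o")
# TTS
tts_key = make_cache_key(text="hello", voice_id="v1", model_id="tts-1")
```

Sorting the keys means keyword order never changes the hash, so callers cannot accidentally fragment the cache.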
Files
scripts/prompt_cache.py — Cache implementation (35 lines)
