usewhisper-autohook

Automatically fetches and injects Whisper memory context before responses and ingests conversation turns after, optimizing token usage for Telegram agents.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 487 · 0 current installs · 0 all-time installs
Security Scan
VirusTotal: Benign · View report →
OpenClaw: Benign (high confidence)
Purpose & Capability
The skill's name and SKILL.md describe automatic pre-query context retrieval and post-response ingestion for a Whisper Context service; the code implements those exact actions, plus an optional local proxy to reduce tokens. Required env vars (WHISPER_CONTEXT_API_KEY, WHISPER_CONTEXT_PROJECT, optional WHISPER_CONTEXT_API_URL) match the described external service. No unrelated credentials or binaries are requested.
Instruction Scope
Instructions ask the agent to call get_whisper_context before responding and ingest_whisper_turn after responses, and provide a system-prompt snippet to enforce this. This is consistent with the skill's goal but prescriptive ('Always do this. Never skip.'): functionally normal for a memory helper, but it means the agent will routinely send user messages and assistant replies to an external service (privacy/PII concerns). The SKILL.md documents required headers and proxy usage; it does not instruct reading arbitrary local files or unrelated env vars.
Install Mechanism
This is an instruction-only skill with an included Node script; there is no install spec that downloads arbitrary code. The repository ships a single .mjs file which is run via node; no external install URLs or archive extracts are used.
Credentials
Declared env vars (WHISPER_CONTEXT_API_KEY, WHISPER_CONTEXT_PROJECT, optional WHISPER_CONTEXT_API_URL) are proportionate to the purpose. The script optionally uses OPENAI_API_KEY or ANTHROPIC_API_KEY when run as a proxy; SKILL.md documents this. Users should be aware that running the proxy requires providing an upstream API key (the script will use it to call the upstream provider) and that the Whisper Context API key will be used to send full user/assistant content to the external service.
Persistence & Privilege
The skill persists a per-user/session context_hash to the local filesystem (in the user's home directory) to enable delta compression — this is consistent with its stated behavior but creates local files. The skill does not request always:true and does not modify other skills or system-wide agent settings. If you run the HTTP proxy, it will accept requests and forward them to an upstream provider using your upstream API key — run it only on trusted/private networks and protect that key.
Assessment
This skill appears to do what it says: it queries a Whisper Context service before responses and ingests turns afterwards, and it can run a local proxy to reduce token usage. Before installing or running it, consider:

1) Privacy: the script will transmit full user messages and assistant replies to the external Whisper Context API (and may auto-create the project); do not use it for sensitive or regulated data unless you trust the provider and understand their retention policy.
2) Local files: it stores small state (context_hash) under your user home directory to enable delta compression; inspect the script to find the exact path if you need to manage it.
3) Proxy usage: if you run the proxy you must supply your upstream OPENAI/ANTHROPIC API key; run the proxy only on a local/private host and ensure the port is not publicly accessible.
4) Review the source: the author recommends reviewing the script before use; if you are not comfortable, do not provide API keys or run the proxy.

If you want a deeper check, provide the full untruncated usewhisper-autohook.mjs for line-by-line review (this assessment covered the provided excerpts).

Like a lobster shell, security has layers: review code before you run it.

Current version: v1.0.0
latest: vk972hnn4whce0mv3tj9gs0tc1h8122wj

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

usewhisper-autohook (OpenClaw Skill)

This skill is a thin wrapper designed to make "automatic memory" easy:

  • get_whisper_context(user_id, session_id, current_query) for pre-response context injection
  • ingest_whisper_turn(user_id, session_id, user_msg, assistant_msg) for post-response ingestion

It defaults to the token-saving settings you almost always want:

  • compress: true
  • compression_strategy: "delta"
  • use_cache: true
  • include_memories: true

It also persists the last context_hash locally (per api_url + project + user_id + session_id) so delta compression works by default without you needing to pass previous_context_hash.
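
The exact storage path and key scheme are internal to the script, but a minimal sketch of how such a per-scope state key could be derived is below. The directory name and hash scheme here are assumptions for illustration; inspect the script for the real location.

```javascript
// Sketch only: the real script's path/key scheme may differ.
// Derive a stable state-file key from the scoping fields so each
// (api_url, project, user_id, session_id) tuple gets its own slot.
import { createHash } from 'node:crypto';
import { homedir } from 'node:os';
import { join } from 'node:path';

function stateFileFor(apiUrl, project, userId, sessionId) {
  const scope = [apiUrl, project, userId, sessionId].join('\n');
  const key = createHash('sha256').update(scope).digest('hex').slice(0, 16);
  // Hypothetical directory name; check the script for the actual one.
  return join(homedir(), '.usewhisper-autohook', `${key}.json`);
}

const p = stateFileFor('https://context.usewhisper.dev', 'openclaw-yourname',
                       'telegram:123', 'telegram:456');
```

Because the key covers all four scoping fields, two different sessions for the same user never overwrite each other's last context_hash.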

Install (ClawHub)

npx clawhub@latest install usewhisper-autohook

Setup

Set env vars wherever OpenClaw runs your agent:

WHISPER_CONTEXT_API_URL=https://context.usewhisper.dev
WHISPER_CONTEXT_API_KEY=YOUR_KEY
WHISPER_CONTEXT_PROJECT=openclaw-yourname

Notes:

  • WHISPER_CONTEXT_API_URL is optional (defaults to https://context.usewhisper.dev).
  • The helper will auto-create the project on first use if it does not exist yet.

The "Auto Loop" Prompt (Copy/Paste)

Add this to your agent's system instruction (or equivalent):

Before you think or respond to any message:
1) Call get_whisper_context with:
   user_id = "telegram:{from_id}"
   session_id = "telegram:{chat_id}"
   current_query = the user's message text
2) If the returned context is not empty, prepend it to your prompt as:
   "Relevant long-term memory:\n{context}\n\nNow respond to:\n{user_message}"

After you generate your final response:
1) Call ingest_whisper_turn with the same user_id and session_id and:
   user_msg = the full user message
   assistant_msg = your full final reply

Always do this. Never skip.

If you are not on Telegram, keep the same structure: the important part is that user_id and session_id are stable.
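
In agent code, the loop above amounts to a fetch-before / ingest-after wrapper around your response function. A minimal sketch, where getContext, respond, and ingestTurn are stand-ins for however your framework exposes the two tools and the model call:

```javascript
// Sketch: wrap an arbitrary respond() function with the memory loop.
function buildPrompt(context, userMessage) {
  if (!context) return userMessage;
  return `Relevant long-term memory:\n${context}\n\nNow respond to:\n${userMessage}`;
}

async function respondWithMemory({ userId, sessionId, userMessage,
                                   getContext, respond, ingestTurn }) {
  // 1) Pre-response: fetch packed context for this user/session.
  const { context } = await getContext(userId, sessionId, userMessage);
  // 2) Inject it (if any) ahead of the user's message.
  const reply = await respond(buildPrompt(context, userMessage));
  // 3) Post-response: ingest the completed turn. Always; never skip.
  await ingestTurn(userId, sessionId, userMessage, reply);
  return reply;
}
```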

If Your Agent Still Replays Full Chat History (Proxy Mode)

If you cannot control how your agent/framework constructs prompts (it always sends the full conversation history), a system prompt cannot reduce token spend: the tokens are already sent to the model.

In that case, run the built-in OpenAI-compatible proxy so the network payload is actually reduced. The proxy:

  • receives POST /v1/chat/completions
  • queries Whisper memory
  • strips chat history down to system + last user message
  • injects Relevant long-term memory: ...
  • calls your upstream OpenAI-compatible provider
  • ingests the turn back into Whisper

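The history-stripping step can be sketched as a pure transform over an OpenAI-style messages array. This assumes the rule is "keep system messages plus the most recent user message"; the real script's rule may differ:

```javascript
// Sketch of the proxy's history reduction (assumed behavior):
// keep system message(s) and only the most recent user message,
// dropping intermediate turns that memory now summarizes.
function stripHistory(messages) {
  const system = messages.filter((m) => m.role === 'system');
  const lastUser = [...messages].reverse().find((m) => m.role === 'user');
  return lastUser ? [...system, lastUser] : system;
}

// Prepend the fetched memory as an extra system message.
function injectMemory(messages, context) {
  if (!context) return messages;
  return [{ role: 'system', content: `Relevant long-term memory:\n${context}` },
          ...messages];
}
```
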
Start the proxy:

export OPENAI_API_KEY="YOUR_UPSTREAM_KEY"
node usewhisper-autohook.mjs serve_openai_proxy --port 8787

Then point your agent’s OpenAI base URL to http://127.0.0.1:8787 (exact env/config depends on your agent).

If your agent supports overriding the upstream base URL, you can set:

  • OPENAI_BASE_URL (for OpenAI-compatible upstreams)
  • ANTHROPIC_BASE_URL (for Anthropic upstreams)

Or pass --upstream_base_url when starting the proxy.

For correct per-user/session memory, pass headers on each request:

  • x-whisper-user-id: telegram:{from_id}
  • x-whisper-session-id: telegram:{chat_id}
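
For example, a client sending a chat completion through the proxy would attach the IDs as plain HTTP headers. A sketch (the model name and helper are illustrative, not part of the skill):

```javascript
// Sketch: build a proxied chat request that carries per-user IDs.
function buildProxyRequest({ fromId, chatId, messages, model }) {
  return {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      'x-whisper-user-id': `telegram:${fromId}`,
      'x-whisper-session-id': `telegram:${chatId}`,
    },
    body: JSON.stringify({ model, messages }),
  };
}

// Usage:
//   fetch('http://127.0.0.1:8787/v1/chat/completions', buildProxyRequest({...}))
const req = buildProxyRequest({
  fromId: 123, chatId: 456, model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'hello' }],
});
```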

Anthropic Native Proxy (/v1/messages)

If your agent uses Anthropic's native API (not OpenAI-compatible), run the Anthropic proxy instead:

export ANTHROPIC_API_KEY="YOUR_ANTHROPIC_KEY"
node usewhisper-autohook.mjs serve_anthropic_proxy --port 8788

Then point your agent’s Anthropic base URL to http://127.0.0.1:8788.

Pass IDs via headers (recommended):

  • x-whisper-user-id: telegram:{from_id}
  • x-whisper-session-id: telegram:{chat_id}

If you do not pass headers, the proxies will attempt to infer stable IDs from OpenClaw's system prompt / session key if present. This is best-effort; headers are still the most reliable.

CLI Usage (what the tools call)

All commands print JSON to stdout.

Get packed context

node usewhisper-autohook.mjs get_whisper_context \
  --current_query "What did we decide last time?" \
  --user_id "telegram:123" \
  --session_id "telegram:456"

Ingest a completed turn

node usewhisper-autohook.mjs ingest_whisper_turn \
  --user_id "telegram:123" \
  --session_id "telegram:456" \
  --user_msg "..." \
  --assistant_msg "..."

For large content, pass JSON via stdin:

echo '{ "user_msg": "....", "assistant_msg": "...." }' | \
  node usewhisper-autohook.mjs ingest_whisper_turn \
  --session_id "telegram:456" \
  --user_id "telegram:123" \
  --turn_json -
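
From a Node host, the same stdin-JSON convention can be driven programmatically. A sketch; buildIngestInvocation is a hypothetical helper, not part of the skill:

```javascript
// Sketch: prepare argv + stdin payload for the ingest CLI, keeping
// large message bodies out of the argument list (and shell quoting).
function buildIngestInvocation({ userId, sessionId, userMsg, assistantMsg }) {
  return {
    args: ['usewhisper-autohook.mjs', 'ingest_whisper_turn',
           '--user_id', userId, '--session_id', sessionId,
           '--turn_json', '-'],
    stdin: JSON.stringify({ user_msg: userMsg, assistant_msg: assistantMsg }),
  };
}

// Usage with child_process:
//   const { args, stdin } = buildIngestInvocation({...});
//   const child = spawn('node', args);
//   child.stdin.end(stdin);
```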

Output Format

get_whisper_context returns:

  • context: the packed context string to prepend
  • context_hash: a short hash you can store and pass back as previous_context_hash next time (optional)
  • meta: cache hit and compression info (useful for debugging)
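
A caller can feed context_hash back on the next call to enable delta compression. A sketch of the consuming side, with a hypothetical result object whose fields match those documented above:

```javascript
// Sketch: consume a get_whisper_context result and carry the hash
// forward so the next call can request only a delta against it.
function nextCallParams(result, base = {}) {
  return {
    ...base,
    ...(result.context_hash
      ? { previous_context_hash: result.context_hash }
      : {}),
  };
}

// Hypothetical result shape, per the fields documented above.
const result = {
  context: 'User prefers concise answers.',
  context_hash: 'ab12cd34',
  meta: { cache_hit: false, compression: 'delta' },
};
const params = nextCallParams(result, { user_id: 'telegram:123' });
```

The skill already persists this hash locally, so passing previous_context_hash manually is only needed if you manage state yourself.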

Files

3 total
