TurboQuant Context

v0.1.0

Compress conversation context into a TurboQuant vector store inside OpenClaw memory, then retrieve the most relevant entries on demand to stay within token budgets.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for twacqwq/turboquant.

Prompt preview (Install & Setup):
Install the skill "TurboQuant Context" (twacqwq/turboquant) from ClawHub.
Skill page: https://clawhub.ai/twacqwq/turboquant
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: uv
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install turboquant

ClawHub CLI


npx clawhub@latest install turboquant
Security Scan
VirusTotal
Benign
OpenClaw
Benign (high confidence)
Purpose & Capability
The name and description (compress conversation context into a compressed vector store) match the instructions: the ingest, assemble, compact, and store-info commands all operate on a local store path. The declared dependency ('uv') is the only external requirement and matches the CLI invocations shown.
Instruction Scope
SKILL.md only instructs the agent to run the uv CLI against a local store and local .npy/.npz files, and to persist data under ~/.openclaw/memory/turboquant-$OPENCLAW_SESSION_ID. It does not ask the agent to read unrelated system files, call external web endpoints, or exfiltrate data. It does assume the caller produces embeddings and places them in files the tool will read.
Install Mechanism
There is no install spec (instruction-only), so nothing will be downloaded or written by the skill itself. The only runtime requirement is that a trusted 'uv' binary be available on PATH; the skill also mentions running 'uv sync' once — this is an operational note but not an installer.
Credentials
No environment variables, credentials, or config paths are required by the skill. It references $OPENCLAW_SESSION_ID and $TURN_NUMBER as context-derived names for store paths/ids, which is proportionate to maintaining per-session memory.
Persistence & Privilege
The skill persists a store under the user's home directory (~/.openclaw/memory/...), which is expected for a memory compression tool. The 'always' flag is false, and autonomous invocation is allowed (the platform default). Persistence across restarts is a design choice appropriate for the stated purpose, but users should be aware that data will be stored on disk.
Assessment
This skill is coherent with its stated purpose, but check a few practical things before installing:

  • Ensure the 'uv' CLI you provide is a trusted implementation (inspect the upstream repo https://github.com/openclaw/openclaw-turboquant if you want to review the code).
  • The skill will create and read files under ~/.openclaw/memory/, so avoid placing highly sensitive data there unless you accept local persistence.
  • Embeddings must be produced by you and saved as .npy files the tool will read; confirm your workflow for generating and securing those files.
  • If you need stricter privacy, consider encrypting the memory directory or using ephemeral session IDs so the store does not persist across restarts.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Clawdis
Bins: uv
Latest: vk972n6w9xeyyfabmp5x9dn9hkh83vmmv
148 downloads · 0 stars · 1 version
Updated 4w ago
v0.1.0
MIT-0

TurboQuant Context Compression Skill

Maintains a compressed vector store in OpenClaw's memory directory (~/.openclaw/memory/turboquant-$OPENCLAW_SESSION_ID) and provides three lifecycle commands: ingest → assemble → compact.

Store path convention

The store path is automatically derived from the session:

STORE=~/.openclaw/memory/turboquant-$OPENCLAW_SESSION_ID

All commands default to this path when --store is omitted.
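As a sketch of this convention, the same path can be derived in Python (default_store_path is a helper name introduced here for illustration, not part of the skill):

```python
import os

def default_store_path(session_id: str) -> str:
    # Mirrors the convention ~/.openclaw/memory/turboquant-$OPENCLAW_SESSION_ID
    return os.path.expanduser(f"~/.openclaw/memory/turboquant-{session_id}")

# Falls back to a placeholder ID when run outside an OpenClaw session
path = default_store_path(os.environ.get("OPENCLAW_SESSION_ID", "demo-session"))
print(path)
```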


Phase A — Ingest (after each turn)

After every message, save the embedding vector of that message text to the store. Embeddings must be pre-computed and saved as a .npy file by the caller.

# Add one message to the store
uv run openclaw-turboquant ingest \
  --id "turn_$TURN_NUMBER" \
  --text "Full message text goes here" \
  --embedding /tmp/turn_embedding.npy

# Output:
# {"action":"ingest","entry_id":"turn_5","store_size":5,"store_path":"...","ok":true}

--store, --dim, and --bit-width are inferred automatically on subsequent calls. Set --dim only when creating a store for the first time.


Phase B — Assemble (before building the prompt)

Retrieve the most relevant past entries that fit within the token budget.

uv run openclaw-turboquant assemble \
  --query /tmp/current_query.npy \
  --token-budget 4096

# Output (one JSON object per line, sorted by relevance):
# {"role":"context","content":"...","entry_id":"turn_3","score":0.912}
# {"role":"context","content":"...","entry_id":"turn_1","score":0.743}

Include the returned content fields as context messages in the system prompt.
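A sketch of folding that JSONL output into context messages; the sample lines below are illustrative stand-ins, not real command output:

```python
import json

# Stand-in for the assemble command's stdout (one JSON object per line)
raw_output = """\
{"role":"context","content":"User asked about bit-width tradeoffs.","entry_id":"turn_3","score":0.912}
{"role":"context","content":"Agreed on a 4096-token budget.","entry_id":"turn_1","score":0.743}
"""

# One dict per JSONL line, already sorted by relevance
entries = [json.loads(line) for line in raw_output.splitlines() if line.strip()]

# Fold the content fields into context messages for the system prompt
context_messages = [{"role": "system", "content": e["content"]} for e in entries]
print(len(context_messages))  # 2
```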


Phase C — Compact (when the context window is filling up)

Remove the least relevant entries when the store grows too large. Trigger this when store_size exceeds a threshold (e.g., 50).

uv run openclaw-turboquant compact \
  --query /tmp/current_query.npy \
  --keep-ratio 0.5

# Output:
# {"action":"compact","before":50,"after":25,"removed":25,"ok":true}
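A sketch of the trigger logic, using store-info output to decide when to run compact; should_compact is a helper introduced here for illustration, not part of the CLI:

```python
import json

COMPACT_THRESHOLD = 50  # matches the suggested store_size trigger above
KEEP_RATIO = 0.5        # value to pass to `compact --keep-ratio`

def should_compact(store_info_json: str) -> bool:
    # store_info_json is the JSON object printed by `store-info`
    return json.loads(store_info_json)["size"] > COMPACT_THRESHOLD

sample = ('{"path":"/tmp/store","size":51,"dim":1536,'
          '"bit_width":4,"memory_bytes":12288,"memory_kb":12.0}')
print(should_compact(sample))  # True
```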

Inspect the store

uv run openclaw-turboquant store-info

# Output:
# {"path":"/...","size":25,"dim":1536,"bit_width":4,"memory_bytes":12288,"memory_kb":12.0}

Low-level commands (batch file operations)

# Compress a batch of embeddings from a .npy file
uv run openclaw-turboquant compress \
  --input embeddings.npy --output compressed.npz --bit-width 4

# Retrieve top-k from a compressed file index
uv run openclaw-turboquant retrieve \
  --query query.npy --index compressed.npz --top-k 5

# Benchmark distortion quality
uv run openclaw-turboquant benchmark --dim 128 --bit-width 4 --n-vectors 1000
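For intuition about what low-bit compression involves, here is an illustrative uniform 4-bit scalar quantizer round-trip. This is not TurboQuant's actual algorithm, only a generic sketch of the technique:

```python
import numpy as np

def quantize_4bit(x: np.ndarray):
    """Uniform 4-bit scalar quantization over the vector's own range."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 15 if hi > lo else 1.0
    codes = np.round((x - lo) / scale).astype(np.uint8)  # 16 levels: 0..15
    return codes, lo, scale

def dequantize_4bit(codes: np.ndarray, lo: float, scale: float) -> np.ndarray:
    return codes.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
a = rng.standard_normal(1536).astype(np.float32)

codes, lo, scale = quantize_4bit(a)
approx = dequantize_4bit(codes, lo, scale)

# Error stays modest relative to the vector's norm; packed 4-bit codes take
# 8x less space than float32 before per-vector metadata overhead.
rel_err = float(np.linalg.norm(a - approx) / np.linalg.norm(a))
print(f"relative reconstruction error: {rel_err:.3f}")
```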

Notes

  • Embeddings are not generated by this skill. The caller must produce .npy vectors using any embedding model (e.g., text-embedding-3-small).
  • The store persists across turns inside ~/.openclaw/memory/ so context survives session restarts.
  • bit-width 4 is the recommended default: ~6× compression with negligible quality loss.
  • Use prod mode (default) for retrieval; it provides unbiased inner-product estimation.
  • Requires uv on $PATH and uv sync run once in the project directory.
