Context Hub for OpenClaw

v0.1.1

Use Context Hub (chub) to fetch versioned API docs/skills before coding, then persist learnings with annotations.

by swissashley@victorlin-houzz · MIT-0
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description match the required binary ('chub') and the install spec (the npm package @aisuite/chub). No unrelated env vars, binaries, or config paths are requested.
Instruction Scope
SKILL.md instructs the agent to run local chub commands, save fetched docs to project paths (e.g., .context/<id>.md), and use annotate/feedback commands that likely send data to the Context Hub service. The doc explicitly advises asking the user before sending feedback, which mitigates risk, but the skill does give the agent capabilities to persist and potentially transmit project-derived information.
Install Mechanism
Install uses an npm package (@aisuite/chub) that creates the 'chub' binary. This is a common, expected install method for a CLI, but npm packages can run install scripts and pull code from registries — moderate risk compared to instruction-only skills. No arbitrary URL downloads or obscure hosts are used.
Credentials
No environment variables, credentials, or external config paths are requested. The skill's declared requirements are minimal and appropriate for its purpose.
Persistence & Privilege
The skill is not always-enabled and does not request elevated system-wide changes. However, autonomous invocation + ability to run chub annotate/feedback means the agent could upload annotations or send feedback if permitted — SKILL.md recommends asking the user before sending, which you should enforce in practice.
Assessment
This skill appears coherent: it installs the chub CLI (an npm package) and instructs the agent to fetch, store, and annotate API docs. Before installing, consider:

  1. Review the @aisuite/chub package source (and its published maintainer) to ensure you trust the registry package.
  2. Run installs in a controlled environment if you want to inspect install scripts.
  3. Avoid annotating or uploading sensitive code, secrets, or private endpoints; the chub annotate/feedback commands likely transmit data externally, so ensure the agent asks for explicit user confirmation before sending anything off-host.
  4. If you require stricter controls, block network/write actions or audit the annotations directory (.context) after use.

Like a lobster shell, security has layers — review code before you run it.


License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🧠 Clawdis
Bins: chub

Install

Install Context Hub CLI (npm)
Bins: chub
npm i -g @aisuite/chub

SKILL.md

Context Hub for OpenClaw

Use this skill whenever implementation depends on third-party APIs/SDKs or fast-changing tools.

When to use

Trigger this skill when the user asks to:

  • integrate with OpenAI/Anthropic/Stripe/etc.
  • write SDK/API code where versions matter
  • debug integration failures likely caused by doc drift
  • create reusable internal implementation playbooks

Do not rely only on memorized API shapes. Fetch current docs first.

OpenClaw trigger heuristics (explicit)

Use this decision rule before coding:

  • HIGH confidence trigger (use Context Hub immediately):

    • Request mentions an external API/SDK by name (openai, anthropic, stripe, supabase, etc.)
    • Task includes auth, webhooks, function calling, streaming, uploads, pagination, or retries
    • User asks for production-ready integration code/tests
  • MEDIUM confidence trigger (use Context Hub unless local repo docs are clearly authoritative and current):

    • Refactor/migrate API client code
    • Fix runtime errors that look like contract drift (400 invalid param, schema mismatch, deprecated endpoint)
    • Add features across multiple languages/runtimes where SDK behavior may differ
  • LOW confidence trigger (Context Hub optional):

    • Pure business logic with no third-party integration
    • Trivial formatting/renaming changes
    • Internal-only modules with stable local docs

Fast OpenClaw checklist

If 2+ of these are true, run chub first:

  1. External API/SDK involved
  2. Version-specific behavior likely
  3. Endpoint/schema uncertainty exists
  4. Failure cost is high (payments/auth/data integrity)
  5. Existing code recently broke after dependency updates
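As a rough illustration, the 2-of-5 rule can be expressed as a small shell function. The `should_run_chub` helper is hypothetical, not part of the chub CLI; it only encodes the threshold stated above.

```shell
# Hypothetical helper (not part of chub): count how many of the five
# checklist signals are true; at 2 or more, fetch docs first.
should_run_chub() {
  count=0
  for signal in "$@"; do
    if [ "$signal" = "yes" ]; then
      count=$((count + 1))
    fi
  done
  if [ "$count" -ge 2 ]; then
    echo "run chub first"
  else
    echo "chub optional"
  fi
}

# e.g. external API involved + version-specific behavior likely:
should_run_chub yes yes no no no
```

The five positional arguments map, in order, to the five checklist items.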

Concrete trigger examples by provider

Stripe — trigger Context Hub first when:

  • Implementing or fixing webhook signature verification
  • Creating subscription/payment-intent flows with idempotency requirements
  • Handling API version mismatch errors or changed field semantics

Example:

chub search "stripe webhooks" --json
chub get stripe/api --lang js --json

OpenAI — trigger Context Hub first when:

  • Implementing chat/responses APIs with tool/function calling
  • Streaming responses or migrating from older endpoints/SDK patterns
  • Debugging model parameter mismatches, structured output schemas, or file/tool workflows

Example:

chub search "openai chat responses function calling" --json
chub get openai/chat --lang js --json

Core workflow (doc-first coding)

  1. Search candidates
chub search "<vendor api or sdk>" --json
  2. Fetch the best match (pin language/version when available)
chub get <id> --lang js --json
# or
chub get <id> --lang py --version <sdk-version> --json
  3. Pull only needed references to reduce token noise
chub get <id> --file references/errors.md
# or
chub get <id> --full
  4. Implement against fetched docs, not assumptions.
  5. Persist new learnings (only non-obvious, high-value findings)
chub annotate <id> "<gotcha + fix + context>"
  6. Optional quality feedback (ask user before sending)
chub feedback <id> up
chub feedback <id> down --label outdated --label wrong-examples
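Assuming the chub flags behave as documented above, the search-then-fetch steps can be wrapped in one helper. The `doc_first` function is a hypothetical sketch; choosing the right doc id from the search output is left to the agent, so the id is passed in explicitly here.

```shell
# Hypothetical wrapper around steps 1-2: list candidates, then fetch a
# chosen id into .context/ with the language pinned to js.
doc_first() {
  query="$1"
  id="$2"
  chub search "$query" --json           # step 1: list candidates
  mkdir -p .context
  out=".context/$(printf '%s' "$id" | tr '/' '-').md"
  chub get "$id" --lang js -o "$out"    # step 2: fetch and save locally
  echo "$out"
}

# usage: doc_first "stripe webhooks" stripe/api
```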

Annotation quality standard

Good annotation format:

  • Symptom: what broke
  • Cause: why docs/code failed in practice
  • Fix: exact change that worked
  • Scope: version/environment constraints

Example:

chub annotate stripe/api "Webhook signature failed in Next.js route handlers; use raw request body before JSON parse. Verified on stripe-node 17.x."

Avoid annotations that just restate obvious doc text.
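For consistency, the four fields can be assembled into one message before calling chub annotate. The `format_annotation` helper below is hypothetical; only the Symptom/Cause/Fix/Scope structure comes from this skill doc.

```shell
# Hypothetical helper: assemble a Symptom/Cause/Fix/Scope annotation
# string, then pass it to `chub annotate` (commented out here).
format_annotation() {
  printf 'Symptom: %s. Cause: %s. Fix: %s. Scope: %s.' "$1" "$2" "$3" "$4"
}

msg=$(format_annotation \
  "webhook signature failed in Next.js route handlers" \
  "JSON body was parsed before verification" \
  "verify against the raw request body" \
  "stripe-node 17.x")
# chub annotate stripe/api "$msg"
```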

OpenClaw integration pattern

  • Treat Context Hub as first source for coding accuracy.
  • Save references to project files when useful:
mkdir -p .context
chub get <id> --lang js -o .context/<id>.md
  • For repeated workflows, maintain a compact project playbook (e.g., shared/<project>/context-notes.md) and keep chub annotations concise.
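To avoid re-fetching on every run, the save step can be guarded on the local copy's existence. This `fetch_if_missing` sketch is an assumption layered on the documented `chub get <id> --lang js -o <file>` form, not part of the skill itself.

```shell
# Hypothetical cache guard: fetch a doc into .context/ only when no
# local copy exists yet, then print the local path.
fetch_if_missing() {
  id="$1"
  out=".context/$(printf '%s' "$id" | tr '/' '-').md"
  mkdir -p .context
  if [ ! -f "$out" ]; then
    chub get "$id" --lang js -o "$out"
  fi
  echo "$out"
}

# usage: ref=$(fetch_if_missing openai/chat)
```

If stale docs are a concern, delete the .context directory (or the single file) to force a fresh fetch.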

Useful commands

chub update
chub cache status
chub search "stripe webhooks"
chub get stripe/api --lang js
chub annotate --list --json
chub feedback --status

Safety + ops notes

  • Prefer --json for machine parsing in agent flows.
  • Keep annotation volume low and signal high.
  • If docs conflict with runtime behavior, annotate locally and continue with verified behavior.
  • Use version targeting (--version) when package major versions differ.

Files: 2 total