Inception Token Optimizer

v1.0.0

Optimize Inception Labs token usage to minimize costs. Use when choosing Inception models (Mercury, etc.), crafting prompts for Inception, analyzing token co...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description (token optimization for Inception models) matches the provided instructions and the included helper scripts (LRU cache and token-bucket rate limiter). No unrelated binaries, credentials, or config paths are requested.
Instruction Scope
SKILL.md stays focused on prompt compression, context pruning, caching, model selection, and budget guarding. It instructs the agent to estimate tokens, prune context, use an in-memory LRU cache, and call the TokenBucket before API calls. There are no directions to read system files, retrieve unrelated credentials, or transmit data to unexpected endpoints.
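The helper scripts themselves are not reproduced on this page. As a rough illustration of what an in-memory LRU cache for prompt/response pairs could look like (class name, method names, and the default size are hypothetical, not taken from the skill):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal in-memory LRU cache sketch for prompt/response pairs.

    Entries are kept in insertion order; a hit moves the entry to the
    end, and inserting past max_size evicts the least recently used.
    """

    def __init__(self, max_size=128):
        self.max_size = max_size
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.max_size:
            self._store.popitem(last=False)  # evict least recently used
```

As the review notes, a cache like this holds everything in process memory, so cached prompts and responses vanish on restart and are never written to disk or encrypted.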
Install Mechanism
This is an instruction-only skill with small, included Python helper scripts and no install spec or remote downloads. No high-risk install behavior is present.
Credentials
The skill declares no required environment variables, credentials, or config paths. That aligns with its stated function; it does not ask for unrelated secrets.
Persistence & Privilege
The skill does not request persistent or elevated platform privileges (its always flag is false). The included code uses only in-memory data structures and does not modify other skills or system-wide configuration.
Assessment
This skill appears coherent and safe for its stated goal. Before installing:
(1) Be aware that the token-bucket implementation counts tokens by storing timestamps, so large token counts can consume significant memory, and that the LRU cache is in-memory only (no durable storage or encryption for cached prompts/responses).
(2) You will still need your Inception API credentials elsewhere to actually call the API; this skill intentionally does not request them.
(3) Review and tune parameters (cache size, token estimation heuristic, max_tokens) for your workload.
If you plan to use it in a multi-tenant or high-throughput environment, test for memory/latency impact and consider a persistent or centralized cache/rate-limiter implementation.
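The memory caveat about the token bucket is easiest to see in code. The sketch below is not the skill's actual implementation; it simply shows one timestamp-per-token design (all names and defaults are hypothetical) in which consuming N tokens appends N entries to a deque, so a burst of large requests grows memory proportionally:

```python
import time
from collections import deque

class TokenBucket:
    """Sliding-window rate limiter that records one timestamp per
    consumed token, so memory use grows with the token count."""

    def __init__(self, max_tokens_per_window, window_seconds=60.0):
        self.max_tokens = max_tokens_per_window
        self.window = window_seconds
        self._timestamps = deque()  # one entry per consumed token

    def _evict_expired(self, now):
        # Drop timestamps older than the window before counting.
        while self._timestamps and now - self._timestamps[0] >= self.window:
            self._timestamps.popleft()

    def try_consume(self, n_tokens):
        now = time.monotonic()
        self._evict_expired(now)
        if len(self._timestamps) + n_tokens > self.max_tokens:
            return False  # over budget: caller should wait or shrink the request
        self._timestamps.extend([now] * n_tokens)
        return True
```

With a 100k-token window, a single maxed-out window stores 100k floats; a centralized limiter or a counter-based bucket (tokens refilled at a fixed rate) avoids that growth.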

Like a lobster shell, security has layers — review code before you run it.

latest · vk97evandns8t542d80nax6dvyn8393r1

