token-sisyphus

v1.0.2

Burn LLM tokens toward a target count to satisfy corporate AI usage KPIs. Trigger when user says: burn tokens, consume tokens, fill KPI, push the boulder, si...

by Near (@neardws)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for neardws/token-sisyphus.

Prompt preview (Install & Setup):
Install the skill "token-sisyphus" (neardws/token-sisyphus) from ClawHub.
Skill page: https://clawhub.ai/neardws/token-sisyphus
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: [object Object]
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install token-sisyphus

ClawHub CLI


npx clawhub@latest install token-sisyphus
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the included script and SKILL.md: the tool repeatedly issues chat/generation requests to consume tokens. The env vars and optional SDK installs (OpenAI/Anthropic/Gemini) correspond to the providers the skill targets. Note: the registry metadata shows 'Required env vars: [object Object]' which appears to be a formatting/parsing bug but the SKILL.md clearly documents the three provider API env vars.
Instruction Scope
Runtime instructions are narrowly scoped to installing provider SDKs, setting an API key, and running the bundled burn.py. The script does not read unrelated system files or secrets beyond provider API keys. One noteworthy capability: openai provider accepts an arbitrary --base-url, which is useful for compatible providers but means requests can be pointed to custom endpoints—ensure any custom endpoint is trusted to avoid sending data/tokens to an untrusted server.
Install Mechanism
This is instruction-only with a bundled Python script; no remote downloads or archive extraction. It optionally instructs pip installs for provider SDKs (openai, anthropic, google-generativeai), which is typical and low-risk compared to arbitrary downloads. The script will exit with an error if the required SDK is missing at runtime.
Credentials
Only provider API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY) are used, which is proportional to the stated functionality. These are sensitive credentials that grant billing and API access — the user should prefer scoped/test keys rather than org-wide or admin keys. The SKILL.md marks these env vars optional, but a live run will need one of them (or --api-key) to make real requests.
Persistence & Privilege
The skill is not always-on, is user-invocable, and does not request elevated or persistent system privileges or attempt to modify other skills or system-wide agent settings. Autonomous invocation is allowed by default but not combined with other red flags here.
Assessment
This skill is coherent but intentionally wasteful: it will make many API calls and can incur significant cost. Before running: 1) Use --dry-run to verify behavior without cost. 2) Use a scoped or test API key (do NOT supply org-wide admin keys). 3) If you use --base-url, only point to endpoints you trust (untrusted endpoints could receive the prompt text). 4) Start with a small target and monitor billing/usage. The registry metadata appears to have a formatting bug for env vars — double-check the env var names in SKILL.md before supplying credentials.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Env: OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY
Tags: claude, gemini, kpi, latest, llm, openai, satire, token
137 downloads
0 stars
2 versions
Updated 1mo ago
v1.0.2
MIT-0

token-sisyphus

Push the boulder. Watch it roll back. At least your KPI is green.

The burn script is bundled at scripts/burn.py — no external download required.

Setup

Install the SDK for your chosen provider:

pip install openai              # for openai provider (default)
pip install anthropic           # for claude provider
pip install google-generativeai # for gemini provider

Set the corresponding env var:

Provider              Env var
OpenAI / compatible   OPENAI_API_KEY
Claude                ANTHROPIC_API_KEY
Gemini                GEMINI_API_KEY
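
The provider-to-env-var mapping above can be sketched as a small lookup. This is a hypothetical helper (the names `ENV_VARS` and `resolve_api_key` are not from burn.py, which may resolve keys differently); it just illustrates the documented fallback order: an explicit `--api-key` wins, otherwise the provider's env var is used.

```python
import os

# Hypothetical mapping mirroring the provider/env-var table above.
ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "claude": "ANTHROPIC_API_KEY",
    "gemini": "GEMINI_API_KEY",
}

def resolve_api_key(provider, explicit_key=None):
    """Prefer an explicit --api-key; otherwise fall back to the env var."""
    if explicit_key:
        return explicit_key
    key = os.environ.get(ENV_VARS[provider])
    if not key:
        raise SystemExit(f"Set {ENV_VARS[provider]} or pass --api-key")
    return key
```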

Usage

Run the bundled script directly:

python {skillDir}/scripts/burn.py --target <amount> [options]

  --target       Token count: 50000, 100k, 1m  (required)
  --provider     openai | claude | gemini  (default: openai)
  --model        Model name (omit to use provider default)
  --api-key      API key (falls back to env var)
  --base-url     Custom endpoint URL (openai provider only)
  --max-tokens   Max tokens per request (default: 500)
  --delay        Seconds between requests (default: 0.5)
  --dry-run      Simulate without real API calls

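The `--target` option accepts plain integers or `k`/`m` suffixes (`50000`, `100k`, `1m`). A minimal sketch of how such parsing could work, assuming case-insensitive suffixes (the actual parsing in burn.py may differ):

```python
def parse_target(value):
    """Parse a target like '50000', '100k', or '1m' into a token count."""
    value = value.strip().lower()
    multipliers = {"k": 1_000, "m": 1_000_000}
    if value and value[-1] in multipliers:
        # Allow fractional prefixes such as '1.5k' -> 1500.
        return int(float(value[:-1]) * multipliers[value[-1]])
    return int(value)
```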
Common invocations

# OpenAI (default, gpt-4o-mini)
python {skillDir}/scripts/burn.py --target 100k

# Claude Haiku
python {skillDir}/scripts/burn.py --target 100k --provider claude --model claude-3-haiku-20240307

# Gemini Flash
python {skillDir}/scripts/burn.py --target 100k --provider gemini --model gemini-1.5-flash

# DeepSeek
python {skillDir}/scripts/burn.py --target 100k --base-url https://api.deepseek.com/v1 --model deepseek-chat

# Qwen / Tongyi
python {skillDir}/scripts/burn.py --target 100k --base-url https://dashscope.aliyuncs.com/compatible-mode/v1 --model qwen-turbo

# Kimi / Moonshot
python {skillDir}/scripts/burn.py --target 100k --base-url https://api.moonshot.cn/v1 --model moonshot-v1-8k

# Dry run (no real API calls, no cost)
python {skillDir}/scripts/burn.py --target 100k --dry-run
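
Conceptually, `--dry-run` walks the same loop as a real burn without issuing API calls. A hedged sketch of that loop (the function name `dry_run` and the zero default delay are assumptions for illustration; the real script presumably sleeps `--delay` seconds between requests):

```python
import time

def dry_run(target_tokens, max_tokens=500, delay=0.0):
    """Simulate burning tokens in max_tokens-sized requests, no API calls."""
    burned = 0
    requests = 0
    while burned < target_tokens:
        burned += min(max_tokens, target_tokens - burned)
        requests += 1
        time.sleep(delay)
    return requests, burned
```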

Provider defaults

Provider   Default model
openai     gpt-4o-mini
claude     claude-3-haiku-20240307
gemini     gemini-1.5-flash

Cost note

Each request uses up to --max-tokens (default 500) tokens. Running --target 100k will make ~200 requests. Use --dry-run first to verify behavior without incurring API costs. Prefer scoped/limited API keys when testing.
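
The request estimate follows directly from the numbers above: requests ≈ target / max-tokens, rounded up. A quick worked check (`estimate_requests` is an illustrative name, not a function in burn.py):

```python
import math

def estimate_requests(target, max_tokens=500):
    """Upper-bound request count: each request emits at most max_tokens."""
    return math.ceil(target / max_tokens)

# 100k target at the default 500 max tokens -> about 200 requests.
```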
