SuperThink
Pass. Audited by ClawScan on May 10, 2026.
Overview
SuperThink is an automated research pipeline whose behavior is disclosed in its artifacts, but it requires an Anthropic API key, runs unattended for hours, stores outputs locally, and may notify user-configured endpoints.
Before installing, make sure you are comfortable with a multi-hour unattended run, the stated API cost, local persistence under `/data/superthink` and `./batch-jobs`, and any configured notification destination. Use a dedicated Anthropic key if possible and inspect generated helper scripts before executing them.
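Before starting a run, the checklist above can be reduced to a small pre-flight check. This is a minimal sketch, not part of the skill: the required variable name (`ANTHROPIC_API_KEY`) and the optional notification variables (`NOTIFY_WEBHOOK_URL`, `TELEGRAM_BOT_TOKEN`, `TELEGRAM_CHAT_ID`) come from the audit artifacts, while the function itself is hypothetical.

```python
import os

REQUIRED = ["ANTHROPIC_API_KEY"]
OPTIONAL_NOTIFY = ["NOTIFY_WEBHOOK_URL", "TELEGRAM_BOT_TOKEN", "TELEGRAM_CHAT_ID"]

def preflight(env=None):
    """Return human-readable warnings to review before approving a run."""
    env = os.environ if env is None else env
    warnings = []
    for var in REQUIRED:
        if not env.get(var):
            warnings.append(f"missing required variable: {var}")
    # Surface any notification destinations so nothing fires unexpectedly.
    configured = [v for v in OPTIONAL_NOTIFY if env.get(v)]
    if configured:
        warnings.append("notifications enabled via: " + ", ".join(configured))
    return warnings
```

Running `preflight()` before launch makes an accidental notification destination visible instead of silent.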
Findings (5)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A run can consume Anthropic API credits, and the key must be protected like a password.
The skill requires a sensitive Anthropic API key for its main batch-processing workflow. This is expected for the stated purpose, but it grants access to a paid external API.
ANTHROPIC_API_KEY ... required for all batch API calls ... sensitive: true
Use a dedicated or restricted API key if possible, monitor usage, and do not set optional notification tokens unless you need them.
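If the key must appear in logs or status output at all, it should be masked. The helper below is a generic sketch (not part of the skill) for redacting a secret before printing it:

```python
def mask_key(key: str, keep: int = 4) -> str:
    """Mask a secret for display, keeping only the last `keep` characters."""
    if len(key) <= keep:
        return "*" * len(key)
    return "*" * (len(key) - keep) + key[-keep:]
```

For example, `mask_key("sk-ant-12345678")` yields `***********5678`, which is enough to distinguish keys without exposing them.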
After you approve the scope, the workflow may continue for 6–12 hours without asking for more approvals.
After the initial confirmation, the skill intentionally continues work via long-running pollers. The artifacts describe this behavior and say the pollers are self-terminating.
fully hands-off after scope confirmation ... Each stage uses an idempotent self-terminating cron poller
Only start a run when the scope and expected cost are acceptable, and know how to stop or clean up the pollers in your environment.
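One way to clean up stuck pollers is to filter the user crontab for entries installed by the skill. The sketch below assumes the poller entries contain an identifiable marker string; the marker value `superthink` is an assumption, not something the artifacts guarantee, so verify what the generated entries actually look like first.

```python
def strip_poller_entries(crontab_text: str, marker: str = "superthink") -> str:
    """Return crontab text with lines mentioning the poller marker removed."""
    kept = [line for line in crontab_text.splitlines() if marker not in line]
    return "\n".join(kept)
```

In practice this would be wired up as `crontab -l`, filtered through the function, then written back with `crontab -`, after eyeballing the removed lines.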
Job metadata, result paths, or final delivery notices may be sent outside the local environment if notification variables are configured.
The notification mechanism can send completion information to user-configured external channels. This is optional and disclosed.
Send an HTTP request — POST to a webhook URL from `NOTIFY_WEBHOOK_URL` env var ... Send a Telegram message — if `TELEGRAM_BOT_TOKEN` and `TELEGRAM_CHAT_ID` env vars are set
Only configure webhook or Telegram destinations you trust, and avoid sending sensitive research topics or outputs to shared channels.
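A completion notice does not need to carry the research topic at all. As a sketch of that mitigation (the payload fields and function are hypothetical, not the skill's actual notification code), the topic can be opt-in:

```python
import json

def build_completion_payload(topic_slug: str, result_dir: str,
                             include_topic: bool = False) -> str:
    """Build a minimal JSON completion notice; omit the topic unless opted in."""
    payload = {"status": "complete", "result_dir": result_dir}
    if include_topic:
        payload["topic"] = topic_slug
    return json.dumps(payload)
```

The resulting JSON string would then be POSTed to the `NOTIFY_WEBHOOK_URL` destination; with the default, a shared channel only learns that a run finished and where the results live locally.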
The real runtime behavior depends on the generated helper scripts, including how they handle API keys, files, and notifications.
The package supplies specifications for local helper scripts rather than the helper code itself, so the code that actually executes is generated or implemented later.
Implement this as: `batch-worker.py` ... python3 batch-worker.py submit --job-file path/to/job.json
Review the generated `batch-worker.py` and `md2docx.py` before running them, especially the network, file-writing, and credential-handling code.
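A quick, grep-style pass can surface the lines worth reading first. This is a rough triage aid of my own, not a substitute for review; the pattern list is an assumption about what "network, file-writing, and credential-handling code" tends to look like in Python:

```python
import re

RISK_PATTERNS = {
    "network": r"\b(requests|urllib|socket|http\.client)\b",
    "file-write": r"\bopen\([^)]*['\"]w",
    "credentials": r"\b(ANTHROPIC_API_KEY|TELEGRAM_BOT_TOKEN|NOTIFY_WEBHOOK_URL)\b",
}

def flag_risky_lines(source: str):
    """Return (line_no, category, line) tuples worth reviewing by hand."""
    hits = []
    for no, line in enumerate(source.splitlines(), 1):
        for cat, pattern in RISK_PATTERNS.items():
            if re.search(pattern, line):
                hits.append((no, cat, line.strip()))
    return hits
```

Running it over the generated `batch-worker.py` and `md2docx.py` gives a short list of lines to inspect before anything executes.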
Research topics, generated analysis, and job state may remain on disk after the run completes.
The skill stores generated research outputs and pipeline state on disk for later stages and final delivery. This is disclosed and scoped to the pipeline.
All output is written to a persistent directory ... /data/superthink/[topic-slug]/ ... pipeline-state.json ... master-brief.md ... synthesis.md ... brief.md
Use an appropriate storage path, avoid sensitive topics unless the environment is trusted, and delete old run directories when no longer needed.
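Old run directories under the persistent output path can be purged on a schedule. The sketch below assumes one directory per run under `/data/superthink` (as the artifacts describe) and uses directory mtime as the age signal, which is an approximation; it defaults to a dry run so nothing is deleted until the list looks right.

```python
import shutil
import time
from pathlib import Path

def purge_old_runs(base: str = "/data/superthink",
                   max_age_days: int = 30,
                   dry_run: bool = True):
    """Return run directories older than max_age_days; delete them unless dry_run."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for run_dir in Path(base).iterdir():
        if run_dir.is_dir() and run_dir.stat().st_mtime < cutoff:
            stale.append(run_dir)
            if not dry_run:
                shutil.rmtree(run_dir)
    return stale
```

Calling `purge_old_runs()` first with the default dry run, then with `dry_run=False`, keeps the cleanup reviewable.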
