suspicious.exposed_secret_literal
- Location: references/clawhub_publish_pack.md:243
- Finding: File appears to expose a hardcoded API secret or token.
Advisory. Audited by static analysis on May 10, 2026.
Detected: suspicious.exposed_secret_literal
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Installing and running the provider depends on code that is not included in the reviewed skill artifacts.
The runnable provider is installed from PyPI using a version range, while the skill package itself contains no code. This is central to the skill's purpose, and the documentation includes verification steps, but users should still verify the external package.
python -m pip install --upgrade "cascadeflow[openclaw]>=0.7,<0.8"
Install from the documented upstream source only, review the PyPI/GitHub package, and pin an exact version and hash where possible.
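One way to act on the hash-pinning advice is to record the artifact digest from a trusted source (e.g. the PyPI release page) and check the downloaded file before installing. A minimal sketch; the file path and expected hash are placeholders, not values taken from this skill:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded wheel/sdist against a digest recorded
    from a trusted source before running pip install on it."""
    return sha256_of(path) == expected_sha256.lower()
```

In practice, pip's native hash-checking mode does the same thing: put `cascadeflow[openclaw]==X.Y.Z --hash=sha256:...` in a requirements file and install with `python -m pip install --require-hashes -r requirements.txt`.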
These keys can spend money or access provider accounts if mishandled.
The skill expects sensitive provider API keys and service auth tokens. This is purpose-aligned for an LLM provider, but registry metadata declares no required credentials.
Provider key(s): `ANTHROPIC_API_KEY=...` and/or `OPENAI_API_KEY=...` ... Service tokens: `--auth-token ...` and `--stats-auth-token ...`
Use separate least-privilege/test keys, keep them out of source control, and rotate tokens if exposed.
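A small sketch of the keep-keys-out-of-source advice: read credentials from the environment rather than from code or config files, and fail fast by key *name* (never echoing values). The helper and its required-key list are illustrative, not part of the skill:

```python
import os

# Key names taken from the skill documentation; adjust to your setup.
REQUIRED_KEYS = ("ANTHROPIC_API_KEY", "OPENAI_API_KEY")

def load_provider_keys(required=REQUIRED_KEYS) -> dict:
    """Read provider keys from the environment instead of source code.
    Raises early, naming the missing keys but never printing values."""
    missing = [k for k in required if not os.environ.get(k)]
    if missing:
        raise RuntimeError(f"missing credentials: {', '.join(missing)}")
    return {k: os.environ[k] for k in required}
```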
Sensitive prompt content or metadata may be processed by the local CascadeFlow service and the configured LLM provider.
The data flow is disclosed and purpose-aligned, but user prompts and routing metadata pass through CascadeFlow and selected upstream model providers.
OpenClaw sends requests to CascadeFlow through the OpenAI-compatible `/v1/chat/completions` endpoint. CascadeFlow reads prompt context plus OpenClaw-native event/domain metadata.
Avoid sending secrets in prompts, keep the service on localhost when possible, and use TLS plus strong tokens for any remote deployment.
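The localhost-or-TLS rule can be enforced mechanically before any prompt leaves the client. A sketch of such a guard (the port and base URL are examples; the helper is not part of OpenClaw or CascadeFlow):

```python
from urllib.parse import urlparse

LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def endpoint_allowed(url: str) -> bool:
    """Allow a CascadeFlow base URL only if it is local, or remote over HTTPS.
    A plain-HTTP remote endpoint would expose prompts and auth tokens on the wire."""
    parts = urlparse(url)
    if parts.hostname in LOCAL_HOSTS:
        return True
    return parts.scheme == "https"
```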
OpenClaw cost displays or routing assumptions may not reflect actual OpenAI/Anthropic billing.
The example OpenClaw provider config declares zero model cost even though the setup uses paid upstream provider keys. This may be a placeholder, but users should not interpret it as free usage.
"cost": {"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0}Monitor upstream provider billing and CascadeFlow stats, and adjust local cost accounting if OpenClaw uses these fields for budgeting.
A background provider process may continue using API keys or serving requests until stopped.
The documentation includes an optional background mode that can leave the provider running after the setup command returns. It is disclosed and user-directed, not hidden persistence.
nohup cascadeflow-gateway --port 8084 --mode agent --config examples/configs/anthropic-only.yaml > /tmp/cf.log 2>&1 &
Run in foreground during testing, know how to stop the process, and restrict network exposure before using background mode.
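If you do use the documented background mode, pair it with an explicit stop procedure. A minimal sketch of the start/terminate lifecycle using generic subprocess handling (not cascadeflow-gateway specifics): terminate first, escalate to kill if the process ignores the signal.

```python
import subprocess

def run_and_stop(cmd, timeout=5.0) -> int:
    """Start a long-running process, then stop it deterministically.
    Knowing this lifecycle is the alternative to orphaned nohup jobs
    that keep serving requests with live API keys."""
    proc = subprocess.Popen(cmd)
    try:
        proc.terminate()               # polite SIGTERM first
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()                    # force if it will not exit
        return proc.wait()
```

For the `nohup` example above, the shell equivalent is recording the PID when launching (`echo $!`) and `kill`-ing it when finished.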