Skill v3.3.1

ClawScan security

Free Scaling · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Reviewed: Mar 16, 2026, 3:22 AM
Verdict: Review
Confidence: high
Model: gpt-5-mini
Summary
The skill's code and runtime instructions broadly match the stated purpose (ensemble routing, ELO-based online learning), but several clear mismatches and missing declarations around required credentials, environment variables, and external integrations make the package incoherent and warrant caution.
Guidance
This package appears to implement what it claims (an ensemble/cascade with online ELO-based model selection), but consider several operational and security mismatches before installing:

- Credentials not declared: SKILL.md and the README instruct you to export NVIDIA_API_KEY and mention optional Copilot and Discord integrations, but the registry metadata lists no required env vars or primary credential. Inspect the code (call_model, call_copilot, feedback) to find exactly which tokens and endpoints are used before providing keys.
- Data sent to remote services: Your 'context' is placed into the system prompt and sent to external model endpoints (NVIDIA NIM and optionally Copilot). Do not pass secrets, private PII, or sensitive code you do not want transmitted to third-party models.
- Persistent local state: The skill writes ELO/tracking state to ~/.cache/free-scaling/elo.json (or FREE_SCALING_STATE_DIR). That file contains aggregated vote history and may include snippets of raw responses. If that concerns you, point FREE_SCALING_STATE_DIR at a controlled location or inspect and reset the file regularly.
- Undeclared optional integrations: Discord reaction feedback and GitHub Copilot routing are referenced, but tokens for those services are not declared in metadata. If you intend to use those features, find the exact env var names and confirm where tokens are stored and how the code uses them.
- Review network calls: For higher assurance, search the call_model / voter / feedback code for HTTP hosts and endpoints (build.nvidia.com, GitHub/Copilot, Discord) and verify they are legitimate. Run the code in an isolated environment or on non-sensitive data first.

Bottom line: the skill is functionally coherent with its stated purpose, but the missing declarations around required env vars/credentials and the persistence of usage data make it suspect until you verify the code paths that talk to external services and confirm which credentials will be used.
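The "credentials not declared" and "review network calls" checks above can be mechanized. The sketch below is a hypothetical helper (the function name and regexes are mine; only the env var and host names come from this review) that scans an unpacked skill directory for environment-variable reads and hard-coded HTTP(S) hosts before you decide to supply any keys:

```python
import re
from pathlib import Path

# Matches os.environ.get("NAME") and os.environ["NAME"] reads.
ENV_RE = re.compile(r"os\.environ(?:\.get\(|\[)\s*['\"]([A-Z0-9_]+)['\"]")
# Captures the host portion of any hard-coded http(s) URL.
URL_RE = re.compile(r"https?://([A-Za-z0-9.-]+)")

def audit_source(root):
    """Return (env_vars, hosts) referenced anywhere under root."""
    env_vars, hosts = set(), set()
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        env_vars.update(ENV_RE.findall(text))
        hosts.update(URL_RE.findall(text))
    return sorted(env_vars), sorted(hosts)
```

Running this on the skill bundle should surface NVIDIA_API_KEY, FREE_SCALING_STATE_DIR, OPENCLAW_WORKSPACE, and any Copilot/Discord hosts the metadata fails to declare.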
Findings
[system-prompt-override] expected: The skill intentionally places 'context' into the model system message to keep the user question clean, which the scanner flagged as 'system-prompt-override'. This behavior is expected for a model-evaluation/orchestration library, but it also increases risk: arbitrary context becomes part of the system prompt and is sent to remote models, so sensitive content may be disclosed or may influence model instructions.

Review Dimensions

Purpose & Capability
note: The files implement an ensemble/cascade, ELO scoring, benchmarking, and feedback loops consistent with the 'Free Scaling' description. However, the SKILL metadata declares no required environment variables or credentials, while the README and SKILL.md explicitly instruct the user to set NVIDIA_API_KEY and the code references other environment hooks (FREE_SCALING_STATE_DIR, OPENCLAW_WORKSPACE) and optional integrations (GitHub Copilot aliases, Discord reactions). The declared requirements and the package's actual needs are inconsistent.
Instruction Scope
concern: SKILL.md instructs users to export NVIDIA_API_KEY, directs use of feedback.resolve_by_reaction with Discord message IDs, and mentions Copilot integration; the runtime instructions and code place user-provided 'context' into model system prompts. The skill also automatically logs every scale() call into persistent ELO state and runs shadow challengers. The instructions therefore direct reads and writes of local state (~/.cache/free-scaling/elo.json) and transmission of user-provided context to remote model endpoints (NIM and, optionally, Copilot). The SKILL metadata declares neither these external endpoints nor the credential needs, and placing arbitrary context into the system prompt lets that content influence model behavior (and be sent to external services).
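The flagged pattern is easy to illustrate. A minimal sketch of what such a routing layer likely does (the function name and message shape are assumptions, not the skill's actual code): user-supplied 'context' is appended to the system message, so anything inside it both reaches the remote endpoint and sits in the instruction-bearing part of the prompt.

```python
def build_messages(question, context=""):
    """Hypothetical sketch: fold 'context' into the system prompt,
    keeping the user turn clean. Anything in context is transmitted
    to the remote model and can steer its instructions."""
    system = "You are a routing ensemble voter."
    if context:
        # Arbitrary user-provided context becomes part of the system prompt.
        system += "\n\nContext:\n" + context
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]
```

This is why the scanner treats the design as expected but risky: a secret pasted into context ends up verbatim in the system message sent off-host.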
Install Mechanism
ok: No install spec (instruction-only skill), and the package uses only the stdlib per SKILL.md. There is no remote download/install step in the registry metadata. The code bundle is included in the skill (many Python files), but nothing indicates it will fetch arbitrary executables or archives on install. No high-risk download URLs were found in the provided files.
Credentials
concern: The skill requires at least an NVIDIA_API_KEY at runtime (SKILL.md, README), and the code references FREE_SCALING_STATE_DIR and OPENCLAW_WORKSPACE, but the registry metadata lists no required env vars or primary credential. The code also documents optional integrations (GitHub Copilot aliases 'cp-*' and Discord reaction-based feedback) that would require additional, undeclared tokens. Requesting an API key for the model provider is reasonable, but failing to declare these env vars/credentials, and to document which additional tokens (Copilot, Discord) are needed, is both an incoherence and an operational risk (credential leakage or surprise network calls).
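Before supplying any keys, you can check which of the env vars named in this review are actually set in your environment. A small illustrative helper (the variable list is taken from this review; the helper itself is hypothetical):

```python
import os

# Env vars referenced in SKILL.md/README and the code, per this review.
REFERENCED = ["NVIDIA_API_KEY", "FREE_SCALING_STATE_DIR", "OPENCLAW_WORKSPACE"]

def credential_report(env=None):
    """Report which referenced env vars are set, without printing values."""
    env = os.environ if env is None else env
    return {name: ("set" if name in env else "unset") for name in REFERENCED}
```

Only set the variables a feature you actually use requires; leaving Copilot/Discord tokens unset keeps those undeclared code paths inert.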
Persistence & Privilege
note: The skill persists online-learning state to disk (default: ~/.cache/free-scaling/elo.json, or FREE_SCALING_STATE_DIR) and automatically logs votes from every scale() call; the 'always' flag is false. Writing its own state is normal for this functionality, but the persistent logging means usage data, and possibly user-provided contexts, are stored locally and used to alter routing. That behavior is coherent with the stated online-learning design, but users should understand it creates a long-lived record derived from their inputs.
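The persistent record can be audited directly. A sketch assuming the state path described above (the JSON layout is unknown, so the helper just reports whatever top-level keys and size it finds; the function names are hypothetical):

```python
import json
import os
from pathlib import Path

def state_path():
    # Default location per this review; FREE_SCALING_STATE_DIR overrides it.
    base = os.environ.get(
        "FREE_SCALING_STATE_DIR",
        os.path.join(Path.home(), ".cache", "free-scaling"),
    )
    return Path(base) / "elo.json"

def summarize_state(path=None):
    """Return {'keys': ..., 'bytes': ...} for the ELO state file, or None."""
    path = Path(path) if path else state_path()
    if not path.exists():
        return None
    data = json.loads(path.read_text())
    return {"keys": sorted(data), "bytes": path.stat().st_size}
```

Deleting the file (or pointing FREE_SCALING_STATE_DIR at a throwaway directory) resets the learned routing state if the accumulated history concerns you.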