Skill v1.0.0
ClawScan security
OpenClaw Copilot CLI Wrapper · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Mar 4, 2026, 8:32 AM
- Verdict: suspicious
- Confidence: medium
- Model: gpt-5-mini
- Summary: The skill is coherent with its stated purpose (wrapping the GitHub Copilot CLI), but its runtime instructions encourage broad local and network access (e.g., --allow-all, auto-loading AGENTS.md, reading ~/.copilot/logs, and using interactive PTY/send-keys), creating a real data-exposure risk that users should evaluate before enabling it.
- Guidance: This skill is coherent with its stated purpose (wrapping the GitHub Copilot CLI), but it instructs you to run Copilot with flags and modes that can access and transmit local files and session data. Before installing or using it:
  1. Only install the copilot binary from official sources (npm @github/copilot or Homebrew).
  2. Avoid --allow-all / --yolo unless you understand and accept that Copilot may access local files, URLs, and system tools.
  3. Treat AGENTS.md auto-loading and session outputs (session.md, ~/.copilot/logs) with caution: they can contain workspace content, so review them and store them securely.
  4. Consider running Copilot in a sandbox/container, or against a copy of the workspace with sensitive data removed.
  5. If you enable autonomous agent invocation, restrict when and how this skill can be called (or require explicit user confirmation), because autonomous calls combined with broad flags increase exposure risk.
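Step 4 above (working against a sanitized copy of the workspace) can be sketched as a small shell script. The exclude patterns and paths here are illustrative assumptions, not part of the skill; adjust them to whatever secret-bearing files your workspace actually contains.

```shell
# Sketch: prepare a sanitized copy of the workspace before pointing Copilot at it.
# SRC, DST, and the exclude list are illustrative assumptions.
set -eu

SRC="${1:-.}"                        # original workspace
DST="$(mktemp -d)/workspace"         # throwaway sanitized copy

cp -r "$SRC" "$DST"

# Remove common secret-bearing files and heavyweight directories from the copy.
find "$DST" \( -name '.env' -o -name '*.pem' -o -name '*.key' \) -exec rm -f {} +
rm -rf "$DST/.git" "$DST/node_modules"

echo "Sanitized copy at: $DST"
# From here you could run Copilot inside a container mounted on the copy only,
# e.g. (image name is a placeholder):
#   docker run --rm -it -v "$DST":/work -w /work <image-with-copilot>
```

The point of the container mount in the final comment is that Copilot then sees only the sanitized copy, regardless of which flags it is launched with.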
Review Dimensions
- Purpose & Capability
- ok: Name/description match the instructions: the SKILL.md only documents invoking the GitHub Copilot CLI, installing via npm/brew, and authenticating via GitHub. There are no unrelated required env vars, binaries, or config paths declared.
- Instruction Scope
- concern: The instructions explicitly recommend flags and modes that grant the Copilot process broad access (e.g., --allow-all / --yolo), run it interactively via PTY with process send-keys, and reference auto-loading of AGENTS.md and logs at ~/.copilot/logs. Those steps can cause the CLI to read local files and session state, or upload workspace content to external services: behavior beyond merely generating text, and therefore a potential data-exfiltration and privacy risk.
- Install Mechanism
- ok: No install spec is baked into the skill (it is instruction-only). The README-style instructions point to official npm or Homebrew installs of @github/copilot, which are standard installation routes and not in themselves suspicious.
- Credentials
- note: The skill declares no required env vars or credentials, which is proportionate. However, runtime use requires a GitHub login/Copilot subscription (interactive OAuth/token storage), and the recommended flags may cause local files or credentials to be accessed or transmitted by the Copilot service. The SKILL.md does not request unrelated credentials, but it implicitly relies on GitHub auth and locally stored tokens.
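Since the review flags ~/.copilot/logs as a place where workspace content can accumulate, a quick local audit of that directory is easy to sketch. The path comes from the skill's own instructions; the search terms are illustrative assumptions.

```shell
# Sketch: review what session data Copilot has written locally.
# The ~/.copilot/logs path is taken from the skill's instructions;
# the grep patterns below are illustrative, not exhaustive.
LOGDIR="$HOME/.copilot/logs"

if [ -d "$LOGDIR" ]; then
  # List the log files so you can see what exists and how recent it is.
  ls -la "$LOGDIR"
  # Flag lines that look like leaked secrets (case-insensitive, text files only).
  grep -rIiE -n 'password|token|secret' "$LOGDIR" || echo "no obvious secrets found"
else
  echo "no logs directory found at $LOGDIR"
fi
```

Running this before and after a Copilot session shows concretely what the session left behind, which is the cheapest way to judge the exposure the review describes.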
- Persistence & Privilege
- note: always:false and no requests to modify other skills, which is appropriate. But allow-list flags and interactive automation, combined with the platform-default ability for agents to invoke skills autonomously, increase the potential blast radius if the agent is permitted to call this skill without human oversight.
