Skill v1.0.3
ClawScan security
Airplane AI / Offline AI Assistant · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Apr 30, 2026, 10:15 AM
- Verdict: benign
- Confidence: medium
- Model: gpt-5-mini
- Summary: The skill is internally consistent with its stated purpose (a local browser UI for local LLMs), but it includes an explicit arbitrary-file-read feature and a small external connectivity check that users should be aware of before running it.
- Guidance
- This skill appears to do what it advertises: provide a local browser chat UI for local LLM backends. Before running it, review the following:
  1. The frontend/server will accept a file path and return the file's contents to the model whenever the assistant outputs <<READ:/path>>. This is powerful but risky; if you plan to use it, consider editing the script to restrict readable directories (e.g., to a specific workspace) or to require user confirmation before returning file contents.
  2. The health check performs one external HTTPS probe to detect online status. Disconnect your network to truly test offline behavior, or remove the probe if you require zero external calls.
  3. Run the script as a normal user (not root), inspect the code yourself (it is small and stdlib-only), and consider running it in an isolated environment (VM/container) if the machine holds high-value secrets.
  4. Confirm the LLM backend you point it at is trusted and running locally (127.0.0.1); otherwise the UI will proxy to whatever URL you configure.
  If you want, I can show the exact lines that implement the file-read endpoint and how to patch them to enforce directory restrictions or require confirmation.
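The directory restriction suggested in point 1 can be sketched in a few lines of stdlib Python. This is a hedged illustration only: `ALLOWED_ROOT` and `is_allowed_path` are hypothetical names, not identifiers from the actual script.

```python
import os

# Hypothetical workspace root; adjust to the directory you want readable.
ALLOWED_ROOT = os.path.realpath(os.path.expanduser("~/workspace"))

def is_allowed_path(requested: str, root: str = ALLOWED_ROOT) -> bool:
    """Allow a read only if the resolved path lives inside the allowed root."""
    resolved = os.path.realpath(requested)  # collapses ".." and follows symlinks
    return os.path.commonpath([resolved, root]) == root
```

Resolving with `os.path.realpath` before comparing defeats `..` traversal and symlink tricks, which a naive string-prefix check would not.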
- Findings
- [urllib_request] expected: The script uses urllib.request to talk to local LLM endpoints (/v1/models and /v1/chat/completions), which is expected. It also performs a single external connectivity probe (https://clawhub.ai) in health_check to detect whether the machine is online; this is benign, but it is an external network call the user should know about.
- [socket_bind_listen] expected: The script binds a localhost port (default 127.0.0.1:8765) to serve the browser UI. Binding to localhost is expected for a local web UI; confirm you are comfortable with an HTTP server listening on that port in your user session.
- [filesystem_read_api] expected: An API reads arbitrary filesystem paths requested by the frontend when the assistant returns <<READ:...>>. This behavior is documented and intentional for the skill's purpose, but it can expose sensitive local files (SSH keys, tokens, passwords) to the model and should be treated as a privacy/security risk unless restricted.
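The single-probe health check flagged in the first finding typically looks like the sketch below; the real script's function name, timeout, and error handling may differ.

```python
import urllib.error
import urllib.request

def is_online(probe_url: str = "https://clawhub.ai", timeout: float = 3.0) -> bool:
    """One HTTPS probe: any answer from the server means the network is up."""
    try:
        urllib.request.urlopen(probe_url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server responded (even with an error), so we are online
    except OSError:
        return False  # DNS failure, timeout, or no route: treat as offline
```

Note that `urllib.error.URLError` subclasses `OSError`, so the final clause also covers DNS and socket failures. Deleting this function (or stubbing it to return False) removes the skill's only external call.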
Review Dimensions
- Purpose & Capability
- ok: Name/description match the delivered artifacts: the SKILL.md and scripts implement a local browser chat UI that proxies to a local LLM endpoint (LM Studio, Ollama, vLLM, etc.). No cloud credentials or unrelated binaries are requested, and the code's behavior (binding a localhost port, querying /v1/models, proxying chat completions) is appropriate for this purpose.
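The proxy behavior described here follows the OpenAI-compatible endpoint shapes named in this review. A minimal sketch of the outgoing request, assuming a helper like the hypothetical `build_chat_request` below (not taken from the script):

```python
import json
import urllib.request

def build_chat_request(llm_base: str, model: str, messages: list) -> urllib.request.Request:
    """Build a POST to the OpenAI-compatible chat endpoint of a local backend."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        llm_base.rstrip("/") + "/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Pointing `llm_base` at anything other than 127.0.0.1 would make the UI proxy to that remote URL, which is exactly the caution raised in the Guidance section.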
- Instruction Scope
- concern: The SKILL.md documents, and the code implements, an explicit file-read feature: when the assistant replies with <<READ:/path/to/file>>, the frontend posts that path to the local server, which reads the file and returns its content to the model. This is an intentional capability for local workflows, but it allows arbitrary filesystem reads (including secrets) if the model or user triggers it. Additionally, the health check performs an outbound HTTPS request (to https://clawhub.ai) to detect connectivity. This is a small external network call that does not exfiltrate chat content, but it contradicts the "never sends anything externally" reassurance unless the user understands it is only a connectivity probe.
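The marker-driven read flow described above can be illustrated with a small regex; this is a hypothetical sketch of the extraction step, and the pattern in the shipped script may differ.

```python
import re

# Matches the documented <<READ:/path>> marker; non-greedy so multiple
# markers in one reply are each captured separately.
READ_MARKER = re.compile(r"<<READ:(.+?)>>")

def extract_read_paths(assistant_reply: str) -> list:
    """Return every path the assistant asked the server to read."""
    return READ_MARKER.findall(assistant_reply)
```

Because the model's own output drives which paths get read, a prompt-injected reply could request files the user never intended to share, which is why the review recommends directory restrictions or a confirmation step.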
- Install Mechanism
- ok: No install spec (instruction-only with code file) — nothing is downloaded or installed automatically. The package includes a single Python script that uses only the standard library. This low-friction approach is proportionate for the stated functionality.
- Credentials
- ok: No required credentials or secret environment variables are declared. The script accepts optional environment overrides (host/port/LLM URL/model/persona), which are reasonable. There are no requests for unrelated cloud keys or system-level credentials.
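Optional environment overrides of this kind are usually read once at startup with stdlib defaults. The variable names below (CHAT_HOST, CHAT_PORT, LLM_URL, MODEL) are assumptions for illustration, not the script's actual names:

```python
import os

# Defaults match the review's observations (localhost bind, port 8765);
# the LLM URL default is an assumed example, not confirmed from the script.
HOST = os.environ.get("CHAT_HOST", "127.0.0.1")
PORT = int(os.environ.get("CHAT_PORT", "8765"))
LLM_URL = os.environ.get("LLM_URL", "http://127.0.0.1:11434/v1")
MODEL = os.environ.get("MODEL", "local-model")
```

Because every knob has a local default and none is secret, leaving them unset keeps the skill entirely on 127.0.0.1.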
- Persistence & Privilege
- ok: `always` is false; the skill does not request permanent platform privileges and does not attempt to modify other skills or system-wide agent settings. The macOS Automator packaging instructions simply run the local Python script in user context.
