Skill v1.0.1

ClawScan security

MLX Local AI · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Mar 9, 2026, 8:11 PM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The skill broadly matches its described purpose (deploy a local LLM+embedding stack), but several mismatches and risky choices — especially use of an unknown Hugging Face mirror plus the --trust-remote-code behavior and missing embedding server file — make it suspicious and worth manual review before running.
Guidance
This package mostly does what it says (sets up a local LLM + embedding stack), but there are several red flags you should address before running:

- Unknown model mirror: install/start scripts set HF_ENDPOINT to https://hf-mirror.com. That mirror will serve model artifacts and possibly model-level code. Treat it as untrusted until you verify its operator and content; prefer official Hugging Face endpoints or local model files.
- Remote code execution risk: start_ai.sh launches the chat server with --trust-remote-code. Combined with an untrusted model mirror, this allows arbitrary Python from the model repository to run on your machine. Only use --trust-remote-code with repositories you trust.
- Missing file: install.sh expects scripts/embedding_server.py to exist (it copies it to $HOME/embedding_server.py), but the package does not include it. The embedding service may not start as advertised; inspect or provide that server script before installing.
- Supply chain and reproducibility: install.sh uses pip install with no pinned versions, which can pull unexpected package versions. Review the pip packages (mlx, mlx-lm, sentence-transformers, etc.) and consider pinning versions or auditing them before installation.
- Persistent environment changes: SKILL.md suggests appending a source line to your ~/.zshrc, and install.sh creates files under $HOME. Back up your rc files first and review config/env.example before sourcing it.

Recommended actions: review the code for the mlx and mlx-lm packages, verify the identity and trustworthiness of hf-mirror.com or switch HF_ENDPOINT to a trusted source, remove --trust-remote-code or use it only for vetted model repos, supply or inspect embedding_server.py, and run the installation in an isolated environment (VM or container) if you want to test it safely.
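Most of these red flags can be surfaced mechanically before installing. The sketch below is a heuristic audit written for this report, not part of the skill; the grep patterns match the specific issues called out above (mirror override, --trust-remote-code, rc-file edits), and a clean result is not a guarantee of safety.

```shell
# audit_skill DIR: print any lines in a skill package that match the risky
# patterns discussed above. Heuristic only.
audit_skill() {
  dir="$1"
  # HF_ENDPOINT overrides that do not point at the official host
  grep -rn "HF_ENDPOINT" "$dir" 2>/dev/null | grep -v "huggingface.co" || true
  # --trust-remote-code lets a model repo execute arbitrary Python at load time
  grep -rn -- "--trust-remote-code" "$dir" 2>/dev/null || true
  # persistent shell-startup modifications
  grep -rn "\.zshrc\|\.bashrc" "$dir" 2>/dev/null || true
}

# Example: audit_skill ./clawhub-skill
```

Run it against the unpacked skill directory before executing install.sh, and inspect every line it prints.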

Review Dimensions

Purpose & Capability
Note: The files and scripts implement a local LLM + embedding service as described, but there are small inconsistencies: scripts expect an embedding_server.py to be copied to $HOME, yet scripts/embedding_server.py is not present in the package, and config/openclaw.json sets the agent's primary model to an external model (baiduqianfancodingplan/...) while the skill advertises local models, which could cause unexpected network usage. Overall capability is aligned, but some referenced artifacts are missing or point to external models.
Instruction Scope
Concern: Runtime instructions and scripts set HF_ENDPOINT to an unknown domain (https://hf-mirror.com), and the chat start command uses --trust-remote-code. That combination lets model loading pull and execute remote model code from the mirror, which can run arbitrary Python during load. SKILL.md and install.sh also instruct modifying shell startup (sourcing env.example from ~/.zshrc), which changes the user environment persistently. The scripts otherwise only manipulate files under the user's home directory.
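If you decide to keep the skill, one concrete mitigation is stripping the flag from the launch script before first use. A minimal sketch, assuming start_ai.sh lands at the path install.sh uses:

```shell
# harden_start_script [PATH]: remove --trust-remote-code from a launch script,
# keeping the original alongside as PATH.bak for review.
harden_start_script() {
  script="${1:-$HOME/start_ai.sh}"   # default path per install.sh
  cp "$script" "$script.bak"
  sed 's/ --trust-remote-code//g' "$script.bak" > "$script"
}
```

Review the .bak copy afterwards to confirm nothing else in the command line depends on remote code being trusted.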
Install Mechanism
Concern: There is no signed or verified install package; install.sh uses pip install with no pinned versions and relies on downloading models from an unverified HF mirror (hf-mirror.com). While no explicit wget/curl downloads of executables appear, model downloads via the mirror and the pip installs are a supply-chain vector. The missing embedding_server.py referenced by install.sh is a further coherence problem.
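One way to close the unpinned-pip gap is to capture the exact versions from an environment you have already vetted, then reinstall only from that lockfile. The package names below are the ones this report mentions; the lockfile name is illustrative:

```shell
# Record the versions actually installed in a vetted environment
python3 -m pip freeze | grep -i "mlx\|sentence-transformers" > requirements.lock || true
cat requirements.lock
# Elsewhere, install only those audited versions:
#   python3 -m pip install -r requirements.lock
```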
Credentials
Concern: The skill declares no required env vars, but scripts rely on HF_ENDPOINT and include comments for optional BAIDU_API_KEY/TAVILY_API_KEY. config/env.example is written, and SKILL.md instructs appending a source line to the user's ~/.zshrc, which introduces persistent environment changes. openclaw.json also embeds an apiKey value 'local-mlx' (local-only), and the agent's default primary model points to an external model, so the agent may call remote services despite the local-only claim.
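Rather than appending the suggested source line to ~/.zshrc, you can trial the env file in a throwaway subshell first. Nothing in the sketch below persists in your session or touches any rc file:

```shell
# try_env FILE: source an env file inside a subshell and show what it sets,
# without modifying the current shell.
try_env() {
  ( . "$1" && env | grep "HF_ENDPOINT\|API_KEY" ) || true
}

# Example: try_env config/env.example
```

Only after reviewing the printed values should you consider making the change permanent.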
Persistence & Privilege
Note: The skill does not request elevated privileges and is not 'always' enabled. However, install.sh copies start_ai.sh to ~/start_ai.sh, creates a virtual environment at $HOME/mlx-env, writes config/env.example, and suggests modifying ~/.zshrc to source it; these are persistent changes to the user's home environment. uninstall.sh offers removal steps, but model caches are left behind unless removed manually.
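Because uninstall.sh leaves the model cache behind, a full cleanup looks roughly like the following. The first two paths come from this report; the cache path is the Hugging Face default, which may differ if HF_HOME is set. Double-check each path before deleting:

```shell
# Remove everything install.sh created under $HOME
rm -rf "$HOME/mlx-env" "$HOME/start_ai.sh" "$HOME/embedding_server.py"
# Remove downloaded model artifacts (default HF cache; uninstall.sh skips this)
rm -rf "$HOME/.cache/huggingface"
# Finally, delete any 'source .../env.example' line added to ~/.zshrc by hand
```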