Skill v0.1.0

ClawScan security

Openclaw Skill Langchain Local · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Benign · Mar 10, 2026, 10:14 AM
Verdict
benign
Confidence
high
Model
gpt-5-mini
Summary
The skill's code and instructions match its stated purpose (running LangChain against a local Ollama model); nothing requests unrelated credentials or external installs, though there are minor implementation notes to be aware of.
Guidance
This skill appears to do what it says: run LangChain chains against a locally-hosted Ollama model. Before installing or running it:

1. Ensure you actually run ollama serve and have pulled a trusted local model (phi4-mini). The skill will send your prompts to whatever OLLAMA_BASE_URL is configured.
2. Do not change OLLAMA_BASE_URL to an unknown remote host unless you trust that host, because prompts and responses would be transmitted there.
3. Be cautious with 'devops' mode: the assistant returns shell commands; do not copy/paste commands that you haven't reviewed.
4. Note the small implementation mismatch: 'rag' mode is supported in prompts but not in LLM_CONFIG (it will use the chat LLM settings). If you rely on RAG/document retrieval, verify the skill's document integration before trusting outputs.
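The remote-host caution can be checked mechanically before any prompt is sent. A minimal sketch, assuming OLLAMA_BASE_URL is read from the environment (the review only says it is configurable in code, so the env-var lookup and the function name here are illustrative, not the skill's actual implementation):

```python
import os
from urllib.parse import urlparse

# Hosts treated as "local"; anything else deserves an explicit trust
# decision before prompts are transmitted. (Illustrative list.)
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def is_local_endpoint(url: str) -> bool:
    """Return True if the Ollama base URL points at the local machine."""
    host = urlparse(url).hostname
    return host in LOCAL_HOSTS

# Mirror the skill's reported default of http://localhost:11434
base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
if not is_local_endpoint(base_url):
    print(f"Warning: prompts would be sent to remote host "
          f"{urlparse(base_url).hostname!r}")
```

A check like this could sit at the top of the skill's CLI script so a misconfigured base URL fails loudly instead of silently shipping prompts off-machine.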

Review Dimensions

Purpose & Capability
note · Name/description align with the included code and SKILL.md: the skill uses langchain_ollama to call a local Ollama server (MODEL_NAME phi4-mini) and provides modes for coding, devops, chat, and rag. Minor inconsistency: LLM_CONFIG defines 'coding', 'chat', and 'devops' but not 'rag' — the code falls back to the 'chat' config for unknown modes while using the 'rag' system prompt. This is unlikely malicious but is an implementation inconsistency.
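The fallback behavior described above follows a common pattern. A sketch of how the mismatch plays out, with dictionary contents invented for illustration (only the mode names and the fallback-to-'chat' behavior come from the review):

```python
# Hypothetical reconstruction of the reported config/prompt mismatch.
LLM_CONFIG = {
    "coding": {"temperature": 0.2},
    "chat":   {"temperature": 0.7},
    "devops": {"temperature": 0.3},
    # note: no "rag" entry here
}

SYSTEM_PROMPTS = {
    "coding": "You are a coding assistant.",
    "chat":   "You are a helpful assistant.",
    "devops": "You return shell commands.",
    "rag":    "Answer using the retrieved documents.",  # prompt exists...
}

def resolve(mode: str):
    # ...but the config lookup falls back to 'chat' for unknown modes,
    # so 'rag' silently runs with the chat LLM settings.
    config = LLM_CONFIG.get(mode, LLM_CONFIG["chat"])
    prompt = SYSTEM_PROMPTS[mode]
    return config, prompt
```

With this shape, `resolve("rag")` returns the 'chat' config paired with the rag-specific prompt — harmless on its face, but worth verifying if RAG output quality matters to you.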
Instruction Scope
ok · SKILL.md instructs running a local Python script and pulling/running Ollama locally. The runtime instructions and code only interact with a local Ollama HTTP endpoint (default http://localhost:11434) and stdout; there are no instructions to read unrelated files, exfiltrate data, or contact external endpoints.
Install Mechanism
ok · No install spec is provided (instruction-only), and included code files are plain Python. Requirements are local: Ollama must be installed and a model pulled, plus Python packages. No downloads from arbitrary URLs or archive extraction are present.
Credentials
ok · The skill requires no environment variables or credentials. It uses a configurable OLLAMA_BASE_URL in code (defaulting to localhost), which is proportionate to its purpose. The only caution: if a user changes OLLAMA_BASE_URL to a remote host, queries would be sent there.
Persistence & Privilege
ok · The skill has no elevated persistence flags (always: false) and does not modify other skills or system-wide settings. It runs as an interactive CLI script and does not attempt autonomous persistence.