Skill v1.1.1
ClawScan security
Xpulse · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Mar 14, 2026, 6:17 PM
- Verdict: benign
- Confidence: high
- Model: gpt-5-mini
- Summary: The skill's code, instructions, and configuration are coherent with its stated purpose (DuckDuckGo scraping + local Ollama + optional Kalshi position gating); it requests no unrelated credentials and contains no obvious hidden endpoints or obfuscated behavior.
- Guidance: This skill appears to do what it says: it scrapes X via DuckDuckGo, analyzes posts with a local Qwen model (Ollama), and optionally gates alerts using your Kalshi positions. Before installing:
  1. Review and confirm you trust the skill source (owner unknown) and inspect the included scripts (they are present and readable).
  2. Place Kalshi credentials and the private key in a secure config file (the skill reads ~/.openclaw/config.yaml or alternative paths); ensure file permissions restrict access.
  3. Be aware the skill will create and store signal history and caches under ~/.openclaw/state (remove or encrypt these if you don't want local persistence).
  4. Ensure Ollama runs locally (localhost:11434) and that you are comfortable running a local LLM; the skill does not call any external LLM APIs.
  5. Install the Python dependencies from requirements.txt (ddgs, pyyaml, kalshi-python) in an isolated virtualenv.
  6. For extra caution, run the skill in an isolated environment or container to limit exposure of local state and keys.

  If you need the skill to avoid writing history, review and modify the code to change or disable state persistence before use.
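The local-LLM requirement in the guidance (Ollama on localhost:11434) can be verified before installing. A minimal pre-flight sketch, not part of the skill itself; the plain GET against the server root and the 2-second timeout are assumptions:

```python
import urllib.request


def ollama_reachable(base_url: str = "http://localhost:11434",
                     timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at base_url.

    The Ollama daemon replies to a GET on its root with a short
    banner, so a plain request is enough to confirm it is running.
    """
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False
```

If this returns False, start the daemon (e.g. `ollama serve`) before running the skill.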
- Findings
- [SUBPROCESS_RUN] expected: scripts/xpulse.py uses subprocess.run(['ollama','run',model]) as a CLI fallback to call local Ollama; expected for invoking a local binary.
- [LOCAL_HTTP_CALL] expected: code calls http://localhost:11434/api/generate to reach local Ollama. This is coherent with the requirement to use a local LLM instance.
- [WRITE_STATE_FILES] expected: the skill writes history and caches to ~/.openclaw/state/x_signal_history.json and related cache files. This is needed for materiality gating and morning-brief integration, but means alerts are stored locally.
- [ROBUST_JSON_PARSING] expected: scripts/json_utils.py implements permissive JSON extraction to handle varied model outputs. This is expected for resilient parsing of LLM responses, but permissive parsing means malformed or adversarial LLM output may be accepted in some fallback modes.
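The [ROBUST_JSON_PARSING] pattern usually looks like the sketch below: strict parsing first, then a permissive fallback that pulls the first {...} span out of surrounding prose. This illustrates the pattern the finding describes, not the actual scripts/json_utils.py code:

```python
import json
import re


def extract_json(text: str):
    """Parse LLM output as JSON, tolerating surrounding prose.

    Try strict json.loads first; on failure, fall back to the widest
    {...} span in the text. Returns None if nothing parses.
    """
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            return None
    return None
```

The fallback path is exactly where the finding's caveat applies: any {...} span a model (or an adversarial post echoed by the model) emits will be accepted if it parses.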
Review Dimensions
- Purpose & Capability
  - ok: Name/description (Xpulse: X/Twitter scanner for prediction market traders) match the code and SKILL.md. The skill uses DuckDuckGo scraping (ddgs fallback), a local Ollama Qwen model for LLM filtering, and Kalshi position reads for gating — all expected for the described functionality. Minor documentation inconsistency: the SKILL.md pip install line omits kalshi-python while requirements.txt includes it; a small packaging/documentation mismatch, not malicious.
- Instruction Scope
  - note: Runtime instructions and code confine activity to scanning site:x.com via DuckDuckGo, querying a local Ollama instance for analysis, and optionally fetching Kalshi positions. The skill reads configuration from user/home paths (~/.xpulse, ~/.openclaw, /etc/xpulse) and persists state to ~/.openclaw/state (x_signal_history.json and caches). This file I/O is expected for history/caching but is a persistence/privacy consideration: user alerts and signal caches are stored locally. The materiality gate's fail-closed behavior and the position gate's suppression of alerts are implemented as documented.
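The multi-path config lookup described above amounts to a first-match search over the candidate locations. A sketch under stated assumptions: only the three directories come from the review; the config.yaml filename and the precedence order are guesses for illustration:

```python
from pathlib import Path

# Directories named in the review; filename and ordering are assumed.
DEFAULT_CONFIG_PATHS = [
    Path("~/.xpulse/config.yaml").expanduser(),
    Path("~/.openclaw/config.yaml").expanduser(),
    Path("/etc/xpulse/config.yaml"),
]


def find_config(paths=None):
    """Return the first existing config file, or None if none exist."""
    for path in (paths if paths is not None else DEFAULT_CONFIG_PATHS):
        if path.is_file():
            return path
    return None
```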
- Install Mechanism
  - note: There is no install spec in the registry (instruction-only), but the package includes Python scripts and a requirements.txt. SKILL.md instructs pip install ddgs pyyaml (omitting kalshi-python) and installing Ollama locally. Because code ships with the skill, users should run pip install -r requirements.txt or install the documented packages. No remote download/install from unknown URLs is present. Overall low-to-moderate install risk (standard Python deps + local model runtime).
- Credentials
  - ok: The skill does not request environment variables or unrelated credentials. It requires a Kalshi API key ID and a local private key file path configured in config.yaml (documented), plus a locally running Ollama instance. These credentials are proportional to the position-matching and local-LLM needs. Note: credentials are stored in a config file under user-owned paths, so ensure the private key file permissions and config file storage meet your security policies.
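The permissions advice can be checked programmatically. A minimal sketch; the owner-only (0o600-style) policy is an assumed convention for private keys, not something the skill enforces:

```python
import os
import stat


def group_other_locked(mode: int) -> bool:
    """True if no group/other permission bits are set (e.g. 0o600, 0o400)."""
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0


def credentials_file_ok(path: str) -> bool:
    """Check that a credentials/key file exists and is owner-only."""
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        return False
    return group_other_locked(stat.S_IMODE(mode))
```

Run it against the config.yaml and the private key path before first use; `chmod 600` fixes a failing file.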
- Persistence & Privilege
  - note: The skill creates and writes state under the user's home directory (~/.openclaw/state) and caches signals/history there. It does not request always:true or attempt to modify other skills; it reads config files from several standard locations. Writing local state is expected for its purpose but is a persistence/privacy consideration: sensitive signal history and cached summaries are stored on disk.
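To audit or clear the persisted state the note describes, a small helper is enough. The ~/.openclaw/state directory comes from the review; restricting to regular files is an assumption:

```python
from pathlib import Path

STATE_DIR = Path("~/.openclaw/state").expanduser()


def list_state_files(state_dir: Path = STATE_DIR) -> list:
    """Return the names of persisted signal/cache files for review."""
    if not state_dir.is_dir():
        return []
    return sorted(p.name for p in state_dir.iterdir() if p.is_file())


def clear_state(state_dir: Path = STATE_DIR) -> int:
    """Delete persisted state files; returns the number removed."""
    if not state_dir.is_dir():
        return 0
    removed = 0
    for p in state_dir.iterdir():
        if p.is_file():
            p.unlink()
            removed += 1
    return removed
```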
