Epistemic Council
Pass. Audited by ClawScan on May 1, 2026.
Overview
The skill appears to be a local Epistemic Council pipeline runner: it executes Python, calls a local model service, and keeps workspace state. The reviewed artifacts show no evidence of exfiltration, credential abuse, or destructive behavior.
Before installing, be comfortable with the skill running local Python from the OpenClaw workspace, calling a localhost Ollama model, and retaining pipeline data in local substrate and log files. Avoid sensitive inputs unless you trust the local model and workspace, and review the code and its provenance if the pipeline will influence important decisions.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Running the skill will execute Python code in the local OpenClaw workspace and may change the pipeline's local records.
The skill directs the agent to run a local Python pipeline through shell-style execution. This is central to the stated purpose, but it is still local code execution.
Use the `exec` tool ... cd /root/.openclaw/workspace-epistemic-council-bot/epistemic_council && python epistemic_skill.py "run council"
Invoke it only when you intend to run the council pipeline, and review the workspace/code source before using it with important data.
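If you do choose to run the pipeline, a controlled invocation is preferable to an open-ended shell string. The sketch below is an assumption about how such a wrapper could look, not part of the skill; the script name mirrors the one quoted in the evidence, and the timeout value is arbitrary:

```python
import subprocess
import sys

def run_pipeline(workspace: str, command: str, timeout_s: int = 120) -> str:
    """Run the council pipeline as a child process with an explicit
    working directory and timeout. The script name `epistemic_skill.py`
    is taken from the audit evidence; verify it against the actual
    workspace contents before relying on this."""
    result = subprocess.run(
        [sys.executable, "epistemic_skill.py", command],
        cwd=workspace,
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    result.check_returncode()  # surface a non-zero exit instead of ignoring it
    return result.stdout
```

Setting `cwd` and a timeout bounds where the code runs and for how long, which makes post-hoc review of the workspace easier.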
Without a declared upstream source, users have little independent means of verifying where the code came from or whether it matches a published project.
The artifacts do not provide an upstream source or homepage for provenance verification, even though the skill includes runnable Python files.
Source: unknown; Homepage: none
Install only if you trust the publisher/package, and prefer reviewing the included code or obtaining it from a known source.
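In the absence of a declared upstream, one lightweight check is to fingerprint the shipped files so the installed copy can later be diffed against a source you do trust. A minimal sketch (the on-disk layout of the skill is an assumption):

```python
import hashlib
from pathlib import Path

def fingerprint_tree(root: str) -> dict[str, str]:
    """Map each file under `root` to its SHA-256 hex digest, with paths
    recorded relative to `root`. Two installs of the same code produce
    identical maps, so a mismatch flags a file worth reading closely."""
    digests = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digests[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return digests
```

This does not establish provenance by itself, but it gives you a stable reference point if an upstream repository is later identified.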
Incorrect, adversarial, or sensitive claims stored in the substrate could be carried forward into future pipeline outputs.
The pipeline reuses stored substrate claims as model context and writes new insights back to persistent storage, so bad or low-quality stored content can influence later runs.
Pull visible claims from both domains off the substrate ... Send them to the model ... Write anything above the threshold to the substrate as an insight
Review stored claims/insights and use the audit or validation commands before relying on results, especially after importing or generating sensitive content.
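The loop the finding describes (pull stored claims, score them with the model, persist anything above a threshold) can be hardened by gating what is written back. A minimal sketch of such a gate, assuming claims carry a numeric score and a provenance field (both field names are hypothetical, not taken from the skill):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    score: float     # model-assigned confidence, assumed to be in [0.0, 1.0]
    provenance: str  # where the claim entered the substrate (assumed field)

def insights_to_persist(claims: list[Claim], threshold: float = 0.8) -> list[Claim]:
    """Keep only claims at or above the threshold that also record a
    provenance, so anonymous or low-confidence content is not carried
    forward into future pipeline runs."""
    return [c for c in claims if c.score >= threshold and c.provenance]
```

Requiring provenance alongside the score means an adversarially imported claim cannot silently become trusted context just by clearing the numeric bar.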
Any sensitive text included in prompts or stored claims may be processed by the local model service.
Prompts containing user queries and pipeline claims are sent to a local Ollama-compatible model service.
base_url: str = "http://localhost:11434" ... requests.post(f"{self.base_url}/api/generate", json={"model": self.model_name, "prompt": prompt, ...})
Use only a trusted local model service and avoid submitting sensitive data unless local model retention and access controls are acceptable.
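The quoted call follows the standard Ollama `/api/generate` request shape. A simple way to see exactly which text would leave the agent process is to build the request without sending it; this sketch does only that (the model name is an assumption):

```python
def build_generate_request(base_url: str, model: str, prompt: str) -> tuple[str, dict]:
    """Return the URL and JSON body the pipeline would POST to the
    local Ollama-compatible service. Inspecting this payload shows
    precisely which prompt text the model will receive."""
    url = f"{base_url}/api/generate"
    body = {"model": model, "prompt": prompt, "stream": False}
    return url, body

# Example: review the payload before trusting the service with it.
url, body = build_generate_request("http://localhost:11434", "llama3", "summarize claims")
```

Logging or reviewing the built payload at this boundary is the cheapest place to catch sensitive text before it reaches the model service.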
