Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Peer Review

v1.0.0

Multi-model peer review layer using local LLMs via Ollama to catch errors in cloud model output. Fan out critiques to 2-3 local models, aggregate flags, synthesize consensus.

Use when: validating trade analyses, reviewing agent output quality, testing local model accuracy, checking any high-stakes Claude output before publishing or acting on it.

Don't use when: simple fact-checking (just search the web), tasks that don't benefit from multi-model consensus, time-critical decisions where 60s latency is unacceptable, reviewing trivial or low-stakes content.

Negative examples:
- "Check if this date is correct" → No. Just web search it.
- "Review my grocery list" → No. Not worth multi-model inference.
- "I need this answer in 5 seconds" → No. Peer review adds 30-60s latency.

Edge cases:
- Short text (<50 words) → Models may not find meaningful issues. Consider skipping.
- Highly technical domain → Local models may lack domain knowledge. Weight flags lower.
- Creative writing → Factual review doesn't apply well. Use only for logical consistency.
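The fan-out-and-consensus flow the description names could be sketched roughly as below. This is a hypothetical illustration, not the skill's bundled code: the prompt wording and flag format are assumptions, while the endpoint and model names come from the listing (Ollama's default local API and the three reviewer models).

```python
import json
import urllib.request
from collections import Counter

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
REVIEWERS = ["mistral:7b", "tinyllama:1.1b", "llama3.1:8b"]  # models named in the listing

def critique(model: str, text: str) -> list[str]:
    """Ask one local model for a newline-separated list of flagged issues."""
    prompt = (
        "Review the following output for factual or logical errors. "
        "List each issue on its own line, or reply NONE.\n\n" + text
    )
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        answer = json.loads(resp.read())["response"]
    return [line.strip() for line in answer.splitlines()
            if line.strip() and line.strip().upper() != "NONE"]

def consensus(flag_lists: list[list[str]], threshold: int = 2) -> list[str]:
    """Keep only issues raised independently by at least `threshold` reviewers."""
    counts = Counter(flag for flags in flag_lists for flag in set(flags))
    return [flag for flag, n in counts.items() if n >= threshold]

def peer_review(text: str) -> list[str]:
    """Fan out to every reviewer model, then synthesize the consensus flags."""
    return consensus([critique(model, text) for model in REVIEWERS])
```

Separating the pure `consensus` step from the network calls is what makes the "aggregate flags" stage tunable: raising `threshold` to 3 demands unanimity, lowering it to 1 keeps every flag.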

0· 803·14 current·17 all-time
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious (medium confidence)
Purpose & Capability (flagged)
The described purpose (fan-out critiques to local Ollama models) matches the listed dependencies (Ollama, local models, jq, curl). However, the SKILL.md also describes posting to Discord channels (#the-deep, #swarm-lab, #reef-logs) and packaging a 'Reef API' endpoint for agent calls; those capabilities imply network access and service credentials, yet the skill declares no required environment variables, credentials, or config paths. That omission is an inconsistency between the stated capabilities/workflows and the declared requirements.
Instruction Scope (flagged)
The runtime instructions focus on running the local-model fan-out and aggregating critiques, which is in scope. But they also instruct the agent to post to specific Discord channels, package an API endpoint, and log with TPR tracking. The skill references bash scripts under workspace/scripts/ for review and seeding tests, yet no scripts are bundled and no install steps are provided. The instructions do not specify how to authenticate to Discord or how agents should call the Reef API, leaving it ambiguous what data would be sent where and under which credentials.
Install Mechanism
This is an instruction-only skill with no install spec and no code files, which minimizes install-time risk. Dependencies are declared in prose (Ollama and specific local models, jq, curl), which is appropriate for this kind of local workflow. No external downloads or archive extraction are specified.
Credentials (flagged)
The skill declares no required environment variables or credentials, yet the SKILL.md explicitly references posting to Discord channels, exposing a POST /review API endpoint, and logging to '#reef-logs'. Those behaviors normally require tokens, URLs, and configuration. The absence of any declared credential requirements is disproportionate to the external-communication behaviors described, and it creates ambiguity about where sensitive cloud-model outputs or reviews would be sent and stored.
Persistence & Privilege
The skill does not request persistent always:true privileges, does not modify other skills, and has no install-time persistence. It appears to run on-demand and use local Ollama instances, so privilege/persistence concerns are low on their face.
What to consider before installing
Key things to check before installing or using this skill:
- Local model & resource requirements: It expects Ollama running locally with specific models (mistral:7b, tinyllama:1.1b, llama3.1:8b). Those models consume disk and memory and will increase latency; confirm you have the hardware and are comfortable hosting them locally.
- Missing scripts and automation: The SKILL.md refers to bash scripts in workspace/scripts/, but no scripts are bundled. Ask the publisher for the scripts or inspect the intended implementation before running ad-hoc commands.
- External posting and credentials: The workflow mentions posting results to Discord channels and exposing a Reef API endpoint, but the skill declares no environment variables or tokens. Clarify where reviews/logs are sent, what credentials are required, and ensure any outbound webhooks or bot tokens are provided and stored securely (not hard-coded).
- Data exposure: This skill is designed to send cloud-model outputs to local models and potentially to external logs or chats. Do not run it on sensitive data until you confirm where the outputs and derived critiques are stored or transmitted and who has access.
- Test with non-sensitive data: Validate behavior on harmless samples first (check where files are written, what network calls are made, whether Discord/HTTP endpoints are invoked).
- Operational tuning: Understand the consensus thresholds, the storage location (experiments/peer-review-results), and the retention policies for logged reviews and TPR metrics.

If the publisher provides the missing scripts and a clear configuration for external endpoints (with explicit environment variables for tokens and endpoints), the skill can be much less ambiguous. Without that, the mismatch between described external behaviors and declared requirements is the main reason for caution.
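If you do wire up the Discord posting the SKILL.md describes, one way to keep tokens out of the code is to require them from the environment and fail fast when they are absent. This is a hypothetical pattern, not part of the skill: the variable name REEF_DISCORD_WEBHOOK is an assumption, since the skill declares no configuration at all.

```python
import json
import os
import urllib.request

def discord_webhook_url() -> str:
    """Read the webhook URL from the environment; refuse to run without it."""
    url = os.environ.get("REEF_DISCORD_WEBHOOK")  # hypothetical variable name
    if not url:
        raise RuntimeError(
            "REEF_DISCORD_WEBHOOK is not set; refusing to post review results."
        )
    return url

def post_review_summary(summary: str) -> None:
    """POST a short review summary to the configured Discord webhook."""
    payload = json.dumps({"content": summary}).encode()
    req = urllib.request.Request(
        discord_webhook_url(), data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10).close()
```

Failing fast when the variable is unset is exactly the behavior a reviewer would want to see documented here: it makes explicit where outbound data goes and prevents a silently hard-coded default endpoint.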

Like a lobster shell, security has layers — review code before you run it.

latest: vk974mhy52kmtn9cywzwyg5brjx810x04

