{"skill":{"slug":"peer-review","displayName":"Peer Review","summary":"Multi-model peer review layer using local LLMs via Ollama to catch errors in cloud model output.\nFan-out critiques to 2-3 local models, aggregate flags, synthesize consensus.\n\nUse when: validating trade analyses, reviewing agent output quality, testing local model accuracy,\nchecking any high-stakes Claude output before publishing or acting on it.\n\nDon't use when: simple fact-checking (just search the web), tasks that don't benefit from\nmulti-model consensus, time-critical decisions where 60s latency is unacceptable,\nreviewing trivial or low-stakes content.\n\nNegative examples:\n- \"Check if this date is correct\" → No. Just web search it.\n- \"Review my grocery list\" → No. Not worth multi-model inference.\n- \"I need this answer in 5 seconds\" → No. Peer review adds 30-60s latency.\n\nEdge cases:\n- Short text (<50 words) → Models may not find meaningful issues. Consider skipping.\n- Highly technical domain → Local models may lack domain knowledge. Weight flags lower.\n- Creative writing → Factual review doesn't apply well. Use only for logical consistency.","tags":{"latest":"1.0.0"},"stats":{"comments":0,"downloads":994,"installsAllTime":19,"installsCurrent":16,"stars":0,"versions":1},"createdAt":1770927797485,"updatedAt":1777525123289},"latestVersion":{"version":"1.0.0","createdAt":1770927797485,"changelog":"Initial release — multi-model consensus layer using local LLMs via Ollama","license":null},"metadata":null,"owner":{"handle":"staybased","userId":"publishers:staybased","displayName":"staybased","image":"https://avatars.githubusercontent.com/u/216957304?v=4"},"moderation":{"isSuspicious":true,"isMalwareBlocked":false,"verdict":"suspicious","reasonCodes":["suspicious.llm_suspicious","suspicious.vt_suspicious"],"summary":"Detected: suspicious.llm_suspicious, suspicious.vt_suspicious","engineVersion":"v2.4.5","updatedAt":1777525123289}}