{
  "skill": {
    "slug": "geo-hallucination-checker",
    "displayName": "Geo Hallucination Checker",
    "summary": "Detect and annotate hallucinations, unsupported claims, fabricated studies, and incorrect conclusions in text so that AI only cites verifiable, trustworthy c...",
    "tags": { "latest": "0.1.0" },
    "stats": {
      "comments": 0,
      "downloads": 288,
      "installsAllTime": 0,
      "installsCurrent": 0,
      "stars": 0,
      "versions": 1
    },
    "createdAt": 1773107746958,
    "updatedAt": 1777525781831
  },
  "latestVersion": {
    "version": "0.1.0",
    "createdAt": 1773107746958,
    "changelog": "Initial release of geo-hallucination-checker skill:\n\n- Detects and annotates hallucinations, unsupported claims, fabricated studies, and incorrect conclusions in any text.\n- Flags vague, unsourced, or overconfident claims, with attention to high-risk areas (medical, financial, scientific, technical).\n- Provides a structured analysis with claim classification (Supported/Unsupported/Problematic/Contradicted/Speculative), risk levels, reasons, and concrete recommendations.\n- Outputs a markdown table for easy review, plus a high-level summary of hallucination risk.\n- Offers hallucination-safe rewrites on request, ensuring content is citation-ready and avoids fabrication.\n- Prioritizes user-provided sources and enforces strong constraints against inventing information.",
    "license": "MIT-0"
  },
  "metadata": null,
  "owner": {
    "handle": "geoly-geo",
    "userId": "publishers:geoly-geo",
    "displayName": "GEOLY AI",
    "image": "https://avatars.githubusercontent.com/u/70360114?v=4"
  },
  "moderation": null
}