Install

`openclaw skills install geo-hallucination-checker`

Detect and annotate hallucinations, unsupported claims, fabricated studies, and incorrect conclusions in text so that AI only cites verifiable, trustworthy content. Use this skill whenever the user asks you to fact-check, validate sources, check for hallucinations, or ensure that generated content is grounded in real evidence, even if they do not explicitly use the word "hallucination".

The geo-hallucination-checker skill is a hallucination and false-information detection tool.
It helps you review any piece of content (articles, landing pages, product descriptions, FAQs, GEO-optimized drafts, etc.) and flag hallucinations, unsupported claims, fabricated studies, and incorrect conclusions.
The primary goal is to ensure that AI systems only cite truthful, well-grounded content and clearly mark anything that looks like hallucination risk.
Use this skill aggressively whenever there is any risk that the model might invent data, sources, or conclusions.
Use geo-hallucination-checker whenever:
- the user asks you to fact-check content, validate sources, or check for hallucinations;
- the user wants generated content to be grounded in real evidence, even if they do not use the word "hallucination";
- drafted content might be cited by AI systems and its accuracy has not been verified.
If you are unsure whether hallucinations are a concern, assume they are and apply this skill.
This skill can be used on any text content, including articles, landing pages, product descriptions, FAQs, and GEO-optimized drafts.
The user may also provide supporting sources or reference material to check claims against, as well as constraints about which tools and data to use.
Always respect any constraints the user provides.
When using this skill, follow this workflow:
1. Clarify the task mode
2. Parse the content and extract claims
3. Check available evidence
4. Classify each claim. For each atomic factual claim, assign:
- status:
  - Supported – clearly backed by the provided sources or well-established knowledge.
  - Unsupported – no clear support; could be true, but you do not see evidence.
  - Problematic – exaggerated, misleading, overconfident, or very unlikely without strong evidence.
  - Contradicted – clearly conflicts with known facts or given sources.
  - Speculative – forward-looking, predictive, or hypothetical, presented without clear caveats.
- risk_level:
  - Low – unlikely to cause harm or serious misinformation.
  - Medium – could mislead, but impact is moderate or limited.
  - High – serious risk of harm, legal issues, medical/financial danger, or major reputational damage.
- reason: a short explanation of why the claim received its status.
- suggested_fix: what to do about the claim.
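If it helps to keep the classification consistent, each analyzed claim can be treated as a small record with exactly these fields. The sketch below is illustrative only; the ClaimCheck name and sample values are assumptions, not part of the skill's interface, and the example claim mirrors the one used in the table further down.

```python
from dataclasses import dataclass

# Illustrative record shape for one analyzed claim; field names mirror the
# claim-level table described in this skill.
@dataclass
class ClaimCheck:
    claim_text: str     # exact or paraphrased claim
    status: str         # Supported / Unsupported / Problematic / Contradicted / Speculative
    risk_level: str     # Low / Medium / High
    reason: str         # short explanation for the status
    suggested_fix: str  # what to do about the claim

example = ClaimCheck(
    claim_text="Clinically proven to reduce depression by 80% in 2 weeks",
    status="Problematic",
    risk_level="High",
    reason="No specific clinical trial or citation provided; extreme effect size is unlikely without strong evidence.",
    suggested_fix="Add concrete trial details with citation or downgrade to cautious, non-clinical language.",
)
```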
5. Look for common hallucination patterns
Pay special attention to:
- fabricated studies, statistics, or citations that cannot be traced to a real source;
- extreme or overly precise figures and effect sizes presented without evidence;
- overconfident medical, legal, or financial claims;
- conclusions stated as fact that the provided sources do not actually support.
Treat these as high-risk unless there is strong, clear evidence.
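As a rough illustration of this step, a simple keyword pass can pre-flag phrases that often accompany these patterns before the claim-by-claim review. This is a toy heuristic built on assumed patterns, not part of the skill, and it does not replace checking claims against real evidence.

```python
import re

# Toy heuristic: surface phrases that frequently signal unsupported or
# overconfident claims, so they get a closer look during the review.
RISKY_PATTERNS = [
    r"\bclinically proven\b",
    r"\bstudies show\b",
    r"\bguaranteed\b",
    r"\b\d{1,3}\s?%",  # precise percentages, which need a nearby citation
]

def flag_risky_phrases(text: str) -> list[str]:
    """Return suspicious phrases found in the text, for manual review."""
    hits: list[str] = []
    for pattern in RISKY_PATTERNS:
        hits.extend(m.group(0) for m in re.finditer(pattern, text, re.IGNORECASE))
    return hits

print(flag_risky_phrases("Clinically proven to reduce depression by 80% in 2 weeks"))
# -> ['Clinically proven', '80%']
```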
6. Produce a structured hallucination analysis
Always output a clear, structured analysis with two parts:
- High-level summary
- Claim-level table, with the following columns:
  - # – sequential index
  - claim_text – the exact or paraphrased claim
  - status – Supported / Unsupported / Problematic / Contradicted / Speculative
  - risk_level – Low / Medium / High
  - reason – a short explanation
  - suggested_fix – what to do about it

Example structure (illustrative, not prescriptive content):
| # | claim_text | status | risk_level | reason | suggested_fix |
|---|---|---|---|---|---|
| 1 | “Clinically proven to reduce depression by 80% in 2 weeks” | Problematic | High | No specific clinical trial or citation provided; extreme effect size is unlikely without strong evidence. | Add concrete trial details with citation or downgrade to cautious, non-clinical language. |
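If the claim-level analysis is produced programmatically (for example with the record sketch above), a small helper like the hypothetical one below can render it into exactly this table layout; the function name and sample data are illustrative assumptions.

```python
# Hypothetical helper: render analyzed claims into the claim-level markdown table.
def to_markdown_table(claims: list[dict]) -> str:
    header = "| # | claim_text | status | risk_level | reason | suggested_fix |"
    divider = "|---|---|---|---|---|---|"
    rows = [
        "| {i} | {claim_text} | {status} | {risk_level} | {reason} | {suggested_fix} |".format(i=i, **c)
        for i, c in enumerate(claims, start=1)
    ]
    return "\n".join([header, divider, *rows])

print(to_markdown_table([{
    "claim_text": "Clinically proven to reduce depression by 80% in 2 weeks",
    "status": "Problematic",
    "risk_level": "High",
    "reason": "No specific clinical trial or citation provided.",
    "suggested_fix": "Add a citation or downgrade to cautious, non-clinical language.",
}]))
```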
7. (Optional) Hallucination-safe rewrite
If the user explicitly requests a rewrite or safer version, append a hallucination-safe version after the table that rewrites the content in line with your analysis, correcting or removing flagged claims.
Never invent sources.
Err on the side of caution.
Separate facts from marketing.
Respect user constraints about tools and data.
When used together with other GEO-oriented skills (e.g., content optimization, schema generation, or conversion optimization):
- Run geo-hallucination-checker after content is drafted but before finalizing output that might be cited.
- If there is a conflict between persuasive copywriting and factual accuracy, prioritize factual accuracy and safety.
Unless the user specifies a different format, always:
- Start with a short high-level summary of the overall hallucination risk and the main issues found.
- Provide a markdown table as described in the workflow section.
- If requested, append a “Hallucination-safe version” that rewrites the content according to your analysis.
Aim for clarity and directness so that humans and AI systems can easily see which parts of the text are safe to cite and which require caution or correction.