{"skill":{"slug":"hallucination-check","displayName":"Hallucination Check","summary":"LLM hallucination detector with dual strategy (UQLM + rule-based fallback). Scores any AI output's confidence and flags potential hallucination risks.","tags":{"latest":"1.0.0"},"stats":{"comments":0,"downloads":102,"installsAllTime":0,"installsCurrent":0,"stars":0,"versions":1},"createdAt":1778345464551,"updatedAt":1778492888959},"latestVersion":{"version":"1.0.0","createdAt":1778345464551,"changelog":"- Initial release of Hallucination Check: an LLM hallucination detector with dual strategy (UQLM + rule-based fallback).\n- Scores AI outputs for confidence and flags potential hallucination risks.\n- Uses UQLM uncertainty quantification as the primary method; falls back to rule-based detection if dependencies are missing.\n- Provides CLI and Python API for easy integration.\n- Outputs risk level and suggestions based on configurable thresholds.","license":"MIT-0"},"metadata":null,"owner":{"handle":"li8476295-bot","userId":"s1731jjgx5bq9c9nhbty31etyd86crj9","displayName":"li8476295-bot","image":"https://avatars.githubusercontent.com/u/265793590?v=4"},"moderation":null}