别瞎说 ("Don't Talk Nonsense") / Checktruth - AI Fact-Checker
Pass. Audited by VirusTotal on May 10, 2026.
Overview
Type: OpenClaw Skill
Name: checktruth
Version: 1.0.3

The 'checktruth' skill is a legitimate fact-checking tool designed to verify the accuracy of AI responses or general text. Its core functionality is implemented via instructions in SKILL.md using the agent's built-in LLM and WebSearch tools. It includes an optional 'reference/' directory containing Python scripts (e.g., multi_model_verify.py, verify_answer.py) that demonstrate how to perform multi-model verification using external APIs such as OpenAI, DeepSeek, and ZhipuAI. These scripts correctly handle API keys through environment variables and are transparently documented as optional developer resources, showing no signs of malicious intent, data exfiltration, or unauthorized execution.
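The multi-model verification pattern the reference scripts demonstrate can be sketched roughly as follows. This is a hypothetical illustration, not the scripts' actual code: the function names and the majority-vote aggregation are assumptions, though the environment-variable key handling mirrors what the audit describes.

```python
import os
from collections import Counter

def load_key(var_name):
    """Read an API key from the environment; never hard-code credentials."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set {var_name} before running verification")
    return key

def aggregate_verdicts(verdicts):
    """Majority-vote over per-model verdicts ('true'/'false'/'unverifiable').

    Returns (verdict, agreement_ratio). A low ratio signals that the
    models disagree and the claim deserves manual review.
    """
    counts = Counter(verdicts)
    verdict, n = counts.most_common(1)[0]
    return verdict, n / len(verdicts)

# Example: three models assessed the same claim (responses are mocked here).
verdict, agreement = aggregate_verdicts(["true", "true", "unverifiable"])
print(verdict, round(agreement, 2))  # -> true 0.67
```

In a real run, each verdict would come from a separate provider call authenticated with a key loaded via `load_key`; the aggregation step is what makes the result "multi-model" rather than a single model's opinion.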
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Sensitive or private claims pasted for checking could be reflected in search queries.
The fact-checking workflow sends search queries to external search providers; this is purpose-aligned, but it exposes query terms derived from the user's content to third parties.
WebSearch queries may be sent to search engines (google.com, bing.com, etc.)
Avoid submitting confidential material, or redact names, identifiers, and unreleased information before using web-backed verification.
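One lightweight way to apply that redaction advice before a claim reaches web search is a pre-filter like the sketch below. The patterns (e-mail addresses and long digit runs) are illustrative only; they do not catch personal names or every kind of identifier, so manual review is still needed.

```python
import re

def redact(text):
    """Mask obvious identifiers before a claim is sent to web search.

    Covers e-mail addresses and long digit runs (IDs, phone numbers);
    personal names and project code names still need manual redaction.
    """
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{6,}\b", "[NUMBER]", text)
    return text

print(redact("Contact alice@example.com, badge 12345678, about Q3 results."))
# -> Contact [EMAIL], badge [NUMBER], about Q3 results.
```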
The tool may produce a misleading confidence score if the retrieved sources are wrong, biased, or adversarial.
Retrieved web content becomes evidence for the agent's judgment, so low-quality or manipulated sources could affect the fact-checking result.
Use WebSearch or WebFetch to search for relevant content and gather 2-3 authoritative reference sources. Record the key information from those sources as verification evidence.
Review the cited sources, prefer official or primary sources, and treat the score as assistance rather than a final authority.
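The advice to treat the score as assistance rather than final authority can also be enforced mechanically. A sketch with hypothetical thresholds (the 0.5 and 0.8 caps are assumptions, not values from the skill) that caps reported confidence when the evidence base is thin:

```python
def capped_confidence(raw_score, sources):
    """Cap a fact-check confidence score based on the evidence behind it.

    `sources` is a list of dicts with a boolean 'primary' flag marking
    official/primary sources; the caps are illustrative thresholds.
    """
    if len(sources) < 2:
        return min(raw_score, 0.5)  # single source: never report high confidence
    if not any(s.get("primary") for s in sources):
        return min(raw_score, 0.8)  # secondary sources only: moderate cap
    return raw_score

print(capped_confidence(0.95, [{"primary": False}]))                     # -> 0.5
print(capped_confidence(0.95, [{"primary": True}, {"primary": False}]))  # -> 0.95
```

A guard like this keeps a confidently worded but poorly sourced verdict from being presented with a high numeric score.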
If a user runs the optional scripts, their API keys authorize paid third-party LLM calls.
The included optional scripts can use sensitive provider credentials, though the artifact clearly states they are not required or executed by default.
The `reference/` folder contains optional Python scripts that DO require external LLM API keys (GLM, DeepSeek, Hunyuan, Kimi, MiniMax).
Only run the reference scripts after reviewing them, use limited-scope keys where possible, and monitor provider usage/costs.
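Monitoring provider usage and costs can also be done defensively in code. A minimal budget guard (the ceiling and per-call cost estimates below are hypothetical) refuses further paid calls once an estimated spend limit is reached:

```python
class BudgetGuard:
    """Refuse further paid API calls once an estimated spend ceiling is hit."""

    def __init__(self, limit_usd):
        self.limit_usd = limit_usd
        self.spent_usd = 0.0

    def charge(self, estimated_cost_usd):
        """Record a call's estimated cost, or raise if it would exceed the limit."""
        if self.spent_usd + estimated_cost_usd > self.limit_usd:
            raise RuntimeError(
                f"budget exceeded: {self.spent_usd:.2f} + "
                f"{estimated_cost_usd:.2f} > {self.limit_usd:.2f} USD")
        self.spent_usd += estimated_cost_usd

guard = BudgetGuard(limit_usd=0.10)
guard.charge(0.04)  # first verification call: fine
guard.charge(0.04)  # second call: fine
try:
    guard.charge(0.04)  # would exceed the ceiling
except RuntimeError as e:
    print(e)
```

Wrapping every provider call in `guard.charge(...)` turns a silent runaway bill into an immediate, visible failure.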
Manually installing the optional dependencies could pull newer package versions than the author tested.
The optional reference code depends on external PyPI packages with version ranges rather than pinned hashes or lockfiles.
openai>=1.0.0
zhipuai>=2.0.0
dashscope>=1.0.0
moonshot-sdk>=0.1.0
pyyaml>=6.0
If using the reference code, install it in an isolated environment and consider pinning reviewed package versions.
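A lockfile-style pinned requirements file for that isolated environment might look like the following; the exact version pins are hypothetical and should be replaced with the versions actually reviewed and tested.

```
# requirements-pinned.txt (illustrative pins; substitute the versions you reviewed)
openai==1.0.0
zhipuai==2.0.0
dashscope==1.0.0
moonshot-sdk==0.1.0
pyyaml==6.0
```

Installing from a pinned file instead of the open ranges above prevents a later `pip install` from silently pulling package versions the author never tested.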
