dingo data quality
Pass. Audited by VirusTotal on May 11, 2026.
Overview
Type: OpenClaw Skill
Name: dingo
Version: 1.0.4

The skill bundle provides an integration for 'Dingo', a legitimate data quality and fact-checking tool. The included Python script (scripts/fact_check.py) demonstrates strong security practices, such as validating file paths to prevent traversal attacks against special filesystems and using secure temporary file creation. The instructions in SKILL.md correctly guide the agent to handle API keys securely and to follow best practices for Python multiprocessing. While the documentation mentions non-existent model names (e.g., gpt-5.4), there is no evidence of malicious intent, data exfiltration, or harmful prompt injection.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Installing the skill's recommended package gives that package code execution in the user's Python environment.
The skill instructs users to install an external PyPI package, including optional extras, without a pinned version. This is purpose-aligned and user-directed, but it is still a supply-chain dependency users should trust before installing.
`pip install dingo-python`
Install from the official package source, consider pinning a known-good version, and use a virtual environment for evaluation work.
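One way to follow this guidance is to evaluate the skill inside a throwaway virtual environment with a pinned dependency. This is a sketch: the version below is a placeholder, not a vetted release, so substitute a version you have verified from the official package source.

```shell
# Sketch: isolate the evaluation in a fresh virtual environment.
python3 -m venv dingo-eval
. dingo-eval/bin/activate

# Pin a known-good version (X.Y.Z is a placeholder you must replace
# after verifying the release on the official package index).
pip install "dingo-python==X.Y.Z"
```

Deleting the `dingo-eval` directory afterwards removes the package and anything it installed into the environment.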
Provider keys may authorize API usage and costs for LLM or search calls.
The skill can use provider credentials for LLM evaluation and optional web search. This matches the stated functionality and there is no evidence of credential leakage or unrelated use.
"OPENAI_API_KEY": "API key for LLM-based evaluation and ArticleFactChecker fact-checking", "TAVILY_API_KEY": "Tavily API key for web search verification in ArticleFactChecker"
Use scoped or disposable API keys where possible, monitor usage, and avoid placing secrets in shared config files.
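One way to keep secrets out of shared config files is to read them from the environment at call time. The sketch below uses the variable names documented by the skill; the helper function itself is hypothetical, not part of the skill's code.

```python
import os

def load_provider_keys():
    """Read provider credentials from environment variables.

    Sourcing keys from the environment (or a secrets manager) avoids
    committing them to shared config files. Returns only the keys
    that are actually set, so optional providers stay optional.
    """
    keys = {}
    for name in ("OPENAI_API_KEY", "TAVILY_API_KEY"):
        value = os.environ.get(name)
        if value:
            keys[name] = value
    return keys
```

Pairing this with a disposable, usage-capped key makes it easy to revoke access after an evaluation run.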
Sensitive training data, RAG context, or article text could be sent to the configured model provider during LLM-based evaluation.
LLM-based evaluation is explicitly configured to call an OpenAI-compatible provider endpoint. Dataset, RAG, or article content being evaluated may be processed by the selected provider as part of the intended workflow.
"API key required | No | Yes (any OpenAI-compatible API)" and "api_url": "https://api.deepseek.com/v1"
Use rule-based mode for confidential data when possible, or confirm the provider's privacy and retention terms before using LLM-based metrics.
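The decision can be made explicit in configuration. The sketch below is illustrative only: the field names are assumptions, not Dingo's actual schema, and the endpoint is the one quoted in the evidence above.

```python
def choose_eval_config(contains_confidential: bool) -> dict:
    """Pick an evaluation config based on data sensitivity.

    Field names here are assumptions for illustration, not Dingo's
    real configuration schema.
    """
    if contains_confidential:
        # Rule-based metrics run locally; no text leaves the machine.
        return {"eval_group": "rule", "api_url": None}
    # LLM-based metrics send the evaluated content to the provider.
    return {"eval_group": "llm", "api_url": "https://api.deepseek.com/v1"}
```

The key point is that the branch, not the provider, is the privacy control: confidential inputs should never reach the LLM path unless the provider's retention terms have been reviewed.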
Evaluation outputs may leave local copies of sensitive source text or derived claims after the run completes.
The skill documents local intermediate and output files that may retain the original article text and extracted claims. This is expected for evaluation reporting, but users should be aware of local persistence.
ArticleFactChecker also saves intermediate artifacts: `article_content.md`, `claims_extracted.jsonl`, `claims_verification.jsonl`, `verification_report.json`
Store outputs in an appropriate directory, avoid evaluating secrets unless necessary, and delete generated artifacts when no longer needed.
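Cleanup can be scripted against the documented artifact names. The helper below is a sketch, not part of the skill; it deletes only the known intermediate files and reports what it removed.

```python
from pathlib import Path

# Intermediate artifacts documented for ArticleFactChecker runs.
ARTIFACTS = (
    "article_content.md",
    "claims_extracted.jsonl",
    "claims_verification.jsonl",
    "verification_report.json",
)

def remove_run_artifacts(output_dir):
    """Delete known fact-checking artifacts from output_dir.

    Returns the names of the files that were actually removed, so a
    caller can log or verify the cleanup.
    """
    out = Path(output_dir)
    removed = []
    for name in ARTIFACTS:
        path = out / name
        if path.is_file():
            path.unlink()
            removed.append(name)
    return removed
```

Running this after copying the final report elsewhere ensures the original article text and extracted claims do not persist locally.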
