dingo data quality

Pass. Audited by VirusTotal on May 11, 2026.

Overview

Type: OpenClaw Skill
Name: dingo
Version: 1.0.4

The skill bundle provides an integration for 'Dingo', a legitimate data quality and fact-checking tool. The included Python script (scripts/fact_check.py) demonstrates strong security practices, such as validating file paths to prevent traversal attacks against special filesystems and using secure temporary file creation. The instructions in SKILL.md correctly guide the agent to handle API keys securely and follow best practices for Python multiprocessing. While the documentation mentions non-existent model names (e.g., gpt-5.4), there is no evidence of malicious intent, data exfiltration, or harmful prompt injection.
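
As an illustration of the two practices credited to scripts/fact_check.py above, here is a minimal Python sketch of path-traversal rejection and race-free temporary file creation. It shows the named techniques only; it is not the script's actual code.

```python
# Illustrative sketch only, not the contents of scripts/fact_check.py:
# reject path-traversal input and create temporary files without races.
import os
import tempfile
from pathlib import Path

def resolve_safely(user_path: str, base: Path) -> Path:
    """Resolve user input against a base directory, refusing escapes via '..' or symlinks."""
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base.resolve()):  # Python 3.9+
        raise ValueError(f"refusing path outside {base}: {user_path!r}")
    return candidate

def make_scratch_file() -> str:
    """Create a private temp file; mkstemp opens it with O_EXCL and mode 0600."""
    fd, path = tempfile.mkstemp(prefix="fact_check_", suffix=".tmp")
    os.close(fd)
    return path
```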

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Installing the skill's recommended package gives that package code execution in the user's Python environment.

Why it was flagged

The skill instructs users to install an external PyPI package, including optional extras, without a pinned version. This is purpose-aligned and user-directed, but it is still a supply-chain dependency that users should vet before installing.

Skill content

pip install dingo-python

Recommendation

Install from the official package source, consider pinning a known-good version, and use a virtual environment for evaluation work.
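
A minimal sketch of a pre-flight check before evaluation runs, assuming the PyPI distribution name `dingo-python` (quoted in the skill content above) and a hypothetical known-good pin; create the environment first with `python -m venv`:

```python
# Minimal pre-flight check, assuming the distribution name "dingo-python".
# KNOWN_GOOD is a hypothetical pin; substitute the version you actually reviewed.
from importlib.metadata import PackageNotFoundError, version

KNOWN_GOOD = "1.5.0"  # hypothetical known-good version

try:
    installed = version("dingo-python")
except PackageNotFoundError:
    raise SystemExit("dingo-python is not installed; install it inside a virtual environment")

if installed != KNOWN_GOOD:
    raise SystemExit(f"unexpected dingo-python version {installed}, expected {KNOWN_GOOD}")
```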

What this means

Provider keys may authorize API usage for LLM or search calls and incur the associated costs.

Why it was flagged

The skill can use provider credentials for LLM evaluation and optional web search. This matches the stated functionality and there is no evidence of credential leakage or unrelated use.

Skill content

"OPENAI_API_KEY": "API key for LLM-based evaluation and ArticleFactChecker fact-checking", "TAVILY_API_KEY": "Tavily API key for web search verification in ArticleFactChecker"

Recommendation

Use scoped or disposable API keys where possible, monitor usage, and avoid placing secrets in shared config files.
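
A minimal sketch of reading both keys (named in the skill content above) from the environment rather than a shared config file; the helper name `require_key` is illustrative:

```python
# Minimal sketch: read provider keys from the environment instead of a shared
# config file. The helper name require_key is illustrative.
import os

def require_key(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it in the shell, not in a config file")
    return value

openai_key = require_key("OPENAI_API_KEY")
tavily_key = require_key("TAVILY_API_KEY")  # only needed for web-search verification
```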

What this means

Sensitive training data, RAG context, or article text could be sent to the configured model provider during LLM-based evaluation.

Why it was flagged

LLM-based evaluation is explicitly configured to call an OpenAI-compatible provider endpoint. Dataset, RAG, or article content being evaluated may be processed by the selected provider as part of the intended workflow.

Skill content

"API key required | No | Yes (any OpenAI-compatible API)" and "api_url": "https://api.deepseek.com/v1"

Recommendation

Use rule-based mode for confidential data when possible, or confirm the provider's privacy and retention terms before using LLM-based metrics.
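
A hedged sketch of routing confidential data to rule-based evaluation; the keys `eval_mode` and `api_key` are illustrative assumptions, not confirmed Dingo config fields, while the `api_url` value is quoted from the skill content above:

```python
# Hedged sketch: keep confidential data in rule-based mode. Field names
# "eval_mode" and "api_key" are assumptions for illustration.
import os

CONFIDENTIAL = True  # set according to the sensitivity of the data under evaluation

config = {
    # Rule-based mode keeps text local; LLM mode sends content to the provider.
    "eval_mode": "rule" if CONFIDENTIAL else "llm",
}

if config["eval_mode"] == "llm":
    config["api_url"] = "https://api.deepseek.com/v1"  # any OpenAI-compatible endpoint
    config["api_key"] = os.environ["OPENAI_API_KEY"]   # never hard-code the key
```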

What this means

Evaluation outputs may leave local copies of sensitive source text or derived claims after the run completes.

Why it was flagged

The skill documents local intermediate and output files that may retain the original article text and extracted claims. This is expected for evaluation reporting, but users should be aware of local persistence.

Skill content

ArticleFactChecker also saves intermediate artifacts: `article_content.md`, `claims_extracted.jsonl`, `claims_verification.jsonl`, `verification_report.json`

Recommendation

Store outputs in an appropriate directory, avoid evaluating secrets unless necessary, and delete generated artifacts when no longer needed.
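
A minimal cleanup sketch using the artifact names documented above; the output directory is an assumption for illustration:

```python
# Minimal cleanup sketch. Artifact names come from the skill's documentation;
# the output directory is a hypothetical location for the run's outputs.
from pathlib import Path

OUTPUT_DIR = Path("fact_check_output")  # assumption: where this run wrote its files

INTERMEDIATE = [
    "article_content.md",         # original article text
    "claims_extracted.jsonl",     # extracted claims
    "claims_verification.jsonl",  # per-claim verification results
]

for name in INTERMEDIATE:
    # Keep verification_report.json; drop intermediates that retain source text.
    (OUTPUT_DIR / name).unlink(missing_ok=True)  # Python 3.8+
```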