Denario (Autonomous Research Pipeline)

Audited by VirusTotal on May 12, 2026.

Overview

Type: OpenClaw Skill
Name: denario-skill
Version: 1.0.0

The skill is classified as suspicious primarily because a hardcoded `PERPLEXITY_API_KEY` in `scripts/test_citations.py` exposes a secret. In addition, the Python scripts (`scripts/test_citations.py`, `scripts/test_paper.py`) modify the `PATH` environment variable to include a custom TinyTeX path, a risky capability even if plausibly needed for LaTeX compilation. While there is no clear evidence of intentionally malicious behavior targeting the user or platform (such as credential theft or backdoor installation), these practices represent significant security risks and poor security hygiene.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Hard-coded Perplexity API key

What this means

Citation generation may run under an unknown Perplexity account/key, creating unclear billing, authorization, and credential-exposure risk.

Why it was flagged

The citation script embeds and sets a third-party API credential at runtime instead of asking the user to provide one or declaring it in the skill metadata.

Skill content
os.environ["PERPLEXITY_API_KEY"] = "pplx-...ae00"
Recommendation

Remove the hard-coded key, revoke it if real, and require a user-provided, clearly declared Perplexity credential only when citations need it.
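As a minimal sketch of the recommended fix, the script could require the user to supply the credential through the environment and fail fast when it is absent, rather than embedding a key. The function name here is illustrative, not part of the skill:

```python
import os

def get_perplexity_key():
    """Return the user-supplied Perplexity key, or fail with a clear message."""
    key = os.environ.get("PERPLEXITY_API_KEY")
    if not key:
        raise RuntimeError(
            "PERPLEXITY_API_KEY is not set; export it before enabling citations."
        )
    return key
```

This keeps the secret out of the repository and makes the requirement explicit at the point of use.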

Finding 2: Undisclosed external citation provider

What this means

Research text or citation queries may be sent to an external provider the user was not clearly told about.

Why it was flagged

The citations path configures an additional external provider for citation generation, while the skill description only discloses Z.ai/Zhipu integration.

Skill content
os.environ["PERPLEXITY_API_KEY"] = "pplx-..."
...
d.get_paper(llm=glm, journal="NeurIPS", add_citations=True)
Recommendation

Disclose all external providers, what data they receive, and require explicit user configuration before enabling citation-provider calls.
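One way to implement that gate, sketched against a hypothetical per-user config dict (the `citations`, `enabled`, and `api_key` keys are assumptions, not part of the skill), is to only add `add_citations=True` to the `get_paper` call when the user has explicitly enabled and configured the provider:

```python
def paper_kwargs(config):
    """Build get_paper() options; citations stay off unless explicitly configured."""
    opts = {"journal": "NeurIPS"}
    citations = config.get("citations", {})
    if citations.get("enabled") and citations.get("api_key"):
        opts["add_citations"] = True
    return opts
```

With this shape, the default path never contacts the citation provider, and enabling it requires a deliberate user action.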

Finding 3: Mock results injected into generated papers

What this means

A user could receive or share a scientific paper containing fabricated or placeholder results without realizing they were not produced from real data.

Why it was flagged

The paper-generation script injects fixed mock results before creating a paper, despite the skill presenting the paper stage as compiling the research pipeline output.

Skill content
# Set mock results
mock_results = """... DATCER achieved near-oracle performance ..."""
d.set_results(mock_results)
...
d.get_paper(llm=glm, journal="NeurIPS")
Recommendation

Make mock-data use explicit, require user confirmation, and default to using actual validated results or clearly watermarked draft output.
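A minimal sketch of that safeguard (the function and its parameters are illustrative, not part of the skill): mock results must be both requested and confirmed, and any draft built from them is prefixed with an unmistakable watermark:

```python
def prepare_results(results, *, mock=False, confirmed=False):
    """Pass real results through; watermark mock results and require confirmation."""
    if not mock:
        return results
    if not confirmed:
        raise RuntimeError(
            "Mock results require explicit confirmation (mock=True, confirmed=True)."
        )
    return "[DRAFT - MOCK RESULTS, NOT REAL DATA]\n" + results
```

The watermark travels with the text, so a paper generated from mock data cannot silently pass as one produced from the real pipeline output.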

Finding 4: Unpinned dependency installation

What this means

Dependency behavior can change over time, and a compromised or incompatible package version could affect the skill when it is run.

Why it was flagged

On first use, the wrapper creates a virtual environment and installs unpinned packages from the Python package ecosystem.

Skill content
python3 -m venv "$VENV_DIR"
"$VENV_DIR/bin/pip" install -q denario langchain-openai
Recommendation

Pin dependency versions, use a lockfile or hashes, and declare the install behavior in the skill metadata/install spec.
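Beyond pinning versions in the install line itself, the wrapper could verify at startup that the environment matches the pins before running. A minimal sketch using the standard library (the pin values shown are hypothetical; real pins would come from a lockfile):

```python
from importlib import metadata

# Hypothetical pinned versions; in practice these come from a lockfile.
PINS = {"denario": "0.3.1", "langchain-openai": "0.2.0"}

def check_pins(pins):
    """Return {name: (wanted, installed)} for every package that drifted or is missing."""
    mismatches = {}
    for name, wanted in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != wanted:
            mismatches[name] = (wanted, installed)
    return mismatches
```

Failing fast on a non-empty result prevents the skill from running against a drifted or partially installed environment.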