Denario (Autonomous Research Pipeline)

Review

Audited by ClawScan on May 10, 2026.

Overview

This skill mostly behaves like a Denario research wrapper, but it embeds an undisclosed Perplexity API key and can generate papers from hard-coded mock results.

Review this skill before installing. Remove the hard-coded Perplexity key, use your own scoped API keys, avoid providing private research data until all external data flows are clear, and treat generated papers/results as drafts that require human validation.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Hard-coded Perplexity API key

What this means

Citation generation may run under an unknown Perplexity account/key, creating unclear billing, authorization, and credential-exposure risk.

Why it was flagged

The citation script embeds and sets a third-party API credential at runtime instead of asking the user to provide one or declaring it in the skill metadata.

Skill content
os.environ["PERPLEXITY_API_KEY"] = "pplx-...ae00"
Recommendation

Remove the hard-coded key, revoke it if real, and require a user-provided, clearly declared Perplexity credential only when citations need it.
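One way to require a user-provided credential is to read it from the environment and fail loudly when it is absent. This is a sketch, not the skill's actual code; the variable name `PERPLEXITY_API_KEY` matches the snippet above, but the helper and its error message are illustrative.

```python
import os


def get_perplexity_key() -> str:
    """Read the Perplexity key from the user's environment instead of hard-coding it."""
    key = os.environ.get("PERPLEXITY_API_KEY")
    if not key:
        # Refuse to proceed rather than silently using a bundled credential.
        raise RuntimeError(
            "PERPLEXITY_API_KEY is not set. Citations require a user-provided "
            "Perplexity credential; export one or disable citations."
        )
    return key
```

Failing at startup also makes the dependency on an external provider visible to the user, instead of hiding it behind an embedded key.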

Finding 2: Undisclosed external citation provider

What this means

Research text or citation queries may be sent to an external provider the user was not clearly told about.

Why it was flagged

The citations path configures an additional external provider for citation generation, while the skill description only discloses Z.ai/Zhipu integration.

Skill content
os.environ["PERPLEXITY_API_KEY"] = "pplx-..."
...
d.get_paper(llm=glm, journal="NeurIPS", add_citations=True)
Recommendation

Disclose all external providers, what data they receive, and require explicit user configuration before enabling citation-provider calls.
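Explicit user configuration could look like the following sketch, which gates the citation provider behind an opt-in flag and a user-supplied key. The `enable_citations` flag and the `config` dict are hypothetical names, not part of the skill.

```python
import os


def configure_citations(config: dict) -> bool:
    """Enable the citation provider only when the user opts in and supplies a key.

    `config` is a hypothetical user-supplied settings dict; nothing is sent to
    the external provider unless both conditions hold.
    """
    if not config.get("enable_citations", False):
        return False  # provider stays disabled by default
    key = config.get("perplexity_api_key") or os.environ.get("PERPLEXITY_API_KEY")
    if not key:
        raise ValueError("Citations enabled but no Perplexity key was provided.")
    os.environ["PERPLEXITY_API_KEY"] = key
    return True
```

With a default-off flag, research text never leaves the declared providers unless the user has explicitly chosen otherwise.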

Finding 3: Papers generated from hard-coded mock results

What this means

A user could receive or share a scientific paper containing fabricated or placeholder results without realizing they were not produced from real data.

Why it was flagged

The paper-generation script injects fixed mock results before creating a paper, despite the skill presenting the paper stage as compiling the research pipeline output.

Skill content
# Set mock results
mock_results = """... DATCER achieved near-oracle performance ..."""
d.set_results(mock_results)
...
d.get_paper(llm=glm, journal="NeurIPS")
Recommendation

Make mock-data use explicit, require user confirmation, and default to using actual validated results or clearly watermarked draft output.
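Mock-data use could be made explicit with a guard like this sketch; the `allow_mock` parameter and the watermark text are illustrative, not part of the skill.

```python
def select_results(real_results, mock_results: str, allow_mock: bool = False) -> str:
    """Prefer validated results; fall back to mocks only on explicit opt-in, watermarked."""
    if real_results:
        return real_results
    if not allow_mock:
        # Never silently substitute fabricated results into a paper.
        raise RuntimeError(
            "No validated results available; pass allow_mock=True to use mock data."
        )
    # Watermark the draft so readers cannot mistake it for real output.
    return "[DRAFT - MOCK RESULTS, NOT REAL DATA]\n" + mock_results
```

The watermark travels with the text into the generated paper, so a downstream reader sees the caveat even if the generation step itself is forgotten.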

Finding 4: Unpinned dependency installation

What this means

Dependency behavior can change over time, and a compromised or incompatible package version could affect the skill when it is run.

Why it was flagged

On first use, the wrapper creates a virtual environment and installs unpinned packages from PyPI.

Skill content
python3 -m venv "$VENV_DIR"
"$VENV_DIR/bin/pip" install -q denario langchain-openai
Recommendation

Pin dependency versions, use a lockfile or hashes, and declare the install behavior in the skill metadata/install spec.
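A pinned install could be sketched as follows. The version numbers are placeholders, not verified releases, and `VENV_DIR` is taken from the snippet above; generate real hashes with a tool such as `pip-compile --generate-hashes` before enabling `--require-hashes`.

```shell
# Pin exact versions in a requirements file (versions below are placeholders).
cat > requirements.txt <<'EOF'
denario==0.1.0
langchain-openai==0.2.0
EOF

VENV_DIR="${VENV_DIR:-.venv}"
python3 -m venv "$VENV_DIR"
# Install only the pinned versions; add --require-hashes once hashes are
# generated so artifact integrity is verified as well.
"$VENV_DIR/bin/pip" install -q -r requirements.txt
```

Declaring this install step in the skill metadata lets reviewers and users see exactly which packages will be fetched before the skill ever runs.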