Denario (Autonomous Research Pipeline)
Review
Audited by ClawScan on May 10, 2026.
Overview
This skill mostly behaves like a Denario research wrapper, but it embeds an undisclosed Perplexity API key and can generate papers from hard-coded mock results.
Review this skill before installing. Remove the hard-coded Perplexity key, use your own scoped API keys, avoid providing private research data until all external data flows are clear, and treat generated papers/results as drafts that require human validation.
Findings (4)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Citation generation may run under an unknown Perplexity account/key, creating unclear billing, authorization, and credential-exposure risk.
The citation script embeds and sets a third-party API credential at runtime instead of asking the user to provide one or declaring it in the skill metadata.
os.environ["PERPLEXITY_API_KEY"] = "pplx-...ae00"
Remove the hard-coded key, revoke it if real, and require a user-provided, clearly declared Perplexity credential only when citations need it.
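The recommended fix can be sketched as follows. This is a minimal illustration, not the skill's actual code: `get_perplexity_key` is a hypothetical helper, and the only assumption is that the citation path reads `PERPLEXITY_API_KEY` from the environment rather than embedding it.

```python
import os

def get_perplexity_key() -> str:
    """Return a user-supplied Perplexity key; never ship one inside the skill."""
    key = os.environ.get("PERPLEXITY_API_KEY")
    if not key:
        # Fail loudly instead of silently using a bundled credential.
        raise RuntimeError(
            "PERPLEXITY_API_KEY is not set. Citation generation is disabled "
            "until you export your own scoped key."
        )
    return key
```

With this shape, citation calls run under the user's own account and billing, and revoking the leaked key has no effect on correctly configured installs.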
Research text or citation queries may be sent to an external provider the user was not clearly told about.
The citations path configures an additional external provider for citation generation, while the skill description only discloses Z.ai/Zhipu integration.
os.environ["PERPLEXITY_API_KEY"] = "pplx-..."
...
d.get_paper(llm=glm, journal="NeurIPS", add_citations=True)
Disclose all external providers, what data they receive, and require explicit user configuration before enabling citation-provider calls.
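One way to make the citation provider opt-in can be sketched as below. `citations_enabled` is a hypothetical helper; the assumption is simply that the presence of a user-exported `PERPLEXITY_API_KEY` signals explicit consent to contact that provider.

```python
import os

def citations_enabled() -> bool:
    """Contact the external citation provider only when the user has
    explicitly opted in by exporting their own key."""
    return bool(os.environ.get("PERPLEXITY_API_KEY"))

# Hypothetical call site, mirroring the evidence snippet above:
# d.get_paper(llm=glm, journal="NeurIPS", add_citations=citations_enabled())
```

The design choice here is that the default is off: no research text or queries leave the disclosed providers unless the user has configured the extra one themselves.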
A user could receive or share a scientific paper containing fabricated or placeholder results without realizing they were not produced from real data.
The paper-generation script injects fixed mock results before creating the paper, even though the skill presents the paper stage as compiling output from the real research pipeline.
# Set mock results
mock_results = """... DATCER achieved near-oracle performance ..."""
d.set_results(mock_results)
...
d.get_paper(llm=glm, journal="NeurIPS")
Make mock-data use explicit, require user confirmation, and default to using actual validated results or clearly watermarked draft output.
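The recommended behavior can be sketched as a small gate in front of `set_results`. Everything here is illustrative: `resolve_results` is a hypothetical helper and `DENARIO_ALLOW_MOCK_RESULTS` is an assumed opt-in flag name, not part of Denario.

```python
import os
from typing import Optional

def resolve_results(real_results: Optional[str], mock_results: str) -> str:
    """Prefer validated pipeline output; allow mock data only on explicit
    opt-in, and watermark it so it cannot pass as real results."""
    if real_results:
        return real_results
    # Assumed flag name: require the user to opt in to mock data explicitly.
    if os.environ.get("DENARIO_ALLOW_MOCK_RESULTS") == "1":
        return "[DRAFT - MOCK RESULTS, NOT VALIDATED]\n" + mock_results
    raise RuntimeError(
        "No validated results available; refusing to silently substitute "
        "mock data into the paper."
    )
```

The watermark line travels with the text into the generated paper, so a reader who receives the draft can see at a glance that the results are placeholders.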
Dependency behavior can change over time, and a compromised or incompatible package version could affect the skill when it is run.
On first use, the wrapper creates a virtual environment and installs unpinned packages from the Python package ecosystem.
python3 -m venv "$VENV_DIR"
"$VENV_DIR/bin/pip" install -q denario langchain-openai
Pin dependency versions, use a lockfile or hashes, and declare the install behavior in the skill metadata/install spec.
