Evidence-First Research

v0.1.0

Evidence-first workflow for scientific research, literature review, method selection, study planning, biomedical analysis, and research writing. Use when Cod...

by Zack (@zackz2025)
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name, description, and runtime instructions all describe an evidence-first research workflow (literature search, evidence appraisal, method selection). The skill requests no binaries, credentials, or installs that would be unrelated to that purpose.
Instruction Scope
SKILL.md instructs the agent to search literature, evaluate sources, and—when relevant—inspect local project artifacts. Those instructions stay within the research remit and do not direct the agent to read unrelated system files, credentials, or to transmit data to unknown endpoints.
Install Mechanism
No install spec or code files that would be written to disk; the skill is instruction-only, which minimizes installation risk.
Credentials
No required environment variables, credentials, or config paths are declared. The skill does not request secrets or external credentials beyond what a normal research assistant would need.
Persistence & Privilege
The "always" flag is false and the skill is user-invocable. It does not request permanent presence or modification of other skills or system-wide settings.
Assessment
This skill is internally consistent and low-risk as an instruction-only workflow. Before using it:

  • Confirm whether your agent environment will allow the model to read local project files; grant that only if you want the agent to inspect those artifacts.
  • Avoid submitting patient-identifiable or sensitive data unless you have appropriate protections and consent.
  • Treat the skill's outputs as literature-informed recommendations, not definitive clinical advice; verify with domain experts and primary sources.
  • Review any citations or suggested tools/datasets for licensing or security implications before reuse.


License: MIT-0

Evidence-First Research

Overview

Use a research-before-starting workflow. Search existing evidence, tools, datasets, and established patterns before drafting an analysis plan, recommending a method, or producing scientific content.

Default to reuse or adaptation of validated approaches. Only introduce a novel method, pipeline, or claim of novelty after establishing that the gap is real.

Core Workflow

  1. Define the task precisely.
  • Restate the objective, target deliverable, domain, and decision stakes.
  • Identify whether the task is literature synthesis, study design, data analysis, protocol drafting, manuscript support, tool selection, or interpretation.
  • For medical work, identify the population, setting, intervention or exposure, comparator, outcomes, and time horizon when applicable.
  2. Search before acting.
  • Search for prior papers, systematic reviews, guidelines, benchmark datasets, existing tools, libraries, protocols, ontologies, and reporting standards before proposing work.
  • Prefer primary and authoritative sources over tertiary summaries when accuracy matters.
  • Search for negative results, contradictory evidence, failure modes, and replication attempts instead of only supportive results.
  • Inspect local project artifacts before suggesting a new workflow when the task depends on an existing codebase, dataset, or protocol.
  3. Evaluate evidence quality.
  • Rank sources by relevance, rigor, recency, and direct applicability to the question.
  • Distinguish peer-reviewed papers, preprints, guidelines, textbooks, package documentation, and informal discussion.
  • Prioritize strong syntheses and well-matched study designs over isolated or weakly related findings.
  • Flag uncertainty explicitly when evidence is indirect, outdated, conflicting, underpowered, or drawn from a mismatched population.
  4. Choose the action mode deliberately.
  • Adopt an established method when a strong, well-matched pattern already exists.
  • Adapt a validated method when the problem is similar but not identical.
  • Benchmark multiple credible approaches when the field lacks a dominant standard.
  • Design a new approach only after documenting what was searched, what already exists, and why it is insufficient.
  5. Execute with traceability.
  • State the chosen approach and why it was selected over alternatives.
  • Cite the papers, tools, libraries, datasets, or standards that informed the decision.
  • Separate evidence, inference, and speculation.
  • Preserve assumptions, inclusion criteria, exclusion criteria, and unresolved questions.
  6. Re-check before finalizing.
  • Verify that the strength of the final claims matches the strength of the underlying sources.
  • Re-open the search if a key assumption is unsupported or if a stronger source is likely to exist.
  • Perform an extra review for harms, contraindications, bias, and guideline consistency when the output is medically relevant or otherwise high stakes.

Search Targets

  • Search for literature first: systematic reviews, meta-analyses, guidelines, seminal papers, recent high-quality studies, protocols, replication studies.
  • Search for research infrastructure: benchmark datasets, registries, ontologies, reference implementations, software packages, analysis pipelines, laboratory or clinical standards.
  • Search for methodological patterns: study designs, statistical approaches, outcome definitions, preprocessing conventions, validation schemes, reporting frameworks.
  • Search for practical constraints: data availability, licensing, regulatory context, ethical constraints, reporting expectations, reproducibility requirements.
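The four target groups above can be turned into a reusable search plan. A sketch, assuming a simple topic-plus-target query format (the group names and query strings are illustrative, not a prescribed schema):

```python
# Illustrative grouping of the search targets listed above.
SEARCH_TARGETS = {
    "literature": ["systematic review", "meta-analysis", "guideline", "replication study"],
    "infrastructure": ["benchmark dataset", "registry", "ontology", "reference implementation"],
    "methods": ["study design", "statistical approach", "validation scheme", "reporting framework"],
    "constraints": ["data availability", "licensing", "regulatory context", "reproducibility"],
}

def build_queries(topic: str) -> list[str]:
    """Combine the topic with every search target into literal query strings."""
    return [f"{topic} {target}" for group in SEARCH_TARGETS.values() for target in group]

queries = build_queries("sepsis biomarkers")
```

Enumerating all four groups up front makes it harder to skip the infrastructure and constraints searches that are easiest to forget.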

Decision Heuristics

  • Prefer "adopt" when the question is standard and the field already has a stable method.
  • Prefer "adapt" when the method exists but the data, population, or setting differs.
  • Prefer "benchmark" when several plausible methods compete and no clear winner exists.
  • Prefer "invent" only after showing that existing methods, tools, or study patterns do not adequately solve the problem.
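These four heuristics reduce to a small decision function. A sketch, assuming the situation can be summarized by four yes/no judgments (the parameter names are illustrative):

```python
def choose_mode(stable_method: bool, setting_matches: bool,
                competing_methods: bool, gap_documented: bool) -> str:
    """Map the decision heuristics above to adopt / adapt / benchmark / invent."""
    if stable_method and setting_matches:
        return "adopt"        # standard question, stable method
    if stable_method:
        return "adapt"        # method exists, but data/population/setting differs
    if competing_methods:
        return "benchmark"    # plausible methods compete, no clear winner
    if gap_documented:
        return "invent"       # the gap has been searched for and shown to be real
    return "keep searching"   # novelty is not yet justified
```

The fall-through ordering encodes the document's bias: invention is reachable only after every reuse option has been ruled out and the gap is documented.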

Medical Emphasis

  • Give extra weight to clinical practice guidelines, systematic reviews, meta-analyses, and pivotal trials when the task involves patient care, diagnostics, treatment, prognosis, or safety.
  • Treat preprints, conference abstracts, animal models, in vitro studies, single-center retrospective studies, case reports, and expert opinion as lower-certainty evidence unless the task specifically requires them.
  • Avoid patient-specific recommendations without current sources, clear scope limits, and explicit uncertainty.
  • Flag when geography, formulary availability, regulatory status, standard of care, or population differences may change the answer.
  • Distinguish mechanistic plausibility from clinical effectiveness, and surrogate outcomes from patient-important outcomes.
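The weighting above implies an evidence hierarchy that can be applied mechanically when triaging sources. A sketch with an illustrative rank table (the numeric ranks and the drug-X examples are assumptions for demonstration, not a validated grading scheme such as GRADE):

```python
# Lower rank = higher certainty, per the medical emphasis above.
EVIDENCE_RANK = {
    "guideline": 0,
    "systematic review": 0,
    "meta-analysis": 0,
    "pivotal trial": 1,
    "single-center retrospective": 2,
    "preprint": 3,
    "conference abstract": 3,
    "animal model": 4,
    "in vitro": 4,
    "case report": 5,
    "expert opinion": 5,
}

def sort_by_certainty(sources: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Order (title, evidence_kind) pairs from higher to lower certainty; unknown kinds sort last."""
    return sorted(sources, key=lambda s: EVIDENCE_RANK.get(s[1], 6))

ordered = sort_by_certainty([
    ("Mouse study of drug X", "animal model"),
    ("Cochrane review of drug X", "systematic review"),
    ("Phase III trial of drug X", "pivotal trial"),
])
```

A fixed table like this is only a triage aid; the prose caveats (population match, surrogate vs. patient-important outcomes) still require human judgment.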

Output Pattern

Before doing deep work, produce a concise research checkpoint when useful:

  • Objective.
  • Search targets.
  • Best existing papers, tools, or patterns found so far.
  • Evidence strength and important gaps.
  • Chosen path: adopt, adapt, benchmark, or invent.
  • Main risks, assumptions, and next step.
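The checkpoint above can be enforced as a template so no field is silently skipped. A sketch (the field labels follow the bullets above; the EHR de-duplication example values are hypothetical):

```python
CHECKPOINT_FIELDS = [
    "Objective",
    "Search targets",
    "Best existing work",
    "Evidence strength and gaps",
    "Chosen path",
    "Risks, assumptions, next step",
]

def render_checkpoint(values: dict[str, str]) -> str:
    """Render the research checkpoint, refusing to emit an incomplete one."""
    missing = [f for f in CHECKPOINT_FIELDS if f not in values]
    if missing:
        raise ValueError(f"checkpoint incomplete, missing: {missing}")
    return "\n".join(f"- {f}: {values[f]}" for f in CHECKPOINT_FIELDS)

note = render_checkpoint({
    "Objective": "Pick a de-duplication method for EHR records",
    "Search targets": "record-linkage literature, existing linkage packages",
    "Best existing work": "probabilistic record-linkage models and packages",
    "Evidence strength and gaps": "strong methods literature, few EHR-specific benchmarks",
    "Chosen path": "adapt",
    "Risks, assumptions, next step": "assumes stable identifiers; next: pilot on a sample",
})
```

Failing loudly on missing fields is the point: a checkpoint with a blank "Evidence strength and gaps" entry is exactly the shortcut the workflow is designed to prevent.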

Anti-Patterns

  • Do not start building, analyzing, or writing as if the problem were novel without checking the literature and existing tools.
  • Do not equate "published" with "reliable" or "recent" with "best."
  • Do not overgeneralize from weak evidence, surrogate endpoints, or mechanistic arguments.
  • Do not rely on abstracts alone when methods or limitations matter.
  • Do not ignore population mismatch, confounding, missing comparators, or sample size limitations.
  • Do not present speculation as consensus.
  • Do not skip contradictory evidence just because it complicates the answer.
