Install
openclaw skills install sopaper-evidence

Evidence-first research workflow for evidence discovery, source verification, and citation grounding.

SoPaper Evidence is an evidence-first research skill. Its job is to build a reliable evidence pack before supporting any downstream paper outline, abstract, related work summary, experiment plan, or draft section.
Version: v1.0.0
Canonical repository: https://github.com/sheepxux/SoPaper-Evidence
This published skill bundle includes the helper scripts it references under scripts/. The GitHub repository remains the public source of truth for releases, examples, and issue tracking.
Use the highest-quality source available for each claim.
Read references/source-priority.md when source quality or conflicts matter. Read references/input-schemas.md when stronger input structure is needed before running the workflow.
Collect or infer:
If the project scope is unclear, produce a short working scope and label assumptions.
Search for:
For each source, capture the title, URL or path, source type, and why it matters.
Use references/prior-work-search-playbook.md for a repeatable search process. For OpenClaw-specific work, use references/openclaw-evidence-playbook.md.
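The per-source record can be as small as a four-field struct. A minimal sketch in Python, assuming a flat dataclass; the field names here are illustrative, not the schema from references/evidence-schema.md, and the URL is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SourceRecord:
    """One discovered source. Field names are illustrative, not the skill's schema."""
    title: str
    locator: str       # URL or local file path
    source_type: str   # e.g. "peer_reviewed", "preprint", "project_artifact", "blog"
    relevance: str     # one sentence on why this source matters for the task

sources = [
    SourceRecord(
        title="Example survey on the project's topic",
        locator="https://example.org/survey",  # hypothetical URL
        source_type="preprint",
        relevance="Maps the related-work landscape the draft must cite.",
    ),
]
```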
For each evidence item, classify it as:
- verified_fact
- project_evidence
- inference
- unverified

Do not merge these labels. If a statement depends on inference, say so explicitly.
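The four label names above come from this skill; everything else in the sketch below is an assumption about how an implementation might keep the labels from merging:

```python
# The four evidence labels defined above. Keeping them in a closed set makes
# accidental merging (e.g. "verified_inference") a hard error rather than a drift.
EVIDENCE_LABELS = {"verified_fact", "project_evidence", "inference", "unverified"}

def label_evidence(item: dict, label: str) -> dict:
    """Attach exactly one label to an evidence item; reject anything outside the set."""
    if label not in EVIDENCE_LABELS:
        raise ValueError(f"unknown evidence label: {label!r}")
    return {**item, "evidence_type": label}

claim = label_evidence(
    {"statement": "Method X outperforms baseline Y on benchmark Z."},
    "inference",  # depends on reasoning over partial results, so say so explicitly
)
```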
Use the schema in references/evidence-schema.md.
At minimum, extract:
Organize findings into:
- related_work
- datasets_and_benchmarks
- baselines
- case_studies
- project_results
- claim_to_evidence
- evidence_gaps

Use assets/claim-evidence-map-template.md when the user needs a reusable deliverable.
Use assets/related-work-matrix-template.md when comparing papers, baselines, and benchmark coverage.
Use assets/experiment-gap-report-template.md when the task requires prioritizing missing experiments before drafting.
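A minimal sketch of the pack these sections imply, assuming a plain dict-of-lists container; only the seven keys come from the skill:

```python
# Empty evidence pack keyed by the seven sections named above.
evidence_pack = {
    "related_work": [],
    "datasets_and_benchmarks": [],
    "baselines": [],
    "case_studies": [],
    "project_results": [],
    "claim_to_evidence": [],
    "evidence_gaps": [],
}

def file_finding(pack: dict, section: str, finding: dict) -> None:
    """Route a finding into one named section; unknown sections are an error."""
    if section not in pack:
        raise KeyError(f"unknown evidence-pack section: {section!r}")
    pack[section].append(finding)
```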
Use bundled scripts/build_evidence_ledger.py when the user already has markdown notes or source lists and needs a first-pass evidence ledger.
Use bundled scripts/generate_search_plan.py when the user starts only with a topic and needs a first-pass evidence search plan.
Use bundled scripts/generate_topic_claims.py when the user starts only with a topic and needs a cautious structured claims draft.
Use bundled scripts/search_external_sources.py when the user needs a first-pass source list from a topic or search plan.
Use bundled scripts/fetch_external_sources.py when raw URLs should be converted into structured source-note drafts before review.
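As a rough stand-in for what "structured source-note drafts" means (this is not the bundled script's implementation; the fields and the standard-library fetch are assumptions):

```python
import urllib.request

def draft_source_note(url: str, timeout: float = 10.0) -> dict:
    """Fetch a URL and return an unreviewed source-note draft.

    A stand-in sketch, not fetch_external_sources.py itself; real fetching
    needs error handling, politeness delays, and HTML parsing.
    """
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    return {
        "locator": url,
        "raw_excerpt": body[:2000],     # keep a bounded excerpt for review
        "evidence_type": "unverified",  # drafts stay unverified until reviewed
    }
```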
Use bundled scripts/verify_source_notes.py when fetched notes should be conservatively upgraded into page-level verified facts or reviewed primary-source summaries before entering the ledger.
Use bundled scripts/run_evidence_pipeline.py when the user already has source files, claims, and optional result artifacts and wants one end-to-end draft pack. Result artifacts may be structured markdown, .csv, .tsv, or .json, and multiple result artifacts can be fused into aggregate project evidence.
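The fusion step in miniature, assuming flat result rows; this sketch is not run_evidence_pipeline.py itself, only an illustration of reading mixed artifact formats into one aggregate project_evidence item:

```python
import csv
import json
from pathlib import Path

def load_result_artifact(path: Path) -> list[dict]:
    """Parse one result artifact (.csv, .tsv, or .json) into result rows."""
    if path.suffix in {".csv", ".tsv"}:
        delimiter = "\t" if path.suffix == ".tsv" else ","
        with path.open(newline="") as fh:
            return list(csv.DictReader(fh, delimiter=delimiter))
    if path.suffix == ".json":
        data = json.loads(path.read_text())
        return data if isinstance(data, list) else [data]
    raise ValueError(f"unsupported artifact type: {path.suffix}")

def fuse_artifacts(paths: list[Path]) -> dict:
    """Fuse several artifacts into one aggregate project_evidence item."""
    rows = [row for p in paths for row in load_result_artifact(p)]
    return {
        "evidence_type": "project_evidence",
        "rows": rows,
        "sources": [str(p) for p in paths],
    }
```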
Use bundled scripts/bootstrap_claim_map.py when the user already has a claims list and a ledger draft and needs a first-pass claim map.
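A sketch of what a first-pass claim map could look like, assuming naive keyword overlap as the matching heuristic; the bundled script's actual matching logic may differ:

```python
def bootstrap_claim_map(claims: list[str], ledger: list[dict]) -> list[dict]:
    """First-pass claim map: link each claim to ledger entries sharing keywords.

    Keyword overlap is an assumed stand-in for the real matching; unmatched
    claims surface as candidate evidence gaps.
    """
    claim_map = []
    for claim in claims:
        words = {w.lower() for w in claim.split() if len(w) > 3}
        hits = [
            entry for entry in ledger
            if words & {w.lower() for w in entry.get("statement", "").split()}
        ]
        claim_map.append({
            "claim": claim,
            "evidence": [e.get("id") for e in hits],
            "status": "supported" if hits else "gap",
        })
    return claim_map
```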
Use bundled scripts/triage_evidence_gaps.py when the user needs a first-pass blocker/major/minor gap report from the current claims and evidence ledger.
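Building on the claim-map sketch above, a hypothetical severity rule; only the blocker/major/minor tiers come from the skill, the rules themselves are assumptions:

```python
def triage_gap(claim_entry: dict) -> str:
    """Assign a severity tier to one claim-map entry.

    The blocker/major/minor tiers come from the skill; the rules below are
    illustrative, not the bundled script's actual policy.
    """
    if claim_entry["status"] != "gap":
        return "none"
    claim = claim_entry["claim"].lower()
    if any(word in claim for word in ("outperforms", "state-of-the-art", "sota")):
        return "blocker"  # comparative wins need evidence before any drafting
    if "benchmark" in claim or "baseline" in claim:
        return "major"
    return "minor"
```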
Use bundled scripts/review_comparison_fairness.py when the user needs a dedicated fairness check on comparative claims, baseline breadth, metric grounding, and scope alignment.
Use bundled scripts/run_topic_evidence_pipeline.py when the user wants the full topic-driven workflow from theme to search plan, source list, fetched notes, ledger, claim map, and gap report.
Use bundled scripts/validate_input_bundle.py when the user has partially structured inputs and needs a quick schema check before running the pipeline.
Only after the evidence map is complete, support tasks such as:
Before writing, run the checks in references/claim-audit-rules.md. Use assets/paper-outline-from-evidence-template.md when the user needs a draft-safe paper structure.
Unless the user asks for something else, default to this output shape:
- Evidence brief
- Key sources
- Claim-to-evidence map
- Evidence gaps
- Safe writing notes
- Experiment gap report when blocker gaps exist

See the example set in:
When supporting downstream paper writing:
When the user is working on OpenClaw or a similar embodied AI / robotics project, prioritize:
Do not assume OpenClaw has capabilities, datasets, or benchmark wins unless they are present in project artifacts or verified sources. Use references/benchmark-baseline-checklist.md before accepting benchmark-fit or baseline coverage claims. Use references/evidence-gap-triage.md when deciding whether to keep drafting or stop and report blockers.