Install
```bash
openclaw skills install literature-manager
```

Search, download, convert, organize, and audit academic literature collections. Use when asked to find papers, build a literature library, add papers to references, download PDFs, convert papers to markdown, organize references by category, audit a reference collection, or collect code/dataset links for tools mentioned in papers.

Manage academic literature collections: search → download → convert → organize → verify.
Requirements:

- pdftotext (poppler-utils) — PDF text extraction
- curl — downloading
- python3 — JSON processing in audit
- file (coreutils) — PDF validation
- uvx markitdown[pdf] (optional) — fallback PDF→MD converter (note: plain uvx markitdown does NOT work for PDFs — must use uvx markitdown[pdf])

```bash
# Download a single paper by DOI
bash scripts/download.sh "10.1038/s41592-024-02200-1" output_dir/

# Convert PDF to markdown
bash scripts/convert.sh paper.pdf output.md

# Verify a single PDF+MD pair
bash scripts/verify.sh paper.pdf paper.md

# Full audit of a references/ folder
bash scripts/audit.sh /path/to/references/
```
Use web_fetch on Google Scholar:
https://scholar.google.com/scholar?q=QUERY&as_ylo=YEAR
Extract: title, authors, year, journal, DOI, PDF links.
For each result, identify the best open-access PDF source (see Download Strategy).
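A minimal sketch of building the query URL (the search terms and year are illustrative; q must be URL-encoded):

```bash
# Illustrative only: construct the Scholar URL to pass to web_fetch.
QUERY="spatial%20transcriptomics%20review"   # URL-encoded search terms (example)
YEAR=2022                                    # as_ylo: earliest publication year
echo "https://scholar.google.com/scholar?q=${QUERY}&as_ylo=${YEAR}"
```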
Run scripts/download.sh <DOI_or_URL> <output_dir/> per paper. The script tries sources in order:
- PubMed Central (DOI → PMC_ID → PDF)
- Sci-Hub: https://sci-hub.box/<DOI> (use when publisher is paywalled)

```bash
# Sci-Hub download example:
curl -L "https://sci-hub.box/10.1038/nature12345" -o paper.pdf
```
⚠️ Legal note: Sci-Hub may violate publisher terms of service or copyright law in some jurisdictions. Use only if you understand and accept the legal implications in your context.
If all sources fail (including Sci-Hub), flag as permanent paywall. Provide the user with the DOI and ask for manual download.
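For a batch of papers, a loop like the following keeps failures visible (dois.txt and failed_dois.txt are illustrative names):

```bash
# Sketch: download each DOI; collect failures for manual follow-up.
while read -r doi; do
  bash scripts/download.sh "$doi" references/new-downloads/ \
    || echo "$doi" >> failed_dois.txt   # candidates for permanent-paywall flagging
done < dois.txt
```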
Run scripts/convert.sh <input.pdf> <output.md>. Uses pdftotext (reliable) with uvx markitdown[pdf] as fallback.
```bash
# Correct markitdown command for PDFs:
uvx markitdown[pdf] input.pdf > output.md

# ⚠️ The following will NOT work for PDFs (missing [pdf] extra):
# uvx markitdown input.pdf
```
Prefer uvx markitdown[pdf] over pdftotext when full fidelity (tables, figure captions) matters.
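A minimal sketch of the fallback logic convert.sh presumably implements (paths illustrative; quoting the [pdf] extra avoids shell globbing):

```bash
# Try pdftotext first; fall back to markitdown[pdf] if the output is empty.
pdftotext -layout paper.pdf paper.md || true
if [ ! -s paper.md ]; then
  uvx 'markitdown[pdf]' paper.pdf > paper.md   # note: plain markitdown lacks PDF support
fi
```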
Standard folder structure:

```
references/
├── README.md        # Human index (summaries per category)
├── index.json       # Machine index (structured metadata)
├── RESOURCES.md     # Code repos + datasets
├── resources.json   # Structured version
├── <category-1>/
│   ├── papers/      # PDFs
│   └── markdown/    # Converted text
└── <category-N>/
    ├── papers/
    └── markdown/
```
Categories are user-defined. Number-prefix them for sort order (e.g., 01-theoretical-frameworks/).
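A scaffold sketch for starting a new collection (the category name is illustrative):

```bash
# Create the standard layout; index files start empty and are filled by the workflow.
mkdir -p references/01-theoretical-frameworks/{papers,markdown}
touch references/README.md references/index.json \
      references/RESOURCES.md references/resources.json
```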
index.json entry format:

```json
{
  "id": "short_id",
  "title": "Full title",
  "authors": ["Author1", "Author2"],
  "year": 2024,
  "journal": "Journal Name",
  "doi": "10.xxxx/...",
  "category": "category_name",
  "subcategory": "optional",
  "pdf_path": "category/papers/filename.pdf",
  "markdown_path": "category/markdown/filename.md",
  "tags": ["tag1", "tag2"],
  "one_line_summary": "English one-liner",
  "key_concepts": ["concept1"],
  "relevance_to_project": "English description"
}
```
In README.md, each category section lists every paper with: title, authors, year, journal, DOI, and a short summary in the user's language.
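One way to draft those per-paper lines from index.json (a sketch; assumes index.json is a flat JSON list of entry objects as shown above):

```bash
python3 - <<'EOF'
import json

entries = json.load(open("references/index.json"))
for e in sorted(entries, key=lambda e: (e["category"], e["year"])):
    authors = ", ".join(e["authors"])
    print(f'- {e["title"]} ({authors}, {e["year"]}, {e["journal"]}, '
          f'DOI: {e["doi"]}): {e["one_line_summary"]}')
EOF
```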
Downloaded files are often named using DOI format rather than AuthorYear:

```
10-1038_ncomms3018.md             # DOI: 10.1038/ncomms3018
10-1016_j-neuron-2015-03-034.md
```
When markdown_path entries in index.json become stale (e.g., after folder reorganization), maintain a separate mapping file:

```jsonc
// temp/paper_md_mapping.json
{
  "author2024_keyword": "references/new-downloads/10-1038_s41592-024-02200-1.md",
  ...
}
```
To build this mapping: cross-reference each paper's DOI in index.json against actual files on disk. Use find + Python to automate.
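A sketch of that automation (assumes index.json is a flat list of entries, and that filenames follow the DOI convention above: dots → dashes, slash → underscore; pathlib.rglob stands in for find):

```bash
python3 - <<'EOF'
import json, pathlib

entries = json.load(open("references/index.json"))
md_files = {p.stem: str(p) for p in pathlib.Path("references").rglob("*.md")}

mapping = {}
for e in entries:
    # Filename convention from above: '.' -> '-', '/' -> '_'
    stem = e["doi"].replace(".", "-").replace("/", "_")
    if stem in md_files:
        mapping[e["id"]] = md_files[stem]

pathlib.Path("temp").mkdir(exist_ok=True)
json.dump(mapping, open("temp/paper_md_mapping.json", "w"), indent=2)
EOF
```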
Known index failure modes:

- id: null corruption: if many entries have id=null and share the same pdf_path, the index was likely corrupted during a batch write. Rebuild from actual files on disk.
- Stale markdown_path: after restructuring folders, markdown_path in index.json often points to old locations. Use the mapping file above as the source of truth.

Run scripts/audit.sh <references_dir/> for full verification. Checks include that every PDF is a real PDF (file -b = PDF) and that text extraction yields content (pdftotext | head).

For tool/method papers, find GitHub repos and public datasets. Store in RESOURCES.md + resources.json.
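A starting point for harvesting candidate links from the converted markdown (the pattern is a rough heuristic; verify each hit before recording it):

```bash
# List unique GitHub repo URLs mentioned across all converted papers.
grep -rhoE 'https://github\.com/[A-Za-z0-9_.-]+/[A-Za-z0-9_.-]+' \
  references/*/markdown/ | sort -u
```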
For large batches, parallelize:
Always use a separate sub-agent for verification (QC should not self-grade).
1. Spawn agent(s)
2. Immediately set a cron job (every 10-15 min, isolated agentTurn)
→ Check if expected output files exist
→ Re-spawn failed agents
→ When all complete: announce + delete cron
3. After task finishes, confirm cron was removed
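A sketch of the "check expected output files" step from the loop above (assumes one markdown file per queued DOI, named per the DOI convention; queue.txt is an illustrative name):

```bash
# Report DOIs whose converted markdown has not appeared yet.
while read -r doi; do
  stem=$(echo "$doi" | tr './' '-_')   # '.' -> '-', '/' -> '_'
  [ -s "references/new-downloads/${stem}.md" ] || echo "missing: $doi"
done < queue.txt
```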
To add papers to an existing collection: