Install
```
openclaw skills install virtual-reading-group
```

Orchestrate parallel expert agents to read papers, discuss findings, challenge each other's interpretations, and synthesize an integrated discussion document with traceable citations. Use it when reading multiple papers and generating cross-examined expert discussion notes.
Minimum inputs required:
- Research question
- Paper list (non-empty)

Optional inputs:
- Output directory
- Custom personas (default: `references/default-personas.md`)

The skill runs 4 sequential phases. Each phase must complete before the next begins.
| Phase | Agents | Input | Output |
|---|---|---|---|
| 1. Paper Reading | N experts (parallel) | Papers + research question | {AuthorYear}_notes.md, {Expert}_session_summary.md |
| 2. Junior Discussion | 1 junior researcher | All Phase 1 outputs | {Junior}_discussion.md |
| 3. Expert Responses | N experts (parallel) | Phase 2 output + other experts' summaries | {Expert}_response_to_{Junior}.md |
| 4. Synthesis | 1 synthesizer | All previous outputs | Integrated_Discussion_Summary.md |
For detailed prompts and phase specifications, read `references/workflow.md`.
⚠️ Important: The prompts below are abbreviated summaries. For full prompt templates that produce quality output, use `references/workflow.md`. The pseudocode blocks show orchestration structure; adapt them to your actual sub-agent spawning mechanism.
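The strict phase sequencing above (parallel agents within a phase, hard barriers between phases) can be sketched as follows. This is a minimal illustration, not the skill's actual code: `run_phase` and the lambda stubs are hypothetical stand-ins for the real sub-agent spawning mechanism.

```python
from concurrent.futures import ThreadPoolExecutor

def run_phase(tasks):
    """Run one phase's agent tasks in parallel and block until all
    finish, enforcing the barrier before the next phase begins."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(t) for t in tasks]
        return [f.result() for f in futures]  # preserves submission order

# Hypothetical stubs standing in for real sub-agent spawns; each
# "agent" just returns the name of the file it would produce.
experts = ["Lin", "Wu"]
phase1 = run_phase([lambda e=e: f"{e}_session_summary.md" for e in experts])
phase2 = run_phase([lambda: "Chen_discussion.md"])  # single junior agent
phase3 = run_phase([lambda e=e: f"{e}_response_to_Chen.md" for e in experts])
phase4 = run_phase([lambda: "Integrated_Discussion_Summary.md"])
```

Whatever mechanism you use, the key property is the barrier: Phase 2 reads every Phase 1 output, so no Phase 2 agent may start before all Phase 1 agents complete.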
- Confirm research question is specified
- Confirm paper list is non-empty
- Confirm output directory exists or create it
- Load personas from user input or references/default-personas.md
Determine number of experts and paper batches:

```
if paper_count <= 4:
    num_experts = 1
elif paper_count <= 10:
    num_experts = 2
elif paper_count <= 20:
    num_experts = min(4, ceil(paper_count / 5))
else:
    num_experts = min(8, ceil(paper_count / 5))
```

Distribute papers evenly across experts (max 5 per expert).

⚠️ Context contamination warning: assigning >5 papers per expert degrades note quality; later papers in the batch get shallower treatment as context fills up. Prefer 3-5 papers per agent for best results.
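The sizing rules above can be made concrete as a small helper; this is a sketch mirroring the pseudocode, not the skill's own implementation. Note that with the 8-expert cap, counts above 40 papers put more than 5 papers on some experts, so the contamination warning applies at the top of the range.

```python
import math

def plan_experts(paper_count):
    """Expert count per the sizing rules above (a sketch)."""
    if paper_count <= 4:
        return 1
    if paper_count <= 10:
        return 2
    if paper_count <= 20:
        return min(4, math.ceil(paper_count / 5))
    return min(8, math.ceil(paper_count / 5))

def distribute(papers, num_experts):
    """Round-robin papers across experts so batch sizes differ by at most 1."""
    batches = [[] for _ in range(num_experts)]
    for i, paper in enumerate(papers):
        batches[i % num_experts].append(paper)
    return batches
```

For example, 7 papers with 2 experts yields batches of 4 and 3, keeping every batch within the preferred 3-5 range.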
For each expert, spawn a sub-agent with:
- Agent name: `expert-reader-{expert_name}`
- Notes template: `references/paper-notes-template.md`
- Output: `{output_dir}/{AuthorYear}_notes.md`

📄 Full prompt template: See `references/workflow.md` → Phase 1
Wait for all Phase 1 agents to complete before proceeding.
Spawn single agent with:
- Agent name: `junior-discussion`

📄 Full prompt template: See `references/workflow.md` → Phase 2
Wait for Phase 2 to complete before proceeding.
For each expert, spawn a sub-agent with:
- Agent name: `expert-response-{expert_name}`

📄 Full prompt template: See `references/workflow.md` → Phase 3
Wait for all Phase 3 agents to complete before proceeding.
Spawn single agent with:
- Agent name: `synthesis`
- Structure: `assets/synthesis-template.md`
- Attribution format: `[Expert_A]`/`[Expert_B]`/`[Junior]` + `(PaperCode, §Section)`

📄 Full prompt template: See `references/workflow.md` → Phase 4
List all generated files and provide a brief summary of the discussion themes.
If the user wants experts to expand on specific points, or for a full second round (new questions, new responses), repeat Phases 2-3 with round-suffixed filenames (e.g., `Chen_discussion_r1.md`).

If a phase fails, see `references/workflow.md`.

| File Type | Pattern | Example |
|---|---|---|
| Paper notes | {FirstAuthorLastName}{Year}_notes.md | Chen2024_notes.md |
| Expert summary | {ExpertLastName}_session_summary.md | Lin_session_summary.md |
| Junior discussion | {JuniorLastName}_discussion.md | Chen_discussion.md |
| Expert response | {ExpertLastName}_response_to_{JuniorLastName}.md | Lin_response_to_Chen.md |
| Synthesis | Integrated_Discussion_Summary.md | — |
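The naming patterns in the table can be generated mechanically; a small sketch (the function names here are illustrative, not part of the skill):

```python
def note_filename(first_author_last_name, year):
    """Per-paper notes: {FirstAuthorLastName}{Year}_notes.md"""
    return f"{first_author_last_name}{year}_notes.md"

def summary_filename(expert_last_name):
    """Per-expert session summary: {ExpertLastName}_session_summary.md"""
    return f"{expert_last_name}_session_summary.md"

def response_filename(expert_last_name, junior_last_name):
    """Phase 3 response: {ExpertLastName}_response_to_{JuniorLastName}.md"""
    return f"{expert_last_name}_response_to_{junior_last_name}.md"
```

Keeping generation in one place avoids agents drifting from the patterns, which would break the traceability links in the synthesis phase.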
Enforce in all agent prompts:
- Citation format: `(AuthorYear, §Section)` or `(AuthorYear, p.X)`
- Attribution tags: `[Expert_A]`, `[Expert_B]`, `[Junior]`
- Never fabricate citations. If an agent cannot find the exact passage in the source text, write `<!-- source not found -->` instead.

Fabricated citations are worse than missing citations: they corrupt the knowledge base silently. Accuracy > Coverage.
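A post-hoc check can catch citations that drift from the required format. This is a sketch under the assumption that citations follow the `(AuthorYear, §Section)` / `(AuthorYear, p.X)` patterns above; the regex and function name are illustrative.

```python
import re

# Matches "(Chen2024, §3.2)" or "(Lin2023, p.12)" per the citation rules.
CITATION = re.compile(r"\((?P<ref>[A-Z][A-Za-z]+\d{4}),\s*(§[\w.\-]+|p\.\d+)\)")

def find_citations(text):
    """Return the AuthorYear codes of all well-formed citations in text."""
    return [m.group("ref") for m in CITATION.finditer(text)]
```

Running this over each agent's output and comparing the extracted codes against the assigned paper list flags both malformed and out-of-scope citations before synthesis.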
If a paper has no PDF or markdown source available, mark it `📭 未讀` ("unread"). Only write substantive notes when the actual source document is accessible.
| Papers | Experts | Batches | Estimated Time |
|---|---|---|---|
| 1-4 | 1 | 1 | 15-20 min |
| 5-10 | 2 | 2 | 20-30 min |
| 11-20 | 3-4 | 3-4 | 30-45 min |
| 21-50 | 5-8 | 5-8 | 45-90 min |
Replace default personas by providing:

```
Expert A: Dr. [Name], [Role]. Background in [X].
Emphasizes [methodology/perspective]. Skeptical of [Y].
Tone: [collegial/rigorous/provocative].

Expert B: Dr. [Name], [Role]. Background in [X].
...
```
See references/default-personas.md for complete templates.
Pass the language parameter when invoking the orchestration by adding a `Language: {language}` instruction to each phase prompt.

Example: "Run the reading group in Japanese" → adds `Language: Japanese` to all phase prompts.
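Injecting the instruction can be as simple as the following sketch (`with_language` is an illustrative helper, not part of the skill):

```python
def with_language(prompt, language=None):
    """Append the Language instruction to a phase prompt when set."""
    if language is None:
        return prompt
    return f"{prompt}\n\nLanguage: {language}"
```

Applying it uniformly to all four phase prompts keeps the notes, discussion, responses, and synthesis in one language.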
Model choice significantly impacts output quality and cost:
| Configuration | Phases | Quality | Cost | Use When |
|---|---|---|---|---|
| Full opus | All phases use opus | Highest | $$$ | Publication-quality analysis, complex papers |
| Mixed | Phase 1: sonnet, Phases 2-4: opus | High | $$ | Good balance — reading is less reasoning-intensive |
| Budget | All phases use sonnet | Medium | $ | Quick exploration, simpler papers |
Recommendation: the mixed configuration is a good default; reserve full opus for publication-quality analysis of complex papers and full sonnet for quick exploration.
This skill is standalone but works well with paper collection workflows.

Bundled resources:
- `references/workflow.md` — Detailed phase specifications and full prompt templates
- `references/default-personas.md` — Ready-to-use expert and junior researcher personas
- `references/paper-notes-template.md` — Template for individual paper notes
- `assets/synthesis-template.md` — Structure for the final integrated discussion summary