## Install

```
openclaw skills install ai-paper-survey
```

Structured, multi-phase paper survey workflow for AI research. Conducts structured AI paper surveys using alphaXiv MCP tools: reads the user's research interests from a keywords file, searches recent papers across multiple dimensions, classifies them by innovation tier, and produces a structured report.
Requires the alphaXiv MCP tools (`embedding_similarity_search`, `full_text_papers_search`, `get_paper_content`).

## Phase 1: Gather research interests

Check if a research keywords file exists. Look for files matching these patterns:
- `研究关键词*.md`
- `research-keywords*.md`
- `research-interests*.md`

in the current working directory. If found, read it and extract:

- The user's research themes
- Any custom classification framework (see Custom frameworks below)

If no keywords file is found, ask the user for their research themes.
Determine the time range (default: the last 3 months from today).

Generate search queries using the template below (a code sketch follows this list). For each user theme {T}, generate:

- **Semantic query**: "Fundamental advances in {T}, paradigm shift, redefine {T}, {year}"
- **Keyword query**: "{specific_keywords_from_T} {year_range}"
- **Contrast query**: "Alternative to {current_paradigm_of_T}, beyond {T}, {year}"
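A minimal sketch of this expansion, assuming each theme's specific keywords and current paradigm have already been read from the keywords file; all field values in the example are illustrative:

```python
from datetime import date


def build_queries(theme: str, keywords: str, paradigm: str,
                  year: int, year_range: str) -> dict:
    """Expand one user theme into the three query types above."""
    return {
        "semantic": (f"Fundamental advances in {theme}, paradigm shift, "
                     f"redefine {theme}, {year}"),
        "keyword": f"{keywords} {year_range}",
        "contrast": f"Alternative to {paradigm}, beyond {theme}, {year}",
    }


# Hypothetical theme pulled from a keywords file.
queries = build_queries(
    theme="LLM reasoning",
    keywords="process reward model chain-of-thought",
    paradigm="outcome-only supervision",
    year=date.today().year,
    year_range="2024-2025",
)
```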
## Phase 2: Search & classify

Execute the search queries in parallel using the alphaXiv MCP tools (a sketch of the fan-out follows the expected output below):

- `embedding_similarity_search` for semantic queries (captures conceptual matches)
- `full_text_papers_search` for keyword queries (captures exact term matches)

Rules: deduplicate results across queries so each candidate paper appears only once.
Expected output: 30-60 unique candidate papers with titles and abstracts.
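A sketch of the parallel fan-out and deduplication, assuming an async `call_tool(name, args)` helper from your MCP client; the `{"query": ...}` argument shape and the list-of-dicts result shape are assumptions, not the alphaXiv server's documented schema:

```python
import asyncio


async def run_searches(call_tool, semantic_queries, keyword_queries):
    """Fan out all queries at once, then deduplicate by arXiv ID."""
    tasks = [call_tool("embedding_similarity_search", {"query": q})
             for q in semantic_queries]
    tasks += [call_tool("full_text_papers_search", {"query": q})
              for q in keyword_queries]
    batches = await asyncio.gather(*tasks)

    seen, candidates = set(), []
    for batch in batches:
        for paper in batch:  # assumes each tool returns a list of paper dicts
            pid = paper.get("arxiv_id")
            if pid and pid not in seen:
                seen.add(pid)
                candidates.append(paper)
    return candidates
```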
For each candidate paper, classify it by the user's framework. The default framework (3-tier, mirroring the report sections below; a data sketch follows this list):

- **Tier 1 (Essence)**: redefines the problem or challenges a fundamental assumption
- **Tier 2 (Engineering)**: does an established thing substantially better
- **Tier 3 (Patches)**: symptom relief; fixes a known issue within the current approach

Rules: classify from the title and abstract only; full content is retrieved in Phase 3.

Expected output: classified paper list with tier assignments.
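One way to hold the framework as plain data, so a user-supplied framework from the keywords file can be swapped in without changing the workflow (a sketch, not part of the skill itself):

```python
# Default 3-tier framework as data: tier number -> (name, criterion).
DEFAULT_FRAMEWORK = {
    1: ("Essence", "Redefines the problem; challenges a fundamental assumption"),
    2: ("Engineering", "Does an established thing substantially better"),
    3: ("Patches", "Symptom relief; fixes a known issue within the current approach"),
}

# Usage: look up the label for a tier assigned by the agent.
tier_name, criterion = DEFAULT_FRAMEWORK[1]
```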
## Phase 3: Deep reading

For Tier 1 and the top Tier 2 papers (4-6 papers max), use `get_paper_content` to retrieve the full analysis.

After reading each paper, immediately extract and cache:

- The essential question (what fundamental assumption it challenges)
- The core contribution (one sentence)
- The key result (best headline number)

Discard the raw full-text analysis after extraction to manage the context window. A sketch of this extract-then-discard step follows.
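A sketch, assuming a synchronous `call_tool` helper and a `summarize` callable standing in for the agent's own extraction pass; the `paper_id` argument name is an assumption:

```python
def deep_read(call_tool, arxiv_id, summarize):
    """Fetch full content, keep only the compact fields the report needs."""
    full_text = call_tool("get_paper_content", {"paper_id": arxiv_id})
    cached = {
        "arxiv_id": arxiv_id,
        "essential_question": summarize(full_text, "assumption challenged"),
        "core_contribution": summarize(full_text, "one-sentence contribution"),
        "key_result": summarize(full_text, "best headline number"),
    }
    del full_text  # drop the raw analysis to keep the context window small
    return cached
```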
## Phase 4: Impact analysis

For each paper in the deep reading set, run the paper-impact-analyzer:

```
python path/to/paper-impact-analyzer/scripts/analyze.py {arxiv_id_1} {arxiv_id_2} ...
```

Merge the impact data with the content analysis from Phase 3.
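A sketch of the merge, assuming `analyze.py` emits one JSON object per paper on stdout; the actual output format is whatever the analyzer defines:

```python
import json
import subprocess


def impact_for(script_path, arxiv_ids):
    """Run the analyzer once for all IDs; assumes JSON-lines output."""
    proc = subprocess.run(
        ["python", script_path, *arxiv_ids],
        capture_output=True, text=True, check=True,
    )
    rows = [json.loads(line) for line in proc.stdout.splitlines() if line]
    return {row["arxiv_id"]: row for row in rows}  # assumes an arxiv_id field


def merge_impact(summaries, impact):
    """Attach impact signals to each Phase 3 content summary."""
    return [{**s, **impact.get(s["arxiv_id"], {})} for s in summaries]
```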
## Phase 5: Report generation

Generate a structured Markdown report with the following sections:

```
# {Topic} Paper Survey — {Date Range}
> Survey date: {today}
> Scope: {themes covered}
> Papers screened: {N candidates} → {M selected}
## Classification Framework
{Describe the tier system used}
## Tier 1 (Essence): Redefining the Problem
### Paper 1: {Title}
- **Essential question**: What fundamental assumption does this challenge?
- **Core contribution**: {1 sentence}
- **Key result**: {best number}
- **Impact**: {rating from analyzer} | {venue} | {github stars}
- **Links**: arXiv | GitHub
{... repeat for each Tier 1 paper}
## Tier 2 (Engineering): Doing It Better
| Paper | Contribution | Impact | Links |
|-------|-------------|--------|-------|
{table rows}
## Tier 3 (Patches): Symptom Relief
| Paper | What it fixes | Links |
|-------|--------------|-------|
{table rows}
## Top 3 Recommended Papers
{Ranked list with justification combining content depth + impact signals}
## Trends & Observations
{2-3 paragraphs on emerging patterns}
```
Save the report to `{working_directory}/{topic}-paper-survey-{date}.md`.
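A tiny sketch of the path construction, assuming an ISO date and a lowercase hyphenated topic slug; the skill only fixes the `{topic}-paper-survey-{date}` shape:

```python
from datetime import date
from pathlib import Path


def report_path(working_dir, topic):
    """Build the output path for the survey report."""
    slug = topic.lower().replace(" ", "-")
    return Path(working_dir) / f"{slug}-paper-survey-{date.today():%Y-%m-%d}.md"
```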
## Custom frameworks

Users can override the default 3-tier framework by specifying their own in the keywords file; the skill will use whatever framework the user provides.
## Depth levels

| Level | Searches | Deep reads | Best for |
|---|---|---|---|
| Quick | 4 | 2-3 | Weekly check-in |
| Standard | 6 | 4-6 | Monthly review |
| Thorough | 8-10 | 6-8 | Quarterly survey |
Default: Standard.
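The same levels as a config mapping a sketch could read from, with ranges as (min, max) tuples:

```python
# Depth levels from the table above; counts are (min, max) ranges.
DEPTH_LEVELS = {
    "quick":    {"searches": (4, 4),  "deep_reads": (2, 3)},
    "standard": {"searches": (6, 6),  "deep_reads": (4, 6)},  # default
    "thorough": {"searches": (8, 10), "deep_reads": (6, 8)},
}
```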
## Example prompts

- "Survey the last 3 months of papers in my research areas"
- "Quick survey: what's new in LLM reasoning and agent tool-calling since January?"
- "Thorough literature review on RL training methods for LLMs, classify by innovation tier"