Paper Scout

v0.1.0

Search and summarize research papers in robotics and related fields. Use when asked to find recent papers, digest academic literature, or scout publications...

0 stars · 290 downloads · 3 current · 3 all-time
by wyp (@wyp41)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for wyp41/paper-scout.

Prompt preview: Install & Setup
Install the skill "Paper Scout" (wyp41/paper-scout) from ClawHub.
Skill page: https://clawhub.ai/wyp41/paper-scout
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install wyp41/paper-scout

ClawHub CLI

Package manager switcher

npx clawhub@latest install paper-scout
Security Scan
VirusTotal
Benign
OpenClaw
Benign
medium confidence
Purpose & Capability
Name and description match the instructions: searching CrossRef and (as a fallback) Google Scholar to build Markdown digests. No unrelated credentials, binaries, or config paths are requested, so the requested capabilities line up with the stated purpose.
Instruction Scope
SKILL.md instructs the agent to query CrossRef, deduplicate and filter results, and save Markdown files to ~/Desktop. It also calls for Google Scholar scraping using 'real-time browser automation' for JS-heavy pages; however, it does not specify which automation tooling will be used (e.g., Puppeteer, Selenium) or whether it will access the user's browser/profile. Writing outputs to the user's Desktop is explicit and should be acceptable if expected.
Install Mechanism
No install spec or code files are included (instruction-only). This is low-risk from an install perspective because nothing is written to disk by an installer. Any runtime downloading or automation tooling would be the agent's responsibility, not specified here.
Credentials
The skill declares no required environment variables, credentials, or config paths. That is proportionate to a read-only literature-scouting function. Note: Google Scholar scraping sometimes needs proxies, cookies, or logins in practice; the SKILL.md does not request or justify such secrets.
Persistence & Privilege
always is false and user-invocable is true (normal defaults). The SKILL.md does not request permanent agent-level privileges or changes to other skills or system-wide settings.
Assessment
This skill is coherent for its stated task, but before installing:

  1. Confirm how the agent will perform the Google Scholar step: which browser automation tooling it will use, and whether it will access your local browser/profile or require credentials.
  2. Expect the agent to write Markdown files to your Desktop (~/Desktop/YYYY-MM-DD-academic-digest.md).
  3. Understand that scraping Google Scholar can trigger rate limits or CAPTCHAs and may violate its terms of service; ask whether the implementation uses polite rate-limiting or official APIs where possible.
  4. If you need stronger guarantees, ask the author to specify the required tooling and to make the output path and scraping behavior configurable (or to rely only on CrossRef and other API-based sources).

If you cannot confirm those points, proceed cautiously.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🔭 Clawdis
latest: vk97c4qsnedys59fk2qsjtz0ww582ez9g
290 downloads
0 stars
1 version
Updated 1 mo ago
v0.1.0
MIT-0

SKILL.md - paper_scout


Overview

paper_scout automates searching and summarizing research papers in robotics and related fields. It uses CrossRef as the primary data source, with Google Scholar as the supplementary source for missing fields or broader coverage.


Functionality

  1. Primary Query Engine:

    • CrossRef: Quick and structured search for venues like TRO, IJRR, Science Robotics, etc.
  2. Backup Source:

    • Google Scholar: Scrapes data using real-time browser automation for keywords and specific venues.
  3. Filters:

    • Keywords:

      muscle parameter estimation, reinforcement learning, imitation learning, human intention prediction, human-robot interaction and collaboration.

    • Top venues: TRO, IJRR, RAL, Science Robotics, TMech, CVPR, ICLR, ICCV.
    • Date: Year filtering to prioritize recent publications (e.g., since 2024).
  4. Data Outputs: Structured Markdown reports saved to ~/Desktop/YYYY-MM-DD-academic-digest.md with fields:

    • Title
    • Source venue (e.g., Science Robotics)
    • Year
    • Authors
    • Abstract
    • Major contributions/innovation points
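
The digest fields listed above can be sketched as a small rendering helper. This is a minimal illustration, assuming a flat paper record; the `render_entry` function and the record's key names are hypothetical, not part of the skill:

```python
def render_entry(paper: dict) -> str:
    """Render one paper record as a Markdown digest section.

    The output fields mirror the list above; the input record
    shape is an assumption for illustration.
    """
    authors = ", ".join(paper.get("authors", []))
    return "\n".join([
        f"## {paper['title']}",
        f"- Source venue: {paper['venue']}",
        f"- Year: {paper['year']}",
        f"- Authors: {authors}",
        f"- Abstract: {paper['abstract']}",
        f"- Major contributions: {paper['contributions']}",
    ])
```

A digest file would then simply concatenate one such section per paper.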

Example Configuration

# paper_scout SKILL.yaml
crawl:
  sources:
    - crossref
    - scholar_google
queries:
  custom: []  # User-provided
  defaults:
    - reinforcement learning robotics
    - human intention prediction imitation learning
filters:
  venues: ["TRO", "IJRR", "Science Robotics", "RAL", "CVPR"]
  date: since:2024
output:
  path: ~/Desktop/{{today}}-academic-digest.md
  markdown: true
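
The `output.path` above uses a `{{today}}` placeholder and a `~` prefix. A minimal sketch of expanding that template, assuming an ISO date (YYYY-MM-DD) and standard home-directory expansion (the `expand_output_path` helper is hypothetical):

```python
import os
from datetime import date

def expand_output_path(template: str) -> str:
    """Fill in the {{today}} placeholder with today's ISO date
    and expand a leading ~ to the user's home directory."""
    filled = template.replace("{{today}}", date.today().isoformat())
    return os.path.expanduser(filled)
```

For example, `expand_output_path("~/Desktop/{{today}}-academic-digest.md")` yields an absolute path ending in `-academic-digest.md`.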

Example Usage

  1. Query CrossRef for reinforcement learning robotics papers since 2024.
  2. Supplement missing information by scraping Google Scholar (sorted by date).
  3. Remove duplicates, filter results to top academic venues.
  4. Save a Markdown digest to the desktop.
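
The steps above can be sketched roughly as follows. The CrossRef endpoint and its `query`, `filter=from-pub-date:...`, and `rows` parameters are the public CrossRef REST API; the helper names, the record shape, and the dedup/filter logic are illustrative assumptions, and the Google Scholar supplement step is omitted because the SKILL.md does not specify its tooling:

```python
from urllib.parse import urlencode

# Step 1: build a CrossRef works query with a publication-date filter.
# A real request would be: GET https://api.crossref.org/works?<params>
def crossref_url(query: str, since_year: int = 2024, rows: int = 20) -> str:
    params = urlencode({
        "query": query,
        "filter": f"from-pub-date:{since_year}-01-01",
        "rows": rows,
    })
    return f"https://api.crossref.org/works?{params}"

# Venue whitelist from the example configuration above.
TOP_VENUES = {"TRO", "IJRR", "Science Robotics", "RAL", "CVPR"}

# Step 3: remove duplicates (by DOI, falling back to title) and
# keep only papers from the top venues.
def dedupe_and_filter(papers: list[dict]) -> list[dict]:
    seen, kept = set(), []
    for p in papers:
        key = p.get("doi", "").lower() or p.get("title", "").lower()
        if key and key not in seen and p.get("venue") in TOP_VENUES:
            seen.add(key)
            kept.append(p)
    return kept
```

Step 4 would then render the surviving records to Markdown and write the file to the configured Desktop path.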

Notes

  • Automates cross-source integration with duplicate removal.
  • Extensible for more specialized sources (e.g., IEEE, PubMed).
  • Browser scraping (Google Scholar) requires interactive sessions for JS-heavy pages.
