Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Paper review pro

v1.0.0

A high-precision paper retrieval and review system supporting multi-source search, intelligent filtering, structured summaries, BibTeX export, CCF ranking, and composite scoring.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for alfredliang11/paper-review-pro.

Prompt Preview: Install & Setup
Install the skill "Paper review pro" (alfredliang11/paper-review-pro) from ClawHub.
Skill page: https://clawhub.ai/alfredliang11/paper-review-pro
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install paper-review-pro

ClawHub CLI


npx clawhub@latest install paper-review-pro
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
⚠️ Purpose & Capability
The skill's name and description (multi-source paper retrieval, summarization, BibTeX export, CCF ranking) match the included code and modules (arXiv/Semantic Scholar search, summarizer, bibtex, scoring). However, the code accesses the global OpenClaw config (~/.openclaw/openclaw.json) and several environment variables (OPENCLAW_GATEWAY_TOKEN, DASHSCOPE_API_KEY, DASHSCOPE_MODEL, HF endpoints) that are not declared as required secrets in the skill metadata or SKILL.md. This is disproportionate to the claimed purpose because it reaches into user/global configuration for credentials.
⚠️ Instruction Scope
Runtime instructions tell the agent to run local scripts (config.py, review.py). The code performs web requests (arXiv API + fallback scraping, Semantic Scholar API), calls LLM endpoints (OpenClaw Gateway or Dashscope) and will send prompts and paper text to those endpoints. It also attempts to read ~/.openclaw/openclaw.json for a gateway token and uses environment variables not documented in requires.env. Additionally, review.py sets HF_ENDPOINT and HF_HUB_ENDPOINT at module import time (unconditional assignment), which is out-of-band behavior relative to the SKILL.md instructions.
Install Mechanism
There is no install spec (instruction-only), and while code files are present, nothing in the manifest indicates additional binary downloads, so install risk is lower. However, the skill will execute networked Python code when run (no sandboxing implied).
⚠️ Credentials
The skill metadata declares no required env vars, but the code reads/writes multiple env vars and config locations: it unconditionally sets HF_ENDPOINT and HF_HUB_ENDPOINT, reads OPENCLAW_GATEWAY_TOKEN (and ~/.openclaw/openclaw.json) and may use DASHSCOPE_API_KEY and DASHSCOPE_MODEL. Accessing a global OpenClaw auth token or an API key from the user's home config is sensitive and not justified in SKILL.md; those are effectively undeclared credential accesses.
⚠️ Persistence & Privilege
The always flag is false (good), and the skill does not request permanent platform-level privileges. However, it attempts to read the platform-level OpenClaw config (~/.openclaw/openclaw.json), which may contain gateway auth tokens belonging to the user's environment or other skills. Also, review.py modifies HF-related environment variables for the running process, which can affect other code in the same environment.
What to consider before installing
This skill appears to implement the advertised review/search features, but it quietly reads the global OpenClaw config (~/.openclaw/openclaw.json) and environment variables (possible gateway/API tokens), and sets HF endpoint env vars, without declaring any of them. Before installing or running:

  1. Inspect ~/.openclaw/openclaw.json and remove or rotate any sensitive tokens you don't want used.
  2. Run the skill in an isolated environment (dedicated VM or container) if you are worried about credential exposure or HF endpoint overrides.
  3. If you don't need LLM features, run with --no-llm and/or disable network access to reduce risk.
  4. Review the code paths that call external LLM gateways (expansion.py) and the lines that set HF_ENDPOINT/HF_HUB_ENDPOINT in review.py.
  5. Set only explicit environment variables you control (OPENCLAW_GATEWAY_URL and a token) rather than letting the skill read global config.

If needed, ask the author to declare required env/config in SKILL.md and to avoid reading global or other skills' configs.

Like a lobster shell, security has layers — review code before you run it.

Latest version: vk9711tpdzskz81ka191byfn19x83wfsq
100 downloads · 0 stars · 1 version · Updated 4w ago
v1.0.0 · MIT-0

Paper Review Pro - Paper Review System

📖 Overview

Paper Review Pro is an intelligent paper retrieval and analysis tool for researchers. It searches multiple sources (arXiv and Semantic Scholar), automatically filters for core literature, generates structured summaries, assesses paper authority, and produces a complete research-field analysis report.

Use cases

  • 🔍 Quickly survey the core literature of a new research area
  • 📚 Systematic Literature Review
  • 💡 Explore research directions and spot potential innovations
  • 📊 Track the latest progress in a field (filter by year)
  • 📑 One-click BibTeX export for Zotero and other reference managers

✨ Key Features

| Feature | Description |
| --- | --- |
| Multi-source retrieval | Searches arXiv + Semantic Scholar in parallel, covering preprints and formal publications |
| Intelligent filtering | Combined relevance + authority scoring automatically flags high-value papers |
| CCF ranking | Built-in CCF (China Computer Federation) recommendation list; venues auto-tagged A/B/C |
| Structured summaries | LLM-generated: research question, method, conclusion, novelty |
| Domain expansion | Generates expansion search terms from the Top-K papers to surface related subfields |
| One-click export | Auto-generates a BibTeX file for Zotero/Mendeley import |
| Full report | A research-field analysis report is produced after each run |
| Robustness | Multi-layer error handling and fallbacks keep runs stable |

🚀 Quick Start

Basic search

cd ~/.openclaw/workspace/skills/paper-review-pro
python scripts/config.py
python scripts/review.py --query "LLM reasoning" --retrieve_number 20 --keep_topk 5 --year 2024

Parameters

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| --query | ✓ | – | Search query |
| --retrieve_number | | 10 | Initial number of papers to retrieve |
| --keep_topk | | 3 | Number of core papers to keep |
| --year | | 2025 | Cutoff year (that year and later) |
| --expand_query_number | | 2 | Papers kept per expansion term |

Output

After a run completes you will get:

  1. Top-K core paper list - ranked by composite score, with title, authors, year, abstract, and link
  2. Expanded search results - supplementary hits from the expansion terms
  3. BibTeX file - generated automatically at ~/.openclaw/paper-review-pro/papers/bibtex_{query}_{timestamp}.bib
  4. Search report - a full research-field analysis saved under ~/.openclaw/paper-review-pro/reports/

📦 Features in Detail

1. Multi-source retrieval and deduplication

Sources

  • arXiv - mainly preprints, covering the newest work
  • Semantic Scholar - mainly formal publications, with citation data

Deduplication: papers are deduplicated locally by title and DOI to avoid repeated results.
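The title/DOI rule above can be sketched as follows (a minimal illustration; `dedupe_papers` and the paper dict shape are assumptions, not the skill's actual code):

```python
def dedupe_papers(papers):
    """Drop duplicates, keyed by DOI when present, else by normalized title."""
    seen = set()
    unique = []
    for p in papers:
        # Prefer the DOI as the identity key; fall back to a normalized title.
        key = p.get("doi") or " ".join(p.get("title", "").lower().split())
        if key and key not in seen:
            seen.add(key)
            unique.append(p)
    return unique

papers = [
    {"title": "LLM Reasoning", "doi": "10.1/x"},
    {"title": "llm  reasoning", "doi": "10.1/x"},  # same DOI -> duplicate
    {"title": "Another Paper"},                    # no DOI -> keyed by title
]
print(len(dedupe_papers(papers)))  # 2
```

A real implementation would also normalize punctuation and Unicode in titles before comparing.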

2. Composite scoring system

Scoring formula:

composite_score = relevance × (1 - weight) + authority × weight

| Parameter | Description | Default |
| --- | --- | --- |
| relevance | Match between the query and the title/abstract | – |
| authority | Based on publication status and CCF rank | – |
| weight | Authority weight | 0.3 |

Authority scoring criteria

| Rank | Description | Score |
| --- | --- | --- |
| CCF-A | Top conferences/journals (NeurIPS, ICML, CVPR, ACL, etc.) | 1.0 |
| CCF-B | Excellent conferences/journals | 0.8 |
| CCF-C | Good conferences/journals | 0.6 |
| Published, unranked | Has a venue but no CCF match | 0.5 |
| Preprint | Preprints (arXiv, etc.) | 0.3 |
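Combining the formula with the authority table, the scoring step could look like this (a sketch with illustrative names; not the skill's exact implementation):

```python
# Authority scores, taken from the criteria table above.
AUTHORITY = {"CCF-A": 1.0, "CCF-B": 0.8, "CCF-C": 0.6,
             "published-unranked": 0.5, "preprint": 0.3}

def composite_score(relevance, rank, weight=0.3):
    """composite = relevance * (1 - weight) + authority * weight"""
    authority = AUTHORITY.get(rank, 0.3)  # unknown ranks treated as preprints
    return relevance * (1 - weight) + authority * weight

# A highly relevant preprint vs. a moderately relevant CCF-A paper:
print(round(composite_score(0.9, "preprint"), 3))  # 0.72
print(round(composite_score(0.7, "CCF-A"), 3))     # 0.79
```

With the default weight of 0.3, a CCF-A venue can outrank a noticeably more relevant preprint, which is the intended trade-off.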

3. CCF rank lookup

Database size: 422 venues (275 conferences, 147 journals)

Representative entries

  • A-rank conferences: NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, ACL, EMNLP, CCS, S&P, SIGMOD, VLDB, KDD, etc.
  • A-rank journals: IEEE TPAMI, IJCV, JMLR, IEEE TIFS, TSE, TODS, TKDE, etc.
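At its core the lookup is a normalized dictionary match; a minimal sketch (the tiny `CCF_RANKS` subset here is illustrative, not the skill's 422-venue database):

```python
# Illustrative subset; the real database maps 422 venues to ranks.
CCF_RANKS = {"neurips": "A", "icml": "A", "cvpr": "A",
             "jmlr": "A", "ieee tpami": "A"}

def ccf_rank(venue):
    """Return the CCF rank for a venue, or None if unranked."""
    return CCF_RANKS.get(venue.strip().lower())

print(ccf_rank("NeurIPS"))        # A
print(ccf_rank("Some Workshop"))  # None
```

Normalizing case and whitespace before lookup matters because venue strings from arXiv and Semantic Scholar are inconsistent.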

4. Structured summary generation

An LLM generates a four-part summary:

| Element | Description |
| --- | --- |
| Research question | The core problem the paper addresses |
| Method | The technical approach/algorithm used |
| Conclusion | Main findings/experimental results |
| Novelty | How the work differs from prior art |

5. Domain expansion search

By analyzing the content of the Top-K core papers, the system generates semantically related expansion terms to help you discover:

  • Related subfields
  • Alternative technical approaches
  • Cross-domain applications

6. BibTeX export

Generated automatically; supports one-click import into Zotero/Mendeley.

Importing into Zotero

  1. Open Zotero
  2. File → Import → Import from file
  3. Select the generated .bib file
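Rendering a BibTeX entry is mostly string templating; a minimal sketch (the entry layout and `to_bibtex` name are assumptions, not the skill's exact output format):

```python
def to_bibtex(title, authors, year, venue, key=None):
    """Render a minimal @inproceedings entry (illustrative format)."""
    # Default citation key: last word of the author string + year.
    key = key or f"{authors.split()[-1].lower()}{year}"
    return (
        f"@inproceedings{{{key},\n"
        f"  title     = {{{title}}},\n"
        f"  author    = {{{authors}}},\n"
        f"  year      = {{{year}}},\n"
        f"  booktitle = {{{venue}}}\n"
        f"}}"
    )

print(to_bibtex("Test Paper", "John Doe", 2025, "CVPR"))
```

A production exporter would escape special characters and choose @article vs. @inproceedings based on the venue type.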

⚙️ Configuration & Parameters

Configuration file

Location: ~/.openclaw/paper-review-pro/config.json

{
  "default_n": 10,
  "default_k": 2,
  "min_year": 2025,
  "default_p": 2,
  "authority_weight": 0.3,
  "llm": {
    "enabled": true,
    "provider": "gateway",
    "gateway_url": "http://localhost:14940",
    "gateway_model": "dashscope/qwen3.5-plus"
  }
}
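A loader for this file would typically merge the user's values over built-in defaults (a sketch; `load_config` and the `DEFAULTS` dict are illustrative, and the defaults shown are taken from the JSON above):

```python
import json
from pathlib import Path

DEFAULTS = {"default_n": 10, "default_k": 2, "min_year": 2025,
            "default_p": 2, "authority_weight": 0.3}

def load_config(path="~/.openclaw/paper-review-pro/config.json"):
    """Merge the user's config.json over built-in defaults."""
    cfg = dict(DEFAULTS)
    p = Path(path).expanduser()
    if p.exists():
        cfg.update(json.loads(p.read_text()))
    return cfg
```

This way a partial config.json (say, only `min_year`) still yields a complete configuration.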

Quick configuration command

python scripts/config.py --default_n 20 --default_k 3 --min_year 2024 --authority-weight 0.3

Command-line arguments

Basic arguments

| Parameter | Description |
| --- | --- |
| --query | Search query (required) |
| --retrieve_number | Initial number of papers to retrieve |
| --keep_topk | Number of core papers to keep |
| --year | Cutoff year |
| --expand_query_number | Papers kept per expansion term |

Advanced arguments

| Parameter | Description | Default |
| --- | --- | --- |
| --no-bibtex | Disable BibTeX export | enabled |
| --no-authority | Disable authority scoring | enabled |
| --authority-weight | Authority weight (0.0-1.0) | 0.3 |
| --use-api | Query publication status via the online API | disabled |
| --no-llm | Disable LLM features | enabled |

🛡️ Error Handling & Fallbacks

The system has multiple layers of error handling so it keeps running under a wide range of failures.

1. arXiv retrieval fallback

arXiv API (primary)
    ↓ [failure]
arXiv web scraping (fallback)
    ↓ [failure]
Skip arXiv; use Semantic Scholar only

Scraping safeguards

  • Timeout: 60 seconds
  • Entry size limit: 50 KB per entry
  • Progress output: processing status shown in real time
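The fallback chain above amounts to trying each source in order and swallowing failures; a minimal sketch (the injected callables are illustrative stand-ins for the skill's API and scraping code):

```python
def search_arxiv_with_fallback(query, api_search, scrape_search):
    """Try the arXiv API first, then web scraping, then give up gracefully.
    api_search/scrape_search are injected callables (illustrative)."""
    for source in (api_search, scrape_search):
        try:
            return source(query)
        except Exception:
            continue  # fall through to the next source
    return []  # skip arXiv entirely; caller uses Semantic Scholar only

def failing(_q):
    raise ConnectionError("API down")

def scraped(_q):
    return [{"title": "Scraped result"}]

print(search_arxiv_with_fallback("llm", failing, scraped))
```

In practice each source would also enforce its own timeout (60 s for scraping, per the safeguards above).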

2. LLM fallback

OpenClaw Gateway API (primary)
    ↓ [401/network error]
Dashscope API (backup)
    ↓ [failure]
Rule-based extraction fallback

Fallback behavior

  • Summary generation: the first/last sentences of the source text become the research question/conclusion; method/novelty use templates
  • Expansion terms: noun phrases extracted from titles, combined with common academic patterns

Guarantee: even if the LLM is completely unavailable, the system keeps running without interruption.
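The rule-based summary fallback can be sketched as follows (illustrative; the skill's actual sentence splitting and templates may differ):

```python
def fallback_summary(abstract):
    """Rule-based summary when no LLM is available: first sentence becomes
    the research question, last sentence the conclusion; method and novelty
    fall back to fixed templates."""
    sentences = [s.strip() for s in abstract.split(".") if s.strip()]
    return {
        "research_question": sentences[0] if sentences else "",
        "method": "Not extracted (LLM unavailable)",
        "conclusion": sentences[-1] if sentences else "",
        "novelty": "Not extracted (LLM unavailable)",
    }

s = fallback_summary("We study X. We propose Y. Results improve Z")
print(s["research_question"])  # We study X
print(s["conclusion"])         # Results improve Z
```

Naive period splitting mishandles abbreviations like "e.g.", which is acceptable for a last-resort fallback.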

3. Publication status lookup fallback

Online API lookup (--use-api)
    ↓ [failure/not enabled]
Local database match
    ↓ [no match]
Marked as "unranked"

4. Hang detection

TimeoutMonitor watches every critical step:

  • Timeout threshold: 1200 seconds (20 minutes)
  • Monitored steps: retrieval, filtering, scoring, expansion, export, report generation
  • On timeout: terminate the program and print an error
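A watchdog like this is commonly built on a timer thread; a sketch (the class name matches the README, but this implementation is an assumption, not the skill's code):

```python
import os
import threading

class TimeoutMonitor:
    """Watchdog sketch: abort the process if a step exceeds the threshold."""

    def __init__(self, seconds=1200):
        self.seconds = seconds
        self.timer = None

    def start(self, step):
        """Arm (or re-arm) the watchdog for a named step."""
        self.cancel()
        self.timer = threading.Timer(
            self.seconds,
            lambda: (print(f"Step '{step}' timed out"), os._exit(1)))
        self.timer.daemon = True
        self.timer.start()

    def cancel(self):
        """Disarm the watchdog when the step finishes in time."""
        if self.timer:
            self.timer.cancel()

monitor = TimeoutMonitor(seconds=1200)
monitor.start("retrieval")
# ... run the retrieval step ...
monitor.cancel()  # completed in time; watchdog disarmed
```

Re-arming in `start()` lets one monitor instance cover the whole pipeline, one step at a time.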

5. Network request retries

All external API requests retry automatically:

  • Retries: 3
  • Backoff: exponential (1s, 2s, 4s)
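The retry policy can be sketched as a small wrapper (illustrative; `request_with_retry` is not the skill's actual helper):

```python
import time

def request_with_retry(fetch, retries=3, base_delay=1.0):
    """Retry a network call with exponential backoff: 1s, 2s, 4s by default."""
    for attempt in range(retries + 1):
        try:
            return fetch()
        except Exception:
            if attempt == retries:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(request_with_retry(flaky, base_delay=0.01))  # ok
```

Catching only network-related exception types (rather than bare `Exception`) would be the safer production choice.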

📋 Usage Examples

Example 1: Standard search

python scripts/review.py --query "transformer attention" --retrieve_number 20 --keep_topk 5 --year 2020

Example 2: Content relevance only (ignore authority)

python scripts/review.py --query "novel architecture" --keep_topk 5 --no-authority

Example 3: Authority-first (for systematic reviews)

python scripts/review.py --query "deep learning survey" --keep_topk 10 --authority-weight 0.5

Example 4: Quick search (LLM disabled)

python scripts/review.py --query "quick test" --retrieve_number 5 --no-llm

Example 5: Use the online API for more accurate publication info

python scripts/review.py --query "recent paper" --keep_topk 5 --use-api

🧪 Testing & Verification

Test the CCF ranking module

# Run the full test suite
python scripts/test_publication_status.py

# Test a specific venue
python scripts/test_publication_status.py --title "Paper Title" --venue "IEEE Transactions on Multimedia"

# Show database statistics
python scripts/test_publication_status.py --show-db

Test BibTeX export

python scripts/core/bibtex.py --title "Test Paper" --authors "John Doe" --year 2025 --publication "CVPR" --ccf-rank A

📚 Reference Docs

Detailed module documentation lives in the reference/ directory:

| Document | Contents |
| --- | --- |
| reference/LLM_INTEGRATION.md | LLM integration (summary generation, query expansion) |
| reference/BIBTEX_EXPORT.md | BibTeX export module |
| reference/PUBLICATION_STATUS.md | Publication status and CCF ranking module |
| reference/SCORING_SYSTEM.md | Composite scoring system |
| reference/BUGFIXES.md | Notable fixes |

⚠️ Notes

  1. CCF ranking database: a local database covering common computer-science venues. To extend it, edit the rank dictionary in scripts/core/publication_status.py.

  2. BibTeX file location: generated .bib files are saved under ~/.openclaw/paper-review-pro/papers/, with filenames of the form bibtex_{query}_{timestamp}.bib.

  3. Recommended authority weights:

    • Exploratory research: 0.2-0.3 (relevance first)
    • Systematic reviews: 0.4-0.5 (authority first)
    • Pure content analysis: disable with --no-authority
  4. Online API lookup: the --use-api flag queries publication info via the Semantic Scholar API; it is more accurate but adds roughly 2-5 seconds per paper.

  5. arXiv rate limits: the arXiv API is rate-limited; keep requests at least 3 seconds apart. Web scraping is the fallback, with a 60-second timeout.

  6. LLM configuration: before first use, make sure the OpenClaw Gateway or Dashscope API is configured. If not, the system automatically degrades to the rule-based fallback.


📝 Changelog

See CHANGELOG.md (if present) or the project's release notes.


Version: 2026-03-29
License: MIT
Maintainer: OpenClaw Community
