Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Academic Survey Self Improve

v1.0.0

Automatic generator of high-quality academic surveys. Supports real-time arXiv search, novelty detection, a quality-control loop, and automatic optimization. Generates a 10+ page high-quality survey every hour.

0 stars · 349 downloads · 2 current · 2 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for vixuowis/academic-survey-self-improve.

Prompt Preview: Install & Setup
Install the skill "Academic Survey Self Improve" (vixuowis/academic-survey-self-improve) from ClawHub.
Skill page: https://clawhub.ai/vixuowis/academic-survey-self-improve
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install academic-survey-self-improve

ClawHub CLI


npx clawhub@latest install academic-survey-self-improve
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The code aligns with the stated purpose in many places: multiple modules query arXiv, analyze paper text, assemble LaTeX and compile PDFs. However the improver module depends on an LLMEvaluator (evaluator.py) which plausibly calls an external LLM service; the skill metadata and SKILL.md declare no required environment variables or credentials for LLM APIs. That is an incoherence: an LLM-driven improvement step would normally require API keys or a declared primary credential. Also SKILL.md mentions 'send report' in the workflow but no destination or required credentials are declared.
Instruction Scope
SKILL.md instructs running main.py (including --auto and --quality) and even suggests scheduling hourly cron jobs to auto-generate surveys. The runtime instructions are broad (fully automated, hourly generation, 'send report') and give the skill discretion to search arXiv extensively, write files (TeX, PDFs, topic_history.json), and iterate. The instructions do not document external endpoints for reporting or which credentials are needed for LLM calls — this open-ended autonomy is a scope/visibility concern. The included code does perform network calls to arXiv and file writes; there are subprocess calls to pdflatex. No explicit instructions tell the agent to read unrelated system files, but the SKILL.md's 'send report' step is vague and unaccounted-for in the visible code excerpts.
Install Mechanism
No install spec is present (instruction-only installation); the skill bundle contains Python source files which will be placed in the skill workspace. There are no external binary downloads or archive extracts in the provided manifest. This is the lower-risk install pattern compared with arbitrary remote downloads.
Credentials
The package declares no required environment variables or credentials, yet improver.py imports and calls an LLMEvaluator from evaluator.py. LLM evaluators typically require API keys (e.g., OPENAI_API_KEY) or service endpoints. That missing declaration is disproportionate — either the evaluator uses a local LLM (which should be documented) or it expects secret credentials that are not declared. Additionally, SKILL.md references sending reports but provides no details or required auth; that omission increases risk of unexpected data exfiltration if the code posts results externally.
Persistence & Privilege
The skill is not marked always:true and does not request elevated platform privileges. It writes its own topic_history.json into its output directory (normal) and suggests cron scheduling in examples (user action). There is no evidence it modifies other skills or system-wide agent configuration.
What to consider before installing
This package mostly does what its README says (fetch arXiv, build LaTeX, iterate), but there are gaps you should resolve before installing or scheduling it to run automatically:

1. Inspect evaluator.py and main.py to see whether they call external LLM APIs (look for requests, openai, http.post, socket, or environment reads like OPENAI_API_KEY). If the evaluator needs API keys, the skill should declare that; otherwise it may fail or attempt to use credentials implicitly.
2. Search the codebase for any network POST/PUT calls or hardcoded endpoints (especially anything that would 'send report') and confirm where outputs are sent.
3. Confirm you actually want an hourly cron job that will generate many PDFs and make repeated network requests (rate limits, storage growth, CPU from pdflatex).
4. Run the skill in a sandboxed environment first (isolated user account, limited network) and monitor outbound connections and created files.
5. Verify the claimed provenance (SKILL.md lists GitHub/ClawHub links, but 'Source' is unknown); prefer skills from known authors, or inspect the entire codebase for secrets or obfuscated behavior.

If you want, provide the evaluator.py and main.py contents and I can check them specifically for API calls, credential reads, and outbound endpoints.
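Steps 1 and 2 above can be started with a quick static scan. The following is a hypothetical helper sketch, not part of the skill: the pattern list and the idea of grepping for credential reads and outbound HTTP calls are assumptions about what a useful audit looks like.

```python
import re
from pathlib import Path

# Regexes for the red flags discussed above: credential reads,
# outbound HTTP calls, and hardcoded non-arXiv endpoints.
SUSPICIOUS = {
    "env_read": re.compile(r"os\.environ|getenv\(|OPENAI_API_KEY"),
    "http_call": re.compile(r"requests\.(post|put|get)|urllib\.request|http\.client"),
    "endpoint": re.compile(r"https?://(?!export\.arxiv\.org)[\w./-]+"),
}

def scan_skill(skill_dir: str) -> dict:
    """Return {pattern_name: [(file, line_no, line), ...]} for every hit."""
    hits = {name: [] for name in SUSPICIOUS}
    for path in Path(skill_dir).rglob("*.py"):
        for no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for name, rx in SUSPICIOUS.items():
                if rx.search(line):
                    hits[name].append((path.name, no, line.strip()))
    return hits
```

Run it over the installed skill directory and review any `env_read` or `http_call` hits in evaluator.py and improver.py before enabling cron scheduling.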

Like a lobster shell, security has layers — review code before you run it.

latest: vk970erhzbdd2b7kmdkpg77wdrh82jgd1
349 downloads · 0 stars · 1 version
Updated 10h ago
v1.0.0
MIT-0

Academic Survey Generator v3.0

High-quality academic survey auto-generator: fully automated, from real-time arXiv search to finished PDF.

✨ Core Features

1. Real-Time arXiv Search 🔍

  • Searches the latest papers across multiple AI directions
  • Restricted to papers published in the past 7 days
  • Automatically fetches 50-100 papers
  • Extracts paper metadata (title, authors, abstract)
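The search step above maps naturally onto arXiv's public Atom API; a minimal sketch follows. The query terms and result limit are illustrative, not the skill's actual values, and the parser extracts only title/abstract pairs.

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def build_arxiv_query(terms, max_results=50):
    """Build an export.arxiv.org query URL, newest submissions first."""
    params = {
        "search_query": " OR ".join(f"all:{t}" for t in terms),
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    }
    return "http://export.arxiv.org/api/query?" + urlencode(params)

def parse_feed(atom_xml: str):
    """Extract (title, abstract) metadata pairs from an Atom response."""
    root = ET.fromstring(atom_xml)
    return [
        (e.findtext(f"{ATOM_NS}title", "").strip(),
         e.findtext(f"{ATOM_NS}summary", "").strip())
        for e in root.iter(f"{ATOM_NS}entry")
    ]
```

Fetching the URL and filtering by submission date would complete the 7-day window described above.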

2. Smart Topic Identification 💡

  • Keyword frequency analysis
  • Automatic candidate-topic generation
  • Novelty scoring (0-10)
  • Collision detection (avoids duplicating existing surveys)
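Collision detection against past topics can be sketched with simple keyword overlap. This is an assumption about the mechanism: the Jaccard scoring and the threshold are mine, and the skill's real logic in topic_history.json handling may differ.

```python
def _tokens(topic: str) -> set:
    return set(topic.lower().split())

def novelty_score(candidate: str, history: list[str]) -> float:
    """Score 0-10: 10 = no overlap with any past topic, 0 = exact repeat."""
    if not history:
        return 10.0
    cand = _tokens(candidate)
    # Jaccard similarity against the closest past topic
    worst = max(
        len(cand & _tokens(past)) / len(cand | _tokens(past))
        for past in history
    )
    return round(10.0 * (1.0 - worst), 1)

def is_novel(candidate: str, history: list[str], threshold: float = 6.0) -> bool:
    return novelty_score(candidate, history) >= threshold
```

The history list would be loaded from topic_history.json so that each run avoids topics already surveyed.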

3. High-Quality Content Generation 📝

  • Complete 9-section structure
  • Detailed methodology analysis
  • Comparison tables
  • Taxonomy diagrams (TikZ)
  • 50+ real arXiv citations

4. Quality Control Loop ✅

  • Automatic quality scoring (out of 10)
  • Page-count check (≥10 pages)
  • Citation check (≥50 papers)
  • Section completeness check
  • Iterative optimization (up to 3 rounds)
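The control loop above (score, check, improve, at most 3 rounds) can be sketched like this. The `evaluate` and `improve` callables are stand-ins for evaluator.py and improver.py, whose real interfaces are not shown in this listing.

```python
from typing import Callable

MAX_ITERATIONS = 3
TARGET_SCORE = 7.0

def quality_loop(draft: str,
                 evaluate: Callable[[str], float],
                 improve: Callable[[str], str]) -> tuple[str, float, int]:
    """Improve the draft until evaluate() meets TARGET_SCORE or MAX_ITERATIONS is hit."""
    score = evaluate(draft)
    iterations = 0
    while score < TARGET_SCORE and iterations < MAX_ITERATIONS:
        draft = improve(draft)
        score = evaluate(draft)
        iterations += 1
    return draft, score, iterations
```

Capping the iterations bounds both runtime and (if the evaluator calls an external LLM) API cost per survey.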

5. Automatic Optimization 🚀

  • Expands section content
  • Adds technical detail
  • Adds mathematical formulas
  • Strengthens academic writing

🚀 Quick Start

Basic generation

cd ~/.openclaw/workspace/skills/academic-survey-self-improve
python3 main.py generate "Graph Neural Networks"

Generate from a real-time arXiv search

python3 main.py generate "graph neural networks" --from-arxiv

Smart topic selection

python3 main.py generate --smart

Fully automated (recommended)

python3 main.py generate --auto

High-quality generation (with quality control)

python3 main.py generate --quality

📊 Quality Standards

Metric                 Standard     Notes
Pages                  ≥10 pages    Ensures sufficient content
Citations              ≥50          Real arXiv papers
Section completeness   9 sections   Complete structure
Quality score          ≥7.0/10      Automatic scoring
Novelty                ≥6/10        Avoids collisions with existing surveys
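A draft can be checked against these thresholds mechanically. The sketch below mirrors the table; the metric names and the dict interface are my assumptions, not the skill's actual API.

```python
# Thresholds from the quality-standards table above
STANDARDS = {
    "pages": 10,
    "citations": 50,
    "sections": 9,
    "quality_score": 7.0,
    "novelty": 6.0,
}

def check_standards(metrics: dict) -> list:
    """Return the names of every metric that falls below its threshold."""
    return [name for name, minimum in STANDARDS.items()
            if metrics.get(name, 0) < minimum]
```

An empty return list means the draft passes; anything else would be fed back into the improvement loop.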

📁 File Structure

academic-survey-generator/
├── SKILL.md                    # Skill documentation
├── main.py                     # Main entry point
├── quality_generator.py        # High-quality generator ⭐
├── fully_automated_generator.py # Fully automated generator
├── smart_generator.py          # Smart topic selection
├── arxiv_generator.py          # arXiv search generator
├── generator.py                # Base generator
├── evaluator.py                # Quality evaluation
├── improver.py                 # Content improvement
└── output/                     # Output directory
    ├── *.tex                   # LaTeX source files
    ├── *.pdf                   # Compiled PDFs
    └── topic_history.json      # Topic history (collision avoidance)

🎯 Use Cases

1. Hourly Automatic Generation

Configure a cron task to automatically generate a novel survey every hour.

{
  "id": "hourly-quality-survey",
  "schedule": {"kind": "every", "everyMs": 3600000},
  "payload": {
    "message": "python3 main.py generate --quality"
  }
}

2. Quick Research

Enter a research topic to quickly get a review of the latest literature.

3. Teaching Demos

Demonstrate academic writing conventions and survey structure.

4. Literature Management

Automatically organize the latest arXiv papers.

📈 Quality Control Workflow

Search arXiv (8 min)
    ↓
Identify topic (10 min)
    ↓
Collision detection (10 min)
    ↓
Generate draft (20 min)
    ↓
Quality check (15 min) ──→ below standard ──→ improvement iteration
    ↓                               ↓
    └───────────────────────────────┘
    ↓
Meets standard
    ↓
Compile PDF (7 min)
    ↓
Send report

🔧 Configuration Options

main.py arguments

Argument            Description
generate <topic>    Basic generation
--from-arxiv        Generate from an arXiv search
--smart             Smart selection of the hottest topic
--auto              Fully automated (search + identify + detect + generate)
--quality           High-quality generation (with quality-control loop)
--output <dir>      Specify the output directory
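A CLI surface matching this table can be sketched with argparse. This is a reconstruction from the table alone, not main.py's actual parser, and defaults such as the output directory are assumptions.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="main.py")
    sub = parser.add_subparsers(dest="command", required=True)
    gen = sub.add_parser("generate", help="Generate a survey")
    # Topic is optional because --smart/--auto pick one automatically
    gen.add_argument("topic", nargs="?", help="Research topic")
    gen.add_argument("--from-arxiv", action="store_true", help="Generate from an arXiv search")
    gen.add_argument("--smart", action="store_true", help="Smart selection of the hottest topic")
    gen.add_argument("--auto", action="store_true", help="Fully automated pipeline")
    gen.add_argument("--quality", action="store_true", help="Quality-controlled generation")
    gen.add_argument("--output", metavar="<dir>", default="output", help="Output directory")
    return parser
```

With this shape, `python3 main.py generate --quality` parses cleanly with no topic, matching the Quick Start examples above.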

📝 Output Example

Generation report

Topic: Code for Language
Novelty: 8/10 ⭐
Quality score: 7.1/10 ✅
Pages: 9 pages ✅
Citations: 40 papers ✅

Section structure

  1. Introduction - background, motivation, contributions
  2. Background - historical development, key concepts, technical foundations
  3. Taxonomy - classification framework, relationship analysis
  4. Methodologies - detailed methods, comparative analysis
  5. Applications - application scenarios, domain adaptation
  6. Experiments - experimental setup, benchmarks, results
  7. Challenges - challenges and open problems
  8. Future Directions - future research directions
  9. Conclusion - summary

🆕 Changelog

v3.0.0 (2026-03-09)

  • ✨ Added quality_generator.py high-quality generator
  • ✨ Added quality control loop (automatic scoring, iterative optimization)
  • ✨ Raised quality standards: 10+ pages, 50+ citations
  • ✨ Added --quality flag
  • 🐛 Fixed Python 3.6 compatibility issues
  • 📝 Improved SKILL.md documentation

v2.0.0 (2026-03-09)

  • ✨ Added fully_automated_generator.py for fully automated generation
  • ✨ Added novelty detection and collision avoidance
  • ✨ Added topic history tracking
  • ✨ Added --auto flag

v1.0.0 (2026-03-07)

  • 🎉 Initial release
  • ✨ Basic survey generation
  • ✨ arXiv search integration

📦 Dependencies

  • Python 3.6+
  • LaTeX (pdflatex)
  • TikZ (diagram generation)
  • arXiv API (paper search)
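The pdflatex dependency is invoked via subprocess (the security scan above confirms subprocess calls to pdflatex). A minimal sketch of such a compile step follows; the specific flags and the two-pass run count are my choices, not necessarily the skill's.

```python
import subprocess
from pathlib import Path

def pdflatex_command(tex_file: str, out_dir: str = "output") -> list:
    """Build the pdflatex invocation; nonstopmode avoids hanging on LaTeX errors."""
    return [
        "pdflatex",
        "-interaction=nonstopmode",
        "-halt-on-error",
        f"-output-directory={out_dir}",
        tex_file,
    ]

def compile_pdf(tex_file: str, out_dir: str = "output", runs: int = 2) -> Path:
    """Run pdflatex twice so cross-references and citations resolve."""
    Path(out_dir).mkdir(exist_ok=True)
    for _ in range(runs):
        subprocess.run(pdflatex_command(tex_file, out_dir), check=True)
    return Path(out_dir) / (Path(tex_file).stem + ".pdf")
```

Repeated hourly runs of this step are the CPU and storage cost flagged in the pre-install checklist above.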

📄 License

MIT License

👤 Author

Redigg AI Research
