Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Adaptive Depth Research v6.0 Universal

v6.0.1

Perform adaptive multi-source research with configurable domains, auto PDF retrieval, universal extraction, and generate layered reports for decision, valida...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for xueylee-dotcom/deep-research-v60.

Prompt preview: Install & Setup
Install the skill "Adaptive Depth Research v6.0 Universal" (xueylee-dotcom/deep-research-v60) from ClawHub.
Skill page: https://clawhub.ai/xueylee-dotcom/deep-research-v60
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install deep-research-v60

ClawHub CLI


npx clawhub@latest install deep-research-v60
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description promise (multi-source retrieval, PDF download, universal extraction, layered reports) matches the provided scripts and templates: arXiv/PubMed/PMC retrieval, PDF download + parsing, card/report generation and synthesis. No unrelated cloud credentials or services are requested.
Instruction Scope
SKILL.md and the scripts instruct the agent to fetch external resources (arXiv, PubMed, PMC, arbitrary web URLs) and download PDFs, which matches the stated purpose. However, the scripts perform arbitrary HTTP requests and write intermediate files (e.g., /tmp/{card}_extracted.json and the created research directories). They assume the agent can make outbound network connections and write to disk, and the Python code follows URLs taken from topics, search results, or user arguments. If an attacker supplies or influences input URLs, the skill could therefore be induced to fetch arbitrary endpoints, including sensitive internal ones (SSRF risk). SKILL.md does not explicitly warn about these network and file-system effects.
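The input-driven fetching concern can be reduced by validating URLs before any request is made. A minimal sketch, not part of the skill itself; the allowlisted hosts are illustrative:

```python
from urllib.parse import urlparse

# Hosts the research scripts legitimately need; anything else is refused.
ALLOWED_HOSTS = {"arxiv.org", "export.arxiv.org",
                 "eutils.ncbi.nlm.nih.gov", "www.ncbi.nlm.nih.gov"}

def is_safe_url(url: str) -> bool:
    """Accept only https URLs whose host is on the allowlist."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

print(is_safe_url("https://arxiv.org/pdf/2106.00001"))        # True
print(is_safe_url("http://169.254.169.254/latest/meta-data"))  # False: internal metadata endpoint
```

Wrapping every fetch in a check like this (or running the skill in a network-restricted sandbox) limits what attacker-influenced inputs can reach.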
Install Mechanism
This is an instruction-only skill (no install spec), so nothing is downloaded automatically at install time. However, the code requires Python 3 and third-party libraries (requests, plus pdfplumber or pdftotext) that are not declared in the skill metadata or SKILL.md. Users must install these dependencies themselves; the omission raises the risk of running the scripts with missing or incorrect libraries, or of installing arbitrary packages to satisfy them.
Credentials
The skill declares no required environment variables or credentials, which is appropriate. It nevertheless performs network calls to public APIs and writes files to disk (/tmp and the skill workspace). No secrets are requested, and this is proportionate to the stated purpose. Two caveats: NCBI/Entrez requests benefit from an API key at high volume, but the skill neither requests nor documents one, and says nothing about rate limiting. Also, a hardcoded invocation in run-research.sh references /root/.openclaw/workspace/skills/deep-research/scripts/check-sourcing.sh (an absolute root path), which is atypical and may fail or indicate an assumption about the runtime layout.
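On the API-key point: NCBI E-utilities accept an `api_key` query parameter that raises the per-second request limit. A hedged sketch of how a wrapper could attach one from the environment — the skill's actual request code is not shown in this listing, and `NCBI_API_KEY` is an illustrative variable name:

```python
import os
from urllib.parse import urlencode

EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(term: str, db: str = "pubmed") -> str:
    """Build an E-utilities search URL, adding api_key if NCBI_API_KEY is set."""
    params = {"db": db, "term": term, "retmode": "json"}
    api_key = os.environ.get("NCBI_API_KEY")
    if api_key:
        params["api_key"] = api_key
    return EUTILS_BASE + "?" + urlencode(params)

print(esearch_url("telemedicine cost savings"))
```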
Persistence & Privilege
always:false and default autonomous invocation settings — normal. The skill does not request permanent platform-wide privileges or attempt to modify other skills. It writes outputs under created research directories and temporary /tmp files only; no evidence of modifying other skill configs or system-wide settings.
What to consider before installing

- Function vs. requirements: The skill does what it says (download/search papers, parse PDFs, build reports), but the package metadata does NOT list required runtime dependencies (python3, requests, and either pdfplumber or pdftotext). Install these from known sources before running.
- Network and file I/O: Scripts make outbound HTTP(S) calls to arXiv, NCBI/PMC, PubMed eutils, and arbitrary URLs you pass. They download PDFs and write to /tmp and a research/ workspace. Only run in an environment where outbound network requests and temporary file writes are acceptable (use a sandbox or isolated VM for initial testing).
- Input-driven fetching (SSRF risk): Because URLs and search terms may lead the tool to fetch arbitrary endpoints, do not run untrusted topics/URLs. If an attacker can control inputs, the tool could be induced to access internal services or fetch malicious content. Validate inputs and run in a network-restricted environment if you need to limit that risk.
- Missing dependency and doc gaps: The skill should document its Python package requirements and any rate-limit/API-key expectations for Entrez/eutils. Consider adding a requirements.txt or Dockerfile and clarifying expected runtime paths.
- Hardcoded path: run-research.sh calls a check-sourcing script via an absolute path (/root/.openclaw/...), which is unusual and may fail or indicate assumptions about the host environment. Inspect it and, if needed, point it at the skill-relative script (scripts/check-sourcing.sh) before running.
- Legal/compliance: The skill downloads PDFs and may attempt paywalled content; ensure you have the rights to fetch and store copyrighted material and that you comply with third-party terms of use.

Practical steps before running:
1. Review the scripts locally and run them in an isolated sandbox/VM.
2. Install dependencies from trusted package sources (pip install requests plus pdfplumber or pdftotext). Prefer pinning versions.
3. Replace the absolute /root/.openclaw path with the relative skill path if necessary.
4. Limit network access if you want to prevent the skill from reaching internal endpoints.
5. If you expect heavy Entrez usage, add an NCBI API key and rate-limit handling.

Overall: the skill appears coherent with its research purpose, but it has implementation omissions and a few brittle assumptions that warrant manual review and sandboxed testing before use.
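For the dependency-pinning step, a pinned requirements file might look like the following; the version numbers are illustrative placeholders, not vetted pins:

```text
# requirements.txt (illustrative pins; verify current versions before use)
requests==2.31.0
pdfplumber==0.11.0
```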

Like a lobster shell, security has layers — review code before you run it.

Tags: adaptive, latest, research, universal, v6
185 downloads · 0 stars · 2 versions
Updated 1h ago · v6.0.1 · MIT-0

Skill: Adaptive Depth Research v6.0 Universal

Version: 6.0.0 | Description: domain-agnostic | config-driven | source-adaptive | three-tier output


🎯 Core Design Principles

  1. Source-adaptive: arXiv/PMC sources get automatic PDF download; paywalled journals fall back to abstract mode
  2. Domain-agnostic: extraction logic is driven by configuration, not domain-specific terminology
  3. Layered output: executive summary + validation checklist + full report, so each audience gets what it needs
  4. Configure once, reuse across all domains

📦 Architecture

[Config layer]
config/research-config.yaml  # defines domain, key metrics, source priority

      ↓

[Retrieval layer] (automatic routing)
├─ arXiv → download PDF → full-text parsing → high-confidence extraction
├─ PMC → download PDF → full-text parsing → high-confidence extraction
├─ PubMed → abstract only → directional extraction + "needs verification" flag
└─ Web → page scraping → key-conclusion extraction

      ↓

[Extraction layer] (universal prompt)
prompts/extract-universal.txt  # independent of domain terminology

      ↓

[Output layer] (three-tier reports)
├─ reports/executive-summary.md  # for decision-makers, ≤1 page
├─ reports/validation-checklist.md  # for implementers, actionable
└─ reports/full-report.md  # for audit, fully traceable

🚀 Trigger Commands

# Full research pipeline
bash scripts/run-research.sh "<topic>" --domain "<domain>"

# Examples
bash scripts/run-research.sh "transformer efficiency" --domain "machine learning"
bash scripts/run-research.sh "telemedicine cost savings" --domain "healthcare"

📁 File Structure

skills/deep-research/
├── SKILL.md
├── config/
│   └── research-config.yaml      # domain configuration
├── prompts/
│   ├── extract-universal.txt     # universal extraction prompt
│   ├── cluster-cards.txt         # card clustering
│   └── write-brief.txt           # topic brief
├── templates/
│   ├── executive-summary.md      # executive summary template
│   ├── validation-checklist.md   # validation checklist template
│   └── full-report.md            # full report template
└── scripts/
    ├── run-research.sh           # full research pipeline
    ├── fetch-and-extract.sh      # auto-routed extraction
    ├── extract-from-pdf.py       # PDF parsing
    └── check-sourcing.sh         # sourcing verification

🔧 Configuration

Changing the research domain

Edit config/research-config.yaml:

research_domain: "your_domain_here"

key_metrics:
  performance:
    - accuracy
    - AUC
    - F1
  # add your own metrics...
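For example, switching to the healthcare domain used in the trigger examples could look like this (a sketch; the metric names below are illustrative, and the full schema of research-config.yaml is not shown here):

```yaml
research_domain: "healthcare"

key_metrics:
  outcomes:
    - cost_savings
    - readmission_rate
  # add further metric groups as needed...
```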

📊 The Three Output Tiers

1. Executive summary (executive-summary.md)

  • Audience: decision-makers
  • Length: ≤1 page
  • Contents
    • Core conclusions (verified vs. pending verification)
    • Actions that can be taken immediately
    • Actions that require verification first

2. Validation checklist (validation-checklist.md)

  • Audience: implementers
  • Format: action-oriented
  • Contents
    • Summary table of missing metrics
    • Concrete verification paths
    • General verification methods

3. Full report (full-report.md)

  • Audience: audit/archive
  • Contents
    • Methodology notes
    • Verified conclusions + evidence
    • Leads pending verification
    • Strategic recommendations (short/mid/long term)
    • Complete card index

🎯 Execution Flow

Step 1: Retrieval

# Automatically search arXiv + PubMed
bash scripts/run-research.sh "<topic>"

Step 2: Extraction

# Auto-routed extraction
bash scripts/fetch-and-extract.sh <source_url>

# Or extract manually
python3 scripts/extract-from-pdf.py card-001 "<pdf_url>"

Step 3: Verification

# Sourcing check
bash scripts/check-sourcing.sh reports/full-report.md sources/
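check-sourcing.sh itself is not shown in this listing; a minimal sketch of the traceability rule it implies is below. Every card ID cited in the report must have a matching file under sources/. Both the card-NNN ID pattern and the <card-id>_<description> filename convention are assumptions for illustration:

```python
import re
from pathlib import Path

def unsourced_cards(report_path: str, sources_dir: str) -> list[str]:
    """Return card IDs cited in the report with no matching file in sources/."""
    text = Path(report_path).read_text(encoding="utf-8")
    cited = set(re.findall(r"card-\d{3}", text))
    # Assumed convention: source files are named <card-id>_<description>.<ext>
    available = {p.stem.split("_")[0] for p in Path(sources_dir).iterdir()}
    return sorted(cited - available)
```

An empty result from such a check would correspond to the "every data point traceable to a card" quality gate below.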

Step 4: Report generation

The three-tier reports are generated automatically in the reports/ directory.


✅ Quality Gates

  1. Data-completeness labels: high/medium/low
  2. Missing-metric lists: explicitly enumerated for every card
  3. Concrete verification paths: actionable, not vague suggestions
  4. Sourcing verification: every data point traceable to a card

📈 v6.0 vs. Earlier Versions

| Dimension  | v5.x                          | v6.0 Universal                  |
|------------|-------------------------------|---------------------------------|
| Domain     | healthcare-specific           | domain-agnostic, config-driven  |
| Sources    | PubMed only                   | adaptive routing                |
| Extraction | relies on medical terminology | universal prompt                |
| Output     | single report                 | three-tier output               |
| Config     | hardcoded                     | YAML file                       |

💡 What the User Does

  1. Edit the config: change research_domain and key_metrics
  2. Run the command: bash scripts/run-research.sh "<topic>"
  3. Read the summary: grasp the core conclusions in 5 minutes
  4. Work the checklist: fill in the key data in about 30 minutes
  5. Advance the decision: act on the verification results

🔑 Core Philosophy

Make the tool adapt to the person, not the person to the tool.
You focus on domain knowledge and business decisions; the tool handles broad scanning, structured organization, and honest labeling.


Skill version: 6.0.0 Universal | Last updated: 2026-03-19
