Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Deep Research Pro v5.0.1

Performs deep research using a three-stage process: data extraction, thematic insight briefs with contradiction analysis, and narrative-driven strategic repo...

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for xueylee-dotcom/deep-research-v50.

Prompt preview (Install & Setup):
Install the skill "Deep Research Pro v5.0.1" (xueylee-dotcom/deep-research-v50) from ClawHub.
Skill page: https://clawhub.ai/xueylee-dotcom/deep-research-v50
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install deep-research-v50

ClawHub CLI

npx clawhub@latest install deep-research-v50
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description (deep research + extraction/quality gating) aligns with the included scripts and templates. However, SKILL.md and the run examples reference the 'PubMed API' and a '<pmid>' argument, while scripts/extract-pmc.py constructs a PMC URL (it expects a PMC ID): a documentation/expectation mismatch. All required capabilities (PDF/PMC extraction, quality scoring, synthesis) are present and consistent with the stated purpose; no unrelated credentials or binaries are requested.
Instruction Scope
Runtime instructions call local scripts that (a) fetch remote resources (extract-from-pdf.py downloads arbitrary PDF URLs provided by the caller) and (b) read and grep report/source files. The scripts write extracted JSON to /tmp and create reports under the supplied research directories. The agent instructions do not ask for unrelated secrets, but the ability to download arbitrary URLs can be abused (SSRF, or fetching internal endpoints) if untrusted inputs are used. Also, SKILL.md's example usage ('<pmid>') may mislead users about the correct input (PMID vs. PMCID).
Install Mechanism
This is instruction-plus-script (no install spec). No remote installers, package downloads, or archive extraction are performed by the skill itself. The code uses standard Python libraries and requests/pdf parsers if installed. Lowest install risk from package distribution perspective.
Credentials
Skill requests no environment variables, no credentials, and no config paths. The scripts only perform network requests to source sites (NCBI/any provided PDF URLs). No unrelated service tokens are requested. This is proportionate to the stated research purpose.
Persistence & Privilege
The skill is not always:true and does not request persistent privileges. One implementation detail: synthesize.sh invokes the check script via an absolute path (/root/.openclaw/.../scripts/check-sourcing.sh). That is odd but not inherently privileged — it may fail on some hosts or indicate assumptions about runtime layout. No code attempts to modify other skills or system-wide config.
What to consider before installing

What to check before installing or running:

- Input handling: The PDF extractor will download any URL you pass to it. Treat it as a network-capable program: do not pass untrusted or internal-network URLs (SSRF risk). Run the scripts in a sandboxed environment with network egress controls if possible.
- Documentation mismatch: SKILL.md shows usage with '<pmid>'/PubMed, but scripts/extract-pmc.py builds a PMC URL (it expects a PMC ID). Confirm which identifier to pass and test with benign, known PMCID values first.
- Hard-coded paths: synthesize.sh calls the check script via a /root/.openclaw/... absolute path. That may fail or, if the environment mirrors that path, read files from unexpected locations. Inspect and, if needed, modify the script to use relative paths inside the research directory before running.
- Network targets: extract-pmc.py requests NCBI/PMC (expected) and extract-from-pdf.py fetches arbitrary URLs (expected for PDFs). Ensure your runtime allows only the external hosts you trust; consider disabling outbound network access or restricting DNS if you are uncertain.
- Sanity-check outputs: the scripts write temporary JSON to /tmp and generate reports. Review those outputs and run check-sourcing.sh manually to confirm it only accesses the expected 'sources' files.
- Run first in isolation: because the skill performs network I/O and file reads/writes, test it inside a container or VM with limited network access and no sensitive files mounted.

If you need higher assurance, ask the skill author to: (1) fix the PMID/PMCID documentation mismatch, (2) avoid hard-coded absolute paths, and (3) add explicit input validation/allowlisting for PDF URLs.
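The allowlisting recommendation above could be prototyped with a small guard in front of any URL-fetching script. This is a minimal sketch, not part of the skill; the host list and function name are illustrative assumptions:

```python
from urllib.parse import urlparse

# Example allowlist; replace with the hosts you actually trust.
ALLOWED_HOSTS = {"www.ncbi.nlm.nih.gov", "pmc.ncbi.nlm.nih.gov"}

def is_safe_pdf_url(url: str) -> bool:
    """Reject non-HTTPS schemes and any host outside the allowlist."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    return parsed.hostname in ALLOWED_HOSTS
```

Checking inputs this way before handing them to the extractor blocks the plain SSRF cases (internal IPs, metadata endpoints, plain-HTTP redirect targets), though it is no substitute for running with restricted network egress.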


Tags: deep · insight · latest · research · v5
198 downloads · 0 stars · 2 versions
Updated 22h ago
v5.0.1 · MIT-0

Skill: Deep Research Pro (v5.0 - Insight Engine)

Version: 5.0.1 | Description: a true deep-research skill - three-stage synthesis + stop on failure

Core Principle

Depth is not "writing a lot"; it is "every line of data is traceable to its source."


🔴 v5.0.1 Mandatory Rules (new)

Rule 1: Extraction failures must raise an explicit error

# If the extracted data does not meet the minimum requirements, output:
{
  "error": "extraction failed",
  "reason": "sample size missing / primary outcome missing / verbatim quote under 30 characters",
  "suggestion": "skip this source or review manually"
}
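Rule 1 could be enforced with a validator along these lines. This is a sketch only; the field names `sample_size`, `primary_outcome`, and `quote` are my assumptions about the card schema, not something the skill defines:

```python
MIN_QUOTE_CHARS = 30  # per Rule 1: verbatim quotes under 30 characters fail

def validate_extraction(card: dict) -> dict:
    """Return the card unchanged if complete, else the Rule 1 error object."""
    reasons = []
    if not card.get("sample_size"):
        reasons.append("sample size missing")
    if not card.get("primary_outcome"):
        reasons.append("primary outcome missing")
    if len(card.get("quote", "")) < MIN_QUOTE_CHARS:
        reasons.append("verbatim quote under 30 characters")
    if reasons:
        return {
            "error": "extraction failed",
            "reason": " / ".join(reasons),
            "suggestion": "skip this source or review manually",
        }
    return card
```

Collecting all failure reasons before returning, rather than stopping at the first, matches the multi-part `reason` string the rule shows.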

Rule 2: Quality scores must be validated against content

Quality scoring logic:
- Sample size + primary outcome + verbatim quote of ≥30 characters → 8.0-9.0
- 2 of the 3 present → 7.0-7.5
- 1 or fewer, or everything reads "see original text" → mark as "pending verification", do not score
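The scoring logic above can be stated as a small function. A minimal sketch, assuming the same hypothetical card fields as elsewhere and returning the bottom of each band (how to place a card within a band is left to the grader):

```python
def quality_score(card: dict):
    """Score per Rule 2: count how many of the three required items are present."""
    has_quote = len(card.get("quote", "")) >= 30
    hits = sum([bool(card.get("sample_size")),
                bool(card.get("primary_outcome")),
                has_quote])
    if hits == 3:
        return 8.0   # 8.0-9.0 band
    if hits == 2:
        return 7.0   # 7.0-7.5 band
    return None      # "pending verification": not scored
```

Returning `None` instead of a low number keeps unverified cards from silently dragging down (or inflating) report-level averages, which is the failure mode v5.0.1 is reacting to.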

Rule 3: Reports must separate "verified" from "pending verification"

## Verified conclusions (based on core papers)

### Conclusion 1: LSTM reaches 0.87 prediction accuracy in the ICU setting
- Source: card-002 (PMC11110807)
- Evidence: sample size 1,250, 95% CI 0.82-0.91, p<0.001
- Verbatim quote: "The LSTM model achieved..." (Results, p.5)

## Pending leads (based on metadata)

### Lead 1: Telemedicine may reduce costs
- Source: card-001 (PubMed abstract)
- Status: ⚠️ needs manual verification against the full text

Execution Flow

Step 1: Retrieval and extraction

# Fetch structured data via the PubMed API
python3 scripts/extract-pmc.py <pmid>

# If an error is returned, skip this source
# If the data is incomplete, mark it "pending verification"
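The skip/mark decisions in Step 1 could be wrapped in a small triage helper. This is a sketch: the output schema of scripts/extract-pmc.py is assumed (an `error` key on failure, card fields on success), not verified against the actual script:

```python
import json

def triage_extraction(raw_output: str) -> str:
    """Classify extractor output: 'skip', 'pending', or 'ok'."""
    data = json.loads(raw_output)
    if "error" in data:
        return "skip"        # Step 1: skip the source on an explicit error
    required = ("sample_size", "primary_outcome", "quote")
    if not all(data.get(k) for k in required):
        return "pending"     # incomplete data: mark "pending verification"
    return "ok"
```

A driver loop would then only promote `"ok"` results to Step 2 card generation, route `"pending"` results into the pending-leads section, and log `"skip"` decisions for the sourcing audit.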

Step 2: Card generation (mandatory validation)

---
source_id: card-xxx
status: verified | pending | failed
quality_score: 8.5 | N/A
---

## 1. Core data extraction

| Metric | Value | Verification status |
|--------|-------|---------------------|
| Sample size | 9,080 | ✅ extracted |
| Primary outcome | not extracted | ⚠️ pending |
| Verbatim quote | "..." (30+ chars) | ✅ extracted |

## 2. Quality notes

- Data completeness: 2/3
- Recommendation: verify the primary outcome against the full text

Step 3: Report generation (explicit separation)

Forbidden: mixing "verified" and "pending" data

Requirements

  • Verified conclusions: in their own section
  • Pending leads: in their own section, with a warning marker

Quality gates

  1. Card count: ≥5 cards with complete data
  2. Source verification: 100% pass
  3. Explicit separation: verified vs. pending
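The three gates could be checked mechanically before report generation. A sketch under assumptions: the `status` and `source_id` fields follow the card frontmatter shown in Step 2, and my reading of the gate semantics may differ from the author's:

```python
def passes_quality_gate(cards: list) -> bool:
    """Apply the three quality gates to a batch of research cards."""
    verified = [c for c in cards if c.get("status") == "verified"]
    # Gate 1: at least 5 cards with complete data
    if len(verified) < 5:
        return False
    # Gate 2: 100% traceability - every card names its source
    if not all(c.get("source_id") for c in cards):
        return False
    # Gate 3: every card carries an explicit status label, so the report
    # can separate verified conclusions from pending leads
    return all(c.get("status") in {"verified", "pending", "failed"}
               for c in cards)
```

Failing the gate should stop report generation entirely (the "stop on failure" principle), rather than producing a report padded with unverifiable material.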

v5.0.1 vs v5.0 comparison

| Dimension | v5.0 | v5.0.1 |
|-----------|------|--------|
| Extraction failure | silently fills in "see original text" | explicit error |
| Quality scoring | inflated (8.5 despite hollow content) | must validate content |
| Report generation | mixes data | explicitly separates verified vs. pending |

Skill version: 5.0.1 | Last updated: 2026-03-19
