Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

OpenAI Deep Research Skill

v0.1.0

Execute multi-step deep research with the OpenAI Responses API, including question decomposition, evidence gathering with web search, contradiction tracking,...

by Grus (@guanglechen)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for guanglechen/openai-deep-research-skill.

Prompt Preview: Install & Setup
Install the skill "OpenAI Deep Research Skill" (guanglechen/openai-deep-research-skill) from ClawHub.
Skill page: https://clawhub.ai/guanglechen/openai-deep-research-skill
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install openai-deep-research-skill

ClawHub CLI


npx clawhub@latest install openai-deep-research-skill
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious (medium confidence)
Purpose & Capability
The name/description (deep research using the OpenAI Responses API) matches the included script and README: the tool decomposes questions, gathers evidence (optionally via a web-search tool) and writes audit artifacts. However, the registry metadata claims there are no required environment variables or primary credential while the SKILL.md and scripts explicitly require OPENAI_API_KEY (and optionally OPENAI_BASE_URL). This discrepancy is a proportionality/information-gap issue.
Instruction Scope
SKILL.md and the Python script restrict activity to research workflow: planning, calling the OpenAI Responses API, optional web-search tool usage, and writing local artifact files under outputs/. Instructions do not direct reading unrelated system files or harvesting environment variables beyond the API key/base URL. The workflow also provides a dry-run mode to avoid API calls.
Install Mechanism
There is no remote installer; the SKILL.md tells the user to pip install from the bundled scripts/requirements.txt which only lists the official openai package. No remote downloads or executables are fetched, and included code runs locally. This is low install risk.
Credentials
The tool legitimately needs an OpenAI API key, but the registry metadata's failure to list OPENAI_API_KEY (and OPENAI_BASE_URL) is inconsistent and could mislead users about what secrets the skill will use. The script allows pointing the client at an arbitrary OPENAI_BASE_URL (a user-controlled gateway). That feature is reasonable for some deployments but also increases risk: a malicious base URL could exfiltrate your prompts and responses to an attacker. The publisher should declare OPENAI_API_KEY as required in the metadata, and users should be cautious about where they point the base URL.
Persistence & Privilege
The always flag is false; the skill does not request forced inclusion or write global agent configuration. It writes run artifacts into an outputs/<timestamp>-<slug> directory under the working directory, which is expected for this purpose and is not an escalation of privilege.
What to consider before installing
This skill is mostly coherent for performing cited research with the OpenAI Responses API, but take these precautions before installing or running it:

  • The SKILL.md and code require an OpenAI API key (OPENAI_API_KEY) even though the registry metadata did not declare it; do not supply sensitive org-wide keys unless you trust and have reviewed the code.
  • The script allows setting OPENAI_BASE_URL (a custom gateway). Only point this to endpoints you control or trust; a malicious gateway could capture prompts/responses or API keys if you pass them explicitly.
  • Installation is local via pip installing the bundled requirements (the openai package). Run in an isolated environment (virtualenv/container) if you want to limit system exposure.
  • The tool writes output files to outputs/...; review produced artifacts for sensitive content before sharing them.
  • If you need extra safety, run with --dry-run or --disable-web-search first, and inspect scripts/deep_research.py to confirm it behaves as you expect.

If you want higher assurance, ask the publisher to update the registry metadata to declare OPENAI_API_KEY and OPENAI_BASE_URL, and to provide a short security note explaining whether the code ever transmits data to endpoints other than the Responses API and the optional web-search tool.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97cf63fd7fmygtaxt1q5mgvvn83g3vr
97 downloads · 0 stars · 1 version
Updated 1 month ago
v0.1.0 · MIT-0

OpenAI Deep Research

Overview

Run a deterministic research workflow that separates planning, evidence collection, and report synthesis. Generate reusable research artifacts under an output directory for auditability and iteration.

Workflow

  1. Define research scope.
  2. Run the script to generate plan, findings, and report artifacts.
  3. Evaluate report quality with the checklist.
  4. Rerun with adjusted depth/model/tool settings when gaps remain.

Quick Start

Install dependencies:

cd openai-deep-research-skill
python3 -m pip install -r scripts/requirements.txt
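The security scan above recommends limiting system exposure by installing into an isolated environment. A minimal sketch using Python's built-in venv module (the .venv directory name is just a convention):

```shell
# Create and activate an isolated environment so the skill's dependencies
# stay out of the system Python interpreter.
python3 -m venv .venv
. .venv/bin/activate
python3 -m pip --version   # pip now resolves inside .venv
```

Run the pip install command above from inside the activated environment; deactivate (or delete .venv) to discard everything the skill installed.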

Run a real research job:

python3 scripts/deep_research.py "中国AI Agent市场2026年商业化路径" \
  --language zh-CN \
  --depth 6 \
  --research-depth deep \
  --max-total-output-tokens 20000 \
  --parallel 3

Run a local dry-run without API calls:

python3 scripts/deep_research.py "sample topic" --dry-run

Runtime Inputs

Set OPENAI_API_KEY before running real jobs. Use OPENAI_BASE_URL only when routing through a compatible gateway.
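For example, in the shell before launching a job (the key value below is a placeholder, and the gateway URL is illustrative, not a real endpoint):

```shell
# Required for real runs; replace the placeholder with your own key.
export OPENAI_API_KEY="sk-your-key-here"
# Optional: only uncomment when routing through a gateway you trust,
# since a malicious gateway could capture prompts and responses.
# export OPENAI_BASE_URL="https://gateway.example.com/v1"
echo "OPENAI_API_KEY is ${OPENAI_API_KEY:+set}"
```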

Tune key flags:

  • --depth: Control breadth of decomposition (2-12).
  • --research-depth: Control per-question evidence depth (shallow|standard|deep).
  • --parallel: Control concurrent evidence runs (1-8).
  • --planner-model: Choose planning model.
  • --research-model: Choose evidence model.
  • --writer-model: Choose synthesis model.
  • --planner-max-output-tokens: Cap planner response size.
  • --research-max-output-tokens: Cap each sub-question research response size.
  • --writer-max-output-tokens: Cap final report synthesis response size.
  • --max-total-output-tokens: Hard limit for estimated run output tokens.
  • --disable-web-search: Disable web tool for internal-data-only runs.
  • --web-tool-type: Override tool type when endpoint uses a non-default web-search tool name.

Artifact Contract

Write one run directory per execution: outputs/<timestamp>-<topic-slug>/. Produce these files:

  • run_meta.json: runtime parameters and metadata.
  • plan.json: normalized sub-question plan.
  • plan_raw.txt: raw planner model output.
  • findings.json: per-question evidence summaries.
  • research_raw.json: raw responses per sub-question.
  • report.md: final cited report.
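The artifact files above can be audited programmatically after a run. A sketch that loads run_meta.json and findings.json and summarizes them; the JSON field names (topic, one entry per sub-question) are assumptions about the schema, not documented guarantees:

```python
import json
import tempfile
from pathlib import Path

def summarize_run(run_dir: Path) -> dict:
    """Load run_meta.json and findings.json for a quick post-run audit."""
    meta = json.loads((run_dir / "run_meta.json").read_text(encoding="utf-8"))
    findings = json.loads((run_dir / "findings.json").read_text(encoding="utf-8"))
    return {"topic": meta.get("topic"), "num_findings": len(findings)}

# Demo against a synthetic run directory; real runs live under
# outputs/<timestamp>-<topic-slug>/ per the artifact contract.
demo = Path(tempfile.mkdtemp()) / "20260101-120000-sample-topic"
demo.mkdir(parents=True)
(demo / "run_meta.json").write_text(json.dumps({"topic": "sample topic"}))
(demo / "findings.json").write_text(json.dumps([{"question": "Q1"}]))
print(summarize_run(demo))  # {'topic': 'sample topic', 'num_findings': 1}
```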

Quality Gate

Apply all checks before accepting report.md:

  1. Verify each sub-question has explicit evidence or explicit gap notes.
  2. Verify source links are absolute URLs and point to relevant content.
  3. Verify contradictory evidence is surfaced in Contradictions and Uncertainty.
  4. Verify recommendation statements are specific and actionable.
  5. Verify weak-confidence sections are marked clearly.
  6. Verify all required top-level sections exist in Markdown (Executive Summary, Key Findings, Evidence by Sub-question, Contradictions and Uncertainty, Recommendations, Sources).
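Check 6 is mechanical enough to script. A minimal sketch that scans report.md's Markdown headings for the required section titles (it matches heading text exactly and ignores non-heading lines, which is an assumption about how the report is formatted):

```python
REQUIRED_SECTIONS = [
    "Executive Summary",
    "Key Findings",
    "Evidence by Sub-question",
    "Contradictions and Uncertainty",
    "Recommendations",
    "Sources",
]

def missing_sections(report_md: str) -> list:
    """Return required sections absent from the report's Markdown headings."""
    headings = {
        line.lstrip("#").strip()
        for line in report_md.splitlines()
        if line.startswith("#")
    }
    return [s for s in REQUIRED_SECTIONS if s not in headings]

sample = "# Executive Summary\nsummary text\n# Sources\n- link\n"
print(missing_sections(sample))
```

An empty return list means check 6 passes; the other checks still require human review.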

Use references/research-quality.md for scoring rubric and iteration guidance.

Troubleshooting

If execution fails with missing package errors, install dependencies from scripts/requirements.txt. If JSON parsing fails, rerun with the same topic and lower --depth, then inspect plan_raw.txt or research_raw.json. If web-search tool type is rejected, pass a compatible value via --web-tool-type or disable web search.
