Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
OpenAI Deep Research Skill (v0.1.0)
Execute multi-step deep research with the OpenAI Responses API, including question decomposition, evidence gathering with web search, contradiction tracking, ...
by Grus (@guanglechen)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan (OpenClaw)
Verdict: Suspicious (medium confidence)

Purpose & Capability
The name and description (deep research using the OpenAI Responses API) match the included script and README: the tool decomposes questions, gathers evidence (optionally via a web-search tool), and writes audit artifacts. However, the registry metadata claims there are no required environment variables or primary credential, while SKILL.md and the scripts explicitly require OPENAI_API_KEY (and optionally OPENAI_BASE_URL). This discrepancy is an information-gap issue: the listing does not show users what secrets the skill will consume.
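The metadata gap described above can be checked mechanically by comparing env-var-like names referenced in SKILL.md against those declared in the registry. A minimal sketch; the regex heuristic and function names are illustrative, not part of the skill or the scanner:

```python
import re

def env_vars_referenced(text: str) -> set[str]:
    """Collect ALL_CAPS tokens that look like environment variable names."""
    return set(re.findall(r"\b[A-Z][A-Z0-9_]{2,}\b", text))

def undeclared_env_vars(skill_md: str, declared: set[str]) -> set[str]:
    """Names that SKILL.md references but the registry metadata omits."""
    return {v for v in env_vars_referenced(skill_md) if v not in declared}
```

Run against this skill's SKILL.md with an empty declared set, a check like this would surface OPENAI_API_KEY and OPENAI_BASE_URL as undeclared.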
Instruction Scope
SKILL.md and the Python script restrict activity to the research workflow: planning, calling the OpenAI Responses API, optional web-search tool usage, and writing local artifact files under outputs/. The instructions do not direct the agent to read unrelated system files or harvest environment variables beyond the API key and base URL. The workflow also provides a dry-run mode that avoids API calls.
Install Mechanism
There is no remote installer; SKILL.md tells the user to install dependencies with pip from the bundled scripts/requirements.txt, which lists only the official openai package. No remote downloads or executables are fetched, and the included code runs locally. Install risk is low.
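A reviewer can confirm the "only the official openai package" claim before installing by auditing the requirements file against an expected set. A hedged sketch; the helper name and parsing heuristic are ours:

```python
import re

def audit_requirements(text: str, expected: set[str]) -> list[str]:
    """Return requirement lines naming packages outside the expected set."""
    unexpected = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        # The package name ends at the first version/extras/marker specifier.
        name = re.split(r"[<>=!~\[;]", line, maxsplit=1)[0].strip().lower()
        if name not in expected:
            unexpected.append(line)
    return unexpected
```

An empty result for audit_requirements(open("scripts/requirements.txt").read(), {"openai"}) would corroborate the low-install-risk assessment.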
Credentials
The tool legitimately needs an OpenAI API key, but the registry metadata's failure to list OPENAI_API_KEY (and OPENAI_BASE_URL) is inconsistent and could mislead users about what secrets the skill will use. The script also allows pointing the client at an arbitrary OPENAI_BASE_URL (a user-controlled gateway). That feature is reasonable for some deployments but increases risk: a malicious base URL could exfiltrate prompts, responses, and the API key to an attacker. The publisher should declare OPENAI_API_KEY (and OPENAI_BASE_URL) in the registry metadata, and users should be cautious about where they point the base URL.
Persistence & Privilege
The registry's "always" flag is false; the skill does not request forced inclusion or write global agent configuration. It writes run artifacts into an outputs/<timestamp>-<slug> directory under the working directory, which is expected for this purpose and is not a privilege escalation.
What to consider before installing
This skill is mostly coherent for performing cited research with the OpenAI Responses API, but take these precautions before installing or running it:
- The SKILL.md and code require an OpenAI API key (OPENAI_API_KEY) even though the registry metadata did not declare it — do not supply sensitive org-wide keys unless you trust and have reviewed the code.
- The script allows setting OPENAI_BASE_URL (custom gateway). Only point this to endpoints you control or trust; a malicious gateway could capture prompts/responses or API keys if you pass them explicitly.
- Installation is local via pip installing the bundled requirements (openai package). Run in an isolated environment (virtualenv/container) if you want to limit system exposure.
- The tool writes output files to outputs/...; review produced artifacts for sensitive content before sharing them.
- If you need extra safety, run with --dry-run or --disable-web-search first, and inspect scripts/deep_research.py to confirm it behaves as you expect.
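The inspection step in the last bullet can be partly automated with a crude source scan for network, environment, and subprocess usage. The patterns and helper below are our illustration, not part of the skill:

```python
import re

RISK_PATTERNS = {
    "network": r"\b(requests|urllib|httpx|socket)\b",
    "env access": r"os\.environ|getenv",
    "subprocess": r"\bsubprocess\b|os\.system",
}

def flag_risky_lines(source: str) -> list[tuple[int, str, str]]:
    """Return (line number, category, line) for lines matching a risk pattern."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for category, pattern in RISK_PATTERNS.items():
            if re.search(pattern, line):
                hits.append((lineno, category, line.strip()))
    return hits
```

Feeding scripts/deep_research.py through such a scan gives a quick map of where to focus a manual review; it is a triage aid, not a substitute for reading the code.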
If you want higher assurance, ask the publisher to update the registry metadata to declare OPENAI_API_KEY and OPENAI_BASE_URL, and to add a short security note explaining whether the code ever transmits data to endpoints other than the Responses API and the optional web-search tool.
