Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

deep-research-pro

v1.0.0

Multi-source deep research agent. Searches the web via SkillBoss API Hub, synthesizes findings, and delivers cited reports.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for marjoriebroad/abe-deep-research-pro.

Prompt Preview: Install & Setup
Install the skill "deep-research-pro" (marjoriebroad/abe-deep-research-pro) from ClawHub.
Skill page: https://clawhub.ai/marjoriebroad/abe-deep-research-pro
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install abe-deep-research-pro

ClawHub CLI


npx clawhub@latest install abe-deep-research-pro
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Pending
OpenClaw
Suspicious
medium confidence
Purpose & Capability
SKILL.md and README consistently describe a research skill that uses SkillBoss API Hub (api.heybossai.com) and requires a SKILLBOSS_API_KEY — that capability aligns with the description. However, the registry metadata at the top of the package summary declares "Required env vars: none" and "Primary credential: none", which contradicts SKILL.md. This mismatch is unexpected and should be resolved (the skill will not function without the API key).
Instruction Scope
The SKILL.md provides concrete runtime instructions: run web/news searches and scraping via the SkillBoss API, deep-read fetched pages, synthesize a cited report, save the report under ~/clawd/research/[slug]/report.md, and optionally spawn sub-agents (sessions_spawn). All of these are within the stated research purpose. The sub-agent spawning, and writing to the user's home directory, expand the skill's runtime reach and should be intentionally allowed by the user/agent policy.
Install Mechanism
No install spec or code files to execute are present beyond documentation and examples. This is an instruction-only skill (no downloads or extracted archives), which is low risk from an installer perspective.
Credentials
At runtime the skill requires a single external credential (SKILLBOSS_API_KEY) which is proportional to a web-scraping/searching integration. The concern is the metadata mismatch: registry metadata claims no required env vars but SKILL.md requires SKILLBOSS_API_KEY — that inconsistency could hide unexpected runtime requirements or deployment mistakes.
Persistence & Privilege
The skill does not request always:true and is user-invocable. It writes reports to the user's home folder and instructs spawning sub-agents, which are normal for a research skill but increase operational scope. Autonomous invocation is allowed by default; combined with external API access this increases what the agent can do if permitted, so consider policy controls.
What to consider before installing
Before installing:

  1. Confirm the SKILLBOSS_API_KEY requirement — the registry metadata and SKILL.md disagree; the skill will not work without this key.
  2. Verify you trust the SkillBoss provider (api.heybossai.com), since the key grants broad web-search/scraping ability; consider a scoped, rate-limited, or throwaway key for testing.
  3. Decide whether you are comfortable with the skill writing files under ~/clawd/research and spawning sub-agents; if not, restrict its filesystem or agent permissions.
  4. Ensure the agent environment has Python 3.11+ and the requests library installed if you intend to run the example code.
  5. If you need higher assurance, ask the author to clarify the metadata mismatch and provide a minimal test run (logs) showing that only expected network endpoints are contacted.

If anything else appears (additional env vars, unexpected hostnames, or hidden install steps), treat the skill as higher risk.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🔬 Clawdis
latest: vk971pn5gxby13hvg71kqemhfqs85cc22
69 downloads
0 stars
1 version
Updated 5d ago
v1.0.0
MIT-0

Deep Research Pro 🔬

A powerful, self-contained deep research skill that produces thorough, cited reports from multiple web sources. Powered by SkillBoss API Hub — web search and page scraping via a single unified API.

How It Works

When the user asks for research on any topic, follow this workflow:

Step 1: Understand the Goal (30 seconds)

Ask 1-2 quick clarifying questions:

  • "What's your goal — learning, making a decision, or writing something?"
  • "Any specific angle or depth you want?"

If the user says "just research it" — skip ahead with reasonable defaults.

Step 2: Plan the Research (think before searching)

Break the topic into 3-5 research sub-questions. For example:

  • Topic: "Impact of AI on healthcare"
    • What are the main AI applications in healthcare today?
    • What clinical outcomes have been measured?
    • What are the regulatory challenges?
    • What companies are leading this space?
    • What's the market size and growth trajectory?

Step 3: Execute Multi-Source Search

For EACH sub-question, call SkillBoss API Hub search:

import requests, os

SKILLBOSS_API_KEY = os.environ["SKILLBOSS_API_KEY"]

# Web search
result = requests.post(
    "https://api.heybossai.com/v1/pilot",
    headers={"Authorization": f"Bearer {SKILLBOSS_API_KEY}", "Content-Type": "application/json"},
    json={"type": "search", "inputs": {"query": "<sub-question keywords>"}, "prefer": "balanced"},
    timeout=60
).json()
search_results = result["result"]["results"]

# News search (for current events)
result = requests.post(
    "https://api.heybossai.com/v1/pilot",
    headers={"Authorization": f"Bearer {SKILLBOSS_API_KEY}", "Content-Type": "application/json"},
    json={"type": "search", "inputs": {"query": "<topic>", "search_type": "news"}, "prefer": "balanced"},
    timeout=60
).json()
news_results = result["result"]["results"]

Search strategy:

  • Use 2-3 different keyword variations per sub-question
  • Mix web + news searches
  • Aim for 15-30 unique sources total
  • Prioritize: academic, official, reputable news > blogs > forums
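The keyword-variation and deduplication strategy above can be sketched as a small helper. This is an illustrative sketch, not part of the skill: the endpoint, request shape, and `result["result"]["results"]` response path are copied from the search examples earlier, while the helper names (`search_web`, `gather_sources`) are assumptions.

```python
import os
import requests

API_URL = "https://api.heybossai.com/v1/pilot"  # SkillBoss endpoint from the examples above
SKILLBOSS_API_KEY = os.environ.get("SKILLBOSS_API_KEY", "")

def search_web(query, search_type=None):
    """One SkillBoss search call, mirroring the request shape shown above."""
    inputs = {"query": query}
    if search_type:
        inputs["search_type"] = search_type
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {SKILLBOSS_API_KEY}",
                 "Content-Type": "application/json"},
        json={"type": "search", "inputs": inputs, "prefer": "balanced"},
        timeout=60,
    )
    return resp.json()["result"]["results"]

def gather_sources(sub_question, variations):
    """Run 2-3 keyword variations plus one news search; dedupe hits by URL."""
    seen = {}
    for query in variations:
        for hit in search_web(query):
            seen.setdefault(hit.get("url"), hit)  # first hit per URL wins
    for hit in search_web(sub_question, search_type="news"):
        seen.setdefault(hit.get("url"), hit)
    return list(seen.values())
```

Deduplicating by URL before deep-reading keeps the source count honest when keyword variations return overlapping results.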

Step 4: Deep-Read Key Sources

For the most promising URLs, fetch full content via SkillBoss API Hub scraping:

result = requests.post(
    "https://api.heybossai.com/v1/pilot",
    headers={"Authorization": f"Bearer {SKILLBOSS_API_KEY}", "Content-Type": "application/json"},
    json={"type": "scraping", "inputs": {"url": "<url>"}},
    timeout=60
).json()
content = result["result"]["results"]

Read 3-5 key sources in full for depth. Don't just rely on search snippets.
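A minimal loop over the shortlisted URLs, using the scraping call above, might look like this. The error handling is an addition (a failed fetch should not abort the run); the request shape and response path come from the example above, and `deep_read` is an illustrative name.

```python
import os
import requests

API_URL = "https://api.heybossai.com/v1/pilot"
SKILLBOSS_API_KEY = os.environ.get("SKILLBOSS_API_KEY", "")

def deep_read(urls):
    """Fetch full page content for each shortlisted URL; skip failures."""
    pages = {}
    for url in urls:
        try:
            resp = requests.post(
                API_URL,
                headers={"Authorization": f"Bearer {SKILLBOSS_API_KEY}",
                         "Content-Type": "application/json"},
                json={"type": "scraping", "inputs": {"url": url}},
                timeout=60,
            )
            resp.raise_for_status()
            pages[url] = resp.json()["result"]["results"]
        except (requests.RequestException, KeyError, ValueError):
            # One unreachable or malformed page should not sink the report
            continue
    return pages
```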

Step 5: Synthesize & Write Report

Structure the report as:

# [Topic]: Deep Research Report
*Generated: [date] | Sources: [N] | Confidence: [High/Medium/Low]*

## Executive Summary
[3-5 sentence overview of key findings]

## 1. [First Major Theme]
[Findings with inline citations]
- Key point ([Source Name](url))
- Supporting data ([Source Name](url))

## 2. [Second Major Theme]
...

## 3. [Third Major Theme]
...

## Key Takeaways
- [Actionable insight 1]
- [Actionable insight 2]
- [Actionable insight 3]

## Sources
1. [Title](url) — [one-line summary]
2. ...

## Methodology
Searched [N] queries across web and news. Analyzed [M] sources.
Sub-questions investigated: [list]

Step 6: Save & Deliver

Save the full report:

mkdir -p ~/clawd/research/[slug]
# Write report to ~/clawd/research/[slug]/report.md

Then deliver:

  • Short topics: Post the full report in chat
  • Long reports: Post the executive summary and key takeaways, and offer the full report as a file
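The save step can be sketched in Python. The `~/clawd/research/[slug]/report.md` path matches the layout above; the slug rule (lowercase, hyphen-separated) and the `save_report` helper are assumptions for illustration.

```python
import re
from pathlib import Path

def save_report(topic, report_md, base=None):
    """Write the report under <base>/<slug>/report.md (default: ~/clawd/research)."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    base = Path(base) if base is not None else Path.home() / "clawd" / "research"
    out_dir = base / slug
    out_dir.mkdir(parents=True, exist_ok=True)  # same effect as `mkdir -p` above
    path = out_dir / "report.md"
    path.write_text(report_md, encoding="utf-8")
    return path
```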

Quality Rules

  1. Every claim needs a source. No unsourced assertions.
  2. Cross-reference. If only one source says it, flag it as unverified.
  3. Recency matters. Prefer sources from the last 12 months.
  4. Acknowledge gaps. If you couldn't find good info on a sub-question, say so.
  5. No hallucination. If you don't know, say "insufficient data found."
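Rule 2 can be enforced mechanically with a corroboration count. A sketch, assuming the agent builds a claim-to-sources mapping during synthesis (that mapping and the helper name are illustrative):

```python
def flag_unverified(claims):
    """claims: {claim_text: [source_url, ...]}.
    Return the claims backed by fewer than two distinct sources,
    so they can be marked as unverified in the report."""
    return [claim for claim, sources in claims.items()
            if len(set(sources)) < 2]
```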

Examples

"Research the current state of nuclear fusion energy"
"Deep dive into Rust vs Go for backend services in 2026"
"Research the best strategies for bootstrapping a SaaS business"
"What's happening with the US housing market right now?"

For Sub-Agent Usage

When spawning as a sub-agent, include the full research request and context:

sessions_spawn(
  task: "Run deep research on [TOPIC]. Follow the deep-research-pro SKILL.md workflow.
  Read /home/clawdbot/clawd/skills/deep-research-pro/SKILL.md first.
  Goal: [user's goal]
  Specific angles: [any specifics]
  Save report to ~/clawd/research/[slug]/report.md
  When done, wake the main session with key findings.",
  label: "research-[slug]",
  model: "opus"
)

Requirements

  • SKILLBOSS_API_KEY environment variable (for web search and page scraping via SkillBoss API Hub)
  • Python 3.11+ with requests library
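A quick preflight check for these two requirements could look like the sketch below. It only inspects the local environment and never calls the API; `preflight` is an illustrative name, not part of the skill.

```python
import importlib.util
import os
import sys

def preflight():
    """Return a list of problems; an empty list means the requirements are met."""
    problems = []
    if not os.environ.get("SKILLBOSS_API_KEY"):
        problems.append("SKILLBOSS_API_KEY is not set")
    if sys.version_info < (3, 11):
        problems.append(f"Python 3.11+ required, found {sys.version.split()[0]}")
    if importlib.util.find_spec("requests") is None:
        problems.append("the 'requests' library is not installed")
    return problems
```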
