Deep Research Agent

v1.0.0

Deep research and analysis agent for any topic. Use when the user wants to research a topic, analyze competitors, evaluate technologies, compare tools, inves...

by Jahonn Ding (@jahonn)
Security Scan
VirusTotal
Benign
OpenClaw
Benign (high confidence)
Purpose & Capability
Name/description (deep research, comparison, landscape, evaluation) match the SKILL.md and README. The skill doesn't ask for unrelated credentials, binaries, or config paths; all declared metadata is proportional to a research/reporting agent.
Instruction Scope
SKILL.md instructs the agent to run web searches, evaluate sources, spawn a subagent ('Sonnet') for deep tasks, and write RESEARCH.md (and optionally LANDSCAPE.md). These behaviors are within the stated research purpose but do grant the agent permission to perform web requests, aggregate results, and create files — verify the host environment's network & file policies if that matters to you.
Install Mechanism
No install spec and no code files that execute arbitrary downloads. package.json and references are documentation-only; the skill is instruction-only, which keeps install risk low.
Credentials
The skill requests no environment variables, credentials, or config paths. Its suggested data sources (GitHub, Reddit, HN, Discord) are reasonable for research and do not require secrets in principle. If you grant integrated connectors (e.g., GitHub/Discord APIs) later, review those permission requests separately.
Persistence & Privilege
Flags: always=false, user-invocable=true. The skill will create output files (RESEARCH.md) as part of normal operation but does not request persistent platform‑level privileges or try to modify other skills. Autonomous invocation is allowed by platform default and not a specific red flag here.
Assessment
This skill appears coherent for research tasks and poses no direct red flags (no credentials, no installers). Before installing, confirm: (1) whether the platform will allow the skill to access the web from the agent and whether that access is allowed to use any of your linked accounts; (2) where generated files (RESEARCH.md / LANDSCAPE.md) are stored and who can read them; (3) what 'spawn subagent (Sonnet)' means on your platform (which model/skill will be invoked and what permissions it has). Always verify claimed facts and follow the provided sources—the agent's outputs should be checked rather than blindly trusted.

Like a lobster shell, security has layers — review code before you run it.

latest · vk976sas5dtb2ysc06kfqzje6m583b0tj

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Research Agent — Deep Investigation on Any Topic

A structured research workflow that turns a vague question into a comprehensive analysis. Five research modes, each with a clear output format. Supports web search, source evaluation, and structured reporting.

Research Modes

| Mode | Trigger | Output |
|------|---------|--------|
| Quick | "What is X?" / "Tell me about X" | 1-paragraph summary + 3 key facts |
| Deep Dive | "Research X" / "Deep dive into X" | Full analysis report |
| Compare | "Compare X vs Y" / "X or Y?" | Comparison matrix + recommendation |
| Landscape | "What's out there for X?" / "Alternatives to X" | Market map + positioning |
| Evaluate | "Should we use X?" / "Is X worth it?" | Decision framework with scoring |

How to Use

Quick Research (30 seconds)

"What is gstack?"
"Tell me about Claude Code skills"

→ Web search, extract key facts, 1-paragraph summary. No fluff.
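
For illustration, a Quick answer might be shaped like this (a sketch; the skill doesn't mandate exact formatting, and the placeholders are mine):

```markdown
<one-paragraph summary: what it is, who it's for, why it matters>

Key facts:
1. <fact> (source)
2. <fact> (source)
3. <fact> (source)
```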

Deep Dive (2-5 minutes)

"Research the AI coding agent landscape"
"Deep dive into Agent Skills standard"

→ Spawn subagent (Sonnet) with the Deep Dive prompt. Searches multiple sources, cross-references, identifies patterns, writes RESEARCH.md.
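
A sketch of what the delegation carries (illustrative only; the actual subagent mechanism is platform-specific, as the security assessment above notes):

```markdown
Subagent model: Sonnet
Prompt: the Deep Dive prompt (see "Phase Details" below)
Question: "Research the AI coding agent landscape"
Deliverable: RESEARCH.md with summary, findings, analysis, gaps, and sources
```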

Compare (1-3 minutes)

"Claude Code vs Cursor vs Codex"
"RICE vs Kano vs ICE for prioritization"
"Notion vs Linear vs Jira"

→ Side-by-side comparison table with scoring across key dimensions. Includes a recommendation with reasoning.

Landscape Analysis (3-5 minutes)

"What open source projects exist for X?"
"Map the competitive landscape for X"
"What tools do PMs use for X?"

→ Categorized map of existing solutions. For each: what it does, what it misses, where the gap is.

Evaluate (2-3 minutes)

"Should we build on X or Y?"
"Is it worth adopting X?"
"Pros and cons of using X for our case"

→ Decision matrix scoring across dimensions (cost, effort, risk, fit, longevity). Recommendation with confidence level.
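
A worked sketch of the matrix: the dimensions come from above, but the options, weights, and scores are hypothetical. Weighted totals are the sum of weight × score; the maximum here is 9 × 5 = 45.

```markdown
| Dimension  | Weight | Option X | Option Y |
|------------|--------|----------|----------|
| Cost       | 2      | 4        | 3        |
| Effort     | 1      | 3        | 4        |
| Risk       | 2      | 2        | 4        |
| Fit        | 3      | 5        | 3        |
| Longevity  | 1      | 3        | 4        |
| **Weighted total** |  | **33/45** | **31/45** |

Recommendation: Option X, medium confidence. It wins on fit, the heaviest
dimension; prefer Option Y if risk matters more than fit in your context.
```

Pick the weights before scoring, not after, so the totals reflect your priorities rather than justify a foregone conclusion.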

Phase Details

Deep Dive Prompt

Spawn a subagent (Sonnet) with this research methodology:

  1. Define the question. Restate the research question. What specifically are we trying to find out?

  2. Source gathering. Search for:

    • Official docs / primary sources (most reliable)
    • Community discussions (Reddit, HN, Discord — real user opinions)
    • Technical analysis (blog posts, benchmarks, comparisons)
    • GitHub metrics (stars, activity, issues, contributors)
    • Commercial context (funding, team, business model)
  3. Source evaluation. For each source:

    • Credibility: official vs community vs opinion
    • Recency: when was this published/updated?
    • Bias: does the author have a stake in the outcome?
  4. Pattern extraction. What themes emerge across sources?

    • Points of agreement (high confidence)
    • Points of disagreement (needs further investigation)
    • Gaps in available information
  5. Structured output. Write RESEARCH.md with:

    • Executive summary (3-5 sentences)
    • Key findings (numbered, with sources)
    • Detailed analysis (organized by theme)
    • Gaps and caveats (what we couldn't verify)
    • Recommendation (if applicable)
    • Sources (with URLs)
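
A minimal RESEARCH.md skeleton matching the sections above (section order follows the list; the exact headings are illustrative):

```markdown
# Research: <restated question>

## Executive Summary
<3-5 sentences>

## Key Findings
1. <finding> (source)
2. <finding> (source)

## Detailed Analysis
### <theme>
<evidence, cross-references, points of disagreement>

## Gaps & Caveats
- <what couldn't be verified>

## Recommendation
<if applicable, with confidence level>

## Sources
- <title>: <URL>
```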

Compare Prompt

For comparing N items across M dimensions:

  1. Define comparison axis. What dimensions matter for this decision?

    • Functional: what can it do?
    • Performance: how fast/reliable?
    • Cost: pricing model, free tier?
    • Ecosystem: integrations, community, docs?
    • Maturity: how battle-tested?
  2. Score each item (1-5 per dimension):

    | Dimension     | Option A | Option B | Option C |
    |---------------|----------|----------|----------|
    | Feature set   | ⭐⭐⭐⭐   | ⭐⭐⭐     | ⭐⭐⭐⭐⭐  |
    | Ease of use   | ⭐⭐⭐⭐⭐  | ⭐⭐⭐     | ⭐⭐       |
    
  3. Context-specific recommendation. Not "A is best" but "A is best IF you need X, B if you need Y."
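
Applied to the example table above, a context-specific recommendation might read:

```markdown
Recommendation:
- Option A if ease of use is the priority: it leads that dimension.
- Option C if feature depth is the priority and you can absorb a steeper
  learning curve.
- Option B scores mid on both; choose it only if an unscored dimension
  (cost, ecosystem) favors it.
```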

Landscape Prompt

For mapping a space:

  1. Categorize solutions:

    • Direct competitors (same approach, same users)
    • Adjacent tools (different approach, overlapping use case)
    • Workarounds (not products, but how people solve it today)
    • Emerging (new, not proven yet)
  2. For each solution:

    • What it does (1 sentence)
    • What it does well (strength)
    • What it misses (gap)
    • Who should use it (ideal user)
  3. Identify the gap. Where is nobody doing a good job? That's the opportunity.
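
A per-solution entry template (category headings from step 1, fields from step 2; everything in angle brackets is a placeholder):

```markdown
### Direct competitors
- **<tool>**: <what it does, 1 sentence>. Strength: <what it does well>.
  Gap: <what it misses>. Ideal user: <who should use it>.

### Adjacent tools
- ...

### Workarounds
- ...

### Emerging
- ...
```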

Output Files

  • RESEARCH.md — Deep dive report (full analysis with sources)
  • Comparison results go to stdout (capture in conversation)
  • Landscape maps go to stdout or LANDSCAPE.md if long

Model Selection

| Mode | Model | Why |
|------|-------|-----|
| Quick | Haiku | Simple lookup, fast answer |
| Deep Dive | Sonnet | Needs reasoning, source evaluation |
| Compare | Sonnet | Needs judgment for scoring |
| Landscape | Sonnet | Needs categorization and pattern recognition |
| Evaluate | Sonnet | Needs decision-making framework |

Tips

  • Be specific. "Research AI" is too broad. "Research AI coding agents for solo developers" is actionable.
  • State your goal. "I need to decide between X and Y" gives the research direction.
  • Time-box it. "Give me the top 5, not top 50" keeps it focused.
  • Ask for sources. "Show me where you found this" for verification.
