# Research Agent — Deep Investigation on Any Topic
A structured research workflow that turns a vague question into a comprehensive analysis: five research modes, each with a clear output format. Supports web search, source evaluation, and structured reporting.
## Research Modes
| Mode | Trigger | Output |
|---|---|---|
| Quick | "What is X?" / "Tell me about X" | 1-paragraph summary + 3 key facts |
| Deep Dive | "Research X" / "Deep dive into X" | Full analysis report |
| Compare | "Compare X vs Y" / "X or Y?" | Comparison matrix + recommendation |
| Landscape | "What's out there for X?" / "Alternatives to X" | Market map + positioning |
| Evaluate | "Should we use X?" / "Is X worth it?" | Decision framework with scoring |
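
For illustration, the trigger-to-mode routing can be sketched as a small dispatcher. The regex patterns and `Mode` names below are assumptions invented for the example; in practice the model reads these triggers from SKILL.md rather than running regexes.

```python
import re
from enum import Enum

class Mode(Enum):
    QUICK = "quick"
    DEEP_DIVE = "deep_dive"
    COMPARE = "compare"
    LANDSCAPE = "landscape"
    EVALUATE = "evaluate"

# Illustrative trigger patterns, checked in order; Evaluate outranks
# Compare so "Should we build on X or Y?" routes to Evaluate.
TRIGGERS = [
    (re.compile(r"should we|worth (it|adopting)|pros and cons", re.I), Mode.EVALUATE),
    (re.compile(r"\bvs\.?\b|\bor\b.*\?|compare", re.I), Mode.COMPARE),
    (re.compile(r"landscape|alternatives to|what's out there", re.I), Mode.LANDSCAPE),
    (re.compile(r"\bresearch\b|deep dive", re.I), Mode.DEEP_DIVE),
    (re.compile(r"what is|tell me about", re.I), Mode.QUICK),
]

def route(query: str) -> Mode:
    """Return the first matching mode; default to Quick."""
    for pattern, mode in TRIGGERS:
        if pattern.search(query):
            return mode
    return Mode.QUICK

assert route("Claude Code vs Cursor vs Codex") is Mode.COMPARE
assert route("What is gstack?") is Mode.QUICK
```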
## How to Use
### Quick Research (30 seconds)

- "What is gstack?"
- "Tell me about Claude Code skills"

→ Web search, extract key facts, 1-paragraph summary. No fluff.
### Deep Dive (2-5 minutes)

- "Research the AI coding agent landscape"
- "Deep dive into Agent Skills standard"

→ Spawns a subagent (Sonnet) with the Deep Dive prompt: searches multiple sources, cross-references them, identifies patterns, writes `RESEARCH.md`.
### Compare (1-3 minutes)

- "Claude Code vs Cursor vs Codex"
- "RICE vs Kano vs ICE for prioritization"
- "Notion vs Linear vs Jira"

→ Side-by-side comparison table with scoring across key dimensions. Includes a recommendation with reasoning.
### Landscape Analysis (3-5 minutes)

- "What open source projects exist for X?"
- "Map the competitive landscape for X"
- "What tools do PMs use for X?"

→ Categorized map of existing solutions. For each: what it does, what it misses, where the gap is.
### Evaluate (2-3 minutes)

- "Should we build on X or Y?"
- "Is it worth adopting X?"
- "Pros and cons of using X for our case"

→ Decision matrix scoring across dimensions (cost, effort, risk, fit, longevity). Recommendation with confidence level.
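
As a sketch of what the Evaluate scoring might look like as code: the dimensions match the list above, but the weights and scores are invented for the example.

```python
# Illustrative weighted scoring for the Evaluate decision matrix.
# Dimensions come from the list above; weights and scores are invented.
WEIGHTS = {"cost": 0.20, "effort": 0.20, "risk": 0.25, "fit": 0.25, "longevity": 0.10}

def weighted_score(scores: dict[str, int]) -> float:
    """Collapse 1-5 per-dimension scores into one 0-5 figure."""
    return sum(WEIGHTS[dim] * s for dim, s in scores.items())

option_x = {"cost": 4, "effort": 3, "risk": 4, "fit": 5, "longevity": 3}
option_y = {"cost": 5, "effort": 4, "risk": 2, "fit": 3, "longevity": 4}

for name, scores in [("X", option_x), ("Y", option_y)]:
    print(f"Option {name}: {weighted_score(scores):.2f} / 5")
```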
## Phase Details
### Deep Dive Prompt
Spawn a subagent (Sonnet) with this research methodology:
1. **Define the question.** Restate the research question. What specifically are we trying to find out?

2. **Source gathering.** Search for:
   - Official docs / primary sources (most reliable)
   - Community discussions (Reddit, HN, Discord — real user opinions)
   - Technical analysis (blog posts, benchmarks, comparisons)
   - GitHub metrics (stars, activity, issues, contributors)
   - Commercial context (funding, team, business model)

3. **Source evaluation.** For each source:
   - Credibility: official vs community vs opinion
   - Recency: when was this published/updated?
   - Bias: does the author have a stake in the outcome?

4. **Pattern extraction.** What themes emerge across sources?
   - Points of agreement (high confidence)
   - Points of disagreement (needs further investigation)
   - Gaps in available information

5. **Structured output.** Write `RESEARCH.md` with:
   - Executive summary (3-5 sentences)
   - Key findings (numbered, with sources)
   - Detailed analysis (organized by theme)
   - Gaps and caveats (what we couldn't verify)
   - Recommendation (if applicable)
   - Sources (with URLs)
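
For a sense of how the five steps could be packed into a single subagent prompt, here is a minimal sketch. The wording and the `spawn_subagent` helper are illustrative assumptions, not the skill's literal prompt or a real Claude Code API.

```python
# Hypothetical prompt assembly for the Deep Dive subagent.
# spawn_subagent() stands in for whatever subagent mechanism the host
# environment provides; it is not a real API.
DEEP_DIVE_PROMPT = """\
Research question: {question}

1. Define the question: restate what we are trying to find out.
2. Gather sources: official docs, community discussions (Reddit, HN,
   Discord), technical analyses, GitHub metrics, commercial context.
3. Evaluate each source for credibility, recency, and bias.
4. Extract patterns: points of agreement, disagreement, and gaps.
5. Write RESEARCH.md: executive summary (3-5 sentences), numbered key
   findings with sources, analysis by theme, gaps and caveats, a
   recommendation if applicable, and source URLs.
"""

def deep_dive(question: str, spawn_subagent) -> str:
    """Fill the template and hand it to a Sonnet-class subagent."""
    return spawn_subagent(model="sonnet",
                          prompt=DEEP_DIVE_PROMPT.format(question=question))
```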
### Compare Prompt
For comparing N items across M dimensions:
1. **Define comparison axes.** What dimensions matter for this decision?
   - Functional: what can it do?
   - Performance: how fast/reliable?
   - Cost: pricing model, free tier?
   - Ecosystem: integrations, community, docs?
   - Maturity: how battle-tested?

2. **Score each item** (1-5 per dimension):

   | Dimension   | Option A   | Option B | Option C   |
   |-------------|------------|----------|------------|
   | Feature set | ⭐⭐⭐⭐   | ⭐⭐⭐   | ⭐⭐⭐⭐⭐ |
   | Ease of use | ⭐⭐⭐⭐⭐ | ⭐⭐⭐   | ⭐⭐       |

3. **Context-specific recommendation.** Not "A is best" but "A is best IF you need X, B if you need Y."
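
A small sketch of how the star matrix above could be generated from numeric 1-5 scores; the option names and scores are made up for the example.

```python
# Render 1-5 numeric scores as the star matrix shown above.
# Option names and scores are made up for the example.
scores = {
    "Feature set": {"Option A": 4, "Option B": 3, "Option C": 5},
    "Ease of use": {"Option A": 5, "Option B": 3, "Option C": 2},
}

options = list(next(iter(scores.values())))
header = "| Dimension | " + " | ".join(options) + " |"
rule = "|" + "---|" * (len(options) + 1)
rows = [
    "| " + dim + " | " + " | ".join("⭐" * cells[opt] for opt in options) + " |"
    for dim, cells in scores.items()
]
print("\n".join([header, rule, *rows]))
```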
### Landscape Prompt
For mapping a space:
1. **Categorize solutions:**
   - Direct competitors (same approach, same users)
   - Adjacent tools (different approach, overlapping use case)
   - Workarounds (not products, but how people solve it today)
   - Emerging (new, not proven yet)

2. **For each solution:**
   - What it does (1 sentence)
   - What it does well (strength)
   - What it misses (gap)
   - Who should use it (ideal user)

3. **Identify the gap.** Where is nobody doing a good job? That's the opportunity.
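
If you want the landscape map as structured data before writing it up, one possible shape is sketched below; the `Solution` fields mirror the per-solution list above, and everything else is an assumption.

```python
from dataclasses import dataclass
from typing import Literal

# The four categories from step 1 above.
Category = Literal["direct", "adjacent", "workaround", "emerging"]

@dataclass
class Solution:
    """One landscape entry, mirroring the per-solution fields above."""
    name: str
    category: Category
    does: str        # what it does (1 sentence)
    strength: str    # what it does well
    gap: str         # what it misses
    ideal_user: str  # who should use it

def find_gaps(solutions: list[Solution]) -> list[str]:
    """Collect the 'what it misses' notes: raw material for the opportunity."""
    return [f"{s.name}: {s.gap}" for s in solutions]
```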
## Output Files
- `RESEARCH.md` — deep dive report (full analysis with sources)
- Comparison results go to stdout (capture in conversation)
- Landscape maps go to stdout, or `LANDSCAPE.md` if long
## Model Selection
| Mode | Model | Why |
|---|---|---|
| Quick | Haiku | Simple lookup, fast answer |
| Deep Dive | Sonnet | Needs reasoning, source evaluation |
| Compare | Sonnet | Needs judgment for scoring |
| Landscape | Sonnet | Needs categorization and pattern recognition |
| Evaluate | Sonnet | Needs decision-making framework |
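
Encoded as data, the table reduces to a simple mode-to-model map. The string values below are shorthand placeholders, not exact model identifiers.

```python
# Mode-to-model map from the table above. The values are shorthand
# placeholders, not exact model identifiers.
MODEL_FOR_MODE = {
    "quick": "haiku",       # simple lookup, fast answer
    "deep_dive": "sonnet",  # reasoning + source evaluation
    "compare": "sonnet",    # judgment for scoring
    "landscape": "sonnet",  # categorization, pattern recognition
    "evaluate": "sonnet",   # decision-making framework
}
```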
## Tips
- Be specific. "Research AI" is too broad. "Research AI coding agents for solo developers" is actionable.
- State your goal. "I need to decide between X and Y" gives the research direction.
- Time-box it. "Give me the top 5, not top 50" keeps it focused.
- Ask for sources. "Show me where you found this" for verification.