Install

    openclaw skills install opusflame-deep-research

Autonomous multi-model deep research with framework-driven reasoning. Spawns 4 parallel model agents (Gemini 2.5 Pro, o3, Opus, MiniMax), each of which applies relevant best-practice analytical frameworks, then cross-validates and merges their findings into a comprehensive cited report.
            User Question
                  │
                  ▼
┌── Phase 0: Framework Selection ───┐
│ Identify best-practice            │
│ framework(s) for this question    │
└─────────────────┬─────────────────┘
                  │
    ┌─────────────┼─────────────┬─────────────┐
    ▼             ▼             ▼             ▼
 Gemini          o3           Opus         MiniMax
 2.5 Pro                        4           M2.5
 (search        (deep        (nuance      (China /
  heavy)        logic)     + balance)     alt view)
    │             │             │             │
    └─────────────┼─────────────┴─────────────┘
                  │
                  ▼
   Phase 5: Merge & Cross-Validate
                  │
                  ▼
          Final Report (PDF)
Before researching, ask: "Is there a best-practice framework for answering this type of question?"
| Question Type | Frameworks to Apply |
|---|---|
| Competitive strategy | Porter's Five Forces, 7 Powers (Helmer), Schwerpunkt/High Ground (Packy), SWOT |
| Market entry / sizing | TAM/SAM/SOM, Blue Ocean Strategy, Jobs-to-be-Done |
| Business model evaluation | Business Model Canvas, Unit Economics, Ramp vs Route test (point solution vs platform?) |
| Investment / valuation | DCF, Comparable Analysis, Venture method, Power Law thesis |
| Product strategy | JTBD, Kano Model, Value Prop Canvas, Hook Model |
| Growth / GTM | AARRR Pirate Metrics, Bullseye Framework, STP (Segmentation-Targeting-Positioning) |
| Technology assessment | Gartner Hype Cycle, Wardley Maps, Build vs Buy matrix |
| Risk analysis | Pre-Mortem, FMEA, Scenario Planning |
| Organizational / ops | OKR analysis, RACI, Theory of Constraints |
| Pricing | Van Westendorp, Conjoint, Value-based pricing framework |
| Industry analysis | Value Chain Analysis, Industry Lifecycle, Winner-Takes-More thesis |
| Person / hiring | Track Record Analysis, Reference Triangle, Founder-Market Fit |
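The routing in the table above can be sketched as simple keyword matching. This is an illustrative sketch only: the `KEYWORDS` lists and the subset of frameworks shown are assumptions, not part of the skill.

```python
# Illustrative sketch: route a question type to candidate frameworks.
# Keyword lists and the framework subset are examples, not exhaustive.
FRAMEWORKS = {
    "competitive strategy": ["Porter's Five Forces", "7 Powers", "SWOT"],
    "market entry": ["TAM/SAM/SOM", "Blue Ocean Strategy", "Jobs-to-be-Done"],
    "pricing": ["Van Westendorp", "Conjoint", "Value-based pricing"],
    "risk analysis": ["Pre-Mortem", "FMEA", "Scenario Planning"],
}

KEYWORDS = {
    "competitive strategy": ["competitor", "moat", "market share"],
    "market entry": ["market size", "expand into", "new market"],
    "pricing": ["price", "pricing", "willingness to pay"],
    "risk analysis": ["risk", "downside", "go wrong"],
}

def select_frameworks(question: str) -> list[str]:
    """Return candidate frameworks; an empty list means no standard framework."""
    q = question.lower()
    return [
        framework
        for qtype, words in KEYWORDS.items()
        if any(w in q for w in words)
        for framework in FRAMEWORKS[qtype]
    ]
```

An empty result falls through to the "no standard framework applies" branch below.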
If a framework applies:
If no standard framework applies:
Break the topic into 5-8 research sub-questions, approaching it like an investigative journalist.
Spawn 4 sub-agents using sessions_spawn, each with a different model:
Model 1: gemini (google/gemini-2.5-pro) — Search-heavy, broad coverage
Model 2: o3 (openai/o3) — Deep logical reasoning, contrarian
Model 3: opus (anthropic/claude-opus-4-6) — Nuanced, balanced synthesis
Model 4: minimax (minimax/MiniMax-M2.5) — Alternative perspectives, China/grey-area
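The fan-out above can be sketched as follows. The `spawn` call signature (prompt/model/mode keyword arguments) is an assumption about `sessions_spawn`, not its real schema; the helper takes the spawner as a parameter so the sketch stays self-contained.

```python
# Sketch of the Phase 2 fan-out. The spawn signature (prompt/model/mode
# keyword args) is an assumption about sessions_spawn, not its real schema.
MODELS = {
    "gemini":  "google/gemini-2.5-pro",      # search-heavy, broad coverage
    "o3":      "openai/o3",                  # deep logical reasoning, contrarian
    "opus":    "anthropic/claude-opus-4-6",  # nuanced, balanced synthesis
    "minimax": "minimax/MiniMax-M2.5",       # alternative perspectives
}

def spawn_all(prompt: str, spawn) -> dict:
    """Fan the same research prompt out to all 4 models in parallel."""
    sessions = {}
    for name, model_id in MODELS.items():
        # mode="run" is fire-and-forget: no polling, agents announce completion
        sessions[name] = spawn(prompt=prompt, model=model_id, mode="run")
    return sessions
```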
## Research Task
[Topic]
## Framework
You MUST structure your analysis using: [Framework Name]
Apply each component of the framework systematically to the topic.
If data is missing for a component, note it explicitly.
## Sub-Questions
[List of 5-8 sub-questions]
## Instructions
1. Use web_search extensively (minimum 10 unique searches)
2. Use web_fetch to read full articles for key sources
3. Cross-reference claims across 2+ sources
4. Structure findings around the framework components
5. Flag disagreements, unknowns, and low-confidence claims
6. Minimum 15 unique source URLs
7. Output format: markdown with inline citations [1][2]...
8. End with a Sources section listing all URLs
## Quality Rules
- Every factual claim needs a source
- Prefer primary sources (filings, official reports) over secondary
- Note source freshness — flag anything >6 months old
- Include opposing viewpoints
- State confidence level (high/medium/low) for key conclusions
All 4 models run in parallel via sessions_spawn with mode="run". Do NOT poll in a loop — they auto-announce when done.
Save each model's output:
memory/research/[topic]-gemini-[date].md
memory/research/[topic]-o3-[date].md
memory/research/[topic]-opus-[date].md
memory/research/[topic]-minimax-[date].md
This is the most critical phase. The primary agent (you) must:
Create a matrix of key claims and which models agree/disagree:
| Claim | Gemini | o3 | Opus | MiniMax | Confidence |
|-------|--------|----|----|---------|------------|
| [claim 1] | ✅ | ✅ | ✅ | ❌ | High (3/4) |
| [claim 2] | ✅ | ❌ | ✅ | ✅ | High (3/4) |
| [claim 3] | ✅ | ✅ | ❓ | ❓ | Medium (2/4) |
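The Confidence column can be derived mechanically from agreement counts. A sketch matching the example rows above (✅ agree, ❌ disagree, ❓ unclear); the exact thresholds are illustrative.

```python
# Sketch: derive the matrix's confidence label from per-model verdicts.
# Thresholds (3+/4 = High, 2/4 = Medium) match the example rows above.
def confidence(verdicts: dict[str, str]) -> str:
    agree = sum(1 for v in verdicts.values() if v == "✅")
    total = len(verdicts)
    label = "High" if agree >= 3 else "Medium" if agree == 2 else "Low"
    return f"{label} ({agree}/{total})"
```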
For each disagreement:
From experience, these are the things models most commonly get wrong:
Verify any quantitative claim that only one model makes.
# [Topic] — Deep Research Report
**Framework Used**: [Name] — [why this framework]
**Models**: Gemini 2.5 Pro, o3, Opus 4, MiniMax M2.5
**Date**: [date]
**Total Searches**: [count across all models]
## Executive Summary
3-5 sentence overview. Note consensus level.
## Framework Analysis
### [Framework Component 1]
Analysis with model consensus noted. [1][2]
### [Framework Component 2]
...
## Key Findings (Beyond Framework)
Discoveries that don't fit neatly into the framework.
## Model Disagreements
Where models diverged and why.
## Agreement Matrix
[The table from 5a]
## Data & Evidence
Tables, numbers, comparisons.
## Risks / Unknowns
What we couldn't confirm. Low-confidence areas.
## Conclusion & Recommendations
Actionable takeaways ranked by confidence.
## Sources
[1] Title — URL
[2] ...
Save the final merged report to:

    memory/research/[topic]-final-[date].md

See also: ~/.openclaw/media/outbound/references/financial-research.md