McKinsey Research

v2.1.0

Run a full McKinsey-level market research and strategy analysis using 12 specialized prompts. USE WHEN: - market research, competitive analysis, business str...

9 stars · 3.4k downloads · 20 current · 20 all-time
by Abdullah AlRashoudi (@abdullah4ai)
Security Scan
VirusTotal: Benign (view report)
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description ask for multi-step market research and the SKILL.md only requires web_search, web_fetch, and spawning sub-agents — these are appropriate and proportionate to producing research analyses. No unrelated env vars, binaries, or external credentials are requested.
Instruction Scope
Runtime instructions are explicit and scoped: they sanitize inputs, wrap data in <user_data> tags, spawn sub-agents for each analysis, restrict sub-agents' capabilities (no exec, no arbitrary messaging, limited file writes), and assemble a single HTML report. This is coherent, but the coordinator and sub-agents write analysis artifacts to local workspace directories (artifacts/research/...). Those artifact files persist across sessions and may contain sanitized user inputs and scraped web search results; the skill warns against storing credentials, but it cannot technically prevent a user from submitting secrets, which would then be stored. Also, the sanitization strips tags, URLs, and code blocks and truncates fields, which mitigates some injection risks but does not remove arbitrary plaintext secrets or PII.
Install Mechanism
No install spec or external downloads; the skill is instruction-only and will not place new binaries on disk. Low install risk.
Credentials
The skill requests no environment variables or credentials, which is appropriate for its purpose. Note: it still writes user-provided business data and fetched market data to local artifact files, so the effective exposure surface is persisted data rather than env/credential access.
Persistence & Privilege
always:false and no special platform privileges, which is standard. However, artifact persistence is a meaningful privilege: sub-agent outputs and the final HTML report are stored in artifacts/research/{slug}/ and 'may be readable by other skills in the same workspace' (per references/security.md). That persistent storage is intentional for the workflow but increases risk if users supply sensitive data.
Assessment
This skill is internally consistent and appears to do what it says: a coordinated set of sub-agents performing 12 analyses and producing a report. Before using it:

  1. Do not paste secrets, API keys, passwords, or sensitive customer data into the intake form; artifact files are written to disk and persist.
  2. If you must test, use dummy data first to confirm output and artifact behavior.
  3. Review references/security.md: it strips tags, URLs, and code blocks and truncates inputs, but it cannot remove plain-text secrets you supply.
  4. Be aware the skill will perform web searches and may fetch URLs found in search results; the final report may include quoted external content subject to copyright or inaccuracies.
  5. Confirm you trust the skill source before cloning/installing (the README suggests a third-party GitHub copy).

If you need stronger guarantees (no persistent storage or stricter secret scrubbing), ask for those features or run the skill in an isolated/ephemeral workspace.

Like a lobster shell, security has layers — review code before you run it.

latest: vk979vj24knv4ht565nde9djgfh83kw0j
3.4k downloads · 9 stars · 8 versions
Updated 3w ago
v2.1.0 · MIT-0

McKinsey Research - AI Strategy Consultant

User provides business context once. The skill plans and executes up to 12 specialized analyses via sub-agents in parallel, then synthesizes into a single executive report. Adapt scope based on company stage (see Adaptive Stage Logic below).

Phase 1: Language + Intake

Ask for the preferred language (Arabic/English), then collect ALL inputs in ONE structured form. Intake form fields: Core (1-5), Financial (6-10), Strategic (11-14), Expansion (15-16), Performance (17-18). If the product description is under 50 words, ask for clarification before proceeding.
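The Phase 1 gating rule can be sketched in a few lines (illustrative Python; the field groupings come from the form above, while the dict layout and function name are this sketch's own assumptions):

```python
# Illustrative sketch of Phase 1 intake validation. The field groupings
# mirror the intake form; the structure here is an assumption, not the
# skill's actual implementation.

INTAKE_GROUPS = {
    "Core": range(1, 6),           # fields 1-5
    "Financial": range(6, 11),     # fields 6-10
    "Strategic": range(11, 15),    # fields 11-14
    "Expansion": range(15, 17),    # fields 15-16
    "Performance": range(17, 19),  # fields 17-18
}

def needs_clarification(product_description: str) -> bool:
    """True when the description is under 50 words (the Phase 1 rule)."""
    return len(product_description.split()) < 50
```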

Diamond Gate 1: Present scope summary (market, geography, competitors). Get user confirmation before Phase 2.

Phase 2: Plan + Parallel Execution

Sanitize inputs per references/security.md. Substitute variables per references/variable-map.md. Load individual prompts from references/prompts/.
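A minimal sketch of the sanitization pass, assuming the behavior summarized in the review above (strip tags, URLs, and fenced code blocks, then truncate); the authoritative rules live in references/security.md, and the MAX_FIELD_LEN value here is a guess:

```python
import re

# Hedged sketch of the sanitization described in references/security.md:
# remove fenced code blocks, HTML/XML tags, and URLs, then truncate.
# The truncation limit is an assumption.

MAX_FIELD_LEN = 2000  # assumed limit; the real value is in security.md

def sanitize(text: str, max_len: int = MAX_FIELD_LEN) -> str:
    text = re.sub(r"```.*?```", "", text, flags=re.DOTALL)  # fenced code blocks
    text = re.sub(r"<[^>]+>", "", text)                     # tags
    text = re.sub(r"https?://\S+", "", text)                # URLs
    return text[:max_len].strip()

def wrap(text: str) -> str:
    """Sanitized values are wrapped in <user_data> tags before reaching sub-agents."""
    return f"<user_data>{sanitize(text)}</user_data>"
```

Note that this removes markup and links but, as the review points out, cannot detect plain-text secrets the user pastes in.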

| Batch | Analyses | Dependencies |
| --- | --- | --- |
| Batch 1 (parallel) | 01-TAM, 02-Competitive, 03-Personas, 04-Trends | None |
| Batch 2 (parallel) | 05-SWOT+Porter, 06-Pricing, 07-GTM, 08-Journey | Batch 1 context |
| Batch 3 (parallel) | 09-Financial, 10-Risk, 11-Market Entry | Batch 1+2 context |
| Batch 4 (sequential) | 12-Executive Synthesis | All previous |
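The batch plan can be sketched as a simple orchestration loop (illustrative Python; spawn_subagent is a stand-in for the real sub-agent API, and the 5-second Batch 1 stagger is exposed as a parameter so it can be disabled when testing):

```python
import time

# Hedged sketch of the batch orchestration. Batch contents come from the
# table above; spawn_subagent(name, context) is a placeholder, not the
# skill's actual API.

BATCHES = [
    ["01-TAM", "02-Competitive", "03-Personas", "04-Trends"],
    ["05-SWOT+Porter", "06-Pricing", "07-GTM", "08-Journey"],
    ["09-Financial", "10-Risk", "11-Market Entry"],
    ["12-Executive Synthesis"],
]

def run_batches(spawn_subagent, stagger_s=5):
    context = []  # outputs accumulated across batches
    for i, batch in enumerate(BATCHES):
        outputs = []
        for j, name in enumerate(batch):
            if i == 0 and j > 0 and stagger_s:
                time.sleep(stagger_s)  # stagger Batch 1 to avoid search rate limits
            outputs.append(spawn_subagent(name, list(context)))
        context.extend(outputs)  # later batches see all earlier results
    return context
```

Each batch only starts after the previous one has fully contributed its outputs, matching the dependency column.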

Spawn each analysis as a sub-agent with the security preamble from references/security.md. Stagger Batch 1 launches by 5 seconds to avoid web search rate limits. Validate each output is 500+ words.

See references/gotchas.md for common pitfalls. Use references/saudi-market.md for KSA/Gulf data sources. Use references/benchmarks.md for industry metric comparisons.

Phase 3: Collect + Synthesize

  1. Read all analysis outputs from artifacts/research/{slug}/
  2. Run Prompt 12 (Executive Synthesis) with all previous outputs
  3. Generate final HTML report using templates/report.html
  4. Save to artifacts/research/{date}-{slug}.html
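Steps 1-4 above reduce to a collect-and-render pass, sketched here under the assumption that templates/report.html uses a {{content}} placeholder (the placeholder name is a guess; the paths follow the Artifacts section):

```python
from datetime import date
from pathlib import Path

# Hedged sketch of Phase 3. Directory layout follows the Artifacts
# section; the {{content}} placeholder is an assumed template convention.

def collect_and_render(slug: str, template: str,
                       root: str = "artifacts/research") -> Path:
    analysis_dir = Path(root) / slug
    # 1. Read all analysis outputs in a stable order
    sections = [p.read_text() for p in sorted(analysis_dir.glob("*.md"))]
    # 3. Render the final HTML report from the template
    html = template.replace("{{content}}", "\n".join(sections))
    # 4. Save as {date}-{slug}.html alongside the analysis directory
    out = Path(root) / f"{date.today().isoformat()}-{slug}.html"
    out.write_text(html)
    return out
```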

Phase 4: Delivery

Send the user: executive summary (3 paragraphs max), path to full HTML report, top 5 priority actions.

Adaptive Stage Logic

| Stage | Priority Analyses | Skip/Light |
| --- | --- | --- |
| Idea | TAM, Personas, Competitive, Trends | Financial Model (light), Market Entry (skip) |
| Startup | TAM, Competitive, Pricing, GTM, Personas | Market Entry (skip unless asked) |
| Growth | Pricing, GTM, Journey, Financial, Expansion | TAM (light), Personas (light) |
| Mature | SWOT, Risk, Expansion, Financial, Synthesis | TAM (skip), Personas (skip) |

"Light" = include in synthesis but don't spawn a dedicated sub-agent. Use web_search inline. "Skip" = omit unless user explicitly requests.

Artifacts

  • Individual analyses: artifacts/research/{slug}/{analysis-name}.md
  • Final report: artifacts/research/{date}-{slug}.html
  • Raw data: artifacts/research/{slug}/data/
  • Execution log: data/reports.jsonl
  • Feedback tracking: data/feedback.json
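Appending to the execution log can be sketched as (the data/reports.jsonl path comes from the list above; the record fields are assumptions):

```python
import json
import time
from pathlib import Path

# Hedged sketch of the execution log. Only the path comes from the
# Artifacts section; the record schema here is illustrative.

def log_run(slug: str, status: str, path: str = "data/reports.jsonl") -> None:
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    record = {"slug": slug, "status": status, "ts": time.time()}
    with open(p, "a") as f:
        f.write(json.dumps(record) + "\n")  # one JSON object per line
```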

Important Notes

  • Each prompt produces a consulting-grade deliverable
  • Use web_search to enrich with real market data; only cite verifiable sources
  • If user provides partial info, work with what you have and note assumptions
  • For Arabic output: keep brand names and technical terms in English
  • Prompt 12 must cross-reference insights from all previous analyses; deduplicate aggressively
  • Sub-agents that fail should be retried once before skipping with a note
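The retry rule in the last bullet can be sketched as (illustrative Python; run is a stand-in for spawning a sub-agent):

```python
# Hedged sketch of the retry rule: a failed sub-agent gets exactly one
# retry, then is skipped with a note. run(name) is a placeholder.

def run_with_retry(run, name: str, notes: list):
    for attempt in (1, 2):
        try:
            return run(name)
        except Exception as exc:
            last = exc  # remember the failure for the skip note
    notes.append(f"{name}: skipped after one retry ({last})")
    return None
```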

Reference Files

| File | Contents |
| --- | --- |
| references/security.md | Input safety, sanitization, tool constraints, artifact isolation |
| references/variable-map.md | Variable substitution rules and mapping table |
| references/prompts/ | 12 individual analysis prompts (01-tam.md through 12-synthesis.md) |
| references/prompts.md | Original combined prompts (backup) |
| references/gotchas.md | Known pitfalls and operational tips |
| references/saudi-market.md | KSA/Gulf data sources and market context |
| references/benchmarks.md | Industry benchmarks (SaaS, e-commerce, fintech, marketplace, mobile) |
| templates/report.html | HTML report template |
