Install
openclaw skills install k-deep-research

Systematic deep research methodology for ANY domain. 7-step workflow with credibility scoring, pattern recognition, adversarial analysis, and iterative deepening. Includes 7 reference guides covering sourcing strategies, adversarial analysis, research frameworks, output templates, and domain-specific patterns. Produces exhaustive cited reports. Battle-tested across 40+ autonomous research loops.

Universal research methodology for any domain, any topic, any complexity level. Optimized for OpenClaw autonomous agents AND Claude.ai project workflows.
When research is requested, you MUST:
references/sourcing-strategies.md — WHERE and HOW to search

DO NOT skip this skill and jump to web search. Methodology > raw queries.
Execute in sequence for every investigation:
1. CONTEXT CHECK → Existing knowledge base / prior research
2. QUERY ELABORATION → Expand scope, plan search strategy
3. MULTI-SOURCE → Gather from diverse sources (40-80+ for deep)
4. PATTERN ANALYSIS → Cross-domain recognition, temporal/actor/info flow
5. CREDIBILITY SCORE → 0-10 scale on ALL sources, merit-based
6. SYNTHESIS → Compile findings preserving contradictions
7. OUTPUT → Obsidian .md with YAML frontmatter
Institutional Skepticism: Official narratives = data points, not truth claims.
Merit-Based Sources: All sources start equal. Evaluate on internal consistency, specificity, predictive accuracy, corroboration potential, incentive analysis, technical coherence. Peer review is not a truth guarantee; institutional rejection is not falsification.
Pattern Recognition: Temporal clustering, actor coordination, information flow, anomaly correlation, historical precedent, narrative consistency.
Epistemic Humility: Absence of evidence ≠ evidence of absence. BUT systematic patterns of absence ARE informative.
Physics First: Technical feasibility analysis before accepting exotic claims.
Adversarial Analysis: Cui bono? Suppression signatures? Inversion test (what if the "debunking" is the disinformation)?
SearXNG (PRIMARY for sensitive/adversarial research):
Web Search (general research):
Context7 MCP (technical documentation):
Filesystem (existing knowledge):
Decision Tree:
Sensitive/adversarial topic? → SearXNG first
Code/framework/API docs? → Context7 first
Existing research available? → Filesystem first
General research? → Web search
Always: → Multi-source triangulate
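The decision tree above maps directly onto a routing function. A hedged sketch (topic flags and tool identifiers are assumptions, not a defined API):

```python
def pick_first_tool(topic: dict) -> str:
    """Route a research topic to its FIRST tool. Triangulate afterward regardless."""
    if topic.get("sensitive"):
        return "searxng"        # sensitive/adversarial -> SearXNG first
    if topic.get("technical_docs"):
        return "context7"       # code/framework/API docs -> Context7 first
    if topic.get("prior_research"):
        return "filesystem"     # existing knowledge -> Filesystem first
    return "web_search"         # general research default
```

Whichever tool fires first, the Always rule still applies: triangulate across multiple sources before concluding.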
10 Primary authoritative (gov docs, peer-reviewed, direct observation)
9 Strong primary (institutional + verified, credentialed expert direct)
8 Quality secondary (investigative journalism w/citations, conference proceedings)
7 Reliable community (active GitHub repos, moderated forums, technical blogs w/code)
6 Useful tertiary (expert commentary, trade publications, reputable aggregators)
5 Uncertain (credible individual social media, partial verification)
4 Low confidence (uncited claims, opinion without evidence)
3 Very weak (anonymous, no evidence, circular references)
2 Highly suspect (known misinfo, commercial bias, contradicts primary evidence)
1 Unreliable (tabloids, known fabricators, pure speculation)
0 Flagged (coordinated disinfo, state propaganda, narrative enforcement)
CRITICAL: Score reflects evaluated merit, NOT source prestige. A forum post with technical depth and internal logic may outrank a mainstream article amplifying official statements.
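When the 0-10 scores feed into a report's confidence field, they can be bucketed. A sketch under assumed thresholds (the band names and cutoffs are illustrative, not defined by the skill):

```python
def credibility_band(score: int) -> str:
    """Map a 0-10 credibility score to a coarse band. Thresholds are assumptions."""
    if not 0 <= score <= 10:
        raise ValueError("score must be 0-10")
    if score >= 8:
        return "strong"      # 8-10: primary / quality secondary
    if score >= 5:
        return "usable"      # 5-7: reliable community / tertiary / uncertain
    if score >= 2:
        return "weak"        # 2-4: low confidence through highly suspect
    return "flagged"         # 0-1: unreliable or coordinated disinfo
```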
Every report gets YAML frontmatter:
---
title: "[Investigation Title]"
date: YYYY-MM-DD
status: complete|ongoing|stalled
confidence: high|medium|low|mixed
sources: [count]
words: [approximate]
methodology: k-deep-research-v2
tags: [domain-relevant-tags]
---
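Emitting the frontmatter block above programmatically is straightforward. A minimal sketch (the function is hypothetical; field names and order follow the template):

```python
def frontmatter(title: str, date: str, status: str, confidence: str,
                sources: int, words: int, tags: list[str]) -> str:
    """Render the report's YAML frontmatter. Field set mirrors the template."""
    lines = [
        "---",
        f'title: "{title}"',
        f"date: {date}",
        f"status: {status}",
        f"confidence: {confidence}",
        f"sources: {sources}",
        f"words: {words}",
        "methodology: k-deep-research-v2",
        f"tags: [{', '.join(tags)}]",
        "---",
    ]
    return "\n".join(lines)
```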
Report structure scales to complexity:
LENGTH IS A FEATURE. 10,000+ words exhausting a topic = SUCCESS. 2,000 words hitting highlights = FAILURE.
State for ALL key conclusions:
When investigation stalls:
When tools fail (rate limits, paywalls, MCP errors):
references/sourcing-strategies.md — WHERE to find info, HOW to construct queries, multi-source triangulation, when to stop searching
references/research-frameworks.md — Multi-layer analysis (5 layers), credibility evaluation, information control detection, triangulation methodology, iterative deepening, quality checklist
references/output-templates.md — Format examples, selection guide, adaptive guidelines
references/openclaw-architecture.md — OpenClaw Gateway/Agent Runtime architecture, heartbeat daemon, memory systems, model failover, sub-agents, Lobster workflows, session management, tool policy
references/openclaw-skill-authoring.md — SKILL.md format, YAML frontmatter spec, three-tier loading, reference file patterns, ClawHub registry, security model, testing, publishing
references/autonomy-patterns.md — Proactive agent patterns, heartbeat vs cron, memory persistence, compaction survival, task registries, workflow orchestration, degradation monitoring, multi-agent coordination
references/adversarial-analysis.md — Suppression detection, institutional behavior, narrative flow analysis, information archaeology, inversion testing, incentive mapping

Research request arrives →
1. ALWAYS: sourcing-strategies.md
2. IF complex multi-domain: research-frameworks.md
3. IF OpenClaw/agent topic: openclaw-architecture.md + autonomy-patterns.md
4. IF building skills: openclaw-skill-authoring.md
5. IF institutional/suppression angle: adversarial-analysis.md
6. IF custom output needed: output-templates.md
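The loading rules above can be sketched as a selector (the flag names are assumptions; the file paths are the skill's own):

```python
def references_to_load(flags: set[str]) -> list[str]:
    """Select reference files per the loading rules. Flag names are illustrative."""
    refs = ["references/sourcing-strategies.md"]              # 1. ALWAYS
    if "multi_domain" in flags:                               # 2. complex multi-domain
        refs.append("references/research-frameworks.md")
    if "openclaw" in flags:                                   # 3. OpenClaw/agent topic
        refs += ["references/openclaw-architecture.md",
                 "references/autonomy-patterns.md"]
    if "skill_authoring" in flags:                            # 4. building skills
        refs.append("references/openclaw-skill-authoring.md")
    if "adversarial" in flags:                                # 5. institutional/suppression
        refs.append("references/adversarial-analysis.md")
    if "custom_output" in flags:                              # 6. custom output needed
        refs.append("references/output-templates.md")
    return refs
```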
When this skill runs inside OpenClaw:
This methodology is universal. What changes: domain-specific sources and authorities. What stays constant: credibility scoring, pattern recognition, triangulation, epistemic humility.
When K asks a question, the answer is a complete investigation, not a response.