What If? Scenario Builder
A unified methodology for constructing disciplined, insightful, and actionable "what if?" scenarios.
This skill integrates fifteen proven foresight and analytical frameworks into a coherent process that
takes you from a raw question to a rich, multi-layered exploration of possibilities.
Why This Skill Exists
Most "what if?" thinking fails in predictable ways: it stays on the surface (imagining one obvious
consequence and stopping), it ignores interactions between variables (treating everything as independent),
it is biased toward the dramatic (ignoring the base-rate boring outcome), or it collapses into a single
"most likely" future instead of preserving genuine uncertainty. This methodology is designed to prevent
all of these failures systematically.
The core insight across every major foresight tradition is that scenario work is not about being right —
it is about being ready. The value lies not in predicting which scenario will come true, but in expanding
the space of possibilities you are prepared for and revealing assumptions you did not know you were making.
The Probabilistic-Possibility Gap
One of the most productive places in scenario thinking is the gap between what is probable and what
is conceivable. Probabilistic thinking asks "what is likely?" — it anchors you in base rates and
evidence. Possibility thinking asks "what is conceivable, even if unlikely?" — it frees you to explore
transformations and collapses that most people dismiss.
The gap between these two is where the most interesting scenarios live. They are plausible enough to
prepare for, but surprising enough that most people have not considered them. A scenario that is merely
probable is already priced in — everyone expects it, so it offers no strategic advantage. A scenario
that is merely conceivable but implausible is fantasy — it does not warrant resources. The sweet spot
is the territory in between: scenarios that sound surprising when you first hear them, but once you trace
their causal logic, seem almost obvious in retrospect.
Practical technique: After building your scenarios, identify which ones sit in this gap — more likely than
most people assume, but less expected than the base rate. These are your highest-value scenarios. They
deserve the most development because they represent genuine strategic insight: futures that others are
not preparing for.
The Methodologies
This skill draws on fifteen methodologies, organized into five functional groups. Each serves a distinct
purpose in the scenario-building process. You do not need to use all fifteen for every "what if?" — the
process section below guides you on which to apply based on the question's scope and depth.
Group A: Grounding & Orientation
Methods that anchor your thinking before you start exploring.
1. Base Rate Negation Check
Before getting creative, ask: "In similar historical cases, what actually happened most of the time?"
This is your reality anchor. Most "what if?" conversations drift toward the unusual and dramatic because
the unusual is more interesting to think about. The base rate check fights this tendency.
How to apply it:
- Identify the closest historical analogues to the scenario's subject
- Determine what happened in 70-80% of those cases — that is the base rate
- State the base rate explicitly before exploring deviations
- Every creative scenario you build afterward should be measured against: "Is this more or less likely
than the base rate?"
The base rate is not a ceiling — unlikely things do happen. But it is a calibration tool. If your
scenario is ten times less likely than the base rate, you should know that, and you should have a
specific reason for exploring it anyway (e.g., the consequences would be catastrophic, so even a 5%
chance matters).
Worked Example:
Question: "What if a major pandemic emerges from permafrost thaw?"
- Historical analogues: 1918 flu, 1957 flu, 1968 flu, 2009 H1N1, COVID-19
- In the last 100 years, ~5 pandemics significant enough to cause global disruption
- Base rate: roughly 1 every 20 years, or ~5% in any given year
- Of these, NONE emerged from permafrost thaw specifically — zoonotic spillover from live animal
markets and agricultural settings is the dominant pattern (4 of 5)
- The base rate for "pandemic from permafrost" specifically is <1% per year
Negation check: The more likely scenario (by base rate) is another zoonotic spillover from
conventional sources. If you are preparing for a permafrost pandemic, you should FIRST prepare for
the base-rate pandemic, and then ask whether permafrost-specific preparations add marginal value.
When to override the base rate: If permafrost thaw is accelerating nonlinearly AND ancient pathogens
have no modern immunity AND arctic access is increasing, the tail risk may deserve disproportionate
attention despite low base rate.
Advanced Technique — Base Rate Decomposition: Break a compound event into independent sub-events and
multiply probabilities: P(pandemic from permafrost) = P(thaw releases viable pathogen) × P(human
exposure) × P(human-to-human transmission) × P(global spread). Each factor has its own base rate,
and the compound probability is usually much lower than intuition suggests.
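The decomposition arithmetic can be sketched directly. In this minimal example, every probability is an illustrative placeholder, not a real estimate; only the multiplication structure is the point:

```python
# Base-rate decomposition: treat the compound event as a chain of
# (assumed independent) sub-events and multiply their annual probabilities.
# Every number below is an illustrative placeholder, not a real estimate.

factors = {
    "thaw releases viable pathogen": 0.10,
    "human exposure": 0.05,
    "human-to-human transmission": 0.10,
    "global spread": 0.20,
}

compound = 1.0
for name, p in factors.items():
    compound *= p

print(f"P(pandemic from permafrost) per year: {compound:.6f}")
```

Even with generous sub-event probabilities, the product here is 0.0001 per year, far below what intuition about any single factor suggests, which is exactly what the technique is designed to reveal.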
Advanced Technique — The Zero-Base-Rate Distinction: When the base rate is zero (no historical
precedent exists), do NOT automatically assume this means the scenario requires massive, world-
restructuring changes. Distinguish between two very different reasons a base rate can be zero:
- Structural Impossibility — The scenario violates known physical laws or fundamental constraints.
Examples: faster-than-light travel, perpetual motion, arthropod respiration at cattle scale under
current atmospheric conditions. When the zero base rate is structural, you MUST identify which laws
you are suspending and cascade the consequences of those changes through the entire scenario world.
If you suspend the square-cube law, you have changed far more than just "ladybugs get big" — you
have changed the physics that governs every structure and organism in the scenario.
- Historical Non-Occurrence — The scenario violates no known physical law; it simply has not
happened yet. Examples: a specific mutation arising, a particular political alignment forming, a
technology being developed along a path not yet taken. The path from zero to the scenario may be
narrow and targeted — a single mutation in tracheal tube rigidity allowing progressive arthropod
size increase over evolutionary time, without requiring different oxygen levels or gravity. The
causal chain from here to the scenario may be long, but each step is individually plausible and
the cascade does NOT necessarily restructure the entire world.
The critical discipline: when the base rate is zero, ask which type of zero it is before deciding
how to proceed. A structural-impossibility zero demands that you re-examine and cascade your premises.
A historical-non-occurrence zero demands only that you construct a plausible pathway — and then
check whether that pathway's side effects alter anything beyond the scenario's focal subject. Sometimes
the answer is "no, the pathway is narrow and the rest of the world can stay largely as it is." Sometimes
the answer is "yes, even a targeted change has cascading implications" — but you should discover
that through analysis, not assume it by default.
Common Pitfalls:
- False analogues: Choosing historical parallels that are superficially similar but structurally
different. "This is like the internet!" is only useful if the structural dynamics are similar, not
just the surface-level disruption.
- Availability bias: Recent or vivid events dominate your analogue selection. COVID-19 will skew
pandemic base rates for a generation, even though it may not be representative.
- Survivorship bias: History records successes and catastrophic failures but misses near-misses.
The base rate of "nuclear weapons used in war since 1945" is zero in the historical record, but
near-misses (the Cuban Missile Crisis, the Stanislav Petrov incident) suggest the true probability
is higher than zero.
- Over-cascading from zero base rates: Assuming that because something has never happened, it
must require a world-restructuring cause. A zero base rate means "not observed," not "structurally
impossible" — unless you can show it IS structurally impossible. Don't over-inflate the scope of
your premise just because the base rate is zero.
2. Causal Layered Analysis (CLA) — Sohail Inayatullah
Instead of just asking "what if X happens?", peel back layers to understand what makes the question
meaningful at all. CLA examines four depth levels:
Litany — The surface event or trend. "What if AI replaces 50% of jobs?" This is where most people
stop — at the headline. The litany is what is visible, discussed, and often sensationalized.
Systemic Causes — The structural factors that make the event possible. What economic, political,
technological, and social systems would have to be in place for AI to replace 50% of jobs? What labor
laws, corporate structures, educational systems, and market dynamics enable or resist this?
Worldview/Discourse — The paradigm that makes this scenario even thinkable. The idea that "jobs can
be replaced" rests on particular assumptions about the nature of work, the relationship between humans
and technology, and what constitutes economic value. Different civilizations at different times would not
even frame the question this way.
Myth/Metaphor — The deep cultural narratives that animate the scenario. The fear of machines replacing
humans draws on myths going back to the Golem, Frankenstein, and the Tower of Babel. These myths are not
ornamental — they shape which futures people find plausible and which they dismiss.
Why this matters: A "what if?" that only operates at the litany level produces shallow, reactive thinking.
A "what if?" that reaches the myth level can reveal why certain futures feel inevitable or impossible,
and open space for genuinely different possibilities.
Worked Example:
Question: "What if universal basic income is adopted globally?"
- Litany: "Everyone gets free money! Poverty ends / inflation explodes / nobody works anymore." —
Headline-level discourse, emotional, polarized, shallow.
- Systemic Causes: Automation reducing labor demand, wealth inequality reaching Gilded Age levels,
administrative complexity of means-tested welfare, declining union power, fiscal capacity of developed
economies, digital payment infrastructure enabling distribution.
- Worldview/Discourse: The libertarian frame (UBI as market freedom), the socialist frame (UBI as
redistribution), the technocratic frame (UBI as automation transition tool), the conservative frame
(UBI as destroying work ethic). Each frame makes UBI mean something fundamentally different.
- Myth/Metaphor: The Garden of Eden (work not necessary), The Fall (loss of meaning from labor
separation), The Mother (state as provider), The Contract (social order as mutual obligation). UBI
activates ALL of these simultaneously, generating visceral reactions across the political spectrum.
Scenario implication: A "what if UBI" scenario that only operates at the litany level will debate
inflation vs. poverty reduction. A CLA-informed scenario will also explore which WORLDVIEW wins —
because libertarian UBI, socialist UBI, and technocratic UBI are three very different futures, even
though the policy instrument is the same.
Advanced Technique — Myth-First CLA: The most common failure mode of CLA is identifying the
myth level and then proceeding to build scenarios at the litany or systemic level, treating the myth
as an interesting footnote. This wastes CLA's deepest power. When you identify the myth that animates
a scenario, try inverting the process: start from the myth and work downward.
If the myth is "the Toxic Mother" (a creature that nurtures and poisons simultaneously), build the
worldview that flows from that myth (the sacred is dangerous, protection comes with a price), then
the systemic structures that worldview produces (rituals of propitiation, pharmacological traditions
built around toxins, social hierarchies based on proximity to the sacred danger), then the litany
(granary pest control, elytra shields, bug-blood medicine). The myth-first approach often produces
scenarios that are surprising yet coherent — they feel like genuine discoveries rather than clever
extrapolations. The litany-first approach tends to produce scenarios that are expected yet decorated —
they reach the same obvious conclusions but with more narrative flair.
Not every scenario benefits from myth-first treatment. Use it when: the scenario involves deep cultural
or psychological responses, when your litany-level scenarios feel shallow or predictable, or when you
are working in Creative or Speculative contexts where mythic resonance is a feature, not a bug.
Common Pitfalls:
- Skipping the myth level because it feels "unscientific." The myth level is often the most
powerful determinant of which futures people find plausible. Ignoring it means you will be
blindsided by cultural dynamics that seem irrational but are deeply patterned.
- Treating levels as independent rather than mutually reinforcing. The myth animates the
worldview, which shapes the discourse, which constrains the systemic options, which determines
the litany.
- Identifying the myth and then ignoring it. Naming the myth without letting it shape the
scenario's structure is worse than not finding it at all — it creates the illusion of depth
without the substance. If you found a myth, let it do work.
3. Analogy Search
Where the Base Rate Negation Check uses analogues for probabilistic calibration ("how likely is this?"),
Analogy Search uses them for causal insight ("how might this unfold?"). The distinction matters: knowing
that 70% of startups fail (Base Rate) tells you about probability; understanding that a particular AI
company's dynamics structurally resemble the early smartphone market (Analogy) tells you about the causal
mechanisms, timing, and failure modes you should watch for.
A structural analogy is a real-world case, historical event, or cross-domain parallel that shares the
same underlying causal dynamics as your scenario, even if the surface details differ entirely. Finding
the right analogy is one of the most powerful moves in scenario work because it allows you to import
hard-won knowledge from one domain into another where that knowledge does not yet exist.
How to apply it:
- After establishing the base rate, ask: "What situation has already played out with similar causal
dynamics?" — not similar surface features, but similar structural mechanics
- Search across domains: a geopolitical scenario might have its best analogy in ecology; a technology
adoption curve might mirror a historical religious movement; a financial bubble might structurally
resemble a forest fire regime
- For each analogy found, extract the causal model: what drove the outcome in the analogue? What
feedback loops were present? What surprised observers?
- Map the analogy onto your scenario: which elements correspond? Which elements differ? Where does
the analogy break down?
- Use multiple analogies: the best scenario work draws on 2-3 structural analogies from different
domains, because each analogy illuminates different aspects of the situation
- When no analogy exists (truly novel situations), fall back to first-principles reasoning: decompose
the scenario into fundamental causal mechanisms (incentives, constraints, feedback loops, network
effects) and reason from those primitives upward
Worked Example:
Question: "What if autonomous vehicles achieve full Level 5 capability within 5 years?"
Base Rate analogues (for probability): Previous autonomous technology timelines (autonomous drones,
industrial robots). Base rate suggests significant delays are common — most autonomous technology
milestones have taken 2-3x longer than initial projections.
Structural analogies (for causal insight):
- Smartphone adoption (2007-2015): A transformative personal technology that required
infrastructure buildout, regulatory adaptation, and behavioral change. The analogy suggests:
even perfect technology takes 5-8 years to reach mainstream adoption because the ecosystem
(maps, insurance, parking, liability law) must co-evolve. Key insight from this analogy: the
technology was ready years before the ecosystem was.
- Elevator automation (1900-1945): When elevator operators were replaced by automated systems,
public fear was the primary barrier, not technology. People refused to ride driverless elevators
until trust-building mechanisms (emergency phones, visible safety features) were added. The
analogy suggests: the barrier to AV adoption may not be capability but trust, and trust requires
specific design interventions, not just better performance.
- Horse-to-automobile transition (1900-1930): A transportation mode change that took 30 years,
required entirely new infrastructure (roads, gas stations, traffic laws), and coexisted with the
old mode for decades. The analogy suggests: AVs and human-driven vehicles will share roads for
much longer than technologists assume, and the mixed-traffic period is the most dangerous phase.
Each analogy reveals a different causal dimension that the others miss. Together, they paint a richer
picture than any single analogy could.
Common Pitfalls:
- Surface analogies. "This is like the internet!" is only useful if the structural dynamics
(network effects, adoption barriers, regulatory response) are similar, not just the level of
disruption. Most "this changes everything" analogies are surface-level and misleading.
- Single-analogy fixation. One analogy inevitably distorts — it over-emphasizes the features
that match and hides the ones that differ. Use at least two analogies from different domains.
- Analogy as proof. An analogy is a source of hypotheses, not evidence. "This is like X, and X
did Y" does not mean Y will happen — it means Y is worth investigating as a possibility.
Group B: Structural Construction
Methods for systematically building out the scenario space.
4. Schwartz Eight-Step Scenario Method
The backbone process for structured scenario construction, from Peter Schwartz's The Art of the Long
View (1991):
- Identify the focal question — What decision, concern, or curiosity is driving this exploration?
- Identify key driving forces — What forces in the environment will shape the answer? (economic,
technological, social, political, environmental)
- Rank by importance and uncertainty — Which forces matter most? Among those, which are most
uncertain? The intersection of high importance and high uncertainty is where scenarios live.
- Select scenario logic — Choose the 2-3 most critical uncertainties as axes that define different
future worlds.
- Flesh out scenario narratives — Build detailed, internally consistent stories for each quadrant
or combination.
- Analyze implications — What does each scenario mean for the focal question? What decisions would
be good in one scenario but bad in another?
- Identify early indicators — What observable signals would tell you which scenario is beginning
to unfold?
- Revisit and refine — Scenarios are living documents, not one-time exercises.
Worked Example:
Focal Question: "What will the global energy system look like in 2040?"
Step 2 — Key Driving Forces: Climate policy stringency, cost trajectory of renewables, energy storage
breakthroughs, geopolitical stability of fossil fuel regions, electrification of transport, nuclear
energy renaissance, carbon capture viability, energy demand growth in developing economies, grid
modernization pace, public acceptance of nuclear/fracking.
Step 3 — Ranking (Importance × Uncertainty): Climate policy stringency (Very High × Very High = ★★★★★),
Energy storage breakthroughs (Very High × Very High = ★★★★★), Geopolitical stability (High × High = ★★★★),
Nuclear renaissance (Medium × Very High = ★★★★), Cost trajectory of renewables (High × Low = ★★★).
Step 4 — Scenario Axes: Axis 1: Climate policy stringency (Weak → Strong), Axis 2: Energy storage
breakthrough (Incremental → Revolutionary). This gives four distinct worlds to explore.
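The Step 3 ranking can be made mechanical with a small sketch. The forces and ratings below echo the worked example; the ordinal 1-4 mapping is an assumption introduced purely for illustration:

```python
# Rank driving forces by importance x uncertainty (Schwartz, Step 3).
# Ordinal stand-ins for the qualitative ratings; the mapping is arbitrary.
LEVELS = {"Low": 1, "Medium": 2, "High": 3, "Very High": 4}

forces = [
    ("Climate policy stringency", "Very High", "Very High"),
    ("Energy storage breakthroughs", "Very High", "Very High"),
    ("Geopolitical stability", "High", "High"),
    ("Nuclear renaissance", "Medium", "Very High"),
    ("Cost trajectory of renewables", "High", "Low"),
]

# Sort descending by the product of importance and uncertainty.
ranked = sorted(forces, key=lambda f: LEVELS[f[1]] * LEVELS[f[2]], reverse=True)

for name, imp, unc in ranked:
    print(f"{name}: {LEVELS[imp] * LEVELS[unc]}")
```

The top-ranked forces become the candidate axes for Step 4; low-uncertainty forces like renewable cost drop out even when important, because they belong in every scenario rather than distinguishing between scenarios.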
Common Pitfalls:
- Choosing axes that are not independent. If your two uncertainties are "climate policy" and
"renewable cost," they may be correlated (policy drives investment drives cost reduction). Look
for genuinely independent uncertainties.
- Too many axes. Three axes give eight scenarios — that is usually too many to develop fully.
Stick with two axes unless you have a very large team.
5. Intuitive Logics (2x2 Matrix Method)
Popularized by the Global Business Network, this is the most widely used scenario construction technique:
- Select two critical uncertainties (identified in Step 3 of Schwartz's method above)
- Plot them as axes on a 2x2 matrix
- Each quadrant becomes a distinct scenario
- Name each scenario memorably — the name should instantly evoke the world it describes
- Develop each into a narrative of 300-600 words
The 2x2 is not the only way to organize scenarios, but it is the most practical because it forces
you to focus on what matters most and naturally produces diversity (four very different worlds).
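The crossing step itself is trivially mechanical, which is part of its appeal. A minimal sketch, using the energy axes from the Schwartz example above (the pole labels are the only real inputs):

```python
from itertools import product

# Two critical uncertainties, each reduced to its two poles.
climate_policy = ["Weak climate policy", "Strong climate policy"]
storage = ["Incremental storage", "Revolutionary storage"]

# Crossing the axes yields exactly four scenario skeletons to name and develop.
quadrants = list(product(climate_policy, storage))
for i, (policy, tech) in enumerate(quadrants, start=1):
    print(f"Scenario {i}: {policy} x {tech}")
```

The mechanical part ends here; the insight lies in naming each quadrant and developing it into a narrative.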
Advanced Technique — Naming Scenarios:
Good scenario names are: Evocative (create an immediate mental image), Distinctive (never confused
with another), Directional (hint at the scenario's character).
Bad names: "Scenario A," "High Policy / Low Tech," "The Moderate Future"
Good names: "Green Thunder," "Slow Burn," "Fractured Dawn," "The Long Tailpipe"
Common Pitfall: The 2x2 can create four scenarios that feel artificial — like they were generated
by a matrix rather than by genuine insight. The matrix is a starting point, not a cage. If one quadrant
produces a boring or implausible scenario, adjust the axes or merge quadrants. The goal is insight,
not completeness.
6. Manoa Method (Four Archetypes)
Developed by Jim Dator at the Hawaii Research Center for Futures Studies. This method ensures you
do not just produce variations of "continued growth" by mandating four archetypal futures:
- Continued Growth — Current trends continue and intensify. More of what we have now.
- Collapse — Key systems break down. This is not pessimism — it is the historical norm for
civilizations. What specific systems collapse, and why?
- Discipline — Values shift toward constraint, stewardship, and collective limits. Growth is
deliberately restrained in favor of stability or equity.
- Transformation — Fundamental change in how the system operates. Not just more or less of the
same, but a qualitative shift — a new paradigm.
For any "what if?" question, generating all four archetypes prevents the most common failure mode:
producing three scenarios that are all variations on "things get somewhat better or somewhat worse."
Worked Example: "What if brain-computer interfaces become mainstream?"
Continued Growth: BCI adoption follows the smartphone curve. Early adopters → tech professionals →
mainstream → universal. Benefits accumulate incrementally: better communication, medical applications,
enhanced productivity. Social structures adapt gradually. New norms emerge around "cognitive privacy."
The fundamental relationship between humans and technology does not change — it deepens.
Collapse: BCI creates catastrophic vulnerabilities. Hacking becomes hacking of minds. A major neural
security breach causes mass psychological harm. Public trust collapses. Regulation becomes draconian.
The technology is banned or severely restricted, creating a black market that causes further harm.
Discipline: After early enthusiasm, societies collectively decide the risks outweigh the benefits.
Strict regulatory frameworks limit BCI to medical applications. Enhancement is banned. International
treaties govern neural data. A "cognitive sovereignty" movement argues the mind must remain a private,
uninstrumented space. BCI exists but is carefully bounded.
Transformation: BCI does not just enhance existing capabilities — it fundamentally changes what it
means to be human. Shared consciousness emerges. Individual identity becomes fluid. The distinction
between "my thoughts" and "network thoughts" dissolves. New forms of organization, creativity, and
social life emerge that were literally inconceivable before. The question "what if BCI becomes
mainstream?" turns out to have been the wrong question — because "mainstream" implies the same humans
doing the same things with a new tool, when in fact the tool transforms the humans.
Key Insight: The Transformation archetype is the hardest to imagine and the most important. Most
scenario work produces variations on Growth and Collapse, with maybe a nod to Discipline.
Transformation is the one that is almost always missing — and it is often the one that actually happens.
7. Morphological Analysis — Fritz Zwicky
For scenarios with many independent variables, break the situation into key dimensions and systematically
explore combinations:
- Define 4-7 key dimensions of the problem (e.g., economy type, governance model, technology level,
social cohesion, environmental condition)
- List 3-5 possible states for each dimension
- Systematically combine states across dimensions to generate scenarios
- Filter for internal consistency (some combinations are self-contradictory)
- Select the most interesting and revealing combinations for full development
This method is particularly powerful for complex, non-quantifiable problems where traditional modeling
fails. It prevents you from fixating on the most obvious combinations and reveals unexpected but coherent
possibilities.
Worked Example:
Question: "What if a new global governance system emerges?"
| Dimension | State 1 | State 2 | State 3 |
|---|---|---|---|
| Power Structure | Unipolar | Multipolar | Networked |
| Legitimacy Source | Military/economic | Democratic consent | Performance/deliverables |
| Sovereignty Model | Westphalian | Supranational | Post-sovereign |
| Decision Speed | Slow/consensus | Moderate/delegated | Fast/automated |
| Technology Role | Tool | Platform | Governor |
This generates 3^5 = 243 possible combinations. Most are internally inconsistent (e.g., Networked
power + Slow/consensus decisions). Filtering for consistency might yield 15-20 viable scenarios,
from which you select the 4-6 most revealing for full development.
Surprising combination: Networked power + Performance legitimacy + Post-sovereign + Fast/automated
decisions + Technology as governor = A future where AI-driven governance platforms compete for "citizens"
based on outcome metrics, sovereignty is a relic, and the most successful governance systems are the
ones that deliver results fastest. This is not a scenario most people would generate without the
morphological method, but it is internally consistent and thought-provoking.
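The generate-then-filter mechanics can be sketched in a few lines. The dimensions and states are copied from the governance example; the two consistency rules are illustrative assumptions, not an exhaustive filter:

```python
from itertools import product

# Morphological analysis: enumerate all state combinations, then filter
# with explicit consistency rules. Rules below are illustrative only.
dimensions = {
    "power": ["Unipolar", "Multipolar", "Networked"],
    "legitimacy": ["Military/economic", "Democratic consent", "Performance"],
    "sovereignty": ["Westphalian", "Supranational", "Post-sovereign"],
    "speed": ["Slow/consensus", "Moderate/delegated", "Fast/automated"],
    "technology": ["Tool", "Platform", "Governor"],
}

def consistent(combo: dict) -> bool:
    # Example rule: networked power cannot run on slow consensus decisions.
    if combo["power"] == "Networked" and combo["speed"] == "Slow/consensus":
        return False
    # Example rule: technology-as-governor implies automated decision speed.
    if combo["technology"] == "Governor" and combo["speed"] != "Fast/automated":
        return False
    return True

names = list(dimensions)
combos = [dict(zip(names, states)) for states in product(*dimensions.values())]
viable = [c for c in combos if consistent(c)]
print(f"{len(combos)} raw combinations, {len(viable)} pass the filter")
```

Even two rough rules prune 243 raw combinations down to 171; a real filter with more rules would do most of the narrowing, leaving your judgment to pick the handful worth full development.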
Common Pitfalls:
- Too many dimensions. With 7 dimensions and 4 states each, you get 4^7 = 16,384 combinations.
Keep it to 4-6 dimensions with 2-4 states each.
- Treating all combinations as equally plausible. The method generates the space; your judgment
filters it. Do not be mechanically exhaustive.
Group C: Interaction & Dynamics
Methods for understanding how variables influence each other and how scenarios evolve over time.
Temporal Cascade Framing: When tracing how a scenario unfolds, structure your analysis across three
temporal bands rather than treating time as uniform:
- Immediate ripples (days to months): The direct, largely predictable first-order consequences.
These are usually visible to anyone paying attention and rarely contain genuine surprises.
- Feedback loops (months to years): Second-order effects where consequences alter the conditions
that produced them, creating reinforcing or dampening dynamics. This is where scenarios begin to
diverge significantly from linear projections, because the feedback can amplify or suppress the
initial change in non-obvious ways.
- New equilibria (years to decades): The long-term stable states or collapse points that emerge
after feedback loops have played out. These are often qualitatively different from the starting
conditions — not just "more" or "less" of the original situation, but a genuinely new configuration.
This framing is complementary to Futures Wheels (which traces consequence depth) and Cross-Impact
Analysis (which traces interaction breadth). Temporal Cascades add a time axis: they tell you WHEN
effects manifest, not just WHAT they are or HOW they interact. A scenario that looks alarming in the
immediate ripples phase may self-correct through dampening feedback. A scenario that looks benign
immediately may accumulate reinforcing feedback that drives it toward a very different equilibrium.
Practical technique: After building your scenarios, explicitly label each major consequence with its
likely temporal band. If all your consequences cluster in one band, you are missing dynamics in the
other two. The most important insights usually come from the feedback loop phase — that is where
both the biggest surprises and the most effective intervention points live.
The Inversion Test: For each scenario, check whether its character inverts across temporal bands.
A scenario that looks equally benign (or equally catastrophic) in all three bands is probably
under-developed. The richest scenarios show a dramatic shift: a benign immediate phase that accumulates
reinforcing feedback toward a catastrophic equilibrium (the "slow trap"), or an alarming immediate
phase that triggers dampening feedback leading to a stable, healthier equilibrium (the "corrective
shock"). If your scenario does not invert across any temporal band, you may be extrapolating linearly
rather than tracing feedback dynamics.
8. Cross-Impact Analysis
Map how different "what if" variables influence each other. A change in one factor can make another
more or less likely, creating cascading chains that linear thinking misses.
How to apply it:
- List all key variables/developments in your scenario (6-10 is usually manageable)
- For each pair, ask: "If A occurs, does it make B more likely, less likely, or have no effect?"
- Build a cross-impact matrix showing these relationships — build it explicitly, not just
mentally. For complex scenarios with many interacting variables, narrating chains without
constructing the matrix is the most common failure mode. The matrix reveals interactions you
will miss if you only think linearly: a variable that is insignificant in isolation but becomes
an amplifier when combined with another.
- Identify chains (A→B→C→D) and feedback loops (A→B→A)
- Pay special attention to "amplifiers" (factors that make many other things more likely) and
"dampeners" (factors that suppress other developments)
- Interaction surprises: The highest-value insights from Cross-Impact Analysis come from
variable interactions that neither variable alone would suggest. If your analysis only produces
chains that any informed person could predict, you have not gone deep enough. Push for the
non-obvious interaction: "A alone does X, B alone does Y, but A+B together produce Z — which
is qualitatively different from either X or Y."
This method reveals that scenarios are not collections of independent events but interconnected systems
where one change propagates through the entire structure.
Worked Example: "What if AI achieves AGI?"
| Event → Effect | AGI achieved | AI regulation | Mass unemployment | UBI adopted | Military AI race |
|---|---|---|---|---|---|
| AGI achieved | — | +3 | +3 | +2 | +3 |
| AI regulation | -2 | — | -1 | +1 | -1 |
| Mass unemployment | +1 | +3 | — | +3 | 0 |
| UBI adopted | 0 | +1 | -2 | — | -1 |
| Military AI race | +2 | -2 | +1 | 0 | — |
Scale: -3 (strongly decreases likelihood) to +3 (strongly increases likelihood)
Key insight: AGI achievement strongly increases the probability of military AI race (+3), which in turn
further accelerates AGI development (+2), creating a reinforcing feedback loop. Meanwhile, regulation
decreases AGI probability (-2) but AGI increases regulation probability (+3), creating a chasing dynamic
where regulation is always reactive.
Cascading chain: AGI → Mass unemployment → UBI demand → Political restructuring → New regulation →
Slowed AI progress → Economic adjustment → Reduced unemployment → Reduced UBI pressure → Deregulation
→ Accelerated AI progress → [loop]. This chain suggests the system may oscillate rather than converge.
Common Pitfalls:
- Treating cross-impacts as symmetric. "A makes B more likely" does not mean "B makes A more
likely" with the same strength. AGI makes regulation more likely, but regulation only makes AGI
somewhat less likely — asymmetric relationship.
- Ignoring time delays. Some cross-impacts are immediate (AGI → stock market crash), while
others take years (AGI → educational system reform). A cross-impact matrix without time
consideration misses dynamics like overshoot and oscillation.
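The matrix above is small enough to analyze mechanically. This sketch encodes the worked example's illustrative scores (not real estimates) and extracts the two deliverables named earlier: amplifiers and reinforcing feedback loops.

```python
# Cross-Impact Analysis sketch using the AGI example matrix above.
# impact[i][j]: how much event i shifts event j's likelihood (-3..+3).
events = ["AGI", "Regulation", "Unemployment", "UBI", "Military race"]
impact = [
    [ 0, +3, +3, +2, +3],   # AGI achieved
    [-2,  0, -1, +1, -1],   # AI regulation
    [+1, +3,  0, +3,  0],   # Mass unemployment
    [ 0, +1, -2,  0, -1],   # UBI adopted
    [+2, -2, +1,  0,  0],   # Military AI race
]

# Amplifiers: events whose total outgoing impact is most strongly positive.
amplifiers = sorted(events, key=lambda e: -sum(impact[events.index(e)]))

# Reinforcing feedback loops: pairs where both directions are positive
# (A makes B more likely AND B makes A more likely).
loops = [
    (a, b)
    for i, a in enumerate(events)
    for j, b in enumerate(events)
    if i < j and impact[i][j] > 0 and impact[j][i] > 0
]

print("strongest amplifier:", amplifiers[0])
print("reinforcing loops:", loops)
```

Even on this tiny matrix, the mechanical pass surfaces the AGI/military-race loop described above, plus a less obvious regulation/UBI pairing.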
9. Monte Carlo Thinking (Conceptual)
For scenarios involving probabilistic outcomes, mentally simulate many possible runs:
- Identify the key probabilistic variables and their approximate probability distributions
- Imagine running the scenario 1000 times — in what fraction of runs does each outcome occur?
- Focus on the range of outcomes, not just the average
- Identify "tail risks" — unlikely but high-impact outcomes that dominate the expected value
- Ask: "What would have to be true for the 95th-percentile bad outcome? The 95th-percentile good one?"
You do not need actual Monte Carlo simulation — the conceptual exercise of thinking in distributions
rather than point estimates is what matters. It prevents the common error of treating uncertain
variables as if they had single values.
Worked Example:
Question: "What if we invest $50M in a new technology that has a 30% chance of success?"
Point-estimate thinking: "30% chance, so it will probably fail." This is wrong, not because the
probability is off, but because it discards the payoff distribution entirely.
Monte Carlo thinking: Imagine 1000 parallel universes:
- In ~300 of them, the technology succeeds
- In those 300, the returns follow a distribution: maybe 50 return 10x, 100 return 3x, 150 return 1x
- In the 700 failures, most lose the full $50M, but maybe 200 lose only $20M (partial recovery)
- Expected value = (50 × $500M + 100 × $150M + 150 × $50M + 200 × -$20M + 500 × -$50M) / 1000 = +$18.5M
The key insight is not the calculation — it is thinking in distributions rather than single outcomes.
The question is not "will it succeed?" but "across all possible outcomes, what is the shape of the
distribution, and does the right tail compensate for the left tail?"
Tail risk identification: What if the failure mode is not just losing $50M but being locked into a
technological dead end that costs $200M in missed opportunities? That tail risk dominates the entire
calculation, even at 5% probability.
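The thousand-universes exercise can be simulated in a few lines. Every number below comes from the worked example's assumed buckets; the probabilities are its illustrative guesses, not estimates.

```python
import random

random.seed(42)  # reproducible sketch

def simulate_one():
    """One hypothetical run of the $50M bet, using the example's buckets."""
    if random.random() < 0.30:           # ~300/1000 succeed
        r = random.random()
        if r < 50 / 300:                 # ~50 runs return 10x ($500M)
            return 500.0
        if r < 150 / 300:                # ~100 runs return 3x ($150M)
            return 150.0
        return 50.0                      # ~150 runs return 1x ($50M)
    if random.random() < 200 / 700:      # ~200 failures partially recover
        return -20.0
    return -50.0                         # ~500 failures lose everything

runs = sorted(simulate_one() for _ in range(100_000))
mean = sum(runs) / len(runs)
p5 = runs[len(runs) // 20]               # 5th-percentile outcome
print(f"expected value ~ ${mean:.1f}M; 5th percentile = ${p5:.0f}M")
```

The mean lands near the analytic +$18.5M, but the 5th percentile is a total loss: the distribution's shape, not its average, is what the decision turns on.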
10. Agent-Based Thinking (Conceptual)
Model how individual actors with their own rules and incentives would interact to produce emergent
outcomes:
- Identify the key actor types (e.g., governments, corporations, citizens, AI systems)
- Define each actor's: goals, constraints, information available, decision rules
- Imagine how these actors would respond to the scenario's triggering event
- Trace the second-order effects: Actor A's response changes Actor B's situation, which changes
Actor B's response, etc.
- Look for emergent outcomes that no single actor intended or predicted
This is particularly valuable for scenarios involving strategic interaction (game theory situations),
social dynamics, or any system where the aggregate outcome is not simply the sum of individual choices.
Worked Example:
Question: "What if carbon taxes are imposed at $100/ton globally?"
Actors and their rules:
- Fossil fuel companies: Goal = maximize shareholder value. Rule = shift to most profitable energy
source given the tax. Will lobby against the tax, but if it passes, will pivot rapidly.
- Developing nations: Goal = economic growth. Rule = resist anything that slows growth, but accept
aid/compensation tied to compliance. Will seek exemptions, then use exemption as competitive advantage.
- Consumers: Goal = maintain lifestyle at low cost. Rule = choose cheapest option within existing
habits. Will change behavior when price differential exceeds habit inertia.
- Green tech startups: Goal = scale. Rule = use policy tailwinds to accelerate. Will overstate
readiness to capture subsidies, leading to potential bubbles.
Emergent outcome: Fossil fuel companies pivot to "green" branding while lobbying for loopholes.
Developing nations accept carbon tax revenue but invest in fossil fuels for domestic use. Consumers
shift slowly due to habit inertia, creating a long demand tail. Green tech overbuilding leads to a
consolidation crash. The NET effect is slower emissions reduction than the tax's proponents predicted,
not because the tax fails, but because each actor responds rationally from their own perspective and
the aggregate is suboptimal — a classic collective action problem.
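A toy simulation can make the "rational actors, suboptimal aggregate" dynamic concrete. Everything below, from the decision rules to the loophole strength and the inertia threshold, is an invented illustration of the carbon-tax example, not a calibrated model.

```python
# Toy agent-based sketch of the $100/ton carbon-tax scenario.
# All rules and numbers are illustrative assumptions.

def step(state, tax):
    """One year: each actor type applies its own simple decision rule."""
    s = dict(state)
    # Fossil fuel companies: lobby for loopholes that erode over time,
    # weakening the effective tax in early years.
    loophole = max(0.0, 0.5 - 0.05 * s["year"])
    effective_tax = tax * (1 - loophole)
    # Consumers: switch only when the price signal exceeds habit inertia.
    switch_rate = max(0.0, 0.10 * (effective_tax / 100 - s["inertia"]))
    s["clean_share"] = min(1.0, s["clean_share"] + switch_rate)
    # Green tech startups: expand once the effective tax bites, shrink before.
    s["green_capacity"] *= 1.3 if effective_tax > 60 else 0.9
    # Emergent aggregate: emissions fall only as fast as consumers switch.
    s["emissions"] *= 1 - 0.5 * switch_rate
    s["year"] += 1
    return s

state = {"year": 0, "clean_share": 0.2, "inertia": 0.3,
         "green_capacity": 1.0, "emissions": 100.0}
for _ in range(10):
    state = step(state, tax=100.0)

print(round(state["emissions"], 1), round(state["clean_share"], 2))
```

After a decade of a nominal $100/ton tax, emissions in this toy world have fallen roughly 20%, far less than a naive "tax means instant substitution" model predicts, because each actor's rule dampens the others.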
Group D: Consequence Tracing
Methods for tracing how a single change ripples outward through time and causality.
11. Counterfactual History
Take a specific point of divergence in an actual historical situation and trace the ripple forward.
This is not the same as Cross-Impact Analysis (which maps pairwise interactions between abstract
variables). Counterfactual History is grounded in real, concrete causality chains — you start from a
real historical turning point, change one thing, and rigorously follow the consequences through the
actual web of events that existed at that time.
Why this is distinct: Cross-Impact Analysis goes wide (many variables, many interactions). Counterfactual
History goes deep (one change, full causal chain traced through a specific historical context). They are
complementary — one maps breadth, the other maps depth.
How to apply it:
- Identify the specific historical turning point you want to alter
- Establish what actually happened and why (the causal chain as it occurred)
- Change exactly one variable — be precise about what differs and what stays the same
- Trace consequences through the existing causal structure: who would have acted differently, what
decisions would have changed, what events would or would not have occurred
- Maintain causal discipline: do not introduce changes that your single altered variable could not
plausibly produce
- Check: are you smuggling in changes beyond your one alteration? If so, narrow the counterfactual
The power of this method is that it forces you to understand the actual causal structure of a situation
before you can meaningfully alter it. The discipline of changing only one variable reveals which factors
were truly load-bearing and which were incidental.
Worked Example:
Question: "What if the Soviet Union had not collapsed in 1991?"
The actual causal chain: Economic stagnation + oil price collapse + Gorbachev's reforms (glasnost/perestroika)
→ nationalist movements in Baltic republics → hardliner coup attempt → Yeltsin's rise → dissolution.
Change one variable: Gorbachev does not introduce glasnost. Perestroika (economic reform) proceeds, but
political liberalization is withheld.
Consequence tracing:
- Without glasnost, nationalist movements in the Baltic republics cannot organize openly → no public
independence declarations in 1989-1990
- The hardliner coup never occurs (it was a reaction to the perceived chaos of openness) → no August 1991 crisis
- But: economic stagnation continues because perestroika without glasnost cannot address the root
problems (corruption, inefficiency) that political openness would have exposed
- Oil revenues remain low through the 1990s → fiscal crisis deepens
- The Soviet system enters a prolonged stagnation more like the Brezhnev era than a collapse
- China watches and learns: economic reform without political liberalization IS viable (reinforcing
the Chinese Communist Party's strategy)
- No NATO expansion (no vacuum to fill) → very different US-Russia and US-China dynamics
- The internet still arrives → eventually the information monopoly erodes, but more slowly and on a
different timeline
Key insight: The counterfactual reveals that Soviet collapse was not inevitable — it required a specific
sequence of choices and reactions. But it also shows that without political reform, the economic problems
would have continued to fester, producing a different kind of crisis (slow rot rather than sudden collapse).
Common Pitfalls:
- Changing more than one variable. If you alter both glasnost AND oil prices, you have two
counterfactuals, not one — and you can no longer trace which change produced which consequence.
- Presentism. Imposing modern knowledge or values on historical actors. People in the past did not
know what would happen next. Your counterfactual must respect their information and incentives at the time.
- Determinism in disguise. Building a counterfactual that "just so happens" to reach the outcome you
want. Good counterfactuals are as surprised by their own conclusions as the reader is.
12. Futures Wheels
A visual technique for mapping cascading consequences from a single change through concentric rings
of depth. Where Cross-Impact Analysis maps breadth of interactions (many variables, pairwise), Futures
Wheels map depth of consequences from a single change — direct effects, then indirect effects, then
tertiary effects, and so on.
How to apply it:
- Write the triggering change in the center of the wheel
- First ring (direct consequences): What happens immediately and directly as a result of this change?
List 4-8 direct effects.
- Second ring (indirect consequences): For each direct effect, what does IT cause? List 2-4 indirect
effects per direct effect.
- Third ring (tertiary consequences): For the most important indirect effects, what do THEY cause?
- Continue until the effects become too remote or too uncertain to trace meaningfully
- Look for: convergence (multiple chains leading to the same outcome), feedback loops (a consequence
that circles back to amplify or dampen the original change), and surprise (tertiary effects that are
more significant than direct ones)
The visual structure is important — it reveals patterns that a linear list cannot. When multiple chains
converge on the same outcome, that outcome is more robust. When a tertiary effect is bigger than the
direct effects, you have found a second-order surprise that most people will miss.
Worked Example:
Trigger: "Cheap ambient-temperature seawater desalination becomes available"
First ring (direct): Abundant fresh water in coastal areas; Brine discharge into oceans; Desalination
industry boom; Water-intensive agriculture expands near coasts; Property values rise in arid coastal regions
Second ring (indirect): Abundant coastal water → inland migration to coasts; Brine accumulation →
marine ecosystem disruption; New agricultural zones compete with traditional farmland; Water-dependent
industries relocate; Reduced demand for water pipelines/aqueducts
Third ring (tertiary): Coastal population boom → coastal real estate speculation and inequality; Fishery
collapse from brine → food security crisis in fishing-dependent nations; Inland agricultural
regions economically depressed → political conflict between coastal and inland regions; Water pipeline
infrastructure becomes stranded assets; Geopolitical leverage shifts from water-scarce to water-abundant nations
Key insight: The tertiary effects are far more significant than the direct effects. The direct effect
is "more water" — but the tertiary effects include geopolitical realignment, stranded infrastructure,
and political conflict between regions. Futures Wheels reveal that the most important consequences of
a change are often invisible if you only look one step ahead.
Common Pitfalls:
- Stopping at the first ring. Most people can see direct consequences. The value of Futures Wheels
is in the second and third rings — that is where the surprises live.
- Forgetting negative and positive feedback. Some consequences amplify the original change (cheap
water → more agriculture → more water demand → more desalination → more brine → more ecosystem
damage → less fish → more demand for agriculture → more water demand). Others dampen it. Map both.
- Treating all branches as equally likely. Some second-ring effects are near-certain; others are
speculative. Weight your wheel accordingly.
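A Futures Wheel is just a directed graph, so ring depth and convergence can be checked mechanically. This sketch uses a simplified subset of the desalination example; the edge list is illustrative.

```python
# Futures Wheel as a cause → consequence graph; ring = distance from trigger.
from collections import deque

wheel = {
    "cheap desalination": ["abundant coastal water", "brine discharge"],
    "abundant coastal water": ["coastal migration", "coastal agriculture"],
    "brine discharge": ["marine ecosystem disruption"],
    "coastal migration": ["coastal real estate boom", "regional conflict"],
    "coastal agriculture": ["inland farm decline"],
    "marine ecosystem disruption": ["fishery collapse"],
    "fishery collapse": ["regional conflict"],      # second chain converges
    "inland farm decline": ["regional conflict"],   # third chain converges
}

# Breadth-first walk: ring number = consequence depth from the trigger.
ring, paths_into = {"cheap desalination": 0}, {}
queue = deque(["cheap desalination"])
while queue:
    node = queue.popleft()
    for effect in wheel.get(node, []):
        paths_into[effect] = paths_into.get(effect, 0) + 1
        if effect not in ring:
            ring[effect] = ring[node] + 1
            queue.append(effect)

# Convergence: outcomes reached by multiple independent chains are robust.
robust = [n for n, k in paths_into.items() if k > 1]
print("third-ring effects:", [n for n, r in ring.items() if r == 3])
print("convergent outcomes:", robust)
```

Here three separate chains converge on "regional conflict", which is exactly the robustness signal the method asks you to look for.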
Group E: Stress-Testing & Validation
Methods for challenging and strengthening your scenarios.
13. Pre-Mortem
Imagine it is five years from now and your plan or preferred outcome has failed spectacularly. Work
backward to determine what caused the failure.
How to apply it:
- State the assumed successful outcome explicitly
- Travel to the future where it has failed completely
- Generate 5-8 specific, plausible reasons for the failure
- Rank by likelihood and severity
- For each, ask: "What would I have needed to do differently to prevent this?"
The pre-mortem is the dark twin of backcasting. Where backcasting builds a path to success, the
pre-mortem reveals the hidden failure modes that optimism obscures. Use both in tandem for any scenario
with real stakes.
Worked Example:
Plan: "We will transition our company to 100% renewable energy by 2030."
Pre-mortem (5 years later, the plan failed):
- Supply chain disruption: Critical components became unavailable due to geopolitical conflict with
the primary supplier nation. No diversified sourcing.
- Grid interconnection delays: The local utility could not upgrade the grid connection fast enough.
We assumed infrastructure would keep pace with our timeline.
- Cost overrun: Renewable installation costs rose 40% due to materials scarcity and labor shortages.
Budget was based on continued cost declines, not reversals.
- Intermittency problems: We underestimated the storage needed for seasonal variation. Batteries covered
daily cycles but not the 3-week winter lull.
- Organizational resistance: Middle management found workarounds that kept fossil fuel contracts
running because renewable systems were "not ready yet" for critical operations.
Key insight: Failure modes 2 and 4 are external dependencies we cannot control. Failure mode 5 is an
internal culture problem. The pre-mortem reveals that the plan's success depends on things outside our
control more than on our own efforts.
14. Backcasting
Start with a desired (or feared) future state and work backward to determine what would need to happen
to get there:
- Describe the future state in detail — what does the world look like?
- Ask: "What would have to be true one year before this for it to come about?"
- Ask: "What would have to be true two years before?"
- Continue working backward to the present
- Identify the critical junctures — the points where the path could have diverged
- Assess the plausibility of each step
Backcasting is especially powerful for "what if we achieved X?" questions because it transforms a
vague aspiration into a concrete chain of prerequisites. It also reveals hidden dependencies — things
that must be true for the outcome to occur but that you might not have considered.
Worked Example:
Desired future: "Our city has zero traffic fatalities by 2035."
Working backward:
- 2035: Zero traffic fatalities
- 2034: Last-mile safety: all remaining crash-prone intersections rebuilt. Emergency response time
under 3 minutes everywhere.
- 2033: Autonomous vehicle penetration at 80%. Human driving restricted to designated zones.
- 2032: Comprehensive V2X (vehicle-to-everything) communication network operational citywide.
- 2030: All new vehicles sold are autonomous-capable. Speed limits reduced to 30 km/h in mixed-traffic
zones.
- 2028: Major infrastructure redesign: separated lanes for autonomous vehicles, pedestrians, cyclists.
- 2026: Pilot autonomous zones operational. First V2X corridors active.
- Now: Political commitment, budget allocation, public consultation, pilot planning.
Critical junctures:
- 2026-2028: If autonomous zone pilots fail (technology not ready), the whole timeline delays 3-5 years
- 2028-2030: If public rejects speed limit reductions, human-vehicle conflict continues
- 2030-2032: If V2X infrastructure is delayed, autonomous vehicles cannot communicate and safety gains
plateau
Hidden dependency revealed: The entire path assumes autonomous vehicle technology will be reliable enough
for urban deployment by 2030. If this is wrong, there is no alternative path to zero fatalities without
this milestone. This is a critical risk.
15. Red Teaming / Adversarial Thinking
Deliberately adopt the perspective of an adversary, critic, or disconfirming voice:
- Challenge every assumption: "What if the opposite were true?"
- Seek vulnerabilities: "Where is this scenario weakest? What evidence would falsify it?"
- Think like an opponent: "If I wanted this scenario to fail, how would I attack it?"
- Search for disconfirming evidence: "What data would I NOT want to see if this scenario were correct?"
- Apply the "outside view": How have similar scenarios played out historically?
Red teaming is not cynicism — it is intellectual hygiene. A scenario that survives aggressive red
teaming is far more reliable than one that has only been validated by its creators.
Worked Example:
Scenario: "Renewable energy will dominate by 2040."
Red Team challenges:
- Assumption: Renewable costs continue to decline. What if they plateau? Solar panel efficiency is
approaching theoretical limits. Mining for rare earth minerals may become more expensive as the
easiest deposits are depleted. Cost decline is not a law of nature.
- Assumption: Storage solves intermittency. What if battery technology hits fundamental chemistry
limits? Lithium-ion has improved 5x in 30 years; another 5x would still not cover seasonal storage
needs. Alternative chemistries have been "5 years away" for 20 years.
- Assumption: Political will sustains the transition. What if economic recession turns public opinion
against expensive energy transitions? What if petrostates successfully lobby for delays? The energy
transition requires sustained political commitment across multiple election cycles.
- Assumption: The grid can be rebuilt. The scale of grid infrastructure needed is enormous. What if
NIMBYism, permitting delays, and supply chain constraints make the buildout too slow?
- Disconfirming evidence to seek: Are there signs that renewable adoption is slowing in early adopter
countries? Are there grid stability incidents being underreported? Are fossil fuel companies making
investments that suggest they expect long-term viability?
Red Team conclusion: The scenario is plausible but depends on three things breaking right
simultaneously: continued cost declines, storage breakthroughs, and sustained political will. The
probability of all three occurring is lower than the probability of any individual one. This does not
invalidate the scenario, but it suggests the timeline may be longer than advocates expect.
The Unified Process
This is the core workflow. Apply it flexibly — not every "what if?" requires all steps, and the depth
of each step should match the importance of the question.
Phase 1: Ground the Question
Apply: Base Rate Negation Check + CLA + Analogy Search + Divergence Depth Selector
- State the "what if?" question precisely. Not "what about AI?" but "what if transformer-scale AI
models become 100x cheaper to run within 3 years?"
- Set the Divergence Depth: Decide whether this scenario operates under minimal-change or
butterfly-effect rules:
- Minimal-change counterfactual: One variable shifts, everything else stays as close to the
current state as causally possible. Use this for "what if X had been different?" questions
grounded in a specific, real-world context. The discipline here is holding as much constant
as your one change permits.
- Butterfly-effect scenario: The initial change is allowed to cascade freely through the
system, altering other variables that in turn alter still others. Use this for "what if X
happens in the future?" questions where the system has time to reconfigure. The discipline
here is ensuring each cascade step is causally plausible, not just narratively convenient.
- Why this matters: The choice determines how aggressively you trace consequences. A
minimal-change counterfactual that drifts into butterfly-effect territory has lost its rigor
(it changed more than it claimed to). A butterfly-effect scenario that is too timid (only
exploring first-order effects) has left value on the table.
- Run the Base Rate Negation Check:
- What is the closest historical analogue?
- What happened in 70-80% of similar cases?
- State the base rate outcome explicitly.
- Run Analogy Search:
- What situations from other domains share the same underlying causal dynamics?
- Extract causal models from 2-3 structural analogies
- Map each analogy onto your scenario: what corresponds, what differs, where does it break down?
- If no analogy exists (truly novel), decompose into first-principles causal mechanisms
- Run CLA on the question itself:
- Litany: What is the surface-level question people are asking?
- Systemic causes: What structures make this question salient now?
- Worldview: What paradigm frames the question this way? What would a different paradigm ask instead?
- Myth/metaphor: What deep narrative does this scenario tap into?
- Identify your own biases: What do you WANT to be true? What do you FEAR? What expertise are you
lacking? State these honestly — they shape your blind spots.
Output of Phase 1: A grounded, reframed question with explicit base rate, structural analogies,
layered understanding, divergence depth selection, and acknowledged biases.
Phase 2: Map the Possibility Space
Apply: Schwartz Method (Steps 1-4) + Intuitive Logics + Manoa Archetypes
- Identify driving forces (Schwartz Step 2). List 8-15 forces across STEEP categories:
- Social, Technological, Economic, Environmental, Political
- Rank by importance and uncertainty (Schwartz Step 3). Focus on forces that are both highly
important and highly uncertain — these define the scenario space.
- Select 2-3 critical uncertainties as scenario axes (Schwartz Step 4 / Intuitive Logics).
Construct a 2x2 matrix (or 2x3 if three uncertainties).
- Generate the Manoa archetypes for each quadrant:
- What does Continued Growth look like in this quadrant?
- What does Collapse look like?
- What does Discipline look like?
- What does Transformation look like?
You do not need all four for every quadrant — use your judgment on which are most revealing.
But ensure that across all your scenarios, all four archetypes are represented somewhere.
Output of Phase 2: A structured possibility space with 4-8 raw scenario sketches.
Phase 3: Build the Scenarios
Apply: Morphological Analysis + Cross-Impact Analysis + Agent-Based Thinking + Futures Wheels + Counterfactual History + Temporal Cascade Framing
- For each scenario, use Morphological Analysis to ensure dimensional completeness:
- What is the state of each key dimension in this scenario?
- Are the states internally consistent? (If not, adjust or discard)
- Are there surprising but coherent combinations you missed?
- Apply Cross-Impact Analysis:
- How do the variables in each scenario interact?
- Identify cascading chains and feedback loops
- Which variables amplify each other? Which dampen?
- Does the scenario become more or less stable over time as interactions compound?
- Apply Futures Wheels to trace consequence depth:
- For the triggering event of each scenario, map direct → indirect → tertiary effects
- Look for convergence (multiple chains leading to the same outcome) and surprise (tertiary
effects more significant than direct ones)
- Identify feedback loops that amplify or dampen the original change
- Apply Temporal Cascade Framing:
- Immediate ripples: What happens in days to months? (Direct, largely predictable)
- Feedback loops: What happens in months to years? (Self-reinforcing or self-correcting dynamics)
- New equilibria: What happens in years to decades? (Stable states or collapse points)
- If all consequences cluster in one temporal band, you are missing dynamics in the others
- The most important intervention points usually live in the feedback loop phase
- Apply Counterfactual History where applicable:
- If the scenario involves an alternative to a known historical outcome, trace the specific
causal chain from the point of divergence
- Maintain discipline: change only one variable and follow real causal structure
- Use this particularly for "what if X had not happened?" or "what if Y had happened instead?" questions
- Apply Agent-Based Thinking:
- Who are the key actors in this scenario?
- What would each actor do in response to the triggering conditions?
- What emergent outcomes arise from their interaction?
- Are there unintended consequences that no actor planned?
- Agency Uncertainty branching: Identify the decision points where human choice introduces
irreducible uncertainty (see Anti-Patterns). At each such point, do NOT resolve the branching
by picking the most likely branch. Instead, preserve the fork: show what flows from each
plausible choice, and identify what conditions would make each branch more likely. The
branching itself is the insight — it reveals where the future is genuinely open.
- Build the narrative for each scenario (Schwartz Step 5):
- Give each a memorable, evocative name
- Write 300-600 words per scenario
- Include: triggering event, key dynamics, stakeholder experiences, endpoint state
- Make it feel real — include specific, concrete details, not just abstractions
Output of Phase 3: 3-6 fully developed scenario narratives with cross-impact dynamics,
temporal cascade structure, and agent-based emergent properties identified.
Phase 4: Stress-Test and Validate
Apply: Pre-Mortem + Backcasting + Red Teaming + Monte Carlo Thinking
- Run a Pre-Mortem on each scenario:
- For your preferred scenario: "It failed. Why?"
- For your feared scenario: "It happened. What made it inevitable?"
- Generate 3-5 failure causes per scenario
- Identify which failure causes are most preventable
- Apply Backcasting:
- For the most desirable scenario: What specific steps would lead there?
- For the most dangerous scenario: What specific steps would prevent it?
- Identify critical junctures where paths diverge
- Run Red Team analysis:
- Challenge the assumptions of each scenario
- Search for disconfirming evidence
- Identify the weakest point of each scenario
- Apply the outside view: what do similar historical situations suggest?
- Apply Monte Carlo Thinking:
- For each scenario, estimate: If this situation recurred 100 times, how often would this
outcome occur? (Rough probability ranges: <5%, 5-20%, 20-40%, 40-60%, 60-80%, >80%)
- Identify tail risks: the <5% outcomes with outsized consequences
- Check: Does the sum of scenario probabilities roughly equal 100%? If not, you are missing
possibilities or double-counting.
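The coherence check at the end of Phase 4 can be written as a one-liner. The scenario names and midpoint probabilities below are placeholders, not outputs of any real analysis.

```python
# Probability-coherence check: midpoints of the rough ranges should sum to ~1.
# Scenario names and values are hypothetical placeholders.
scenario_probs = {"Scenario A": 0.45, "Scenario B": 0.25,
                  "Scenario C": 0.20, "Scenario D": 0.05}
total = sum(scenario_probs.values())
if not 0.9 <= total <= 1.1:
    print(f"total {total:.2f}: missing possibilities or double-counting")
else:
    print(f"total {total:.2f}: scenario set is roughly exhaustive")
```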
Output of Phase 4: Stress-tested scenarios with identified failure modes, critical junctures,
probability estimates, and known vulnerabilities.
Phase 5: Synthesize and Deliver
- Analyze implications (Schwartz Step 6):
- What does each scenario mean for the focal question?
- Which decisions would be robust across all scenarios?
- Which decisions would be good in one scenario but catastrophic in another?
- What are the key trade-offs?
- Identify scenarios in the Probabilistic-Possibility Gap:
- Which scenarios are more likely than most people assume but less expected than the base rate?
- These are your highest-value scenarios — they represent genuine strategic insight
- Give them the most development and the strongest narratives
- Identify early indicators (Schwartz Step 7):
- What observable events or trends would signal that a particular scenario is beginning to unfold?
- Create a "watch list" of 5-10 indicators per scenario
- Distinguish between leading indicators (predictive) and lagging indicators (confirmatory)
- Present the output in the structured format below.
Output Format
The Iceberg Principle
The methodology has two parts: the engine (the full analytical process you apply) and the
display (what the reader sees). The engine should always run at full depth — every method
applicable to the question should be applied. But the display should be streamlined: show the
insights, not the equations. Show the destination, not the GPS coordinates of every turn.
Think of it like a doctor's diagnosis: the blood work, imaging, differential diagnosis, and
consultations all happen — but the patient gets the conclusion, not the lab reports. The rigor
is in the process; the readability is in the presentation.
What this means in practice:
- DO the full Base Rate check. But present only the conclusion: "Historical base rate is ~15-20%;
this scenario requires [specific condition], which makes it [more/less likely than base rate]."
Skip the decomposition math unless it produced a non-obvious insight.
- DO the full CLA analysis. But weave the myth/worldview/systemic layers into the scenario
narratives rather than listing them as a separate section. The reader should feel the myth
operating in the scenario, not read a separate paragraph about it.
- DO build the Cross-Impact matrix. But present only the key chains, loops, and interaction
surprises — not the full grid. The matrix is a work tool; the chains and surprises are the
deliverable.
- DO apply Temporal Cascade framing and the Inversion Test. But embed the temporal dynamics
inside each scenario's narrative rather than listing them as separate bullet points.
- DO use Agent-Based Thinking. But surface only the emergent outcomes that surprised you,
not the full actor-by-actor breakdown.
The one exception: When the user explicitly asks to see your work ("show me the process",
"walk me through the methodology"), switch to Full Display mode and show everything.
Default Display Format (Condensed)
Use this streamlined structure as the default for all "what if?" scenario analyses. The full
methodology runs behind the scenes; only the insights reach the page.
## The Question
[The refined "what if?" question — one paragraph that establishes divergence depth,
zero-base-rate type if applicable, and any key reframing that happened during grounding]
## Grounding Insights
[2-4 sentences max. The base rate conclusion, the most revealing structural analogy,
and the CLA myth — stated as findings, not process. Skip the lab reports.]
## Scenarios
### Scenario 1: [Evocative Name]
[A rich narrative of 300-600 words that implicitly contains the myth, the temporal cascade
dynamics, the key cross-impact chains, and the agent-based emergent outcomes. The scenario
should READ like a story and CONTAIN the analysis. End with one sentence noting the
inversion if one exists: "The [apparent quality] inverts — [what it becomes]."]
### Scenario 2: [Evocative Name]
[Same structure]
### Scenario 3: [Evocative Name]
[Same structure]
### Scenario 4: [Evocative Name]
[Same structure]
## Cross-Scenario Dynamics
[The key chains, loops, and interaction surprises from the Cross-Impact matrix — presented
as narrative insights, not as a grid. Focus on: (1) the most important reinforcing loop,
(2) the most important dampener, and (3) the interaction surprise. 2-4 paragraphs total.]
## Pre-Mortem Findings
[1-2 paragraphs: the most likely failure mode across scenarios, and the most dangerous
pathway to the worst outcome]
## Tail Risks
[2-3 items max. The low-probability, high-impact outcomes that deserve attention despite
their unlikelihood]
## What I Might Be Wrong About
[2-3 paragraphs: honest acknowledgment of blind spots, assumptions, and areas of genuine
uncertainty. Where agency uncertainty makes prediction fundamentally limited. The
alternative path or myth you might have underweighted.]
Full Display Format (Verbose)
When the user explicitly asks to see the process, or when the analysis is being used as a
teaching/demonstration tool, use this expanded format that shows all workings:
## The Question
[The refined "what if?" question after grounding]
## Base Rate & Grounding
- Divergence depth: [Minimal-change / Butterfly-effect — and why]
- Zero-base-rate type: [Structural Impossibility / Historical Non-Occurrence — if applicable]
- Historical analogue: [closest historical parallel]
- Base rate outcome: [what happened 70-80% of the time in similar cases]
- Structural analogies: [2-3 cross-domain parallels and what they reveal about causal dynamics]
- CLA reframing: [how the question reads at each layer: litany → systemic → worldview → myth]
## Scenario Space
- Key driving forces: [list with importance × uncertainty ranking]
- Critical uncertainties: [the 2-3 axes chosen, with rationale]
- Matrix overview: [brief description of how the axes create distinct worlds]
## Scenarios
### Scenario 1: [Evocative Name]
- Archetype: [Growth/Collapse/Discipline/Transformation]
- Narrative: [300-600 words]
- Key dynamics: [cross-impact chains, feedback loops]
- Temporal cascade: [immediate ripples → feedback loops → new equilibria]
- Inversion: [does the scenario invert across bands? How?]
- Consequence depth: [Futures Wheels — direct → indirect → tertiary effects]
- Agent dynamics: [who does what, what emerges, where agency uncertainty is highest]
- Probability range: [rough estimate]
- Early indicators: [3-5 observable signals]
[Repeat for each scenario]
## Cross-Impact Analysis
[The full matrix with chains, loops, amplifiers, dampeners, and interaction surprises]
## Cross-Scenario Analysis
- Robust strategies: [what works across all scenarios]
- Scenario-specific strategies: [what works in one but not others]
- Critical trade-offs: [the hardest choices]
- Key junctures: [where paths diverge and can be influenced]
## Pre-Mortem Findings
- [Failure modes for preferred scenario]
- [Pathways to feared scenario]
## Tail Risks
- [<5% probability outcomes with outsized impact]
## What I Might Be Wrong About
- [Explicit acknowledgment of blind spots, assumptions, and areas of genuine uncertainty]
- [Where agency uncertainty makes prediction fundamentally limited, not just imprecise]
Quick-Start: Minimal Application
For rapid "what if?" exploration when time is limited, use this compressed three-move process:
- Base Rate Check: "Historically, what actually happens in situations like this 80% of the time?"
- Manoa Stretch: Generate at least two archetypal futures that are NOT the base rate — one
Collapse and one Transformation. Just a paragraph each.
- Pre-Mortem: "My preferred outcome just failed. The most likely reason was ___."
These three moves take under five minutes, capture roughly 60% of the methodology's value, and
are always better than undisciplined speculation.
Choosing Methods by Depth Level
Not every "what if?" needs the full process. Use this guide to select the right depth:
| Depth | When to Use | Methods | Display | Time |
|---|---|---|---|---|
| Quick | Casual curiosity, small-stakes exploration | Base Rate + Manoa Stretch + Pre-Mortem | Condensed | 5 min |
| Standard | Planning decisions, strategic thinking | Full Phase 1-2 + Phase 4 (light) | Condensed | 20 min |
| Deep | Major decisions, organizational foresight | Full Phase 1-5 | Condensed (default) / Full (on request) | 1-2 hours |
| Comprehensive | Policy analysis, long-range strategy | All methods + iteration + stakeholder review | Full | Days |
Anti-Patterns: What This Methodology Rejects
The Single-Future Trap: Collapsing multiple scenarios into one "most likely" prediction. The whole
point of scenario work is to preserve uncertainty, not resolve it prematurely.
The Drama Bias: Gravitating toward catastrophic or utopian scenarios because they are more
intellectually stimulating. The base rate check exists to prevent this.
Linear Extrapolation: Assuming the future will be like the present, only more so. The Manoa
archetypes exist to prevent this by forcing consideration of qualitative change.
The Isolation Fallacy: Treating variables as independent. Cross-Impact Analysis exists because
in the real world, everything affects everything.
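The compounding the Isolation Fallacy misses can be sketched computationally: a matrix of pairwise influences, iterated so that second- and third-order effects emerge. This is a minimal illustration only; the factor names and weights below are invented for the example.

```python
# Minimal cross-impact sketch: pairwise influences compound into chains and loops.
# Factor names and weights are invented for illustration.
factors = ["regulation", "investment", "public_trust"]

# impact[i][j] = effect on factor j when factor i rises (scale -1..1)
impact = [
    [0.0, -0.6,  0.4],  # regulation dampens investment, lifts trust
    [0.3,  0.0, -0.2],  # investment invites regulation, erodes trust a little
    [0.0,  0.5,  0.0],  # trust attracts investment
]

state = [1.0, 0.0, 0.0]  # initial shock: regulation rises sharply
for step in range(3):    # propagate the shock through the system
    # 0.5 damps each round so effects attenuate; values are clamped to [-1, 1]
    state = [
        max(-1.0, min(1.0, state[j] + 0.5 * sum(state[i] * impact[i][j] for i in range(3))))
        for j in range(3)
    ]
    print(f"round {step + 1}:", dict(zip(factors, (round(x, 2) for x in state))))
```

Even this toy version shows why pairwise inspection is not enough: the regulation shock suppresses investment directly, but the trust it builds partially refills the investment channel a round later, a chain no single cell of the matrix reveals.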
Confirmation Theater: Building scenarios that confirm what you already believe. Red Teaming and
the Base Rate Negation Check exist to prevent this.
False Precision: Assigning exact probabilities to things that are genuinely uncertain. Use ranges,
not point estimates. "5-20%" is honest; "12.3%" is theatrical.
Ignoring Agency Uncertainty: Some uncertainty is not a bug in your model — it is a feature of
reality. When scenarios depend on choices that free agents (people, organizations, governments)
will make, no amount of better modeling can eliminate that uncertainty. The future is genuinely
open at those points. This is distinct from epistemic uncertainty, which better information could
resolve. Agency uncertainty means that even with perfect information about the present, the future
remains indeterminate because it has not been decided yet.

Practical implication: when you identify a point of high agency uncertainty in your scenario, do
not try to resolve it with probability estimates. Instead, preserve the branching — show multiple
paths emanating from that decision point, and focus on what would make each path more likely.
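One way to honor this in practice is to represent a high-agency decision point as explicit branches, each carrying the observable conditions that would make it more likely, with no probability field at all. A minimal sketch; the agent, question, and branch contents are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    path: str            # what the agent might choose
    enablers: list[str]  # observable conditions that make this path more likely

@dataclass
class DecisionPoint:
    agent: str
    question: str
    branches: list[Branch] = field(default_factory=list)

    def report(self) -> str:
        # Render the preserved branching: paths and their enablers, no probabilities.
        lines = [f"{self.agent}: {self.question}"]
        for b in self.branches:
            lines.append(f"  -> {b.path} (watch for: {', '.join(b.enablers)})")
        return "\n".join(lines)

# Hypothetical decision point: the branching is preserved, not collapsed.
regulator = DecisionPoint(
    agent="The regulator",
    question="does it intervene after the first major incident?",
    branches=[
        Branch("swift intervention", ["public outcry", "an election-year incentive"]),
        Branch("wait and study", ["credible industry self-policing", "a divided committee"]),
    ],
)
print(regulator.report())
```

The design choice is deliberate: there is nowhere to put a probability, so the structure itself forces you to track enablers instead of pretending to resolve an undecided choice.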
Over-Cascading from Zero Base Rates: When the base rate is zero, it is tempting to assume the
scenario requires massive, world-restructuring causes — different physics, different atmospheres,
different everything. But a zero base rate means "not observed," not "structurally impossible" (unless
you can show it IS structurally impossible). A targeted mutation, a narrow evolutionary pressure, or
a specific technological breakthrough may create the scenario without restructuring the entire world.
Distinguish between structural impossibility (which demands cascading premises) and historical
non-occurrence (which demands only a plausible pathway — and then checking whether that pathway's
side effects alter anything beyond the focal subject). See the Zero-Base-Rate Distinction in Method 1.
The Certainty Illusion: Presenting scenarios as if they were predictions. They are not. They are
tools for thinking. The moment you treat a scenario as a forecast, you have misused it.
Process-as-Performance: Showing your analytical workings as if the process itself were the
deliverable. The Cross-Impact matrix, the CLA layer breakdown, the base rate decomposition — these
are scaffolding, not architecture. They exist to produce insight, not to demonstrate rigor. If the
reader needs to see the matrix to find your conclusion compelling, the conclusion is not compelling
enough on its own. The Iceberg Principle applies: full engine, streamlined display. Show the insight,
not the equation — unless the user explicitly asks for the workings.
Method Selection Guide by Question Type
Different "what if?" questions benefit from different method combinations:
| Question Type | Primary Methods | Why |
|---|---|---|
| "What if X happens?" | Base Rate + Analogy Search + CLA + Cross-Impact + Futures Wheels | Ground first, find structural parallels, trace cascading effects both wide and deep |
| "What if we do Y?" | Pre-Mortem + Backcasting + Agent-Based | Test the plan, map the path, model reactions |
| "What will the future look like?" | Schwartz + Intuitive Logics + Manoa | Build the full scenario space |
| "How could this fail?" | Pre-Mortem + Red Team + Monte Carlo | Stress-test systematically |
| "What are we missing?" | CLA + Morphological + Futures Wheels + Analogy Search + Red Team | Go deep, go wide, find structural parallels, find surprises, challenge assumptions |
| "Which strategy is robust?" | Full Schwartz + Cross-Impact + Monte Carlo | Compare across all scenarios probabilistically |
| "What if the opposite?" | CLA (myth level) + Manoa Transformation | Challenge the paradigm itself |
| "What if X had not happened?" | Counterfactual History + Futures Wheels | Trace the altered causal chain, map consequence depth |
| "What if X instead of Y?" | Counterfactual History + Cross-Impact | Change one variable, trace ripples through the system |
Method Selection Guide by User Context
The same "what if?" question needs different treatment depending on why the user is asking. A creative
writer worldbuilding and a policy analyst risk-assessing may ask identical questions but need very
different methods and output styles.
| User Context | Purpose | Emphasis | Output Style |
|---|---|---|---|
| Strategic | Business or personal decisions, resource allocation | Robust strategies, early indicators, probability estimates, actionable recommendations, temporal cascade framing | Structured, concise, decision-oriented. Lead with implications and trade-offs. |
| Creative | Fiction, worldbuilding, narrative design | Vivid scenarios, unexpected consequences, rich detail, emotional texture, dramatic tension, analogy search for worldbuilding depth | Narrative-heavy, evocative, sensory. Let the stories breathe. Minimize probability estimates. |
| Analytical | Policy, risk assessment, academic research | Causal rigor, evidence base, base rates, cross-impact matrices, falsifiability, explicit agency uncertainty flags | Formal, systematic, cite sources. Include matrices and chains explicitly. |
| Speculative | Emerging tech, societal trends, curiosity | Manoa Transformation archetype, CLA myth level, probabilistic-possibility gap scenarios, counterfactuals, butterfly-effect divergence depth | Exploratory, playful, imaginative. Push further into the unknown. Embrace the surprising. |
How to determine context: If the user does not state their purpose explicitly, infer it from cues:
- Mentions of ROI, strategy, investment, planning → Strategic
- Mentions of story, novel, game, world, character → Creative
- Mentions of policy, risk, regulation, assessment → Analytical
- Mentions of emerging, future, trend, possibility, curious → Speculative
When in doubt, ask. A single question — "Are you exploring this for a decision, a creative project,
research, or curiosity?" — saves enormous wasted effort on the wrong emphasis.