# Assumption Taxonomy

## Table of Contents

1. [Overview](#overview)
2. [The Seven Assumption Types](#the-seven-assumption-types)
3. [Probing Questions by Type](#probing-questions-by-type)
4. [Identifying Mixed Assumptions](#identifying-mixed-assumptions)
5. [Case Studies](#case-studies)

---

## Overview

Assumptions are the invisible premises that underlie all reasoning. Most failures in thinking stem from unexamined assumptions rather than flawed logic.

This taxonomy provides a systematic way to identify hidden assumptions across seven categories derived from philosophy and cognitive science.

**Principle**: For a robust first-principles analysis, you must identify assumptions in all seven categories. Missing a category often leads to blind spots.

---

## The Seven Assumption Types

### 1. Epistemic Assumptions (About Knowledge)

Assumptions about what we know, how we know it, and the certainty of our knowledge.

**Common forms**:
- "We know X is true"
- "This data proves Y"
- "Experts agree on Z"

**Why they matter**: Epistemic assumptions often conflate belief with knowledge, correlation with causation, or authority with truth.

### 2. Ontological Assumptions (About Existence)

Assumptions about what exists, the nature of reality, and the categories we use to classify things.

**Common forms**:
- "X is a [category]"
- "Y exists as a stable entity"
- "Z has inherent properties"

**Why they matter**: Ontological assumptions determine what we consider "real" and what we consider "social constructs." Misclassification leads to wrong frameworks.

### 3. Axiological Assumptions (About Value)

Assumptions about what is good, bad, desirable, or undesirable. Includes ethical, aesthetic, and pragmatic values.

**Common forms**:
- "X is better than Y"
- "We should maximize Z"
- "This outcome is desirable"

**Why they matter**: Value assumptions are often the most hidden because they seem "obvious." Different stakeholders have different axiological assumptions.

### 4. Causal Assumptions (About Why)

Assumptions about cause-and-effect relationships, mechanisms, and explanatory chains.

**Common forms**:
- "X causes Y"
- "If we do A, B will happen"
- "Y happened because of X"

**Why they matter**: Causal assumptions are often inferred from correlation without proper verification. They drive predictions and interventions.

### 5. Temporal Assumptions (About Time and Change)

Assumptions about stability, persistence, change, and future projection.

**Common forms**:
- "This trend will continue"
- "Past performance predicts future results"
- "Conditions are stable"

**Why they matter**: Temporal assumptions can fail due to regime shifts, nonlinear dynamics, or exogenous shocks.

### 6. Systemic Assumptions (About Relationships)

Assumptions about interactions, dependencies, boundaries, and emergence in systems.

**Common forms**:
- "X and Y are independent"
- "The system is additive"
- "Parts can be analyzed separately"

**Why they matter**: Systemic assumptions often miss feedback loops, nonlinear effects, and emergent properties.

### 7. Boundary Assumptions (About Scope and Applicability)

Assumptions about where a model applies, what is inside/outside the system, and edge cases.

**Common forms**:
- "This applies to everyone"
- "This works at scale"
- "Edge cases are negligible"

**Why they matter**: Boundary assumptions determine the validity domain. Violating boundaries leads to model failure.
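For programmatic scanning, the seven categories above can be captured as a simple enumeration. This is a minimal sketch; the names and descriptions are taken directly from the section headings, but the class itself is illustrative, not part of the taxonomy:

```python
from enum import Enum

class AssumptionType(Enum):
    """The seven assumption categories, as named in the sections above."""
    EPISTEMIC = "about knowledge"
    ONTOLOGICAL = "about existence"
    AXIOLOGICAL = "about value"
    CAUSAL = "about why"
    TEMPORAL = "about time and change"
    SYSTEMIC = "about relationships"
    BOUNDARY = "about scope and applicability"
```

Having the full set as an enum makes the "all seven categories" principle checkable: any scan that covers fewer than `len(AssumptionType)` members has a blind spot by construction.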

---

## Probing Questions by Type

### Epistemic Assumptions

**Core question**: How do we know this is true?

**Probing questions**:
1. What evidence supports this claim? Is it direct or indirect?
2. Could this be true even if we had no evidence for it? (necessity vs. contingency)
3. What would falsify this claim? (falsifiability test)
4. Is this claim based on authority, consensus, or direct verification?
5. What would I need to observe to change my mind about this?
6. Is there selection bias in the evidence? (survivorship, confirmation bias)
7. How certain is this knowledge? (probability vs. certainty)

**Example**:
- **Claim**: "Our users love the product"
- **Epistemic probes**: What evidence? (NPS surveys, retention data). What would falsify it? (NPS < 4, churn > X%). Is there selection bias? (Only surveying active users).

### Ontological Assumptions

**Core question**: What kind of thing is this?

**Probing questions**:
1. Is this a natural category or a social construct?
2. Does this have inherent properties or properties we assign?
3. What defines the boundary of this category? Edge cases?
4. Is this stable over time or does it change?
5. Is this a fundamental entity or an emergent property?
6. Would this exist without human observers/institutions?
7. What are the necessary and sufficient conditions for something to be X?

**Example**:
- **Claim**: "Our company is innovative"
- **Ontological probes**: What defines "innovative"? Is innovation a property of companies or outcomes? Would we call this "innovative" in a different context?

### Axiological Assumptions

**Core question**: Why is this considered good/bad?

**Probing questions**:
1. What value system makes this desirable?
2. Who benefits from this being considered good? (incentive alignment)
3. Are there alternative value systems that would rank this differently?
4. Is this value instrumental (means to an end) or intrinsic (valued for itself)?
5. What trade-offs are being made? What is being sacrificed?
6. Would this still be considered good if we removed all external influences?
7. Is this value universal (holds for everyone) or particular (specific to some)?

**Example**:
- **Claim**: "We should maximize user engagement"
- **Axiological probes**: Why is engagement valuable? For whom? What is being sacrificed? (user wellbeing, time). Is engagement intrinsic or instrumental to revenue?

### Causal Assumptions

**Core question**: What evidence supports this cause-effect relationship?

**Probing questions**:
1. What is the mechanism? How does X cause Y?
2. Could there be a third variable causing both? (confounding)
3. Could Y cause X instead? (reverse causality)
4. Would Y happen without X? (control comparison)
5. What is the temporal sequence? Does cause precede effect?
6. What evidence would disprove this causal claim? (falsification)
7. Is this correlation masquerading as causation?

**Example**:
- **Claim**: "More features increase user satisfaction"
- **Causal probes**: Mechanism? (more value). Confounding? (users who need more features have complex problems). Reverse causality? (satisfied users request more features). Control? (satisfaction without new features).

### Temporal Assumptions

**Core question**: Will this hold in the future?

**Probing questions**:
1. What conditions made this true in the past? Are those conditions still present?
2. What has changed since this pattern was established?
3. What could disrupt this pattern? (shocks, regime changes)
4. Is this a law or a contingent regularity?
5. What is the time horizon of validity? (short, medium, long term)
6. Are there delayed effects or time lags?
7. Is this stationary (stable distribution) or nonstationary (changing)?

**Example**:
- **Claim**: "Growth will continue at 20% per year"
- **Temporal probes**: What drove past growth? (market expansion, product fit). Are those still present? (market saturated). What could disrupt it? (competition, regulation). Is 20% a law or a contingent rate?

### Systemic Assumptions

**Core question**: What are the hidden interactions and feedbacks?

**Probing questions**:
1. What other variables does this affect? (downstream effects)
2. What other variables affect this? (upstream causes)
3. Are there feedback loops? (positive, negative, delayed)
4. Is this linear or nonlinear? Thresholds, tipping points?
5. What emergent properties arise from the interactions?
6. Are there dependencies or couplings we're ignoring?
7. Does the whole equal the sum of parts? (synergy vs. reduction)

**Example**:
- **Claim**: "Adding more engineers will speed up development"
- **Systemic probes**: Downstream effects? (communication overhead, coordination). Upstream causes? (process bottlenecks). Feedback? (more engineers → more complexity → slower development). Linear? (diminishing returns). Emergent properties? (team dynamics, culture).

### Boundary Assumptions

**Core question**: Where does this apply and where does it not?

**Probing questions**:
1. What is the domain of applicability? (time, space, context)
2. What are the edge cases? (extreme values, corner cases)
3. What happens when we push the boundary? (failure modes)
4. Are there conditions where this reverses? (boundary crossing)
5. Is this universal or context-dependent?
6. What is outside the system? What are we ignoring?
7. What are the implicit constraints we're not stating?

**Example**:
- **Claim**: "This pricing model works for everyone"
- **Boundary probes**: What customer segments? (B2B, B2C, enterprise). Edge cases? (price-sensitive users, power users). Reversals? (at high prices, demand drops non-linearly). Outside system? (competitor pricing, economic conditions).
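The core questions above can be collected into a lookup table so a review iterates over all seven types rather than relying on recall. A minimal sketch; the questions are verbatim from the sections above, while the helper function and its output format are hypothetical:

```python
CORE_QUESTIONS = {
    "epistemic": "How do we know this is true?",
    "ontological": "What kind of thing is this?",
    "axiological": "Why is this considered good/bad?",
    "causal": "What evidence supports this cause-effect relationship?",
    "temporal": "Will this hold in the future?",
    "systemic": "What are the hidden interactions and feedbacks?",
    "boundary": "Where does this apply and where does it not?",
}

def probes_for(claim: str) -> list[str]:
    """Render the core probing question for each type against a claim."""
    return [f"[{t}] {q}  (claim: {claim})" for t, q in CORE_QUESTIONS.items()]
```

For example, `probes_for("Growth will continue at 20% per year")` yields seven prompts, one per category, forcing the temporal and boundary probes that this particular claim most needs.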

---

## Identifying Mixed Assumptions

Most complex claims involve multiple assumption types. Use the **Assumption Matrix** to systematically scan:

```
Claim: "Our SaaS pricing model is optimal"

Type        | Embedded assumption
------------|------------------------------------------
Epistemic   | Evidence supports this
Ontological | "Pricing model" exists as a stable entity
Axiological | "Higher prices = better"
Causal      | Higher prices = higher value
Temporal    | Market is stable
Systemic    | No price wars
Boundary    | B2B only
```

**Process**:
1. Write the claim at the top
2. Go through each assumption type
3. Identify the embedded assumption
4. Apply the probing questions for that type
5. Document the strength (strong, weak, unknown)
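The five steps above can be sketched as a small scanning routine. This is a minimal sketch under stated assumptions: the class, method names, and string-based type labels are hypothetical, but the strength labels (strong, weak, unknown) and the seven types come from the process itself:

```python
from dataclasses import dataclass, field

TYPES = ["epistemic", "ontological", "axiological", "causal",
         "temporal", "systemic", "boundary"]
STRENGTHS = {"strong", "weak", "unknown"}

@dataclass
class AssumptionMatrix:
    """One row of the Assumption Matrix: a claim plus the embedded
    assumption (and its strength) identified under each of the seven types."""
    claim: str
    cells: dict = field(default_factory=dict)  # type -> (assumption, strength)

    def record(self, type_: str, assumption: str, strength: str = "unknown"):
        """Steps 3-5: identify the embedded assumption and document its strength."""
        if type_ not in TYPES:
            raise ValueError(f"unknown assumption type: {type_}")
        if strength not in STRENGTHS:
            raise ValueError(f"strength must be one of {STRENGTHS}")
        self.cells[type_] = (assumption, strength)

    def missing_types(self) -> list[str]:
        """Step 2 enforcement: categories not yet scanned are potential blind spots."""
        return [t for t in TYPES if t not in self.cells]
```

A scan is complete only when `missing_types()` returns an empty list; anything left in it marks a category the analysis has not yet probed.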

---

## Case Studies

### Case 1: "Remote Work Destroys Company Culture"

**Epistemic assumptions**:
- We know what "culture" is and can measure it
- Remote work is the cause, not a correlate

**Ontological assumptions**:
- "Culture" is a stable entity that can be destroyed
- "Destruction" is the right category for change

**Axiological assumptions**:
- Culture is valuable and should be preserved
- In-person culture is better than remote culture

**Causal assumptions**:
- Remote work causes culture degradation (mechanism?)
- No third variables (e.g., poor management)

**Temporal assumptions**:
- Past correlation (remote → culture issues) will continue
- Culture is stable over time (not already changing)

**Systemic assumptions**:
- Culture is independent of other factors (hiring, communication tools)
- Linear relationship (more remote → worse culture)

**Boundary assumptions**:
- This applies to all companies (not just specific types)
- Applies to all cultures (not just certain kinds)

**First-principles analysis**: Culture = shared beliefs + norms + behaviors. Remote work changes communication patterns, not necessarily culture. If communication is sufficient, culture can persist. The real issue is often poor onboarding, unclear norms, or inadequate tools — not remote work itself.

### Case 2: "AI Will Replace All Programmers"

**Epistemic assumptions**:
- We know what "programming" is and can automate it
- Current AI progress predicts future capabilities

**Ontological assumptions**:
- "Programming" is a monolithic skill set
- "Replacement" is the right category (not augmentation)

**Axiological assumptions**:
- Efficiency is the only value (cost, speed, quality)
- No other benefits to human programming (creativity, understanding)

**Causal assumptions**:
- AI capability → programmer replacement (direct causal chain)
- No constraints on AI progress (compute, data, algorithms)

**Temporal assumptions**:
- Linear extrapolation of current progress
- No unforeseen barriers (diminishing returns, complexity)

**Systemic assumptions**:
- Programming is independent of other skills (domain knowledge, communication)
- Economic system unchanged (demand for software, value creation)

**Boundary assumptions**:
- This applies to all programming (not just certain domains)
- No edge cases (novel problems, ethical constraints)

**First-principles analysis**: Programming = problem specification + solution design + implementation. AI can assist with implementation, but problem specification and solution design require understanding of stakeholder needs, constraints, and trade-offs — which are context-dependent and not fully automatable. Programmers' role shifts to higher-level design and verification.

---

## When to Use This Reference

**Use during Phase 2 (Assumption Extraction)** to systematically scan for hidden assumptions.

**Apply to**:
- Any claim or proposition under analysis
- Each component of a complex problem
- Stated and unstated beliefs

**Checklist**:
- [ ] Epistemic assumptions identified
- [ ] Ontological assumptions identified
- [ ] Axiological assumptions identified
- [ ] Causal assumptions identified
- [ ] Temporal assumptions identified
- [ ] Systemic assumptions identified
- [ ] Boundary assumptions identified
- [ ] Each assumption probed with category-specific questions
