# Application Framework

## Table of Contents

1. [Overview](#overview)
2. [Problem Type Matrix](#problem-type-matrix)
3. [Output Templates](#output-templates)
4. [Case Studies](#case-studies)
5. [Domain-Specific Guidance](#domain-specific-guidance)

---

## Overview

This document provides practical templates, examples, and guidance for applying the primal-foundation framework to real-world problems.

**Principle**: The framework is flexible. Adapt it to your specific domain and problem type while maintaining the core principles.

---

## Problem Type Matrix

| Problem Type | Trigger | Best Method | Key Focus |
|--------------|---------|-------------|-----------|
| **Strategic decision** | "What should we do?" | Reverse Engineering | Goal → Axioms → Path |
| **System design** | "How should we build X?" | Constructive Building | Axioms → Layers → System |
| **Problem diagnosis** | "Why is X happening?" | Evolutionary Simulation | Initial state → Rules → Emergence |
| **Innovation/Novelty** | "How can we do X better?" | Multi-Path Verification | Multiple reconstructions → Convergence |
| **Validation of claim** | "Is this true?" | Assumption Extraction + Verification | Identify → Test → Classify |
| **Optimization** | "How can we improve X?" | Constructive + Evolutionary | Build + Simulate → Refine |

---

## Output Templates

### Template 1: Strategic Decision

```
# First-Principles Analysis: [Topic]

## Layer 0: Epistemological Foundation
- Domain: [Business, Technology, Personal, etc.]
- Scope: [What are we analyzing?]
- Applicability: [Where does this apply?]

## Layer 1: Assumption Extraction
### Epistemic Assumptions
1. [Assumption] → Challenge: [How to verify]
2. [Assumption] → Challenge: [How to verify]

### Ontological Assumptions
1. [Assumption] → Challenge: [How to verify]
2. [Assumption] → Challenge: [How to verify]

### Axiological Assumptions
1. [Assumption] → Challenge: [How to verify]
2. [Assumption] → Challenge: [How to verify]

### Causal Assumptions
1. [Assumption] → Challenge: [How to verify]
2. [Assumption] → Challenge: [How to verify]

### Temporal Assumptions
1. [Assumption] → Challenge: [How to verify]
2. [Assumption] → Challenge: [How to verify]

### Systemic Assumptions
1. [Assumption] → Challenge: [How to verify]
2. [Assumption] → Challenge: [How to verify]

### Boundary Assumptions
1. [Assumption] → Challenge: [How to verify]
2. [Assumption] → Challenge: [How to verify]

## Layer 2: Verification
### Logical Verification
- Consistency: [Pass/Fail]
- Contradictions: [None or describe]

### Physical Verification
- Conservation laws: [Pass/Fail]
- Causality: [Pass/Fail]
- Scalability: [Pass/Fail]

### Epistemic Verification
- Falsifiability: [Pass/Fail]
- Evidence strength: [High/Medium/Low]

### Systemic Verification
- Emergence considered: [Yes/No]
- Feedback loops: [Identified or None]

### Causal Verification
- Mechanism: [Identified or Missing]
- Confounders: [Identified or None]

### Assumption Classification
- Axioms ✓: [List]
- Conditional ⚠: [List with conditions]
- Failed ✗: [List with reasons]

## Layer 3: Axiom Extraction
### Verified Axioms
1. [Axiom 1] — Source: [Which assumption(s)]
2. [Axiom 2] — Source: [Which assumption(s)]
3. [Axiom 3] — Source: [Which assumption(s)]

### Axiom Set Verification
- Consistency: [Pass/Fail]
- Independence: [Pass/Fail]
- Completeness: [Pass/Fail]

## Layer 4: Reconstruction
### Method Used
[Constructive Building / Evolutionary Simulation / Reverse Engineering / Multi-Path]

### Reconstructed Solution
[Describe the solution built from axioms]

### Traceability
- [Solution component] → [Axiom X]
- [Solution component] → [Axiom Y]
- [Solution component] → [Axiom Z]

## Layer 5: Reflection
### What Changed
[How does this differ from conventional thinking?]

### What Stayed the Same
[What conventional wisdom survived?]

### Non-Obvious Insights
[What became visible only after clearing assumptions?]

### Uncertainty
[What are we uncertain about? Confidence levels?]

### Next Steps
[What to monitor? What to verify?]

## Recommendation
[Specific, actionable recommendation based on reconstruction]
```

---

### Template 2: System Design

```
# First-Principles System Design: [System Name]

## Epistemological Foundation
- Domain: [Software, Organization, Physical system, etc.]
- Constraints: [Physical, logical, resource]
- Goals: [What must the system achieve?]

## Assumption Extraction
[Focus on systemic and causal assumptions]

## Verification
[Focus on systemic verification: emergence, feedback, nonlinearity]

## Axiom Extraction
[Focus on systemic axioms: complexity, interaction, emergence]

## Reconstruction (Constructive Building)
### Layer 1: Fundamental Requirements
[Axioms → basic requirements]

### Layer 2: Component Structure
[Requirements → components]

### Layer 3: Interaction Rules
[Components → how they interact]

### Layer 4: System Architecture
[Components + rules → complete system]

## System Properties
- Emergent behaviors: [Identify]
- Failure modes: [Identify]
- Scalability constraints: [Identify]
- Evolution paths: [Identify]

## Comparison with Conventional Designs
| Aspect | Conventional | First-Principles | Why Different? |
|--------|-------------|------------------|----------------|
| [Aspect 1] | [Description] | [Description] | [Reason] |
| [Aspect 2] | [Description] | [Description] | [Reason] |

## Implementation Guidance
- Core axioms to preserve: [List]
- Heuristic elements: [List]
- Validation approach: [How to verify it works]
```

---

### Template 3: Problem Diagnosis

```
# First-Principles Diagnosis: [Problem Description]

## Problem Statement
[What is happening? When? Where?]

## Assumption Extraction
[Focus on causal and temporal assumptions]

## Verification
[Focus on causal verification: mechanism, confounders, counterfactual]

## Axiom Extraction
[Focus on causal axioms: what causes what?]

## Reconstruction (Evolutionary Simulation)
### Initial State
[What was the state before the problem?]

### Governing Rules
[What are the causal rules?]

### Evolution
[How did the system evolve to the problematic state?]

### Diagnosis
[What is the root cause? Which axiom explains it?]

## Solution Paths
| Path | Axiomatic Basis | Pros | Cons |
|------|-----------------|------|------|
| [Path 1] | [Axiom X] | [Pros] | [Cons] |
| [Path 2] | [Axiom Y] | [Pros] | [Cons] |

## Recommended Action
[What to do based on the diagnosis]
```

---

## Case Studies

### Case Study 1: Business Strategy — SaaS Pricing

**Problem**: "Our SaaS pricing is not optimizing revenue. How should we redesign it?"

#### Layer 0: Epistemological Foundation
- Domain: B2B SaaS economics
- Scope: Pricing strategy for subscription model
- Applicability: Competitive markets with value-based differentiation

#### Layer 1: Assumption Extraction (Partial)

**Axiological Assumptions**:
1. "Higher prices = more revenue" → Challenge: Only if quantity doesn't drop disproportionately
2. "Customers prefer predictable costs" → Challenge: Some prefer pay-per-use
3. "Freemium converts to paid" → Challenge: Only if free tier is constrained appropriately

**Causal Assumptions**:
1. "Price determines demand" → Challenge: Value perception, competition, budget also matter
2. "Features drive value" → Challenge: Outcomes, not features, drive value

#### Layer 2: Verification

**Epistemic Verification**:
- Falsifiability: Can test price points, measure conversion ✅
- Evidence: Need data on price elasticity ⚠

**Causal Verification**:
- Mechanism: Price → perceived value → purchase decision ✅
- Confounders: Budget constraints, competitor pricing, enterprise procurement cycles ⚠

#### Layer 3: Axiom Extraction

**Axioms**:
1. A1: Value exchange requires perceived benefit ≥ cost
2. A2: Demand decreases as price increases (law of demand)
3. A3: Revenue = price × quantity
4. A4: Different customers have different perceived values (heterogeneity)

#### Layer 4: Reconstruction (Reverse Engineering)

**Goal**: Maximize revenue

**Step back from goal**:
- Revenue maximization requires optimal price-quantity balance
- Optimal balance depends on demand elasticity (A2)
- Elasticity varies by customer segment (A4)

**Derive solution**:
- Segment customers by perceived value (from A4)
- Price each segment at their willingness-to-pay (from A1 + A2)
- Use tiers or usage-based pricing to capture heterogeneity

**Reconstructed pricing model**:
1. **Segmentation**: Identify customer segments (startup, SME, enterprise)
2. **Tiered pricing**: Different tiers for different perceived values
3. **Usage-based**: Scale with usage to capture value alignment
4. **Free tier**: Limited to drive acquisition but constrain value (to prevent cannibalization)

**Trace**:
- Segmentation → customer heterogeneity (A4)
- Tiered pricing → willingness-to-pay (A1 + A2)
- Usage-based → value alignment (A1)
- Free tier constraint → prevent cannibalization (A3)
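The revenue logic above (A3: revenue = price × quantity, with segment heterogeneity from A4) can be sketched numerically. The segment names, willingness-to-pay figures, and customer counts below are illustrative assumptions, not data from the case:

```python
# Sketch: revenue under one flat price vs. tiered pricing across segments.
# Segment names, willingness-to-pay (WTP), and counts are hypothetical.
segments = {
    "startup":    {"wtp": 50,   "count": 1000},
    "sme":        {"wtp": 200,  "count": 300},
    "enterprise": {"wtp": 1000, "count": 50},
}

def revenue_flat(price):
    """A3: revenue = price * quantity; a segment buys only if WTP >= price (A1)."""
    return sum(price * s["count"] for s in segments.values() if s["wtp"] >= price)

def revenue_tiered():
    """Tiers priced at each segment's WTP capture heterogeneity (A4)."""
    return sum(s["wtp"] * s["count"] for s in segments.values())

best_flat = max(revenue_flat(p) for p in (50, 200, 1000))
print(best_flat)          # -> 70000 (flat price of 200 loses the startup segment)
print(revenue_tiered())   # -> 160000 (tiers capture each segment's WTP)
```

Under these toy numbers, tiering more than doubles revenue versus the best single price, which is exactly the A4 heterogeneity argument in the trace above.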

#### Layer 5: Reflection

**What changed**:
- **Conventional**: Cost-plus pricing (price = cost + margin)
- **First-principles**: Value-based pricing (price = perceived value)
- **Insight**: Pricing is about capturing value, not covering costs

**What stayed**:
- Segmentation is still valuable (conventional wisdom confirmed)

**Non-obvious insight**:
- Free tier should be more constrained than typical practice to prevent cannibalization. Many companies make free tiers too generous, reducing paid conversion.

**Uncertainty**:
- Demand elasticity for each tier is uncertain (need testing)
- Optimal price points need empirical validation

**Next steps**:
- A/B test different price points
- Monitor conversion and churn
- Adjust tiers based on data

**Recommendation**:
Implement tiered pricing with usage-based overage, starting with a conservative free tier to test conversion.

---

### Case Study 2: Technical Architecture — Microservices

**Problem**: "Our monolithic application is becoming hard to maintain. Should we move to microservices?"

#### Layer 0: Epistemological Foundation
- Domain: Software architecture
- Scope: Monolith to microservices migration
- Applicability: Large-scale applications with multiple teams

#### Layer 1: Assumption Extraction (Partial)

**Systemic Assumptions**:
1. "Microservices improve maintainability" → Challenge: They add complexity (communication, deployment)
2. "Microservices enable independent scaling" → Challenge: Only if services have different load patterns
3. "Monoliths are always bad" → Challenge: They're simpler for small teams

**Causal Assumptions**:
1. "Monoliths slow development" → Challenge: Slow development may be due to process, not architecture
2. "Distributed systems are harder" → Challenge: Some difficulty is inherent (partial failure, latency), but design and tooling can bound it

#### Layer 2: Verification

**Systemic Verification**:
- Emergence: Microservices introduce emergent properties (network latency, partial failure) ✅
- Feedback: Service dependencies create feedback loops ✅
- Nonlinearity: Complexity grows faster than the number of services ✅

**Physical Verification**:
- Scalability: Microservices scale better for heterogeneous workloads ✅

#### Layer 3: Axiom Extraction

**Axioms**:
1. A1: Complexity grows with surface area (communication interfaces)
2. A2: Change cost depends on scope of impact
3. A3: Deployment independence requires loose coupling
4. A4: Cognitive load limits what one team can own, and system structure mirrors team communication structure (Conway's Law)
5. A5: Distributed systems have inherent failures (network partitions, partial failure)

#### Layer 4: Reconstruction (Evolutionary Simulation)

**Initial state**: Monolith

**Rules**:
- Monolith: Low surface area (single interface), high change impact (affects entire app), single deployment
- Microservices: High surface area (many interfaces), low change impact (local to service), distributed deployments

**Evolution**:
1. **Monolith**: Single team, fast initially, slows as app grows (change impact grows)
2. **Split**: Extract bounded contexts (domains with high cohesion, low coupling)
3. **Microservices**: Each service has own interface, change impact local, but complexity increases (communication, orchestration)
4. **Equilibrium**: Optimal number of services minimizes total complexity (change cost + coordination cost)

**Emergent solution**:
- **Don't split arbitrarily**: Split along domain boundaries (bounded contexts)
- **Balance complexity**: Too few services = monolith problems; too many = coordination overhead
- **Invest in tooling**: Service mesh, CI/CD, monitoring to manage distributed complexity
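The equilibrium step above (total complexity = change cost + coordination cost) can be sketched with a toy cost model. The assumption, purely illustrative: change impact shrinks roughly as 1/n as services are added (A2), while coordination cost grows with the number of potential service-to-service interfaces (A1). The constants are arbitrary placeholders:

```python
# Sketch: total complexity as a function of service count n (toy model).
def total_complexity(n, change_base=100.0, coord_per_edge=1.0):
    change_cost = change_base / n                      # smaller blast radius per change (A2)
    coordination = coord_per_edge * n * (n - 1) / 2    # pairwise interface surface area (A1)
    return change_cost + coordination

costs = {n: total_complexity(n) for n in range(1, 21)}
optimum = min(costs, key=costs.get)
print(optimum)  # -> 5 under these toy constants
```

The curve is U-shaped: n = 1 (monolith) pays full change cost, n = 20 pays heavy coordination cost, and the minimum sits in between. The specific optimum is an artifact of the placeholder constants; the point is that "as many services as possible" is never the answer.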

**Trace**:
- Bounded contexts → minimize interface surface area (A1)
- Service independence → limit change impact (A2 + A3)
- Team size → align with service boundaries (A4)
- Failure handling → accept and design for distributed failures (A5)

#### Layer 5: Reflection

**What changed**:
- **Conventional**: "Microservices are always better"
- **First-principles**: "Microservices are better when change impact is the bottleneck"
- **Insight**: Monoliths are fine for small teams; microservices become necessary when team size or change rate increases

**What stayed**:
- Domain-driven design (split along domain boundaries) is confirmed as best practice

**Non-obvious insight**:
- The optimal number of services is not "as many as possible" but "as many as needed to minimize total complexity." Over-splitting is as bad as under-splitting.

**Uncertainty**:
- Exact trade-off point depends on specific context (team size, change rate, tooling)

**Next steps**:
- Measure current change impact and coordination cost
- Identify bounded contexts with high cohesion
- Pilot extraction of one service
- Measure impact before full migration

**Recommendation**:
Extract one bounded context as a pilot service, measure complexity change, then decide on full migration.

---

### Case Study 3: Personal Decision — Career Pivot

**Problem**: "Should I switch from Software Engineer to Product Manager?"

#### Layer 0: Epistemological Foundation
- Domain: Career decision-making
- Scope: Individual career path choice
- Applicability: Personal decision with long-term impact

#### Layer 1: Assumption Extraction (Partial)

**Axiological Assumptions**:
1. "PM is more strategic" → Challenge: SWE can also be strategic
2. "PM pays better" → Challenge: Depends on level, company, market
3. "I will like PM more" → Challenge: Preference is uncertain without experience

**Causal Assumptions**:
1. "PM leads to executive roles" → Challenge: Many executives are technical
2. "SWE skills don't transfer to PM" → Challenge: Technical understanding is valuable in PM

#### Layer 2: Verification

**Epistemic Verification**:
- Falsifiability: Can try PM via rotation, internship, or side project ✅
- Evidence: Need data on career trajectories, compensation, satisfaction ⚠

**Systemic Verification**:
- Feedback loops: Skills learned in PM may open new opportunities ✅

#### Layer 3: Axiom Extraction

**Axioms**:
1. A1: Career value = skill scarcity × leverage
2. A2: Satisfaction = autonomy × competence × relatedness (Self-Determination Theory)
3. A3: Skills are transferable if they address fundamental human needs (problem-solving, communication)
4. A4: Market value is determined by supply and demand

#### Layer 4: Reconstruction (Multi-Path Verification)

**Path 1: Constructive Building**

Layer 1 (Axioms → Requirements):
- From A1: Choose path where skills are scarce and have high leverage
- From A2: Ensure path provides autonomy, competence, relatedness
- From A3: Identify transferable skills between SWE and PM

Layer 2 (Requirements → Comparison):
- SWE: Technical skills (scarce for complex problems), leverage through code, moderate autonomy
- PM: Problem-solving skills (scarce), leverage through product direction, high autonomy

Layer 3 (Comparison → Decision):
- PM may offer higher leverage (product > code)
- SWE skills transfer to PM (technical understanding)
- PM may offer more autonomy

**Path 2: Reverse Engineering**

**Goal**: Maximize career value and satisfaction

**Work backward**:
- Satisfaction requires autonomy, competence, relatedness (A2)
- Career value requires scarcity × leverage (A1)
- PM offers autonomy (product ownership) and leverage (product direction)
- SWE offers competence (technical depth) but limited autonomy

**Convergence**: Both paths suggest PM may offer a better value/satisfaction trade-off, but the answer depends on personal fit.
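The two-path comparison can be sketched as a scoring exercise against A1 and A2. Every rating below (0 to 1) is a hypothetical placeholder, not a claim about either role; changing the ratings, e.g. weighting technical depth higher, flips the outcome, which is precisely the "depends on personal fit" caveat:

```python
# Sketch: scoring both roles against A1 (value) and A2 (satisfaction).
# All ratings are hypothetical placeholders on a 0-1 scale.
roles = {
    "SWE": {"scarcity": 0.7, "leverage": 0.5,
            "autonomy": 0.5, "competence": 0.9, "relatedness": 0.6},
    "PM":  {"scarcity": 0.6, "leverage": 0.8,
            "autonomy": 0.8, "competence": 0.6, "relatedness": 0.8},
}

def career_value(r):
    """A1: career value = skill scarcity x leverage."""
    return r["scarcity"] * r["leverage"]

def satisfaction(r):
    """A2: satisfaction = autonomy x competence x relatedness."""
    return r["autonomy"] * r["competence"] * r["relatedness"]

for name, r in roles.items():
    print(name, round(career_value(r), 2), round(satisfaction(r), 2))
```

With these placeholder ratings PM scores higher on both axioms, matching the convergence above; the real work is eliciting honest ratings, which the "try PM temporarily" recommendation is designed to do.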

#### Layer 5: Reflection

**What changed**:
- **Conventional**: "PM is better for growth"
- **First-principles**: "PM is better if you value autonomy and leverage over technical depth"
- **Insight**: The decision depends on personal values (autonomy vs. technical depth)

**What stayed**:
- Both roles are valuable; the right choice depends on context

**Non-obvious insight**:
- SWE skills are highly transferable to PM (technical understanding is a scarce skill in PM). The transition is lower-risk than it appears.

**Uncertainty**:
- Personal fit for PM is uncertain without experience
- Long-term career trajectories are hard to predict

**Next steps**:
- Try PM via internal rotation, side project, or informational interviews
- Assess autonomy, competence, relatedness in both roles
- Compare actual experiences with axiomatic predictions

**Recommendation**:
Before fully pivoting, try PM temporarily (rotation, side project) to validate fit. SWE skills are transferable, so the transition is reversible.

---

## Domain-Specific Guidance

### Business Strategy

**Focus**: Causal assumptions, systemic interactions, value exchange

**Key axioms**:
- Value exchange requires perceived benefit ≥ cost
- Markets equilibrate supply and demand
- Competition erodes profits (commoditization)
- Competitive advantage requires scarcity or differentiation

**Common traps**:
- Confusing revenue with profit
- Assuming trends continue indefinitely
- Ignoring network effects and feedback loops

### Technology Design

**Focus**: Systemic assumptions, scalability, emergent properties

**Key axioms**:
- Complexity grows with surface area
- Distributed systems have inherent failures
- Performance is constrained by physical limits
- Change cost depends on coupling

**Common traps**:
- Over-engineering for unlikely scenarios
- Ignoring operational complexity
- Assuming scale implies complexity

### Personal Decisions

**Focus**: Axiological assumptions, uncertainty, personal fit

**Key axioms**:
- Satisfaction = autonomy × competence × relatedness
- Career value = skill scarcity × leverage
- Skills are transferable if they address fundamental needs
- Preferences are revealed through action, not statements

**Common traps**:
- Confusing stated preferences with revealed preferences
- Assuming others' career paths apply to you
- Underestimating transferable skills

### Social Systems

**Focus**: Systemic assumptions, emergence, feedback loops

**Key axioms**:
- Social systems have emergent properties
- Incentives drive behavior
- Information flows shape outcomes
- Institutions are social constructs

**Common traps**:
- Assuming individuals act rationally
- Ignoring power dynamics and incentives
- Treating social systems as linear

---

## When to Use This Reference

**Use during Phase 6 (Application)**.

**Apply to**:
- Selecting the appropriate output template
- Structuring the analysis
- Learning from case studies
- Adapting to domain-specific considerations

**Application Checklist**:
- [ ] Problem type identified
- [ ] Appropriate template selected
- [ ] All framework layers (0–5) applied
- [ ] Reconstruction verified against axioms
- [ ] Non-obvious insights identified
- [ ] Uncertainty acknowledged
- [ ] Next steps defined
- [ ] Recommendation is specific and actionable
