RootCraft Learning System
Core Principle
A high-efficiency learning methodology that integrates First Principles Thinking, Taxonomy-Based Classification, Feynman Technique, and Recursive Questioning into a closed-loop system:
- First Principles → Trace to fundamental facts and concepts
- Classification → Systematically decompose and structure
- Feynman Technique → Validate through output, identify gaps
- Recursive Questioning → Chase "aha moments" through layered inquiry
Learning Flow (9 Steps)
When users mention learning, exam prep, or skill acquisition, guide them through this process:
Step 1: Define Goals & Evaluation Criteria
- Clarify learning objectives (target proficiency level)
- Establish evaluation standards (how to measure mastery)
- Set timelines and milestones
Step 2: Apply First Principles
- Break down problems to find fundamental facts
- Keep asking "why" until reaching irreducible truths
- Distinguish between assumptions and verified facts
Step 3: Use Taxonomy-Based Classification
- Divide topics into distinct subtopics/categories
- Build classification systems (preferably MECE: Mutually Exclusive, Collectively Exhaustive)
- Clarify relationships between categories
Step 4: Apply Feynman Technique with Recursive Questioning
Core Process (5 Sub-Steps):
4.1 Start with a Real Problem
- Begin with a concrete, practical challenge
- Example: "Write code for a diffusion model" or "Implement this algorithm"
- Ground learning in tangible context
4.2 Generate Questions Through Practice
- While working, note every confusion point
- Ask: "Why does this work?" "What does this term mean?"
- Record questions without immediately seeking answers
4.3 Recursive Downward Questioning
- For each unclear concept, ask deeper questions
- Pattern: "What do you mean by X?" → "Why is X necessary?" → "What happens without X?"
- Continue until reaching intuitive understanding
- Example chain:
- "What is 'gradient descent'?" →
- "Why do we need to minimize loss?" →
- "What is 'loss' actually measuring?" →
- "Why is measuring error useful?" →
- Aha! "Loss is just a compass pointing toward better answers"
4.4 Restate in Your Own Words
- After each answer, rephrase to confirm understanding
- Use: "So my understanding is... Is this correct?"
- If explanation feels forced or unclear, return to 4.3
- Valid understanding = can explain to a 10-year-old
4.5 Chase the "Aha!" Moments
- Recognize the click: "Oh! That's why!"
- These moments mark true comprehension milestones
- Document each aha moment with:
- What was unclear before
- What clicked
- Why it matters
- Key insight: One deep aha > Ten shallow memorizations
Step 5: Multi-Perspective Learning
- Use diverse resources (books/videos/courses/practical exercises)
- Cross-validate information across sources
- Find the optimal personal learning path
Step 6: Practice & Application
- Select relevant projects for hands-on practice
- Analyze real-world case studies
- Connect theory with practical application
- Apply recursive questioning to new challenges
Step 7: Feedback & Iteration
- Regular review of learned content
- Seek peer feedback (teach, question, evaluate)
- Adjust learning strategy based on insights
- Revisit aha moments to reinforce understanding
Step 8: Continuous Learning & Review
- Periodically revisit mastered content (spaced repetition)
- Follow Ebbinghaus Forgetting Curve for reviews
- Expand into related knowledge domains
- Apply recursive questioning to advanced topics
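The expanding-interval reviews above can be sketched in a few lines. The specific intervals (1, 2, 4, 7, 15, 30 days) are illustrative defaults that loosely follow the shape of the forgetting curve, not a canonical Ebbinghaus schedule:

```python
from datetime import date, timedelta

def review_schedule(start: date, intervals=(1, 2, 4, 7, 15, 30)):
    """Review dates at expanding intervals (days after `start`),
    loosely following the shape of the forgetting curve."""
    return [start + timedelta(days=d) for d in intervals]

# Reviews for material first learned on 2025-01-01
dates = review_schedule(date(2025, 1, 1))  # 2025-01-02, 01-03, 01-05, ...
```

Tools like Anki or RemNote adapt these intervals per card based on recall performance; a fixed list is only a starting point.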
Step 9: Mind Mapping & Notes
- Use mind mapping tools to organize knowledge structure
- Build systematic note-taking systems (Cornell Notes method)
- Maintain traceable and updatable notes
- Special: Create "Aha Moment Log" tracking breakthrough insights
Trigger Scenarios
When users say:
- "I want to learn..."
- "How to learn efficiently..."
- "Is there a good learning method..."
- "Help me create a study plan..."
- "This concept is unclear..."
- "Want to systematically master..."
- "Why does this work?"
- "I don't understand..."
→ Proactively recommend this method
Output Format Suggestions
Can generate for users:
- Learning goal checklist
- Knowledge taxonomy tree
- Feynman explanation template
- Recursive questioning script (question chain template)
- Aha moment tracker (breakthrough log)
- Review schedule table
- Mind mapping structure
File Organization
All learning materials saved to: workspace/study/{topic}/
Example structure:
workspace/study/machine-learning/
├── 01-goals.md # Learning goals and evaluation criteria
├── 02-first-principles.md # First principles analysis
├── 03-taxonomy.md # Knowledge taxonomy tree
├── 04-feynman.md # Feynman explanation notes
├── 04b-recursive-questions.md # Question chains and answers
├── 04c-aha-moments.md # Breakthrough insights log
├── 05-resources.md # Learning resources list
├── 06-projects.md # Practice projects
├── 07-feedback.md # Feedback and iteration records
├── 08-review.md # Spaced review plan
└── 09-mindmap.md # Mind mapping source file
Operation Requirements:
- Create topic directory
- Generate documents in sequence
- Initialize 04c-aha-moments.md with a template for tracking insights
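The operations above can be sketched as a small script. `scaffold`, its file list, and the aha-moment template layout are hypothetical helpers, not part of any existing tool:

```python
from pathlib import Path

# File list mirrors the example structure above.
FILES = [
    "01-goals.md", "02-first-principles.md", "03-taxonomy.md",
    "04-feynman.md", "04b-recursive-questions.md", "04c-aha-moments.md",
    "05-resources.md", "06-projects.md", "07-feedback.md",
    "08-review.md", "09-mindmap.md",
]

AHA_TEMPLATE = """# Aha Moment Log

## (date) - (topic)
- What was unclear before:
- What clicked:
- Why it matters:
"""

def scaffold(topic: str, root: str = "workspace/study") -> Path:
    """Create the topic directory and generate the documents in
    sequence; 04c gets the insight-tracking template."""
    base = Path(root) / topic
    base.mkdir(parents=True, exist_ok=True)
    for name in FILES:
        f = base / name
        if not f.exists():
            f.write_text(AHA_TEMPLATE if name == "04c-aha-moments.md" else "")
    return base

# scaffold("machine-learning")  # creates workspace/study/machine-learning/
```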
Recommended Tools
- Mind Mapping: XMind, MindNode, Obsidian
- Note-taking: Notion, Obsidian, Evernote
- Spaced Repetition: Anki, RemNote
- Pomodoro Timer: Forest, Focus
- Question Tracking: Obsidian Daily Notes, Notion Database
Version History
| Version | Date | Changes |
|---|---|---|
| 1.0.0 | 2026-04-30 | Official release - Integrated First Principles, Taxonomy Classification, Feynman Technique, and Recursive Questioning into 9-step learning flow with "Aha Moment" tracking |
| 0.1.0 | 2024-XX-XX | Original Chinese version "格物本质赋能学习法" launched |
Example: Learning Diffusion Models
Step 1: Set Goals
- Goal: Understand and implement a basic diffusion model
- Evaluation: Can explain forward/reverse process and generate images
- Time: 2-3 weeks
Step 2: First Principles
Diffusion essence = Gradual noise addition + Learned denoising
- Why add noise gradually?
- What does "learned denoising" mean?
- How does this connect to thermodynamics?
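The "gradual noise addition" in the first bullet has a closed form in standard DDPM: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps. A minimal NumPy sketch, assuming a linear beta schedule (the function name and schedule values are illustrative):

```python
import numpy as np

def forward_noise(x0, t, betas):
    """Sample x_t ~ q(x_t | x_0) via the DDPM closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    alpha_bar = np.cumprod(1.0 - betas)[t]        # cumulative signal retention
    eps = np.random.randn(*x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

betas = np.linspace(1e-4, 0.02, 1000)             # linear beta schedule
x_heavily_noised = forward_noise(np.ones(4), t=999, betas=betas)  # ~ pure noise
```

Because each step is Gaussian, the whole chain collapses into this single sampling formula, which is what makes the path "tractable."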
Step 3: Taxonomy
Diffusion Models
├── Forward Process (noise scheduling)
├── Reverse Process (denoising network)
├── Training Objective (noise prediction)
├── Sampling (iterative denoising)
└── Applications (image generation, inpainting)
Step 4: Recursive Questioning in Action
Question Chain Example:
Q: "Why do we add noise gradually?"
→ A: "To create a tractable path between data and noise"
Q: "What does 'tractable path' mean?"
→ A: "A path we can reverse mathematically"
Q: "Why do we need to reverse it?"
→ A: "Because generation = going from noise back to data"
Q: "Why start from noise at all?"
→ A: "Noise is easy to sample; data is hard to model directly"
💡 AHA! "Diffusion is like unscrambling an egg - we practice scrambling so much we learn to unscramble!"
Step 5: Multi-Perspective
- Paper: "Denoising Diffusion Probabilistic Models"
- Blog: Lilian Weng's explanation of diffusion models
- Code: Hugging Face Diffusers library
Step 6: Practice
- Implement simple 1D diffusion
- Use diffusers for 2D image generation
- Modify noise schedule and observe effects
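The "iterative denoising" half of the 1D exercise can be sketched as DDPM ancestral sampling. `model` stands in for a trained noise predictor; here a trivial zero-predicting lambda is used only to exercise the loop, so the outputs are not meaningful samples:

```python
import numpy as np

def ddpm_sample(model, betas, shape, rng=None):
    """DDPM ancestral sampling: start from pure noise and apply the
    learned reverse (denoising) step T times. `model(x, t)` is assumed
    to predict the noise eps that was added at step t."""
    rng = rng or np.random.default_rng(0)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    x = rng.standard_normal(shape)                   # x_T ~ N(0, I)
    for t in reversed(range(len(betas))):
        eps_hat = model(x, t)
        # Posterior mean of x_{t-1} given x_t and the predicted noise
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:                                    # no noise on the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

# Trivial stand-in model (predicts zero noise) just to exercise the loop
betas = np.linspace(1e-4, 0.02, 50)
samples = ddpm_sample(lambda x, t: np.zeros_like(x), betas, shape=(8,))
```

Swapping in a small trained MLP for the lambda, then modifying `betas` and watching sample quality change, covers the last bullet above.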
Step 7: Feedback
- Explain to colleague/peer
- Write blog post on understanding
- Compare with other generative models
Step 8: Review
- Revisit aha moments weekly
- Connect to VAEs, GANs, flows
- Apply to new domains (audio, video)
Step 9: Mind Map
Diffusion Models
├── Core Insight: "Learned unscrambling"
├── Forward: Data → Noise (easy)
├── Reverse: Noise → Data (learned)
└── Aha: "Practice destroying to learn creating"