Systematic Debugging

v3.0.0

Five-phase debugging framework that enforces root cause investigation before any fix is attempted. Never jump to solutions.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for runesleo/runesleo-systematic-debugging.

Prompt preview: Install & Setup
Install the skill "Systematic Debugging" (runesleo/runesleo-systematic-debugging) from ClawHub.
Skill page: https://clawhub.ai/runesleo/runesleo-systematic-debugging
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install runesleo/runesleo-systematic-debugging

ClawHub CLI

Package manager switcher

npx clawhub@latest install runesleo-systematic-debugging
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (systematic debugging) match the SKILL.md: it directs codebase searches, git inspection, reproduction steps, instrumentation, and hypothesis-driven fixes. The skill declares no required capabilities, so there is nothing disproportionate to its stated purpose.
Instruction Scope
Runtime instructions operate on local development artifacts (errors, code, git logs, docs, logs, and 'memory' or past conversations). Those actions are expected for a debugging framework. The instructions do not direct data to external endpoints or request unrelated system credentials. Note: it explicitly suggests adding diagnostic instrumentation and reading project artifacts, which is within scope but may touch sensitive logs or config.
Install Mechanism
No install spec or code files are present (instruction-only). Nothing will be downloaded or written by an installer.
Credentials
The skill requests no environment variables, credentials, or config paths. The suggested commands (grep, git) operate on the repository and local files, which is proportionate to debugging.
Persistence & Privilege
The 'always' flag is false, and there is no install step that alters agent configuration. Model invocation is allowed (platform default), but that is expected for a user-invocable debugging skill and is not combined with any privileged access.
Assessment
This is an instruction-only debugging playbook and is internally coherent with its stated purpose. Before enabling or allowing autonomous runs, be aware that following the steps requires the agent (or you) to read local project files, git history, logs, and possibly add diagnostic logging — those artifacts can contain sensitive data (credentials, PII). If you plan to share outputs externally, sanitize logs first. Because there's no installer or external network calls specified, the main risk is accidental exposure of sensitive info while collecting diagnostics. If you prefer to limit risk, run the process manually or disable autonomous invocation for the skill.


Tags: debugging, developer-tools, latest, workflow
6.2k downloads · 8 stars · 1 version
Updated 1mo ago
v3.0.0
MIT-0

Systematic Debugging

Overview

Random fixes waste time and create new bugs. Quick patches mask underlying issues.

Core principle: ALWAYS find root cause before attempting fixes. Symptom fixes are failure.

Violating the letter of this process is violating the spirit of debugging.

The Iron Law

NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
NO INVESTIGATION WITHOUT CONTEXT RECALL FIRST

If you haven't completed Phase 0, you cannot proceed to Phase 1. If you haven't completed Phase 1, you cannot propose fixes.

When to Use

Use for ANY technical issue:

  • Test failures
  • Bugs in production
  • Unexpected behavior
  • Performance problems
  • Build failures
  • Integration issues

Use this ESPECIALLY when:

  • Under time pressure (emergencies make guessing tempting)
  • "Just one quick fix" seems obvious
  • You've already tried multiple fixes
  • Previous fix didn't work
  • You don't fully understand the issue

Don't skip when:

  • Issue seems simple (simple bugs have root causes too)
  • You're in a hurry (rushing guarantees rework)
  • Manager wants it fixed NOW (systematic is faster than thrashing)

The Five Phases

You MUST complete each phase before proceeding to the next.

Phase 0: Context Recall (MANDATORY FIRST STEP)

BEFORE doing ANYTHING else:

  1. Extract Keywords from Error

    • What's the error type? (OOM, timeout, connection, type error...)
    • What component? (server, browser, API, database...)
    • What area of the codebase?
  2. Search for Prior Knowledge

    • Check project docs, MEMORY files, or past conversations
    • Search codebase for similar error patterns: grep -r "ErrorType" .
    • Check git log for related recent changes: git log --oneline -20
  3. Review Results

    • Found relevant experience? -> Apply directly, skip to Phase 4
    • Found partial match? -> Use as starting point for Phase 1
    • Nothing found? -> Proceed to Phase 1, remember to record solution later
  4. Output Format

    Context Recall:
    - Query: "xxx"
    - Found: [description of related knowledge]
    - Action: [apply experience / continue investigation / no match]
    

VIOLATION: Proceeding to Phase 1 without Context Recall output = process failure.
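As a sketch, the keyword-extraction step of Context Recall could look like the following (Python; the error patterns and the example message are illustrative assumptions, not part of the skill):

```python
import re

# Hypothetical sketch: pull searchable keywords out of an error message
# so they can be fed into grep / git log searches (Phase 0, steps 1-2).
ERROR_PATTERNS = [
    r"\b[A-Z][a-zA-Z]*Error\b",              # exception names, e.g. TypeError
    r"\bE[A-Z]+\b",                          # errno-style codes, e.g. ECONNREFUSED
    r"\b(?:timeout|OOM|connection|segfault)\b",  # common failure-mode words
]

def extract_keywords(error_text: str) -> list[str]:
    """Return de-duplicated keywords worth searching prior knowledge for."""
    found: list[str] = []
    for pattern in ERROR_PATTERNS:
        for match in re.findall(pattern, error_text, flags=re.IGNORECASE):
            if match not in found:
                found.append(match)
    return found

print(extract_keywords("TypeError: connection timeout after 30s"))
# -> ['TypeError', 'connection', 'timeout']
```

Each keyword then becomes a `grep -r` query and a `git log --grep` filter, and the results feed the Context Recall output block above.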


Phase 1: Root Cause Investigation

BEFORE attempting ANY fix:

  1. Read Error Messages Carefully

    • Don't skip past errors or warnings
    • They often contain the exact solution
    • Read stack traces completely
    • Note line numbers, file paths, error codes
  2. Reproduce Consistently

    • Can you trigger it reliably?
    • What are the exact steps?
    • Does it happen every time?
    • If not reproducible -> gather more data, don't guess
  3. Check Recent Changes

    • What changed that could cause this?
    • Git diff, recent commits
    • New dependencies, config changes
    • Environmental differences
  4. Gather Evidence in Multi-Component Systems

    WHEN system has multiple components (CI -> build -> signing, API -> service -> database):

    BEFORE proposing fixes, add diagnostic instrumentation:

    For EACH component boundary:
      - Log what data enters component
      - Log what data exits component
      - Verify environment/config propagation
      - Check state at each layer
    
    Run once to gather evidence showing WHERE it breaks
    THEN analyze evidence to identify failing component
    THEN investigate that specific component
    
  5. Trace Data Flow

    • Where does bad value originate?
    • What called this with bad value?
    • Keep tracing up until you find the source
    • Fix at source, not at symptom
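The boundary-instrumentation idea in step 4 can be sketched as a small logging decorator (Python; `parse_config` and `connect` are hypothetical stand-ins for real component boundaries):

```python
import functools
import logging

logging.basicConfig(level=logging.DEBUG, format="%(message)s")
log = logging.getLogger("boundary")

def trace_boundary(func):
    """Log what enters and exits a component boundary (Phase 1, step 4).

    Wrap each function on a boundary, run the failing scenario once,
    then read the log to see WHERE the data first goes wrong.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.debug("-> %s args=%r kwargs=%r", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        log.debug("<- %s returned %r", func.__name__, result)
        return result
    return wrapper

@trace_boundary
def parse_config(raw: str) -> dict:
    # Hypothetical first component: suppose a bad value could enter here.
    return {"timeout": raw.strip()}

@trace_boundary
def connect(config: dict) -> str:
    # Hypothetical second component consuming the first one's output.
    return f"connecting with timeout={config['timeout']}"

connect(parse_config(" 30 "))
```

One run produces an entry/exit line per boundary, so the failing component is identified from evidence rather than guessed.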

Phase 2: Pattern Analysis

Find the pattern before fixing:

  1. Find Working Examples

    • Locate similar working code in same codebase
    • What works that's similar to what's broken?
  2. Compare Against References

    • If implementing pattern, read reference implementation COMPLETELY
    • Don't skim - read every line
    • Understand the pattern fully before applying
  3. Identify Differences

    • What's different between working and broken?
    • List every difference, however small
    • Don't assume "that can't matter"
  4. Understand Dependencies

    • What other components does this need?
    • What settings, config, environment?
    • What assumptions does it make?
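One way to make step 3 mechanical rather than eyeball-based is a textual diff of the working and broken examples (Python's standard `difflib`; the config snippets are invented for illustration):

```python
import difflib

# Hypothetical sketch: list EVERY difference between a working example
# and the broken code (Phase 2, step 3) instead of assuming
# "that can't matter".
working = """\
retries = 3
timeout = 30
verify_ssl = True
"""

broken = """\
retries = 3
timeout = 30
verify_ssl = False
"""

diff = list(difflib.unified_diff(
    working.splitlines(), broken.splitlines(),
    fromfile="working", tofile="broken", lineterm="",
))
print("\n".join(diff))
```

Every `-`/`+` pair in the output is a difference to explain before fixing, however small it looks.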

Phase 3: Hypothesis and Testing

Scientific method:

  1. Form Single Hypothesis

    • State clearly: "I think X is the root cause because Y"
    • Write it down
    • Be specific, not vague
  2. Test Minimally

    • Make the SMALLEST possible change to test hypothesis
    • One variable at a time
    • Don't fix multiple things at once
  3. Verify Before Continuing

    • Did it work? Yes -> Phase 4
    • Didn't work? Form NEW hypothesis
    • DON'T add more fixes on top
  4. When You Don't Know

    • Say "I don't understand X"
    • Don't pretend to know
    • Ask for help
    • Research more
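The one-variable-at-a-time discipline above can be sketched with a planted root cause standing in for the real bug (all names here are hypothetical):

```python
# Hypothetical sketch of Phase 3: state one hypothesis, change ONE
# variable, and check the result before forming the next hypothesis.

def reproduce(payload: dict) -> bool:
    """Stand-in for the failing scenario: returns True if the bug fires.
    The (planted) root cause here is a missing 'encoding' key."""
    return "encoding" not in payload

baseline = {"path": "data.csv"}

hypotheses = [
    ("larger buffer fixes it", {"buffer_size": 8192}),
    ("explicit encoding fixes it", {"encoding": "utf-8"}),
]

for claim, single_change in hypotheses:
    trial = {**baseline, **single_change}   # exactly one variable changed
    still_broken = reproduce(trial)
    print(f"{claim}: {'rejected' if still_broken else 'confirmed'}")
    if not still_broken:
        break   # confirmed hypothesis -> proceed to Phase 4
```

A rejected hypothesis leads to a new single hypothesis, never to stacking additional changes on top of the failed one.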

Phase 4: Implementation

Fix the root cause, not the symptom:

  1. Create Failing Test Case

    • Simplest possible reproduction
    • Automated test if possible
    • One-off test script if no framework
    • MUST have before fixing
  2. Implement Single Fix

    • Address the root cause identified
    • ONE change at a time
    • No "while I'm here" improvements
    • No bundled refactoring
  3. Verify Fix

    • Test passes now?
    • No other tests broken?
    • Issue actually resolved?
  4. If Fix Doesn't Work

    • STOP
    • Count: How many fixes have you tried?
    • If < 3: Return to Phase 1, re-analyze with new information
    • If >= 3: STOP and question the architecture (step 5 below)
    • DON'T attempt Fix #4 without architectural discussion
  5. If 3+ Fixes Failed: Question Architecture

    Pattern indicating architectural problem:

    • Each fix reveals new shared state/coupling/problem in different place
    • Fixes require "massive refactoring" to implement
    • Each fix creates new symptoms elsewhere

    STOP and question fundamentals:

    • Is this pattern fundamentally sound?
    • Are we "sticking with it through sheer inertia"?
    • Should we refactor architecture vs. continue fixing symptoms?

    Discuss with the user before attempting more fixes.
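The test-first step can be sketched as a one-off script, per step 1's "one-off test script if no framework" (the `normalize_id` function and its bug are invented for illustration):

```python
# Hypothetical minimal reproduction (Phase 4, step 1): a failing test
# written BEFORE the fix, so the fix is proven rather than assumed.

def normalize_id(raw: str) -> str:
    # Post-fix version. Suppose the pre-fix code returned raw.lower()
    # only; the traced root cause was unstripped whitespace producing
    # duplicate IDs downstream.
    return raw.strip().lower()

def test_whitespace_is_stripped():
    # This assertion failed against the pre-fix implementation.
    assert normalize_id(" User-42 ") == "user-42"

def test_existing_behavior_unchanged():
    # Guard against regressions while fixing (Phase 4, step 3).
    assert normalize_id("user-7") == "user-7"

test_whitespace_is_stripped()
test_existing_behavior_unchanged()
print("all checks passed")
```

The first test reproduces the bug, the second verifies no other behavior broke; both must pass before the fix counts as done.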

Red Flags - STOP and Follow Process

If you catch yourself thinking:

  • "Quick fix for now, investigate later"
  • "Just try changing X and see if it works"
  • "Add multiple changes, run tests"
  • "Skip the test, I'll manually verify"
  • "It's probably X, let me fix that"
  • "I don't fully understand but this might work"
  • "Pattern says X but I'll adapt it differently"
  • "Here are the main problems: [lists fixes without investigation]"
  • Proposing solutions before tracing data flow
  • "One more fix attempt" (when already tried 2+)
  • Each fix reveals new problem in different place

ALL of these mean: STOP. Return to Phase 1.

If 3+ fixes failed: Question the architecture (see Phase 4, step 5)

Common Rationalizations

| Excuse | Reality |
| --- | --- |
| "Issue is simple, don't need process" | Simple issues have root causes too. Process is fast for simple bugs. |
| "Emergency, no time for process" | Systematic debugging is FASTER than guess-and-check thrashing. |
| "Just try this first, then investigate" | First fix sets the pattern. Do it right from the start. |
| "I'll write test after confirming fix works" | Untested fixes don't stick. Test first proves it. |
| "Multiple fixes at once saves time" | Can't isolate what worked. Causes new bugs. |
| "Reference too long, I'll adapt the pattern" | Partial understanding guarantees bugs. Read it completely. |
| "I see the problem, let me fix it" | Seeing symptoms != understanding root cause. |
| "One more fix attempt" (after 2+ failures) | 3+ failures = architectural problem. Question pattern, don't fix again. |

Quick Reference

| Phase | Key Activities | Success Criteria |
| --- | --- | --- |
| 0. Context Recall | Extract keywords, search prior knowledge | Output recall summary |
| 1. Root Cause | Read errors, reproduce, check changes, gather evidence | Understand WHAT and WHY |
| 2. Pattern | Find working examples, compare | Identify differences |
| 3. Hypothesis | Form theory, test minimally | Confirmed or new hypothesis |
| 4. Implementation | Create test, fix, verify | Bug resolved, tests pass |

Real-World Impact

From debugging sessions:

  • Systematic approach: 15-30 minutes to fix
  • Random fixes approach: 2-3 hours of thrashing
  • First-time fix rate: 95% vs 40%
  • New bugs introduced: Near zero vs common
