gstack Investigate

v1.0.0

Provides systematic root cause analysis and verified fixes for bugs or errors using a four-phase debugging process without guessing or premature fixes.

by Garry Tan (@garrytan)
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name/description (systematic root-cause debugging) matches the instructions (collect symptoms, read code, reproduce, test hypotheses, implement fixes). The requested operations (git history checks, adding logs, running tests, searching externals) are all coherent with a debugging workflow.
Instruction Scope
The SKILL.md instructs the agent to run commands (e.g., git log), read code, add temporary logging, run the full test suite, and perform external searches after sanitizing data. These are expected for debugging, but the instructions also tell the agent to 'Save the report to memory/', which implies write access to an agent workspace or persistent memory that is not declared elsewhere. It also tells the agent to check 'memory' for prior sessions. These are reasonable for this kind of skill but are not explicitly declared in the metadata.
Install Mechanism
Instruction-only skill with no install spec and no code files — lowest install risk. Nothing is downloaded or executed from untrusted URLs by the skill itself.
Credentials
The skill declares no required environment variables or credentials, which aligns with its non-networked guidance. However, it implicitly requires access to project repositories and test environments (e.g., git, test runners) and the ability to run commands — these runtime capabilities are not listed in requires.binaries or config paths, a minor mismatch between metadata and instructions.
Persistence & Privilege
always:false (not force-included) and model invocation is allowed (platform default). The instruction to save debug reports to memory/ implies the skill will persist artifacts between sessions; this is reasonable for debugging but the metadata did not declare any config paths for persistent storage.
Assessment
This skill is an instruction-only debugging guide and appears to do what it says. Before installing, confirm the agent runtime will (1) have git and any test runners available and (2) be allowed to read the target repository and write to its memory/workspace (the skill tells the agent to save reports to memory/). If you plan to let the agent run tests or modify code, ensure those operations run in a safe environment (not production), and avoid granting it access to credentials or secrets it doesn't need. The SKILL.md advises sanitizing data before external searches — keep that practice. If you need higher assurance, ask the skill author to (a) declare required binaries (git, test tools) and the memory/config paths it will write to, and (b) confirm no network uploads of raw traces or secrets will occur.


750 downloads · 0 stars · 1 version · Updated 1w ago
v1.0.0 · MIT-0

Systematic Debugging

Iron Law

NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST.

Fixing symptoms creates whack-a-mole debugging. Every fix that doesn't address root cause makes the next bug harder to find. Find the root cause, then fix it.


Phase 1: Root Cause Investigation

Gather context before forming any hypothesis.

  1. Collect symptoms: Read the error messages, stack traces, and reproduction steps. If the user hasn't provided enough context, ask ONE question at a time. Don't ask five questions at once.

  2. Read the code: Trace the code path from the symptom back to potential causes. Search for all references, read the logic around the failure point.

  3. Check recent changes:

    git log --oneline -20 -- <affected-files>
    

    Was this working before? What changed? A regression means the root cause is in the diff.

  4. Reproduce: Can you trigger the bug deterministically? If not, gather more evidence before proceeding.

  5. Check memory for prior debugging sessions on the same area. Recurring bugs in the same files are an architectural smell.

Output: "Root cause hypothesis: ..." followed by a specific, testable claim about what is wrong and why.


Phase 2: Pattern Analysis

Check if this bug matches a known pattern:

  • Race condition: intermittent, timing-dependent. Look at concurrent access to shared state.
  • Nil/null propagation: NoMethodError, TypeError. Look for missing guards on optional values.
  • State corruption: inconsistent data, partial updates. Check transactions, callbacks, hooks.
  • Integration failure: timeouts, unexpected responses. Check external API calls and service boundaries.
  • Configuration drift: works locally, fails in staging/prod. Check env vars, feature flags, DB state.
  • Stale cache: shows old data, fixes itself on cache clear. Check Redis, CDN, browser cache.

Also check:

  • Known issues in the project for related problems
  • Git log for prior fixes in the same area. Recurring bugs in the same files are an architectural smell, not a coincidence.

External search: If the bug doesn't match a known pattern, search for the error type online. Sanitize first: strip hostnames, IPs, file paths, SQL, customer data. Search the error category, not the raw message.


Phase 3: Hypothesis Testing

Before writing ANY fix, verify your hypothesis.

  1. Confirm the hypothesis: Add a temporary log statement, assertion, or debug output at the suspected root cause. Run the reproduction. Does the evidence match?

  2. If the hypothesis is wrong: Search for the error (sanitize sensitive data first). Return to Phase 1. Gather more evidence. Do not guess.

  3. 3-strike rule: If 3 hypotheses fail, STOP. Tell the user:

    "3 hypotheses tested, none match. This may be an architectural issue rather than a simple bug."

    Options:

    • Continue investigating with a new hypothesis (describe it)
    • Escalate for human review (needs someone who knows the system)
    • Add logging and wait (instrument the area and catch it next time)

Red flags ... if you see any of these, slow down:

  • "Quick fix for now" ... there is no "for now." Fix it right or escalate.
  • Proposing a fix before tracing data flow ... you're guessing.
  • Each fix reveals a new problem elsewhere ... wrong layer, not wrong code.

Phase 4: Implementation

Once root cause is confirmed:

  1. Fix the root cause, not the symptom. The smallest change that eliminates the actual problem.

  2. Minimal diff: Fewest files touched, fewest lines changed. Resist the urge to refactor adjacent code.

  3. Write a regression test that:

    • Fails without the fix (proves the test is meaningful)
    • Passes with the fix (proves the fix works)
  4. Run the full test suite. No regressions allowed.

  5. If the fix touches >5 files: Flag the blast radius to the user before proceeding. That's large for a bug fix.


Phase 5: Verification & Report

Fresh verification: Reproduce the original bug scenario and confirm it's fixed. This is not optional.

Run the test suite.

Output a structured debug report:

DEBUG REPORT

  • Symptom: what the user observed
  • Root cause: what was actually wrong
  • Fix: what was changed, with file references
  • Evidence: test output, reproduction showing fix works
  • Regression test: location of the new test
  • Related: prior bugs in same area, architectural notes
  • Status: DONE | DONE_WITH_CONCERNS | BLOCKED

Save the report to memory/ with today's date so future sessions can reference it.


Important Rules

  • 3+ failed fix attempts: STOP and question the architecture. The problem is likely the wrong architecture, not a wrong hypothesis.
  • Never apply a fix you cannot verify. If you can't reproduce and confirm, don't ship it.
  • Never say "this should fix it." Verify and prove it. Run the tests.
  • If fix touches >5 files: Flag to user before proceeding.
  • Completion status:
    • DONE ... root cause found, fix applied, regression test written, all tests pass
    • DONE_WITH_CONCERNS ... fixed but cannot fully verify (e.g., intermittent bug, requires staging)
    • BLOCKED ... root cause unclear after investigation, escalated
