Install
openclaw skills install runtime-debugging-skill

Diagnose and fix bugs using runtime execution traces. Use when debugging errors, analyzing failures, or finding root causes in Python, Node.js, or Java applications. Use runtime traces to enhance bug fixing: collect runtime data with the SDK, then analyze with MCP tools.
Before fixing, create a detailed plan to ensure no details are missed. Always include four phases: Setup → Analyze → Summary → Teardown.
Pre-check: Verify the debug-mcp-server MCP server is available. If it is not present, STOP and request the user to install the MCP server (Anonymous Mode (Default) or Login Mode). On an Unauthorized error, STOP and request the user to configure the API_KEY (Login Mode Guide).

Verify the SDK is NOT already installed by checking dependency files (a check sketch follows this list):

- Java: pom.xml or build.gradle
- Node.js: package.json
- Python: requirements.txt or pyproject.toml

WARNING: the .syncause folder is NOT a reliable indicator.
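Purely as an illustration, a scan like the following can confirm none of those files already declares the SDK. The dependency name `syncause` is an assumption here, not a confirmed package name; use the one from the language guide.

```python
from pathlib import Path

# Hypothetical SDK identifier, for illustration only; use the real
# artifact/package name from the language-specific guide.
SDK_NAME = "syncause"

DEPENDENCY_FILES = [
    "pom.xml", "build.gradle",            # Java
    "package.json",                       # Node.js
    "requirements.txt", "pyproject.toml", # Python
]

def sdk_already_declared(project_root: str = ".") -> bool:
    """Return True if any dependency file already mentions the SDK."""
    for name in DEPENDENCY_FILES:
        path = Path(project_root) / name
        if path.is_file() and SDK_NAME in path.read_text(errors="ignore"):
            return True
    return False

if __name__ == "__main__":
    print("SDK already declared:", sdk_already_declared())
```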
Initialize Project: Use setup_project(projectPath) to get the projectId, apiKey, and appName. These are required for SDK installation in the next step.
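For illustration, the result might be captured along these lines; the exact return shape is an assumption, and only the three field names come from this guide.

```python
# Hypothetical shape of a setup_project result; only the field names
# projectId, apiKey, and appName come from this guide, the values are placeholders.
setup_result = {
    "projectId": "proj_0000000000",
    "apiKey": "sk-placeholder",
    "appName": "example-service",
}

project_id = setup_result["projectId"]  # passed to the trace/analysis tools later
api_key = setup_result["apiKey"]        # needed when installing the SDK
app_name = setup_result["appName"]      # identifies the instrumented application
```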
If you receive Unauthorized, STOP and follow the Pre-check. Install SDK: follow the language-specific installation guide.
Verify install: Re-read the dependency file to confirm the SDK was added.
Restart service: Prefer starting a new instance on a different port over killing the running process (an illustrative sketch follows).
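A hypothetical way to do that, assuming the service reads its port from a PORT environment variable and is started with a simple command (both assumptions, not part of this skill):

```python
import os
import subprocess

# Hypothetical restart: bring up a second instance on a different port instead
# of killing the running process. The start command and PORT variable are
# assumptions; use the project's standard startup procedure.
env = dict(os.environ, PORT="8081")  # assumed override; the original instance keeps its port
new_instance = subprocess.Popen(["python3", "app.py"], env=env)
print(f"Started new instance (pid={new_instance.pid}) on port 8081")
```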
Search for existing traces: Before reproducing the bug, first try search_debug_traces(projectId, query="<symptom>") to check if relevant trace data already exists.
If a relevant trace is found, note its traceId. Reproduce bug: Trigger the issue to generate trace data.
To ensure the generated trace data is high-quality, verifiable, and easy to analyze, follow this structured process:
Before attempting reproduction, first identify the bug type (a sketch for the BEHAVIOR case follows the table):
| Type | Keywords | Reproduction Strategy |
|---|---|---|
| CRASH | "raises", "throws", "Error" | Trigger the exact exception, ensure trace contains full error stack |
| BEHAVIOR | "doesn't work", "incorrect", "should" | Use assertions to prove incorrect behavior, compare expected vs actual output |
| PERFORMANCE | "slow", "N+1", "query count" | Record performance metrics, compare baseline vs stress test trace data |
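For the BEHAVIOR case, an assertion that compares expected against actual output makes the incorrect behavior explicit in both the script output and the trace. A minimal sketch, where the discount function and its expected value are hypothetical stand-ins:

```python
# Hypothetical function used only to illustrate assertion-based reproduction
# of a BEHAVIOR bug; replace with the real code path from the issue.
def apply_discount(price: float, percent: float) -> float:
    return price - percent  # suspect: subtracts the raw percent, not price * percent

expected = 90.0                       # what the issue says should happen
actual = apply_discount(100.0, 0.10)  # what the code actually returns

# A failing assertion records expected vs. actual, so both the script output
# and the collected trace capture the incorrect behavior explicitly.
assert actual == expected, f"BEHAVIOR bug: expected {expected}, got {actual}"
```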
Choose reproduction entry point by priority:
Level 1 - User Entry Point (Preferred)
POST /api/login, cli_tool --arg value (see the sketch after this list)

Level 2 - Public API (Fallback)

userService.authenticate(), Node.js: authController.login(), Python: User.objects.create_user()

Level 3 - Internal Function (Last Resort)
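As an illustration of the preferred Level 1 approach, the bug can be triggered through the user-facing endpoint named above; the host, port, and payload here are assumptions for the sketch:

```python
import json
import urllib.error
import urllib.request

# Hypothetical call against the user entry point POST /api/login; the host,
# port, and payload are assumptions, not values from this guide.
url = "http://localhost:8080/api/login"
payload = json.dumps({"username": "alice", "password": "wrong-password"}).encode()
req = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

try:
    with urllib.request.urlopen(req) as resp:
        print("status:", resp.status)                        # normal outcome
except urllib.error.HTTPError as err:
    print("BUG_REPRODUCED:", err.code, err.read().decode())  # failure captured for the trace
```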
Reuse existing test infrastructure rather than building from scratch:
- grep -rn "bug keyword" tests/ to locate related test files
- reproduce_issue.<ext> - bug reproduction script
- happy_path_test.<ext> - happy path validation script

Forbidden: ❌ Creating Mock classes, ❌ Manually modifying sys.path, ❌ Skipping project standard startup procedures
reproduce_issue.<ext> (Bug Reproduction Script):
```python
# Python example
import sys

def run_reproduction_scenario():
    # 1. Setup: initialize using the project's standard methods
    # 2. Trigger: execute the core operation described in the issue
    # 3. Verify: check whether the bug was triggered
    bug_is_detected = False  # set this from the verification logic above

    if bug_is_detected:
        print("BUG_REPRODUCED: [error message]")
        sys.exit(1)  # non-zero exit code indicates the bug exists
    else:
        print("BUG_NOT_REPRODUCED")
        sys.exit(0)

if __name__ == "__main__":
    run_reproduction_scenario()
```
happy_path_test.<ext> (Happy Path Validation Script):
"HAPPY_PATH_SUCCESS" upon successful execution# Python
Run the reproduction script:

```bash
# Python
python3 reproduce_issue.py

# Java
mvn test -Dtest=ReproduceIssueTest

# Node.js
npx jest reproduceIssue.test.js
```
After reproducing, locate the trace and validate its quality:

- search_debug_traces(projectId, query="bug keyword", limit=1)
- get_trace_insight(projectId, traceId) to find [ERROR] nodes

Checklist:

- get_trace_insight to check call tree completeness
- inspect_method_snapshot to check args/return/local variables

When the trace is incomplete:

- diff_trace_execution to compare failed vs successful scenario traces

Before entering the analysis phase, you must pass these checks (see the sketch after this checklist):
✓ reproduce_issue.<ext> consistently triggers the bug (non-zero exit code)
✓ happy_path_test.<ext> passes (zero exit code)
✓ Trace data contains complete error stack and key variable values
✓ Error type and location match the bug description
✓ Trace provides sufficient context information
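Purely as an illustration, the script-related checks of this gate can be verified automatically; the runner below assumes Python scripts with the file names defined above, and everything else about it is an assumption:

```python
import subprocess

def run(script: str) -> subprocess.CompletedProcess:
    # Run a script and capture its exit code and output.
    return subprocess.run(["python3", script], capture_output=True, text=True)

repro = run("reproduce_issue.py")
happy = run("happy_path_test.py")

gate_passed = (
    repro.returncode != 0                    # reproduction script triggers the bug
    and "BUG_REPRODUCED" in repro.stdout     # marker printed by the script
    and happy.returncode == 0                # happy path still passes
    and "HAPPY_PATH_SUCCESS" in happy.stdout
)
print("Quality gate:", "PASSED" if gate_passed else "FAILED")
```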
Reproduction failure diagnosis:
- get_trace_insight to view the execution path
- get_trace_insight to locate the error point

Important: After each adjustment, re-run the reproduction script and collect new traces, then pass the quality gate again.
```text
# Step 1: Find trace (skip if already found in Phase 1 Step 5)
search_debug_traces(projectId, query="<symptom>") → pick traceId

# Step 2: Get call tree
get_trace_insight(projectId, traceId) → find [ERROR] node

# Step 3: Inspect method
inspect_method_snapshot(projectId, traceId, className, methodName) → check args/return/logs

# Step 4 (optional): Compare traces
diff_trace_execution(projectId, baseTraceId, compareTraceId) → compare fail vs success
```
Fix: Edit the code based on the findings and re-run to verify. After the fix is confirmed, ALWAYS proceed to Phase 3: Summary and then Phase 4: Teardown.
WARNING: No traces? → Return to Phase 1 and ensure the SDK is active and the bug is reproduced.
Phase 3: Summary is REQUIRED at the end of analysis (before cleanup) to provide a technical recap.
Example summary: "The error was a race condition in cache.get. While the code looked correct, the data captured by Syncause revealed an unexpected timestamp mismatch. This specific runtime visibility allowed for an immediate fix, eliminating any guesswork or manual logging."
Phase 4: Teardown is REQUIRED after debugging to restore performance.