Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Meta Debugger

v1.0.0

AI-powered self-debugging system that identifies, analyzes, and fixes errors automatically. Learns from past errors, builds error patterns, generates fix sug...

0 stars · 95 downloads · 1 version (current) · 1 version (all-time)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for jason-aka-chen/meta-debugger.

Prompt Preview: Install & Setup
Install the skill "Meta Debugger" (jason-aka-chen/meta-debugger) from ClawHub.
Skill page: https://clawhub.ai/jason-aka-chen/meta-debugger
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install meta-debugger

ClawHub CLI


npx clawhub@latest install meta-debugger
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description (self-debugging, generate/apply fixes, learn from past errors) align with the code and SKILL.md. However, the Installation section suggests running `pip install json traceback ast`, which are Python stdlib modules and not pip packages — this is incoherent and unnecessary. That mismatch looks like sloppy documentation and reduces confidence in maintenance quality.
Instruction Scope
SKILL.md instructs the agent to analyze errors and to generate and apply fixes (code patches, configuration fixes, automatic application with rollback). Those capabilities are powerful: applying fixes autonomously can modify code or configs across the project. The documentation does not clearly constrain which files/paths may be changed, how patches are generated/applied, or what safeguards exist beyond a generic 'safe_mode' flag. This is scope-creep relative to simple error analysis and requires human review and sandboxing before use.
Install Mechanism
The registry shows no install spec (instruction-only plus a code file). That is lower risk because nothing is being automatically downloaded at install time. The one anomaly is the SKILL.md pip instruction to install standard-library modules — this is incorrect rather than malicious, but it is an incoherence that suggests the skill's documentation hasn't been reviewed.
Credentials
The skill requests no environment variables or external credentials, which is appropriate. The implementation sets a default storage path under the user's home (storage_path defaults to ~/.meta_debugger/<name>), so the skill will persist error and fix history locally; SKILL.md does not clearly document what user data (contexts, stack traces) will be recorded. Persisting contextual data may include sensitive inputs unless explicitly filtered.
Persistence & Privilege
The skill does not request 'always: true' and is user-invocable only. It does create a per-user storage path and keeps internal histories/patterns, which gives it ongoing local presence (data persisted to disk). That is not inherently malicious but should be considered when enabling auto_fix or using in production; the skill does not request system-wide privilege changes or modify other skills.
What to consider before installing
This skill appears to implement the advertised debugging and auto-fix features, but there are a few red flags to consider before installing or enabling autonomous fixes:

- Do not enable auto_fix in production yet. Test in a controlled environment where file changes are reversible (use source control or a sandbox). The skill can generate and apply patches; you should confirm exactly which files it will touch.
- The SKILL.md 'pip install json traceback ast' line is wrong; these are stdlib modules. Treat this as a sign the docs or packaging may be sloppy; request clarification from the author or inspect the code yourself.
- Inspect the full meta_debugger.py implementation (especially apply_fix, generate_fixes, and any persistence code) to see whether it writes files, runs shell commands, or makes network calls. The provided file sets a default storage path (~/.meta_debugger) and records error/context history; ensure sensitive inputs are filtered or not stored if that matters to you.
- If you plan to run with auto_fix=True or allow the agent to invoke the skill autonomously, restrict its permissions (run under a limited user) and ensure backups/CI checks are in place so accidental or incorrect patches can be detected and rolled back.
- If you need higher assurance, ask the owner for: (1) the full source code and a description of how apply_fix modifies files, (2) whether any remote endpoints exist for logging/telemetry, and (3) explicit data-retention and filtering policies for recorded contexts. If those answers are not available, run only in development/sandbox.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97e02s5p533nqqeb8s3jp8e7583dr3m
95 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Meta Debugger

Self-diagnosing and self-healing AI capability.

Features

1. Error Detection

  • Runtime Monitoring: Detect errors in real-time
  • Pattern Recognition: Identify error patterns
  • Anomaly Detection: Find unusual behaviors
  • Log Analysis: Parse and analyze logs

2. Root Cause Analysis

  • Stack Trace Analysis: Understand error origins
  • Context Tracking: Track what led to error
  • Similar Errors: Find related past errors
  • Impact Assessment: Evaluate error severity

3. Fix Generation

  • Solution Suggestions: Generate fix candidates
  • Code Patches: Create actual code changes
  • Configuration Fixes: Fix config issues
  • Workarounds: Suggest alternative approaches

4. Automatic Fix

  • Safe Application: Apply fixes safely
  • Rollback Support: Undo if needed
  • Test Validation: Verify fix works
  • Learning Loop: Learn from results

5. Prevention

  • Pattern Building: Build error patterns
  • Pre-flight Checks: Validate before execution
  • Guard Rails: Add safety checks
  • Monitoring: Ongoing error watch
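The "Pre-flight Checks" and "Guard Rails" ideas above can be sketched in a few lines. This is an illustrative toy, not the skill's implementation; `add_guardrail` and `run_with_guardrails` here are standalone stand-ins for whatever the skill does internally:

```python
# Toy guardrail registry: each check inspects the arguments before the
# real function runs and returns an error string (or None if OK).
guardrails = []

def add_guardrail(check):
    """Register a callable that returns an error string or None."""
    guardrails.append(check)

def run_with_guardrails(func, *args, **kwargs):
    """Run every registered pre-flight check, then call the function."""
    for check in guardrails:
        problem = check(*args, **kwargs)
        if problem is not None:
            raise ValueError(f"pre-flight check failed: {problem}")
    return func(*args, **kwargs)

add_guardrail(lambda x: None if isinstance(x, int) else "x must be an int")

run_with_guardrails(lambda x: x * 2, 21)  # passes the check, returns 42
```

The point of the pattern is that bad inputs fail fast with a named check, before any side effects happen.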

Installation

pip install json traceback ast

Usage

Initialize Debugger

from meta_debugger import MetaDebugger

debugger = MetaDebugger(
    name="my_assistant",
    auto_fix=True,
    safe_mode=True
)

Register Error Handlers

@debugger.error_handler
def handle_api_error(error, context):
    """Custom error handler"""
    return {
        'action': 'retry',
        'max_retries': 3,
        'backoff': 'exponential'
    }

@debugger.error_handler  
def handle_timeout(error, context):
    """Handle timeout errors"""
    return {
        'action': 'increase_timeout',
        'new_timeout': 60
    }

Wrap Functions

import requests

@debugger.wrap
def call_api(url, params):
    """Function that might fail"""
    return requests.get(url, params=params)

Manual Debug

# Analyze an error
analysis = debugger.analyze(
    error=ValueError("Invalid input"),
    context={'input': user_input, 'function': 'process_data'}
)

print(analysis)
# {
#     'root_cause': 'Type mismatch',
#     'severity': 'medium',
#     'suggestions': [
#         'Convert input to correct type',
#         'Add input validation'
#     ]
# }

# Apply fix
result = debugger.apply_fix(analysis)

Error History

# Get error patterns
patterns = debugger.get_error_patterns()

# Get common fixes
fixes = debugger.get_common_fixes()

# Get prevention suggestions
prevention = debugger.get_prevention_tips()

API Reference

Error Handling

| Method | Description |
| --- | --- |
| `@error_handler` | Decorator for error handlers |
| `register_handler(type, handler)` | Register custom handler |
| `handle(error, context)` | Handle an error |

Analysis

| Method | Description |
| --- | --- |
| `analyze(error, context)` | Analyze error root cause |
| `get_stack_trace(error)` | Parse stack trace |
| `find_similar(error)` | Find similar past errors |

Fix Generation

| Method | Description |
| --- | --- |
| `generate_fixes(error)` | Generate fix candidates |
| `rank_fixes(fixes)` | Rank fixes by probability |
| `apply_fix(fix)` | Apply a fix |

Prevention

| Method | Description |
| --- | --- |
| `add_guardrail(check)` | Add pre-execution check |
| `validate_input(input, rules)` | Validate inputs |
| `build_pattern(error)` | Build error pattern |

Learning

| Method | Description |
| --- | --- |
| `record_error(error, context)` | Record error for learning |
| `record_fix(error, fix, success)` | Record fix result |
| `get_insights()` | Get learned insights |

Error Patterns

ERROR_PATTERNS = {
    "timeout": {
        "causes": ["network", "server_load", "query_complexity"],
        "fixes": ["increase_timeout", "retry", "cache"],
        "prevention": ["timeout_guards", "circuit_breaker"]
    },
    "value_error": {
        "causes": ["type_mismatch", "invalid_format", "out_of_range"],
        "fixes": ["type_conversion", "validation", "default_value"],
        "prevention": ["input_validation", "schema_check"]
    },
    "connection_error": {
        "causes": ["network_down", "server_unavailable", "auth_failed"],
        "fixes": ["retry", "reconnect", "fallback"],
        "prevention": ["health_check", "load_balancing"]
    }
}
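As a rough sketch of how a table like ERROR_PATTERNS could drive fix selection: map an exception instance to a pattern key, then look up the candidate fixes. `classify` is an illustrative helper, not part of the skill's documented API, and the dict below is a trimmed copy of the table above:

```python
# Trimmed copy of the pattern table above (fixes only, for brevity).
ERROR_PATTERNS = {
    "timeout": {"fixes": ["increase_timeout", "retry", "cache"]},
    "value_error": {"fixes": ["type_conversion", "validation", "default_value"]},
    "connection_error": {"fixes": ["retry", "reconnect", "fallback"]},
}

def classify(error):
    """Map an exception instance to a pattern key (illustrative)."""
    if isinstance(error, TimeoutError):
        return "timeout"
    if isinstance(error, ValueError):
        return "value_error"
    if isinstance(error, ConnectionError):
        return "connection_error"
    return None

key = classify(ValueError("Invalid input"))      # "value_error"
fixes = ERROR_PATTERNS[key]["fixes"]             # candidate fixes, best first
```

A real classifier would likely also inspect the message and stack trace, but the lookup shape stays the same.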

Fix Strategies

Retry Strategy

{
    'strategy': 'retry',
    'max_attempts': 3,
    'backoff': 'exponential',
    'backoff_base': 2,
    'max_delay': 60
}
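One plausible reading of the retry strategy above, sketched as code: the delay before attempt n is `backoff_base ** n`, capped at `max_delay`. The parameter names mirror the strategy dict, but the implementation itself is an assumption, not the skill's code:

```python
import time

def retry(func, max_attempts=3, backoff_base=2, max_delay=60, sleep=time.sleep):
    """Call func, retrying with exponential backoff on any exception."""
    for attempt in range(max_attempts):
        try:
            return func()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: propagate the last error
            sleep(min(backoff_base ** attempt, max_delay))

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

# `sleep` is injectable so tests don't actually wait.
result = retry(flaky, sleep=lambda delay: None)  # succeeds on the third attempt
```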

Fallback Strategy

{
    'strategy': 'fallback',
    'primary': 'api_v1',
    'fallback': 'api_v2',
    'condition': 'primary_unavailable'
}

Circuit Breaker

{
    'strategy': 'circuit_breaker',
    'failure_threshold': 5,
    'timeout': 60,
    'half_open_requests': 3
}
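The circuit-breaker fields above can be read as: open the circuit after `failure_threshold` consecutive failures, reject calls while open, and allow a trial call after `timeout` seconds. A simplified sketch follows (a single trial call stands in for `half_open_requests`; this is not the skill's implementation):

```python
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=5, timeout=60, clock=time.monotonic):
        self.failure_threshold = failure_threshold  # consecutive failures before opening
        self.timeout = timeout                      # seconds to stay open
        self.clock = clock                          # injectable for testing
        self.failures = 0
        self.opened_at = None

    def call(self, func):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.timeout:
                raise RuntimeError("circuit open; call rejected")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0  # any success closes the circuit
        return result
```

Injecting `clock` keeps the timeout behaviour testable without real waiting.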

Default Value

{
    'strategy': 'default',
    'field': 'result',
    'default': {'status': 'unknown'}
}

Example: Full Usage

from meta_debugger import MetaDebugger

# Initialize
debugger = MetaDebugger("production_assistant")

# Register handlers
@debugger.error_handler
def handle_api_error(error, context):
    if "timeout" in str(error).lower():
        return {'action': 'retry', 'max_retries': 3}
    elif "auth" in str(error).lower():
        return {'action': 'refresh_token'}
    return {'action': 'log_and_continue'}

# Wrap risky function
@debugger.wrap
def fetch_stock_data(symbol):
    # This might fail; `api` stands in for an HTTP client defined elsewhere
    return api.get(f"/stock/{symbol}")

# Use it
try:
    data = fetch_stock_data("600519")
except Exception as e:
    # Debugger automatically handles
    debugger.handle(e, {'function': 'fetch_stock_data', 'symbol': '600519'})

Integration

With Skills

class MySkill:
    def __init__(self):
        self.debugger = MetaDebugger()
    
    def execute(self, input):
        try:
            return self._execute(input)
        except Exception as e:
            return self.debugger.handle(e, {'skill': 'MySkill', 'input': input})

With OpenClaw

@hookimpl
def on_error(error, context):
    debugger = MetaDebugger()
    return debugger.handle(error, context)

Metrics

| Metric | Description |
| --- | --- |
| `error_rate` | Errors per 1000 calls |
| `fix_success_rate` | Fraction of applied fixes that succeed |
| `avg_recovery_time` | Time to recover |
| `prevented_errors` | Errors caught by guards |
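As a rough illustration, the first two metrics could be computed from simple counters. The skill's actual bookkeeping is not documented, so these helper names and formulas are assumptions:

```python
def error_rate(errors, total_calls):
    """Errors per 1000 calls (0.0 when no calls have been made)."""
    return 1000 * errors / total_calls if total_calls else 0.0

def fix_success_rate(successful_fixes, attempted_fixes):
    """Fraction of applied fixes that succeeded."""
    return successful_fixes / attempted_fixes if attempted_fixes else 0.0

error_rate(3, 1500)       # 2.0 errors per 1000 calls
fix_success_rate(8, 10)   # 0.8
```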

Best Practices

  1. Start with Safe Mode: Always review before auto-fixing
  2. Log Everything: Build learning data
  3. Test Fixes: Validate before production
  4. Iterate: Improve patterns over time
  5. Balance: Don't over-catch or under-catch

Future Capabilities

  • Cross-system error correlation
  • AI-generated fixes with LLMs
  • Self-healing infrastructure
  • Predictive error prevention
