Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

NeuralDebug

AI-powered debugging for software (8 languages) and LLM/transformer reasoning. Debug programs with natural language via real debuggers (GDB, LLDB, CDB, JDB,...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 46 · 0 current installs · 0 all-time installs
MIT-0
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description (software debugging, LLM interpretability, LoRA fine-tuning) align with the declared requirements (python3, git) and the SKILL.md which documents driving real debuggers, model inspection, and fine-tuning workflows. Requiring git/python3 and instructing to install torch/transformers/peft is coherent for the stated functionality.
Instruction Scope
The instructions direct the agent (and the user) to run servers and scripts that attach to processes, read and write raw memory, disassemble, evaluate expressions in target process contexts, execute inline analysis code (exec_analysis), and perform LoRA fine-tuning that auto-saves weights. Those actions are expected for a debugger but are high-privilege and can execute arbitrary code or exfiltrate data. The SKILL.md claims 'sandboxed — no filesystem or network access' for exec_analysis, but an instruction-only skill cannot enforce such a sandbox; this claim is potentially misleading.
Install Mechanism
There is no formal install spec in the skill bundle (instruction-only). The SKILL.md tells the user to git clone the GitHub repo and pip install large packages (torch, transformers, peft). This is common for Python tooling but means arbitrary code from the repository will be run locally; pip installing large ML libs can be resource-intensive and should be done in an isolated environment.
Credentials
The skill does not request environment variables, credentials, or config paths. That is proportionate given its debugging and model tasks. However, runtime actions will read local processes, files, and write fine-tuned model files under ~/.cache/huggingface/hub/NeuralDebug-finetuned/, which is persistent disk access even without declared env vars.
Persistence & Privilege
The skill is not always-enabled and does not request elevated platform privileges in metadata. Still, its workflows persist fine-tuned models to the user's home cache path and may auto-load them on restart. The debugging features (attach, read/write memory, auto-compile) require access to other processes/files on the host — normal for a debugger but high-impact if misused.
What to consider before installing
This skill is internally consistent with a capable debugger/LLM tooling package, but it grants broad power over your machine and models. Before installing or running it:

  • Review the upstream GitHub repository code (src/*) to confirm there are no unexpected network callbacks or telemetry; the SKILL.md directs you to clone and run repository code locally.
  • Treat it like a native debugger: do not run it against sensitive production hosts or processes you don't trust. Attaching to a process or reading and writing its memory can expose secrets.
  • Run installations and the server in an isolated environment (VM, container, or dedicated sandbox), because pip installs and model fine-tuning are resource-heavy and execute arbitrary Python code.
  • Be cautious with the 'exec_analysis' / inline-analysis features: they accept user-supplied code, and the README's 'sandboxed' claim cannot be enforced by skill metadata alone.
  • If you must use it interactively, restrict agent autonomy (do not allow unfettered autonomous invocation) and avoid pointing it at systems containing sensitive data. Verify saved fine-tuned models in ~/.cache/huggingface/hub/NeuralDebug-finetuned/ and remove them if undesired.

If you want the skill but have limited trust, request a reproducible minimal build or pre-built package from a known maintainer, or run the tool only in throwaway environments.

Like a lobster shell, security has layers — review code before you run it.

Current version: v0.1.0
Download zip
latest · vk975darz21bf0ea1anazzne4yd83h5d3

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🔍 Clawdis
Bins: python3, git

SKILL.md

NeuralDebug

AI-powered debugging framework for software and LLM reasoning. Part of the DeepRhapsody project.

Use this skill when asked to debug a program, diagnose a crash, analyze a core dump, inspect LLM reasoning, detect hallucinations, or fine-tune a model.

What NeuralDebug Does

🔧 Software Debugging (8 Languages)

Debug Python, C/C++, C#, Rust, Java, Go, Node.js/TypeScript, and Ruby using real debuggers — not code reading. NeuralDebug drives bdb, GDB, LLDB, CDB, netcoredbg, JDB, Delve, Node Inspector, and rdbg via a unified natural-language interface.

🧠 LLM Debugging

Step through transformer forward passes layer by layer. Run interpretability techniques to understand why a model produces a given output: Logit Lens, Attention Analysis, Probing, Activation Patching, and custom analysis sandboxes.

🎯 LLM Fine-Tuning

Inject missing knowledge into GPT-2 family models using LoRA. Diagnose → fine-tune → verify in a single workflow.

Installation

# Clone the repo
git clone https://github.com/DennySun2020/DeepRhapsody.git
cd DeepRhapsody

# Install Python dependencies
pip install torch transformers

# For fine-tuning (optional)
pip install peft==0.7.1

Quick Start: Software Debugging

Interactive Mode (persistent debug session)

# Start debug server for any supported language
python src/NeuralDebug/python_debug_session.py serve --port 5678

# Send commands via natural language
python src/NeuralDebug/python_debug_session.py cmd -p 5678 launch my_script.py
python src/NeuralDebug/python_debug_session.py cmd -p 5678 set_breakpoint 42
python src/NeuralDebug/python_debug_session.py cmd -p 5678 continue
python src/NeuralDebug/python_debug_session.py cmd -p 5678 inspect

One-Shot Mode (quick breakpoint capture)

python src/NeuralDebug/python_debugger.py debug my_script.py --breakpoint 42 --output result.json
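Since one-shot mode writes its capture to result.json, the output can be consumed programmatically. This is a minimal sketch; the exact schema is an assumption here (the skill only promises "structured JSON"), so the `line` and `locals` keys are hypothetical and should be adjusted to whatever your version actually emits.

```python
import json

def summarize(path="result.json"):
    """Load a one-shot debug capture and print its contents."""
    with open(path) as f:
        result = json.load(f)
    # Hypothetical fields: the breakpoint line and captured local variables.
    line = result.get("line")
    for name, value in result.get("locals", {}).items():
        print(f"{name} = {value!r} (at line {line})")
    return result
```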

Supported Languages

Language      Script                       Backend
Python        python_debug_session.py      bdb (stdlib)
C/C++         cpp_debug_session.py         GDB, LLDB, or CDB
C#            csharp_debug_session.py      netcoredbg
Rust          rust_debug_session.py        rust-gdb / LLDB
Java          java_debug_session.py        JDB
Go            go_debug_session.py          Delve
Node.js/TS    nodejs_debug_session.py      Node Inspector
Ruby          ruby_debug_session.py        rdbg

All scripts live in src/NeuralDebug/ and share the same command interface.

Quick Start: LLM Debugging

# Start LLM debug server
python src/NeuralDebug/llm/llm_debug_session.py serve -m gpt2-medium -p 5680

# Ask the model a question
python src/NeuralDebug/llm/llm_debug_session.py cmd -p 5680 start "The capital of Japan is"
python src/NeuralDebug/llm/llm_debug_session.py cmd -p 5680 generate 20

# Interpretability: where does the answer emerge?
python src/NeuralDebug/llm/llm_debug_session.py cmd -p 5680 logit_lens

# Interpretability: which attention heads focus on "Japan"?
python src/NeuralDebug/llm/llm_debug_session.py cmd -p 5680 attention 3

# Interpretability: what knowledge is encoded per layer?
python src/NeuralDebug/llm/llm_debug_session.py cmd -p 5680 probe next_token

# Interpretability: is prediction Japan-specific?
python src/NeuralDebug/llm/llm_debug_session.py cmd -p 5680 patch "The capital of France is"

LLM Models Supported

Any HuggingFace causal LM with a built-in adapter:

  • GPT-2 family: distilgpt2, gpt2, gpt2-medium, gpt2-large, gpt2-xl
  • Llama-style models: Llama, Mistral, Qwen, DeepSeek
  • Custom models: implement ModelAdapter and register

Quick Start: LLM Fine-Tuning

# Create a config file (JSON)
cat > ft_config.json << 'EOF'
{
  "facts": [
    "Dr. Elena Vasquez is the director of Horizon Research Labs",
    "Dr. Elena Vasquez leads Horizon Research Labs"
  ],
  "verification_prompt": "Dr. Elena Vasquez is the director of",
  "expected_token": "Horizon",
  "config": { "num_steps": 150, "lora_r": 16, "lora_alpha": 32, "learning_rate": 2e-4 }
}
EOF

# Run fine-tuning (uses same server as LLM debugger)
python src/NeuralDebug/llm/llm_debug_session.py cmd -p 5680 -t 600 finetune ft_config.json

# Verify
python src/NeuralDebug/llm/llm_debug_session.py cmd -p 5680 start "Dr. Elena Vasquez is the director of"
python src/NeuralDebug/llm/llm_debug_session.py cmd -p 5680 generate 20
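The config above can also be built programmatically, which helps when facts come from data rather than a hand-edited heredoc. A small sketch, mirroring the field names shown in the example config (the helper name `write_ft_config` is ours, not part of the skill):

```python
import json

def write_ft_config(facts, prompt, expected_token, path="ft_config.json",
                    num_steps=150, lora_r=16, lora_alpha=32, learning_rate=2e-4):
    """Write a NeuralDebug fine-tuning config with the documented fields."""
    config = {
        "facts": list(facts),
        "verification_prompt": prompt,
        "expected_token": expected_token,
        "config": {
            "num_steps": num_steps,
            "lora_r": lora_r,
            "lora_alpha": lora_alpha,
            "learning_rate": learning_rate,
        },
    }
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```

Pass the resulting file to the `finetune` command as before.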

Architecture

NeuralDebug uses a client-server architecture over TCP/JSON:

AI Agent (OpenClaw, Copilot, Claude, etc.)
    │
    ▼
Debug Session Script (TCP client)
    │
    ▼
NeuralDebug Server (TCP server on configurable port)
    │
    ▼
Real Debugger Backend (GDB/LLDB/CDB/PyTorch hooks/etc.)

Every command returns structured JSON — parseable by any AI agent.
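The TCP/JSON transport above means any client can talk to the server directly, not just the bundled session scripts. A minimal sketch under assumptions: the `{"cmd": ...}` envelope and newline-delimited framing are guesses, so check the session scripts in src/NeuralDebug/ for the actual wire format before relying on this.

```python
import json
import socket

def send_command(command, host="127.0.0.1", port=5678, timeout=10.0):
    """Send one command to a NeuralDebug-style server and return its JSON reply."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        # Assumed framing: one JSON object per line.
        sock.sendall((json.dumps({"cmd": command}) + "\n").encode())
        buf = b""
        # Read until a full line arrives or the server closes the connection.
        while not buf.endswith(b"\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
    return json.loads(buf.decode())
```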

Platform Support

  • Windows (CDB, Visual Studio debugger)
  • Linux (GDB, LLDB)
  • macOS (LLDB, GDB)

Links

See the references/ folder for detailed command documentation:

  • software-debugging.md — full command reference for all 8 languages
  • llm-debugging.md — interpretability techniques and LLM commands
  • llm-finetuning.md — LoRA fine-tuning workflow and configuration

Files

4 total