Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Dr.Binary

Use when the user wants to analyze a binary file, check if a file is malicious, decompile an executable, or understand what a binary does. Triggers on: "anal...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 76 · 0 current installs · 0 all-time installs
by Deepbits Technology (@deepbitstech)
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious (medium confidence)
Purpose & Capability
The skill's name/description (binary analysis, decompile, malware check) matches the code and SKILL.md: upload a binary to a sandbox and drive Ghidra-based analysis. However, the registry metadata lists no required env vars while SKILL.md requires DRBINARY_API_KEY — an inconsistency. The upload target is UPLOAD_URL = https://mcp.deepbits.com/workspace/upload while SKILL.md references drbinary.ai, a domain/name mismatch worth verifying.
Instruction Scope
SKILL.md explicitly instructs running upload.py to send the local binary to a remote sandbox and then calling various MCP (Ghidra/sandbox) tools. upload.py reads only the file to upload, but it also attempts to load a .env file three directories up (Path(...)/.env) and sets environment variables from it. Reading a top-level .env is scope creep: it can expose unrelated secrets on the machine and cause unexpected values to be sent or used. The primary runtime action (uploading a user file to a third-party endpoint) is expected for this skill, but users must be aware the binary is transmitted off-host.
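For context, the .env-merging pattern described above typically looks like the following minimal sketch. This is a hypothetical reconstruction based on the scan notes, not the actual upload.py source; the load_env name and the parents[3] lookup are assumptions:

```python
import os
from pathlib import Path

def load_env(path: Path) -> None:
    """Merge KEY=VALUE lines from a .env file into os.environ."""
    if not path.is_file():
        return
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault: variables already in the shell win over .env values
        os.environ.setdefault(key.strip(), value.strip())

# upload.py reportedly resolves the file three directories up, e.g.:
#   load_env(Path(__file__).resolve().parents[3] / ".env")
# which lands in the workspace root, not the skill folder, so any
# secret stored there is pulled into the process environment.
```

Once merged this way, every subprocess and network call in the script inherits those variables, which is why loading a workspace-level .env is flagged as scope creep.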
Install Mechanism
Instruction-only skill with a small Python helper script; there is no install spec and no archive downloads. No elevated install risk detected.
Credentials
SKILL.md requires a single API key (DRBINARY_API_KEY), which is proportionate to uploading to an external sandbox. However, the registry metadata claims no required env vars — an inconsistency. More importantly, upload.py's load_env reads a .env file outside the skill directory and merges those variables into the environment, risking accidental use or exposure of other secrets from the user's workspace.
Persistence & Privilege
The skill is not force-enabled (always: false) and does not request persistent or system-wide configuration changes. It runs on-demand and contains no code that modifies other skills or global agent settings.
What to consider before installing
This skill appears to do what it says — it uploads a provided binary to an external sandbox (mcp.deepbits.com) for analysis — but check the following before installing or using it:

  • Verify the service and domain: SKILL.md refers to drbinary.ai while the upload URL is mcp.deepbits.com. Confirm you trust that endpoint and that it is the legitimate backend for the service you expect.
  • Expect data exfiltration: the script transmits the entire binary to the remote server. Do not upload sensitive or proprietary binaries unless you trust the service and understand its privacy/retention policy.
  • .env loading risk: upload.py attempts to load a .env file three directories up and will set those variables in the environment. That could unintentionally surface other secrets (API keys, tokens) from your project. Consider removing or relocating any sensitive .env before running, or edit the script so it does not load external .env files.
  • Metadata mismatch: the registry metadata does not declare the DRBINARY_API_KEY requirement even though SKILL.md uses it. Treat SKILL.md as authoritative, and provide a dedicated DRBINARY_API_KEY (not a shared or privileged secret).
  • To proceed safely: (1) audit the upload URL and service ownership; (2) run the script in an isolated environment (sandbox/container) with a throwaway API key; (3) avoid using it on real sensitive binaries.

If you want, I can (a) summarize the exact lines in upload.py that read .env and send the file, (b) suggest a safer edit to the script that removes .env loading, or (c) help draft questions to verify the remote service operator.
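As a sketch of the "safer edit" mentioned above, the script could read the key strictly from the process environment instead of crawling for a .env file. The require_api_key name is illustrative only, not part of the actual script:

```python
import os
import sys

def require_api_key(name: str = "DRBINARY_API_KEY") -> str:
    """Fail fast unless the key was exported explicitly; never read .env files."""
    key = os.environ.get(name, "").strip()
    if not key:
        sys.exit(
            f"{name} is not set. Export it in your shell first, e.g.:\n"
            f"  export {name}=...  # use a dedicated, throwaway key"
        )
    return key
```

This keeps the blast radius to a single secret that the user deliberately provided, rather than whatever happens to live in a workspace .env.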

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
latest: vk9742ya4deaewqsc3pb8tgyz8983513f

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Dr. Binary Analysis

Required environment variables

  • DRBINARY_API_KEY — drbinary.ai → Settings → Billing → API Key

Steps

1. Upload the binary

Run upload.py with the local file path. It uploads the file to the Dr. Binary sandbox and prints the remote path:

python skills/drbinary-analysis/upload.py /path/to/file.exe
# → /sandbox/<pathname>

2. Open Ghidra server

Call the ghidra_open_server MCP tool with the remote sandbox path returned in step 1. This initialises analysis and returns basic file metadata (size, hash, segments, imports, exports, strings, functions).

3. Analyse with Ghidra tools

Use the available MCP tools to perform a thorough analysis:

  • ghidra_list_imports — identify suspicious API calls
  • ghidra_list_strings — extract strings for IoC identification
  • ghidra_list_exports — list exported symbols
  • ghidra_decompile_function — decompile key functions to pseudo-C
  • ghidra_generate_call_graph — understand program flow
  • sandbox_execute — run safe commands (e.g. file, strings, sha256sum)
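The sandbox_execute triage step can also be mirrored locally before uploading anything. This sketch computes the same size, SHA-256, and printable-strings data with the standard library; basic_triage is a hypothetical helper, not one of the MCP tools:

```python
import hashlib
from pathlib import Path

def basic_triage(path: str, min_len: int = 4) -> dict:
    """Local stand-in for `strings`/`sha256sum`-style sandbox triage."""
    data = Path(path).read_bytes()
    strings, run = [], []
    for byte in data:
        if 32 <= byte < 127:          # printable ASCII
            run.append(chr(byte))
        else:
            if len(run) >= min_len:   # keep runs of >= min_len chars
                strings.append("".join(run))
            run = []
    if len(run) >= min_len:
        strings.append("".join(run))
    return {
        "size": len(data),
        "sha256": hashlib.sha256(data).hexdigest(),
        "strings": strings,
    }
```

Running this first lets you record a hash of what you are about to upload and spot obvious IoCs without sending the binary off-host.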

4. Report

Return a report in this format:

## Binary Analysis Report

**File Information**
- Name: [filename]
- Size: [bytes]
- SHA256: [hash]

**Analysis Summary**
[Brief overview of findings]

**Detailed Findings**
1. [Finding category]
   - Evidence: [specific data]
   - Significance: [what it means]

**Threat Assessment**
- Severity: [Critical/High/Medium/Low]
- Classification: [malware type or benign]
- Confidence: [High/Medium/Low]

**Recommendations**
1. [Action item]
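The report skeleton above can be assembled programmatically; a minimal sketch covering the fixed sections (render_report and its parameters are illustrative, not defined by the skill, and the findings/recommendations sections are omitted for brevity):

```python
def render_report(name: str, size: int, sha256: str, summary: str,
                  severity: str, classification: str, confidence: str) -> str:
    """Fill the fixed fields of the Binary Analysis Report template."""
    return "\n".join([
        "## Binary Analysis Report",
        "",
        "**File Information**",
        f"- Name: {name}",
        f"- Size: {size}",
        f"- SHA256: {sha256}",
        "",
        "**Analysis Summary**",
        summary,
        "",
        "**Threat Assessment**",
        f"- Severity: {severity}",
        f"- Classification: {classification}",
        f"- Confidence: {confidence}",
    ])
```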

Files

2 total
