Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

deep-coding-p

v1.0.0

Advanced multi-agent development system for complex software projects. Leverages Orchestrator, Builder, and Reviewer agents to decompose modules, implement c...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for subaru0573/deep-coding-p.

Prompt preview (Install & Setup):
Install the skill "deep-coding-p" (subaru0573/deep-coding-p) from ClawHub.
Skill page: https://clawhub.ai/subaru0573/deep-coding-p
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install subaru0573/deep-coding-p

ClawHub CLI


npx clawhub@latest install deep-coding-p

Security Scan

VirusTotal: Suspicious
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description (multi‑agent deep coding harness) align with included artifacts: orchestrator rules, dashboard UI, and a local Python dashboard server. Required capabilities listed in the README (python/node/playwright, agent runtimes) are consistent with building, running, and testing projects.
Instruction Scope
SKILL.md explicitly instructs Orchestrator/Builder/Reviewer agents to read/write project files, spawn subagents, run builds/tests, and execute arbitrary project code (including serving web apps and running Playwright). This is expected for this use case but expands the agent's runtime authority and requires you to accept execution of generated code and file system access.
Install Mechanism
No remote install or downloads; the skill is instruction-only plus local assets (server.py, dashboard.html). User copies files and runs python3 server.py. This minimizes supply‑chain risk compared to remote installers.
Credentials
The skill asks for no environment variables or external credentials, which matches its purpose. However SKILL.md asks you to grant broad agent/tool permissions (read/write/edit/exec, sessions_spawn/sessions_send/subagents) and to enable ACP/agent-to-agent settings in platform config — these are high‑privilege platform changes but they are proportionate to a system that needs to spawn and manage coding agents.
Persistence & Privilege
Skill does not request 'always: true' and does not itself demand persistent system-wide changes. It does instruct you how to change platform config (openclaw.json) to enable agent spawning and ACP backends — those are platform-level privileges the user must consciously grant.
Assessment
This skill appears to do what it says, but it intentionally spawns agents and executes generated project code, which can run arbitrary code on the host. Before installing or running:

  • Run it only on an isolated machine, VM, or container (not on a machine with secrets or production data).
  • Inspect assets/server.py (especially its safe_path() and binding logic) before starting it; ensure it truly binds to 127.0.0.1 and prevents path traversal. If unsure, run with a firewall rule blocking external access to port 8765.
  • Do not place secrets (API keys, tokens, private keys) in the project workspace that the dashboard will serve or index; the server reads project-state.json and serves files under the project root.
  • Be deliberate about granting the platform permissions SKILL.md recommends (read/write/exec, sessions_spawn, subagents, enabling ACP/agentToAgent). Those are necessary for multi-agent orchestration but give spawned agents significant power.
  • If you need higher assurance, run Builders/Reviewers in sandboxed containers, and use a controlled fallback chain of trusted agent runtimes.


latest: vk97denhsv3sx70cavdr19hnje185fs9r
19 downloads · 0 stars · 1 version · updated 5h ago
v1.0.0 · MIT-0

System Dependencies

This skill requires the following system capabilities:

| Dependency | Purpose | Required? | Check |
|---|---|---|---|
| python3 | Dashboard server (port 8765) | Yes | python3 --version |
| node / npm | Project builds, Playwright | For web projects | node -v, npm -v |
| playwright | E2E browser testing (Reviewers) | Optional, for E2E | npx playwright --version |
| ACP runtime | Builder/Reviewer agent execution | Optional, see below | Platform-specific |

No specific coding agent is required. The default configuration uses ACP + qoder, but you can use any available agent runtime. See First-Time Setup for configuration options.

Security Notes

⚠️ Dashboard server (server.py):

  • Binds to 127.0.0.1:8765 only — never expose to public network
  • Serves files from the project directory — verify no secrets (API keys, tokens) are present
  • Includes path traversal protection via safe_path() check
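
The skill's actual safe_path() is not reproduced on this page; a minimal sketch of the kind of guard the scan notes describe (the name and behavior are assumptions, not the skill's real implementation) could look like:

```python
import os

def safe_path(root, requested):
    """Resolve a requested path under root; return None if it escapes root.

    Hypothetical path-traversal guard, illustrating the check attributed
    to server.py; not the skill's actual code.
    """
    root = os.path.realpath(root)
    # Strip the leading slash so join() stays relative to root.
    candidate = os.path.realpath(os.path.join(root, requested.lstrip("/")))
    # commonpath() equals root only when candidate sits inside it.
    if os.path.commonpath([root, candidate]) != root:
        return None
    return candidate
```

With this check, a request like ../etc/passwd resolves outside the root and is rejected.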

⚠️ Code execution:

  • Builders and Reviewers will execute arbitrary project code
  • For web projects: HTTP server serves project files locally
  • E2E tests use Playwright to open and interact with pages in a real browser
  • Only run on machines where executing generated code is acceptable
  • Use containers/VMs for untrusted projects

First-Time Setup

When a user installs this skill for the first time, guide them through the following steps:

Step 1: Create Project Workspace

mkdir -p my-projects/{requests/done,logs}
cp <skill-dir>/assets/server.py my-projects/
cp <skill-dir>/assets/dashboard.html my-projects/
cd my-projects

This creates the project root with all required directories and the Dashboard assets.

Step 2: Configure Orchestrator Agent

Create an Orchestrator agent in your openclaw.json (or equivalent config):

{
  "id": "orchestrator",
  "name": "Orchestrator",
  "workspace": "<your-path>/my-projects"
}

Give the Orchestrator a heartbeat prompt that references references/orchestrator-rules.md.

Step 3: Configure Builder Agent(s)

Choose your preferred coding agent(s). Options:

| Option | Configuration | Notes |
|---|---|---|
| ACP + qoder | runtime: "acp", agentId: "qoder" | Default, requires acpx plugin |
| ACP + claude | runtime: "acp", agentId: "claude" | Alternative ACP agent |
| ACP + codex | runtime: "acp", agentId: "codex" | OpenAI Codex |
| Subagent runtime | runtime: "subagent" | Built-in, no extra setup |
| PTY coding agents | exec with PTY | Claude Code, Codex CLI, etc. |

The Orchestrator rules (references/orchestrator-rules.md) default to ACP + qoder, but you should update the agent ID to match your setup.

Recommended: Set up a 3-tier fallback chain

  1. Primary: Your preferred coding agent (e.g., qoder, claude)
  2. Fallback 1: Alternative ACP agent (e.g., claude if qoder is 429'd)
  3. Fallback 2: Built-in subagent runtime
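
A minimal sketch of that fallback selection (the chain entries mirror the table above; the availability set is a stand-in for however your platform probes its runtimes):

```python
# Hypothetical 3-tier fallback chain: try each runtime in priority order
# and use the first one that is available.
FALLBACK_CHAIN = [
    {"runtime": "acp", "agentId": "qoder"},    # primary
    {"runtime": "acp", "agentId": "claude"},   # fallback 1 (e.g. qoder 429'd)
    {"runtime": "subagent", "agentId": None},  # fallback 2: built-in
]

def pick_runtime(available):
    """Return the first chain entry whose agent (or runtime) is available."""
    for entry in FALLBACK_CHAIN:
        key = entry["agentId"] or entry["runtime"]
        if key in available:
            return entry
    raise RuntimeError("no agent runtime available")
```

For example, if qoder is rate-limited and absent from the available set, the chain falls through to claude, then to the built-in subagent runtime.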

Step 4: Allow Tool Access

Ensure your Orchestrator and Builder agents have access to:

  • read, write, edit — for file operations
  • exec — for running builds, tests, servers
  • sessions_spawn, sessions_send, sessions_list — for agent communication
  • subagents — for managing spawned agents

In openclaw.json:

{
  "tools": {
    "sessions": {
      "visibility": "all"
    },
    "agentToAgent": {
      "enabled": true,
      "allow": ["main", "orchestrator", "qoder-dev", "claude-dev"]
    }
  },
  "acp": {
    "enabled": true,
    "backend": "acpx",
    "defaultAgent": "qoder",
    "allowedAgents": ["qoder", "claude", "codex"]
  }
}

Step 5: Choose Your LLM

Set the default model for the Orchestrator and agents:

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "your-provider/your-model"
      }
    }
  }
}

Coding agents (qoder, claude, codex) use their own models, so no LLM config is needed for them.

Step 6: Verify Setup

cd my-projects
python3 server.py
# Open http://localhost:8765 — should show empty dashboard
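
The real server.py is not shown on this page, but the loopback-only binding it is documented to use can be sketched and sanity-checked with the standard library alone (port 0 asks the OS for a free port so the sketch never collides with 8765):

```python
import threading
import urllib.request
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

# Bind to the loopback interface only, as server.py is documented to do.
server = ThreadingHTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

# The dashboard should answer locally but be unreachable from other hosts.
status = urllib.request.urlopen(f"http://{host}:{port}/").status
server.shutdown()
```

A server bound this way is invisible to the rest of the network, which is why the security notes insist on 127.0.0.1 rather than 0.0.0.0.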

Harness Deep Coding System

Multi-agent development: Orchestrator decomposes → Builders code → Reviewers verify → E2E test → deliver.

Roles

User-Facing Agent (you)

  • Gather requirements through conversation
  • Create request JSON at projects/requests/TIMESTAMP.json (use actual timestamp)
  • Notify Orchestrator via sessions_send to agent:orchestrator:main
  • Report progress every heartbeat when project is active

Orchestrator

  • Decomposes project into 2-4 modules + mandatory integration-test
  • Creates project-state.json with module states
  • Spawns Builders and Reviewers via sessions_spawn
  • Monitors progress via heartbeat, handles failures
  • Runs E2E smoke test after bugfix/feature accepted

Builder

  • Codes independently per module
  • Uses configured agent runtime (ACP subagent, or fallback)
  • Writes to logs/builder-MODULE.log (APPEND, UTC+8)

Reviewer

  • MUST actually test the application, not just read code
  • For web projects: serve via HTTP, verify in browser
  • Writes detailed review results to review_history
  • Writes to logs/reviewer-MODULE.log (APPEND, UTC+8)

User-Facing Workflow

1. Gather Requirements

  • What to build, key features, constraints, tech stack
  • Break into 2-4 logical modules (data → core → render → UI)
  • Auto-add final integration-test module depending on ALL others

2. Create Request

{
  "name": "Project Name",
  "description": "What it does",
  "owner": "user name",
  "tags": ["web", "game"]
}

Path: <project-root>/requests/TIMESTAMP.json (use actual timestamp)
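
Creating that request file with an actual timestamp might look like the following sketch (the exact filename scheme is an assumption; the skill only says to use a real timestamp):

```python
import json
import time
from pathlib import Path

def create_request(project_root, request):
    """Write a request JSON named after the current UTC timestamp."""
    requests_dir = Path(project_root) / "requests"
    requests_dir.mkdir(parents=True, exist_ok=True)
    # e.g. requests/20240101-120000.json (naming scheme assumed)
    stamp = time.strftime("%Y%m%d-%H%M%S", time.gmtime())
    path = requests_dir / f"{stamp}.json"
    path.write_text(json.dumps(request, indent=2))
    return path
```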

3. Notify Orchestrator

Send to agent:orchestrator:main:

  • Request file path
  • Instructions to decompose into modules
  • Create project-state.json
  • Spawn Builder for first module
  • Use per-agent logs, APPEND mode, UTC+8
  • Run E2E smoke test after acceptance
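
The skill does not publish the project-state.json schema; a hypothetical shape, consistent only with the fields this page mentions (module states, dependencies, review_history), might be:

```json
{
  "name": "Project Name",
  "modules": {
    "core": { "state": "accepted", "depends_on": [] },
    "ui": { "state": "in_review", "depends_on": ["core"] },
    "integration-test": { "state": "pending", "depends_on": ["core", "ui"] }
  },
  "review_history": []
}
```

Note the integration-test module depends on all other modules, as the decomposition rules require.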

4. Progress Reporting

Read project-state.json every heartbeat:

  • Report completion % and module states
  • Announce 100% completion
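
A sketch of that heartbeat computation, assuming project-state.json carries a modules map with a state field per module (the real schema is not published):

```python
import json

def completion_report(state_json):
    """Return (completion %, per-module states) from a project-state payload."""
    state = json.loads(state_json)
    modules = state["modules"]
    done = sum(1 for m in modules.values() if m["state"] == "accepted")
    pct = 100.0 * done / len(modules) if modules else 0.0
    return pct, {name: m["state"] for name, m in modules.items()}
```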

Project Structure

All paths are relative to your project root directory:

<project-root>/
├── projects-registry.json          ← All projects overview
├── server.py                       ← Dashboard server (port 8765)
├── dashboard.html                  ← Dashboard UI
├── requests/
│   └── done/                       ← Processed requests
├── logs/                           ← Agent activity logs
├── PROJECT-SLUG/
│   ├── project-state.json           ← Module states, review history
│   ├── logs/
│   │   ├── orchestrator.log         ← Orchestrator decisions
│   │   ├── builder-MODULE.log       ← Each Builder writes own file
│   │   └── reviewer-MODULE.log      ← Each Reviewer writes own file
│   └── SOURCE CODE (generated files)

See references/architecture.md for full project structure, module lifecycle, and dashboard details.

Module Lifecycle

pending → in_progress → ready_for_review → in_review → accepted
                        ↑                    |
                        └── needs_revision ──┘
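
The lifecycle above can be encoded as a small transition table, handy for sanity-checking state changes in project-state.json (a sketch, not part of the skill):

```python
# Allowed module state transitions, transcribed from the lifecycle diagram.
TRANSITIONS = {
    "pending": {"in_progress"},
    "in_progress": {"ready_for_review"},
    "ready_for_review": {"in_review"},
    "in_review": {"accepted", "needs_revision"},
    "needs_revision": {"ready_for_review"},
    "accepted": set(),
}

def can_transition(src, dst):
    """True if the lifecycle permits moving a module from src to dst."""
    return dst in TRANSITIONS.get(src, set())
```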

Critical Rules

| Rule | Description |
|---|---|
| One action per heartbeat | Never do multiple spawns in one cycle |
| Spawn Reviewer immediately | Never leave ready_for_review more than one cycle |
| Reviewer writes results | Must write to review_history array, never just change state |
| E2E smoke test | Mandatory for bugfixes and new features before delivery |
| No archive copies | DO NOT copy project-state.json to archive/ |

Common Issues

| Issue | Fix |
|---|---|
| 429 rate limit | Wait, then re-spawn. Do NOT self-accept |
| Missing E2E | Bugfix/feature accepted → must spawn E2E Reviewer |
| Reviewer not spawned | Check sessions_list, spawn if missing |
| Builder timeout | Check if files exist, accept if complete |
| Archive duplicates | Orchestrator should NOT copy to archive/ |

Dashboard

Dashboard is included in assets/server.py and assets/dashboard.html.

Usage:

  1. Copy assets/server.py and assets/dashboard.html to your project root directory
  2. Run: python3 server.py
  3. Open: http://localhost:8765

Security: The server binds to 127.0.0.1 only and includes path traversal protection.

Features: project list, completion status, module states, agent activity timeline.
