Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

image

v1.0.0

Run local ComfyUI workflows via the HTTP API. Use when the user asks to run ComfyUI, execute a workflow by file path/name, or supply raw API-format JSON; sup...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for tobeyrebecca/toby-image.

Prompt preview: Install & Setup
Install the skill "image" (tobeyrebecca/toby-image) from ClawHub.
Skill page: https://clawhub.ai/tobeyrebecca/toby-image
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install toby-image

ClawHub CLI


npx clawhub@latest install toby-image
Security Scan

VirusTotal: Suspicious
OpenClaw: Benign (high confidence)
Purpose & Capability
The skill's name/description (run local ComfyUI workflows) matches the included artifacts: a run script that posts workflow JSON to a local ComfyUI HTTP API and a download script that saves model weights into ~/ComfyUI/models/. The ability to inspect and edit workflow JSON and to download model weights is expected for this purpose. Minor inconsistencies: registry metadata lists no required binaries while SKILL.md metadata notes python3 is required, and the top-level skill name ('image') differs from SKILL.md's declared name ('ComfyUI'). These are metadata mismatches but do not indicate malicious behavior.
Instruction Scope
SKILL.md explicitly instructs the agent to read and modify workflow JSON (prompt/style/seed fields), write a temp workflow file, and run the bundled run script which posts to the local API (127.0.0.1:8188). It also instructs how to install/run ComfyUI if not present and how to run the download script for model URLs. All referenced files/paths (~/ComfyUI, skills/comfyui/assets/tmp-workflow.json, ~/.local/bin for pget) are coherent with the stated goal. Note: editing arbitrary workflow JSON is powerful — workflows may contain user-provided or sensitive prompts; the agent must only modify fields necessary for generation.
Install Mechanism
There is no formal install spec (instruction-only), but code files are bundled and intended to be run locally. The download script may fetch the pget binary from GitHub Releases (https://github.com/replicate/pget/releases/latest/download/<asset>) and will write it to ~/.local/bin if missing; this is a legitimate helper but does write an executable to the user's home directory. Downloading model URLs (arbitrary URLs provided by the user) will write files under ~/ComfyUI/models/<subfolder> and can store large binaries. These behaviors are expected for the functionality but are higher-risk than pure instruction-only skills because they create files and executables on disk.
Credentials
The skill does not request environment variables, credentials, or access to unrelated services. It uses local filesystem paths (home dir, ~/ComfyUI) and network access to the local ComfyUI server and to user-supplied model URLs — all relevant to its purpose. One minor mismatch: SKILL.md metadata signals a dependency on python3 (a reasonable runtime requirement) but the registry requirements block lists none.
Persistence & Privilege
The skill does not request always:true and does not modify other skills or global agent settings. It does create or write to user-local paths (~/.local/bin for pget and ~/ComfyUI/models/ for weights) and will save edited workflows under the skill assets/tmp-workflow.json; these are scoped to the user's home and the ComfyUI install and are proportional to the task. The automatic attempt to install pget into ~/.local/bin is a persistent change the user should be aware of.
Assessment
This skill appears to do what it says: edit workflow JSON and queue runs against a local ComfyUI server, plus download model weights into ~/ComfyUI/models/. Before installing or running it:

  1. Review the two scripts (comfyui_run.py and download_weights.py) yourself — they will make network requests (to localhost and to URLs you provide) and write files to your home directory.
  2. Only provide model URLs you trust (downloading arbitrary URLs can place malicious or large files on disk).
  3. Be aware the download script will attempt to install a pget binary to ~/.local/bin if missing.
  4. Ensure you are comfortable running commands that clone/install ComfyUI and running the ComfyUI server locally.
  5. Note minor metadata inconsistencies (declared required binaries vs registry metadata, and a name mismatch); these are bookkeeping issues, but review SKILL.md before use.

If you want stronger assurance, run the scripts in a controlled environment (container or VM) first.


85 downloads · 0 stars · 1 version · Updated 1w ago · v1.0.0 · MIT-0

ComfyUI Runner

Overview

Run ComfyUI workflows on the local server (default 127.0.0.1:8188) using API-format JSON and return output images.

Editing the workflow before running

The run script only takes --workflow <path>. You must inspect and edit the workflow JSON before running, using your best knowledge of the ComfyUI API format. Do not assume fixed node IDs, class_type names, or _meta.title values — the user may have updated the default workflow or supplied a custom one.

For every run (including the default workflow):

  1. Read the workflow JSON (default: skills/comfyui/assets/default-workflow.json, or the path/file the user gave).
  2. Identify prompt-related nodes by inspecting the graph: look for nodes that hold the main text prompt — e.g. PrimitiveStringMultiline, CLIPTextEncode (positive text), or any node with _meta.title or class_type suggesting "Prompt" / "positive" / "text". Update the corresponding input (e.g. inputs.value, or the text input to the encoder) to the image prompt you derived from the user (subject, style, lighting, quality). If the user didn't ask for a custom image, you can leave the existing prompt or tweak only if needed.
  3. Optionally identify style/prefix nodes — e.g. StringConcatenate, or a second string input that acts as style. Set them if the user asked for a specific style or to clear a default prefix.
  4. Optionally set a new seed — find sampler-like nodes (e.g. KSampler, BasicGuider, or any node with a seed input) and set seed to a new random integer so each run can differ.
  5. Write the modified workflow to a temp file (e.g. skills/comfyui/assets/tmp-workflow.json). Use ~/ComfyUI/venv/bin/python for any inline Python; do not use bare python.
  6. Run the script with the edited file: ~/ComfyUI/venv/bin/python skills/comfyui/scripts/comfyui_run.py --workflow <path-to-edited-json>.

If the workflow structure is unclear or you can't find prompt/sampler nodes, run the file as-is and only change what you can reliably identify. Take the same approach with arbitrary user-supplied JSON: inspect first, edit to the best of your knowledge, then run.
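The inspection-and-edit steps above can be sketched as a small helper that walks the node graph. The matching heuristics (the CLIPTextEncode class name, "prompt"/"positive" title keywords, an integer seed input) are assumptions taken from this doc's guidance, not fixed guarantees of the ComfyUI API format:

```python
import random

def edit_workflow(workflow, prompt=None):
    """Heuristically set the main text prompt and randomize seeds in an
    API-format ComfyUI workflow (mapping of node-id -> node dict)."""
    for node in workflow.values():
        cls = node.get("class_type", "")
        title = str(node.get("_meta", {}).get("title", "")).lower()
        inputs = node.setdefault("inputs", {})
        # Prompt-ish nodes: text encoders, or anything titled "prompt"/"positive".
        if prompt and (cls == "CLIPTextEncode" or "prompt" in title or "positive" in title):
            for key in ("text", "value"):
                if key in inputs:
                    inputs[key] = prompt
                    break
        # Sampler-ish nodes: anything exposing an integer seed input.
        if isinstance(inputs.get("seed"), int):
            inputs["seed"] = random.randrange(2**32)
    return workflow
```

In practice you would json.load the default workflow, call edit_workflow, and json.dump the result to the temp path (skills/comfyui/assets/tmp-workflow.json) before running the script.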

Run script (single responsibility)

~/ComfyUI/venv/bin/python skills/comfyui/scripts/comfyui_run.py \
  --workflow <path-to-workflow.json>

The script only queues the workflow and polls until done. It prints JSON with prompt_id and output images. All prompt/style/seed changes are done by you in the JSON beforehand.
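The queue-and-poll behavior described above corresponds to ComfyUI's standard HTTP endpoints (POST /prompt to queue, GET /history/<prompt_id> to poll). A minimal sketch — the response shapes are assumed from typical ComfyUI installs, not from the bundled script itself:

```python
import json
import time
import urllib.request

API = "http://127.0.0.1:8188"

def collect_images(entry):
    """Flatten {filename, subfolder, type} records out of a history entry."""
    return [img for out in entry.get("outputs", {}).values()
            for img in out.get("images", [])]

def queue_and_poll(workflow, interval=1.0):
    """Queue an API-format workflow and poll history until outputs appear."""
    req = urllib.request.Request(
        f"{API}/prompt",
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    prompt_id = json.load(urllib.request.urlopen(req))["prompt_id"]
    while True:
        with urllib.request.urlopen(f"{API}/history/{prompt_id}") as resp:
            history = json.load(resp)
        if prompt_id in history:
            return prompt_id, collect_images(history[prompt_id])
        time.sleep(interval)
```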

If the server isn't reachable

If the run script fails with a connection error (e.g. connection refused or timeout to 127.0.0.1:8188), ComfyUI may not be installed or not running.

Check: Does ~/ComfyUI exist and contain main.py?

  • If not installed: Install ComfyUI (e.g. clone the repo, create a venv, install dependencies, then start the server). Example:

    git clone https://github.com/comfyanonymous/ComfyUI.git ~/ComfyUI
    cd ~/ComfyUI
    python3 -m venv venv
    ~/ComfyUI/venv/bin/pip install -r requirements.txt
    

    Then start the server (see below). Tell the user they may need to install model weights into ~/ComfyUI/models/ depending on the workflow.

  • If installed but not running: Start the ComfyUI server so the API is available on port 8188. Example:

    ~/ComfyUI/venv/bin/python ~/ComfyUI/main.py --listen 127.0.0.1
    

    Run in the background or in a separate terminal so it keeps running. Then retry the workflow run.

Use ~ (or the user's home) for paths so it works on their machine.
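Before attempting a run, a quick TCP probe can distinguish "server not running" from other failures. This helper is a sketch, not part of the bundled scripts:

```python
import socket

def server_reachable(host="127.0.0.1", port=8188, timeout=2.0):
    """Return True if a TCP connection to the ComfyUI port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False, follow the install/start steps above rather than retrying the run script.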

Model weights from URLs

When the user pastes or sends a list of model weight URLs (one per line, or comma-separated), download those files into the ComfyUI installation so the workflow can use them later.

  1. Normalize the list — one URL per line; strip empty lines and comments (lines starting with #).
  2. Run the download script with the ComfyUI base path (default ~/ComfyUI). The script uses pget for parallel downloads when available; if pget is not in PATH, it installs it to ~/.local/bin automatically (no sudo). If pget cannot be installed (e.g. unsupported OS/arch), it falls back to a built-in download. Use the ComfyUI venv Python so the script runs correctly:
    ~/ComfyUI/venv/bin/python skills/comfyui/scripts/download_weights.py --base ~/ComfyUI
    
    Pass URLs as arguments, or pipe a file/list on stdin:
    echo "https://example.com/model.safetensors" | ~/ComfyUI/venv/bin/python skills/comfyui/scripts/download_weights.py --base ~/ComfyUI
    
    Or save the user's list to a temp file and run:
    ~/ComfyUI/venv/bin/python skills/comfyui/scripts/download_weights.py --base ~/ComfyUI < /tmp/weight_urls.txt
    
    To force the built-in download (no pget): add --no-pget.
  3. Subfolder: The script infers the ComfyUI models subfolder from the URL/filename (e.g. vae, clip, loras, checkpoints, text_encoders, controlnet, upscale_models). The user can optionally specify a subfolder per line as url subfolder (e.g. https://.../model.safetensors vae). You can also pass a default with --subfolder loras so all URLs in that run go to models/loras/.
  4. Existing files: By default the script skips URLs that already exist on disk; use --overwrite to replace.
  5. Paths: Files are written under ~/ComfyUI/models/<subfolder>/. Tell the user where each file was saved and that they can run the workflow once the ComfyUI server is (re)started if needed.
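Step 1's normalization (one URL per line or comma-separated, skipping blanks and # comments, with an optional per-line subfolder) might look like this sketch; the exact parsing the bundled script does is an assumption:

```python
def parse_url_lines(raw):
    """Split a pasted list into (url, subfolder-or-None) pairs.
    Accepts one-per-line or comma-separated; skips blanks and # comments."""
    pairs = []
    for chunk in raw.replace(",", "\n").splitlines():
        line = chunk.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split()
        # A second token on the line is treated as an explicit subfolder.
        pairs.append((parts[0], parts[1] if len(parts) > 1 else None))
    return pairs
```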

Supported subfolders (under ComfyUI/models/): checkpoints, clip, clip_vision, controlnet, diffusion_models, embeddings, loras, text_encoders, unet, vae, vae_approx, upscale_models, and others. Use --subfolder <name> when the auto-inference is wrong.
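The keyword-based inference described above could be sketched as a lookup; the keywords and fallback here are assumptions, not the bundled script's actual table (hence the advice to pass --subfolder when it guesses wrong):

```python
# Map filename/URL keywords to ComfyUI models/ subfolders (assumed heuristics).
_KEYWORDS = [
    ("vae", "vae"), ("lora", "loras"), ("controlnet", "controlnet"),
    ("clip_vision", "clip_vision"), ("clip", "clip"),
    ("text_encoder", "text_encoders"), ("upscale", "upscale_models"),
]

def infer_subfolder(url, default="checkpoints"):
    """Guess the models/ subfolder from the URL; fall back to `default`."""
    lowered = url.lower()
    for key, folder in _KEYWORDS:
        if key in lowered:
            return folder
    return default
```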

After run

Outputs are saved under ComfyUI/output/. Use the images list from the script output to locate the files (filename + subfolder).

Always send the output to the user

After a successful ComfyUI run, you must deliver the generated image(s) to the user. Do not reply with only the filename in text or with NO_REPLY.

  1. Parse the script output JSON for images (each has filename, subfolder, type).
  2. Build the full path: ComfyUI/output/ + subfolder + filename (e.g. ComfyUI/output/z-image_00007_.png).
  3. Send the image to the user via the channel they're on (e.g. use the message/send tool with the image path so the user receives the file). Include a short caption if helpful (e.g. "Here you go." or "Tokyo street scene.").

Every successful run must result in the user receiving the image. Never leave them with only a filename or no delivery.
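Steps 1–2 above amount to parsing the script's JSON output and joining paths; a sketch, with the output shape (an images list of filename/subfolder/type records) assumed from this doc:

```python
import json
from pathlib import Path

def output_paths(script_stdout, base="~/ComfyUI"):
    """Turn the run script's JSON output into full image paths under output/."""
    result = json.loads(script_stdout)
    root = Path(base).expanduser() / "output"
    return [str(root / img.get("subfolder", "") / img["filename"])
            for img in result.get("images", [])]
```

Each returned path is what you would hand to the message/send tool so the user receives the file.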

Resources

scripts/

  • comfyui_run.py: Queue a workflow, poll until completion, print prompt_id and images. Its only option is --workflow <path> — you edit the JSON before running.
  • download_weights.py: Download model weight URLs into ~/ComfyUI/models/<subfolder>/. Uses pget when available (installs to ~/.local/bin if missing); fallback to built-in download. Input: URLs as args or one per line on stdin. Options: --base, --subfolder, --overwrite, --no-pget. Infers subfolder from URL/filename when not given.

assets/

  • default-workflow.json: Default workflow. Copy and edit (prompt, style, seed) then run with the edited path; or run as-is for a generic run.
