🫧 Nano Banana Edit — Pro Pack on RunComfy

v0.1.1


2 versions · Updated 2h ago · MIT-0
by Kalvin (@kalvinrv)


runcomfy.com · docs · Edit endpoint

Google Nano Banana 2 Edit — the image-to-image edit endpoint of the Gemini-family flash-tier image model — hosted on the RunComfy Model API. Up to 20 input images per call for batch edits and multi-reference variation.

When to pick this model (vs siblings)

| You want | Use |
| --- | --- |
| Preserve subject identity, swap background or clothing | Nano Banana Edit ✓ |
| Edit up to 20 images consistently in one batch | Nano Banana Edit ✓ |
| Localize an edit to "X only" with spatial language | Nano Banana Edit ✓ |
| Edit multilingual text inside the image (signs, labels) | GPT Image 2 edit |
| Single ref + precise local edit ("she's now holding X") | Flux Kontext |
| Generate a new image from scratch | Nano Banana 2 t2i (sibling skill) |

If the user said "nano banana edit" / "edit with nano banana" explicitly, route here regardless.

Prerequisites

  1. RunComfy CLI — npm i -g @runcomfy/cli
  2. RunComfy account โ€” runcomfy login opens a browser device-code flow.
  3. CI / containers โ€” set RUNCOMFY_TOKEN=<token> instead of runcomfy login.

Endpoints + input schema

google/nano-banana-2/edit

| Field | Type | Required | Default | Notes |
| --- | --- | --- | --- | --- |
| prompt | string | yes | — | Edit instruction. Lead with preservation, end with the change. |
| image_urls | array | yes | — | 1–20 publicly-fetchable HTTPS URLs. |
| number_of_images | int | no | 1 | 1–4 outputs per call. |
| seed | int | no | — | Reproducibility. |
| aspect_ratio | enum | no | auto | auto (follows input) or fixed ratios — lock for batch consistency. |
| resolution | enum | no | 1K | 0.5K / 1K / 2K / 4K. |
| output_format | enum | no | png | png / jpeg / webp. |
| safety_tolerance | int | no | 4 | 1 (strict) – 6 (permissive). |
| limit_generations | bool | no | — | If true, restricts each round to one output. |
| enable_web_search | bool | no | false | Web grounding (extra cost / latency). |
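
The bounds in this table can be checked client-side before spending a call. A minimal validation sketch — the field names and ranges come from the schema above, but the helper itself is illustrative, not part of any RunComfy SDK:

```python
# Client-side sanity check for a google/nano-banana-2/edit payload.
# Bounds mirror the schema table; this is a sketch, not the official SDK.

RESOLUTIONS = {"0.5K", "1K", "2K", "4K"}
FORMATS = {"png", "jpeg", "webp"}

def validate_edit_payload(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload looks valid."""
    errors = []
    if not payload.get("prompt"):
        errors.append("prompt is required")
    urls = payload.get("image_urls") or []
    if not (1 <= len(urls) <= 20):
        errors.append("image_urls must contain 1-20 entries")
    if any(not u.startswith("https://") for u in urls):
        errors.append("image_urls must be publicly fetchable HTTPS URLs")
    if not (1 <= payload.get("number_of_images", 1) <= 4):
        errors.append("number_of_images must be 1-4")
    if payload.get("resolution", "1K") not in RESOLUTIONS:
        errors.append("resolution must be one of 0.5K / 1K / 2K / 4K")
    if payload.get("output_format", "png") not in FORMATS:
        errors.append("output_format must be png / jpeg / webp")
    if not (1 <= payload.get("safety_tolerance", 4) <= 6):
        errors.append("safety_tolerance must be 1-6")
    return errors
```

Running it on a payload before `runcomfy run` turns a remote exit-65 round-trip into an instant local error list.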

How to invoke

Single-image background swap, identity preserved:

runcomfy run google/nano-banana-2/edit \
  --input '{
    "prompt": "Keep the subject identity, pose, and clothing unchanged. Convert the background into a rainy neon cyberpunk street.",
    "image_urls": ["https://.../portrait.jpg"]
  }' \
  --output-dir <absolute/path>

Batch edit with locked framing:

runcomfy run google/nano-banana-2/edit \
  --input '{
    "prompt": "Replace the watermark in the bottom-right with the text \"AURA\" in clean white sans-serif. Keep everything else exactly as in the input.",
    "image_urls": ["https://.../sku-1.jpg", "https://.../sku-2.jpg", "https://.../sku-3.jpg"],
    "aspect_ratio": "1:1",
    "resolution": "1K"
  }' \
  --output-dir <absolute/path>

Targeted spatial edit ("left object only"):

runcomfy run google/nano-banana-2/edit \
  --input '{
    "prompt": "Remove the leftmost object only. Keep the right two objects, the table, and the lighting unchanged.",
    "image_urls": ["https://.../still-life.jpg"]
  }' \
  --output-dir <absolute/path>

Prompting โ€” what actually works

Preservation first, change last. Always lead with "Keep [identity / pose / clothing / brand / framing] unchanged." Then state the change in one clean sentence. Models honor what's stated up front; tail-end preservations get ignored.

Localize with spatial language. "background only", "the left object", "the upper-right corner", "above the headline" โ€” concrete spatial scopes are honored. "make it more X" is vague and drifts.

Batch consistency — when editing a series, lock aspect_ratio and resolution. Use the same prompt grammar across the batch so each output reads as a sibling, not a remix.

Iterate small. If a one-pass edit drifts, split into two: pass 1 changes background only, pass 2 swaps the subject's outfit. Cleaner edits, same total cost (assuming similar resolution).

Multi-image variation — pass up to 20 inputs to get a coherent batch. Useful for SKU galleries, A/B testing, character sheet variations.

Anti-patterns:

  • Long compound instructions ("change A and B and C and D") — drift increases with each added scope.
  • Edit instructions written in passive voice ("the background should be changed") — be imperative.
  • Missing preservation goals — the model will subtly rewrite the face / brand.
  • Aspect ratios that don't match the input — cause crops or stretches.

Where it shines

| Use case | Why Nano Banana Edit |
| --- | --- |
| SKU gallery — same product on different backgrounds | Batch of 20, identity-preserved, framing locked |
| Influencer / spokesperson background swaps | Strong identity preservation across edits |
| Localized object removal / addition | Spatial language honored |
| A/B variants for ad creative | Seed lock + multiple number_of_images |
| Brand-asset relocalization | Same composition with text / palette swap |

Sample prompts (verified to produce strong results)

Background swap (page example):

Keep the subject identity unchanged. Convert the background into a rainy
neon cyberpunk street.

Targeted text replacement:

Keep the bottle, label, and lighting exactly as in the input.
Replace only the brand text on the label from "ALPHA" to "AURA",
same font weight, centered, white on black.

Multi-image batch consistency:

For each input image: keep the subject's pose and identity unchanged.
Convert the background to a soft warm-grey studio sweep with subtle
floor shadow. Center the subject at the same fraction of frame as the
input.

Limitations

  • 1–20 input images per call — the first is treated as primary; the rest provide auxiliary cues.
  • 1–4 outputs per call.
  • Long compound prompts drift — split into multiple passes.
  • Web search adds latency + cost — only enable it on demand.
  • For multilingual in-image text edits, GPT Image 2 edit wins.

Exit codes

| code | meaning |
| --- | --- |
| 0 | success |
| 64 | bad CLI args |
| 65 | bad input JSON / schema mismatch |
| 69 | upstream 5xx |
| 75 | retryable: timeout / 429 |
| 77 | not signed in or token rejected |

Full reference: docs.runcomfy.com/cli/troubleshooting.
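
These codes map naturally onto a retry policy: 75 is explicitly retryable, 69 (upstream 5xx) is usually worth a bounded retry, and everything else should fail fast. A sketch of that mapping — only the codes come from the table above; the policy itself is our own assumption:

```python
import subprocess
import time

# Exit codes come from the table above; the retry policy is an assumption.
RETRYABLE = {75, 69}  # 75: timeout / 429, 69: upstream 5xx

def should_retry(exit_code: int, attempt: int, max_attempts: int = 3) -> bool:
    """True if the failed call is worth retrying with backoff."""
    return exit_code in RETRYABLE and attempt < max_attempts

def run_with_retry(cmd: list[str], max_attempts: int = 3) -> int:
    """Run a runcomfy command, retrying transient failures with backoff."""
    code = 1
    for attempt in range(1, max_attempts + 1):
        code = subprocess.run(cmd).returncode
        if code == 0 or not should_retry(code, attempt, max_attempts):
            return code
        time.sleep(2 ** attempt)  # exponential backoff between attempts
    return code
```

Codes 64, 65, and 77 are deterministic (bad args, bad schema, bad auth), so retrying them only burns time.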

How it works

The skill invokes runcomfy run google/nano-banana-2/edit with a JSON body matching the schema. The CLI POSTs to https://model-api.runcomfy.net/v1/models/google/nano-banana-2/edit, polls the request, fetches the result, and downloads any .runcomfy.net/.runcomfy.com URL into --output-dir. Ctrl-C cancels the remote request before exit.
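
The submit → poll → fetch flow can be sketched as a small loop. This is a reconstruction from the description above — the status field names and values are assumptions, not RunComfy's documented API; the status fetcher is injected so the loop is testable without the network:

```python
import time

# Reconstruction of the CLI's poll loop. The status values
# ("pending", "completed", "failed") are assumed, not documented.

def poll_until_done(fetch_status, interval: float = 1.0, max_polls: int = 300):
    """fetch_status() returns a dict like {"status": ..., "outputs": [...]}.
    Returns the output URLs on completion, raises on failure or timeout."""
    for _ in range(max_polls):
        state = fetch_status()
        if state["status"] == "completed":
            return state.get("outputs", [])
        if state["status"] == "failed":
            raise RuntimeError(state.get("error", "request failed"))
        time.sleep(interval)
    raise TimeoutError("request did not finish within the polling budget")
```

In the real CLI the equivalent loop also honors Ctrl-C by cancelling the remote request before exiting.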

Security & Privacy

  • Token storage: runcomfy login writes the API token to ~/.config/runcomfy/token.json with mode 0600 (owner-only read/write). Set RUNCOMFY_TOKEN env var to bypass the file entirely in CI / containers.
  • Input boundary: the user prompt is passed as a JSON string to the CLI via --input. The CLI does NOT shell-expand the prompt; it transmits the JSON body directly to the Model API over HTTPS. No shell injection surface from prompt content.
  • Third-party content: image / mask / video URLs you pass are fetched by the RunComfy model server, not by the CLI on your machine. Treat external URLs as untrusted; image-based prompt injection is a known risk for any image-edit / video-edit model.
  • Outbound endpoints: only model-api.runcomfy.net (request submission) and *.runcomfy.net / *.runcomfy.com (download whitelist for generated outputs). No telemetry, no callbacks.
  • Generated-file size cap: the CLI aborts any single download > 2 GiB to prevent disk-fill from a malicious or runaway model output.
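
The download whitelist above ("any .runcomfy.net / .runcomfy.com URL") amounts to a host-suffix check. This sketch is our reading of that rule, not the CLI's actual code:

```python
from urllib.parse import urlparse

# Host-suffix check mirroring the stated download whitelist
# (*.runcomfy.net / *.runcomfy.com). Our reading of the rule,
# not the CLI's implementation.

ALLOWED_SUFFIXES = (".runcomfy.net", ".runcomfy.com")
ALLOWED_BARE = {"runcomfy.net", "runcomfy.com"}
MAX_DOWNLOAD_BYTES = 2 * 1024**3  # the 2 GiB cap described above

def is_downloadable(url: str) -> bool:
    """True only for HTTPS URLs on whitelisted RunComfy hosts."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = parsed.hostname or ""
    return host.endswith(ALLOWED_SUFFIXES) or host in ALLOWED_BARE
```

Matching on the parsed hostname (rather than substring-searching the URL) is what defeats lookalike hosts such as runcomfy.net.evil.com.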

Version tags

| tag | version |
| --- | --- |
| latest | vk97ak4jp4xyvrmajns78t6h3rs85sd07 |

Runtime requirements

| requirement | value |
| --- | --- |
| Bins | runcomfy |
| Env | RUNCOMFY_TOKEN |
| Config | ~/.config/runcomfy |