๐ŸŽจ GPT Image Edit โ€” Pro Pack on RunComfy

v0.1.3

Edit images with OpenAI GPT Image 2 (the `/edit` endpoint of ChatGPT Images 2.0) on RunComfy โ€” bundled with the model's documented prompting patterns so the...

MIT-0 ยท by Kalvin (@kalvinrv)


runcomfy.com ยท docs ยท Edit endpoint ยท Text-to-image sibling

OpenAI GPT Image 2 โ€” /edit endpoint (ChatGPT Images 2.0 image-to-image) on the RunComfy Model API. Strongest in its class at preserving identity through targeted edits and rewriting embedded text in any script (Latin, kana, CJK, Cyrillic, Arabic).

When to pick this model (vs siblings)

| You want | Use |
| --- | --- |
| Edit multilingual / embedded text in image | GPT Image Edit โœ“ |
| Identity preservation through translated headline variants | GPT Image Edit โœ“ |
| Layout-precise edit (move headline, swap CTA, etc.) | GPT Image Edit โœ“ |
| Up to 10 reference images | GPT Image Edit โœ“ |
| Batch up to 20 images consistently | Nano Banana Edit |
| Single-shot precise local edit, source-fidelity-first | Flux Kontext |
| Generate from scratch with GPT Image 2 | sibling gpt-image-2 skill |
| Batch SKU galleries with stable identity | Nano Banana Edit |

Prerequisites

  1. RunComfy CLI โ€” npm i -g @runcomfy/cli
  2. RunComfy account โ€” runcomfy login opens a browser device-code flow.
  3. CI / containers โ€” set RUNCOMFY_TOKEN=<token> instead of runcomfy login.

Endpoints + input schema

openai/gpt-image-2/edit

| Field | Type | Required | Default | Notes |
| --- | --- | --- | --- | --- |
| prompt | string | yes | โ€” | Edit instruction. Lead with preservation, end with the change. |
| images | string[] | yes | โ€” | Up to 10 publicly fetchable HTTPS URLs. First is primary; rest are auxiliary. |
| size | enum | no | auto | auto (preserve input ratio), 1024_1024 (1:1), 1024_1536 (2:3 portrait), 1536_1024 (3:2 landscape). |

size=auto preserves the input ratio โ€” strongly recommended unless the edit explicitly changes framing.
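Catching schema mistakes locally avoids a round-trip that ends in a 422. A pre-flight check mirroring the table above might look like this (validate_edit_input is an illustrative helper, not part of the CLI):

```python
ALLOWED_SIZES = {"auto", "1024_1024", "1024_1536", "1536_1024"}

def validate_edit_input(body: dict) -> list[str]:
    """Return a list of schema problems; an empty list means valid."""
    errors = []
    prompt = body.get("prompt")
    if not isinstance(prompt, str) or not prompt.strip():
        errors.append("prompt: required non-empty string")
    images = body.get("images")
    if not isinstance(images, list) or not images:
        errors.append("images: required non-empty array")
    elif len(images) > 10:
        errors.append("images: at most 10 reference URLs")
    elif not all(isinstance(u, str) and u.startswith("https://") for u in images):
        errors.append("images: each entry must be a publicly fetchable HTTPS URL")
    if body.get("size", "auto") not in ALLOWED_SIZES:
        errors.append(f"size: must be one of {sorted(ALLOWED_SIZES)}")
    return errors
```

Run it on the JSON body before calling runcomfy run; any non-empty result maps to the exit-65 class of failures.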

How to invoke

Single-ref preservation edit:

runcomfy run openai/gpt-image-2/edit \
  --input '{
    "prompt": "Keep the person'\''s face, pose, and brand mark unchanged. Replace the background with a soft warm-grey studio sweep and a gentle floor shadow.",
    "images": ["https://.../portrait.jpg"]
  }' \
  --output-dir <absolute/path>

Multilingual text rewrite (preserve everything except the headline):

runcomfy run openai/gpt-image-2/edit \
  --input '{
    "prompt": "Keep the photograph, layout, and brand mark exactly as in the input. Replace only the in-image headline. The new headline reads \"ไปŠๆ—ฅใฎใŠใ™ใ™ใ‚\" in bold Japanese kana, same position and font weight as before.",
    "images": ["https://.../poster-en.jpg"]
  }' \
  --output-dir <absolute/path>

Multi-ref composition:

runcomfy run openai/gpt-image-2/edit \
  --input '{
    "prompt": "Compose subject from image 1 into the room from image 2. Match the lighting and color palette of image 2. Keep image 1 subject identity (face, pose, clothing) unchanged.",
    "images": ["https://.../subject.jpg", "https://.../room.jpg"]
  }' \
  --output-dir <absolute/path>

Prompting โ€” what actually works

Lead with preservation goals. Always: "Keep [face / pose / clothing / brand / framing] unchanged." Then state the change. The model honors what's stated up front.

Multilingual text โ€” quote the characters, name the script. "the headline reads \"ใ‚ณใƒผใƒ’ใƒผ\" in bold Japanese kana", "the label says \"ะะ ะžะœะ\" in Cyrillic, white on black", "the right-margin caption reads \"ุชุฎููŠุถ\" in Arabic right-to-left". Don't paraphrase โ€” quote.

Directional language for spatial edits. Concrete spatial scopes work: "move the headline from top-right to bottom-center", "remove the leftmost object only", "replace the watermark in the bottom-right corner".

Multi-ref numbering. When passing multiple images, refer to them by number: "subject from image 1, lighting from image 2, color palette from image 3". The model routes cues correctly.

Use size: "auto" to preserve input ratio. Only override when the edit explicitly changes framing (e.g. cropping a 16:9 to 1:1).

Anti-patterns:

  • Long compound edit instructions ("change A and B and C and D") โ†’ drift increases per added scope.
  • Missing preservation goals โ†’ model subtly rewrites the face / brand / framing.
  • Paraphrasing in-image text instead of quoting it โ†’ text comes out different.
  • Asking for size outside the 3 fixed values + auto โ†’ 422.

Where it shines

| Use case | Why GPT Image Edit |
| --- | --- |
| Multilingual ad localization | One source asset โ†’ many language variants of the same headline |
| Brand-safe headline / CTA swaps | Layout precision + preservation language hold the rest stable |
| Multi-ref composition (subject from one, scene from another) | Numbered refs route cues correctly |
| Layout-precise repositioning | Directional language ("top-right to bottom-center") honored |
| Identity preservation across signage edits | Strongest in class for face / brand preservation through targeted edits |

Sample prompts (verified to produce strong results)

Background swap with full preservation (page example):

Turn the background into a bright minimal white-to-soft-gray studio
sweep with gentle floor shadow; add a large headline in-image that
reads "OPEN STUDIO" in a bold clean sans-serif, high contrast, centered;
keep the main person or product, pose, and face identity unchanged

Multilingual variant:

Keep the photograph, layout, lighting, and brand mark exactly as in the
input. Replace only the in-image headline.
The new headline reads "ใ‚ณใƒผใƒ’ใƒผ" in bold Japanese kana, same position
and font weight as before.

Multi-ref composition:

Compose subject from image 1 into the kitchen from image 2.
Match the warm window light and color palette of image 2.
Keep subject identity (face, pose, clothing) from image 1 unchanged.

Limitations

  • size: 3 fixed values + auto โ€” anything else 422s.
  • images: up to 10 โ€” first is primary, rest are auxiliary cues.
  • Long compound prompts drift โ€” split into multiple passes when needed.
  • For batch consistency across many SKU images, Nano Banana Edit (up to 20) is better.
  • Photorealism on portraits โ€” Nano Banana Pro wins head-to-head.

Exit codes

| code | meaning |
| --- | --- |
| 0 | success |
| 64 | bad CLI args |
| 65 | bad input JSON / schema mismatch |
| 69 | upstream 5xx |
| 75 | retryable: timeout / 429 |
| 77 | not signed in or token rejected |

Full reference: docs.runcomfy.com/cli/troubleshooting.
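Only code 75 is marked retryable, so a caller should retry on it alone and surface everything else immediately. A sketch of one possible wrapper (the retry policy and run_with_retry name are the caller's choice, not prescribed by the CLI):

```python
import subprocess
import time

RETRYABLE = {75}  # timeout / 429, per the exit-code table

def run_with_retry(cmd: list[str], attempts: int = 3, backoff_s: float = 5.0) -> int:
    """Run a command, retrying with exponential backoff only on exit 75."""
    code = 1
    for attempt in range(attempts):
        code = subprocess.run(cmd).returncode
        if code not in RETRYABLE or attempt == attempts - 1:
            return code
        time.sleep(backoff_s * (2 ** attempt))  # 5s, 10s, 20s, ...
    return code
```

Codes 64 and 65 indicate a bad invocation or body, so retrying them only repeats the failure; 77 means re-run runcomfy login or fix RUNCOMFY_TOKEN.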

How it works

The skill invokes runcomfy run openai/gpt-image-2/edit with a JSON body matching the schema. The CLI POSTs to https://model-api.runcomfy.net/v1/models/openai/gpt-image-2/edit, polls the request, fetches the result, and downloads any .runcomfy.net/.runcomfy.com URL into --output-dir. Ctrl-C cancels the remote request before exit.

Security & Privacy

  • This skill only invokes the runcomfy CLI. No other endpoints, no telemetry, no callbacks.
  • The token saved by runcomfy login lives at ~/.config/runcomfy/token.json (mode 0600).
  • Auto-download is restricted to *.runcomfy.net / *.runcomfy.com โ€” a compromised model cannot trick the CLI into pulling arbitrary internet content.

Version tags

| tag | version |
| --- | --- |
| latest | vk972r1d9x1ddkmexffrhr8rx7x85rfm0 |

Runtime requirements

  โ€ข Bins: runcomfy
  โ€ข Env: RUNCOMFY_TOKEN
  โ€ข Config: ~/.config/runcomfy