🎨 GPT Image 2 β€” Pro Pack on RunComfy

v0.1.3


MIT-0 · by Kalvin (@kalvinrv)


OpenAI GPT Image 2 (ChatGPT Images 2.0) hosted on the RunComfy Model API — no OpenAI key, async REST.

When to pick this model (vs siblings)

GPT Image 2's distinct strength is directive precision: it follows multi-element prompts, layout cues, and embedded-text instructions more reliably than its peers. Pick it when what's on the canvas matters more than how stylized it looks.

| You want | Use |
| --- | --- |
| Embedded text, logos, signage, multilingual typography | GPT Image 2 ✓ |
| Brand-safe, e-commerce / ad / UI mockup imagery | GPT Image 2 ✓ |
| Iterative refinement that holds composition stable | GPT Image 2 ✓ |
| Heavy stylization, painterly look | Flux 2 |
| Hyperrealistic portrait | Nano Banana Pro |
| Cinematic / aesthetic-first hero shots | Seedream 5 |

If the user explicitly asked for GPT Image 2 / ChatGPT Image 2 / Image 2, route here regardless — don't second-guess the model choice.

Prerequisites

  1. RunComfy CLI — npm i -g @runcomfy/cli
  2. RunComfy account — runcomfy login opens a browser device-code flow.
  3. CI / containers — set RUNCOMFY_TOKEN=<token> instead of runcomfy login.

Endpoints + input schema

Two endpoints, same model.

openai/gpt-image-2/text-to-image

| Field | Type | Required | Default | Notes |
| --- | --- | --- | --- | --- |
| prompt | string | yes | — | The positive prompt |
| size | enum | no | 1024_1024 | 1024_1024 (1:1), 1024_1536 (2:3 portrait), 1536_1024 (3:2 landscape) — only these three |

openai/gpt-image-2/edit

| Field | Type | Required | Default | Notes |
| --- | --- | --- | --- | --- |
| prompt | string | yes | — | Natural-language edit instruction |
| images | string[] | yes | — | Up to 10 reference image URLs (publicly fetchable HTTPS) |
| size | enum | no | auto | auto (preserve input ratio), or one of the three fixed sizes above |

size=auto on edit preserves the input aspect ratio — strongly recommended unless the edit explicitly changes framing.
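The size enum can be enforced locally before spending a request. A minimal sketch (a hypothetical helper, not part of the CLI) that builds a validated --input string for the text-to-image endpoint:

```python
import json

# The only three fixed sizes the schema accepts.
VALID_SIZES = {"1024_1024", "1024_1536", "1536_1024"}

def t2i_input(prompt: str, size: str = "1024_1024") -> str:
    """Build the --input JSON for openai/gpt-image-2/text-to-image."""
    if not prompt:
        raise ValueError("prompt is required")
    if size not in VALID_SIZES:
        # The API would reject this with a 422 (CLI exit code 65); fail early instead.
        raise ValueError(f"size must be one of {sorted(VALID_SIZES)}, got {size!r}")
    return json.dumps({"prompt": prompt, "size": size})
```

Failing locally on a bad size avoids a round-trip that would otherwise exit with code 65.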

How to invoke

Text-to-image:

runcomfy run openai/gpt-image-2/text-to-image \
  --input '{"prompt": "<user prompt>", "size": "1024_1536"}' \
  --output-dir <absolute/path>

Edit (single ref):

runcomfy run openai/gpt-image-2/edit \
  --input '{
    "prompt": "<edit instruction>",
    "images": ["https://..."]
  }' \
  --output-dir <absolute/path>

Edit (multi-ref, up to 10):

runcomfy run openai/gpt-image-2/edit \
  --input '{
    "prompt": "compose subject from image 1 into the room from image 2; match the lighting of image 2",
    "images": ["https://...subject.jpg", "https://...room.jpg"]
  }' \
  --output-dir <absolute/path>

The CLI submits, polls every 2s until terminal, then downloads any *.runcomfy.net / *.runcomfy.com URL from the result into --output-dir. Stdout is the result JSON. Stderr is progress.

For pipe-friendly usage:

runcomfy --output json run openai/gpt-image-2/text-to-image \
  --input '{"prompt":"..."}' --no-wait | jq -r .request_id

Prompting β€” what actually works

These are model-specific patterns that empirically improve output quality. Apply to text-to-image and edit alike.

Be explicit on subject + setting + mood. "A close-up of a matte ceramic water bottle on warm linen, soft window light, neutral background" — three concrete directives — beats "nice product photo of a bottle".

Quote embedded text exactly. Keep it short. GPT Image 2 is the strongest text-rendering model in this class, but only when you put the literal characters in quotes. Long blocks of text degrade. For multilingual text, name the script: "Japanese kana", "Cyrillic", "Arabic right-to-left".

Use compositional cues directly. "rule of thirds", "close-up", "aerial view", "centered subject", "shallow depth of field" — the model has learned concrete visual meanings for these terms.

Iterate one attribute at a time. When refining, change one thing per iteration (lighting OR background OR pose OR text) and keep the rest of the prompt verbatim. The model holds composition stable across iterations when only one knob moves.

Don't give conflicting instructions. "no text" + "the word 'AQUA+' on the label" is incoherent — the model will pick one, and you don't control which.

Don't pile up styles. "ukiyo-e + watercolor + 8K + cinematic + minimalist" cancels out. Pick one or two style anchors max.

For the edit endpoint specifically:

  • State preservation goals. "keep the person's pose and face identity unchanged", "keep the brand mark and typography on the package", "keep the overall framing". The model needs to know what NOT to change.
  • Use directional language for spatial edits. "Move the headline from top-right to bottom-center", not "reposition the headline".
  • Multi-ref: number the images in the prompt — "subject from image 1, lighting and background from image 2" — and the model will route the cues correctly.
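The edit-endpoint constraints above can also be checked before submitting. A sketch of a payload builder (a hypothetical helper, not part of the CLI) that enforces the documented limits — 1 to 10 HTTPS reference URLs, size defaulting to auto:

```python
import json

FIXED_SIZES = {"1024_1024", "1024_1536", "1536_1024"}

def edit_input(prompt: str, images: list[str], size: str = "auto") -> str:
    """Build the --input JSON for openai/gpt-image-2/edit."""
    if not 1 <= len(images) <= 10:
        raise ValueError("edit takes 1-10 reference image URLs")
    for url in images:
        if not url.startswith("https://"):
            raise ValueError(f"reference images must be publicly fetchable HTTPS URLs: {url}")
    if size != "auto" and size not in FIXED_SIZES:
        raise ValueError(f"unsupported size: {size!r}")
    return json.dumps({"prompt": prompt, "images": images, "size": size})
```

The image order matters: keep the primary subject first, since the model treats image 1 as primary and later references as auxiliary cues.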

Where it shines

| Use case | Why GPT Image 2 |
| --- | --- |
| E-commerce product photography | Reliable text on labels, brand-safe lighting, consistent across SKUs |
| High-conversion ads | Headline + visual integration in one pass |
| Brand asset localization | One source asset → many language variants of the same headline |
| Signage, posters, packaging mock-ups | Text rendering accuracy at multiple scales |
| UI mockups, scientific illustrations | Layout precision and label legibility |

Sample prompts (verified to produce strong results)

Text-to-image β€” product hero:

A minimal hero product still life: a matte ceramic water bottle on warm linen,
soft window light, the word "AQUA+" in clean sans-serif on the label,
subtle rim highlights, e-commerce ready, 8K detail, neutral background

Text-to-image β€” multilingual signage:

A small Tokyo café storefront at dusk, warm interior glow,
the sign reads "コーヒー" in bold Japanese kana on a wooden plaque,
shallow depth of field, rule of thirds, cinematic

Edit β€” background swap with preservation:

Turn the background into a bright minimal white-to-soft-gray studio sweep
with gentle floor shadow; add a large headline in-image that reads
"OPEN STUDIO" in a bold clean sans-serif, high contrast, centered;
keep the main person or product, pose, and face identity unchanged

Limitations

  • Only 3 fixed sizes on text-to-image (and the same 3 + auto on edit). Extreme aspect ratios are auto-resized to the nearest supported one.
  • Prompt length ~ a few thousand tokens. Long blocks of embedded text degrade output.
  • Edit's multi-image support is "guidance from up to 10 refs", not ControlNet-style stacks. The first image is treated as the primary; the rest provide auxiliary cues.
  • Photorealism on portraits is not its strongest suit — Nano Banana Pro wins that head-to-head.

Exit codes

The runcomfy CLI uses sysexits-style codes:

| code | meaning |
| --- | --- |
| 0 | success |
| 64 | bad CLI args |
| 65 | bad input JSON / schema mismatch (e.g. size: "2048_2048" would 422) |
| 69 | upstream 5xx |
| 75 | retryable: timeout / 429 |
| 77 | not signed in or token rejected |

Full reference: docs.runcomfy.com/cli/troubleshooting.
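The split between retryable and fatal codes lends itself to a thin wrapper. A sketch (an assumed pattern, not shipped with the CLI) that retries only on 69 and 75, with backoff:

```python
import subprocess
import time

RETRYABLE = {69, 75}  # upstream 5xx; timeout / 429 -- worth retrying with backoff

def run_with_retry(argv: list[str], max_tries: int = 3) -> int:
    """Run a CLI command, retrying only on retryable exit codes."""
    for attempt in range(1, max_tries + 1):
        code = subprocess.call(argv)
        if code not in RETRYABLE or attempt == max_tries:
            # 0 (success), 64/65 (caller bugs), and 77 (re-authenticate)
            # all pass straight through: retrying the same call won't help.
            return code
        time.sleep(2 ** attempt)  # 2s, 4s, ... between retries
    return code
```

Usage: `run_with_retry(["runcomfy", "run", "openai/gpt-image-2/text-to-image", "--input", payload, "--output-dir", out_dir])`.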

How it works

  1. The skill invokes runcomfy run openai/gpt-image-2/<endpoint> with a JSON body matching the schema above.
  2. The CLI POSTs to https://model-api.runcomfy.net/v1/models/openai/gpt-image-2/<endpoint> with the user's bearer token.
  3. The Model API returns a request_id; the CLI polls GET .../requests/<id>/status every 2 seconds.
  4. On terminal status, the CLI fetches GET .../requests/<id>/result and downloads any URL whose host ends with .runcomfy.net or .runcomfy.com into --output-dir. Other URLs are listed but not fetched.
  5. Ctrl-C while polling sends POST .../requests/<id>/cancel so you don't get billed for GPU you stopped.
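The submit/poll flow above can be sketched directly against the documented endpoints. The terminal status names ("succeeded", "failed", "cancelled") are assumptions — the doc only says the CLI polls until terminal — and the real CLI additionally handles downloads and Ctrl-C cancellation:

```python
import json
import time
import urllib.request

BASE = "https://model-api.runcomfy.net/v1"

def status_url(request_id: str) -> str:
    return f"{BASE}/requests/{request_id}/status"

def is_terminal(status: str) -> bool:
    # Assumed status names; adjust to whatever the API actually returns.
    return status in {"succeeded", "failed", "cancelled"}

def poll_status(request_id: str, token: str, interval: float = 2.0) -> dict:
    """Poll every `interval` seconds (the CLI's cadence) until a terminal status."""
    headers = {"Authorization": f"Bearer {token}"}
    while True:
        req = urllib.request.Request(status_url(request_id), headers=headers)
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        if is_terminal(body.get("status", "")):
            return body
        time.sleep(interval)
```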

Security & Privacy

  • This skill only invokes the runcomfy CLI. No other endpoints, no telemetry, no callbacks.
  • The token saved by runcomfy login lives at ~/.config/runcomfy/token.json (mode 0600), read only by runcomfy.
  • Auto-download is restricted to *.runcomfy.net / *.runcomfy.com — a compromised model cannot trick the CLI into pulling arbitrary internet content.
  • Downloads stream to disk and abort if any single response exceeds 2 GiB.
  • No env vars are read other than RUNCOMFY_TOKEN (optional, for CI).
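The download-host restriction amounts to a suffix check on the parsed hostname. A sketch of the idea (an illustration, not the CLI's actual code):

```python
from urllib.parse import urlsplit

ALLOWED_SUFFIXES = (".runcomfy.net", ".runcomfy.com")

def downloadable(url: str) -> bool:
    """True only for hosts under runcomfy.net / runcomfy.com."""
    host = (urlsplit(url).hostname or "").lower()
    # Suffix-match the parsed hostname, not the raw URL string, so lookalike
    # domains such as "evilruncomfy.net" are rejected.
    return host.endswith(ALLOWED_SUFFIXES)
```

Matching on the hostname (rather than a substring of the URL) is what makes the allowlist robust against `https://runcomfy.net.attacker.com/...`-style tricks.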

What this skill is not

Not a direct OpenAI API client. Not a capability grant — it depends on a working RunComfy account. Not multi-tenant.


About RunComfy

RunComfy is a hosted platform for running AI media models — image, video, audio. The Model API exposes a catalog of models behind a single async REST endpoint: submit a request, poll status, fetch results. No deployment, no GPU rental, no separate provider keys. Models on RunComfy include OpenAI GPT Image 2, Black Forest Labs Flux 2, ByteDance Seedance, Google Nano Banana 2, Wan 2.7, and many more. The runcomfy CLI wraps the API for shell / agent use, with sysexits-style exit codes, JSON output mode, and auto-download of generated files. See the full models catalog.

Version tags

latest: vk97afc72thhkt7tbyazzpxh1hn85ryry

Runtime requirements

Bins: runcomfy
Env: RUNCOMFY_TOKEN
Config: ~/.config/runcomfy