Install

openclaw skills install codex-pet

Codex Pet generator on RunComfy. Build a Codex-compatible spritesheet.webp + pet.json from a single reference image, drop it into `${CODEX_HOME:-$HOME/.codex}/pets/<name>/`, and Codex picks it up as a custom Codex Pet next to the 8 built-ins. This skill produces the exact Codex Pet atlas Codex expects (1536x1872 PNG/WebP, 8 cols x 9 rows, 192x208 cells, 9 animation states: idle, running-right, running-left, waving, jumping, failed, waiting, running, review). It calls OpenAI GPT Image 2 edit ONCE via the local RunComfy CLI as `runcomfy run openai/gpt-image-2/edit` to produce a canonical Codex Pet pose, then assembles all 9 animation rows programmatically with ImageMagick micro-transforms: no Codex Pro, no `$imagegen`, no OPENAI_API_KEY required, only RUNCOMFY_TOKEN. Triggers on "codex pet", "create codex pet", "make codex pet", "hatch codex pet", "/hatch image", "desktop pet codex", "codex pets", "spritesheet.webp", or any explicit ask to build a custom pet for OpenAI Codex.

runcomfy.com · GPT Image 2 edit endpoint · docs
Codex Pet generator on RunComfy. Turn one source image into a Codex-compatible custom Codex Pet (pet.json + spritesheet.webp), drop it into ${CODEX_HOME:-$HOME/.codex}/pets/<name>/, and Codex picks it up next to the 8 built-in Codex Pets.
OpenAI Codex Pets (released May 2026) are pixel-art animated companions that float over your desktop while Codex codes. They react to mouse interaction and Codex status (scratching head when thinking, popping a speech bubble when a task completes). Codex ships with 8 built-in Codex Pets and supports custom Codex Pets installed locally as a folder under ${CODEX_HOME:-$HOME/.codex}/pets/.
Each custom Codex Pet folder contains exactly two files:

- pet.json: manifest with id, displayName, description, spritesheetPath.
- spritesheet.webp: the Codex Pet sprite atlas, 1536x1872 PNG or WebP, 8 columns x 9 rows of 192x208 cells, transparent background.

The 9 rows correspond to 9 animation states Codex plays. Each row uses a fixed number of leading frames; trailing cells stay fully transparent.
Versus hatch-pet: OpenAI ships an official hatch-pet skill that produces the same Codex Pet artifact via the Codex-internal $imagegen system skill (requires Codex Pro plus a configured $imagegen).
This Codex Pet skill is a drop-in alternative that runs via the RunComfy CLI: a single RUNCOMFY_TOKEN plus the runcomfy and magick binaries, no Codex Pro, no $imagegen, no OPENAI_API_KEY. The output Codex Pet artifact is identical (same pet.json shape, same spritesheet.webp 1536x1872 atlas, same 9 animation rows), so Codex treats this Codex Pet exactly like one made by hatch-pet.
This skill follows the same pattern Codex's built-in Codex Pets use: one canonical pose, replicated across cells with ImageMagick micro-transforms for subtle animation (1-2 px shifts, blink frames, tilt frames). That matches what the official hatch-pet output actually looks like cell-by-cell; the Codex Pet animation visible in the Codex desktop app is intentionally subtle.
Pick this skill when you are not on Codex Pro or don't have $imagegen configured.

Codex reads one fixed atlas: 8 columns, 9 rows, 192x208 cells. Each Codex Pet row corresponds to one animation state with a specific number of leading frames.
| Row | State | Used columns | Frames | Codex Pet behavior |
|---|---|---|---|---|
| 0 | idle | 0-5 | 6 | calm breathing/blinking; the reduced-motion first frame for the Codex Pet |
| 1 | running-right | 0-7 | 8 | Codex Pet locomotion to the right |
| 2 | running-left | 0-7 | 8 | mirrored locomotion to the left |
| 3 | waving | 0-3 | 4 | greeting / attention gesture |
| 4 | jumping | 0-4 | 5 | anticipation, lift, peak, descent, settle |
| 5 | failed | 0-7 | 8 | error / sad / deflated reaction |
| 6 | waiting | 0-5 | 6 | patient idle variant |
| 7 | running | 0-5 | 6 | active working / in-progress loop (NOT foot-running) |
| 8 | review | 0-5 | 6 | focused / inspecting / thinking |
Trailing cells after each row's last used column must be fully transparent.
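Since the grid is fixed, every cell's pixel rectangle follows directly from its row and column. A small sketch of that arithmetic (`cell_geometry` is an illustrative helper, not part of Codex or the skill), yielding the WxH+X+Y geometry ImageMagick's `-crop` understands:

```shell
# Fixed Codex Pet atlas grid: 192x208 cells, 8 columns, 9 rows.
CELL_W=192 CELL_H=208

# cell_geometry ROW COL -> "WxH+X+Y" suitable for `magick ... -crop`.
cell_geometry() {
  local row=$1 col=$2
  echo "${CELL_W}x${CELL_H}+$((col * CELL_W))+$((row * CELL_H))"
}

cell_geometry 0 0   # first idle frame: 192x208+0+0
cell_geometry 8 5   # last review frame: 192x208+960+1664
```

For example, `magick spritesheet.webp -crop "$(cell_geometry 4 2)" +repage frame.png` would extract the jump-peak cell for inspection.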
The Codex Pet visual house style avoids: motion lines, drop shadows, glows, sparkles, floating effects, text labels, scenery, and white/black backgrounds.
- RunComfy CLI: npm i -g @runcomfy/cli, then runcomfy login. CI alternative: RUNCOMFY_TOKEN=<token>.
- ImageMagick: brew install imagemagick (macOS) or apt-get install imagemagick (Linux). Provides the magick command for the deterministic atlas assembly.
- One runcomfy run openai/gpt-image-2/edit call, producing one 1024x1024 chibi pose on a magenta chroma-key background.
- After writing pet.json, copy both files into ${CODEX_HOME:-$HOME/.codex}/pets/<pet-name>/.

The micro-transform approach matches what Codex's built-in Codex Pets actually do: the Codex Pet animation is intentionally subtle, so 1-2 px shifts and blink masks per cell give the right visual feel without burning 72 GPT Image 2 calls.
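Before spending the API call, a quick preflight can catch the common setup gaps. A sketch under the prerequisites above (`have` and `preflight` are illustrative helper names, not part of the skill):

```shell
# Preflight: confirm required binaries and some form of RunComfy auth exist.
have() { command -v "$1" >/dev/null 2>&1; }

preflight() {
  local ok=0
  for bin in runcomfy magick; do
    have "$bin" || { echo "missing binary: $bin" >&2; ok=1; }
  done
  # Accept either the CI env var or a logged-in CLI token file.
  if [ -z "${RUNCOMFY_TOKEN:-}" ] && [ ! -f "$HOME/.config/runcomfy/token.json" ]; then
    echo "no RunComfy auth: run 'runcomfy login' or set RUNCOMFY_TOKEN" >&2
    ok=1
  fi
  return "$ok"
}
```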
PET_NAME="my-pet"
PET_DESC="A friendly companion for late-night refactors."
SOURCE_URL="https://.../source.png"
RUN_DIR="./codex-pet-run/${PET_NAME}"
CHROMA="#FF00FF" # magenta chroma-key
mkdir -p "${RUN_DIR}"
runcomfy run openai/gpt-image-2/edit \
--input "{
\"prompt\": \"Generate one canonical Codex digital pet sprite based on the input image. EXAGGERATED chibi proportions: the head occupies about 60 percent of the total figure height; body and legs are tiny stubby and short. The whole pet figure must fit within a near-square bounding box (overall aspect close to 1:1). Pixel-art-adjacent low-resolution mascot, chunky whole-body silhouette, thick dark 1-2 px outline, visible stepped pixel edges, limited palette, flat cel shading, simple expressive face, tiny limbs. Centered in the image. No polished illustration, no painterly render, no anime key art, no 3D render, no glossy app-icon polish, no realistic detail. Background: solid flat magenta ${CHROMA} chroma-key fill outside the pet silhouette. The pet itself must not use the chroma-key color or any close-to-magenta highlights. No gradients, no shadows, no halos, no scenery, no text. Identity preserved from the input image.\",
\"images\": [\"${SOURCE_URL}\"],
\"size\": \"1024*1024\"
}" \
--output-dir "${RUN_DIR}/decoded/"
BASE=$(ls "${RUN_DIR}/decoded/"*.png | head -1)
echo "canonical Codex Pet: ${BASE}"
Chroma-key magenta to alpha, trim to the pet sprite bounding box, aspect-fit into 192x208 with transparent padding.
magick "${BASE}" \
-fuzz 18% -transparent "${CHROMA}" \
-alpha set \
-trim +repage \
-resize 192x208 \
-gravity center \
-background none \
-extent 192x208 \
"${RUN_DIR}/cell.png"
The 18% fuzz is tuned for GPT Image 2's anti-aliased magenta edges. Adjust to 25% if the Codex Pet has wider magenta halos, or to 8-10% if the pet has near-magenta highlights getting clipped.
For each row, build 8 cells from the canonical via ImageMagick micro-transforms, fill unused trailing cells with transparent, then concatenate into a 1536x208 row strip.
SRC="${RUN_DIR}/cell.png"
mkdir -p "${RUN_DIR}/cells"
# Helpers
shift_cell() { magick "$SRC" -background none -roll "$(printf '%+d%+d' "$1" "$2")" -alpha set "$3"; } # printf yields signed geometry like "+1-2"; a bare "+${1}" breaks for negative shifts
rotate_cell() { magick "$SRC" -background none -distort SRT "$1" -alpha set "$2"; }
make_blink() {
# Eyes are roughly at y=80-100 in a 208-tall cell.
# Soften with a skin-tone overlay across that horizontal band.
magick "$SRC" \
-region 80x6+56+82 -fill "#f4e6d8" -colorize 70% -blur 0x0.5 +region "$1"
}
blank_cell() { magick -size 192x208 xc:none -alpha set "PNG32:$1"; }
build_row() {
local row=$1; shift
local i=0
for spec in "$@"; do
local out="${RUN_DIR}/cells/row${row}-frame${i}.png"
case "$spec" in
base) cp "$SRC" "$out" ;;
blink) make_blink "$out" ;;
shift:*) IFS=':' read -r _ x y <<< "$spec"; shift_cell "$x" "$y" "$out" ;;
rotate:*) IFS=':' read -r _ ang <<< "$spec"; rotate_cell "$ang" "$out" ;;
esac
i=$((i+1))
done
while [ "$i" -lt 8 ]; do
blank_cell "${RUN_DIR}/cells/row${row}-frame${i}.png"
i=$((i+1))
done
magick "${RUN_DIR}/cells/row${row}-frame"*.png +append -alpha set \
"${RUN_DIR}/cells/row${row}-strip.png"
}
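The spec mini-language consumed by build_row is just colon-separated fields, as the case statement above shows. A standalone sketch of that parsing (`parse_spec` is illustrative only, added here to make the format explicit):

```shell
# Split a micro-transform spec: "shift:X:Y" / "rotate:ANG" / "base" / "blink".
parse_spec() {
  local spec=$1 op x y
  IFS=':' read -r op x y <<< "$spec"
  echo "op=$op x=${x:-} y=${y:-}"
}

parse_spec shift:2:-1   # op=shift x=2 y=-1
parse_spec rotate:-2    # op=rotate x=-2 y=
parse_spec base         # op=base x= y=
```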
# 9 Codex Pet rows with their per-frame micro-transforms
build_row 0 base base blink base base blink # idle (6)
build_row 1 base shift:1:0 shift:2:-1 shift:1:0 base shift:-1:0 shift:-2:-1 shift:-1:0 # running-right (8)
# row 2 = running-left = horizontal flip of row 1, built below
build_row 3 base shift:0:-1 base shift:0:-1 # waving (4)
build_row 4 shift:0:2 base shift:0:-8 shift:0:-2 base # jumping (5): vertical arc
build_row 5 base shift:0:1 rotate:1 shift:0:1 shift:0:2 shift:0:1 rotate:-1 base # failed (8)
build_row 6 base base shift:0:-1 base base shift:0:1 # waiting (6)
build_row 7 base shift:0:-1 base shift:0:-1 base shift:0:-1 # running (6)
build_row 8 base rotate:-2 base rotate:2 base base # review (6)
# Row 2: running-left = mirror of running-right.
# Note: -flop mirrors the whole strip, which also reverses the frame order;
# the near-palindromic run cycle above makes the reversed order visually equivalent.
magick "${RUN_DIR}/cells/row1-strip.png" -flop -alpha set "${RUN_DIR}/cells/row2-strip.png"
The micro-transform table is what gives the Codex Pet its readable-but-subtle motion in Codex. Tweak the numbers per row to taste; the deltas are intentionally small (1-2 px) so the Codex Pet feels alive without becoming distracting.
Stack the 9 row strips vertically into the 1536x1872 Codex Pet atlas, then convert to WebP.
magick \
"${RUN_DIR}/cells/row0-strip.png" \
"${RUN_DIR}/cells/row1-strip.png" \
"${RUN_DIR}/cells/row2-strip.png" \
"${RUN_DIR}/cells/row3-strip.png" \
"${RUN_DIR}/cells/row4-strip.png" \
"${RUN_DIR}/cells/row5-strip.png" \
"${RUN_DIR}/cells/row6-strip.png" \
"${RUN_DIR}/cells/row7-strip.png" \
"${RUN_DIR}/cells/row8-strip.png" \
-append -alpha set "${RUN_DIR}/spritesheet.png"
magick "${RUN_DIR}/spritesheet.png" "${RUN_DIR}/spritesheet.webp"
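A post-assembly sanity check is cheap, since the expected atlas size is fully determined by the grid (8 x 192 by 9 x 208). A sketch; the `magick identify` call assumes ImageMagick 7 and only runs when both the binary and the file are present:

```shell
# Expected atlas size follows from the fixed grid: 8*192 x 9*208.
EXPECTED="$((8 * 192))x$((9 * 208))"
echo "expected atlas size: ${EXPECTED}"

# Verify the rendered file when possible (IM7 `magick identify`).
if command -v magick >/dev/null 2>&1 && [ -f "${RUN_DIR:-}/spritesheet.webp" ]; then
  ACTUAL=$(magick identify -format "%wx%h" "${RUN_DIR}/spritesheet.webp")
  [ "$ACTUAL" = "$EXPECTED" ] || { echo "atlas size mismatch: ${ACTUAL}" >&2; exit 1; }
fi
```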
cat > "${RUN_DIR}/pet.json" <<EOF
{
"id": "${PET_NAME}",
"displayName": "${PET_NAME}",
"description": "${PET_DESC}",
"spritesheetPath": "spritesheet.webp"
}
EOF
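One footgun in the heredoc above: PET_DESC is interpolated verbatim, so a description containing a double quote or backslash produces invalid JSON. A minimal escaping sketch (pure bash; `json_escape` is a hypothetical helper, not part of the skill) that could be applied before writing the manifest:

```shell
# Escape backslashes first, then double quotes, so interpolated strings
# stay valid inside the pet.json heredoc.
json_escape() {
  local s=${1-}
  s=${s//\\/\\\\}
  s=${s//\"/\\\"}
  printf '%s' "$s"
}
```

Usage: substitute `$(json_escape "$PET_DESC")` for `${PET_DESC}` in the heredoc.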
DEST="${CODEX_HOME:-$HOME/.codex}/pets/${PET_NAME}"
mkdir -p "${DEST}"
cp "${RUN_DIR}/pet.json" "${RUN_DIR}/spritesheet.webp" "${DEST}/"
echo "Codex Pet installed at ${DEST}"
Restart Codex (or reload the pet list) and the custom Codex Pet appears next to the 8 built-ins.
The single GPT Image 2 call decides everything. Get this prompt right and the rest is deterministic.
Lead with the chibi proportion lock. "EXAGGERATED chibi proportions, head ~60 percent of figure height" is the difference between a thin, tall character (which fits the 192x208 cell badly, wasting width on pillarbox padding) and a head-dominant chibi (which fills the cell naturally). The latter is what Codex's built-in Codex Pets look like.
Demand the magenta #FF00FF chroma-key explicitly in every Codex Pet base prompt. GPT Image 2 only outputs RGB (no alpha), so the only way to get a transparent Codex Pet is to chroma-key a known background color out post-process.
Forbid the chroma-key color in the pet itself. Add: "The pet itself must not use the chroma-key color or any close-to-magenta highlights." Otherwise the chroma-key step removes Codex Pet body parts that happen to be magenta-ish.
Pin the style. "pixel-art-adjacent, chunky silhouette, 1-2 px outline, limited palette, flat cel shading": pin every term that makes the Codex Pet match the Codex house style.
Forbid the wrong styles. "No polished illustration, no painterly render, no anime key art, no 3D render, no glossy app-icon polish, no realistic detail." Without this, GPT Image 2 will gravitate toward over-rendered anime art.
The default ImageMagick recipe in step 3 produces a Codex Pet animation similar to the built-in Codex Pets: subtle bob, occasional blink, jumping arc, head tilt. To make the animation more or less perceptible, tweak the deltas:

- stronger bob in the waiting/running rows: change shift:0:-1 to shift:0:-2;
- wider run stride: shift:3:0 instead of shift:2:-1;
- higher jump arc: shift:0:-8 to shift:0:-12;
- more pronounced review tilt: rotate:-2 / rotate:2 to rotate:-4 / rotate:4.

Keep deltas small (≤ 4 px or ≤ 4°) so the Codex Pet doesn't become distracting.
What is a Codex Pet? OpenAI Codex Pets are pixel-art animated companions launched May 2026 that float over your desktop and react to Codex's coding status. Custom Codex Pets live as pet.json + spritesheet.webp files under ${CODEX_HOME:-$HOME/.codex}/pets/<name>/.
Why use this Codex Pet skill instead of hatch-pet? Official hatch-pet requires the Codex-internal $imagegen system skill (Codex Pro). This skill needs only RUNCOMFY_TOKEN and runs the same animation-row spec via the RunComfy CLI, with one GPT Image 2 call total.
How long does a Codex Pet generation take? About 2 minutes: one GPT Image 2 edit call (~90 s) plus a few seconds of ImageMagick atlas assembly.
Why only one API call? The Codex Pet animation in the Codex desktop app is intentionally subtle (you can confirm by inspecting any built-in Codex Pet's atlas: 72 cells of nearly-identical poses with tiny variations). One canonical pose plus deterministic ImageMagick micro-transforms produces the same animation feel without burning 72 separate generation calls.
Can the Codex Pet skill take a non-human subject? Yes: pets, mascots, objects, foods all work. The base prompt simplifies the source into the Codex Pet house style automatically.
How do I install my Codex Pet? Copy pet.json and spritesheet.webp into ${CODEX_HOME:-$HOME/.codex}/pets/<pet-name>/ and reload Codex.
What if the canonical Codex Pet drifts off identity? Re-run step 1 with a tighter identity-preservation prompt (e.g. name specific features: hair color, glasses, accessory). Steps 2-6 are deterministic and don't need to change.
What size is each Codex Pet frame? 192x208 px. Each row strip is 1536x208 (8 frames). Final Codex Pet atlas is 1536x1872 (9 stacked rows).
Can I add custom poses or replace rows? Yes: modify the build_row calls in step 3. The atlas slot count per row must match the Codex contract (idle=6, running-right/left=8, waving=4, jumping=5, failed=8, waiting/running/review=6) for Codex to play them correctly.
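Those per-row frame counts are the one invariant you must preserve when swapping rows. A sketch of a contract check built from the atlas table (bash 4+ associative array; `check_row` is an illustrative helper, not part of the skill):

```shell
# Frames per animation row, per the Codex Pet atlas contract.
declare -A FRAMES=(
  [idle]=6 [running-right]=8 [running-left]=8 [waving]=4
  [jumping]=5 [failed]=8 [waiting]=6 [running]=6 [review]=6
)

# check_row STATE COUNT -> fails loudly when a custom row breaks the contract.
check_row() {
  local state=$1 count=$2
  if [ "${FRAMES[$state]}" -ne "$count" ]; then
    echo "row '$state' must have ${FRAMES[$state]} frames, got $count" >&2
    return 1
  fi
}
```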
If the subject itself is magenta-heavy, switch the chroma key to another color the pet doesn't contain (e.g. #00FFFF cyan or #00FF00 green) in both the prompt and the post-process.

The runcomfy CLI uses sysexits-style codes:
| code | meaning |
|---|---|
| 0 | Codex Pet canonical generated successfully |
| 64 | bad CLI args |
| 65 | bad input JSON for the Codex Pet call / schema mismatch (e.g. size: "1024_1024" instead of "1024*1024") |
| 69 | upstream 5xx |
| 75 | retryable: timeout / 429 |
| 77 | not signed in or token rejected |
magick (ImageMagick) returns 0 on a clean Codex Pet atlas; non-zero indicates a missing input frame or output-path permission issue.
Full reference: docs.runcomfy.com/cli/troubleshooting.
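Codes 69 and 75 in the table above mark retryable failures, which suggests a thin backoff wrapper around the CLI call. A sketch (`retry_runcomfy`, `MAX_TRIES`, and `RETRY_DELAY` are hypothetical names, not part of the runcomfy CLI):

```shell
# Retry a command on exit codes 69 (upstream 5xx) and 75 (timeout/429),
# with linear backoff; give up after MAX_TRIES attempts.
retry_runcomfy() {
  local max_tries=${MAX_TRIES:-3} try=1 rc
  while :; do
    "$@"; rc=$?
    case "$rc" in
      69|75) ;;          # retryable per the table above
      *) return "$rc" ;; # success, or a permanent failure
    esac
    [ "$try" -ge "$max_tries" ] && return "$rc"
    sleep $(( try * ${RETRY_DELAY:-5} ))
    try=$(( try + 1 ))
  done
}
```

Usage: `retry_runcomfy runcomfy run openai/gpt-image-2/edit --input ... --output-dir ...`.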
- runcomfy run openai/gpt-image-2/edit runs once with the user's source image and a tight chibi-proportion prompt, producing a 1024x1024 canonical Codex Pet on magenta.
- A pet.json manifest is written; both files are copied into ${CODEX_HOME:-$HOME/.codex}/pets/<name>/, where Codex picks up the custom Codex Pet automatically.

The 9-row Codex Pet atlas spec (column counts, frame counts, cell dimensions) comes from OpenAI's official hatch-pet skill (MIT licensed). The animation-row contract and the chroma-key strategy are documented there. This skill reuses the spec but swaps the visual generator ($imagegen → RunComfy GPT Image 2) and the atlas assembly (Python → ImageMagick) so it runs without Codex Pro.
Not a Codex client. Not a hatch-pet replacement when $imagegen is available: official hatch-pet is preferable when Codex Pro is in play. Not a self-hosted GPT Image 2: this depends on a working RunComfy account.
- runcomfy login writes the API token to ~/.config/runcomfy/token.json with mode 0600. Set the RUNCOMFY_TOKEN env var to bypass the file in CI.
- Inputs are passed to the CLI as a JSON string via --input. The CLI does NOT shell-expand it; there is no shell-injection surface.
- Network egress goes to model-api.runcomfy.net and *.runcomfy.net / *.runcomfy.com.
- All artifacts stay local: the run directory plus ${CODEX_HOME:-$HOME/.codex}/pets/<pet-name>/. No remote upload.