Install
openclaw skills install claw3d

A single skill for the full 3D pipeline: create models (AI), search (Thingiverse), slice, and print. Modular—only enabled capabilities appear below.
CRITICAL — Execute immediately, never stop after acknowledging. When you tell the user you'll do something (search, convert, preview, slice), you MUST execute the command in the SAME turn. Do NOT end your turn after just acknowledging — the user should never have to ask "done?" to get you to act. Acknowledge AND call the tool in one response.
Setup order: Printer first (when printing enabled) → then create/search models → slice → print.
When printing is enabled and the user has no printers configured, guide them to add a printer AND a linked slicing profile before creating or searching for models. A linked profile is required for slicing — it stores the build volume (width × depth × height) extracted from the 3MF, which determines how models are scaled.
Always run claw3d printer list first. If it returns nothing, go through setup below.
Send this message to the user:
Let's get your printer set up. I need 3 things:
- Printer name — e.g. "Creality K2 Pro Living Room"
- IP address + port — e.g. 192.168.1.50:7125 (Moonraker default: 7125; Creality K2 SE: 4408)
- Cura project file (.3mf) — Export it from Cura: File → Save → "Export Universal Cura Project" with your printer loaded. This file carries your printer's build volume and all settings — it's required for correct slicing.
Wait for the user to provide all three.
claw3d printer add --name "<name>" --host <ip> --port <port> --profile-from-3mf <MediaPath>
This does everything in one step:
- Adds the printer to ~/.config/claw3d/config.json
- Creates and links a slicing profile from the 3MF (extracting build_width, build_depth, build_height)

If the user provides name+IP but no 3MF yet: Add without it (printer add --name ... --host ... --port ...), then immediately ask for the 3MF to create the profile:
Got it! Now please send the Cura project file (.3mf) so I can create the slicing profile. In Cura: File → Save → "Export Universal Cura Project".
Then: claw3d profile create --from-3mf <MediaPath> --name "<printer_name>_profile" → claw3d printer set-profile <printer_id> <profile_id>
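As a purely illustrative sketch — the real schema lives in claw3d and is not spelled out here — the linked printer + profile entries stored in ~/.config/claw3d/config.json might look something like this (all field names beyond build_width/build_depth/build_height are assumptions):

```json
{
  "printers": [
    {
      "id": "printer_1",
      "name": "Creality K2 Pro Living Room",
      "host": "192.168.1.50",
      "port": 7125,
      "profile_id": "profile_1"
    }
  ],
  "profiles": [
    {
      "id": "profile_1",
      "name": "Creality K2 Pro Living Room_profile",
      "build_width": 350,
      "build_depth": 350,
      "build_height": 350
    }
  ]
}
```

The build dimensions are what slicing and preview steps later read as the WxDxH build volume.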
Printer backends: Run claw3d configure backends to see options (Moonraker, PrusaLink, etc.). Community can add backends in claw3d/backends/.
When the user asks for a 3D model without specifying how (e.g. "I need a cup", "I want a dragon", "find me a vase", no image attached), do NOT default to one option. Offer choices based on what's enabled:
Great! Would you like me to:
- Search for existing models — I'll look on Thingiverse and show you options to download (if directory enabled)
- Create a 3D model from an image — Send me a sketch or photo and I'll turn it into 3D (if ai-forger enabled)
- Search first, then create from an image if nothing fits — Best of both (if both enabled)
Wait for the user to choose. Only if they explicitly say "create it", "from a photo/sketch", "search", "look up", etc., then proceed.
Never assume — "I need a dragon" could mean search OR create from image. Always clarify when ambiguous. Do not offer text-only 3D generation — results are inaccurate; always require an image or sketch.
MediaPath: When the user attaches a file (image, GLB, 3MF), the message includes a MediaPath — the full filesystem path. Always pass that exact path to --image, --edit-3d, --profile-from-3mf, etc. Copy it character-for-character.
Unique output paths: The workspace is shared. Using fixed names (model.glb, preview.mp4) causes old files from a previous request to be sent to new chats. Always derive a short ID from the MediaPath and use it for outputs.
MediaPath format: .../file_13---b10560d7-18fd-40e9-8a49-996ad190a26c.jpg — extract the segment after --- and use the first 8 chars (e.g. b10560d7) as ID.
If the MediaPath has no UUID (unusual), use date +%s to get a unique ID.
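The ID derivation above can be sketched in shell — the path below is illustrative, not a real file:

```shell
# Illustrative MediaPath; real paths come from the user's message.
MEDIA="/home/node/.openclaw/media/inbound/file_13---b10560d7-18fd-40e9-8a49-996ad190a26c.jpg"

# Take the segment after '---', drop the extension, keep the first 8 chars.
ID=$(basename "$MEDIA" | sed -E 's/^.*---//; s/\.[^.]*$//' | cut -c1-8)

# Fallback when the path carries no UUID at all.
case "$ID" in "") ID=$(date +%s) ;; esac

echo "$ID"
```

Use `$ID` in every output filename for the request (model_${ID}.glb, preview_${ID}.mp4).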
When the user attaches an image and asks to "3D print this", "print this", "make it printable", etc. — you CAN do it (if ai-forger + slicing + printing enabled):
1. claw3d convert --image <MediaPath> --output model_<ID>.glb
2. claw3d printer list; note [WxDxH mm] if shown.
3. claw3d preview --input model_<ID>.glb --output preview_<ID>.mp4 [--build-volume WxDxH] — send the video
4. claw3d profile list, then slice with --build-volume <WxDxH> and a profile or --profile-from-3mf
5. claw3d printer list, then claw3d print --gcode model_<ID>.gcode

Do NOT say "I can't print from an image" — you can create the 3D model first. If FAL_API_KEY is missing, convert will fail; then tell the user to set it up.
Get model (search OR create) → optionally edit → slice → print
- Search: claw3d search → claw3d fetch → claw3d dimensions → present with preview
- Create: claw3d convert --image (requires image/sketch) → claw3d preview → present
- Edit: claw3d convert --edit-3d (when user sends GLB and asks to modify)
- Slice: claw3d slice (sends G-code + gcode preview video)
- Print: claw3d print

| Command | Purpose |
|---|---|
| claw3d convert | Image/sketch → GLB, or edit existing GLB |
| claw3d preview | 360° turntable of 3D model |
| claw3d search | Search Thingiverse |
| claw3d fetch | Download model from Thingiverse |
| claw3d dimensions | Bounding box (for slicing) |
| claw3d pack | Arrange multi-part on build plate |
| claw3d slice | GLB/STL → G-code |
| claw3d print | Upload G-code and start print |
| claw3d printer | Add/list/remove printers |
| claw3d profile | Create/list slicing profiles |
| claw3d configure | Select AI provider, see backends |
Run all via exec. Use claw3d.
All routing decisions, skill logic, and internal reasoning are for YOUR use only. NEVER send them to the user. The user should only see friendly, concise messages — never references to "Primary Gate", "SKILL.md", module names, decision rules, or your thought process. If you need to reason about which path to take, do it silently. The user just wants their model.
Bad (leaked reasoning): "According to the Primary Gate, a wine stand is a common functional object, so I should search Thingiverse..."
Good (user-facing): "Great, let me take a look at what you need — give me a moment!"
This is the FIRST decision for EVERY request — images, videos, and text. Run the Primary Gate BEFORE any analysis, frame extraction, or claw3d analyze. Make this decision silently — do NOT explain your routing to the user.
The key question is: Would an existing Thingiverse model likely satisfy this need, or does the request require something inherently custom/unique?
→ SEARCH path first (go to 03-directory module) when:
→ CREATE path (continue to CREATE section below) when:
Decision rule of thumb:
"Could I type this into Thingiverse and find 5+ decent results?" → YES → SEARCH first "Does this require seeing a specific image, style, or personal constraint to design?" → YES → CREATE
→ ASK only when you genuinely cannot identify what physical object the user wants — e.g. "make something for my office" with no further context. If you can name the object, go to SEARCH. Do not ask.
SEARCH PATH — fallback to CREATE: After 3 rounds of search (up to 15 models reviewed) with no match, tell the user nothing matched and ask if they want a custom AI-generated model instead. If they have a photo/video, use it as reference for the AI generation.
When the user sends a video, you may receive a text Description (from OpenClaw's Gemini video understanding). Use the Description and/or the user's message text to run the Primary Gate — BEFORE extracting any frame or running claw3d analyze.
Steps for video:
- If SEARCH: go to the 03-directory module with that object as the search query. Do NOT extract a frame or run analyze.

⚠️ CRITICAL: A video showing someone demonstrating a common object does NOT make it a CREATE request. The video is just their way of communicating what they want — it doesn't mean they need AI generation. A person holding up a wine bottle and showing how they'd like a wine stand still maps to SEARCH. Only explicit artistic/stylistic/replication intent maps to CREATE.
This section handles finding the video file. It applies to BOTH paths (the CREATE path needs it for frame extraction; the SEARCH path may need it later if search fails and you fall back to CREATE).
Step 0 — Acknowledge immediately — Before doing anything else, send:
"Great, let me take a look at what you need — give me a moment!"
Step 1 — Get the video path. Three cases:
Case A — File path visible ([media attached: /home/node/.openclaw/media/inbound/...]):
Use that exact path.
Case B — No file path but Description is present (OpenClaw's Gemini video understanding ran and suppressed the path): The video is still on disk. Find it:
ls -t /home/node/.openclaw/media/inbound/ 2>/dev/null | head -5
Pick the most recent video file (.mp4, .mov, .webm). Use that as the path.
Case C — No file path and no Description (video silently dropped — too large):
Your video was too large — OpenClaw's default limit is 5MB. I can increase it to 50MB right now. Want me to?
If confirmed:
python3 -c "
import json, pathlib
p = pathlib.Path('/home/node/.openclaw/openclaw.json')
cfg = json.loads(p.read_text())
cfg.setdefault('channels', {}).setdefault('telegram', {})['mediaMaxMb'] = 50
p.write_text(json.dumps(cfg, indent=2))
print('Done')
"
Reply: "Done! The limit is now 50MB — please resend your video."
The config watcher restarts the Telegram channel automatically.
Step 2 — Run the Primary Gate using the Description/user message → SEARCH or CREATE. See above.
If SEARCH → go to 03-directory module. Note the video path — if search fails and you fall back to CREATE, you'll need it for frame extraction.
If CREATE → continue to the next section.
The bot rejects oversized files before the agent sees them. If the user reports this error in a text message, offer to fix it:
I can increase your video limit to 50MB right now. Want me to do that?
If confirmed, run the patch above.
Run once per session to understand the configuration:
claw3d configure analysis --status
| Mode | What happens |
|---|---|
| auto (default) | claw3d analyze uses Gemini if key is set, else returns native_mode: true |
| native | claw3d analyze immediately returns native_mode: true — you do the analysis |
| gemini | claw3d analyze uses Gemini; errors if key missing |
Only enter this section if the Primary Gate resolved to CREATE.
Before doing anything with a user's image or video, run claw3d analyze (images) or analyze the video natively + claw3d extract-frame --timestamp + claw3d analyze (videos).
Step 1 — Always run analyze:
claw3d analyze --input <MediaPath> [--description "user's message"] [--pretty]
Step 2 — Read the result and branch:
If native_mode: true → you are the analysis layer. Analyze the image yourself using these rules:
Classify image_type:
- sketch: hand-drawn, pencil/pen outlines, whiteboard drawings → intent is almost always create_new, proceed directly
- photo: real photograph → read description carefully
- 3d_model: CAD rendering or existing 3D model screenshot
- reference: product photo, inspiration, logo

Decide needs_clarification:
OVERRIDE — replicate/copy intent always sets needs_clarification: false:
If the user's message contains any of: "make another one", "copy this", "replicate this", "clone this", "I want one like this", "same as this", "reproduce this", "duplicate this", "print this one" — the photo/frame IS the complete design reference. Proceed directly to convert. Do NOT ask for a drawing. The whole point is that they're showing you the exact object they want.
false (proceed without asking) when ALL of these are true:
true (ask ONE clarifying question) when ANY of these:
Rule of thumb for functional objects: If you could design it 3+ different ways and the user hasn't said which way → send the frame/image back and ask them to draw on it (see below). Exception: replicate intent (see override above) → always proceed directly.
If needs_clarification: false:
Step A — Tell the user you're starting (do NOT stay silent):
"Creating your 3D model now — I'll send it when it's ready!"
Step B — Write a suggested_prompt and run convert:
claw3d convert --image <MediaPath> --prompt "<suggested_prompt>" --output model_<ID>.glb
CRITICAL — When writing suggested_prompt:
For replicate/copy intent ("make another one", "copy this", etc.): Keep it SHORT — one sentence. The image already carries the shape. Do NOT add dimensions, material suggestions, or printing advice.
For all other intents: Describe ONLY the 3D object to be printed. Keep it to 1-2 sentences max. Do NOT include:
Example — user shows a wine bottle next to a dog sculpture:
If needs_clarification: true:
Two cases:
Case 1 — Ambiguous subject (multiple objects, unclear what to print): Ask ONE specific text question:
"I see a desk with a laptop and a mug. Which item would you like to 3D print?"
Case 2 — Subject is clear but it's a photo of a functional/custom object (holder, bracket, case, mount, stand, organizer, etc.):
Save the extracted frame with a unique name (e.g. frame_1a589237.jpg) — you will need it when the annotated image comes back. Then send:

"Hey! Could you draw in red on this image to show me the shape you have in mind? Any drawing app works — even a quick scribble on your phone. Then send it back and I'll use it as the design reference."

Use the message tool to attach the frame — do NOT use inline MEDIA: syntax: message(text="Hey! Could you draw...", media="<frame_path>")
When the user sends back the annotated image: Do NOT say "Yes! On it!" and stop — immediately run exec:
claw3d convert --image <original_frame_path> --annotated-image <annotated_MediaPath> --prompt "<description of the object, NO scene context>" --output model_<ID>.glb
- <original_frame_path> = the frame you sent them (e.g. frame_1a589237.jpg)
- <annotated_MediaPath> = the absolute path from the media attached message

If native_mode: false (Gemini was used) → act on the JSON:

{
"subject": "a wooden phone stand",
"image_type": "sketch",
"intent": "create_new",
"needs_clarification": false,
"clarification_question": null,
"suggested_prompt": "a minimalist wooden phone stand with a 70° angled back support..."
}
| intent | Action |
|---|---|
| create_new | Check needs_clarification first — if false, then claw3d convert --image <MediaPath> --prompt "<suggested_prompt>" --output model_<ID>.glb |
| create_attachment | Same as create_new |
| find_existing | This shouldn't appear here — Primary Gate should have caught it. But if it does: go to 03-directory module |
If needs_clarification: true:
- The replicate/copy intent override still applies: treat it as false and proceed directly regardless of what Gemini returned.
- Otherwise, send the clarification_question verbatim (Gemini wrote it to be friendly and specific).
- When the user replies, re-run: claw3d analyze --input <MediaPath> --description "<original + reply>"

You should only be here if the Primary Gate resolved to CREATE.
Step 1 — Extract the best frame
Two paths depending on how the video arrived:
Case A — Video attached as media (you can see the video in this conversation): You are a multimodal agent. Analyze the video directly to identify the best frame:
Pick the exact timestamp (HH:MM:SS), then extract:
claw3d extract-frame --input <video_path> --timestamp <HH:MM:SS> --output frame_<ID>.jpg
Case B — Only text Description, no media in conversation (OpenClaw pre-processed the video): You cannot see the video — you only have the text Description. Do NOT guess a timestamp from text. Use Gemini API for smart frame selection:
claw3d extract-frame --input <video_path> --output frame_<ID>.jpg
(no --timestamp → Gemini picks the best frame automatically)
If this fails because no Gemini API key is configured, stop and tell the user:
"I need a Gemini API key to pick the best frame from your video (the video isn't directly visible to me in this conversation). Please run:
claw3d configure analysis --gemini-api-key <YOUR_KEY>

You can get a free key at Google AI Studio."
Step 2 — Analyze extracted frame:
claw3d analyze --input frame_<ID>.jpg --description "<user's message or Gemini description>" --pretty
Then follow the IMAGE flow above (including needs_clarification checks).
CRITICAL — Do NOT go silent after frame extraction. If needs_clarification: false, tell the user you're generating the model BEFORE running claw3d convert. The full sequence must be:
claw3d convert → claw3d preview → send both files

Generic functional object — search first (even if user says "create"):
User: [sends video] "I need you to create a wine stand"
→ Primary Gate: wine stand = common, functional → SEARCH path
→ Go to 03-directory module: search → thumbnails → pick → confirm → download → preview
Video demonstrating a common object — STILL search first:
User: [sends video showing how they'd hold a wine bottle, describing an L-shaped holder]
→ Primary Gate: wine holder = common, functional → SEARCH path (video demo ≠ custom design)
→ Go to 03-directory: search "L-shaped wine bottle holder" → thumbnails → pick
Same object + artistic constraint — create:
User: [sends video + photo of sculpture] "I need a wine stand in the style of this sculpture"
→ Primary Gate: style constraint present → CREATE path
→ claw3d extract-frame → analyze (photo as reference) → convert with prompt + image
Sketch → 3D model (CREATE path):
User: [sends pencil sketch of a bracket]
→ Primary Gate: sketch present → CREATE path
→ claw3d analyze --input sketch.jpg --description "make this"
(native: sketch → create_new, needs_clarification: false, proceed directly)
→ claw3d convert --image sketch.jpg --prompt "an L-shaped bracket with two mounting holes" --output model_abc.glb
Generic object, user says "I want this" with a photo:
User: [sends photo of a mug] "I want this"
→ Primary Gate: mug = common object → SEARCH path
→ Go to 03-directory: search "mug" → thumbnails → confirm → download → preview
Custom functional object (specific, unlikely to exist):
User: [sends photo of a weird desk edge] "make a phone holder that clips onto this exact edge"
→ Primary Gate: too specific/personal to exist → CREATE path
→ analyze → needs_clarification → ask for sketch on the photo
Video — user asking to find (any wording):
User: [sends video, description: "person asking to find/create a wine stand, demonstrates with bottle"]
→ Primary Gate: wine stand = common functional object → SEARCH path
→ Go to 03-directory module
Search exhausted → fallback to CREATE:
→ 3 searches, 15 thumbnails reviewed, none match
→ "Couldn't find a good match. Want me to generate a custom one with AI?"
User: "yes"
→ If user has a video/photo: use it as reference for CREATE path
→ Extract frame (if video) → analyze → clarification if needed → convert
# Image intent analysis (outputs JSON)
claw3d analyze --input <image> [--description "text"] [--annotated <image>] [--pretty]
# Video frame extraction
claw3d extract-frame --input <video> [--output frame.jpg] [--timestamp HH:MM:SS]
# Analysis layer configuration
claw3d configure analysis # show status
claw3d configure analysis --mode native # use your own AI
claw3d configure analysis --mode auto # gemini if available, else native
claw3d configure analysis --mode gemini # always use Gemini
claw3d configure analysis --gemini-api-key <KEY> # set Gemini key
claw3d configure analysis --clear # remove Gemini key
<!-- /MODULE: intent -->
You CAN convert 2D images to 3D models AND edit existing 3D models (GLB). When the user sends a GLB and asks to modify it (e.g. "make it blue", "add wheels"), use claw3d convert --edit-3d <GLB_MediaPath>. When they send an image, use --image.
--edit-3d flow

Always require an image or sketch. Do not use text-only --prompt for new model creation — results are not accurate enough. If the user asks to "make a cup" without an image, ask them to send a sketch or photo, or offer to search Thingiverse instead.
CRITICAL — User sent a GLB and wants to edit it: Run claw3d convert --edit-3d <GLB_MediaPath> --prompt "..." --output edited_<ID>.glb. Never say you cannot edit 3D models.
When the user asks to create a 3D model from an image OR to edit a 3D model, REPLY IMMEDIATELY first:
Image conversion takes 1–2 minutes. Edit-3d can take 5–10+ minutes when Hunyuan is cold. Do NOT stay silent—always acknowledge first.
When the user attaches an image, the message includes a MediaPath. Always pass that exact path to --image. Copy it character-for-character.
IMPORTANT — Unique output paths: Derive a short ID from the MediaPath. Format: .../file_13---b10560d7-18fd-40e9-8a49-996ad190a26c.jpg — use first 8 chars after --- (e.g. b10560d7) as ID.
Build volume: Before running claw3d preview, check if a printer is configured with a known build volume: run claw3d printer list and look for [WxDxH mm] (e.g. [350×350×350mm]). If found, pass --build-volume WxDxH (e.g. --build-volume 350x350x350) — this renders the grey build plate and grid under the model for a realistic preview.
ID=b10560d7 # from MediaPath
claw3d convert --image <MediaPath> --output model_${ID}.glb
# With printer build volume (preferred):
claw3d preview --input model_${ID}.glb --output preview_${ID}.mp4 --build-volume 350x350x350
# Without printer configured:
claw3d preview --input model_${ID}.glb --output preview_${ID}.mp4
If the MediaPath has no UUID (unusual), use date +%s for a unique ID. NEVER send model.glb or preview.mp4 that existed before this request.
claw3d convert --edit-3d <GLB_MediaPath> --prompt "..." --output edited_<ID>.glb
- If still running, call process poll <session> with timeout: 120000. You will be notified when it completes — do NOT poll in a rapid loop.
- Wrote edited_<ID>.glb → convert is done.
- Run: claw3d preview --input edited_<ID>.glb --output preview_edited_<ID>.mp4 [--build-volume WxDxH]
- Do NOT pass --real-scale for edited models. AI-regenerated models use normalized units (~1 unit), not mm. The preview auto-scales the model to fill the build volume.
- Poll with timeout: 120000.
- Send: message(action="send", text="Here's the updated preview!", media="preview_edited_<ID>.mp4")
message(action="send", text="And the edited model:", media="edited_<ID>.glb")
CRITICAL: Do NOT end your turn after the first message(). You MUST send the .glb in a second message() call. The user needs the 3D model file, not just the video. Your turn is only complete after BOTH files are sent.

NEVER use --image for a GLB when modifying. --image is for 2D sketches/photos.
Acknowledge — "Yes! Give me a minute—I'll let you know when the 3D model is ready."
Check build volume — Run claw3d printer list; note [WxDxH mm] if present.
Run convert — claw3d convert --image <MediaPath> --output model_<ID>.glb
- If still running (Command still running), call process poll <session> once with timeout: 120000. You will get notified when it completes — do NOT poll in a rapid loop.
- Wrote model_<ID>.glb → convert is done. Proceed immediately.

Run preview — claw3d preview --input model_<ID>.glb --output preview_<ID>.mp4 [--build-volume WxDxH]
- Poll with timeout: 120000.
- Wrote preview_<ID>.mp4 → preview is done. Proceed immediately.

Send BOTH files — TWO message() calls required. You are NOT done after sending the preview.
message(action="send", text="Here's your 3D model preview!", media="preview_<ID>.mp4")
message(action="send", text="And the 3D model file:", media="model_<ID>.glb")
CRITICAL: Do NOT end your turn after the first message(). You MUST send the .glb in a second message() call. The user needs the 3D model file, not just the video. Your turn is only complete after BOTH files are sent.
ALWAYS ask about printing — After sending the preview and model, ask:
Want me to slice this for 3D printing? If so, I need:
- Max print size — What's the longest dimension? (e.g. 100mm, 150mm)
- Strength — How strong? (10%, 25%, 50%, 75%, or 100%)
- Detail — How much print quality? (10%, 25%, 50%, 75%, or 100%)
This is mandatory for AI-generated models — they have no real-world dimensions, so you MUST get the max print size from the user before slicing. Do NOT slice without asking. Do NOT use a default size.
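To make the size question concrete, the target scale is just the user's max print size divided by the model's current largest extent. A minimal sketch with illustrative numbers (never assume a default size — always ask the user first):

```shell
# Illustrative values only — the real numbers come from claw3d dimensions
# and from the user's answer to the max-print-size question.
CURRENT_MAX_MM=153.8   # largest bounding-box extent of the model
TARGET_MAX_MM=100      # max print size the user asked for

# Uniform factor that could then be passed to a scaling step
# (e.g. claw3d scale --scale).
SCALE=$(awk -v c="$CURRENT_MAX_MM" -v t="$TARGET_MAX_MM" 'BEGIN { printf "%.4f", t / c }')
echo "$SCALE"
```

Applying this factor uniformly keeps the model's proportions while capping its longest dimension at the requested size.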
CRITICAL: Run convert and preview via exec BEFORE sending. The files do not exist until you create them.
# Image/sketch → 3D
claw3d convert --image <MediaPath> [--prompt "extra description"] --output model_<ID>.glb
# Multiview: 4-quadrant image → Gemini → Hunyuan3D
claw3d convert --image multiview.png --multiview [--prompt "add wheels"] --output model_<ID>.glb
# Edit 3D: modify existing GLB
claw3d convert --edit-3d model.glb --prompt "make it blue" --output edited.glb
# Preview
claw3d preview --input model_<ID>.glb --output preview_<ID>.mp4
# Scale
claw3d scale --input model.glb --output scaled.glb --scale 0.5
| Error | Check |
|---|---|
| Convert fails | FAL_API_KEY set, image exists, PNG/JPG |
| No image URL in FLUX response | FAL key valid; try again (transient) |
| Preview fails | Headless: apt install xvfb xauth |
API key errors: When convert fails with "FAL API key" or "401/403", ask the user to verify their API key in Control UI → Skills → claw3d. Get a key at https://fal.ai/dashboard/keys
You CAN search for existing 3D models on Thingiverse. Use claw3d find, claw3d fetch, and claw3d preview — do NOT use web search.
The intent classifier (06-intent.md) sets consider_search: true for common/functional objects. When that happens — or when the user explicitly asks to find/search/download a 3D model — follow this workflow.
Skip this flow and go straight to AI generation only when the user explicitly says "custom", "artistic", or describes something that clearly doesn't exist as a standard printable part.
Before running anything, send:
"Great, let me take a look at what you need — give me a moment!"
Run one command. It searches Thingiverse, downloads all thumbnails, fetches each model, packs multi-part models, and fit-checks all of them — returning only the models that physically fit in the printer.
claw3d find "<query>" --max-passing 4
⚠ claw3d find is a long-running command (1–2 minutes). When you see Command still running, this is the SEARCH running — NOT slicing. Do NOT say "Slicing started". Just poll silently with process poll <session> every 10 seconds until it finishes.
Output format:
[1] Balancing Wine Holder
ID: 660698
URL: https://www.thingiverse.com/thing:660698
By: Tanacota
Thumbnail: thumb_660698.jpg
Model: model_660698.glb
Extents: 153.8×160.5×69.8mm
Rotation applied: none
[2] ...
Build volume is auto-read from the default printer config; pass --build WxDxH to override.

View all returned thumbnail files visually (you are multimodal). Pick the 4 that best match what the user needs — considering shape, function, and apparent quality. Record the thing ID for each.
If fewer than 4 passed, use however many there are.
If exit 1 (none fit): run claw3d find with a refined query (change keywords, add constraints), up to 3 rounds total. After 3 rounds with no match, fall back to AI generation.
Stamp option letters and compose a single 2×2 grid image (A top-left, B top-right, C bottom-left, D bottom-right):
claw3d stamp-thumbnails --grid thumb_660698.jpg thumb_123456.jpg thumb_789012.jpg thumb_456789.jpg
# Outputs: thumb_660698_A.jpg, thumb_123456_B.jpg, thumb_789012_C.jpg, thumb_456789_D.jpg
# Grid: thumb_grid_thumb_660698.jpg (single 2x2 image)
MANDATORY: Send ONE message with the grid image attached. Do NOT describe the options without the image. Do NOT skip the media= parameter. The user needs to SEE the thumbnails to choose — text-only is useless.
message(action="send", text="Here are four options I found:\n\nA — [name/brief reason]\nB — [name/brief reason]\nC — [name/brief reason]\nD — [name/brief reason]\n\nReply with A, B, C, or D — or let me know if none look right and I'll search again or create a custom one.", media="thumb_grid_thumb_660698.jpg")
The media= field with the grid image path is REQUIRED. If you send this message without the grid image attached, the user cannot see the options.
Wait for user response before continuing.
- If the user picks A–D: use the thing ID from the claw3d find output for that option. Continue to Step 4 with that ID. The model is already downloaded and pre-rotated (if needed) — no re-fetch required.
- If none look right: run claw3d find with a refined query and repeat from Step 1 — up to 3 rounds total.

The chosen model is already downloaded. Use its thing ID to inspect variant/part structure:
claw3d fetch --list-grouped <thing_id>
This deterministically selects the best extension group (STL > OBJ > GLB > 3MF). Parse the output:
- Best extension: .stl (N file(s)) — what will be downloaded
- Sub-variants (size/version choices…) — only shown if multiple size/version options exist
- Multi-part model (N components…) — only shown if multiple parts with no size variants

Read the --list-grouped output and follow exactly one of these branches:
→ Output shows "Sub-variants" (size/version choices): Ask the user which size/version they want:
"This model comes in several sizes: small, medium, large. Which would work best for you?" Once user picks, use
--choose "<variant_tag>"in Step 6.
→ Output shows "Cosmetic variations" (same model, minor differences):
Do NOT ask the user. Auto-select the variant marked <- auto-selected and inform them:
"This model has a [no-text / simplified / …] version — I'll use that for a cleaner print." Use
--choose "<auto_selected_filename_keyword>"in Step 6. E.g. ifTiltedWineBottleStand_NoText.stlis auto-selected, use--choose "NoText".
→ Output shows "Multi-part model" (multiple components): all parts are needed — model was already packed by claw3d find. Skip directly to Step 7.
→ Single file or complete-set: model already downloaded by claw3d find — skip directly to Step 7.
Skip this step if no --choose is needed (single file, complete-set, or already fetched correctly).
With sub-variant chosen (e.g. user picked "large"):
claw3d fetch <thing_id> --choose "large" -o model_<ID>.glb
claw3d fit-check -i model_<ID>.stl --apply-rotation
With cosmetic variant auto-selected (e.g. NoText):
claw3d fetch <thing_id> --choose "NoText" -o model_<ID>.glb
claw3d fit-check -i model_<ID>.stl --apply-rotation
claw3d find auto-computes dimensions from the fitted extents. The Extents: line in the output is the already-rotated bounding box.
Only run claw3d dimensions -i <file> manually if the file was re-fetched in Step 6 (variant selection).
Build volume is auto-read from the default printer config. Generate the preview with --real-scale so the user sees the model at its actual physical size relative to the build plate:
claw3d preview -i model_<ID>.glb -o preview_<ID>.mp4 --real-scale
# or for multi-plate:
claw3d preview -i model_<ID>_plate1.glb -o preview_<ID>_p1.mp4 --real-scale
claw3d preview -i model_<ID>_plate2.glb -o preview_<ID>_p2.mp4 --real-scale
--real-scale shows the model at its true mm dimensions on the plate. --build-volume is auto-read from the default printer (no need to pass it). If no printer is configured yet, ask the user for their build volume and pass it explicitly.
Important: Thingiverse thumbnails are often lifestyle renders that can look very different from the actual printable model. The 3D preview video is the ground truth — always describe the dimensions so the user understands what they're actually getting.
Single plate:
message(action="send", text="Here's the 3D preview of option [A/B/C] — [model name]. Print size: X × Y × Z mm. Does this look right? If it doesn't match what you expected from the thumbnail, say so and I'll try the next option.", media="preview_<ID>.mp4")
Multi-plate (N plates needed):
message(action="send", text="This model needs N separate prints. Here's plate 1 (X × Y × Z mm):", media="preview_<ID>_p1.mp4")
message(action="send", text="Plate 2 (X × Y × Z mm):", media="preview_<ID>_p2.mp4")
# … all plates
message(action="send", text="Print them sequentially and assemble. Ready to slice when you are!")
When the user asks to print N copies of a model ("add 3 more", "print 4 of these", "fill the plate"):
Run claw3d pack --copies N on the original STL sidecar. Pass any rotation the user wants baked in via --rotation-x/y/z. The packer places all N copies with 2mm gaps.

# 4 copies, standing up (rotation-x 90), on a 220×215×245mm plate:
claw3d pack -i model_<ID>.stl --copies 4 --rotation-x 90 --build 220x215x245 -o model_<ID>_x4.stl
If pack exits with error ("exceeds build volume"): tell the user how many fit per plate, pack that many, slice, then ask if they want more plates.
If pack produces multiple plates (model_<ID>_x4_plate1.stl, model_<ID>_x4_plate2.stl): slice and queue each plate separately.
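As a rough sketch of the per-plate math (an illustration only, not claw3d's actual packing algorithm), a simple grid layout with 2mm gaps gives:

```python
import math

# Hypothetical helper: estimate how many copies of a model with
# footprint (w, d) mm fit on a plate (plate_w, plate_d) mm when
# arranged in a grid with a fixed gap between copies.
def copies_per_plate(w: float, d: float,
                     plate_w: float, plate_d: float,
                     gap: float = 2.0) -> int:
    # n items need n*size + (n-1)*gap <= plate, i.e. n <= (plate+gap)/(size+gap)
    cols = math.floor((plate_w + gap) / (w + gap))
    rows = math.floor((plate_d + gap) / (d + gap))
    return max(cols, 0) * max(rows, 0)

print(copies_per_plate(60, 60, 220, 215))   # 9 copies in a 3x3 grid
print(copies_per_plate(300, 60, 220, 215))  # 0 — part wider than the plate
```

When this estimate is lower than the requested copy count, that is the "how many fit per plate" number to report back to the user before packing the remainder onto additional plates.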
Slice the packed STL directly — no rotation flags needed (rotation already baked by pack):
claw3d slice -i model_<ID>_x4.stl -p <profile_id> -o model_<ID>_x4.gcode --build-volume <WxDxH>
Rotation is already baked in by --rotation-x/y/z in the pack step — do NOT also pass rotation to claw3d slice.
If no Thingiverse result matches, say:
"I couldn't find a good match in the Thingiverse library. I can generate a custom 3D model using AI — want me to do that?"
If yes, follow the AI generation flow from 02-ai-forger.md.
| Command | Purpose |
|---|---|
| claw3d find "<query>" --max-passing 5 | Search + download thumbnails + fetch + fit-check in one shot. Returns only models that fit the printer |
| claw3d fetch --list-grouped <id> | Best extension group + sub-variant detection (deterministic) |
| claw3d fetch --list-only <id> | Raw file list (complete sets vs parts) |
| claw3d fetch <id> -o model.glb | Download + convert to GLB |
| claw3d fetch <id> --choose "large" -o model.glb | Download only files matching substring |
| claw3d pack -i dir/ --build WxDxH -o model.glb | Arrange multi-part on build plate (2mm gap). Exit 1 if a part is too large |
| claw3d pack -i model.stl --copies 4 --build WxDxH -o model_x4.stl | Duplicate single model 4 times on plate |
| claw3d pack -i model.stl --copies 4 --rotation-x 90 --build WxDxH -o model_x4.stl | Duplicate with baked rotation |
| claw3d fit-check -i model.stl --apply-rotation | One-off fit check: exit 0 = fits, exit 1 = doesn't fit |
| claw3d dimensions -i model.glb | Bounding box + save .dimensions.json sidecar for future slicing |
| claw3d preview -i model.glb -o preview.mp4 | 360° turntable video |
| Error | Action |
|---|---|
| No results / 401 | Check THINGIVERSE_ACCESS_TOKEN in Control UI → Skills → claw3d |
| "No directory providers configured" | Add token in Control UI |
| claw3d find exit 1 (0 fitting models) | Refine query and retry, up to 3 rounds; then AI generation |
| claw3d find not found | Rebuild Docker: docker build -f Dockerfile.claw3d -t openclaw:claw3d . then restart |
NEVER use hardcoded or remembered profile IDs. Profile IDs are stored on the slicer server and are lost when the container restarts. Before every slice:
- Run claw3d profile list — use whatever ID is listed there.
- If the list is empty, slice with --profile-from-3mf (or create a profile first).
- Pass the listed ID via --profile <id>.

claw3d slice reads the .source.json sidecar automatically. When it detects "source": "thingiverse", it applies --no-mesh-clean (skips mesh repair).
You do NOT need to pass --no-mesh-clean manually — the code handles it. Just pass the model file and the slicer does the right thing based on provenance.
When re-slicing, the previous settings are saved in .slice_config.json and reused automatically (max-dimension, strength, quality). You only need to pass flags that changed.
Path A: Model from directory (Thingiverse) — The model is already at the correct physical size. NEVER ask for max print size. Do NOT pass --max-from-model or --max-dimension. The source-based routing handles this automatically.
- Prefer the .stl sidecar (model_<ID>.stl) if it exists — send it directly, no conversion.
- If only .glb exists (model was fetched as GLB): use model_<ID>.glb. The auto-routing will apply --no-mesh-clean automatically.

Ask the user only:
Before I slice, I need two things:
- Strength — How strong should it be? (10%, 25%, 50%, 75%, or 100%)
- Detail — How much print detail / quality? (10%, 25%, 50%, 75%, or 100%)
Path B: Model from AI or user-provided — Use model_<ID>.glb. No dimensions file. MUST ask for max print size, strength, and detail before slicing. Use the printer's build volume (from claw3d printer list) as the default max dimension suggestion — use the smallest of width/depth/height.
Before I slice, I need a few things:
- Max print size — What's the longest dimension you want? (e.g. 100mm or 150mm)
- Strength — How strong should it be? (10%, 25%, 50%, 75%, or 100%)
- Detail — How much print detail / quality? (10%, 25%, 50%, 75%, or 100%)
Map percentages to CLI: 10%→1, 25%→2, 50%→3, 75%→4, 100%→5. Use --max-dimension <N>, --strength, --quality.
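The percentage-to-level mapping above can be sketched as a small lookup (a hypothetical agent-side helper, not part of claw3d):

```python
# Map the user's percentage answer to the 1-5 CLI level.
# The mapping values come from the skill doc; the helper itself is illustrative.
LEVELS = {10: 1, 25: 2, 50: 3, 75: 4, 100: 5}

def to_level(percent_text: str) -> int:
    """Parse answers like '25%' or '25' and return the CLI level (1-5)."""
    value = int(percent_text.strip().rstrip("%"))
    if value not in LEVELS:
        raise ValueError(f"expected one of {sorted(LEVELS)}, got {value}")
    return LEVELS[value]

print(to_level("25%"))  # 2 -> --strength 2
print(to_level("100"))  # 5 -> --quality 5
```

The same helper works for both --strength and --quality, since both use the identical 1-5 scale.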
Natural language → CLI flags:
- "make it strong" / "sturdy" → --strength 4
- "20% infill" → --infill-density 20
- "no bigger than 100mm" → --max-dimension 100
- "lay it on its side" → claw3d rotate -i model.glb --rotation-y 90 (then preview/slice without rotation flags)
- "tilt it 45°" → claw3d rotate -i model.glb --rotation-x 45
- "high detail" / "fine quality" → --quality 4 or --layer-height 0.1

Bed leveling: Do NOT add --bed-autocalibration unless the user explicitly asks. Default OFF.
claw3d rotate permanently modifies the model file. Rotation is cumulative by design — each call rotates from the model's current orientation, like dragging an object in a 3D editor. No state tracking needed.
When the user asks to rotate a model ("rotate 90 on X", "flip it sideways", "turn it upside down"):
Step 1 — Rotate the file:
claw3d rotate -i model_<ID>.glb --rotation-x 90
The GLB is now permanently rotated. All future preview/slice commands use the file as-is — no rotation flags needed.
Step 2 — Show preview (no rotation flags):
claw3d preview -i model_<ID>.glb --build-volume <WxDxH> -o preview_<ID>_rotated.mp4
Send with: "Here it is rotated 90° on X — does this look right for printing?"
Step 3 — When user confirms, slice (no rotation flags):
claw3d slice -i model_<ID>.glb -p <profile_id> -o model_<ID>.gcode --build-volume <WxDxH>
Multiple rotations just work:
| User says | You run | Result |
|---|---|---|
| "rotate 90 on X" | claw3d rotate -i model.glb --rotation-x 90 | Model is now 90° on X |
| "now rotate 90 on Y" | claw3d rotate -i model.glb --rotation-y 90 | Model is now 90° X + 90° Y |
| "and 45 on Z" | claw3d rotate -i model.glb --rotation-z 45 | All three rotations accumulated |
Natural language mapping:
- "turn it upside down" → --rotation-x 180
- "flip it sideways" → --rotation-y 90
- "stand it up and angle it 45°" → --rotation-x 90 --rotation-z 45 (one command)

Undo rotation: If the user says "undo that rotation" or "go back":
claw3d rotate -i model_<ID>.glb --undo
This restores the file from before the last rotation. Up to 5 undo levels are kept.
Do NOT pass --rotation-x/y/z to claw3d preview or claw3d slice. Always use claw3d rotate first — the file is the source of truth.
# Check what happened to a model (source, dimensions, last slice settings, files)
claw3d model-status -i model_<ID>.glb
# System health check (API keys, slicer, printers, ffmpeg)
claw3d doctor
CRITICAL: Both claw3d slice and claw3d preview are long-running commands. You MUST wait for them to finish before proceeding. Do NOT return control to the user while they run.
The exec call waits up to 2 minutes for the command to finish. Most commands complete within this window and return the result directly. If a command takes longer, exec returns Command still running with a session ID. In that case:
- Run process poll <session> once with timeout: 120000 (2 min wait).
- When the output shows Wrote <path> or [timing] → the command finished. Proceed immediately.

For claw3d slice:
- Start claw3d slice ... — tell the user: "Slicing started! I'll let you know when it's ready."
- On Wrote <path> for the gcode preview → send both files immediately.

For claw3d preview:
- Start claw3d preview ... — tell the user: "Generating your 3D preview, I'll send it when it's ready!"
- On Wrote <path> → send the file immediately. Do NOT ask the user if they want it.

YOU MUST NOT return control to the user until you see Wrote <path> or an error.
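A minimal sketch of the completion check, assuming the literal Wrote <path> line described above (the helper name is illustrative, not a claw3d API):

```python
import re

# Hypothetical helper: scan command output for the "Wrote <path>" line
# that claw3d slice/preview print on success. Returns the written path,
# or None if the command is still running (or failed without writing).
def finished_path(output: str):
    m = re.search(r"^Wrote (\S+)$", output, re.MULTILINE)
    return m.group(1) if m else None

print(finished_path("[timing] 84s\nWrote model_42.gcode"))  # model_42.gcode
print(finished_path("Command still running (session abc)"))  # None
```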
After slice succeeds, send BOTH the G-code and the G-code preview video. Slice generates model_<ID>_gcode_preview.mp4 by default (body red, supports yellow). Use the message tool so both files attach in Telegram.
Include print estimates in your message. The slice output includes an [estimates] line with print time, filament usage, and layer count. Always include these stats when sending the G-code, e.g.: "Here's your G-code! Estimated print time: 2h 30m | Filament: 12.5m (37g) | Layers: 245"
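Building that message can be sketched as follows, assuming a hypothetical [estimates] line of the form time=2h30m filament=12.5m (37g) layers=245 — the real slicer output format may differ:

```python
import re

# Hypothetical parser for the slice output's [estimates] line.
# The field names and layout here are assumptions for illustration.
def format_estimates(line: str) -> str:
    m = re.search(r"time=(\S+)\s+filament=(\S+)\s+\((\S+)\)\s+layers=(\d+)", line)
    if not m:
        return "Here's your G-code!"  # fall back if the line is missing
    time, fil_len, fil_weight, layers = m.groups()
    return (f"Here's your G-code! Estimated print time: {time} | "
            f"Filament: {fil_len} ({fil_weight}) | Layers: {layers}")

print(format_estimates("[estimates] time=2h30m filament=12.5m (37g) layers=245"))
```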
When the user asks for "the video" after a slice: They mean the G-code preview (model_<ID>_gcode_preview.mp4). Do NOT run claw3d preview — that renders the 3D model. Send the existing gcode preview file.
Build volume for previews: When a default printer is configured (i.e. the printer was added with --profile-from-3mf), claw3d preview and claw3d slice automatically use that printer's build volume — you do NOT need to pass --build-volume explicitly. If needed, you can always override with --build-volume WxDxH (e.g. --build-volume 350x350x350). The build volume renders the grey build plate, 10mm grid, and volume wireframe in the preview video. To verify the current default printer's build volume, run claw3d printer list.
# GLB + separate 3MF (most common) — read build volume from printer list
claw3d slice -i <glb_path> --profile-from-3mf <3mf_path> -o model_<ID>.gcode --build-volume <WxDxH>
# Thingiverse/directory model — use .stl sidecar if it exists (preferred: no conversion, no mesh fixes)
claw3d slice -i model_<ID>.stl -p <profile_id> -o model_<ID>.gcode --strength 3 --build-volume <WxDxH>
# Thingiverse/directory model — GLB only (no .stl sidecar): must use --no-mesh-clean
claw3d slice -i model_<ID>.glb -p <profile_id> -o model_<ID>.gcode --strength 3 --no-mesh-clean --build-volume <WxDxH>
# AI/user model — use .glb (must ask for max)
claw3d slice -i model.glb -p <profile_id> -o model.gcode --max-dimension 150 --strength 4 --build-volume <WxDxH>
# Single 3MF (model + settings in one file)
claw3d slice -i project.3mf -o model.gcode
# Per-parameter overrides
claw3d slice -i model.glb -p <profile_id> -o model.gcode --infill-density 20 --layer-height 0.15
# Rotation: use `claw3d rotate` first, then slice without rotation flags
claw3d rotate -i model.glb --rotation-y 90
claw3d slice -i model.glb -p <profile_id> -o model.gcode
# Profile management
claw3d profile create --from-3mf settings.3mf --name my_pla
claw3d profile list
claw3d profile set-default <profile_id>
claw3d profile clear # delete all profiles (fresh start)
# Standalone preview with build area (read WxDxH from printer list)
claw3d preview --input model.glb --output preview.mp4 --build-volume <WxDxH>
claw3d gcode-preview --input model.gcode --output gcode_preview.mp4 --build-volume <WxDxH>
| Flag | Description |
|---|---|
-i, --input | Input GLB, STL, or 3MF |
-o, --output | Output G-code path |
-p, --profile | Profile ID (use --profile OR --profile-from-3mf for GLB/STL) |
--profile-from-3mf | Create profile from 3MF, then slice |
--strength | 1–5 (10%→1 … 100%→5). Default 3 |
--quality | 1–5 (10%→1 … 100%→5). Detail / print quality level |
--max-dimension | Scale longest axis to N mm (AI models) |
--max-from-model | Use max from dimensions.json (directory models) |
--no-mesh-clean | Skip all mesh repair during GLB→STL conversion. Required for directory/Thingiverse GLBs — mesh fixes are for AI models only and can delete real model geometry |
--rotation-x | ⚠️ Prefer claw3d rotate instead — bakes rotation into file. Only use in slice/preview for one-off tests |
--rotation-y | Same as above |
--rotation-z | Same as above |
--layer-height | Override layer height in mm (e.g. 0.15) |
--infill-density | Override infill percentage (e.g. 20) |
--preview-video | Generate 360° G-code preview video (default ON) |
--no-preview-video | Skip G-code preview video (faster) |
--build-volume | WxDxH mm (e.g. 350x350x350). Shows build plate + grid in gcode preview. Read from claw3d printer list. |
--bed-autocalibration | Run bed leveling before print. Default OFF — only add when user explicitly asks |
On first use, a printer AND a linked slicing profile are both required. The profile (created from a 3MF) stores the printer's build volume (build_width × build_depth × build_height in mm), which the slicer uses to scale models correctly. Without it, slicing fails.
When you upload a Cura project file (.3mf), the slicer extracts:
- machine_name — Printer model (e.g. "Creality K2 Pro")
- build_width × build_depth × build_height — Build volume in mm
- nozzle_size — Nozzle diameter in mm

In Cura with your printer loaded: File → Save → "Export Universal Cura Project" → save as .3mf.
This captures the full printer config, not just the model geometry.
Printer add flags:
- --name (required): Display name, e.g. "Creality K2 Pro Living Room"
- --host (required): Printer IP or hostname
- --port (required): Moonraker usually 7125; Creality K2 SE often 4408
- --profile-from-3mf (required for slicing): Create and link profile from 3MF in one step. Without this, slicing will fail until a profile is linked manually via printer set-profile.
- --id (optional): CLI slug. If omitted, derived from --name (e.g. "Creality K2 Pro" → creality_k2_pro)

claw3d printer add --name "<name>" --host <ip> --port <port> --profile-from-3mf <path> [--id <slug>]
claw3d printer set-profile <printer_id> <profile_id>
claw3d printer set-default <id> # only needed when 2+ printers; first printer auto-becomes default
claw3d printer remove <id>
Parse user input: "Creality K2 SE Living Room 192.168.28.102:4408" → name="Creality K2 SE Living Room", host=192.168.28.102, port=4408. If user also sends 3MF, add --profile-from-3mf <path>.
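That parsing step can be sketched as a hypothetical helper (the example string is the one above; the regex and function name are illustrative):

```python
import re

# Hypothetical parser for the "<name> <host>:<port>" setup message.
def parse_printer_line(text: str):
    """Split 'Creality K2 SE Living Room 192.168.28.102:4408'
    into (name, host, port)."""
    m = re.match(r"^(.*\S)\s+([\w.-]+):(\d+)$", text.strip())
    if not m:
        raise ValueError("expected '<name> <host>:<port>'")
    return m.group(1), m.group(2), int(m.group(3))

name, host, port = parse_printer_line("Creality K2 SE Living Room 192.168.28.102:4408")
print(name, host, port)
```

The host pattern accepts both IPs and hostnames, since --host allows either.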
Default printer: The first printer added is automatically set as the default. When there is only one printer, it is always used without asking. When 2+ printers exist and no default is set, ask the user which to use, then run claw3d printer set-default <id> with their choice so subsequent operations don't need to ask again.
No 3MF yet? Add the printer without it, then immediately ask:
To complete setup, please send your Cura project file (.3mf). In Cura: File → Save → "Export Universal Cura Project". This gives me your exact build volume and settings.
Then: claw3d profile create --from-3mf <path> --name "<printer_id>_profile" → claw3d printer set-profile <printer_id> <profile_id>
Fresh start: Run claw3d profile clear, then re-add printer with --profile-from-3mf.
Printer backends: Run claw3d configure backends to see options (Moonraker, PrusaLink, etc.). Community can add backends in claw3d/backends/.
ALWAYS run claw3d printer list before sending a print. If 2+ printers exist and no default is set, ask:
Which printer should I send the G-code to?
- [First printer name]
- [Second printer name]
claw3d print --gcode model.gcode [--printer id]
claw3d status [--printer id]
claw3d pause [--printer id]
claw3d resume [--printer id]
claw3d cancel [--printer id]
claw3d camera [--printer id] [--snapshot]
claw3d preheat --extruder 200 --bed 60 [--printer id]
claw3d cooldown [--printer id]
claw3d home [--axes x y z] [--printer id]
claw3d files [--path subdir] [--printer id]
claw3d start --file model.gcode [--printer id]
claw3d emergency-stop [--printer id]
claw3d metadata --file model.gcode [--printer id]
When a model needs more than one build plate:
- Queue each plate: claw3d queue add plate1.gcode --label "Plate 1", etc.
- Start the first plate: claw3d print --gcode plate1.gcode
- When it finishes: claw3d queue next, then ask whether to start the next plate.
claw3d queue list
claw3d queue next # pops and returns next path
claw3d queue clear # clear entire queue
Each printer can have a linked default profile. Run claw3d printer list to see links including build volume. Use that profile when slicing for that printer.
Creality K2 Pro (creality_k2_pro): 192.168.1.50:4408 [moonraker] [profile: creality_k2_pro_profile] [350×350×350mm] (default)
The build volume (350×350×350mm) is snapshotted from the profile when it is linked. When slicing AI-generated or user-provided models (no dimensions file), read the printer's build volume from claw3d printer list and use the smallest dimension as the default --max-dimension suggestion, rather than asking the user to guess.
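Deriving that default can be sketched as follows (a hypothetical helper, assuming the WxDxH string format used throughout this doc):

```python
# Hypothetical helper: derive the default --max-dimension suggestion
# from a build-volume string such as "350x350x350" (WxDxH, mm).
def default_max_dimension(build_volume: str) -> int:
    width, depth, height = (int(v) for v in build_volume.lower().split("x"))
    return min(width, depth, height)

print(default_max_dimension("220x215x245"))  # 215 — smallest axis wins
print(default_max_dimension("350x350x350"))  # 350
```

Using the smallest axis guarantees the suggested size fits the build volume regardless of how the model is oriented on the plate.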
uv tool install claw3d