Image To Video I

v1.0.0

Get animated video clips ready to post, without touching a single slider. Upload your still images (JPG, PNG, WEBP, HEIC, up to 200MB), say something like "t...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dsewell-583h0/image-to-video-i.

Prompt preview: Install & Setup
Install the skill "Image To Video I" (dsewell-583h0/image-to-video-i) from ClawHub.
Skill page: https://clawhub.ai/dsewell-583h0/image-to-video-i
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install image-to-video-i

ClawHub CLI


npx clawhub@latest install image-to-video-i
Security Scan

VirusTotal: Benign (view report →)

OpenClaw: Benign (medium confidence)
Purpose & Capability
The name/description match the actions in SKILL.md (upload images, request renders, download MP4). Requesting a service token (NEMO_TOKEN) is proportionate to a remote-rendering API. One inconsistency: the registry metadata listed no required config paths, while the skill YAML metadata includes a configPaths entry (~/.config/nemovideo/). This mismatch should be clarified.
Instruction Scope
Runtime instructions stay within the image→video workflow (session creation, SSE chat, upload, export, polling). They do instruct the agent to derive X-Skill-Platform from install paths (checking ~/.clawhub/ or ~/.cursor/skills/), which implies reading the agent's install path — a minor scope expansion but understandable for attribution headers. Instructions explicitly say not to print tokens or raw JSON. No instructions ask for unrelated files, secrets, or system-wide data.
Install Mechanism
This is an instruction-only skill with no install spec or code files, so nothing will be written to disk by an install step. That is the lowest-risk install profile.
Credentials
Only NEMO_TOKEN is required (declared primaryEnv), which is reasonable. The skill also documents obtaining an anonymous token via an API call if NEMO_TOKEN is absent. Verify you are comfortable with an environment token granting access to the external rendering service and with uploading your images to that service.
Persistence & Privilege
The skill does not request always:true or other elevated persistent privileges. It does not contain install-time scripts or modifications to other skills or system-wide settings.
Assessment
This skill appears to do what it says: it uploads images to a remote rendering service and returns rendered video. Before installing/use: (1) Confirm you trust the external domain (mega-api-prod.nemovideo.ai) and the privacy terms for uploaded images (you will be sending your images off-device). (2) Understand that NEMO_TOKEN (or an anonymously fetched token) is required — that token grants the skill access to your account/credits on the service. (3) Clarify the metadata inconsistency about ~/.config/nemovideo/ (does the skill read that directory?). If you cannot verify the service or are handling sensitive images, run this skill in a sandboxed environment or avoid using it.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🖼️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk978b5xfch9v2v8gsv7dn8s2xx858sp8
Downloads: 95
Stars: 0
Versions: 1
Updated: 6d ago
Version: v1.0.0
License: MIT-0

Getting Started

Ready when you are. Drop your still images here or describe what you want to make.

Try saying:

  • "convert a single product photo or landscape image into a 1080p MP4"
  • "turn this image into a 10-second animated video with smooth motion"
  • "converting static images into short animated videos for marketers, social media creators"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If the NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
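A minimal sketch of this setup flow, assuming Python with the requests library; the data.token path is documented above, while the top-level session_id field and the function names are assumptions for illustration:

    import os
    import uuid
    import requests

    BASE = "https://mega-api-prod.nemovideo.ai"

    def get_token() -> str:
        """Use NEMO_TOKEN if set; otherwise fetch an anonymous token
        (100 credits, 7-day expiry)."""
        token = os.environ.get("NEMO_TOKEN")
        if token:
            return token
        resp = requests.post(
            f"{BASE}/api/auth/anonymous-token",
            headers={"X-Client-Id": str(uuid.uuid4())},  # fresh UUID as client id
        )
        resp.raise_for_status()
        return resp.json()["data"]["token"]

    def create_session(token: str) -> str:
        """Create the working session; all later calls are scoped to it."""
        resp = requests.post(
            f"{BASE}/api/tasks/me/with-session/nemo_agent",
            headers={"Authorization": f"Bearer {token}"},
            json={"task_name": "project"},
        )
        resp.raise_for_status()
        return resp.json()["session_id"]  # assumed top-level; adjust to the real shape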

Image to Video — Convert Images Into Video Clips

Send me your still images and describe the result you want. The AI video creation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a single product photo or landscape image, type "turn this image into a 10-second animated video with smooth motion", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: high-contrast images with clear subjects produce the most natural-looking motion.

Matching Input to Actions

User prompts referencing image-to-video conversion, aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification.

User says... → Action (skip SSE?)

  • "export" / "导出" (export) / "download" / "send me the video" → §3.5 Export (skips SSE)
  • "credits" / "积分" (credits) / "balance" / "余额" (balance) → §3.3 Credits (skips SSE)
  • "status" / "状态" (status) / "show tracks" → §3.4 State (skips SSE)
  • "upload" / "上传" (upload) / user sends a file → §3.2 Upload (skips SSE)
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
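A hypothetical keyword router matching the table above; the real skill also uses intent classification, so treat this as an illustrative sketch only:

    # Keyword sets mirror the routing table; names here are illustrative.
    ROUTES = [
        ({"export", "导出", "download", "send me the video"}, "export"),   # §3.5
        ({"credits", "积分", "balance", "余额"}, "credits"),               # §3.3
        ({"status", "状态", "show tracks"}, "state"),                      # §3.4
        ({"upload", "上传"}, "upload"),                                    # §3.2
    ]

    def route(message: str) -> str:
        text = message.lower()
        for keywords, action in ROUTES:
            if any(k in text for k in keywords):
                return action  # direct REST call, skips SSE
        return "sse"           # everything else goes through the SSE chat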

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-specific compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> with a multipart file or JSON with URLs.
  4. Credits: GET /api/credits/balance/simple, which returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest for the current draft and media info.
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL (see the sketch after this list).
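A sketch of the export-and-poll step (6), assuming Python with requests; the request body shape and the status/download-URL field names are not documented here and are assumptions:

    import time
    import requests

    BASE = "https://mega-api-prod.nemovideo.ai"

    def export_and_wait(headers: dict, render_id: str, draft: dict, timeout_s: int = 600) -> str:
        """Submit an export job, then poll every 30s until it completes.
        `headers` must include Bearer auth plus the X-Skill-* attribution
        headers described below, or the export fails with 402."""
        requests.post(
            f"{BASE}/api/render/proxy/lambda",
            headers=headers,
            json={"render_id": render_id, "draft": draft},  # body shape assumed
        ).raise_for_status()

        deadline = time.time() + timeout_s
        while time.time() < deadline:
            resp = requests.get(f"{BASE}/api/render/proxy/lambda/{render_id}", headers=headers)
            resp.raise_for_status()
            job = resp.json()
            if job.get("status") == "completed":  # field name assumed
                return job["download_url"]        # field name assumed
            time.sleep(30)  # documented polling interval
        raise TimeoutError("render did not complete in time")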

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is image-to-video-i, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
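A sketch of assembling those headers, with the platform detected from the install path as described; reading the version from the frontmatter is omitted and hard-coded here for brevity:

    import os

    def build_headers(token: str, version: str = "1.0.0") -> dict:
        """Attribution headers required on every request; a missing
        header causes export to fail with 402."""
        home = os.path.expanduser("~")
        if os.path.isdir(os.path.join(home, ".clawhub")):
            platform = "clawhub"
        elif os.path.isdir(os.path.join(home, ".cursor", "skills")):
            platform = "cursor"
        else:
            platform = "unknown"
        return {
            "Authorization": f"Bearer {token}",
            "X-Skill-Source": "image-to-video-i",
            "X-Skill-Version": version,  # normally read from the YAML frontmatter
            "X-Skill-Platform": platform,
        }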

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
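In draft JSON terms, that summary might correspond to something like the following Python sketch; nesting beyond the documented t/tt/sg/d/m keys, and the volume field, are assumptions:

    draft = {
        "t": [  # tracks
            {"tt": 0, "sg": [{"d": 10_000, "m": {"name": "city timelapse"}}]},        # video, 10s
            {"tt": 1, "sg": [{"d": 10_000, "m": {"name": "Lo-fi", "volume": 0.35}}]}, # audio, 10s at 35%
            {"tt": 7, "sg": [{"d": 3_000, "m": {"text": "Urban Dreams"}}]},           # title, 0-3s
        ]
    }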

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

SSE Event Handling

Event → Action

  • Text response → apply GUI translation (§4), present to user
  • Tool call/result → process internally, don't forward
  • Heartbeat / empty data: → keep waiting; every 2 min: "⏳ Still working..."
  • Stream closes → process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
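A sketch of that event loop, assuming standard SSE framing over requests streaming; distinguishing text events from tool calls is simplified away, and a None return signals the silent-edit case where the caller should poll session state:

    import time
    import requests

    def run_sse(url: str, headers: dict, payload: dict):
        """Stream the SSE chat call; return the final text, or None if the
        stream produced no text (then poll session state instead)."""
        last_note = time.time()
        final_text = None
        with requests.post(
            url,
            headers={**headers, "Accept": "text/event-stream"},
            json=payload,
            stream=True,
            timeout=15 * 60,  # documented upper bound
        ) as resp:
            resp.raise_for_status()
            for line in resp.iter_lines(decode_unicode=True):
                if not line or line.strip() == "data:":   # heartbeat / empty data
                    if time.time() - last_note > 120:     # every 2 minutes
                        print("⏳ Still working...")
                        last_note = time.time()
                    continue
                if line.startswith("data:"):
                    # Simplification: treat every payload as the latest text.
                    final_text = line[len("data:"):].strip()
        return final_text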

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
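These map naturally to a recovery table; a sketch, with the numeric code assumed to arrive in the response body:

    RECOVERY = {
        0:    "success, continue normally",
        1001: "re-acquire token via /api/auth/anonymous-token",
        1002: "create a new session",
        2001: "out of credits: registration link or top-up",
        4001: "show accepted formats",
        4002: "suggest compressing or trimming",
        400:  "generate an X-Client-Id and retry",
        402:  "explain the subscription-tier export block",
        429:  "wait 30s and retry once",
    }

    def recovery_action(code: int) -> str:
        return RECOVERY.get(code, "unknown code: surface the raw error")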

Common Workflows

Quick edit: Upload → "turn this image into a 10-second animated video with smooth motion" → Download MP4. Rendering typically takes 30-60 seconds.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn this image into a 10-second animated video with smooth motion" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, HEIC for the smoothest experience.

PNG images with clean backgrounds give the AI more accurate motion generation.
