Image To Video Ai Movement

v1.0.0

Get animated video clips ready to post, without touching a single slider. Upload your still images (JPG, PNG, WEBP, HEIC, up to 200MB), say something like "a...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for tk8544-b/image-to-video-ai-movement.

Prompt Preview: Install & Setup
Install the skill "Image To Video Ai Movement" (tk8544-b/image-to-video-ai-movement) from ClawHub.
Skill page: https://clawhub.ai/tk8544-b/image-to-video-ai-movement
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install image-to-video-ai-movement

ClawHub CLI


npx clawhub@latest install image-to-video-ai-movement
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability

The skill is presented as a cloud-based image-to-video renderer; the single required credential (NEMO_TOKEN) and the network endpoints align with that purpose. Note: the SKILL.md frontmatter lists a config path (~/.config/nemovideo/) and describes detecting install paths (e.g., ~/.clawhub/) to set an X-Skill-Platform header, yet the registry metadata did not declare required config paths. This inconsistency is worth confirming (the skill may read the agent's install path or a per-user nemovideo config).

Instruction Scope

Runtime instructions stay within the described scope: create or use a token, create a session, upload media, call render/export endpoints, and handle SSE or polling. The skill does not instruct reading unrelated files or environment variables beyond the token, the optional config path, and the install-path check used to set attribution headers.

Install Mechanism

Instruction-only skill with no install spec and no code files — lowest installation risk. All network interactions go to the documented nemovideo API host; no downloads or arbitrary code execution are specified.

Credentials

Only a single service credential (NEMO_TOKEN) is required, which matches a cloud rendering service. The skill also supports obtaining a short-lived anonymous token if none is provided — this behavior is documented in SKILL.md and is proportionate.

Persistence & Privilege

The skill is not force-included (always: false) and does not request system-level persistence or modification of other skills. It uses ephemeral session IDs and cloud-side job IDs for renders.

Assessment

This skill appears to do what it says: it uploads your images to mega-api-prod.nemovideo.ai and returns rendered video files. Before installing, consider:

  1. You will need to provide a NEMO_TOKEN or allow the skill to request an anonymous token from nemovideo.ai — if you provide a token, it will be sent as a Bearer credential on API calls.
  2. Your images and any prompt text are uploaded to a third-party service; avoid sending sensitive private data.
  3. Confirm the apparent metadata inconsistency: SKILL.md references a local config path (~/.config/nemovideo/) and install-path detection (to set X-Skill-Platform) even though the registry listing did not declare config paths — ask the author whether the skill reads those paths and what it stores locally.
  4. If you need stronger privacy, run only with throwaway/anonymous credentials, or test with non-sensitive images first.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎞️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk9721mvqcfyx8ybg25vwnwp03585ac89
82 downloads · 0 stars · 1 version
Updated 6d ago
v1.0.0 · MIT-0

Getting Started

Share your still images and I'll get started on AI motion generation. Or just tell me what you're thinking.

Try saying:

  • "animate my still images"
  • "export 1080p MP4"
  • "animate this image with a slow"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
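As a concrete illustration, here is a minimal Python sketch of that connection flow, using the requests library. The session_id field is documented above; the token field name in the anonymous-token response is an assumption, since only "a token" is mentioned.

import os
import uuid
import requests

BASE_URL = "https://mega-api-prod.nemovideo.ai"

def get_token() -> str:
    # Prefer NEMO_TOKEN from the environment.
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token
    # Otherwise request a free starter token (100 credits, valid 7 days).
    resp = requests.post(
        f"{BASE_URL}/api/auth/anonymous-token",
        headers={"X-Client-Id": str(uuid.uuid4())},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["token"]  # field name assumed

def create_session(token: str) -> str:
    # The session_id returned here scopes every later request.
    resp = requests.post(
        f"{BASE_URL}/api/tasks/me/with-session/nemo_agent",
        headers={"Authorization": f"Bearer {token}"},
        json={"task_name": "project", "language": "en"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["session_id"]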

Tell the user you're ready. Keep the technical details out of the chat.

Image to Video AI Movement — Animate Images into Video Clips

This tool takes your still images and runs AI motion generation through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a single product photo or portrait image and want to animate this image with a slow zoom and subtle motion effect — the backend processes it in about 20-40 seconds and hands you a 1080p MP4.

Tip: images with clear subjects and simple backgrounds produce smoother motion results.

Matching Input to Actions

User prompts referencing image to video ai movement, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... | Action | Skip SSE?
"export" / "导出" / "download" / "send me the video" | → §3.5 Export | Yes
"credits" / "积分" / "balance" / "余额" | → §3.3 Credits | Yes
"status" / "状态" / "show tracks" | → §3.4 State | Yes
"upload" / "上传" / user sends file | → §3.2 Upload | Yes
Everything else (generate, edit, add BGM…) | → §3.1 SSE | No
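A toy sketch of that routing in Python; plain substring matching stands in here for whatever intent classifier the skill actually applies, and the section labels mirror the table.

def route(message: str) -> str:
    # Keyword rules from the table above; first match wins.
    rules = [
        ("§3.5 Export", ("export", "导出", "download", "send me the video")),
        ("§3.3 Credits", ("credits", "积分", "balance", "余额")),
        ("§3.4 State", ("status", "状态", "show tracks")),
        ("§3.2 Upload", ("upload", "上传")),
    ]
    text = message.lower()
    for action, keywords in rules:
        if any(k in text for k in keywords):
            return action
    return "§3.1 SSE"  # fallback: generate, edit, add BGM, ...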

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

Endpoint | Method | Purpose
/api/tasks/me/with-session/nemo_agent | POST | Start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id.
/run_sse | POST | Send a user message. Body includes app_name, session_id, new_message. Stream the response with Accept: text/event-stream. Timeout: 15 min.
/api/upload-video/nemo_agent/me/<sid> | POST | Upload a file (multipart) or a URL.
/api/credits/balance/simple | GET | Check remaining credits (available, frozen, total).
/api/state/nemo_agent/me/<sid>/latest | GET | Fetch the current timeline state (draft, video_infos, generated_media).
/api/render/proxy/lambda | POST | Start an export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s.
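To make the export row concrete, here is a sketch of starting a render and polling on the 30-second cadence. The status endpoint is not named in this section, so the check is left to a caller-supplied function; everything beyond the documented request body is an assumption.

import time
import requests

BASE_URL = "https://mega-api-prod.nemovideo.ai"

def start_export(headers: dict, session_id: str, draft: dict) -> str:
    # Matches the /api/render/proxy/lambda body shown in the table.
    body = {
        "id": f"render_{int(time.time())}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }
    resp = requests.post(
        f"{BASE_URL}/api/render/proxy/lambda",
        headers=headers, json=body, timeout=60,
    )
    resp.raise_for_status()
    return body["id"]

def wait_for_render(job_id: str, check_status) -> str:
    # Poll every 30s, per the table. check_status(job_id) should return the
    # download URL once the job finishes, or None while it is still running;
    # it is caller-supplied because the status endpoint is undocumented here.
    while True:
        url = check_status(job_id)
        if url:
            return url
        time.sleep(30)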

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is image-to-video-ai-movement, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
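A sketch of assembling those headers in Python, with platform detection following the install-path rule above; the version string is hard-coded here purely for illustration.

import os
from pathlib import Path

def detect_platform() -> str:
    # Install-path detection as described: ~/.clawhub/ -> clawhub,
    # ~/.cursor/skills/ -> cursor, anything else -> unknown.
    home = Path.home()
    if (home / ".clawhub").exists():
        return "clawhub"
    if (home / ".cursor" / "skills").exists():
        return "cursor"
    return "unknown"

def build_headers() -> dict:
    return {
        "Authorization": f"Bearer {os.environ['NEMO_TOKEN']}",
        "X-Skill-Source": "image-to-video-ai-movement",
        "X-Skill-Version": "1.0.0",  # from the frontmatter version field
        "X-Skill-Platform": detect_platform(),
    }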

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
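One way to centralize those codes, sketched in Python; the assumption is that API responses carry a numeric code field matching the list above.

RECOVERY = {
    0: "success, continue normally",
    1001: "re-acquire a token via /api/auth/anonymous-token",
    1002: "create a new session",
    2001: "out of credits: registration link (anonymous) or top up (registered)",
    4001: "unsupported file type: show accepted formats",
    4002: "file too large: suggest compressing or trimming",
    400: "missing X-Client-Id: generate one and retry",
    402: "free-plan export blocked (subscription tier, not credits)",
    429: "rate limited: wait 30s and retry once",
}

def describe_error(code: int) -> str:
    # Translate a backend code into the recovery action listed above.
    return RECOVERY.get(code, f"unrecognized code {code}")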

SSE Event Handling

Event | Action
Text response | Apply GUI translation (§4), present to user
Tool call/result | Process internally, don't forward
heartbeat / empty data: | Keep waiting; every 2 min show "⏳ Still working..."
Stream closes | Process the final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
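A rough sketch of consuming the stream with requests, assuming line-delimited data: payloads and a text field on text events; the app_name value is also an assumption, since the endpoint table only says the body includes it.

import json
import requests

BASE_URL = "https://mega-api-prod.nemovideo.ai"

def stream_message(headers: dict, session_id: str, message: str):
    # POST /run_sse and yield text chunks; tool events and heartbeats
    # are handled internally, per the table above.
    body = {
        "app_name": "nemo_agent",  # value assumed
        "session_id": session_id,
        "new_message": message,
    }
    with requests.post(
        f"{BASE_URL}/run_sse",
        headers={**headers, "Accept": "text/event-stream"},
        json=body, stream=True, timeout=900,  # 15 min, per the endpoint table
    ) as resp:
        resp.raise_for_status()
        for raw in resp.iter_lines():
            if not raw or not raw.startswith(b"data:"):
                continue  # non-data lines: keep waiting
            payload = raw[len(b"data:"):].strip()
            if not payload:
                continue  # heartbeat / empty data: keep waiting
            event = json.loads(payload)
            if "text" in event:  # field name assumed
                yield event["text"]
            # tool call/result events: process internally, don't forward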

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

Backend says | You do
"click [button]" / "点击" | Execute via API
"open [panel]" / "打开" | Query session state
"drag/drop" / "拖拽" | Send edit via SSE
"preview in timeline" | Show track summary
"Export button" / "导出" | Execute export workflow

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
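A sketch of producing that kind of summary from the short-key draft JSON. The nesting assumed here (t holds tracks, each with sg segments carrying d durations, plus a name under the m metadata) is not documented above, so treat it as illustrative only.

TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft: dict) -> str:
    # Walk t (tracks) / sg (segments) / d (duration, ms) into one line per track.
    tracks = draft.get("t", [])
    lines = [f"Timeline ({len(tracks)} tracks):"]
    for i, track in enumerate(tracks, start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        name = track.get("m", {}).get("name", "untitled")  # metadata key assumed
        total_s = sum(seg.get("d", 0) for seg in track.get("sg", [])) / 1000
        lines.append(f"{i}. {kind}: {name} (0-{total_s:.0f}s)")
    return "\n".join(lines)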

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "animate this image with a slow zoom and subtle motion effect" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, HEIC for the smoothest experience.

Export as MP4 for widest compatibility across social platforms.

Common Workflows

Quick edit: Upload → "animate this image with a slow zoom and subtle motion effect" → Download MP4. Takes 20-40 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
