Image To Video Colab

v1.0.0

Get animated video clips ready to post, without touching a single slider. Upload your still images (JPG, PNG, WEBP, GIF, up to 200MB), say something like "an...

Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description describe a cloud-based image-to-video service, and the SKILL.md requests only a single service token (NEMO_TOKEN) and documents API endpoints on mega-api-prod.nemovideo.ai, which is coherent with the stated purpose. Minor note: the metadata lists a config path (~/.config/nemovideo/) that the instructions never reference.
Instruction Scope
Instructions are limited to checking or obtaining NEMO_TOKEN (anonymous token flow), creating sessions, uploading files, streaming SSE, polling job state, and exporting results. The skill does not instruct the agent to read unrelated system files, scan shell history, or exfiltrate unspecified environment variables, and it explicitly says not to expose tokens or raw API output.
Install Mechanism
No install spec or code files are present (instruction-only). That minimizes disk-write and supply-chain risk.
Credentials
Only NEMO_TOKEN is required (declared as primaryEnv), which is appropriate for a cloud rendering API. The only oddity is the metadata-declared config path (~/.config/nemovideo/), which is not referenced in SKILL.md; this is inconsistent but not by itself dangerous.
Persistence & Privilege
always is false and the skill is user-invocable. It does not request permanent/always-on inclusion or modification of other skills or system-wide settings.
Assessment
This skill appears to do exactly what it says: it uploads images to nemovideo.ai and returns rendered videos, and it needs a NEMO_TOKEN to authenticate. Before installing or using it:

  • Confirm you trust mega-api-prod.nemovideo.ai (privacy and retention of uploaded images matter).
  • Don't upload sensitive or proprietary images unless you're comfortable with that service's policies.
  • If you don't already have a NEMO_TOKEN, the skill can obtain a short-lived anonymous token; that's normal, but it means your uploads go to their cloud.
  • Note that the metadata lists a config path (~/.config/nemovideo/) the instructions never use; ask the skill author why that path is declared.
  • Because the skill streams and uploads files, make sure your environment policy allows the network access and file uploads you expect.

If any of these are a problem, do not install it or provide your long-lived credentials; prefer anonymous usage or review the service's terms first.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🖼️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk979tkcfjqv8dyyn237g9cvz7d85adxf
39 downloads
0 stars
1 version
Updated 22h ago
v1.0.0
MIT-0

Getting Started

Send me your still images and I'll handle the AI video creation. Or just describe what you're after.

Try saying:

  • "convert three product photos in JPG format into a 1080p MP4"
  • "animate these images into a smooth video with transitions"
  • "turning static images into animated video sequences for content creators and marketers"

Quick Start Setup

This skill connects to a cloud processing backend. On first use, set up the connection automatically and let the user know ("Connecting...").

Token check: Look for NEMO_TOKEN in the environment. If found, skip to session creation. Otherwise:

  • Generate a UUID as client identifier
  • POST https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with X-Client-Id header
  • Extract data.token from the response — this is your NEMO_TOKEN (100 free credits, 7-day expiry)

Session: POST https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Keep the returned session_id for all operations.

Let the user know with a brief "Ready!" when setup is complete. Don't expose tokens or raw API output.
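
For concreteness, a minimal sketch of that setup flow in Python with the requests library, assuming the endpoints behave as documented here (the helper names and the exact shape of the session response are illustrative, not guaranteed by the service):

    import os
    import uuid
    import requests

    BASE_URL = "https://mega-api-prod.nemovideo.ai"

    def get_token() -> str:
        # Reuse an existing NEMO_TOKEN if the environment already provides one.
        token = os.environ.get("NEMO_TOKEN")
        if token:
            return token
        # Otherwise request a short-lived anonymous token (100 free credits, 7-day expiry).
        resp = requests.post(
            f"{BASE_URL}/api/auth/anonymous-token",
            headers={"X-Client-Id": str(uuid.uuid4())},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["data"]["token"]

    def create_session(token: str) -> str:
        # Start an editing session; keep the returned session_id for all later calls.
        resp = requests.post(
            f"{BASE_URL}/api/tasks/me/with-session/nemo_agent",
            headers={"Authorization": f"Bearer {token}"},
            json={"task_name": "project"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["session_id"]  # field name assumed from "Returns session_id"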

Image to Video Colab — Convert Images into Video Clips

This tool takes your still images and runs AI video creation through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have three product photos in JPG format and want to animate these images into a smooth video with transitions — the backend processes it in about 1-2 minutes and hands you a 1080p MP4.

Tip: fewer images per batch process faster and produce smoother results.

Matching Input to Actions

User prompts referencing image to video colab, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export (skips SSE)
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits (skips SSE)
  • "status" / "状态" / "show tracks" → §3.4 State (skips SSE)
  • "upload" / "上传" / user sends file → §3.2 Upload (skips SSE)
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
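
A rough illustration of that routing as plain keyword matching, which is only an assumption about how the classification works (the section labels mirror the list above; the function name is made up):

    # Trigger keywords per action, checked in order; anything unmatched falls through to SSE.
    ROUTES = [
        (("export", "导出", "download", "send me the video"), "3.5 Export"),
        (("credits", "积分", "balance", "余额"), "3.3 Credits"),
        (("status", "状态", "show tracks"), "3.4 State"),
        (("upload", "上传"), "3.2 Upload"),
    ]

    def route(message: str, has_attachment: bool = False) -> str:
        if has_attachment:
            return "3.2 Upload"  # a file sent by the user always means upload
        text = message.lower()
        for keywords, action in ROUTES:
            if any(keyword in text for keyword in keywords):
                return action
        return "3.1 SSE"  # everything else: generate, edit, add BGM, ...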

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

  • POST /api/tasks/me/with-session/nemo_agent
    Start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id.
  • POST /run_sse
    Send a user message. Body includes app_name, session_id, new_message. Stream the response with Accept: text/event-stream. Timeout: 15 min.
  • POST /api/upload-video/nemo_agent/me/<sid>
    Upload a file (multipart) or URL.
  • GET /api/credits/balance/simple
    Check remaining credits (available, frozen, total).
  • GET /api/state/nemo_agent/me/<sid>/latest
    Fetch the current timeline state (draft, video_infos, generated_media).
  • POST /api/render/proxy/lambda
    Start an export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s.

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
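
As a sketch of the upload and export calls under the same assumptions as before (Python with requests): the render-status polling endpoint and the download_url field are not specified above, so the loop below only illustrates the 30-second cadence, and the attribution headers described in the next section are omitted for brevity.

    import time
    import requests

    BASE_URL = "https://mega-api-prod.nemovideo.ai"

    def upload_file(token: str, session_id: str, path: str) -> dict:
        # Multipart upload of a local file into the current session.
        with open(path, "rb") as fh:
            resp = requests.post(
                f"{BASE_URL}/api/upload-video/nemo_agent/me/{session_id}",
                headers={"Authorization": f"Bearer {token}"},
                files={"file": fh},
                timeout=300,
            )
        resp.raise_for_status()
        return resp.json()

    def export_video(token: str, session_id: str, draft: dict) -> str:
        # Kick off a render job with the documented body shape.
        resp = requests.post(
            f"{BASE_URL}/api/render/proxy/lambda",
            headers={"Authorization": f"Bearer {token}"},
            json={
                "id": f"render_{int(time.time())}",
                "sessionId": session_id,
                "draft": draft,
                "output": {"format": "mp4", "quality": "high"},
            },
            timeout=60,
        )
        resp.raise_for_status()
        # Poll every 30s; using the state endpoint and a download_url field here is an
        # assumption, since the docs only say "Poll status every 30s".
        for _ in range(10):
            time.sleep(30)
            state = requests.get(
                f"{BASE_URL}/api/state/nemo_agent/me/{session_id}/latest",
                headers={"Authorization": f"Bearer {token}"},
                timeout=30,
            ).json()
            if state.get("download_url"):
                return state["download_url"]
        raise TimeoutError("render did not report a download URL within 5 minutes")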

Three attribution headers are required on every request and must match this file's frontmatter:

Header             Value
X-Skill-Source     image-to-video-colab
X-Skill-Version    frontmatter version
X-Skill-Platform   auto-detect: clawhub / cursor / unknown from install path

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
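
A small helper for building those headers, assuming a simple path-based platform check (the detection heuristic and the function names are illustrative):

    SKILL_SOURCE = "image-to-video-colab"
    SKILL_VERSION = "1.0.0"  # must match this file's frontmatter version

    def detect_platform(install_path: str) -> str:
        # Crude auto-detection from the install path, per the table above.
        if "clawhub" in install_path:
            return "clawhub"
        if "cursor" in install_path:
            return "cursor"
        return "unknown"

    def request_headers(token: str, install_path: str = "") -> dict:
        # Every request needs the Bearer token plus all three attribution headers;
        # without them, export fails with a 402.
        return {
            "Authorization": f"Bearer {token}",
            "X-Skill-Source": SKILL_SOURCE,
            "X-Skill-Version": SKILL_VERSION,
            "X-Skill-Platform": detect_platform(install_path),
        }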

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users should top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue but a subscription-tier restriction
  • 429 — rate limited; wait 30s and retry once
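
One way to fold those codes into the flow is a small dispatcher like the sketch below; the recovery strings simply restate the list above, and nothing here is mandated by the API:

    def recovery_for(code: int) -> str:
        # Map each documented error code to the recovery step it calls for.
        actions = {
            0: "success, continue normally",
            1001: "re-acquire a token via /api/auth/anonymous-token",
            1002: "create a new session and retry",
            2001: "out of credits: offer the registration link (anonymous) or a top-up",
            4001: "unsupported file type: show the accepted formats",
            4002: "file too large: suggest compressing or trimming",
            400: "generate an X-Client-Id and retry",
            402: "export blocked on the free plan (subscription tier, not credits)",
            429: "rate limited: wait 30s and retry once",
        }
        return actions.get(code, f"unknown error code {code}: surface it to the user")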

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty "data:" lines mean the backend is still working; show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
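
A sketch of reading that stream in Python, again with requests; the app_name value and the shape of individual events (a type field distinguishing text from tool calls) are assumptions, not documented above:

    import json
    import time
    import requests

    BASE_URL = "https://mega-api-prod.nemovideo.ai"

    def stream_message(token: str, session_id: str, text: str):
        # Send one user message through /run_sse and yield text events as they arrive.
        body = {
            "app_name": "nemo_agent",  # assumed value
            "session_id": session_id,
            "new_message": text,
        }
        with requests.post(
            f"{BASE_URL}/run_sse",
            headers={"Authorization": f"Bearer {token}", "Accept": "text/event-stream"},
            json=body,
            stream=True,
            timeout=900,  # the documented 15-minute ceiling
        ) as resp:
            resp.raise_for_status()
            last_notice = time.time()
            for line in resp.iter_lines(decode_unicode=True):
                payload = line[5:].strip() if line and line.startswith("data:") else ""
                if not payload:
                    # Heartbeats and empty data: lines just mean the backend is still working.
                    if time.time() - last_notice > 120:
                        print("⏳ Still working...")
                        last_notice = time.time()
                    continue
                event = json.loads(payload)
                # Tool-call events stay internal; only text content reaches the user.
                if event.get("type") == "text":
                    yield event.get("content", "")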

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
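
A sketch of turning a draft that uses those short keys into a summary like the one above; segment start times and the metadata field holding a display name are assumptions:

    TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}  # tt values listed above

    def summarize_draft(draft: dict) -> str:
        # Walk t (tracks) -> sg (segments), using d (duration in ms) and m (metadata).
        tracks = draft.get("t", [])
        lines = [f"Timeline ({len(tracks)} tracks):"]
        for index, track in enumerate(tracks, start=1):
            kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
            for segment in track.get("sg", []):
                seconds = segment.get("d", 0) / 1000
                name = segment.get("m", {}).get("name", "untitled")  # assumed metadata field
                lines.append(f"  {index}. {kind}: {name} (0-{seconds:g}s)")
        return "\n".join(lines)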

Common Workflows

Quick edit: Upload → "animate these images into a smooth video with transitions" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "animate these images into a smooth video with transitions" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, GIF for the smoothest experience.

Export as MP4 for widest compatibility.
