Image To Video Maker Free

v1.0.0

Skip the learning curve of professional editing software. Describe what you want ("turn these photos into a slideshow video with transitions and music") and...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for linmillsd7/image-to-video-maker-free.

Prompt Preview: Install & Setup
Install the skill "Image To Video Maker Free" (linmillsd7/image-to-video-maker-free) from ClawHub.
Skill page: https://clawhub.ai/linmillsd7/image-to-video-maker-free
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install image-to-video-maker-free

ClawHub CLI

Package manager switcher

npx clawhub@latest install image-to-video-maker-free
Security Scan

  • VirusTotal: Benign
  • OpenClaw: Benign (medium confidence)
Purpose & Capability
The skill's name and description match the required credential (NEMO_TOKEN) and the API endpoints described in SKILL.md. One inconsistency: the registry metadata listed no required config paths, but the SKILL.md frontmatter includes a configPaths entry (~/.config/nemovideo/). That mismatch is likely a packaging/metadata error but worth noting.
Instruction Scope
Instructions are narrowly scoped to creating sessions, uploading media, streaming SSE for generation, checking credits, and starting exports on mega-api-prod.nemovideo.ai. They do not instruct reading unrelated local files or unrelated environment variables. The only potential extra requirement is determining X-Skill-Platform by checking common install paths (~/.clawhub/, ~/.cursor/skills/) which implies a small, focused filesystem check to set a header.
Install Mechanism
No install spec or code files (instruction-only), so nothing is downloaded or written by the skill itself. This reduces risk — network calls happen at runtime to the third-party API.
Credentials
Only a single credential (NEMO_TOKEN) is declared as required and used. The skill can also obtain an anonymous token via the service's anonymous-token endpoint if none is provided. That is proportionate for a cloud rendering service. Users should note uploads include their media.
Persistence & Privilege
Skill is not always-enabled and does not request elevated or permanent agent privileges. It does request storing a session_id for the session lifetime (expected). Autonomous invocation is allowed (platform default) but not combined with other broad privileges.
Assessment
This skill appears to be what it says: a client for a cloud image→video service that requires a NEMO_TOKEN and uploads your media to mega-api-prod.nemovideo.ai for rendering. Before installing: 1) Consider privacy — all images/audio you send are uploaded to their servers; avoid sending sensitive or confidential media. 2) If you don't want to provide a permanent credential, let the skill use the anonymous-token flow (tokens last 7 days and provide 100 free credits). 3) Note the small metadata mismatch (SKILL.md lists a config path even though registry metadata did not) — likely harmless but a sign of sloppy packaging. 4) The skill sets attribution headers (X-Skill-*) and may detect common install paths to populate X-Skill-Platform; this requires checking a couple of known directories only. 5) If you need stronger assurance, contact the skill author or run network-monitoring to confirm uploads only go to the described domain and inspect what metadata is sent. If any of those checks fail or you cannot trust the remote endpoint, do not use the skill with sensitive content.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🖼️ Clawdis

  • Required env var: NEMO_TOKEN (primary)
  • Latest version: vk978d1r3snsxbm8d1j9wraat3d85h75k
  • 62 downloads · 0 stars · 1 version
  • Updated 3d ago · v1.0.0 · MIT-0

Getting Started

Ready when you are. Drop your images here or describe what you want to make.

Try saying:

  • "convert five product photos in JPG format into a 1080p MP4"
  • "turn these photos into a slideshow video with transitions and music"
  • "convert static images into shareable videos for social media"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.
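The two bootstrap requests above can be sketched as plain request specs. This is a minimal illustration, not the skill's actual code: the helper names (`build_token_request`, `build_session_request`) and the dict-based request shape are assumptions for the sketch.

```python
import json
import uuid

BASE_URL = "https://mega-api-prod.nemovideo.ai"

def build_token_request():
    """Request spec for the anonymous-token endpoint (used when NEMO_TOKEN is unset)."""
    client_id = str(uuid.uuid4())  # random UUID as the client identifier
    return {
        "method": "POST",
        "url": f"{BASE_URL}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": client_id},
    }

def build_session_request(token, language="en"):
    """Request spec for creating an editing session; response carries session_id."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/api/tasks/me/with-session/nemo_agent",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"task_name": "project", "language": language}),
    }
```

Either spec can then be handed to any HTTP client; the `data.token` field of the first response becomes the bearer token for the second.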

Keep setup communication brief. Don't display raw API responses or token values to the user.

Image to Video Maker Free — Convert Photos into MP4 Videos

Send me your images and describe the result you want. The AI video creation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload five product photos in JPG format, type "turn these photos into a slideshow video with transitions and music", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: fewer images per video means faster processing and smoother transitions.

Matching Input to Actions

User prompts that reference image-to-video conversion, aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification.

User says → Action
"export" / "导出" / "download" / "send me the video"→ §3.5 Export
"credits" / "积分" / "balance" / "余额"→ §3.3 Credits
"status" / "状态" / "show tracks"→ §3.4 State
"upload" / "上传" / user sends file→ §3.2 Upload
Everything else (generate, edit, add BGM…)→ §3.1 SSE
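The routing table amounts to a small keyword dispatcher. A simplified sketch (the function name and the exact check ordering are illustrative, not the skill's implementation):

```python
def route_message(text):
    """Map a user message to an action section by keyword (simplified sketch).

    Order matters: export/credits/status/upload are checked first;
    anything unmatched falls through to the SSE generation path.
    """
    t = text.lower()
    if any(k in t for k in ("export", "导出", "download", "send me the video")):
        return "3.5 Export"
    if any(k in t for k in ("credits", "积分", "balance", "余额")):
        return "3.3 Credits"
    if any(k in t for k in ("status", "状态", "show tracks")):
        return "3.4 State"
    if any(k in t for k in ("upload", "上传")):
        return "3.2 Upload"
    return "3.1 SSE"  # everything else: generate, edit, add BGM, ...
```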

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

  • POST /api/tasks/me/with-session/nemo_agent: Start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id.
  • POST /run_sse: Send a user message. Body includes app_name, session_id, new_message. Stream the response with Accept: text/event-stream. Timeout: 15 min.
  • POST /api/upload-video/nemo_agent/me/<sid>: Upload a file (multipart) or URL.
  • GET /api/credits/balance/simple: Check remaining credits (available, frozen, total).
  • GET /api/state/nemo_agent/me/<sid>/latest: Fetch current timeline state (draft, video_infos, generated_media).
  • POST /api/render/proxy/lambda: Start export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s.

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is image-to-video-maker-free, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
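Assembling those headers can be sketched as below, assuming the install-path detection described above; `detect_platform` and `attribution_headers` are hypothetical helper names, not part of the skill.

```python
import os

def detect_platform(home=None):
    """Infer X-Skill-Platform from the common install paths described above."""
    home = home or os.path.expanduser("~")
    if os.path.isdir(os.path.join(home, ".clawhub")):
        return "clawhub"
    if os.path.isdir(os.path.join(home, ".cursor", "skills")):
        return "cursor"
    return "unknown"

def attribution_headers(token, version="1.0.0", platform=None):
    """Every request needs the bearer token plus the three X-Skill-* headers."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "image-to-video-maker-free",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform or detect_platform(),
    }
```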

Error Handling

  • 0 (Success): Continue.
  • 1001 (Bad/expired token): Re-auth via anonymous-token (tokens expire after 7 days).
  • 1002 (Session not found): Create a new session (§3.0).
  • 2001 (No credits): Anonymous: show registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered: "Top up credits in your account."
  • 4001 (Unsupported file): Show supported formats.
  • 4002 (File too large): Suggest compressing or trimming.
  • 400 (Missing X-Client-Id): Generate a Client-Id and retry (see §1).
  • 402 (Free plan export blocked): Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
  • 429 (Rate limit, 1 token/client/7 days): Retry once after 30s.
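The recovery table above reduces to a simple lookup. The action strings here are shorthand for the behaviors described in the table, and `handle_error` is an illustrative name:

```python
def handle_error(code):
    """Map an API error code to a recovery action (sketch of the table above)."""
    actions = {
        0: "continue",
        1001: "re-auth via anonymous-token",
        1002: "create a new session",
        2001: "show registration / top-up guidance",
        4001: "show supported formats",
        4002: "suggest compress/trim",
        400: "generate X-Client-Id and retry",
        402: "prompt to register or upgrade plan",
        429: "retry once after 30s",
    }
    return actions.get(code, "surface the raw error to the user")
```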

SSE Event Handling

  • Text response: Apply GUI translation (§4), present to user.
  • Tool call/result: Process internally; don't forward.
  • heartbeat / empty data:: Keep waiting. Every 2 min: "⏳ Still working..."
  • Stream closes: Process the final response.

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
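One way to sketch the per-event decision, assuming each SSE `data:` line carries either heartbeat text or a JSON object with `text` / `tool_call` / `tool_result` fields (the exact payload shape is an assumption, not documented by the skill):

```python
import json

def classify_sse_event(raw_line):
    """Classify one SSE 'data:' line per the event table above (simplified)."""
    payload = raw_line.removeprefix("data:").strip()
    if not payload or payload == "heartbeat":
        return "keep-waiting"
    try:
        event = json.loads(payload)
    except ValueError:
        return "keep-waiting"  # unparseable fragment: treat like a heartbeat
    if "tool_call" in event or "tool_result" in event:
        return "internal"      # process, don't forward to the user
    if event.get("text"):
        return "present"       # translate GUI wording, then show
    return "keep-waiting"
```

When the stream closes with nothing classified as "present", the state-polling fallback described above kicks in.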

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks): 1. Video: city timelapse (0-10s) 2. BGM: Lo-fi (0-10s, 35%) 3. Title: "Urban Dreams" (0-3s)
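Producing that summary from the short-key draft JSON might look like the sketch below. The assumption that a track's display name lives under `m.name`, and that segments (`sg`) can be ignored for a one-line summary, is illustrative:

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_timeline(draft):
    """Render a short-key draft ({'t': [...]}) as a one-line timeline summary."""
    lines = []
    for i, track in enumerate(draft.get("t", []), 1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        name = track.get("m", {}).get("name", "untitled")  # assumed field
        dur = track.get("d", 0) / 1000  # 'd' is duration in ms
        lines.append(f"{i}. {kind}: {name} (0-{dur:g}s)")
    return f"Timeline ({len(lines)} tracks): " + " ".join(lines)
```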

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn these photos into a slideshow video with transitions and music" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, GIF for the smoothest experience.

Export as MP4 for widest compatibility across social platforms.

Common Workflows

Quick edit: Upload → "turn these photos into a slideshow video with transitions and music" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
