Free Photo To Video Ai

v1.0.0

Skip the learning curve of professional editing software. Describe what you want — turn these photos into a video with transitions and background music — and...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for francemichaell-15/free-photo-to-video-ai.

Prompt Preview: Install & Setup
Install the skill "Free Photo To Video Ai" (francemichaell-15/free-photo-to-video-ai) from ClawHub.
Skill page: https://clawhub.ai/francemichaell-15/free-photo-to-video-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install free-photo-to-video-ai

ClawHub CLI


npx clawhub@latest install free-photo-to-video-ai
Security Scan

  • VirusTotal: Benign
  • OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description match observed behavior: the SKILL.md instructs uploading images and calling a remote render API (mega-api-prod.nemovideo.ai) to produce videos. Requiring a NEMO_TOKEN for remote API access is proportionate to the stated purpose.
Instruction Scope
Instructions include normal actions for this purpose: creating an anonymous token, creating/using a session, uploading files (multipart or URL), streaming SSE events, polling render status, and returning download URLs. They explicitly tell the agent to read local file paths for uploads and to detect an install path (~/.clawhub or ~/.cursor) to populate an X-Skill-Platform header — reading user-supplied image files is expected, but automatic inspection of home-directory install paths is an extra side-effect to be aware of.
Install Mechanism
No install spec and no code files are present; this is instruction-only so nothing is written to disk by an installer. That minimizes install risk.
Credentials
The only declared required credential is NEMO_TOKEN (primary), which is appropriate for a third-party rendering API. Two issues to note: (1) SKILL.md frontmatter lists a config path (~/.config/nemovideo/) while the registry metadata reported no required config paths — this mismatch is incoherent and should be clarified; (2) the skill auto-generates an anonymous token via an API and instructs storing session_id/token for subsequent requests but does not specify where to persist them. Ask where tokens/session IDs will be stored (in-memory only, a skill-specific config dir, or written to env/config files).
Persistence & Privilege
always is false and the skill does not request autonomous 'always-on' presence beyond normal model invocation. It does request that the agent store a session_id and use NEMO_TOKEN for API calls, which is standard for this functionality.
Assessment
This skill appears to do what it claims (upload images to a cloud renderer and return a video) and only requests a single API credential (NEMO_TOKEN). Before installing:

  1. Confirm where NEMO_TOKEN and session_id will be stored (memory vs written to ~/.config/nemovideo/ or environment) and whether the skill will create files in your home directory.
  2. Be aware that images you upload are sent to mega-api-prod.nemovideo.ai; do not upload sensitive photos you don't want sent to a third-party service.
  3. Ask the publisher to resolve the metadata mismatch (registry says no configPaths; SKILL.md lists ~/.config/nemovideo/).
  4. If you prefer explicit consent, require the skill to prompt before auto-creating tokens and before uploading any files.

If these points are acceptable or clarified, the skill is internally coherent for its stated purpose.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🖼️ Clawdis

  • Required env: NEMO_TOKEN (primary)
  • Latest: v1.0.0 (vk977d3cmxk55tvqg4077kzvqsx84pxtk)
  • 97 downloads · 0 stars · 1 version
  • Updated 2w ago
  • License: MIT-0

Getting Started

Got photos or images to work with? Send them over and tell me what you need — I'll take care of the AI video creation.

Try saying:

  • "convert five vacation photos in JPG format into a 1080p MP4"
  • "turn these photos into a video with transitions and background music"
  • "turn this photo collection into a shareable video for social media"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.
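The two setup requests above can be sketched in Python with the standard library. This is a minimal sketch of the request construction only; `build_token_request` and `build_session_request` are hypothetical helper names, and the endpoints and body fields are taken from the steps above:

```python
import json
import uuid
from urllib import request

API_BASE = "https://mega-api-prod.nemovideo.ai"

def build_token_request() -> request.Request:
    """Step 1: request an anonymous token with a fresh X-Client-Id UUID."""
    client_id = str(uuid.uuid4())
    return request.Request(
        f"{API_BASE}/api/auth/anonymous-token",
        data=b"",  # empty POST body
        method="POST",
        headers={"X-Client-Id": client_id},
    )

def build_session_request(token: str, language: str = "en") -> request.Request:
    """Step 2: create a session; the response carries session_id."""
    body = json.dumps({"task_name": "project", "language": language}).encode()
    return request.Request(
        f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```

To actually send a request, pass it to `urllib.request.urlopen(...)` and read `data.token` (step 1) or `session_id` (step 2) from the JSON response.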

Keep setup communication brief. Don't display raw API responses or token values to the user.

Free Photo to Video AI — Convert Photos Into Shareable Videos

Send me your photos or images and describe the result you want. The AI video creation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload five vacation photos in JPG format, type "turn these photos into a video with transitions and background music", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: using 5-10 photos gives the best pacing for short social videos.

Matching Input to Actions

User prompts that mention photo-to-video conversion, aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification.

User says... → Action

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
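The routing rules above can be sketched as a first-match keyword dispatcher. This is a simplified sketch, assuming substring matching is enough; the `route` helper and the section labels are illustrative, not part of the API:

```python
# Each entry: (trigger keywords, action section). First match wins.
ROUTES = [
    ({"export", "download", "导出"}, "§3.5 Export"),
    ({"credits", "balance", "积分", "余额"}, "§3.3 Credits"),
    ({"status", "状态", "show tracks"}, "§3.4 State"),
    ({"upload", "上传"}, "§3.2 Upload"),
]

def route(message: str) -> str:
    """Match user text against keyword sets; default to SSE generation."""
    text = message.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "§3.1 SSE"  # generate, edit, add BGM, etc.
```

A real classifier would also handle multi-word phrases like "send me the video" and intent beyond keywords; this only illustrates the dispatch order.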

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is free-photo-to-video-ai, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
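The header derivation described above can be sketched as follows. The path-matching rule is the one stated in the text; `detect_platform` and `attribution_headers` are hypothetical helper names:

```python
from pathlib import Path

def detect_platform(skill_path: str) -> str:
    """~/.clawhub/ -> clawhub, ~/.cursor/skills/ -> cursor, else unknown."""
    p = str(Path(skill_path).expanduser())
    if "/.clawhub/" in p:
        return "clawhub"
    if "/.cursor/skills/" in p:
        return "cursor"
    return "unknown"

def attribution_headers(token: str, version: str, skill_path: str) -> dict:
    """All four headers required on every request."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "free-photo-to-video-ai",
        "X-Skill-Version": version,
        "X-Skill-Platform": detect_platform(skill_path),
    }
```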

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}
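A small sketch of the two upload variants, assuming the endpoint shape shown above; `upload_endpoint` and `url_upload_body` are hypothetical helper names:

```python
import json

API_BASE = "https://mega-api-prod.nemovideo.ai"

def upload_endpoint(session_id: str) -> str:
    # Both variants (multipart file or URL list) POST to the same path.
    return f"{API_BASE}/api/upload-video/nemo_agent/me/{session_id}"

def url_upload_body(urls: list[str]) -> bytes:
    # URL variant: a JSON body instead of multipart form parts.
    return json.dumps({"urls": urls, "source_type": "url"}).encode()
```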

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
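The export polling loop can be sketched like this. It is a minimal sketch: `get_status` stands in for whatever performs one GET of /api/render/proxy/lambda/<id> and returns the parsed JSON, and the 15-minute cap mirrors the SSE timeout above:

```python
import time

def poll_export(get_status, render_id: str, interval: int = 30,
                max_wait: int = 900) -> str:
    """Poll until status == completed, then return the download URL."""
    waited = 0
    while waited <= max_wait:
        job = get_status(render_id)
        if job.get("status") == "completed":
            return job["output"]["url"]
        time.sleep(interval)
        waited += interval
    raise TimeoutError(f"render {render_id} did not complete in {max_wait}s")
```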

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty "data:" lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
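A minimal sketch of reading the stream, assuming standard SSE framing (events separated by blank lines, payload on "data:" lines); heartbeats surface as None so the caller can show the "still working" indicator instead of text:

```python
def iter_sse_events(lines):
    """Yield each SSE event's data payload; empty-data heartbeat
    events come through as None."""
    buf = []
    for raw in lines:
        line = raw.rstrip("\r\n")
        if line.startswith("data:"):
            buf.append(line[5:].strip())
        elif line == "" and buf:
            data = "\n".join(buf)
            buf = []
            yield data if data else None
```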

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

  • "click [button]" / "点击" → Execute via API
  • "open [panel]" / "打开" → Query session state
  • "drag/drop" / "拖拽" → Send edit via SSE
  • "preview in timeline" → Show track summary
  • "Export button" / "导出" → Execute export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Example timeline (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
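The draft field mapping can be sketched as a small decoder. The key names (t/tt/sg/d/m) come from the mapping above; the per-segment shape (a "name" inside metadata) is an assumption for illustration, as is the `summarize_draft` helper:

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}

def summarize_draft(draft: dict) -> list[str]:
    """Expand compact keys: t=tracks, tt=track type, sg=segments,
    d=duration in ms, m=metadata (segment shape assumed here)."""
    lines = []
    for track in draft.get("t", []):
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")
        for seg in track.get("sg", []):
            name = seg.get("m", {}).get("name", "?")
            lines.append(f"{kind}: {name} ({seg.get('d', 0) / 1000:g}s)")
    return lines
```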

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
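The list above can be collapsed into a lookup table for error handling. This is a sketch; the action strings paraphrase the list, and `recovery_action` is a hypothetical helper name:

```python
# Backend error code -> recovery action (paraphrased from the list above).
RECOVERY = {
    0: "continue",
    1001: "re-acquire token via /api/auth/anonymous-token",
    1002: "create a new session",
    2001: "out of credits: offer registration or top-up",
    4001: "show accepted formats",
    4002: "suggest compressing or trimming",
    400: "generate an X-Client-Id and retry",
    402: "export blocked by subscription tier, not credits",
    429: "rate limited: wait 30s and retry once",
}

def recovery_action(code: int) -> str:
    return RECOVERY.get(code, "unknown error code")
```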

Common Workflows

Quick edit: Upload → "turn these photos into a video with transitions and background music" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn these photos into a video with transitions and background music" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, HEIC for the smoothest experience.

Export as MP4 for widest compatibility.
