Deepfake AI Image to Video

v1.0.0

Get animated face videos ready to post, without touching a single slider. Upload your static images (JPG, PNG, WEBP, HEIC, up to 200MB), say something like "...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for tk8544-b/deepfake-ai-image-to-video.

Prompt Preview: Install & Setup
Install the skill "Deepfake Ai Image To Video" (tk8544-b/deepfake-ai-image-to-video) from ClawHub.
Skill page: https://clawhub.ai/tk8544-b/deepfake-ai-image-to-video
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install deepfake-ai-image-to-video

ClawHub CLI


npx clawhub@latest install deepfake-ai-image-to-video
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The required primary credential (NEMO_TOKEN) and the described HTTPS endpoints align with the skill's stated purpose (cloud-based deepfake/video rendering). However, the metadata declares a config path (~/.config/nemovideo/) even though the SKILL.md never instructs reading that path; this is an unnecessary or unused declaration and should be clarified.
Instruction Scope
Runtime instructions are limited to interacting with the nemovideo API (auth, session, upload, SSE, export) and handling user-provided media. The instructions do not ask the agent to read unrelated local files, other environment variables, or send data to unexpected endpoints. One small note: the instructions say X-Skill-Platform is 'detected from the install path', which implies inspecting the agent's install path — the SKILL.md doesn't explain how or why this is needed.
Install Mechanism
No install spec or code is provided (instruction-only), so nothing is downloaded or written to disk by the skill itself.
Credentials
Only a single credential (NEMO_TOKEN) is required, which is proportionate to a cloud API integration. The metadata's configPaths entry suggests the skill might access a local config directory, but the runtime instructions rely on the environment variable instead — this mismatch should be clarified so users know whether the skill will read local config files.
Persistence & Privilege
The skill sets always:false and does not request elevated or persistent system presence. Autonomous agent invocation is allowed (the platform default), but the skill does not request broader privileges or modifications to other skills or configs.
Assessment
This skill appears to be what it says: an instruction-only wrapper that uploads user images to a nemo-video cloud API to create deepfake-style videos. Before installing, consider: (1) Privacy & consent — anything you upload goes to an external service (mega-api-prod.nemovideo.ai) and may be stored or processed; only upload images you have the right to use. (2) Token handling — the skill needs a NEMO_TOKEN; prefer using a scoped or throwaway token rather than a long-lived personal credential. (3) Metadata mismatch — the skill lists a local config path (~/.config/nemovideo/) though the instructions don't read it; ask the author whether the skill will access that directory. (4) Verify the service — confirm the domain and operator of the nemovideo API and review their terms/privacy before sending sensitive material. If you need stricter guarantees, do not provide sensitive credentials and avoid uploading private or identifying images.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎭 Clawdis

  • Env: NEMO_TOKEN (primary)
  • Latest version hash: vk97107y9r5twj366hmsjrtc7xd84w5pr
  • 68 downloads · 0 stars · 1 version
  • Updated 1 week ago
  • v1.0.0, MIT-0 license

Getting Started

Share your static images and I'll get started on AI deepfake video generation. Or just tell me what you're thinking.

Try saying:

  • "convert my static images"
  • "export 1080p MP4"
  • "animate this photo into a realistic"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as a client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
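
A minimal Python sketch of this setup flow, assuming the endpoints behave as described above. The requests-based client is illustrative; the data.token path is documented, but a top-level session_id in the session response is an assumption.

    import os
    import uuid
    import requests

    BASE = "https://mega-api-prod.nemovideo.ai"

    def get_token() -> str:
        """Use NEMO_TOKEN if set; otherwise mint a free anonymous token."""
        token = os.environ.get("NEMO_TOKEN")
        if token:
            return token
        resp = requests.post(
            f"{BASE}/api/auth/anonymous-token",
            headers={"X-Client-Id": str(uuid.uuid4())},  # fresh client UUID
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["data"]["token"]  # 100 credits, 7-day expiry

    def open_session(token: str) -> str:
        """Create a project session; returns the session_id."""
        resp = requests.post(
            f"{BASE}/api/tasks/me/with-session/nemo_agent",
            headers={"Authorization": f"Bearer {token}"},
            json={"task_name": "project"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["session_id"]  # exact response shape assumed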

Deepfake AI Image to Video — Animate Photos Into Talking Videos

This tool takes your static images and runs AI deepfake video generation through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a single portrait photo of a person and want to animate this photo into a realistic talking head video — the backend processes it in about 1-2 minutes and hands you a 1080p MP4.

Tip: use a front-facing, well-lit photo for the most realistic face animation results.

Matching Input to Actions

User prompts referencing deepfake AI image-to-video conversion, aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification. The first four routes below skip the SSE chat and call their endpoints directly; everything else goes through SSE (a minimal routing sketch follows the list).

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
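
As a rough illustration of the keyword half of this routing, here is a sketch; a real router would layer intent classification on top of plain substring matching.

    # Keyword half of the routing list above.
    ROUTES = {
        "export": ("export", "导出", "download", "send me the video"),
        "credits": ("credits", "积分", "balance", "余额"),
        "state": ("status", "状态", "show tracks"),
        "upload": ("upload", "上传"),
    }

    def route(message: str) -> str:
        """Return the action name for a user message."""
        text = message.lower()
        for action, keywords in ROUTES.items():
            if any(k in text for k in keywords):
                return action
        return "sse"  # everything else: generate, edit, add BGM, ...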

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-specific compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> with a multipart file or JSON with URLs.
  4. Credits: GET /api/credits/balance/simple. Returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest. Current draft and media info.
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL.
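
To make the export step (6) concrete, here is a rough polling sketch building on the setup helpers above. Only the endpoints and the 30-second poll interval come from the list; the request payload shape and the status/download_url field names are assumptions.

    import time
    import requests

    def export_video(render_id: str, draft: dict, headers: dict) -> str:
        """Start an export, then poll every 30s until it completes."""
        requests.post(
            f"{BASE}/api/render/proxy/lambda",
            headers=headers,
            json={"render_id": render_id, "draft": draft},  # payload shape assumed
            timeout=30,
        ).raise_for_status()
        for _ in range(20):  # renders usually finish in 30-90s
            time.sleep(30)
            job = requests.get(
                f"{BASE}/api/render/proxy/lambda/{render_id}",
                headers=headers, timeout=30,
            ).json()
            if job.get("status") == "completed":  # field names assumed
                return job["download_url"]
        raise TimeoutError("render did not complete")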

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is deepfake-ai-image-to-video, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
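
One way to assemble those headers, approximating the install-path detection with a directory check; in practice the version string would come from the parsed frontmatter rather than a hardcoded default.

    import os

    def attribution_headers(token: str, version: str = "1.0.0") -> dict:
        """Authorization plus the three X-Skill-* attribution headers."""
        home = os.path.expanduser("~")
        if os.path.isdir(os.path.join(home, ".clawhub")):
            platform = "clawhub"
        elif os.path.isdir(os.path.join(home, ".cursor", "skills")):
            platform = "cursor"
        else:
            platform = "unknown"
        return {
            "Authorization": f"Bearer {token}",
            "X-Skill-Source": "deepfake-ai-image-to-video",
            "X-Skill-Version": version,
            "X-Skill-Platform": platform,
        }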

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks): 1. Video: city timelapse (0-10s) 2. BGM: Lo-fi (0-10s, 35%) 3. Title: "Urban Dreams" (0-3s)
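
A sketch of how those short keys could be walked to produce a summary like the one above; the segment name field under m is an assumption, and start times are simplified to zero.

    TRACK_NAMES = {0: "Video", 1: "Audio", 7: "Title"}  # tt values from above

    def summarize(draft: dict) -> str:
        """Render short-key draft JSON as a one-line timeline summary."""
        tracks = draft.get("t", [])                           # t = tracks
        parts = []
        for i, track in enumerate(tracks, start=1):
            kind = TRACK_NAMES.get(track.get("tt"), "Track")  # tt = track type
            for seg in track.get("sg", []):                   # sg = segments
                secs = seg.get("d", 0) / 1000                 # d = duration in ms
                name = seg.get("m", {}).get("name", "?")      # m = metadata; "name" assumed
                parts.append(f"{i}. {kind}: {name} (0-{secs:g}s)")
        return f"Timeline ({len(tracks)} tracks): " + " ".join(parts)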

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
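
A rough read loop under those rules, using streaming requests. Event parsing is heavily simplified, and handle_event is a hypothetical callback standing in for the GUI translation step.

    import time
    import requests

    def handle_event(data: bytes) -> None:
        """Placeholder: translate and surface one text event to the user."""
        print(data.decode("utf-8", "replace"))

    def run_sse(session_id: str, text: str, headers: dict) -> bool:
        """Stream one chat turn; True if any text event was received."""
        payload = {
            "session_id": session_id,
            "new_message": {"parts": [{"text": text}]},
        }
        got_text, last_note = False, time.monotonic()
        with requests.post(
            f"{BASE}/run_sse", json=payload, stream=True, timeout=900,
            headers={**headers, "Accept": "text/event-stream"},
        ) as resp:
            resp.raise_for_status()
            for raw in resp.iter_lines():
                if not raw or raw == b"data:":        # heartbeat / empty data:
                    if time.monotonic() - last_note > 120:
                        print("⏳ Still working...")   # every 2 minutes
                        last_note = time.monotonic()
                    continue
                if raw.startswith(b"data:"):
                    got_text = True
                    handle_event(raw[5:].strip())
        return got_text  # False ~30% of the time: poll /api/state instead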

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
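
A dispatch sketch over those codes; the callables are placeholders for the recovery flows described above.

    import time

    def handle_error(code: int, retry, reauth, new_session) -> None:
        """Map backend error codes to the recovery actions listed above."""
        if code == 0:
            return                  # success, continue normally
        if code == 1001:            # token expired or invalid
            reauth()
            retry()
        elif code == 1002:          # session not found
            new_session()
            retry()
        elif code == 429:           # rate limited
            time.sleep(30)
            retry()                 # retry once
        elif code == 2001:
            raise RuntimeError("Out of credits; register or top up")
        elif code == 402:
            raise RuntimeError("Free-plan export block (subscription tier, not credits)")
        elif code in (4001, 4002):
            raise ValueError("Unsupported file type or file too large")
        else:
            raise RuntimeError(f"Unhandled backend error {code}")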

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "animate this photo into a realistic talking head video" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, HEIC for the smoothest experience.

Export as MP4 with H.264 codec for the best compatibility across platforms.

Common Workflows

Quick edit: Upload → "animate this photo into a realistic talking head video" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
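
Stitched together from the sketches above, the quick-edit workflow might look roughly like this. The multipart field name and the location of the render ID and draft in the state response are assumptions.

    import requests

    def quick_edit(image_path: str) -> str:
        """Upload one photo, animate it, and return the download URL."""
        token = get_token()
        headers = attribution_headers(token)
        sid = open_session(token)
        with open(image_path, "rb") as f:            # multipart upload
            requests.post(
                f"{BASE}/api/upload-video/nemo_agent/me/{sid}",
                headers=headers, files={"file": f}, timeout=120,
            ).raise_for_status()
        run_sse(sid, "animate this photo into a realistic talking head video",
                headers)
        state = requests.get(
            f"{BASE}/api/state/nemo_agent/me/{sid}/latest",
            headers=headers, timeout=30,
        ).json()
        return export_video(state["render_id"], state["draft"], headers)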
