Image To Video Deevid

v1.0.0

Skip the learning curve of professional editing software. Describe what you want, such as "turn this image into a 5-second animated video clip", and get an animated video back.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mhogan2013-9/image-to-video-deevid.

Prompt Preview: Install & Setup
Install the skill "Image To Video Deevid" (mhogan2013-9/image-to-video-deevid) from ClawHub.
Skill page: https://clawhub.ai/mhogan2013-9/image-to-video-deevid
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install image-to-video-deevid

ClawHub CLI


npx clawhub@latest install image-to-video-deevid
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description: image-to-video. Declared primary credential: NEMO_TOKEN. SKILL.md exclusively describes calls to a single remote rendering API (mega-api-prod.nemovideo.ai) for uploads, sessions, SSE, and rendering. Requiring an API token and a config path for a video service is consistent with the stated purpose.
Instruction Scope
The instructions tell the agent to obtain/use a NEMO_TOKEN, create sessions, upload files, stream SSE, poll render status, and return download URLs — all expected for a cloud-rendering image→video service. They require adding specific attribution headers and auto-detecting platform/install path for X-Skill-Platform, which implies the agent may inspect its install path. The skill does not instruct reading unrelated system files or other environment variables. Minor note: metadata lists a config path (~/.config/nemovideo/) but the runtime steps do not clearly specify reading/writing that path.
Install Mechanism
No install spec and no code files (instruction-only). This minimizes on-disk persistent changes and is proportionate for a simple API-integration skill.
Credentials
Only a single credential (NEMO_TOKEN) is required and used for Bearer authorization with the described service. That is appropriate for this functionality. The metadata mentions a config path; the instructions do not require other secrets or unrelated credentials.
Persistence & Privilege
The always flag is false, and the skill does not ask to modify other skills or system-wide agent settings. It instructs saving session_id for the session lifecycle, which is normal and scoped to the service.
Assessment
This skill appears to be what it says: it sends images and commands to a remote rendering service and requires a single NEMO_TOKEN. Before installing, consider:

  • Privacy: your images are uploaded to mega-api-prod.nemovideo.ai; avoid sending sensitive or private images unless you trust the service and have read its terms.
  • Token scope: provide a token for this service only; avoid reusing high-privilege or long-lived credentials from other systems.
  • Traceability: the skill lists no homepage or source repo; if you need more assurance, ask the publisher for docs or a privacy policy.
  • Storage: the skill may persist session identifiers for job tracking; confirm retention policies if that matters.

If you're uncomfortable setting a long-lived NEMO_TOKEN, use the anonymous-token flow instead (short-lived, limited credits).

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN (primary)
Latest version: vk97fgaxy9nzwh7w5sa41swhaks84yj4y
66 downloads · 0 stars · 1 version
Updated 1 week ago
v1.0.0 · MIT-0

Getting Started

Share your static images and I'll get started on AI video creation. Or just tell me what you're thinking.

Try saying:

  • "convert my static images"
  • "export 1080p MP4"
  • "turn this image into a 5-second"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If the NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as the client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).
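For reference, a minimal shell sketch of this bootstrap, assuming uuidgen and jq are available; the data.token path comes from the description above, the variable names are illustrative:

# Generate a client id and trade it for an anonymous token (100 credits, 7-day expiry)
CLIENT_ID=$(uuidgen)
NEMO_TOKEN=$(curl -s -X POST \
  -H "X-Client-Id: ${CLIENT_ID}" \
  https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token \
  | jq -r '.data.token')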

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.
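A matching sketch for session creation; reading session_id as a top-level response field is an assumption, since the docs only say to save it from the response:

# Create an editing session and capture its id (top-level session_id field assumed)
SESSION_ID=$(curl -s -X POST \
  -H "Authorization: Bearer ${NEMO_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"task_name":"project"}' \
  https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent \
  | jq -r '.session_id')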

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.

Image to Video - Deevid — Convert Images into Video Clips

Drop your static images in the chat and tell me what you need. I'll handle the AI video creation on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send a single product photo or landscape image, ask to "turn this image into a 5-second animated video clip", and about 30-60 seconds later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — high-contrast images with clear subjects produce the smoothest motion output.

Matching Input to Actions

User prompts that reference "image to video deevid", aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification, as sketched after the list below.

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export (skips SSE)
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits (skips SSE)
  • "status" / "状态" / "show tracks" → §3.4 State (skips SSE)
  • "upload" / "上传" / user sends file → §3.2 Upload (skips SSE)
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
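As a rough illustration of the routing above, a hypothetical shell sketch; all handler names are placeholders, and the real skill relies on the agent's intent classification rather than literal pattern matching:

# Hypothetical keyword router; the handler functions do not exist in the skill itself
route_request() {
  case "$1" in
    *export*|*导出*|*download*)         run_export ;;       # §3.5, direct REST call
    *credits*|*积分*|*balance*|*余额*)  check_credits ;;    # §3.3
    *status*|*状态*|*show\ tracks*)     fetch_state ;;      # §3.4
    *upload*|*上传*)                    upload_file ;;      # §3.2
    *)                                  send_via_sse "$1" ;;  # §3.1, everything else
  esac
}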

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites the video layers, applies platform-specific compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so ending the session before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

  • POST /api/tasks/me/with-session/nemo_agent: Start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id.
  • POST /run_sse: Send a user message. Body includes app_name, session_id, new_message. Stream the response with Accept: text/event-stream. Timeout: 15 min.
  • POST /api/upload-video/nemo_agent/me/<sid>: Upload a file (multipart) or URL.
  • GET /api/credits/balance/simple: Check remaining credits (available, frozen, total).
  • GET /api/state/nemo_agent/me/<sid>/latest: Fetch the current timeline state (draft, video_infos, generated_media).
  • POST /api/render/proxy/lambda: Start an export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s.

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
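A hedged upload sketch; the multipart field name "file" is an assumption, as the docs only say the upload is multipart:

# Upload a local image into the current session (field name "file" assumed)
curl -s -X POST \
  -H "Authorization: Bearer ${NEMO_TOKEN}" \
  -F "file=@product-photo.jpg" \
  "https://mega-api-prod.nemovideo.ai/api/upload-video/nemo_agent/me/${SESSION_ID}"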

Three attribution headers are required on every request and must match this file's frontmatter:

  • X-Skill-Source: image-to-video-deevid
  • X-Skill-Version: the version declared in this file's frontmatter
  • X-Skill-Platform: auto-detected as clawhub / cursor / unknown from the install path

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
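For example, a fully attributed credits check; the version value here stands in for whatever the frontmatter declares:

# Auth plus all three attribution headers on a simple GET
curl -s \
  -H "Authorization: Bearer ${NEMO_TOKEN}" \
  -H "X-Skill-Source: image-to-video-deevid" \
  -H "X-Skill-Version: 1.0.0" \
  -H "X-Skill-Platform: clawhub" \
  https://mega-api-prod.nemovideo.ai/api/credits/balance/simple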

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>; registered users should top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free-plan export blocked; a subscription-tier restriction, not a credit issue
  • 429 — rate limited; wait 30s and retry once
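A hedged sketch of handling the retryable codes; it assumes the numeric code arrives in a top-level code field of the JSON response, which the list above implies but does not state, and fetch_anonymous_token / retry_request are hypothetical helpers:

# The "code" field location and both helper functions are assumptions
code=$(jq -r '.code' <<<"$response")
case "$code" in
  0)    ;;                                      # success, continue normally
  1001) NEMO_TOKEN=$(fetch_anonymous_token) ;;  # re-acquire the token, then retry
  429)  sleep 30; retry_request ;;              # rate limited: wait 30s, retry once
  *)    echo "API error code: $code" >&2 ;;
esac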

SSE Event Handling

  • Text response → apply GUI translation (§4), present to the user
  • Tool call/result → process internally, don't forward
  • heartbeat / empty data: → keep waiting; every 2 min show "⏳ Still working..."
  • Stream closes → process the final response

Roughly 30% of editing operations return no text in the SSE stream. When this happens, poll the session state to verify the edit was applied, then summarize the changes to the user.
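A minimal streaming sketch; the app_name value and the shape of new_message are assumptions based on the endpoint table, and curl's -N flag disables buffering so events arrive as they are sent:

# Body fields beyond those named in the endpoint table are assumptions
curl -N -X POST \
  -H "Authorization: Bearer ${NEMO_TOKEN}" \
  -H "Accept: text/event-stream" \
  -H "Content-Type: application/json" \
  -d "{\"app_name\":\"nemo_agent\",\"session_id\":\"${SESSION_ID}\",\"new_message\":\"add background music\"}" \
  https://mega-api-prod.nemovideo.ai/run_sse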

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft JSON uses short keys: t for tracks, tt for track type (0 = video, 1 = audio, 7 = text), sg for segments, d for duration in milliseconds, and m for metadata. A hedged draft sketch follows the example summary below.

Example timeline summary:

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
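As promised above, a hedged sketch of a minimal draft plus the export call; the segment-level fields inside m are invented for illustration, since only the short keys themselves are documented:

# Only t/tt/sg/d/m are documented; fields inside "m" are assumptions
cat > draft.json <<'EOF'
{
  "t": [
    {"tt": 0, "sg": [{"d": 10000, "m": {"name": "city timelapse"}}]},
    {"tt": 1, "sg": [{"d": 10000, "m": {"name": "Lo-fi BGM"}}]},
    {"tt": 7, "sg": [{"d": 3000, "m": {"text": "Urban Dreams"}}]}
  ]
}
EOF

# Attribution headers (see above) omitted for brevity; they are required in practice
curl -s -X POST \
  -H "Authorization: Bearer ${NEMO_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "{\"id\":\"render_$(date +%s)\",\"sessionId\":\"${SESSION_ID}\",\"draft\":$(cat draft.json),\"output\":{\"format\":\"mp4\",\"quality\":\"high\"}}" \
  https://mega-api-prod.nemovideo.ai/api/render/proxy/lambda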

Common Workflows

Quick edit: Upload → "turn this image into a 5-second animated video clip" → Download MP4. The round trip takes about 30-60 seconds.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn this image into a 5-second animated video clip" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, HEIC for the smoothest experience.

Export as MP4 for widest compatibility across social platforms.
