Diffusion Video

v1.0.0

Turn a still landscape photo or short reference clip into 1080p diffusion-generated video clips just by typing what you need. Whether it's generating AI animated videos from still images or from text prompts, it's aimed at content creators, AI artists, and marketers.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for tk8544-b/diffusion-video.

Prompt preview (Install & Setup):
Install the skill "Diffusion Video" (tk8544-b/diffusion-video) from ClawHub.
Skill page: https://clawhub.ai/tk8544-b/diffusion-video
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install diffusion-video

ClawHub CLI


npx clawhub@latest install diffusion-video
Security Scan
VirusTotal: Benign (View report →)
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description indicate remote video generation, and the skill requires only a single service token (NEMO_TOKEN) plus API calls to a video-rendering backend. Asking for an API token and session management is proportionate to the stated purpose.
Instruction Scope
SKILL.md contains concrete API calls for authentication, session creation, SSE streaming, uploads, and export polling — all consistent with a remote render service. Minor scope notes: the frontmatter mentions a config path (~/.config/nemovideo/) and the doc asks the agent to auto-detect the install path for X-Skill-Platform. Both imply the agent may read the runtime/install path or local config, which is not strictly necessary for basic operation and is inconsistent with the registry metadata, which listed no config paths.
Install Mechanism
Instruction-only skill with no install spec and no code files — nothing is downloaded or written to disk by an install step. Low installation risk.
Credentials
Only a single service credential (NEMO_TOKEN) is required and used in the documented API flows. The SKILL.md explicitly supports generating an anonymous token if no env var is set. No unrelated secrets or broad system credentials are requested.
Persistence & Privilege
Skill is not always-enabled and uses normal autonomous invocation. It instructs saving session_id for the rendering session (expected). The earlier-mentioned config path in frontmatter suggests it may read or write under ~/.config/nemovideo/ — that capability is not declared in the registry metadata and should be clarified if true.
Assessment
This skill sends any images/videos you provide to the external service at mega-api-prod.nemovideo.ai and uses a NEMO_TOKEN (or obtains an anonymous token) to process and return rendered videos. Before installing or using:

  • Verify you trust the nemo service and its privacy/retention policy, because uploaded media will leave your machine.
  • Prefer a personal account token over anonymous tokens if you need auditability/control.
  • Ask the skill author to clarify the config-path behavior (the SKILL.md references ~/.config/nemovideo/ and auto-detecting the install path), since that implies local file access not otherwise documented in registry metadata.
  • Don't provide other secrets or sensitive files to the skill.

If you want stronger assurances, ask the skill author to document exactly what it reads and writes locally, and to remove or explain the configPath/install-path auto-detection.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎞️ Clawdis
  • Env: NEMO_TOKEN
  • Primary env: NEMO_TOKEN
  • Latest: vk97anehy9dkcvr4qxfh1ffzftx859331
  • Downloads: 96
  • Stars: 0
  • Versions: 1
  • Updated: 6d ago
  • Version: v1.0.0
  • License: MIT-0

Getting Started

Got images or video to work with? Send it over and tell me what you need — I'll take care of the AI diffusion video generation.

Try saying:

  • "generate a still landscape photo or short reference clip into a 1080p MP4"
  • "animate this image into a 5-second diffusion video with cinematic motion"
  • "generating AI animated videos from still images or text prompts for content creators, AI artists, marketers"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
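
A minimal sketch of this setup flow in Python, assuming the requests library (any HTTP client works). Endpoint paths and field names come from this page; the exact nesting of the session response is an assumption:

```python
import os
import uuid
import requests

API_BASE = "https://mega-api-prod.nemovideo.ai"

def get_token() -> str:
    """Reuse NEMO_TOKEN if set; otherwise request a free anonymous token."""
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token
    resp = requests.post(
        f"{API_BASE}/api/auth/anonymous-token",
        headers={"X-Client-Id": str(uuid.uuid4())},  # UUID client identifier
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["token"]  # 100 credits, 7-day expiry

def create_session(token: str) -> str:
    """Create a rendering session and return its session_id."""
    resp = requests.post(
        f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        headers={"Authorization": f"Bearer {token}"},
        json={"task_name": "project"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["session_id"]  # response nesting assumed flat
```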

Diffusion Video — Generate AI Diffusion Video Clips

Send me your images or video and describe the result you want. The AI diffusion video generation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a still landscape photo or short reference clip, type "animate this image into a 5-second diffusion video with cinematic motion", and you'll get a 1080p MP4 back in roughly 1-3 minutes. All rendering happens server-side.

Worth noting: simpler, high-contrast images produce cleaner diffusion motion results.

Matching Input to Actions

User prompts referencing diffusion video, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says... | Action | Skip SSE? |
|---|---|---|
| "export" / "导出" / "download" / "send me the video" | → §3.5 Export | Yes |
| "credits" / "积分" / "balance" / "余额" | → §3.3 Credits | Yes |
| "status" / "状态" / "show tracks" | → §3.4 State | Yes |
| "upload" / "上传" / user sends file | → §3.2 Upload | Yes |
| Everything else (generate, edit, add BGM…) | → §3.1 SSE | No |

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus three attribution headers that must match this file's frontmatter. If any header is missing, exports return 402.

| Header | Value |
|---|---|
| X-Skill-Source | diffusion-video |
| X-Skill-Version | frontmatter version |
| X-Skill-Platform | auto-detect: clawhub / cursor / unknown from install path |
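
A sketch of a helper that builds these headers once for reuse across calls; the default version and platform values below are placeholders to be filled from the frontmatter and the detected install path:

```python
def auth_headers(token: str, version: str = "1.0.0",
                 platform: str = "clawhub") -> dict:
    """Bearer auth plus the three required attribution headers."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "diffusion-video",
        "X-Skill-Version": version,    # must match the frontmatter version
        "X-Skill-Platform": platform,  # clawhub / cursor / unknown
    }
```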

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.
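
As a sketch (reusing API_BASE and auth_headers() from the snippets above), the SSE stream can be consumed line by line; the structure of individual event payloads isn't specified here, so parsing is left minimal:

```python
import requests

def send_message(token: str, session_id: str, text: str):
    """POST a message and yield raw SSE `data:` payloads as they arrive."""
    body = {
        "app_name": "nemo_agent",
        "user_id": "me",
        "session_id": session_id,
        "new_message": {"parts": [{"text": text}]},
    }
    with requests.post(
        f"{API_BASE}/run_sse",
        headers={**auth_headers(token), "Accept": "text/event-stream"},
        json=body,
        stream=True,
        timeout=15 * 60,  # max timeout: 15 minutes
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines(decode_unicode=True):
            if line.startswith("data:"):
                yield line[len("data:"):].strip()
```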

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}
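
A sketch of both upload modes described above, a local file as multipart form data or a remote URL as JSON, reusing auth_headers() from earlier:

```python
import requests

def upload_file(token: str, session_id: str, path: str) -> dict:
    """Multipart upload of a local file."""
    with open(path, "rb") as f:
        resp = requests.post(
            f"{API_BASE}/api/upload-video/nemo_agent/me/{session_id}",
            headers=auth_headers(token),
            files={"files": f},
            timeout=300,
        )
    resp.raise_for_status()
    return resp.json()

def upload_url(token: str, session_id: str, url: str) -> dict:
    """Ask the backend to fetch a remote URL instead."""
    resp = requests.post(
        f"{API_BASE}/api/upload-video/nemo_agent/me/{session_id}",
        headers=auth_headers(token),
        json={"urls": [url], "source_type": "url"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()
```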

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media
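
The two read-only calls are plain GETs; a sketch, again reusing the helpers above:

```python
import requests

def get_credits(token: str) -> dict:
    """Return the credit balance: available, frozen, total."""
    resp = requests.get(f"{API_BASE}/api/credits/balance/simple",
                        headers=auth_headers(token), timeout=30)
    resp.raise_for_status()
    return resp.json()

def get_state(token: str, session_id: str) -> dict:
    """Return the latest session state (draft, video_infos, generated_media)."""
    resp = requests.get(
        f"{API_BASE}/api/state/nemo_agent/me/{session_id}/latest",
        headers=auth_headers(token), timeout=30)
    resp.raise_for_status()
    return resp.json()
```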

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
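
A sketch of the export flow: submit a render job, then poll every 30s until completion. Whether status and output sit at the top level of the poll response is an assumption, as is the 10-minute give-up bound:

```python
import time
import requests

def export_video(token: str, session_id: str, draft: dict) -> str:
    """Submit a render job and poll every 30s until it completes."""
    job_id = f"render_{int(time.time())}"
    resp = requests.post(
        f"{API_BASE}/api/render/proxy/lambda",
        headers=auth_headers(token),
        json={"id": job_id, "sessionId": session_id, "draft": draft,
              "output": {"format": "mp4", "quality": "high"}},
        timeout=60,
    )
    resp.raise_for_status()
    for _ in range(20):                      # give up after ~10 minutes
        time.sleep(30)
        job = requests.get(f"{API_BASE}/api/render/proxy/lambda/{job_id}",
                           headers=auth_headers(token), timeout=30).json()
        if job.get("status") == "completed":
            return job["output"]["url"]      # download URL
    raise TimeoutError("render job did not complete")
```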

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Error Handling

| Code | Meaning | Action |
|---|---|---|
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | New session (§3.0) |
| 2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show supported formats |
| 4002 | File too large | Suggest compress/trim |
| 400 | Missing X-Client-Id | Generate a Client-Id and retry (see §1) |
| 402 | Free plan export blocked | Subscription-tier issue, NOT credits. "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry once after 30s |
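
For reference, the same table as a lookup a client could dispatch on; the messages paraphrase the Action column, and the actual recovery calls (re-auth, new session) would hook in where noted:

```python
ERROR_ACTIONS = {
    1001: "Token bad or expired: re-auth via anonymous-token (7-day expiry)",
    1002: "Session not found: create a new session (§3.0)",
    2001: "No credits: register (anonymous) or top up (registered)",
    4001: "Unsupported file: show supported formats",
    4002: "File too large: suggest compressing or trimming",
    400:  "Missing X-Client-Id: generate one and retry (§1)",
    402:  "Free plan export blocked: register or upgrade (not a credits issue)",
    429:  "Rate limited (1 token/client/7 days): retry once after 30s",
}

def describe_error(code: int) -> str:
    """Map an API error code to the recovery guidance above."""
    if code == 0:
        return "Success"
    return ERROR_ACTIONS.get(code, f"Unknown error code: {code}")
```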

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

| Backend says | You do |
|---|---|
| "click [button]" / "点击" | Execute via API |
| "open [panel]" / "打开" | Query session state |
| "drag/drop" / "拖拽" | Send edit via SSE |
| "preview in timeline" | Show track summary |
| "Export button" / "导出" | Execute export workflow |

SSE Event Handling

| Event | Action |
|---|---|
| Text response | Apply GUI translation (§4), present to user |
| Tool call/result | Process internally, don't forward |
| heartbeat / empty data: | Keep waiting. Every 2 min: "⏳ Still working..." |
| Stream closes | Process final response |

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
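
A sketch of how that fallback could look, building on send_message() and get_state() above; the event payload shape (a JSON object with a text field) is an assumption:

```python
import json

def run_edit(token: str, session_id: str, instruction: str) -> None:
    """Drive one edit over SSE, falling back to a state poll if silent."""
    got_text = False
    for event in send_message(token, session_id, instruction):
        try:
            payload = json.loads(event)   # payload shape is an assumption
        except ValueError:
            continue                      # heartbeat / empty data: keep waiting
        if payload.get("text"):
            got_text = True
            print(payload["text"])        # after GUI-to-API translation (§4)
        # tool calls/results: handle internally, don't forward to the user
    if not got_text:
        state = get_state(token, session_id)  # verify the edit was applied
        draft = state["data"]["state"]["draft"]
        print("Edit applied; draft now has", len(draft.get("t", [])), "tracks")
```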

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
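
A sketch that expands the short draft keys into a summary like the one above. The keys t, tt, sg, d, and m are documented here; the "name" field inside metadata is an assumption for illustration:

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft: dict) -> str:
    """Expand the short draft keys into a one-line-per-track summary."""
    tracks = draft.get("t", [])                 # t = tracks
    lines = [f"Timeline ({len(tracks)} tracks):"]
    for i, track in enumerate(tracks, start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")  # tt = track type
        for seg in track.get("sg", []):         # sg = segments
            secs = seg.get("d", 0) / 1000       # d = duration in ms
            label = seg.get("m", {}).get("name", "untitled")  # m = metadata; "name" assumed
            lines.append(f"  {i}. {kind}: {label} (0-{secs:g}s)")
    return "\n".join(lines)
```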

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "animate this image into a 5-second diffusion video with cinematic motion" — concrete instructions get better results.

Max file size is 200MB. Stick to MP4, MOV, PNG, JPG for the smoothest experience.

Export as MP4 with H.264 codec for widest platform compatibility.

Common Workflows

Quick edit: Upload → "animate this image into a 5-second diffusion video with cinematic motion" → Download MP4. Takes 1-3 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
