Create Video With Ai

v1.0.0

Turn images or clips into AI-generated videos with this create-video-with-ai skill. Works with MP4, MOV, JPG, and PNG files up to 500MB. For marketers and content creators.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mory128/create-video-with-ai.

Prompt preview: Install & Setup
Install the skill "Create Video With Ai" (mory128/create-video-with-ai) from ClawHub.
Skill page: https://clawhub.ai/mory128/create-video-with-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install create-video-with-ai

ClawHub CLI


npx clawhub@latest install create-video-with-ai
Security Scan
VirusTotal: Benign (view report →)
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description (AI video generation from images/clips) matches the actions the SKILL.md instructs (session creation, SSE message streaming, upload endpoints, render/export). Requesting a single NEMO_TOKEN credential is appropriate for a hosted API.
Instruction Scope
Instructions direct the agent to create a session, stream SSE messages, upload local files via multipart, and poll render status — all expected for a remote render service. It also documents generating an anonymous token if no NEMO_TOKEN is present and deriving a few attribution headers (including detecting an install path). Those are implementation details the skill needs, but the anonymous-token flow means the agent will contact an external endpoint and may upload user files to that service.
Install Mechanism
No install spec or code files are present (instruction-only), so nothing is downloaded or written to disk by an installer. This minimizes supply-chain risk.
Credentials
Only NEMO_TOKEN is required (declared as primaryEnv). The metadata also lists a configPath (~/.config/nemovideo/) which is plausible for storing local service config, but no other unrelated secrets or credentials are requested.
Persistence & Privilege
always is false and the skill does not request elevated or cross-skill privileges. It does allow normal autonomous invocation (the platform default), which is expected for a callable skill.
Assessment
This skill appears to do what it says: it sends your images/clips to an external rendering service (mega-api-prod.nemovideo.ai) using a NEMO_TOKEN or a short-lived anonymous token it can obtain for you. Before installing: (1) confirm you trust nemovideo.ai and review its privacy/terms if you will upload sensitive media; (2) prefer supplying your own NEMO_TOKEN rather than relying on the anonymous-token flow if you want tighter control; (3) be aware uploads are sent to a third party and jobs may continue server-side if you close your client (orphaned jobs); (4) if you need higher assurance, ask the publisher for a reputable homepage or code repo to validate the backend.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk9713x40g3tqyrzt2c5vbnqhv184kjfx
92 downloads · 0 stars · 1 version
Updated 2w ago
v1.0.0 · MIT-0

Getting Started

Send me your images or clips and I'll handle the AI video creation. Or just describe what you're after.

Try saying:

  • "create five product photos and a logo file into a 1080p MP4"
  • "turn these photos into a 30-second promo video with music and text overlays"
  • "generating videos from images or text prompts without manual editing for marketers and content creators"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is required for all subsequent requests.
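A minimal Python sketch of this connection flow, assuming the requests library is available. The endpoints, headers, and body fields are the ones listed above; the "token" response field name and the bare-bones error handling are assumptions for illustration (the page only says the response includes a token).

import os
import uuid
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def get_token() -> str:
    """Return NEMO_TOKEN from the environment, or request a free anonymous starter token."""
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token
    client_id = str(uuid.uuid4())  # UUID used as the client identifier
    resp = requests.post(
        f"{BASE}/api/auth/anonymous-token",
        headers={"X-Client-Id": client_id},
        timeout=30,
    )
    resp.raise_for_status()
    # "token" as the field name is an assumption; the docs only say a token is returned.
    return resp.json()["token"]

def create_session(token: str, language: str = "en") -> str:
    """Create a working session and return its session_id."""
    resp = requests.post(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        headers={"Authorization": f"Bearer {token}"},
        json={"task_name": "project", "language": language},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["session_id"]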

Tell the user you're ready. Keep the technical details out of the chat.

Create Video with AI — Generate Videos from Your Assets

Send me your images or clips and describe the result you want. The AI video creation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload five product photos and a logo file, type "turn these photos into a 30-second promo video with music and text overlays", and you'll get a 1080p MP4 back in roughly 1-2 minutes. All rendering happens server-side.

Worth noting: using fewer than 10 images speeds up generation noticeably.

Matching Input to Actions

User prompts referencing create video with ai, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... → Action
"export" / "导出" / "download" / "send me the video" → §3.5 Export
"credits" / "积分" / "balance" / "余额" → §3.3 Credits
"status" / "状态" / "show tracks" → §3.4 State
"upload" / "上传" / user sends a file → §3.2 Upload
Everything else (generate, edit, add BGM…) → §3.1 SSE
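A rough sketch of this routing in Python; the trigger keywords and section labels come straight from the list above, but the plain substring matching here is a simplification of the keyword-and-intent classification the skill actually performs.

# Checked top to bottom; anything that matches nothing falls through to the SSE path.
ROUTES = [
    ("§3.5 Export",  ("export", "导出", "download", "send me the video")),
    ("§3.3 Credits", ("credits", "积分", "balance", "余额")),
    ("§3.4 State",   ("status", "状态", "show tracks")),
    ("§3.2 Upload",  ("upload", "上传")),
]

def route(message: str, has_attachment: bool = False) -> str:
    """Map a user message to the section that should handle it."""
    if has_attachment:
        return "§3.2 Upload"   # the user sending a file always means upload
    text = message.lower()
    for section, keywords in ROUTES:
        if any(k in text for k in keywords):
            return section
    return "§3.1 SSE"          # generate, edit, add BGM, and everything else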

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is create-video-with-ai, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

Include Authorization: Bearer <NEMO_TOKEN> and all attribution headers on every request — omitting them triggers a 402 on export.
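One way to assemble these headers, sketched in Python under stated assumptions: the platform mapping mirrors the paths above, but detecting the install path via __file__ and hard-coding a default version are illustrative choices; in practice both values come from the SKILL.md frontmatter and install location.

from pathlib import Path

def attribution_headers(token: str, version: str = "1.0.0") -> dict:
    """Assemble the Authorization and X-Skill-* headers sent on every request."""
    # Assumed detection: inspect where this file lives to guess the install platform.
    install_path = str(Path(__file__).resolve())
    if "/.clawhub/" in install_path:
        platform = "clawhub"
    elif "/.cursor/skills/" in install_path:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "create-video-with-ai",
        "X-Skill-Version": version,  # taken from the frontmatter version field in practice
        "X-Skill-Platform": platform,
    }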

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.
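A sketch of the SSE call using requests with streaming enabled; the URL, body, and Accept header match the line above, while parsing "data:"-prefixed lines as JSON is a standard SSE assumption rather than something this page spells out.

import json
import requests

def send_message(headers: dict, session_id: str, text: str):
    """POST a message to /run_sse and yield each SSE data payload as it arrives."""
    body = {
        "app_name": "nemo_agent",
        "user_id": "me",
        "session_id": session_id,
        "new_message": {"parts": [{"text": text}]},
    }
    with requests.post(
        "https://mega-api-prod.nemovideo.ai/run_sse",
        headers={**headers, "Accept": "text/event-stream"},
        json=body,
        stream=True,
        timeout=15 * 60,  # read timeout; the documented maximum is 15 minutes
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines(decode_unicode=True):
            if line and line.startswith("data:"):
                payload = line[len("data:"):].strip()
                if payload:  # skip heartbeats / empty data lines
                    yield json.loads(payload)  # assumes each event payload is JSON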

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}
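Minimal sketches of both upload variants; the endpoint, the files form field, and the urls/source_type body come from the line above, and the timeouts are arbitrary.

import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def upload_file(headers: dict, session_id: str, path: str) -> dict:
    """Upload a local media file as multipart form data (curl's -F "files=@...")."""
    with open(path, "rb") as f:
        resp = requests.post(
            f"{BASE}/api/upload-video/nemo_agent/me/{session_id}",
            headers=headers,
            files={"files": f},
            timeout=300,
        )
    resp.raise_for_status()
    return resp.json()

def upload_url(headers: dict, session_id: str, url: str) -> dict:
    """Register a remote file by URL instead of uploading the bytes."""
    resp = requests.post(
        f"{BASE}/api/upload-video/nemo_agent/me/{session_id}",
        headers=headers,
        json={"urls": [url], "source_type": "url"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()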

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media
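The two read-only calls, sketched the same way; the paths and field names are the ones listed above, and the raw JSON is returned without any reshaping.

import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def get_credits(headers: dict) -> dict:
    """Return the credit balance (available, frozen, total)."""
    resp = requests.get(f"{BASE}/api/credits/balance/simple", headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()

def get_state(headers: dict, session_id: str) -> dict:
    """Fetch the latest session state (data.state.draft, video_infos, generated_media)."""
    resp = requests.get(
        f"{BASE}/api/state/nemo_agent/me/{session_id}/latest",
        headers=headers,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()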

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
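A sketch of the export-and-poll loop; the render ID format, body fields, 30-second interval, completed status, and output.url location come from the line above, while the failure statuses are assumptions so the loop cannot spin forever.

import time
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def export_video(headers: dict, session_id: str, draft: dict) -> str:
    """Submit a render job, poll every 30s until it completes, and return the download URL."""
    render_id = f"render_{int(time.time())}"  # "render_<ts>" per the spec above
    resp = requests.post(
        f"{BASE}/api/render/proxy/lambda",
        headers=headers,
        json={
            "id": render_id,
            "sessionId": session_id,
            "draft": draft,
            "output": {"format": "mp4", "quality": "high"},
        },
        timeout=60,
    )
    resp.raise_for_status()
    while True:
        time.sleep(30)
        status = requests.get(
            f"{BASE}/api/render/proxy/lambda/{render_id}", headers=headers, timeout=30
        ).json()
        if status.get("status") == "completed":
            return status["output"]["url"]
        if status.get("status") in ("failed", "error"):  # assumed failure states
            raise RuntimeError(f"Render failed: {status}")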

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

SSE Event Handling

Event → Action
Text response → Apply GUI translation (§4), present to user
Tool call/result → Process internally, don't forward
Heartbeat / empty "data:" line → Keep waiting. Every 2 min: "⏳ Still working..."
Stream closes → Process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
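A sketch of that event loop and fallback, building on the send_message and get_state helpers sketched earlier and a summarize_draft helper sketched in the draft-JSON section below; the event payload shape (content.parts carrying either text or tool results) is an assumption modeled on the request format, since this page does not document the response schema.

def handle_conversation(headers: dict, session_id: str, message: str) -> str:
    """Stream one message, collect user-facing text, and fall back to session state if none arrives."""
    texts = []
    for event in send_message(headers, session_id, message):
        # Assumed shape: events mirror the request's content/parts structure.
        for part in event.get("content", {}).get("parts", []):
            if "text" in part:
                texts.append(part["text"])  # user-facing text: apply GUI translation, then present
            # parts carrying tool calls/results are handled internally and never forwarded
    if texts:
        return "\n".join(texts)
    # No text in the stream (~30% of edits): confirm via session state, then summarize the change.
    state = get_state(headers, session_id)
    return summarize_draft(state["data"]["state"]["draft"])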

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

Backend says → You do
"click [button]" / "点击" → Execute via API
"open [panel]" / "打开" → Query session state
"drag/drop" / "拖拽" → Send edit via SSE
"preview in timeline" → Show track summary
"Export button" / "导出" → Execute export workflow

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
1. Video: city timelapse (0-10s)
2. BGM: Lo-fi (0-10s, 35%)
3. Title: "Urban Dreams" (0-3s)
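A sketch of turning the short-key draft into that kind of summary; the key meanings (t, tt, sg, d, m) are from the paragraph above, but the nesting of names inside m and of segment durations inside sg is assumed for illustration.

TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft: dict) -> str:
    """Render the short-key draft (t/tt/sg/d/m) as a readable track list."""
    tracks = draft.get("t", [])
    lines = [f"Timeline ({len(tracks)} tracks):"]
    for i, track in enumerate(tracks, start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "Other")
        total_ms = sum(seg.get("d", 0) for seg in track.get("sg", []))  # durations are in ms
        name = track.get("m", {}).get("name", kind)  # assumed: metadata holds a display name
        lines.append(f"{i}. {kind}: {name} (0-{total_ms // 1000}s)")
    return "\n".join(lines)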

Error Handling

Code (Meaning): Action
0 (Success): Continue
1001 (Bad/expired token): Re-auth via anonymous-token (tokens expire after 7 days)
1002 (Session not found): New session per §3.0
2001 (No credits): Anonymous: show registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered: "Top up credits in your account"
4001 (Unsupported file): Show supported formats
4002 (File too large): Suggest compressing or trimming
400 (Missing X-Client-Id): Generate a Client-Id and retry (see §1)
402 (Free plan export blocked): Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
429 (Rate limit, 1 token/client/7 days): Retry in 30s once
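A compact sketch of dispatching on these codes; the actions are lifted from the table above, and the unknown-code fallback is an assumption.

def recovery_action(code: int) -> str:
    """Map a documented error code to the action the skill should take next."""
    actions = {
        0:    "continue",
        1001: "re-auth via anonymous-token (tokens expire after 7 days)",
        1002: "create a new session (§3.0)",
        2001: "anonymous: show the registration URL with ?bind=<id>; registered: top up credits",
        4001: "show the supported formats",
        4002: "suggest compressing or trimming the file",
        400:  "generate an X-Client-Id and retry (§1)",
        402:  "explain this is a plan issue, not credits: register or upgrade to unlock export",
        429:  "wait 30s and retry once",
    }
    return actions.get(code, "surface the error to the user")  # fallback is an assumption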

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn these photos into a 30-second promo video with music and text overlays" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, JPG, PNG for the smoothest experience.

Export as MP4 for widest compatibility across platforms.

Common Workflows

Quick edit: Upload → "turn these photos into a 30-second promo video with music and text overlays" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
