Image And Text To Video

v1.0.0

Turn your images and text into AI-generated video with this skill. Works with JPG, PNG, WEBP, GIF files up to 200MB. For marketers, social media creators, small b...

Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The skill's name/description (image+text → video) align with the actions described (upload images, create sessions, render/export). Requesting a single service token (NEMO_TOKEN) is reasonable. Minor inconsistency: the registry top-level metadata lists no required config paths, but the SKILL.md frontmatter metadata declares a config path (~/.config/nemovideo/). This mismatch should be clarified but does not by itself contradict the stated purpose.
Instruction Scope
SKILL.md instructs the agent to: use NEMO_TOKEN if present, otherwise obtain an anonymous token from the service; create a session; upload files (multipart or URL); use SSE for long-running requests; poll render status and return download URLs. Those steps are within scope for a cloud render service. Slight scope creep: instructions suggest detecting the agent's install path to set X-Skill-Platform and reference a local config path (~/.config/nemovideo/) in metadata — this implies the agent might inspect local paths, which isn't strictly necessary for core functionality and should be limited to explicit needs. The skill also instructs not to print tokens/raw JSON, which is appropriate.
Install Mechanism
No install spec and no code files (instruction-only). This minimizes on-disk risk; runtime will only make external API calls as described.
Credentials
The only declared credential is NEMO_TOKEN (primaryEnv), which is proportional for a third-party rendering API. The SKILL.md will request an anonymous token if none exists. The only slightly surprising element is the declared config path in the skill metadata (~/.config/nemovideo/), which could cause the agent to read local config files if implemented — that should be justified or removed if not needed.
Persistence & Privilege
The skill is not always-enabled and does not request elevated platform privileges. It will perform network calls to the external service and can be invoked autonomously by the agent (default behavior), which is expected for a connector skill. There is no evidence it modifies other skills or system settings.
Assessment
This skill appears to do what it says: it will upload your images/text to nemo's API and return rendered video URLs. Before installing: 1) Confirm you trust the external service (mega-api-prod.nemovideo.ai) and are comfortable uploading any media you send — private images will leave your machine. 2) Note the skill may read or look for ~/.config/nemovideo/ (metadata mismatch) — ask the author to confirm whether the agent will access local config files. 3) Only the NEMO_TOKEN is required; the skill can obtain a short-lived anonymous token if none is present. 4) Because the source is unknown, prefer testing with non-sensitive content and review the service’s privacy/billing terms. If you need clarification, ask the skill author to reconcile the registry metadata vs SKILL.md (configPaths) and to confirm they will not access unrelated local files or other credentials.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk975xxqevvrctrahttm6zxcgmh858c8k
24 downloads · 0 stars · 1 version
Updated 8h ago
v1.0.0
MIT-0

Getting Started

Share your images and text and I'll get started on AI video creation. Or just tell me what you're thinking.

Try saying:

  • "generate my images and text"
  • "export 1080p MP4"
  • "turn these images and caption into a 15-second promo video"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
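The setup steps above could be sketched as follows. This is a minimal stdlib-only sketch, not the skill's implementation: the endpoints and the `data.token` field come from the docs above, but the helper names, the exact session-response shape, and the absence of error handling are assumptions.

```python
import json
import os
import urllib.request
import uuid

API_BASE = "https://mega-api-prod.nemovideo.ai"

def post_json(url, body, headers):
    """POST a JSON body and return the decoded JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json", **headers},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def get_token():
    """Use NEMO_TOKEN if set; otherwise request a free anonymous token."""
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token
    client_id = str(uuid.uuid4())
    resp = post_json(f"{API_BASE}/api/auth/anonymous-token",
                     {}, {"X-Client-Id": client_id})
    return resp["data"]["token"]  # 100 credits, 7-day expiry

def create_session(token):
    """Create a session and return its session_id (response shape assumed)."""
    resp = post_json(
        f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        {"task_name": "project"},
        {"Authorization": f"Bearer {token}"},
    )
    return resp["session_id"]
```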

Image and Text to Video — Create Videos from Images and Text

This tool takes your images and text and runs AI video creation through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have three product photos and a short description and want to turn these images and caption into a 15-second promo video — the backend processes it in about 30-90 seconds and hands you a 1080p MP4.

Tip: clear, descriptive text prompts produce more accurate motion and scene transitions.

Matching Input to Actions

User prompts referencing image and text to video, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says... | Action | Skip SSE? |
| --- | --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | Yes |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | Yes |
| "status" / "状态" / "show tracks" | §3.4 State | Yes |
| "upload" / "上传" / user sends file | §3.2 Upload | Yes |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | No |
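A keyword router matching that routing could look like the sketch below. The trigger strings come from the table; the function name and the substring-matching strategy are assumptions (a real classifier might also use intent scoring).

```python
# Keyword -> action routing; any unmatched prompt falls through to SSE.
ROUTES = [
    (("export", "导出", "download", "send me the video"), "export"),  # §3.5
    (("credits", "积分", "balance", "余额"), "credits"),              # §3.3
    (("status", "状态", "show tracks"), "state"),                     # §3.4
    (("upload", "上传"), "upload"),                                   # §3.2
]

def route(prompt: str) -> str:
    """Classify a user prompt by keyword; default to the SSE path (§3.1)."""
    text = prompt.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "sse"  # generate, edit, add BGM, ...
```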

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is image-and-text-to-video, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
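The header set above might be assembled like this. The header names, source value, and path-based platform detection come from the docs; the function signature is a hypothetical helper.

```python
import os

def attribution_headers(token: str, version: str = "1.0.0") -> dict:
    """Build the required request headers; missing attribution fails export with 402."""
    home = os.path.expanduser("~")
    # Platform detection by install path, per the skill's stated convention.
    if os.path.isdir(os.path.join(home, ".clawhub")):
        platform = "clawhub"
    elif os.path.isdir(os.path.join(home, ".cursor", "skills")):
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "image-and-text-to-video",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```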

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}
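For the URL-mode upload, the endpoint and body could be built like this (helper names are assumptions; the multipart variant sends `files=@<path>` form data instead of JSON):

```python
API_BASE = "https://mega-api-prod.nemovideo.ai"

def upload_endpoint(session_id: str) -> str:
    """Upload endpoint for the current session."""
    return f"{API_BASE}/api/upload-video/nemo_agent/me/{session_id}"

def url_upload_body(urls) -> dict:
    """JSON body for URL-based upload, per the spec above."""
    return {"urls": list(urls), "source_type": "url"}
```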

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
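The export-and-poll loop could be sketched as below. The request body, 30s polling interval, `status`, and `output.url` fields come from the spec; the function names and the full poll-response shape are assumptions.

```python
import json
import time
import urllib.request

API_BASE = "https://mega-api-prod.nemovideo.ai"

def export_body(session_id: str, draft: dict) -> dict:
    """Render request body; the id is a timestamped render identifier."""
    return {
        "id": f"render_{int(time.time())}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }

def poll_render(render_id: str, headers: dict, interval: int = 30) -> str:
    """Poll the render job every `interval` seconds; return the download URL."""
    url = f"{API_BASE}/api/render/proxy/lambda/{render_id}"
    while True:
        req = urllib.request.Request(url, headers=headers)
        with urllib.request.urlopen(req) as resp:
            job = json.load(resp)
        if job.get("status") == "completed":
            return job["output"]["url"]
        time.sleep(interval)
```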

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

SSE Event Handling

| Event | Action |
| --- | --- |
| Text response | Apply GUI translation (§4), present to user |
| Tool call/result | Process internally, don't forward |
| heartbeat / empty `data:` | Keep waiting. Every 2 min: "⏳ Still working..." |
| Stream closes | Process final response |

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
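The event handling above could be sketched as a small parser over the raw stream text. The `data:` framing is standard SSE; the exact event payload shape (a `text` field on text responses) is an assumption for illustration.

```python
import json

def handle_sse(stream_text: str) -> list:
    """Walk SSE 'data:' lines and collect user-facing text responses."""
    texts = []
    for line in stream_text.splitlines():
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if not payload:
            continue  # heartbeat / empty data: keep waiting
        event = json.loads(payload)
        if "text" in event:
            texts.append(event["text"])  # present after GUI translation
        # tool calls/results are processed internally, not forwarded
    return texts
```

When the stream closes with no text collected, fall back to polling session state as described above.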

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks): 1. Video: city timelapse (0-10s) 2. BGM: Lo-fi (0-10s, 35%) 3. Title: "Urban Dreams" (0-3s)
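A summary like the one above could be derived from the draft's short keys. This sketch assumes each segment carries a name under `m` and starts at 0; only `t`, `tt`, `sg`, `d`, and `m` come from the documented key map.

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft: dict) -> str:
    """One line per segment, built from the draft's short keys."""
    lines = [f"Timeline ({len(draft['t'])} tracks):"]
    for i, track in enumerate(draft["t"], start=1):
        kind = TRACK_TYPES.get(track["tt"], "Unknown")
        for seg in track["sg"]:
            label = seg.get("m", {}).get("name", "")  # name field assumed
            end_s = seg["d"] / 1000  # d is duration in ms
            lines.append(f"{i}. {kind}: {label} (0-{end_s:g}s)")
    return "\n".join(lines)
```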

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
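The recovery rules above amount to a simple dispatch table; a sketch (the function name and wording of each action are mine):

```python
def next_step(code: int) -> str:
    """Map an API error code to the recovery action listed above."""
    actions = {
        0: "continue",
        1001: "re-acquire token via /api/auth/anonymous-token",
        1002: "create a new session",
        2001: "out of credits: offer registration link or top-up",
        4001: "show accepted formats",
        4002: "suggest compressing or trimming the file",
        400: "generate an X-Client-Id and retry",
        402: "export blocked by plan tier, not credits",
        429: "wait 30s and retry once",
    }
    return actions.get(code, "unknown error: surface to user")
```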

Common Workflows

Quick edit: Upload → "turn these images and caption into a 15-second promo video" → Download MP4. Takes 30-90 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn these images and caption into a 15-second promo video" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, GIF for the smoothest experience.

Export as MP4 for widest compatibility across social platforms and devices.
