Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Image To Video Open Source

v1.0.0

Convert images into animated video clips with this skill. Works with JPG, PNG, WEBP, and GIF files up to 200MB. Developers and content creators use it for conver...

by peand-rover (adam@peand-rover)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for peand-rover/image-to-video-open-source.

Prompt preview: Install & Setup
Install the skill "Image To Video Open Source" (peand-rover/image-to-video-open-source) from ClawHub.
Skill page: https://clawhub.ai/peand-rover/image-to-video-open-source
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install image-to-video-open-source

ClawHub CLI


npx clawhub@latest install image-to-video-open-source
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill claims to be 'Open Source' in its name but all runtime instructions call a proprietary cloud API (https://mega-api-prod.nemovideo.ai). Required credential (NEMO_TOKEN) and upload/render endpoints align with an online rendering service, but the 'open source' label is misleading.
Instruction Scope
SKILL.md gives detailed API workflows (auth, session creation, SSE, upload, render/poll). These are within the stated purpose (cloud video rendering). However, the instructions also ask the agent to read the skill's YAML frontmatter and detect the install path (~/.clawhub/ or ~/.cursor/skills/) to set X-Skill-Platform. That implies reading local install paths/metadata, which is not strictly required for rendering and could reveal local environment details.
Install Mechanism
Instruction-only skill with no install spec or bundled code — lowest installation risk. The skill relies on calling an external API rather than installing binaries.
Credentials
Only one environment variable is required (NEMO_TOKEN), which is proportionate for a third-party API. If NEMO_TOKEN is absent, the skill fetches an anonymous token from the external API and uses it; this behavior should be made explicit to end users because it causes network contact and creates a credential even though the user never supplied a key.
Persistence & Privilege
The skill's "always" flag is false, and it does not request system-wide persistence or modify other skills. No elevated privileges are requested.
What to consider before installing
This skill appears to be a cloud-based image-to-video renderer and needs a NEMO_TOKEN (or will request an anonymous token from mega-api-prod.nemovideo.ai). Before installing:

  • Be aware the 'Open Source' label seems misleading: processing runs on a proprietary cloud endpoint, not locally.
  • The skill will contact an external service (it may create an anonymous token if you haven't provided one) and will include headers that reveal skill/version/platform; it may attempt to detect install paths to set those headers.
  • Only provide non-sensitive images, and decide whether you want to supply your own NEMO_TOKEN (preferred) vs letting it fetch an anonymous token.
  • Ask the publisher for source code or a homepage, and confirm the service's privacy/data-retention policy if you plan to upload private content.

If you need stronger guarantees (local/offline processing, an open-source implementation), do not install until the author provides source or a trustworthy homepage.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🖼️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk977vb5jk1hf077av9qzww9d9s84s0h0
69 downloads
0 stars
1 version
Updated 2w ago
v1.0.0
MIT-0

Getting Started

Send me your images and I'll handle the AI video creation. Or just describe what you're after.

Try saying:

  • "convert three landscape photos in JPG format into a 1080p MP4"
  • "turn these images into a smooth video with transitions"
  • "converting static images into shareable video content for developers and content creators"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
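
As a minimal Python sketch of the flow above (assuming the requests library; the field name of the anonymous token in the response is not spelled out by the skill, so "token" below is a guess):

import uuid
import requests

API_BASE = "https://mega-api-prod.nemovideo.ai"

def get_token(env_token=None):
    # Use NEMO_TOKEN when it is set; otherwise request an anonymous starter token.
    if env_token:
        return env_token
    client_id = str(uuid.uuid4())  # UUID client identifier
    resp = requests.post(
        f"{API_BASE}/api/auth/anonymous-token",
        headers={"X-Client-Id": client_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["token"]  # assumed field name; docs only say the response includes a token

def create_session(token):
    # The session_id returned here is needed for all following requests.
    resp = requests.post(
        f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        headers={"Authorization": f"Bearer {token}"},
        json={"task_name": "project", "language": "en"},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    return data["task_id"], data["session_id"]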

Tell the user you're ready. Keep the technical details out of the chat.

Image to Video Open Source — Convert Images Into Video Clips

Drop your images in the chat and tell me what you need. I'll handle the AI video creation on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send three landscape photos in JPG format, ask to turn them into a smooth video with transitions, and about 30-60 seconds later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — fewer images per batch means faster processing and cleaner transitions.

Matching Input to Actions

User prompts referencing image-to-video conversion, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification (see the sketch after this list).

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
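
A hypothetical router mirroring the list above (simple substring matching; the action names are shorthand for the sections referenced, and real intent classification may be fuzzier than this):

ROUTES = {
    "export": ["export", "导出", "download", "send me the video"],  # §3.5
    "credits": ["credits", "积分", "balance", "余额"],              # §3.3
    "state": ["status", "状态", "show tracks"],                      # §3.4
    "upload": ["upload", "上传"],                                    # §3.2
}

def route(message, has_attachment=False):
    if has_attachment:
        return "upload"  # user sent a file
    text = message.lower()
    for action, keywords in ROUTES.items():
        if any(k in text for k in keywords):
            return action
    return "sse"  # everything else (generate, edit, add BGM, ...) → §3.1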

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: image-to-video-open-source
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/ → clawhub, ~/.cursor/skills/ → cursor, else unknown)

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
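
A small sketch of those headers, assuming the version string has already been read from the frontmatter and the platform detected as described above:

def attribution_headers(token, version, platform="unknown"):
    # All four headers are required; export fails with 402 without them.
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "image-to-video-open-source",
        "X-Skill-Version": version,    # from the skill's YAML frontmatter
        "X-Skill-Platform": platform,  # clawhub, cursor, or unknown
    }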

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.
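
Opening that stream might look like this (a sketch using the requests library; headers come from attribution_headers above, and event handling is covered under "Reading the SSE Stream" below):

import requests

API_BASE = "https://mega-api-prod.nemovideo.ai"

def send_message(session_id, text, headers):
    body = {
        "app_name": "nemo_agent",
        "user_id": "me",
        "session_id": session_id,
        "new_message": {"parts": [{"text": text}]},
    }
    resp = requests.post(
        f"{API_BASE}/run_sse",
        headers={**headers, "Accept": "text/event-stream"},
        json=body,
        stream=True,
        timeout=15 * 60,  # 15-minute ceiling per the spec above
    )
    resp.raise_for_status()
    return resp.iter_lines(decode_unicode=True)  # raw SSE lines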

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}
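
The multipart variant as a sketch (for URL uploads, post the JSON body shown above instead):

import requests

API_BASE = "https://mega-api-prod.nemovideo.ai"

def upload_file(session_id, path, headers):
    # Field name "files" matches the -F "files=@/path" form above.
    with open(path, "rb") as fh:
        resp = requests.post(
            f"{API_BASE}/api/upload-video/nemo_agent/me/{session_id}",
            headers=headers,
            files={"files": fh},
            timeout=300,  # allow time for files up to 200MB
        )
    resp.raise_for_status()
    return resp.json()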

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media
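
Both reads are plain GETs; a sketch, assuming the response shapes described above:

import requests

API_BASE = "https://mega-api-prod.nemovideo.ai"

def get_credits(headers):
    resp = requests.get(f"{API_BASE}/api/credits/balance/simple",
                        headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()  # available, frozen, total

def get_state(session_id, headers):
    resp = requests.get(f"{API_BASE}/api/state/nemo_agent/me/{session_id}/latest",
                        headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["state"]  # draft, video_infos, generated_media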

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
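
A polling sketch for export (the render id format and the 30-second interval follow the text; reading "status" at the top level of the poll response is an assumption):

import time
import requests

API_BASE = "https://mega-api-prod.nemovideo.ai"

def export_video(session_id, draft, headers):
    render_id = f"render_{int(time.time())}"
    body = {
        "id": render_id,
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }
    resp = requests.post(f"{API_BASE}/api/render/proxy/lambda",
                         headers=headers, json=body, timeout=60)
    resp.raise_for_status()
    while True:  # poll every 30s until the job completes
        status = requests.get(f"{API_BASE}/api/render/proxy/lambda/{render_id}",
                              headers=headers, timeout=30).json()
        if status.get("status") == "completed":
            return status["output"]["url"]  # download URL
        time.sleep(30)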

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty "data:" lines mean the backend is still working; show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
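
A rough event loop over the lines returned by send_message(), under the assumption that each data: line carries a JSON event with text parts (the skill text does not pin down the exact event schema):

import json

def handle_sse(lines):
    saw_text = False
    for line in lines:
        if not line or line.strip() == "data:":
            continue  # heartbeat / empty keep-alive; show "⏳ Still working..." if it drags on
        if line.startswith("data:"):
            event = json.loads(line[len("data:"):].strip())
            for part in event.get("parts", []):  # assumed event shape
                if "text" in part:
                    saw_text = True
                    print(part["text"])  # user-facing text, after GUI translation
                # tool calls stay internal and are not shown
    # If the stream closed silently (~30% of edits), poll /api/state
    # and report what changed instead of leaving the user hanging.
    return saw_text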

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.
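
A hypothetical draft fragment using those short keys, lining up with the example summary below; everything beyond the documented abbreviations (segment fields, names, volume) is invented for illustration:

draft = {
    "t": [                                   # tracks
        {"tt": 0,                            # 0 = video
         "sg": [{"d": 10000, "m": {"name": "city timelapse"}}]},
        {"tt": 1,                            # 1 = audio (BGM)
         "sg": [{"d": 10000, "m": {"name": "Lo-fi", "volume": 0.35}}]},
        {"tt": 7,                            # 7 = text
         "sg": [{"d": 3000, "m": {"text": "Urban Dreams"}}]},
    ],
}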

Example timeline summary:

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue but a subscription-tier restriction
  • 429 — rate limited; wait 30s and retry once
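
One way to fold these codes into a dispatcher (a sketch; the returned labels are arbitrary and not part of the API):

def handle_error(code):
    if code == 0:
        return "ok"                   # continue normally
    if code == 1001:
        return "reauth"               # re-acquire via /api/auth/anonymous-token
    if code == 1002:
        return "new_session"
    if code == 2001:
        return "out_of_credits"       # registration link or top-up
    if code in (4001, 4002):
        return "bad_file"             # unsupported type or too large
    if code == 400:
        return "add_client_id_retry"  # generate X-Client-Id and retry
    if code == 402:
        return "plan_blocked"         # subscription tier, not credits
    if code == 429:
        return "retry_after_30s"      # wait 30s and retry once
    return "unknown"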

Common Workflows

Quick edit: Upload → "turn these images into a smooth video with transitions" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn these images into a smooth video with transitions" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, GIF for the smoothest experience.

Export as MP4 for widest compatibility.
