Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Image To Video Kling Ai

v1.0.0

Skip the learning curve of professional editing software. Describe what you want — animate this image into a 5-second cinematic video clip — and get animated...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mhogan2013-9/image-to-video-kling-ai.

Prompt Preview: Install & Setup
Install the skill "Image To Video Kling Ai" (mhogan2013-9/image-to-video-kling-ai) from ClawHub.
Skill page: https://clawhub.ai/mhogan2013-9/image-to-video-kling-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install image-to-video-kling-ai

ClawHub CLI


npx clawhub@latest install image-to-video-kling-ai
Security Scan
VirusTotal: Suspicious
OpenClaw: Benign (high confidence)
Purpose & Capability
The skill's name/description (convert images to short videos) aligns with the actions described in SKILL.md: uploading images, creating sessions, submitting render jobs, polling for results. The declared config path (~/.config/nemovideo/) and primary credential (NEMO_TOKEN) match the stated remote service.
Instruction Scope
Instructions tell the agent to POST to nemovideo.ai endpoints to obtain anonymous tokens (if NEMO_TOKEN is not set), create sessions, upload files, send SSE generation messages, and poll render status. These network actions are expected for a cloud rendering service. Notable: the skill auto-generates/obtains a token and instructs storing the session_id/token for future requests (it also tells the agent not to display raw tokens). Automatic token acquisition and token/session persistence are behavior users should be aware of.
Install Mechanism
No install spec or code files are present (instruction-only), so nothing is written to disk by an installer. This is the lowest install risk.
Credentials
The only required environment variable is NEMO_TOKEN, which is proportional to the described API usage. Small inconsistency: registry metadata lists NEMO_TOKEN as required, but the runtime instructions also describe how to obtain a free anonymous NEMO_TOKEN if none is provided. This is not dangerous but is a behavioral mismatch worth noting: the skill can operate without a pre-provided secret by acquiring one from the external API.
Persistence & Privilege
The skill does not request elevated platform privileges or 'always' inclusion. It does instruct storing a session_id/token for subsequent API calls and references a per-service config directory (~/.config/nemovideo/), which is reasonable for resuming jobs but means state (tokens/job IDs) may be persisted locally.
Assessment
This skill uploads your images and prompts to an external service (mega-api-prod.nemovideo.ai) and needs an API token (NEMO_TOKEN). It will automatically request a short-lived anonymous token if you don't provide one, and it will persist session IDs/tokens for continuing jobs. Before installing:

  1. Consider privacy: do not upload images containing sensitive personal or proprietary data unless you trust the service and its retention policy.
  2. Decide whether to supply your own NEMO_TOKEN or allow the skill to auto-create one; supplying your own gives you more control over the token lifecycle.
  3. Check whether the skill writes to ~/.config/nemovideo/ on your machine (it will likely store session state/tokens there).
  4. If you need stronger guarantees, inspect network logs or run the skill in a sandboxed environment.
  5. If you have concerns about the service, avoid installing or limit the skill's use to non-sensitive content.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env (primary): NEMO_TOKEN
Latest: vk97dw4yhneh7pfnf5sk1px03pd84xnp3
63 downloads · 0 stars · 1 version
Updated 1w ago
v1.0.0
MIT-0

Getting Started

Got images to work with? Send them over and tell me what you need; I'll take care of the AI video creation.

Try saying:

  • "convert a single product photo or illustrated scene into a 1080p MP4"
  • "animate this image into a 5-second cinematic video clip"
  • "turn still images into short AI-generated video clips for content creators, marketers, and social media managers"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.

Keep setup communication brief. Don't display raw API responses or token values to the user.
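
The two setup steps above can be sketched in Python. The endpoint paths, header names, and body fields come from this page; the helper names are illustrative, and the sketch only builds the requests rather than sending them, so it does not touch the network.

```python
import json
import uuid
import urllib.request

API_BASE = "https://mega-api-prod.nemovideo.ai"

def build_token_request() -> urllib.request.Request:
    """Step 1: POST /api/auth/anonymous-token with a random X-Client-Id UUID."""
    return urllib.request.Request(
        f"{API_BASE}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": str(uuid.uuid4())},
    )

def build_session_request(token: str, language: str = "en") -> urllib.request.Request:
    """Step 2: POST /api/tasks/me/with-session/nemo_agent to create a session."""
    body = json.dumps({"task_name": "project", "language": language}).encode()
    return urllib.request.Request(
        f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# Actually sending these (urllib.request.urlopen(...)) is left out; per the
# instructions above, the response fields to keep are data.token (step 1)
# and session_id (step 2).
```
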

Image to Video Kling AI — Convert Images into Video Clips

Send me your images and describe the result you want. The AI video creation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a single product photo or illustrated scene, type "animate this image into a 5-second cinematic video clip", and you'll get a 1080p MP4 back in roughly 1-2 minutes. All rendering happens server-side.

Worth noting: high-contrast images with clear subjects produce the most fluid motion results.

Matching Input to Actions

User prompts referencing image to video kling ai, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... → Action
"export" / "导出" / "download" / "send me the video" → §3.5 Export
"credits" / "积分" / "balance" / "余额" → §3.3 Credits
"status" / "状态" / "show tracks" → §3.4 State
"upload" / "上传" / user sends file → §3.2 Upload
Everything else (generate, edit, add BGM…) → §3.1 SSE

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: image-to-video-kling-ai
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub → clawhub, ~/.cursor/skills → cursor, else unknown)
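
A minimal sketch of assembling the attribution headers above. The header names and the install-path → platform rule come from this page; the function name and the substring matching are assumptions.

```python
def skill_headers(token: str, version: str, install_path: str) -> dict:
    """Build the required Authorization + X-Skill-* headers for every request."""
    # Platform is inferred from where the skill was installed (per the list above).
    if ".clawhub" in install_path:
        platform = "clawhub"
    elif ".cursor/skills" in install_path:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "image-to-video-kling-ai",
        "X-Skill-Version": version,   # read from frontmatter at runtime
        "X-Skill-Platform": platform,
    }
```

Missing any of the X-Skill-* headers fails the export with a 402, so centralizing them in one helper avoids per-request drift.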

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
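
The export flow above (submit, then poll every 30 s until status = completed) can be sketched as a small loop. The transport is injected as a callable so the control flow is visible without network I/O; the function names and the poll cap are illustrative assumptions.

```python
import time

def poll_export(render_id: str, fetch_status, interval: float = 30.0,
                max_polls: int = 30) -> str:
    """Poll GET /api/render/proxy/lambda/<id> until completed; return output.url.

    fetch_status(render_id) should return the decoded JSON response, e.g.
    {"status": "completed", "output": {"url": "..."}}.
    """
    for _ in range(max_polls):
        resp = fetch_status(render_id)
        if resp.get("status") == "completed":
            return resp["output"]["url"]   # download URL, per the docs above
        time.sleep(interval)               # 30 s between polls per the docs
    raise TimeoutError(f"render {render_id} did not complete")
```
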

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
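
The recovery rules above reduce to a simple lookup. A sketch, with the action strings paraphrased from the list (the table and function names are mine, not part of the skill):

```python
# Error code → recovery action, paraphrasing the list above.
RECOVERY = {
    0:    "continue normally",
    1001: "re-acquire token via /api/auth/anonymous-token",
    1002: "create a new session",
    2001: "out of credits: offer registration link or top-up",
    4001: "show accepted formats",
    4002: "suggest compressing or trimming the file",
    400:  "generate an X-Client-Id and retry",
    402:  "export blocked by plan tier (not a credit issue)",
    429:  "wait 30s and retry once",
}

def recovery_action(code: int) -> str:
    """Map an API error code to the documented recovery step."""
    return RECOVERY.get(code, f"unhandled error code {code}")
```
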

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

Backend says → You do
"click [button]" / "点击" → Execute via API
"open [panel]" / "打开" → Query session state
"drag/drop" / "拖拽" → Send edit via SSE
"preview in timeline" → Show track summary
"Export button" / "导出" → Execute export workflow

SSE Event Handling

Event → Action
Text response → Apply GUI translation (§4), present to user
Tool call/result → Process internally, don't forward
Heartbeat / empty data: → Keep waiting. Every 2 min: "⏳ Still working..."
Stream closes → Process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
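
The silent-edit fallback above can be sketched as one state check: read the session state and confirm the draft changed. The state getter is injected so no network call is needed here; the function name is illustrative.

```python
def verify_edit_applied(get_state, draft_before: dict) -> bool:
    """Report whether the draft changed after a no-text SSE response.

    get_state() should return the data.state object from
    GET /api/state/nemo_agent/me/<sid>/latest (per the endpoint list above).
    """
    draft_after = get_state().get("draft")
    return draft_after is not None and draft_after != draft_before
```

If this returns True, summarize the diff to the user; if False, retry the edit or surface an error.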

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
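
The short draft keys can be expanded into a readable summary like the timeline above. The key table (t, tt, sg, d) comes from this page; the expander itself is an illustrative sketch.

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}  # tt values per the mapping above

def describe_draft(draft: dict) -> list:
    """Summarize each track as (type, segment_count, total_duration_ms)."""
    summary = []
    for track in draft.get("t", []):                  # t  = tracks
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")  # tt = track type
        segments = track.get("sg", [])                # sg = segments
        duration = sum(s.get("d", 0) for s in segments)     # d  = duration (ms)
        summary.append((kind, len(segments), duration))
    return summary
```
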

Common Workflows

Quick edit: Upload → "animate this image into a 5-second cinematic video clip" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "animate this image into a 5-second cinematic video clip" — concrete instructions get better results.

Max file size is 10MB. Stick to JPG, PNG, WEBP, BMP for the smoothest experience.

Use PNG for input images to preserve quality and avoid compression artifacts.
