Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Image To Video Google

v1.0.0

Skip the learning curve of professional editing software. Describe what you want — turn these photos into a short video with transitions and music — and get...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mhogan2013-9/image-to-video-google.

Prompt preview (Install & Setup):
Install the skill "Image To Video Google" (mhogan2013-9/image-to-video-google) from ClawHub.
Skill page: https://clawhub.ai/mhogan2013-9/image-to-video-google
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install image-to-video-google

ClawHub CLI


npx clawhub@latest install image-to-video-google
Security Scan
  • VirusTotal: Benign (view report)
  • OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill name/description (image-to-video) matches the API calls and endpoints in SKILL.md (upload, render, export). However, the name includes 'Google' while all endpoints point to nemovideo.ai (a branding mismatch), and the SKILL.md metadata lists a required config path (~/.config/nemovideo/) even though the registry metadata earlier reported no required config paths — an internal inconsistency.
Instruction Scope
Runtime instructions explicitly upload user images and send them to a remote GPU-backed render service (mega-api-prod.nemovideo.ai). That is expected for this purpose, but you should be aware that user files are transmitted off-device. The skill also instructs the agent to read this file's YAML frontmatter and to detect the install path (~/.clawhub/, ~/.cursor/skills/) to set attribution headers — this implies the agent will inspect the skill's own files and possibly local install paths.
Install Mechanism
Instruction-only skill with no install spec and no code files. Nothing is downloaded or written to disk by an installer; risk from install mechanism is low.
Credentials
Only one credential is requested (NEMO_TOKEN), which is proportionate for an API-backed render service. The skill will use NEMO_TOKEN if present, otherwise it instructs generating an anonymous token via the service's auth endpoint. The SKILL.md metadata also references a config path (~/.config/nemovideo/), which wasn't listed in the registry's 'Required config paths' — this mismatch should be resolved before trusting the skill.
Persistence & Privilege
always:false and no special privileges requested. The skill can be invoked autonomously (platform default), but it does not request permanent 'always' inclusion or system-wide configuration changes.
What to consider before installing
This skill appears to do what it claims (upload images, request a render, return a download URL), but review these points before installing:

  1. Privacy — your images will be uploaded to mega-api-prod.nemovideo.ai; do not send sensitive photos unless you trust that service and its retention policy.
  2. Credential use — the skill will use NEMO_TOKEN from your environment if present, or acquire an anonymous token automatically; supplying your own token gives the skill access tied to that token.
  3. Naming mismatch and metadata inconsistency — the 'Google' name vs. nemovideo.ai endpoints and the differing configPath declarations are red flags about quality control; ask the publisher to clarify.
  4. Attribution headers — the skill reads its frontmatter and may inspect install paths to build headers; confirm this behavior is acceptable.

If you need higher assurance, ask the skill author to provide a privacy policy and a canonical homepage, or to remove the misleading branding and fix the metadata inconsistencies.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🖼️ Clawdis

  • Env: NEMO_TOKEN
  • Primary env: NEMO_TOKEN
  • Latest: vk9757342jbnh7671x0ann3ve5x856y4c
  • 110 downloads · 0 stars · 1 version
  • Updated 1w ago
  • v1.0.0 · MIT-0

Getting Started

Send me your images and I'll handle the AI video creation. Or just describe what you're after.

Try saying:

  • "convert three landscape photos from a recent trip into a 1080p MP4"
  • "turn these photos into a short video with transitions and music"
  • "converting still photos into shareable videos for marketers"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.

Tell the user you're ready. Keep the technical details out of the chat.
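
For reference, here is a minimal Python sketch of that connection flow, assuming the requests library. The endpoints, headers, and request body are taken from this page; the response field names (token, session_id) are assumptions based on the description above, so verify them against the actual API.

  import os
  import uuid
  import requests

  BASE = "https://mega-api-prod.nemovideo.ai"

  def get_token():
      # Prefer an existing NEMO_TOKEN from the environment.
      token = os.environ.get("NEMO_TOKEN")
      if token:
          return token
      # Otherwise request a free starter token (100 credits, valid 7 days).
      client_id = str(uuid.uuid4())
      resp = requests.post(f"{BASE}/api/auth/anonymous-token",
                           headers={"X-Client-Id": client_id})
      resp.raise_for_status()
      return resp.json()["token"]  # field name assumed from the description above

  def create_session(token):
      # Create a working session; its session_id is reused on every later call.
      resp = requests.post(f"{BASE}/api/tasks/me/with-session/nemo_agent",
                           headers={"Authorization": f"Bearer {token}"},
                           json={"task_name": "project", "language": "en"})
      resp.raise_for_status()
      return resp.json()["session_id"]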

Image to Video Google — Convert Images Into Shareable Videos

Send me your images and describe the result you want. The AI video creation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload three landscape photos from a recent trip, type "turn these photos into a short video with transitions and music", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: using fewer than 10 images keeps processing time under a minute.

Matching Input to Actions

User prompts referencing image to video google, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export (skips SSE)
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits (skips SSE)
  • "status" / "状态" / "show tracks" → §3.4 State (skips SSE)
  • "upload" / "上传" / user sends file → §3.2 Upload (skips SSE)
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
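
A rough Python sketch of that routing, for illustration only. The keywords and target sections come from the list above; the matching is simplified to literal substring checks, whereas the skill describes keyword and intent classification.

  # Literal trigger keywords mapped to the actions listed above.
  ROUTES = {
      "export": ("§3.5 Export", True),   "导出": ("§3.5 Export", True),
      "download": ("§3.5 Export", True),
      "credits": ("§3.3 Credits", True), "积分": ("§3.3 Credits", True),
      "balance": ("§3.3 Credits", True), "余额": ("§3.3 Credits", True),
      "status": ("§3.4 State", True),    "状态": ("§3.4 State", True),
      "upload": ("§3.2 Upload", True),   "上传": ("§3.2 Upload", True),
  }

  def route(message: str):
      # Returns (action, skip_sse); anything unmatched goes to the SSE chat path.
      lowered = message.lower()
      for keyword, target in ROUTES.items():
          if keyword in lowered:
              return target
      return ("§3.1 SSE", False)  # generate, edit, add BGM, ...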

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> — multipart file or JSON with URLs.
  4. Credits: GET /api/credits/balance/simple — returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest — current draft and media info.
  6. Export: POST /api/render/proxy/lambda with the render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and the download URL.

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
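
As a sketch of the export step (endpoint 6), the snippet below submits a render job and polls for completion. The endpoints and the 30-second polling interval are from this page; the request body shape and the status/download_url field names are assumptions.

  import time
  import requests

  BASE = "https://mega-api-prod.nemovideo.ai"

  def export_and_wait(render_id, draft, headers, timeout_s=15 * 60):
      # Submit the render job (endpoint 6); the body shape here is assumed.
      resp = requests.post(f"{BASE}/api/render/proxy/lambda",
                           headers=headers,
                           json={"render_id": render_id, "draft": draft})
      resp.raise_for_status()
      # Poll every 30 seconds until the job reports completion.
      deadline = time.time() + timeout_s
      while time.time() < deadline:
          status = requests.get(f"{BASE}/api/render/proxy/lambda/{render_id}",
                                headers=headers).json()
          if status.get("status") == "completed":   # field names assumed
              return status.get("download_url")
          time.sleep(30)
      raise TimeoutError("render job did not complete before the deadline")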

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: image-to-video-google
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from the install path (~/.clawhub/ → clawhub, ~/.cursor/skills/ → cursor, else unknown)

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
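
A small illustration of assembling that header set in Python. The header names and install-path rules come from this section; reading the version from the SKILL.md frontmatter is stubbed out as a plain parameter here.

  import os

  def build_headers(token, version):
      # Pick the platform from the install path, per the attribution rules above.
      if os.path.isdir(os.path.expanduser("~/.clawhub")):
          platform = "clawhub"
      elif os.path.isdir(os.path.expanduser("~/.cursor/skills")):
          platform = "cursor"
      else:
          platform = "unknown"
      return {
          "Authorization": f"Bearer {token}",
          "X-Skill-Source": "image-to-video-google",
          "X-Skill-Version": version,   # read from SKILL.md frontmatter in practice
          "X-Skill-Platform": platform,
      }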

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
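
Purely as a hypothetical illustration, the timeline above might be expressed with the abbreviated draft fields like this. The exact draft JSON schema is not documented on this page, so everything beyond the field abbreviations (the nesting and the keys inside m) is a guess.

  # Hypothetical draft for the 3-track timeline above, using only the documented
  # abbreviations: t=tracks, tt=track type (0=video, 1=audio, 7=text),
  # sg=segments, d=duration in ms, m=metadata. Nesting and inner keys are guesses.
  draft = {
      "t": [
          {"tt": 0, "sg": [{"d": 10000, "m": {"name": "city timelapse"}}]},
          {"tt": 1, "sg": [{"d": 10000, "m": {"name": "Lo-fi BGM", "volume": 0.35}}]},
          {"tt": 7, "sg": [{"d": 3000, "m": {"text": "Urban Dreams"}}]},
      ]
  }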

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

  • "click [button]" / "点击" → Execute via API
  • "open [panel]" / "打开" → Query session state
  • "drag/drop" / "拖拽" → Send edit via SSE
  • "preview in timeline" → Show track summary
  • "Export button" / "导出" → Execute export workflow

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
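
A simplified Python sketch of that stream-handling loop. It assumes standard text/event-stream framing with data: lines; the JSON event field names (type, text) are assumptions, and the silent-close fallback to /api/state is left to the caller.

  import json
  import requests

  def stream_chat(url, headers, payload, show_text, note_still_working):
      # Stream the /run_sse response; forward text events, keep tool calls internal.
      got_text = False
      with requests.post(url, json=payload, stream=True, timeout=15 * 60,
                         headers={**headers, "Accept": "text/event-stream"}) as resp:
          for raw in resp.iter_lines(decode_unicode=True):
              if not raw or not raw.startswith("data:"):
                  continue
              data = raw[len("data:"):].strip()
              if not data:
                  note_still_working()  # heartbeat; throttle the user-facing note
                  continue
              event = json.loads(data)
              if event.get("type") == "text":   # event field names assumed
                  show_text(event.get("text", ""))
                  got_text = True
      # If no text arrived, the caller should poll /api/state and summarize the change.
      return got_text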

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
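
One way to centralize those recovery rules is a small dispatcher like the sketch below. The codes and actions mirror the list above; the ctx helper methods (reacquire_anonymous_token, create_session, notify, regenerate_client_id) are hypothetical placeholders for the agent's own plumbing.

  import time

  def handle_code(code, ctx):
      # Apply the recovery rules listed above; ctx supplies hypothetical helpers.
      if code == 0:
          return "ok"
      if code == 1001:
          ctx.reacquire_anonymous_token()        # token expired or invalid
      elif code == 1002:
          ctx.create_session()                   # session not found
      elif code == 2001:
          ctx.notify("Out of credits; register or top up to continue.")
      elif code == 4001:
          ctx.notify("Unsupported file type; see the accepted formats list.")
      elif code == 4002:
          ctx.notify("File too large (max 200MB); try compressing or trimming.")
      elif code == 400:
          ctx.regenerate_client_id()             # missing X-Client-Id, then retry
      elif code == 402:
          ctx.notify("Export requires a paid plan; this is not a credit issue.")
      elif code == 429:
          time.sleep(30)                         # rate limited; retry once
          return "retry"
      return "handled"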

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn these photos into a short video with transitions and music" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, HEIC for the smoothest experience.

Export as MP4 for widest compatibility.

Common Workflows

Quick edit: Upload → "turn these photos into a short video with transitions and music" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
