Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Image To Video In Chatgpt

v1.0.0

Turn three product photos in JPG format into a 1080p animated video just by typing what you need. Whether it's converting still images into shareable vi...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for tk8544-b/image-to-video-in-chatgpt.

Prompt preview: Install & Setup
Install the skill "Image To Video In Chatgpt" (tk8544-b/image-to-video-in-chatgpt) from ClawHub.
Skill page: https://clawhub.ai/tk8544-b/image-to-video-in-chatgpt
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install image-to-video-in-chatgpt

ClawHub CLI


npx clawhub@latest install image-to-video-in-chatgpt
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill claims to convert JPG images to 1080p videos via a remote GPU service, and its runtime instructions call a nemo-video API; the single required credential (NEMO_TOKEN) is appropriate for that purpose. However, the SKILL.md frontmatter lists a config path (~/.config/nemovideo/) while the registry metadata declares no required config paths, which is an unexplained inconsistency.
Instruction Scope
Instructions are focused on the remote API workflow (auth, session creation, SSE chat, upload, export). They instruct the agent to create an anonymous token if no NEMO_TOKEN is set and to store session_id. The SKILL.md also tells the agent to read its own YAML frontmatter for attribution and to detect install path (which implies accessing local paths); reading the skill file for metadata is reasonable, but the install-path detection and the frontmatter-configPath entry suggest the skill might look at local config locations — this is not explicitly declared in the registry and should be clarified. All user images will be uploaded to mega-api-prod.nemovideo.ai (expected for remote rendering) — note privacy implications.
Install Mechanism
No install spec or code files — instruction-only skill (no binaries downloaded or archives extracted). This is the lowest install risk.
Credentials
Only one required environment variable (NEMO_TOKEN), which is proportionate for a remote API. The skill also documents a flow to obtain a temporary anonymous token via a public endpoint (UUID → anonymous-token), which is reasonable. Still, the registry/frontmatter mismatch about config paths is unexplained and should be resolved before trusting automatic local config reads.
Persistence & Privilege
The always flag is false and there is no install step; the skill does not request persistent system-level privileges. It keeps session state (session_id) for the API session, which is normal for this use case.
What to consider before installing
This skill appears to do what it says (upload images to a remote nemo-video service and return rendered videos) and asks only for a single API token. Before installing or providing a NEMO_TOKEN:

  1. Verify the backend domain (mega-api-prod.nemovideo.ai) and prefer an official homepage or repository; none is listed.
  2. Understand the privacy implications: your images are uploaded to the provider. Test with non-sensitive images first and confirm their retention/deletion policy.
  3. Prefer the anonymous-token flow (ephemeral token) over a long-lived personal token when possible.
  4. Note the small registry vs. SKILL.md mismatch (a config path appears in the frontmatter but not in the registry); ask the author whether the skill will read ~/.config/nemovideo/ or other local files.
  5. If you must provide a NEMO_TOKEN, scope it narrowly (least privilege) and rotate or revoke it after testing.

If you want higher assurance, request the skill's source repo or official provider documentation before use.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎞️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk979550f10y81t2990v4gzv3s184w85g
61 downloads · 0 stars · 1 version
Updated 1 week ago
v1.0.0
MIT-0

Getting Started

Share your static images and I'll get started on AI video creation. Or just tell me what you're thinking.

Try saying:

  • "convert my static images"
  • "export 1080p MP4"
  • "turn these images into a smooth"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
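
For reference, here is a minimal sketch of this bootstrap in Python, assuming the endpoints behave as documented above. The data.token field is documented; the exact location of session_id in the response, the requests library usage, and the error handling are assumptions.

```python
import os
import uuid
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def get_token() -> str:
    """Reuse NEMO_TOKEN if set, otherwise request a free anonymous token (100 credits, 7-day expiry)."""
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token
    resp = requests.post(
        f"{BASE}/api/auth/anonymous-token",
        headers={"X-Client-Id": str(uuid.uuid4())},  # fresh client identifier
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["token"]  # documented as data.token

def create_session(token: str) -> str:
    """Open a working session; session_id is assumed to sit at the top level of the response."""
    resp = requests.post(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        headers={"Authorization": f"Bearer {token}"},
        json={"task_name": "project"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["session_id"]
```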

Image to Video in ChatGPT — Convert Images into Video Clips

Send me your static images and describe the result you want. The AI video creation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload three product photos in JPG format, type "turn these images into a smooth animated video with transitions", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: using fewer images with clear subjects produces smoother motion results.

Matching Input to Actions

User prompts referencing image to video in chatgpt, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... → Action
  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
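
As a loose illustration of that routing, here is a sketch that matches keywords to sections; the keyword sets and section labels come from the table above, while the function itself and its fallback behavior are assumptions (real intent classification would be fuzzier).

```python
# Keyword sets copied from the routing table; matching is plain substring search.
ROUTES = [
    ({"export", "导出", "download", "send me the video"}, "§3.5 Export"),
    ({"credits", "积分", "balance", "余额"}, "§3.3 Credits"),
    ({"status", "状态", "show tracks"}, "§3.4 State"),
    ({"upload", "上传"}, "§3.2 Upload"),
]

def route(prompt: str) -> str:
    """Return the first matching section; everything else falls through to the SSE chat."""
    lowered = prompt.lower()
    for keywords, section in ROUTES:
        if any(keyword in lowered for keyword in keywords):
            return section
    return "§3.1 SSE"  # generate, edit, add BGM, ...
```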

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> with a multipart file or JSON with URLs.
  4. Credits: GET /api/credits/balance/simple returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest returns the current draft and media info.
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL.

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
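
Putting steps 3 and 6 together, here is a rough sketch of an upload followed by an export-and-poll loop. The endpoint paths and the 30-second poll interval come from the list above; the request payload and the status/download_url response fields are assumptions.

```python
import time
import requests

def upload_image(base: str, headers: dict, session_id: str, path: str) -> dict:
    """Step 3: multipart upload of a local file into the session."""
    with open(path, "rb") as f:
        resp = requests.post(
            f"{base}/api/upload-video/nemo_agent/me/{session_id}",
            headers=headers,
            files={"file": f},
            timeout=120,
        )
    resp.raise_for_status()
    return resp.json()

def export_and_wait(base: str, headers: dict, render_id: str, draft: dict) -> str:
    """Step 6: queue the render, then poll every 30s until the job reports completion."""
    resp = requests.post(
        f"{base}/api/render/proxy/lambda",
        headers=headers,
        json={"render_id": render_id, "draft": draft},  # payload shape is an assumption
        timeout=60,
    )
    resp.raise_for_status()
    for _ in range(20):  # give up after roughly 10 minutes
        time.sleep(30)
        status = requests.get(
            f"{base}/api/render/proxy/lambda/{render_id}", headers=headers, timeout=60
        ).json()
        if status.get("status") == "completed":  # field names assumed
            return status["download_url"]
    raise TimeoutError("render did not complete; the job may be orphaned")
```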

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: image-to-video-in-chatgpt
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from the install path (~/.clawhub/ → clawhub, ~/.cursor/skills/ → cursor, else unknown)

Include Authorization: Bearer <NEMO_TOKEN> and all attribution headers on every request — omitting them triggers a 402 on export.
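
A minimal sketch of the header set this implies, assuming the values are read from the SKILL.md frontmatter and the detected install path; the literal values below are placeholders.

```python
import os

# Placeholder values: X-Skill-Version and X-Skill-Platform should be filled in
# from the frontmatter and install-path detection described above.
COMMON_HEADERS = {
    "Authorization": f"Bearer {os.environ['NEMO_TOKEN']}",
    "X-Skill-Source": "image-to-video-in-chatgpt",
    "X-Skill-Version": "1.0.0",
    "X-Skill-Platform": "clawhub",  # or "cursor" / "unknown"
}
```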

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
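
Read together with the field mapping above, that timeline might serialize roughly like the fragment below. This is a hypothetical illustration only; the real draft schema returned by /api/state may nest and name things differently.

```python
# Hypothetical draft fragment using the documented short field names.
draft = {
    "t": [  # tracks
        {"tt": 0, "sg": [{"d": 10_000, "m": {"clip": "city timelapse"}}]},         # video, 0-10s
        {"tt": 1, "sg": [{"d": 10_000, "m": {"name": "Lo-fi", "volume": 0.35}}]},  # BGM at 35%
        {"tt": 7, "sg": [{"d": 3_000, "m": {"text": "Urban Dreams"}}]},            # title, 0-3s
    ]
}
```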

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

Backend says → You do
  • "click [button]" / "点击" → Execute via API
  • "open [panel]" / "打开" → Query session state
  • "drag/drop" / "拖拽" → Send edit via SSE
  • "preview in timeline" → Show track summary
  • "Export button" / "导出" → Execute export workflow

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
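
A rough sketch of an SSE consumer for /run_sse under those rules, assuming standard text/event-stream framing; the shape of each event payload (a text field carrying user-visible output) is an assumption, since only the behavior above is documented.

```python
import json
import requests

def run_sse(base: str, headers: dict, session_id: str, message: str):
    """Stream one chat turn, yielding user-visible text; silent streams should fall back to /api/state."""
    body = {
        "session_id": session_id,
        "new_message": {"parts": [{"text": message}]},  # field path from the endpoint list
    }
    saw_text = False
    with requests.post(
        f"{base}/run_sse",
        headers={**headers, "Accept": "text/event-stream"},
        json=body,
        stream=True,
        timeout=900,  # the stream may stay open for up to 15 minutes
    ) as resp:
        resp.raise_for_status()
        for raw in resp.iter_lines(decode_unicode=True):
            if not raw or not raw.startswith("data:"):
                continue  # heartbeat or non-data line: the backend is still working
            payload = raw[len("data:"):].strip()
            if not payload:
                continue  # empty data: line, also a keep-alive
            event = json.loads(payload)
            text = event.get("text")  # assumed field name for user-facing text
            if text:
                saw_text = True
                yield text
    if not saw_text:
        # Roughly 30% of edits end without text: confirm the change via /api/state instead.
        yield "(no text returned; check /api/state for the updated timeline)"
```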

Error Handling

Code | Meaning | Action
0 | Success | Continue
1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days)
1002 | Session not found | New session (§3.0)
2001 | No credits | Anonymous: show the registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered: "Top up credits in your account"
4001 | Unsupported file | Show supported formats
4002 | File too large | Suggest compress/trim
400 | Missing X-Client-Id | Generate a Client-Id and retry (see §1)
402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
429 | Rate limit (1 token/client/7 days) | Retry in 30s once
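
For the most mechanical of these cases, a hedged sketch of how a client might react; the envelope field name code and the callback style are assumptions.

```python
import time

def handle_code(envelope: dict, reauth, retry):
    """React to the documented backend codes; unlisted codes are surfaced to the caller."""
    code = envelope.get("code", 0)  # field name assumed
    if code == 0:
        return envelope             # success: continue with the payload
    if code == 1001:
        reauth()                    # token expired (7-day lifetime): re-run the anonymous-token flow
        return retry()
    if code == 429:
        time.sleep(30)              # rate limited: retry once after 30s
        return retry()
    raise RuntimeError(f"Backend error {code}: see the error table above")
```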

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn these images into a smooth animated video with transitions" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, GIF for the smoothest experience.

Export as MP4 for widest compatibility across platforms.

Common Workflows

Quick edit: Upload → "turn these images into a smooth animated video with transitions" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
