Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Openai Image To Video

v1.0.0

Skip the learning curve of professional editing software. Describe what you want — turn this image into a 5-second animated video clip — and get animated vid...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for linmillsd7/openai-image-to-video.

Prompt preview: Install & Setup
Install the skill "Openai Image To Video" (linmillsd7/openai-image-to-video) from ClawHub.
Skill page: https://clawhub.ai/linmillsd7/openai-image-to-video
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install linmillsd7/openai-image-to-video

ClawHub CLI


npx clawhub@latest install openai-image-to-video
Security Scan

VirusTotal: Benign (View report →)
OpenClaw: Suspicious (medium confidence)

⚠ Purpose & Capability
The skill is named and marketed as 'OpenAI Image To Video', but all runtime instructions and the required credential (NEMO_TOKEN) point to a third-party service at mega-api-prod.nemovideo.ai (Nemo). The registry metadata included with the submission lists no config paths, yet the SKILL.md frontmatter declares one (~/.config/nemovideo/). This mismatch, together with the use of 'OpenAI' in the display name, is misleading and may cause users to believe the skill is an official OpenAI product when it is not.
⚠ Instruction Scope
The SKILL.md instructs the agent to automatically create anonymous tokens by POSTing a generated UUID to an external API when NEMO_TOKEN is not set, to create and persist sessions, and to send user files to the external API. It also instructs the agent to detect the install path in order to set an attribution header (which implies reading filesystem paths). These network calls and local reads are within the expected scope for a remote rendering service, but the file instructs the agent to hide raw responses and token values from the user — that reduces transparency and is worth noting.
Install Mechanism
Instruction-only skill with no install spec and no code files — nothing is downloaded or written by an installer step. This is the lowest risk installation pattern.
⚠ Credentials
The skill requests a single credential (NEMO_TOKEN), which matches the described API usage. However, the registry metadata given to the evaluator lists no required config paths while the SKILL.md frontmatter lists ~/.config/nemovideo/; that discrepancy is unexplained. Also the skill will create an anonymous token automatically when none is provided, which means it will perform network calls and persist session data unless the user supplies their own token.
Persistence & Privilege
The skill is not always-enabled and allows user invocation. It instructs the agent to store the session_id and, per the frontmatter, possibly to write to ~/.config/nemovideo/; storing its own session state is normal for this class of skill, but the unspecified storage location and the lack of explicit detail about what is written are worth checking.
What to consider before installing
This skill appears to be a client for a third-party service (nemovideo.ai), not an official OpenAI product despite the name. Before installing or using it: (1) verify the publisher and service reputation — there is no homepage and the owner ID is unknown; (2) prefer setting your own NEMO_TOKEN rather than letting the skill fetch one automatically; (3) expect the skill to upload any images you provide to an external API and to store session identifiers under a config path (the SKILL.md mentions ~/.config/nemovideo/); (4) if you need more assurance, ask the publisher where session data is stored, what is written to disk, and whether the token grants only limited, revocable access. If the answers are not satisfactory, avoid installing, or at least avoid using the skill with sensitive files or credentials.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97a6scjf1ndasrqvjkjrmtf2x85fhnn
18 downloads · 0 stars · 1 version
Updated 3h ago
v1.0.0 · MIT-0

Getting Started

Share your images and I'll get started on AI video creation. Or just tell me what you're thinking.

Try saying:

  • "convert my images"
  • "export 1080p MP4"
  • "turn this image into a 5-second"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.
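For concreteness, here is a minimal Python sketch of steps 1 and 2, assuming the requests library. The response shapes are taken from the API reference later on this page (data.token for the anonymous token, top-level session_id for session creation); treat them as assumptions until verified.

```python
import os
import uuid
import requests

API_BASE = "https://mega-api-prod.nemovideo.ai"

def get_token() -> str:
    """Use NEMO_TOKEN from the environment, else request an anonymous token."""
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token
    # Anonymous flow: POST a freshly generated UUID as X-Client-Id;
    # the token comes back under data.token (100 credits, valid 7 days).
    resp = requests.post(
        f"{API_BASE}/api/auth/anonymous-token",
        headers={"X-Client-Id": str(uuid.uuid4())},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["token"]

def create_session(token: str, language: str = "en") -> str:
    """Create a session; the session_id is reused on all later requests."""
    resp = requests.post(
        f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        headers={"Authorization": f"Bearer {token}"},
        json={"task_name": "project", "language": language},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["session_id"]  # the response also carries task_id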

Keep setup communication brief. Don't display raw API responses or token values to the user.

OpenAI Image to Video — Convert Images Into Video Clips

Send me your images and describe the result you want. The AI video creation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a single product photo or illustrated scene, type "turn this image into a 5-second animated video clip", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: high-contrast images with clear subjects produce smoother motion results.

Matching Input to Actions

User prompts referencing openai image to video, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
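A hedged sketch of what that routing could look like in Python; the handler names and the first-match-wins ordering are illustrative, not part of the skill itself.

```python
import re

# Ordered (pattern, action) pairs; first match wins, the default falls
# through to SSE. Patterns mirror the routing table above.
ROUTES = [
    (re.compile(r"export|导出|download|send me the video", re.I), "export"),  # §3.5
    (re.compile(r"credits|积分|balance|余额", re.I), "credits"),              # §3.3
    (re.compile(r"status|状态|show tracks", re.I), "state"),                  # §3.4
    (re.compile(r"upload|上传", re.I), "upload"),                             # §3.2
]

def route(message: str, has_attachment: bool = False) -> str:
    """Map a user message to an action name; attachments imply upload."""
    if has_attachment:
        return "upload"
    for pattern, action in ROUTES:
        if pattern.search(message):
            return action
    return "sse"  # everything else: generate, edit, add BGM, ... (§3.1)

assert route("please export the clip") == "export"
assert route("make the title bigger") == "sse"
```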

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers described below. If any header is missing, exports return 402.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is openai-image-to-video, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).
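As an illustration, a small Python helper that derives those headers under the stated rules; reading the version field out of the frontmatter is left to the caller.

```python
from pathlib import Path

def detect_platform(install_path: str) -> str:
    """Infer X-Skill-Platform from where the skill file was installed."""
    path = str(Path(install_path).expanduser()) + "/"
    if "/.clawhub/" in path:
        return "clawhub"
    if "/.cursor/skills/" in path:
        return "cursor"
    return "unknown"

def attribution_headers(version: str, install_path: str) -> dict:
    """Build the three attribution headers required on every API call."""
    return {
        "X-Skill-Source": "openai-image-to-video",
        "X-Skill-Version": version,  # from the frontmatter version field
        "X-Skill-Platform": detect_platform(install_path),
    }
```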

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}
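For reference, both upload variants sketched in Python (multipart file vs. URL); the attribution headers described above are omitted here for brevity and would need to be merged in.

```python
import requests

API_BASE = "https://mega-api-prod.nemovideo.ai"

def upload_file(token: str, sid: str, file_path: str) -> dict:
    """Multipart upload, the equivalent of: -F "files=@/path"."""
    with open(file_path, "rb") as fh:
        resp = requests.post(
            f"{API_BASE}/api/upload-video/nemo_agent/me/{sid}",
            headers={"Authorization": f"Bearer {token}"},
            files={"files": fh},
            timeout=120,
        )
    resp.raise_for_status()
    return resp.json()

def upload_by_url(token: str, sid: str, media_url: str) -> dict:
    """URL variant: the backend fetches the media itself."""
    resp = requests.post(
        f"{API_BASE}/api/upload-video/nemo_agent/me/{sid}",
        headers={"Authorization": f"Bearer {token}"},
        json={"urls": [media_url], "source_type": "url"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()
```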

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
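A sketch of that export-and-poll loop in Python; the caller supplies a headers dict carrying Authorization plus the attribution headers, since exports return 402 without them, and the top-level status/output.url fields follow the description above.

```python
import time
import requests

API_BASE = "https://mega-api-prod.nemovideo.ai"

def export_draft(headers: dict, sid: str, draft: dict) -> str:
    """Start a render job, poll every 30s, and return the download URL."""
    job_id = f"render_{int(time.time())}"
    resp = requests.post(
        f"{API_BASE}/api/render/proxy/lambda",
        headers=headers,
        json={
            "id": job_id,
            "sessionId": sid,
            "draft": draft,
            "output": {"format": "mp4", "quality": "high"},
        },
        timeout=60,
    )
    resp.raise_for_status()
    for _ in range(20):  # give up after roughly ten minutes
        time.sleep(30)   # documented poll interval
        status = requests.get(
            f"{API_BASE}/api/render/proxy/lambda/{job_id}",
            headers=headers,
            timeout=30,
        ).json()
        if status.get("status") == "completed":
            return status["output"]["url"]
    raise TimeoutError(f"render job {job_id} did not complete in time")
```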

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Error Handling

Code | Meaning | Action
0 | Success | Continue
1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days)
1002 | Session not found | New session §3.0
2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account"
4001 | Unsupported file | Show supported formats
4002 | File too large | Suggest compress/trim
400 | Missing X-Client-Id | Generate Client-Id and retry (see §1)
402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
429 | Rate limit (1 token/client/7 days) | Retry in 30s once
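One way to keep that table in code is a simple lookup; the action strings are illustrative summaries of the table, not API behavior.

```python
def next_step(code: int) -> str:
    """Map a response code from the table above to the documented action."""
    actions = {
        0: "continue",
        1001: "re-authenticate via the anonymous-token endpoint",
        1002: "create a new session",
        2001: "show registration or top-up guidance",
        4001: "show the supported formats",
        4002: "suggest compressing or trimming the file",
        400: "generate an X-Client-Id and retry",
        402: "prompt the user to register or upgrade the plan",
        429: "retry once after 30 seconds",
    }
    return actions.get(code, "surface the error to the user")
```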

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty "data:" lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
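A rough Python sketch of that stream loop, assuming requests with stream=True. The response event shape used here (content.parts[].text, mirroring the request body) is an assumption.

```python
import json
import time
import requests

API_BASE = "https://mega-api-prod.nemovideo.ai"

def stream_message(token: str, sid: str, text: str) -> list[str]:
    """POST to /run_sse and collect text events; an empty result means the
    stream closed silently, so the caller should poll /api/state instead."""
    payload = {
        "app_name": "nemo_agent",
        "user_id": "me",
        "session_id": sid,
        "new_message": {"parts": [{"text": text}]},
    }
    texts: list[str] = []
    last_note = time.monotonic()
    with requests.post(
        f"{API_BASE}/run_sse",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "text/event-stream",
        },
        json=payload,
        stream=True,
        timeout=900,  # read timeout aligned with the 15-minute maximum
    ) as resp:
        resp.raise_for_status()
        for raw in resp.iter_lines(decode_unicode=True):
            if not raw or raw.strip() == "data:":
                # Heartbeat or empty data: line -- backend is still working.
                if time.monotonic() - last_note > 120:
                    print("⏳ Still working...")
                    last_note = time.monotonic()
                continue
            if raw.startswith("data:"):
                event = json.loads(raw[5:].strip())
                # Tool-call events stay internal; only text parts surface.
                for part in event.get("content", {}).get("parts", []):
                    if "text" in part:
                        texts.append(part["text"])
    return texts
```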

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
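A sketch of producing that summary from the compact draft fields. Only t, tt, sg, d, and m are documented above; the segment keys start and m.name are hypothetical stand-ins.

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}  # tt values documented above

def summarize_draft(draft: dict) -> str:
    """Render the compact draft structure as a plain-text timeline summary."""
    tracks = draft.get("t", [])
    lines = [f"Timeline ({len(tracks)} tracks):"]
    for i, track in enumerate(tracks, start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        for seg in track.get("sg", []):
            start_ms = seg.get("start", 0)            # hypothetical key
            end_ms = start_ms + seg.get("d", 0)       # d = duration in ms
            label = seg.get("m", {}).get("name", "")  # hypothetical metadata key
            lines.append(
                f"  {i}. {kind}: {label} ({start_ms / 1000:g}-{end_ms / 1000:g}s)"
            )
    return "\n".join(lines)
```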

Common Workflows

Quick edit: Upload → "turn this image into a 5-second animated video clip" → Download MP4. The render comes back in roughly 30-60 seconds.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn this image into a 5-second animated video clip" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, HEIC for the smoothest experience.

Export as MP4 for widest compatibility across social platforms.
