Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

AI Image to Video Anime

v1.0.0

Convert still images into animated anime clips with this skill. Works with PNG, JPG, WEBP, and BMP files up to 200MB. Anime artists and fans use it for turning s...

Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description align with the documented API endpoints (upload, render, export). Requesting a NEMO_TOKEN is plausible for a cloud render service. However, the skill frontmatter in SKILL.md lists a config path (~/.config/nemovideo/) while the registry metadata lists no required config paths — an inconsistency. Also the SKILL.md describes an anonymous-token fallback flow, which conflicts with the registry claiming NEMO_TOKEN is required.
Instruction Scope
Instructions are focused on session creation, SSE streaming, uploads, and exports — all within the stated image→video domain. They require saving session_id and using Bearer auth, include detailed endpoints, and mandate attribution headers. No instructions ask the agent to read arbitrary system files or unrelated credentials, but the file frontmatter implies a config directory which the instructions never reference explicitly. The skill also asks the agent to 'auto-detect' install path for X-Skill-Platform which may require filesystem inspection not otherwise justified.
Install Mechanism
Instruction-only skill with no install spec and no code files. This is low-risk from an installation perspective (nothing downloaded or written by an installer).
Credentials
The registry declares a single env var NEMO_TOKEN (primary credential) which is appropriate for an external API. But SKILL.md provides an explicit anonymous-token acquisition flow if NEMO_TOKEN is absent, implying the env var is optional — this is a direct mismatch. SKILL.md's frontmatter mentions a config path (~/.config/nemovideo/) which, if actually required, would grant access to a user-specific directory that could contain tokens; the registry shows no config paths. These inconsistencies make it unclear whether you must provide persistent credentials (NEMO_TOKEN) or whether the skill will generate and store tokens/session info itself.
Persistence & Privilege
The skill is not always-enabled and is user-invocable; it does instruct saving a session_id for ongoing jobs but does not request elevated platform privileges or system-wide changes. No 'always: true' or other high-privilege flags are present.
Scan Findings in Context
[no_code_files_or_regex_findings] expected: The regex-based scanner found nothing because this is an instruction-only skill with no code files. That is expected, but absence of findings is not evidence of safety — the SKILL.md is the surface to review.
What to consider before installing
Before installing, confirm two things with the skill author: (1) Does the skill actually require you to set NEMO_TOKEN, or will it always use the anonymous-token flow? The registry says NEMO_TOKEN is required, but SKILL.md offers a fallback — that mismatch should be clarified. (2) Why does the SKILL.md frontmatter reference ~/.config/nemovideo/? If the skill will read or write that config directory it could access stored credentials. If you proceed, avoid pasting long-lived secrets into the environment unless you trust the developer and the API domain (mega-api-prod.nemovideo.ai). Prefer using the anonymous-token flow (short-lived) or run the skill in an environment without other secrets. Ask the author to document where session tokens are stored and to remove/justify any need to access user config paths. If you cannot verify the publisher or API domain, do not provide persistent credentials.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎌 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk973ckabqxxb8kyxybhrkhk2118580be
16 downloads
0 stars
1 version
Updated 3h ago
v1.0.0
MIT-0

Getting Started

Share your still images and I'll get started on AI anime video generation. Or just tell me what you're thinking.

Try saying:

  • "convert my still images"
  • "export 1080p MP4"
  • "convert this character image into a short animated anime video clip"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
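The token fallback and session bootstrap above can be sketched as plain request builders (a minimal sketch: no network calls are made, and the helper names `anonymous_token_request` / `session_request` are illustrative, not part of the skill):

```python
import uuid

API_BASE = "https://mega-api-prod.nemovideo.ai"

def anonymous_token_request():
    """Request spec for the free-token fallback: a fresh UUID goes in X-Client-Id.

    The response field data.token becomes NEMO_TOKEN (100 credits, 7-day expiry).
    """
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": str(uuid.uuid4())},
    }

def session_request(token, task_name="project"):
    """Request spec for session creation; save session_id from the response."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        "headers": {"Authorization": f"Bearer {token}"},
        "json": {"task_name": task_name},
    }
```

Building the request specs separately from sending them makes it easy to log exactly what would be transmitted before any credential leaves the machine.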

AI Image to Video Anime — Convert Images into Anime Clips

This tool takes your still images and runs AI anime video generation through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a single anime-style character illustration in PNG format and want to convert this character image into a short animated anime video clip — the backend processes it in about 1-2 minutes and hands you a 1080p MP4.

Tip: high-contrast anime-style images with clear outlines produce the smoothest motion results.

Matching Input to Actions

User prompts referencing ai image to video anime, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says… | Action | Skip SSE? |
| --- | --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | Yes |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | Yes |
| "status" / "状态" / "show tracks" | §3.4 State | Yes |
| "upload" / "上传" / user sends file | §3.2 Upload | Yes |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | No |

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Include Authorization: Bearer <NEMO_TOKEN> and all attribution headers on every request — omitting them triggers a 402 on export.

Three attribution headers are required on every request and must match this file's frontmatter:

| Header | Value |
| --- | --- |
| X-Skill-Source | ai-image-to-video-anime |
| X-Skill-Version | frontmatter version |
| X-Skill-Platform | auto-detect: clawhub / cursor / unknown from install path |
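A helper that assembles these headers might look like this (a sketch; the function name and the way the platform string is supplied are assumptions, since real auto-detection would inspect the install path):

```python
def attribution_headers(token, version="1.0.0", platform="unknown"):
    """Bearer auth plus the three required X-Skill-* attribution headers.

    Omitting any of them triggers a 402 on export.
    """
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "ai-image-to-video-anime",
        "X-Skill-Version": version,    # must match the frontmatter version
        "X-Skill-Platform": platform,  # clawhub / cursor / unknown
    }
```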

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.
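The documented /run_sse body can be built with a small helper (sketch; `sse_message` is an illustrative name, but the payload shape follows the spec above):

```python
def sse_message(session_id, text):
    """Body for POST /run_sse; send with Accept: text/event-stream."""
    return {
        "app_name": "nemo_agent",
        "user_id": "me",
        "session_id": session_id,
        "new_message": {"parts": [{"text": text}]},
    }
```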

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
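The export-and-poll loop described above could be sketched as follows (assumptions: `fetch_status` stands in for GET /api/render/proxy/lambda/<id>, and the `max_polls` cap is illustrative):

```python
import time

def export_body(session_id, draft):
    """Export request body; the id embeds a timestamp as in the docs (free, no credits)."""
    return {
        "id": f"render_{int(time.time())}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }

def poll_until_done(fetch_status, poll_s=30, max_polls=20):
    """Call fetch_status() every poll_s seconds until status == 'completed',
    then return the download URL at output.url."""
    for _ in range(max_polls):
        resp = fetch_status()
        if resp.get("status") == "completed":
            return resp["output"]["url"]
        time.sleep(poll_s)
    raise TimeoutError("render did not complete")
```

Injecting the status fetcher as a callable keeps the polling logic testable without hitting the live endpoint.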

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
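The recovery rules above can be collapsed into a lookup table (sketch; the action labels are invented for illustration, not defined by the API):

```python
ERROR_ACTIONS = {
    0:    "continue",              # success
    1001: "reacquire_token",       # POST /api/auth/anonymous-token again
    1002: "create_new_session",
    2001: "offer_topup_or_signup",
    4001: "show_accepted_formats",
    4002: "suggest_compression",
    400:  "regenerate_client_id",
    402:  "explain_subscription",  # not a credit issue
    429:  "wait_30s_retry_once",
}

def recovery_action(code):
    """Map an API error code to the prescribed recovery step."""
    return ERROR_ACTIONS.get(code, "report_unknown_error")
```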

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow
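The verb mapping above can be expressed as a first-match lookup (sketch; the action labels are invented, and naive substring matching is only a rough stand-in for real intent classification):

```python
GUI_VERB_ACTIONS = [
    (("click", "点击"), "call_endpoint"),
    (("open", "打开"), "query_session_state"),
    (("drag", "drop", "拖拽"), "send_sse_edit"),
    (("preview in timeline",), "summarize_tracks"),
    (("export", "导出"), "run_export_workflow"),
]

def translate_gui_instruction(text):
    """Return the API action for the first GUI verb found in the backend's text."""
    lowered = text.lower()
    for verbs, action in GUI_VERB_ACTIONS:
        if any(v in lowered for v in verbs):
            return action
    return "pass_through"
```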

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
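Classifying raw SSE lines per the rules above might look like this (a minimal sketch of standard SSE framing; real streams also carry `event:` and `id:` fields that this ignores):

```python
def classify_sse_line(line):
    """Sort one raw SSE line into heartbeat, comment, data payload, or other."""
    line = line.rstrip("\r\n")
    if line.strip() in ("", "data:"):  # keepalive: backend is still working
        return ("heartbeat", None)
    if line.startswith(":"):           # SSE comment, often used as a ping
        return ("comment", line[1:].strip())
    if line.startswith("data:"):
        return ("data", line[5:].strip())
    return ("other", line)
```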

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
1. Video: city timelapse (0-10s)
2. BGM: Lo-fi (0-10s, 35%)
3. Title: "Urban Dreams" (0-3s)
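Using those short keys, a summary like the one above can be produced with a small renderer (sketch; the per-track `m` metadata fields such as `name` are assumptions beyond the documented keys):

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_timeline(draft):
    """Render draft JSON (t=tracks, tt=type, sg=segments, d=duration ms, m=metadata)
    as a one-line-per-track summary."""
    tracks = draft.get("t", [])
    lines = [f"Timeline ({len(tracks)} tracks):"]
    for i, track in enumerate(tracks, 1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        total_ms = sum(seg.get("d", 0) for seg in track.get("sg", []))
        name = track.get("m", {}).get("name", "untitled")
        lines.append(f"{i}. {kind}: {name} (0-{total_ms / 1000:g}s)")
    return "\n".join(lines)
```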

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "convert this character image into a short animated anime video clip" — concrete instructions get better results.

Max file size is 200MB. Stick to PNG, JPG, WEBP, BMP for the smoothest experience.

Export as MP4 for widest compatibility.

Common Workflows

Quick edit: Upload → "convert this character image into a short animated anime video clip" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
