Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

AI Video Generator Free Chat

v1.0.0

Generate AI videos from text prompts with this skill. Works with MP4, MOV, WebM, and GIF files up to 500MB. Content creators use it for generating vide...

Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name and description match the runtime instructions (calls to a nemo video API, upload, render, SSE). Requesting a single service token (NEMO_TOKEN) is coherent for a cloud-rendering video service. However, the SKILL.md frontmatter includes a configPaths entry (~/.config/nemovideo/) while the registry listing says no required config paths — this mismatch is unexplained.
Instruction Scope
Instructions are specific about API endpoints, session flow, SSE, uploads, and token acquisition. They explicitly tell the agent to POST for anonymous tokens and to 'save session_id'. They also direct deriving X-Skill-Platform from the agent install path (e.g., ~/.clawhub/, ~/.cursor/skills/), which implies the agent may need to inspect its filesystem/installation path — this is outside pure 'send requests' behavior and should be confirmed.
Install Mechanism
No install spec and no code files (instruction-only) — the skill does not write code to disk or download external artifacts, which is the lowest-risk install mechanism.
Credentials
Only a single credential (NEMO_TOKEN) is declared as required and is appropriate for a third-party video API. But the SKILL.md also references a config path in its metadata and expects generation/storage of anonymous tokens and session IDs; clarify whether the skill will persist tokens/session IDs to disk and whether it actually reads ~/.config/nemovideo/.
Persistence & Privilege
always is false and the skill does not request elevated platform-wide privileges. The only potential persistence is saving a session_id or anonymous token (7-day expiry) — the spec does not require permanent always-on presence or changes to other skills.
What to consider before installing
Before installing:

  1. Confirm the NEMO_TOKEN usage — only provide a token you trust; prefer an ephemeral/limited token or the anonymous flow for testing.
  2. Ask the skill author to clarify the config path discrepancy: SKILL.md mentions ~/.config/nemovideo/ but the registry lists no required config paths — find out whether the skill will read or write that directory.
  3. Verify where session tokens and anonymous tokens are stored (memory vs disk) and how long they last; avoid storing long-lived credentials.
  4. Be aware that uploading files sends your media to an external service (mega-api-prod.nemovideo.ai). Do not upload sensitive or private content unless you trust that endpoint and its privacy policy.
  5. If you're uncomfortable with the skill inspecting install paths to fill X-Skill-Platform headers (this may reveal your local install layout), request a version that omits that behavior.
  6. If possible, test with throwaway data and a throwaway token first.

These issues look like sloppy metadata and instructions rather than overtly malicious behavior, but clarify the points above before enabling the skill.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🤖 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk970vz83wcr0g45cqjb2vh9amh85b0r4
38 downloads
0 stars
1 version
Updated 22h ago
v1.0.0
MIT-0

Getting Started

Share your text prompts and I'll get started on AI video generation. Or just tell me what you're thinking.

Try saying:

  • "generate a video from my text prompt"
  • "export 1080p MP4"
  • "generate a 30-second explainer video from my script"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
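The token and session steps above can be sketched in Python. This is a minimal sketch using only the standard library; the URLs, headers, and response fields come from the instructions above, and error handling is omitted for brevity.

```python
import json
import uuid
import urllib.request

BASE_URL = "https://mega-api-prod.nemovideo.ai"

def fetch_anonymous_token(client_id: str) -> str:
    """POST to the anonymous-token endpoint; the token arrives in data.token."""
    req = urllib.request.Request(
        f"{BASE_URL}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": client_id},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["token"]

def create_session(token: str) -> str:
    """Start an editing session; the response carries session_id."""
    body = json.dumps({"task_name": "project"}).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/api/tasks/me/with-session/nemo_agent",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["session_id"]

# A fresh UUID serves as the client identifier for the free-token flow.
client_id = str(uuid.uuid4())
```

The anonymous token carries 100 credits and a 7-day expiry, so a new client UUID per test run keeps throwaway sessions isolated.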

AI Video Generator Free Chat — Generate Videos via Chat Prompts

Drop your text prompts in the chat and tell me what you need. I'll handle the AI video generation on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send a short text description of a product demo scene, ask to "generate a 30-second explainer video from my script" in a chat prompt, and about 1-2 minutes later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — shorter and more specific chat prompts produce more accurate video results.

Matching Input to Actions

User prompts referencing ai video generator free chat, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says → Action

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends a file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
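The routing table above can be sketched as a simple keyword classifier. The trigger strings (including the Chinese ones, which are literal match strings) come from the table; the function name and substring-match behavior are illustrative assumptions, since the listing only says "keyword and intent classification".

```python
# Keyword → action routing, per the table above.
ROUTES = [
    (("export", "导出", "download", "send me the video"), "§3.5 Export"),
    (("credits", "积分", "balance", "余额"), "§3.3 Credits"),
    (("status", "状态", "show tracks"), "§3.4 State"),
    (("upload", "上传"), "§3.2 Upload"),
]

def route(message: str) -> str:
    """Return the section that should handle the message; default to SSE."""
    text = message.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "§3.1 SSE"  # everything else: generate, edit, add BGM, ...
```

For example, route("please export the video") resolves to the export workflow, while an edit instruction with no trigger keywords falls through to the SSE path.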

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

Endpoint / Method / Purpose:

  • POST /api/tasks/me/with-session/nemo_agent — Start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id.
  • POST /run_sse — Send a user message. Body includes app_name, session_id, new_message. Stream the response with Accept: text/event-stream. Timeout: 15 min.
  • POST /api/upload-video/nemo_agent/me/<sid> — Upload a file (multipart) or URL.
  • GET /api/credits/balance/simple — Check remaining credits (available, frozen, total).
  • GET /api/state/nemo_agent/me/<sid>/latest — Fetch the current timeline state (draft, video_infos, generated_media).
  • POST /api/render/proxy/lambda — Start an export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s.
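The export body documented for /api/render/proxy/lambda can be built with a small helper. The field names and values come from the endpoint table; the function name and the idea of passing the timestamp explicitly are illustrative.

```python
def export_payload(session_id: str, draft: dict, ts: int) -> dict:
    """Request body for POST /api/render/proxy/lambda, per the table above."""
    return {
        "id": f"render_{ts}",  # the documented render_<ts> identifier
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }
```

POST this with Bearer auth and the attribution headers, then poll status every 30 seconds as the table specifies; a typical ts would be int(time.time()).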

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
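A client-side pre-check against the accepted extensions (and the 500MB cap stated elsewhere in this listing) avoids a round trip that would end in error 4001 or 4002. The helper name is an assumption; the format list is copied from above.

```python
# Accepted upload types, per the list above.
ACCEPTED = {
    "mp4", "mov", "avi", "webm", "mkv",  # video
    "jpg", "png", "gif", "webp",         # image
    "mp3", "wav", "m4a", "aac",          # audio
}
MAX_BYTES = 500 * 1024 * 1024  # 500MB cap from the listing

def upload_ok(filename: str, size_bytes: int) -> bool:
    """Pre-check a file before hitting the upload endpoint."""
    ext = filename.rsplit(".", 1)[-1].lower()
    return ext in ACCEPTED and size_bytes <= MAX_BYTES
```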

Headers are derived from this file's YAML frontmatter. X-Skill-Source is ai-video-generator-free-chat, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
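The header rules above can be sketched as one function. The header names, source value, and platform detection rules come from the text; the default version string is illustrative (the real value is read from the SKILL.md frontmatter's version field).

```python
from pathlib import Path

def attribution_headers(token: str, install_path: str,
                        version: str = "1.0.0") -> dict:
    """Authorization plus the three attribution headers the API requires.
    Platform detection mirrors the documented install-path rules."""
    # Append "/" so a bare "~/.clawhub" install root still matches.
    p = str(Path(install_path).expanduser()) + "/"
    if "/.clawhub/" in p:
        platform = "clawhub"
    elif "/.cursor/skills/" in p:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "ai-video-generator-free-chat",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```

Note that this is the behavior the security scan flags: filling X-Skill-Platform requires inspecting the local install path.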

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
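The recovery steps above can be collapsed into a dispatch table. Only the codes and their meanings come from the list; the action labels are invented for illustration.

```python
# Suggested client reactions to the documented error codes.
ERROR_ACTIONS = {
    0: "ok",
    1001: "reacquire-token",       # via /api/auth/anonymous-token
    1002: "new-session",
    2001: "out-of-credits",
    4001: "show-accepted-formats",
    4002: "suggest-compress",
    400: "generate-client-id-and-retry",
    402: "subscription-required",  # not a credit issue
    429: "wait-30s-retry-once",
}

def next_step(code: int) -> str:
    """Map an API error code to the documented recovery action."""
    return ERROR_ACTIONS.get(code, "unknown-code")
```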

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
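The stream-reading rules above can be sketched as a line filter. The heartbeat and empty-data conventions come from the description; the comment-line (":...") handling is standard SSE framing, and the function name is an assumption.

```python
from typing import Iterable, Iterator

def sse_text_events(lines: Iterable[str]) -> Iterator[str]:
    """Yield user-visible text payloads from an SSE line stream.
    Comment lines (":...") and empty data lines are keep-alives."""
    for raw in lines:
        line = raw.rstrip("\n")
        if not line or line.startswith(":"):
            continue  # heartbeat / blank separator
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload:  # empty data: lines mean "still working"
                yield payload
```

A wrapper around this would also track elapsed time to emit the "⏳ Still working..." notice every 2 minutes, and fall back to polling /api/state when the stream closes without text.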

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
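Using the compact field names above, a draft can be turned into a per-track summary like the timeline shown. The tt codes are from the mapping; the exact draft structure (segments under "sg", durations under "d") is assumed from those field names, and the output format is illustrative.

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}  # tt values from the mapping

def summarize_draft(draft: dict) -> list[str]:
    """One line per track: type plus total segment duration in seconds."""
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        total_ms = sum(seg.get("d", 0) for seg in track.get("sg", []))
        lines.append(f"{i}. {kind}: {total_ms / 1000:g}s total")
    return lines
```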

Common Workflows

Quick edit: Upload → "generate a 30-second explainer video from my script using a chat prompt" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate a 30-second explainer video from my script using a chat prompt" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, WebM, GIF for the smoothest experience.

Export as MP4 for widest compatibility across social and web platforms.
