Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Free Video Generator Bot

v1.0.0

Turn a short text description like 'a sunset over a city skyline' into 1080p AI-generated videos just by typing what you need. Whether it's generating short...

Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description (text→AI video generation) align with the endpoints and flows in SKILL.md: session creation, SSE chat, upload, export. The requested credential (NEMO_TOKEN) and the API host (mega-api-prod.nemovideo.ai) are consistent with the stated purpose.
Instruction Scope
Instructions are mostly scoped to video-generation workflows (session creation, SSE, upload/export). Notable behaviors: (1) SKILL.md requires reading its own YAML frontmatter for X-Skill-* headers (reasonable), (2) it instructs detecting the install path/platform by checking user home paths (~/.clawhub, ~/.cursor/skills/) which implies filesystem probing beyond a pure network-only integration, and (3) it references a config path (~/.config/nemovideo/) in the frontmatter. These file/path reads should be expected but are worth explicit consent because they touch user home directories.
Install Mechanism
Instruction-only skill with no install spec and no code files — lowest install risk. Nothing is downloaded or written by an installer in the package metadata.
Credentials
The only declared credential is NEMO_TOKEN (primaryEnv). SKILL.md also describes creating an anonymous token via the service if NEMO_TOKEN is absent, which is proportionate. Minor inconsistency: registry metadata earlier listed no required config paths, but the SKILL.md frontmatter declares ~/.config/nemovideo/ — this mismatch should be clarified. No unrelated cloud creds or broad secrets are requested.
Persistence & Privilege
The always flag is false, and the skill does not request permanent platform-wide privileges. It will create and keep a session_id for operations (normal for this use-case) but doesn't claim to modify other skills or persist beyond typical session data.
What to consider before installing
This skill appears to implement a legitimate text→video service and only needs a NEMO_TOKEN (or can obtain an anonymous token from nemovideo.ai). Before installing or using it:

  • Be aware that your prompts and any uploaded images/audio will be sent to mega-api-prod.nemovideo.ai — do not upload sensitive or private material you wouldn't want processed by an external service.
  • Confirm the service/domain (nemovideo.ai) is one you trust; the package has no homepage or known owner metadata.
  • Note that SKILL.md asks the agent to check your home-directory install paths (~/.clawhub, ~/.cursor/skills/) and a config directory (~/.config/nemovideo/). That means the agent may read those locations to set headers or store session data; if you prefer, run the skill in a sandboxed environment or with a limited account.
  • There is a metadata mismatch: registry metadata shows no config paths while the SKILL.md frontmatter does. Ask the publisher to clarify why the skill needs ~/.config/nemovideo/ and which files it will read or write.
  • If you already have a NEMO_TOKEN set in your environment, consider providing a scoped/ephemeral token rather than long-lived credentials. If you don't want the skill to obtain a token automatically, avoid leaving network access enabled, or do not invoke the skill.

If you want higher confidence, ask the publisher for: a homepage/repository, explicit documentation of which files are read and written, and a privacy policy describing how uploaded media and generated tokens are handled.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🤖 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk9797dhrym8sfq3p41serq1cc9854rmy
34 downloads
0 stars
1 version
Updated 7h ago
v1.0.0
MIT-0

Getting Started

Share your text prompts or images and I'll get started on AI video generation. Or just tell me what you're thinking.

Try saying:

  • "generate my text prompts or images"
  • "export 1080p MP4"
  • "generate a 30-second video from my"

Quick Start Setup

This skill connects to a cloud processing backend. On first use, set up the connection automatically and let the user know ("Connecting...").

Token check: Look for NEMO_TOKEN in the environment. If found, skip to session creation. Otherwise:

  • Generate a UUID as client identifier
  • POST https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with X-Client-Id header
  • Extract data.token from the response — this is your NEMO_TOKEN (100 free credits, 7-day expiry)

Session: POST https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Keep the returned session_id for all operations.

Let the user know with a brief "Ready!" when setup is complete. Don't expose tokens or raw API output.
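The token check and session steps above can be sketched roughly as follows. The endpoint paths, headers, and the data.token field come from this page; the helper names and the session_id response key are assumptions about the response shape, not confirmed API details:

```python
import json
import urllib.request
import uuid

API = "https://mega-api-prod.nemovideo.ai"

def extract_token(payload: dict) -> str:
    """Pull data.token out of the anonymous-token response body."""
    return payload["data"]["token"]

def bootstrap_token() -> str:
    """Request an anonymous token (100 free credits, 7-day expiry)."""
    req = urllib.request.Request(
        f"{API}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": str(uuid.uuid4())},  # fresh client identifier
    )
    with urllib.request.urlopen(req) as resp:
        return extract_token(json.load(resp))

def create_session(token: str) -> str:
    """Create a working session; keep the session_id for all later calls."""
    req = urllib.request.Request(
        f"{API}/api/tasks/me/with-session/nemo_agent",
        data=json.dumps({"task_name": "project"}).encode(),
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # "session_id" is an assumed key; the page only says a session_id
        # is returned, not the exact response layout.
        return json.load(resp)["session_id"]
```

In practice you would check os.environ for NEMO_TOKEN first and only call bootstrap_token() when it is absent, matching the token-check step above.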

Free Video Generator Bot — Generate Videos from Text or Images

Send me your text prompts or images and describe the result you want. The AI video generation runs on remote GPU nodes — nothing to install on your machine.

A quick example: provide a short text description like 'a sunset over a city skyline', type "generate a 30-second video from my product description with background music", and you'll get a 1080p MP4 back in roughly 1-2 minutes. All rendering happens server-side.

Worth noting: shorter, specific prompts produce more accurate results than long vague ones.

Matching Input to Actions

User prompts referencing free video generator bot, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says... | Action | Skip SSE? |
| --- | --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | Yes |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | Yes |
| "status" / "状态" / "show tracks" | §3.4 State | Yes |
| "upload" / "上传" / user sends file | §3.2 Upload | Yes |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | No |

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> — multipart file or JSON with URLs.
  4. Credits: GET /api/credits/balance/simple — returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest — current draft and media info.
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL.

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
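The export step (item 6) could be driven by a small poll loop like this sketch. Only the endpoint paths and the 30-second poll interval are documented here; the request/response field names (render_id, draft, status, download_url) are assumptions:

```python
import json
import time
import urllib.request

API = "https://mega-api-prod.nemovideo.ai"

def is_complete(status_payload: dict) -> bool:
    """True once a poll response reports a completed render (assumed field)."""
    return status_payload.get("status") == "completed"

def start_export(headers: dict, render_id: str, draft: dict) -> None:
    """Kick off a render job via POST /api/render/proxy/lambda."""
    req = urllib.request.Request(
        f"{API}/api/render/proxy/lambda",
        data=json.dumps({"render_id": render_id, "draft": draft}).encode(),
        method="POST",
        headers={**headers, "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).close()

def wait_for_download(headers: dict, render_id: str, timeout_s: int = 600) -> str:
    """Poll every 30s, per the docs, until a download URL appears."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        req = urllib.request.Request(
            f"{API}/api/render/proxy/lambda/{render_id}", headers=headers
        )
        with urllib.request.urlopen(req) as resp:
            payload = json.load(resp)
        if is_complete(payload):
            return payload["download_url"]  # assumed response key
        time.sleep(30)
    raise TimeoutError("render did not complete in time")
```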

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: free-video-generator-bot
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/clawhub, ~/.cursor/skills/cursor, else unknown)

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
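A minimal sketch of assembling those headers, assuming the platform really can be inferred from the install path as described; the helper names are illustrative, not part of the skill:

```python
def detect_platform(install_path: str) -> str:
    """Map the skill's install path to an X-Skill-Platform value."""
    if "/.clawhub/" in install_path or install_path.endswith("/.clawhub"):
        return "clawhub"
    if "/.cursor/skills/" in install_path:
        return "cursor"
    return "unknown"

def build_headers(token: str, version: str, install_path: str) -> dict:
    """Authorization plus the three X-Skill-* attribution headers."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "free-video-generator-bot",
        "X-Skill-Version": version,  # read from the frontmatter at runtime
        "X-Skill-Platform": detect_platform(install_path),
    }
```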

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
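Given that field mapping, a draft like the three-track timeline above could be summarized as below. The abbreviated keys (t, tt, sg, d) come from the mapping; the surrounding JSON shape is an assumption for illustration:

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}  # tt codes from the mapping

def summarize_draft(draft: dict) -> list:
    """Turn the abbreviated draft fields into human-readable track lines."""
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):   # t = tracks
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")    # tt = track type
        total_ms = sum(seg.get("d", 0) for seg in track.get("sg", []))  # d = ms
        lines.append(f"{i}. {kind}: {total_ms / 1000:.1f}s")
    return lines
```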

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
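One way to triage raw SSE lines before deciding what to surface — a sketch assuming standard data:-prefixed SSE framing. Distinguishing user-facing text from internal tool calls would additionally require inspecting the JSON payload, which this page does not specify:

```python
def classify_sse_line(line: str) -> str:
    """Sort a raw SSE line into heartbeat / text / other for display logic."""
    stripped = line.strip()
    if not stripped or stripped == "data:":
        return "heartbeat"   # keep-alive: backend is still working
    if stripped.startswith("data:"):
        return "text"        # payload line, candidate for relaying to the user
    return "other"           # SSE comments, event:/id: fields, etc.
```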

Error Handling

| Code | Meaning | Action |
| --- | --- | --- |
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | New session (§3.0) |
| 2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show supported formats |
| 4002 | File too large | Suggest compress/trim |
| 400 | Missing X-Client-Id | Generate Client-Id and retry (see §1) |
| 402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry in 30s once |
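The recovery table collapses naturally into a lookup; this sketch uses hypothetical action names, with the code-to-meaning pairs taken from the table:

```python
ACTIONS = {
    0: "continue",
    1001: "reauth",                 # bad/expired token -> anonymous-token again
    1002: "new_session",
    2001: "credits",                # registration URL or top-up message
    4001: "show_formats",
    4002: "suggest_compress",
    400: "retry_with_client_id",
    402: "upgrade_plan",            # subscription tier issue, NOT credits
    429: "retry_in_30s",
}

def next_action(code: int) -> str:
    """Look up the recovery step for an API status code."""
    return ACTIONS.get(code, "report_error")
```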

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate a 30-second video from my product description with background music" — concrete instructions get better results.

Max file size is 200MB. Stick to MP4, MOV, JPG, PNG for the smoothest experience.

Export as MP4 for widest compatibility across social platforms.

Common Workflows

Quick edit: Upload → "generate a 30-second video from my product description with background music" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
