Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Free Video Generation Tools

v1.0.0

Turn a short text description of a beach sunset scene into 1080p ready-to-share videos just by typing what you need. Whether it's generating short videos fro...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for susan4731-wilfordf/free-video-generation-tools.

Prompt Preview: Install & Setup
Install the skill "Free Video Generation Tools" (susan4731-wilfordf/free-video-generation-tools) from ClawHub.
Skill page: https://clawhub.ai/susan4731-wilfordf/free-video-generation-tools
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install free-video-generation-tools

ClawHub CLI

Package manager switcher

npx clawhub@latest install free-video-generation-tools
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
⚠ Purpose & Capability
The skill's stated purpose (generate videos via a cloud API) reasonably requires a NEMO_TOKEN and HTTP calls to the nemo API. However, SKILL.md frontmatter mentions a config path (~/.config/nemovideo/) and instructs deriving headers from the file's YAML frontmatter and the agent's install path (~/.clawhub/ or ~/.cursor/skills/). The registry metadata included with the skill lists no required config paths, so the SKILL.md's request to access local config/install locations is inconsistent with the declared requirements and is not necessary for the core task.
⚠ Instruction Scope
Most runtime instructions are scoped to the API (create session, upload, render, poll status) and include sensible error handling. But the instructions also tell the agent to: read YAML frontmatter to derive headers, detect the agent's install path to set X-Skill-Platform, and reference 'three attribution headers above' (ambiguous). Those steps imply reading local filesystem locations and making decisions based on install paths — actions outside the explicit declared scope and unnecessary for serving user prompts.
Install Mechanism
There is no install spec and no code files — this is instruction-only, which is the lowest-risk install model. Nothing is being downloaded or written to disk by an installer.
⚠ Credentials
The only declared credential is NEMO_TOKEN, which is proportionate to calling the nemo API. However, the SKILL.md claims an additional config path (~/.config/nemovideo/) in its frontmatter and instructs deriving headers from local install paths. That implies the skill may read local config or probe filesystem locations beyond the declared single env var, which is disproportionate and inconsistent with the registry metadata.
Persistence & Privilege
The skill is not forced-always (always: false) and does not request elevated persistence. Autonomous invocation is allowed by default (disable-model-invocation: false) — normal for skills. If the agent were to read local install/config paths as the instructions suggest, autonomous invocation would widen the privacy impact, but autonomy alone is not a defect here.
What to consider before installing
This skill mostly does what it says (calls a cloud API and needs a NEMO_TOKEN), but the SKILL.md asks the agent to read local install/config locations and to derive headers from the skill file and install path, actions that were not declared in the registry. Before installing or providing your real NEMO_TOKEN:

1. Ask the publisher to clarify and remove any need to read ~/.config or detect install paths, or to declare those config paths explicitly.
2. If you must test, avoid supplying your permanent NEMO_TOKEN; allow the skill to use the anonymous token flow instead.
3. Confirm the exact headers the skill will send and why they need filesystem-derived values.
4. If you have sensitive files or multiple skills installed, be cautious: filesystem probing could expose metadata about other tools.

If the publisher cannot justify the file/path access, treat this as a privacy risk and avoid installing.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis

Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97afnp8qenvfkn7dbhf6ysv5x85k320
41 downloads · 0 stars · 1 version
Updated 2d ago
v1.0.0 · MIT-0

Getting Started

Share your text prompts or images and I'll get started on AI video generation. Or just tell me what you're thinking.

Try saying:

  • "generate a video from my text prompt or images"
  • "export 1080p MP4"
  • "generate a 30-second video clip from my product description"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
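The connection flow above can be sketched as request builders. This is a sketch, not the skill's implementation: the builders only construct the requests rather than send them, and the `token` response key is an assumption based on the description above (`session_id` is stated explicitly).

```python
import json
import os
import uuid

BASE = "https://mega-api-prod.nemovideo.ai"

def anonymous_token_request():
    """Request spec for acquiring a free starter token (no body needed)."""
    return {
        "method": "POST",
        "url": f"{BASE}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": str(uuid.uuid4())},  # fresh UUID client id
    }

def create_session_request(token, language="en"):
    """Request spec for creating a session with Bearer authorization."""
    return {
        "method": "POST",
        "url": f"{BASE}/api/tasks/me/with-session/nemo_agent",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"task_name": "project", "language": language}),
    }

def resolve_token():
    """Prefer NEMO_TOKEN from the environment; None means the anonymous
    token flow should run first."""
    return os.environ.get("NEMO_TOKEN")
```

Keeping request construction separate from transport makes the token-vs-anonymous branching easy to check before any call is made.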

Tell the user you're ready. Keep the technical details out of the chat.

Free Video Generation Tools — Generate Videos from Text or Images

Drop your text prompts or images in the chat and tell me what you need. I'll handle the AI video generation on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send a short text description of a beach sunset scene, ask me to generate a 30-second video clip from it, and about 1-2 minutes later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — shorter, specific prompts produce more accurate and faster results.

Matching Input to Actions

User prompts referencing video generation, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says... | Action | Skip SSE? |
| --- | --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | Yes |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | Yes |
| "status" / "状态" / "show tracks" | §3.4 State | Yes |
| "upload" / "上传" / user sends file | §3.2 Upload | Yes |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | No |
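The keyword half of this routing can be sketched as a first-match lookup; intent classification beyond substring matching is left out, and the Chinese keywords are kept verbatim since they match CJK user input directly.

```python
# Keyword buckets per action section, checked in order; first hit wins.
ROUTES = [
    ("export",  ["export", "导出", "download", "send me the video"]),
    ("credits", ["credits", "积分", "balance", "余额"]),
    ("state",   ["status", "状态", "show tracks"]),
    ("upload",  ["upload", "上传"]),
]

def route(user_text):
    """Return the action section for a prompt; anything unmatched goes to SSE."""
    lowered = user_text.lower()
    for action, keywords in ROUTES:
        if any(k in lowered for k in keywords):
            return action
    return "sse"  # generate, edit, add BGM, ...
```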

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers (X-Skill-Source, X-Skill-Version, X-Skill-Platform, defined below). If any header is missing, exports return 402.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is free-video-generation-tools, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).
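That derivation can be sketched as follows. Note this version takes the version string and install path as explicit inputs rather than reading the skill file or probing the filesystem, which is the behavior the security scan flags.

```python
def attribution_headers(version, install_path=""):
    """Build the three attribution headers. `version` would come from the
    skill's frontmatter; the platform is inferred from an install path string
    supplied by the caller (a sketch that avoids filesystem access)."""
    if "/.clawhub/" in install_path:
        platform = "clawhub"
    elif "/.cursor/skills/" in install_path:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "X-Skill-Source": "free-video-generation-tools",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```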

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
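The export-and-poll step can be sketched like this. Field names follow the endpoint description above; the status fetcher is injected so the loop can be exercised without a network call.

```python
import time

def export_payload(session_id, draft, ts):
    """Body for POST /api/render/proxy/lambda."""
    return {
        "id": f"render_{ts}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }

def poll_until_complete(fetch_status, render_id, interval=30, max_polls=20):
    """Poll GET /api/render/proxy/lambda/<id> until status == completed,
    then return the download URL at output.url."""
    for _ in range(max_polls):
        job = fetch_status(render_id)
        if job.get("status") == "completed":
            return job["output"]["url"]
        time.sleep(interval)
    raise TimeoutError(f"render {render_id} did not complete")
```

Capping the poll count keeps an orphaned or stuck job from blocking the agent indefinitely.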

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Error Handling

| Code | Meaning | Action |
| --- | --- | --- |
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | New session §3.0 |
| 2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show supported formats |
| 4002 | File too large | Suggest compress/trim |
| 400 | Missing X-Client-Id | Generate Client-Id and retry (see §1) |
| 402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry in 30s once |
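A minimal dispatch over this table might look like the following; the action names are illustrative labels, not part of the API.

```python
# Recovery action per API error code, mirroring the table above.
ERROR_ACTIONS = {
    0:    "continue",
    1001: "reauth_anonymous",          # token expired (7-day lifetime)
    1002: "new_session",
    2001: "credits_exhausted",         # registration URL or top-up message
    4001: "show_supported_formats",
    4002: "suggest_compress_or_trim",
    400:  "generate_client_id_and_retry",
    402:  "prompt_register_or_upgrade",  # subscription tier, NOT credits
    429:  "retry_once_after_30s",
}

def action_for(code):
    """Map an API error code to its recovery action; unknown codes surface."""
    return ERROR_ACTIONS.get(code, "surface_error_to_user")
```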

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

SSE Event Handling

| Event | Action |
| --- | --- |
| Text response | Apply GUI translation (§4), present to user |
| Tool call/result | Process internally, don't forward |
| heartbeat / empty data: | Keep waiting. Every 2 min: "⏳ Still working..." |
| Stream closes | Process final response |

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
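A sketch of the per-event decision, including the no-text fallback. The event shape (`type`/`data` keys) is an assumption; real SSE frames from this backend may differ.

```python
def classify_event(event):
    """Decide what to do with one SSE event, per the table above."""
    data = event.get("data", "")
    if event.get("type") == "heartbeat" or not data:
        return "wait"       # every 2 min, surface "Still working..."
    if event.get("type") == "tool":
        return "internal"   # tool calls/results are not forwarded
    return "present"        # text responses go through GUI translation

def needs_state_poll(text_events):
    """~30% of edits return no text; verify via session state instead."""
    return len(text_events) == 0
```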

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Example timeline summary (3 tracks):

1. Video: city timelapse (0-10s)
2. BGM: Lo-fi (0-10s, 35%)
3. Title: "Urban Dreams" (0-3s)
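Given the draft field mapping, a summary along these lines can be produced by a small helper. This is a sketch: it assumes each segment carries its duration in `d` and omits names and offsets, which the real draft presumably includes.

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft):
    """Render the compact draft fields (t/tt/sg/d) as one line per track."""
    lines = []
    for i, track in enumerate(draft.get("t", []), 1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        segs = track.get("sg", [])
        total_ms = sum(s.get("d", 0) for s in segs)
        lines.append(f"{i}. {kind}: {len(segs)} segment(s), {total_ms / 1000:g}s")
    return "\n".join(lines)
```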

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate a 30-second video clip from my product description" — concrete instructions get better results.

Max file size is 200MB. Stick to MP4, MOV, PNG, JPG for the smoothest experience.

Export as MP4 for widest compatibility across platforms.

Common Workflows

Quick edit: Upload → "generate a 30-second video clip from my product description" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
