Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Generation Generator

v1.0.0

Turn text prompts or clips into AI-generated videos with this skill. Works with MP4, MOV, PNG, and JPG files up to 500MB. For marketers, content creators, social...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mory128/generation-generator.

Prompt preview: Install & Setup
Install the skill "Generation Generator" (mory128/generation-generator) from ClawHub.
Skill page: https://clawhub.ai/mory128/generation-generator
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install generation-generator

ClawHub CLI


npx clawhub@latest install generation-generator
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name and description (generate videos from prompts/refs) align with the runtime instructions (upload, SSE chat, render/export endpoints). Requesting a single service token (NEMO_TOKEN) is expected for a cloud video API.
Instruction Scope
SKILL.md instructs the agent to auto-obtain anonymous tokens, create sessions, upload user media, and poll render endpoints on mega-api-prod.nemovideo.ai. It also describes deriving attribution headers from an install path and references a config path (~/.config/nemovideo/) in frontmatter — actions that could require reading/writing local state. The instructions additionally tell the agent to 'not display raw API responses or token values', which is an operational policy but also hides sensitive values from the user. The skill's runtime touches network, credentials, and local config semantics beyond a purely stateless prompt-to-render flow.
Install Mechanism
Instruction-only skill with no installer and no code files. This minimizes on-disk install risk; network calls will occur at runtime to the third-party API.
Credentials
Only NEMO_TOKEN is declared as the primary credential, which fits a hosted video API. However, SKILL.md frontmatter includes a configPaths entry (~/.config/nemovideo/) that is not listed in the registry-level required config paths—an inconsistency. The skill also instructs generating and storing an anonymous token if NEMO_TOKEN is absent, which implies creating/storing credentials locally or in-memory.
Persistence & Privilege
always is false and autonomous invocation is allowed (platform default). The skill requests session persistence for ongoing renders, which is reasonable for a render pipeline, but there is no explicit description of where session tokens are persisted (memory vs disk).
What to consider before installing
This skill appears to implement a real text→video workflow, but it will contact an external service (mega-api-prod.nemovideo.ai), obtain/use a token (NEMO_TOKEN), and upload user media to that service. Before installing:

  1. Prefer supplying your own NEMO_TOKEN (don't let the skill auto-generate/persist credentials) if you trust the vendor.
  2. Don't upload sensitive content — files up to 500MB will be sent off-host.
  3. Ask the publisher for provenance (homepage, privacy policy, source code, or company identity).
  4. Confirm where session/token data will be stored (in-memory vs written under ~/.config/nemovideo/).
  5. Verify data retention/processing and whether uploads are inspected or logged.

The metadata mismatch (registry says no configPaths but SKILL.md lists one) and the lack of a verified source/homepage are the main reasons to treat this as suspicious. If the publisher identity and storage/retention practices are provided and sensible, confidence could rise to benign.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97dawp0yfz2j91x1xcvkp42j985p8hx
43 downloads
0 stars
1 version
Updated 18h ago
v1.0.0
MIT-0

Getting Started

Got text prompts or clips to work with? Send them over and tell me what you need — I'll take care of the AI video generation.

Try saying:

  • "generate a 1080p MP4 from a text prompt describing a 30-second product demo scene"
  • "generate a 30-second video from this script about a new coffee brand"
  • "generate videos from text prompts or reference images for marketers, content creators, and social media managers"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.

Keep setup communication brief. Don't display raw API responses or token values to the user.
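The two bootstrap calls above could be sketched as follows. The endpoint paths, headers, and request bodies come from the steps above; the response shape (`data.token`) is as described, but treat everything else (error handling, retries) as an assumption, not the skill's actual implementation.

```python
import json
import uuid
import urllib.request

API = "https://mega-api-prod.nemovideo.ai"

def anonymous_token_request() -> urllib.request.Request:
    """Build the POST that obtains a free anonymous token (step 1)."""
    client_id = str(uuid.uuid4())  # random UUID as client identifier
    return urllib.request.Request(
        f"{API}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": client_id},
    )

def session_request(token: str, language: str = "en") -> urllib.request.Request:
    """Build the POST that creates a working session (step 2)."""
    body = json.dumps({"task_name": "project", "language": language}).encode()
    return urllib.request.Request(
        f"{API}/api/tasks/me/with-session/nemo_agent",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```

Sending either request with `urllib.request.urlopen(...)` and parsing `data.token` / `session_id` from the JSON response would complete the flow; per the note above, neither the raw response nor the token should be echoed to the user.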

Generation Generator — Generate Videos from Text Prompts

This tool takes your text prompts or clips and runs AI video generation through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a script about a new coffee brand and want a 30-second product video — the backend processes it in about 1-2 minutes and hands you a 1080p MP4.

Tip: shorter and more specific prompts tend to produce more accurate video results.

Matching Input to Actions

User prompts referencing video generation, aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification.

User says... → Action
  • "export" / "导出" / "download" / "send me the video" → §3.5 Export (skips SSE)
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits (skips SSE)
  • "status" / "状态" / "show tracks" → §3.4 State (skips SSE)
  • "upload" / "上传" / user sends a file → §3.2 Upload (skips SSE)
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
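The routing table above amounts to keyword matching with SSE as the fallback. A minimal illustrative sketch (the skill's actual classifier also applies intent classification, which plain substring matching doesn't capture):

```python
def route(message: str) -> str:
    """Map a user message to an action section via simple keyword matching."""
    m = message.lower()
    if any(k in m for k in ("export", "导出", "download", "send me the video")):
        return "§3.5 Export"
    if any(k in m for k in ("credits", "积分", "balance", "余额")):
        return "§3.3 Credits"
    if any(k in m for k in ("status", "状态", "show tracks")):
        return "§3.4 State"
    if any(k in m for k in ("upload", "上传")):
        return "§3.2 Upload"
    # everything else: generate, edit, add BGM, ...
    return "§3.1 SSE"
```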

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> — multipart file or JSON with URLs.
  4. Credits: GET /api/credits/balance/simple — returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest — current draft and media info.
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL.

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
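The export polling step (endpoint 6) could look like the sketch below. The path and 30s interval come from the endpoint list; the response field names (`status`, `download_url`) and the injected `fetch` callable are assumptions for illustration.

```python
import time
from typing import Callable

def poll_export(render_id: str, fetch: Callable[[str], dict],
                interval: float = 30.0, timeout: float = 900.0) -> str:
    """Poll the export status endpoint until the render completes.

    `fetch` performs the authenticated GET and returns parsed JSON;
    field names here are assumed, not documented by the skill.
    """
    path = f"/api/render/proxy/lambda/{render_id}"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch(path)
        if status.get("status") == "completed":
            return status["download_url"]
        time.sleep(interval)  # the docs say to poll every 30s
    raise TimeoutError(f"render {render_id} did not complete within {timeout}s")
```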

Headers are derived from this file's YAML frontmatter. X-Skill-Source is generation-generator, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
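The header-derivation rule above can be sketched as a pure function. The header names, skill slug, and path-to-platform mapping are taken from the text; packaging them in a single dict is just one convenient way to attach them to every request.

```python
from pathlib import Path

def attribution_headers(install_path: str, version: str, token: str) -> dict:
    """Derive auth plus the three attribution headers from the install path."""
    p = Path(install_path).as_posix()
    if "/.clawhub/" in p:
        platform = "clawhub"
    elif "/.cursor/skills/" in p:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "generation-generator",
        "X-Skill-Version": version,   # from the frontmatter version field
        "X-Skill-Platform": platform,
    }
```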

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
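Walking the short-key draft JSON into a summary like the one above could look like this. The `t`/`tt`/`sg`/`d`/`m` keys and the type codes come from the text; the `m["name"]` metadata field is a hypothetical label field assumed for illustration.

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft: dict) -> str:
    """Render a draft's tracks as a short text timeline summary."""
    tracks = draft.get("t", [])
    lines = [f"Timeline ({len(tracks)} tracks):"]
    for i, track in enumerate(tracks, 1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        # total duration = sum of segment durations, converted ms -> s
        total_ms = sum(seg.get("d", 0) for seg in track.get("sg", []))
        label = track.get("m", {}).get("name", "")  # assumed metadata field
        lines.append(f"{i}. {kind}: {label} ({total_ms / 1000:g}s)")
    return "\n".join(lines)
```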

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

SSE Event Handling

Event → Action
  • Text response → apply GUI translation (§4), present to user
  • Tool call / result → process internally, don't forward
  • Heartbeat / empty data: → keep waiting; every 2 min: "⏳ Still working..."
  • Stream closes → process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
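The error codes above reduce to a lookup from code to recovery step; a direct transcription of the list (the recovery strings are paraphrases, not API output):

```python
def recovery_action(code: int) -> str:
    """Map an API error code to the recovery step described above."""
    return {
        0: "continue",
        1001: "re-acquire anonymous token",
        1002: "create new session",
        2001: "out of credits: registration link or top-up",
        4001: "show accepted formats",
        4002: "suggest compressing or trimming",
        400: "generate X-Client-Id and retry",
        402: "subscription tier required for export",
        429: "wait 30s, retry once",
    }.get(code, "unhandled error")
```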

Common Workflows

Quick edit: Upload → "generate a 30-second video from this script about a new coffee brand" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate a 30-second video from this script about a new coffee brand" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, PNG, JPG for the smoothest experience.

Export as MP4 for widest compatibility across platforms and devices.
