Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Text To Video Generative AI

v1.0.0

Skip the learning curve of professional editing software. Describe what you want — generate a 15-second video of a sunset over a city skyline with cinematic...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for vynbosserman65/text-to-video-generative-ai.

Prompt Preview: Install & Setup
Install the skill "Text To Video Generative Ai" (vynbosserman65/text-to-video-generative-ai) from ClawHub.
Skill page: https://clawhub.ai/vynbosserman65/text-to-video-generative-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install text-to-video-generative-ai

ClawHub CLI

Package manager switcher

npx clawhub@latest install text-to-video-generative-ai
Security Scan
VirusTotal: Benign
View report →

OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's name and description (text-to-video) align with its network calls and the NEMO_TOKEN credential. However, the frontmatter/metadata references a config path (~/.config/nemovideo/) while the registry metadata elsewhere lists no required config paths; this mismatch is unexplained. Requiring NEMO_TOKEN is reasonable, but the skill can also obtain an anonymous token itself, so declaring the env var as strictly required is inconsistent.
Instruction Scope
SKILL.md directs the agent to call multiple API endpoints, upload user files (up to 500MB), hold session tokens, and stream SSE responses — all expected for a cloud render client. Concerns: it asks the agent to auto-detect an 'install path' to set X-Skill-Platform (odd for an instruction-only skill), and requires persistent session/token handling in memory (and implies saving session_id). There is no instruction to access unrelated local files, but the install-path detection and configPath mention broaden the scope in unclear ways.
Install Mechanism
Instruction-only skill with no install spec or bundled code — lowest install risk. Nothing is downloaded or written by an installer per the provided metadata.
Credentials
Only one credential (NEMO_TOKEN) is declared, which fits a cloud API client. But the skill also documents creating an anonymous token by calling the provider endpoint (so the env var might not be strictly necessary). The frontmatter's configPaths entry is inconsistent with the registry summary (no required config paths), which suggests either sloppy metadata or an expectation of reading/writing ~/.config/nemovideo/ that isn't made explicit.
Persistence & Privilege
always:false and no special system-wide privileges requested. The skill expects to manage its own session token and session_id, which is normal. It does not request permanent presence or modification of other skills.
What to consider before installing
This skill appears to be a client for a third-party video-generation API and needs a NEMO_TOKEN (or will create a short-lived anonymous token) and permission to upload files to that service. Before installing: verify the provider domain (mega-api-prod.nemovideo.ai), confirm the privacy/data retention policy (you will upload media and text), and consider testing with non-sensitive content. Ask the publisher for clarification about the config path (~/.config/nemovideo/) and the 'X-Skill-Platform' auto-detection (how it determines an install path for an instruction-only skill). Prefer using an account token you control rather than letting the skill create an anonymous token if you care about traceability. Because of the metadata inconsistencies and ambiguous platform-detection step, proceed only after those points are clarified.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk975jgw6dm8ajrbbzps8naj0xd859bqn
94 downloads
0 stars
1 version
Updated 1w ago
v1.0.0
MIT-0

Getting Started

Share your text prompts and I'll get started on AI video generation. Or just tell me what you're thinking.

Try saying:

  • "generate my text prompts"
  • "export 1080p MP4"
  • "generate a 15-second video of a"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
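The bootstrap flow above can be sketched as helpers that prepare each call without sending it (a sketch only: the endpoint paths and response fields are taken from this page, and actually POSTing the prepared requests is left to the caller):

```python
import os
import uuid

BASE_URL = "https://mega-api-prod.nemovideo.ai"

def resolve_token_request():
    """Return (token, request): an existing NEMO_TOKEN, or the
    anonymous-token request to send when none is set."""
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token, None  # skip straight to the session step
    # No token: prepare the free-token call. Per the doc, the
    # response field data.token becomes NEMO_TOKEN (100 credits,
    # 7-day expiry).
    return None, {
        "method": "POST",
        "url": f"{BASE_URL}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": str(uuid.uuid4())},
    }

def session_request(token):
    """Prepare the session-creation call; the caller sends it and
    saves session_id from the response."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/api/tasks/me/with-session/nemo_agent",
        "headers": {"Authorization": f"Bearer {token}"},
        "json": {"task_name": "project"},
    }
```

Keeping the requests as plain dicts makes the auth decision (env token vs. anonymous token) easy to inspect before anything touches the network.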

Text to Video Generative AI — Turn Text Prompts into Videos

Drop your text prompts in the chat and tell me what you need. I'll handle the AI video generation on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send a two-sentence scene description, ask for "generate a 15-second video of a sunset over a city skyline with cinematic camera movement", and about 1-3 minutes later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — shorter, more specific prompts produce more consistent and accurate video output.

Matching Input to Actions

User prompts referencing text to video generative ai, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says... | Action |
| --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits |
| "status" / "状态" / "show tracks" | §3.4 State |
| "upload" / "上传" / user sends file | §3.2 Upload |
| Everything else (generate, edit, add BGM…) | §3.1 SSE |
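The routing above can be approximated with plain substring matching (a sketch only; the real skill presumably uses richer intent classification than keyword lookup):

```python
# Keyword groups from the routing table, checked in order.
ROUTES = [
    (("export", "导出", "download", "send me the video"), "export"),  # §3.5
    (("credits", "积分", "balance", "余额"), "credits"),              # §3.3
    (("status", "状态", "show tracks"), "state"),                     # §3.4
    (("upload", "上传"), "upload"),                                   # §3.2
]

def route(message):
    """First matching keyword group wins; everything else
    (generate, edit, add BGM, ...) goes to the SSE path (§3.1)."""
    msg = message.lower()
    for keywords, action in ROUTES:
        if any(k in msg for k in keywords):
            return action
    return "sse"
```

For example, `route("export 1080p MP4")` hits the export rule, while a free-form generation prompt falls through to SSE.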

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

| Endpoint | Method | Purpose |
| --- | --- | --- |
| /api/tasks/me/with-session/nemo_agent | POST | Start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id. |
| /run_sse | POST | Send a user message. Body includes app_name, session_id, new_message. Stream response with Accept: text/event-stream. Timeout: 15 min. |
| /api/upload-video/nemo_agent/me/<sid> | POST | Upload a file (multipart) or URL. |
| /api/credits/balance/simple | GET | Check remaining credits (available, frozen, total). |
| /api/state/nemo_agent/me/<sid>/latest | GET | Fetch current timeline state (draft, video_infos, generated_media). |
| /api/render/proxy/lambda | POST | Start export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s. |

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
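Assuming the body shape shown in the table for /api/render/proxy/lambda, an export request could be assembled like this (a sketch; nothing is sent, and the render_<ts> id format is taken literally from the table):

```python
import time

BASE_URL = "https://mega-api-prod.nemovideo.ai"

def export_request(session_id, draft):
    """Build the export call for /api/render/proxy/lambda; the
    caller POSTs it and then polls status every 30s."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/api/render/proxy/lambda",
        "json": {
            "id": f"render_{int(time.time())}",  # render_<ts>
            "sessionId": session_id,
            "draft": draft,
            "output": {"format": "mp4", "quality": "high"},
        },
    }
```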

Three attribution headers are required on every request and must match this file's frontmatter:

| Header | Value |
| --- | --- |
| X-Skill-Source | text-to-video-generative-ai |
| X-Skill-Version | frontmatter version |
| X-Skill-Platform | auto-detect: clawhub / cursor / unknown from install path |

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
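The full header set can be built in one place so no request ships without attribution (a sketch; the platform auto-detection itself is left as a parameter, since the doc doesn't specify how it works):

```python
def build_headers(token, version, platform="unknown"):
    """All four required headers; per the doc, missing attribution
    headers cause export to fail with 402."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "text-to-video-generative-ai",
        "X-Skill-Version": version,    # from this file's frontmatter
        "X-Skill-Platform": platform,  # clawhub / cursor / unknown
    }
```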

Error Handling

| Code | Meaning | Action |
| --- | --- | --- |
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | New session (§3.0) |
| 2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show supported formats |
| 4002 | File too large | Suggest compress/trim |
| 400 | Missing X-Client-Id | Generate a Client-Id and retry (see §1) |
| 402 | Free plan export blocked | Subscription-tier issue, NOT credits. "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry once in 30s |
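The code-to-action mapping above is a flat lookup, so it can be expressed as a dict (a sketch; the action strings here are shorthand for the recovery steps in the table, not API values):

```python
def handle_error(code):
    """Map an API error code to its recovery action from the table."""
    actions = {
        0: "continue",
        1001: "re-authenticate via anonymous-token",
        1002: "start a new session",
        2001: "credits exhausted: show registration or top-up message",
        4001: "show supported formats",
        4002: "suggest compressing or trimming the file",
        400: "generate a Client-Id and retry",
        402: "export blocked: register or upgrade plan (not credits)",
        429: "retry once after 30s",
    }
    return actions.get(code, "unknown error: surface to user")
```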

SSE Event Handling

| Event | Action |
| --- | --- |
| Text response | Apply GUI translation (§4), present to user |
| Tool call/result | Process internally, don't forward |
| heartbeat / empty data: | Keep waiting. Every 2 min: "⏳ Still working..." |
| Stream closes | Process final response |

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
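Filtering the stream per the table above might look like this (a sketch: the doc doesn't specify the event payload shape, so the JSON keys `text`, `tool_call`, and `tool_result` are assumptions):

```python
import json

def collect_sse_text(lines):
    """Walk raw SSE lines; keep text responses, drop tool
    call/result events and heartbeats, per the table above."""
    texts = []
    for line in lines:
        if not line.startswith("data:"):
            continue  # comments, event: framing, blank keep-alives
        payload = line[len("data:"):].strip()
        if not payload:
            continue  # heartbeat / empty data: -> keep waiting
        event = json.loads(payload)
        if "tool_call" in event or "tool_result" in event:
            continue  # process internally, don't forward
        if event.get("text"):
            texts.append(event["text"])
    return texts
```

If the collected list is empty when the stream closes, that matches the ~30% no-text case above: poll session state and summarize instead.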

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
1. Video: city timelapse (0-10s)
2. BGM: Lo-fi (0-10s, 35%)
3. Title: "Urban Dreams" (0-3s)
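A summary like that can be produced from the short-key draft JSON (a sketch: only t/tt/sg/d/m are documented above, so the segment metadata key `name` and the segments-start-at-zero assumption are mine):

```python
# Track-type codes from the short-key scheme above.
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft):
    """Render short-key draft JSON as a plain-text timeline summary."""
    lines = [f"Timeline ({len(draft['t'])} tracks):"]
    for i, track in enumerate(draft["t"], 1):
        kind = TRACK_TYPES.get(track["tt"], "Unknown")
        for seg in track["sg"]:
            name = seg.get("m", {}).get("name", "untitled")
            end_s = seg["d"] / 1000  # d is duration in ms
            lines.append(f"{i}. {kind}: {name} (0-{end_s:g}s)")
    return "\n".join(lines)
```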

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate a 15-second video of a sunset over a city skyline with cinematic camera movement" — concrete instructions get better results.

Max file size is 500MB. Stick to TXT, DOCX, PDF, plain text for the smoothest experience.

Export as MP4 for widest compatibility across social platforms and video players.

Common Workflows

Quick edit: Upload → "generate a 15-second video of a sunset over a city skyline with cinematic camera movement" → Download MP4. Takes 1-3 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
