Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Dubbing FFmpeg

v1.0.0

Turn a 3-minute MP4 video in English into 1080p dubbed MP4 videos just by typing what you need. Whether it's replacing original audio with dubbed voice in an...

Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious (medium confidence)
Purpose & Capability
Name implies local ffmpeg-based processing, but the SKILL.md describes a cloud render pipeline (mega-api-prod.nemovideo.ai) and declares no required local binaries; registry metadata listed no configPaths while the skill frontmatter requires ~/.config/nemovideo/. This mismatch may surprise users expecting purely local processing.
Instruction Scope
Instructions direct the agent to upload user media to a third-party cloud service, create or use an anonymous token, maintain sessions, stream SSE responses, and include attribution headers. These are expected for a cloud dubbing service, but the skill also instructs the agent to detect install paths (to set X-Skill-Platform), which requires reading local filesystem state and potentially exposes local path layout — a scope expansion the README does not clearly justify.
Install Mechanism
Instruction-only skill with no install spec and no code files — nothing is downloaded or written by an installer, which reduces installation risk.
Credentials
Only NEMO_TOKEN is declared as required, and SKILL.md describes creating an anonymous token if missing. This is proportionate to a cloud API client, but the token creation and use are network operations to a third-party endpoint; ensure you trust that endpoint before giving the token or allowing anonymous token generation.
Persistence & Privilege
Skill is not always-enabled and does not request elevated or persistent system-wide privileges. Autonomous invocation is allowed (platform default) but not combined with other high-risk flags.
What to consider before installing
This skill implements cloud-based dubbing (uploads your video to mega-api-prod.nemovideo.ai) rather than running ffmpeg locally — if you expected local processing, do not install. The skill needs or will obtain a NEMO_TOKEN and will upload media files; only use it with non-sensitive videos unless you trust the service and understand its data-retention/privacy policy. Note the SKILL.md also asks the agent to read install paths for an attribution header (minor privacy exposure). Before installing: verify the service domain and owner, test with a dummy video, and confirm you’re comfortable with files and tokens being sent to the remote API. If you need local-only processing, look for a skill that explicitly requires ffmpeg and does not upload files externally.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎙️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk972m61fd430x0drg0fckfyhex8527hj
23 downloads
0 stars
1 version
Updated 1d ago
v1.0.0
MIT-0

Getting Started

Send me your video files and I'll handle the AI audio dubbing. Or just describe what you're after.

Try saying:

  • "convert a 3-minute MP4 video in English into a 1080p MP4"
  • "dub this video into Spanish and replace the original audio track"
  • "replace the original audio with a dubbed voice in another language"

Quick Start Setup

This skill connects to a cloud processing backend. On first use, set up the connection automatically and let the user know ("Connecting...").

Token check: Look for NEMO_TOKEN in the environment. If found, skip to session creation. Otherwise:

  • Generate a UUID as client identifier
  • POST https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with X-Client-Id header
  • Extract data.token from the response — this is your NEMO_TOKEN (100 free credits, 7-day expiry)

Session: POST https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Keep the returned session_id for all operations.

Let the user know with a brief "Ready!" when setup is complete. Don't expose tokens or raw API output.
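The token check and session steps above can be sketched as follows. This is a minimal illustration using only the stdlib; the endpoint paths and the `data.token` / `session_id` response fields come from this document, but the exact response shapes are otherwise assumptions.

```python
import json
import uuid
import urllib.request

BASE = "https://mega-api-prod.nemovideo.ai"

def build_token_request() -> urllib.request.Request:
    """Anonymous-token bootstrap: a fresh UUID identifies this client."""
    client_id = str(uuid.uuid4())
    return urllib.request.Request(
        f"{BASE}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": client_id},
    )

def build_session_request(token: str) -> urllib.request.Request:
    """Session creation: Bearer auth plus the documented task_name body."""
    body = json.dumps({"task_name": "project"}).encode()
    return urllib.request.Request(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# To actually run the bootstrap (network access required):
#   resp = urllib.request.urlopen(build_token_request())
#   token = json.load(resp)["data"]["token"]
#   sess = json.load(urllib.request.urlopen(build_session_request(token)))
#   session_id = sess["session_id"]
```

Keeping request construction separate from sending makes it easy to inspect exactly what leaves the machine before trusting the endpoint.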

Dubbing FFmpeg — Dub and Export Localized Videos

This tool takes your video files and runs AI audio dubbing through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a 3-minute English MP4 and want it dubbed into Spanish with the original audio track replaced: the backend processes it in about 1-3 minutes and hands you a 1080p MP4.

Tip: shorter clips under 5 minutes process significantly faster and with higher sync accuracy.

Matching Input to Actions

User prompts referencing dubbing ffmpeg, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says... | Action | Skip SSE? |
| --- | --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | Yes |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | Yes |
| "status" / "状态" / "show tracks" | §3.4 State | Yes |
| "upload" / "上传" / user sends file | §3.2 Upload | Yes |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | No |
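One way to sketch the keyword side of this routing (intent classification aside) is a first-match-wins lookup; the trigger strings come from the table, while the function shape is an assumption:

```python
ROUTES = [
    # (trigger keywords, action) — first match wins; Chinese triggers per the table
    (("export", "导出", "download", "send me the video"), "export"),
    (("credits", "积分", "balance", "余额"), "credits"),
    (("status", "状态", "show tracks"), "state"),
    (("upload", "上传"), "upload"),
]

def route(prompt: str, has_attachment: bool = False) -> str:
    """Keyword router per the table above; anything unmatched falls through to SSE."""
    if has_attachment:
        return "upload"  # "user sends file" row
    text = prompt.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "sse"
```

In practice you would combine this with the intent classifier mentioned above; plain substring matching is only the fallback layer.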

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.
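Since the download URL arrives 30-90 seconds after the job queues, export needs a polling loop. A minimal sketch, with the status-fetching call injected so the loop itself stays generic; the `{"status": ..., "url": ...}` record shape is an assumption for illustration:

```python
import time
from typing import Callable

def poll_export(get_status: Callable[[], dict], interval: float = 30.0,
                timeout: float = 600.0) -> dict:
    """Poll a render job until it finishes or fails.

    `get_status` is whatever fetches the job record, e.g. a GET against the
    render endpoint (hypothetical shape: {"status": "queued" | "rendering" |
    "done" | "failed", "url": ...}).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = get_status()
        if job.get("status") == "done":
            return job  # contains the download URL
        if job.get("status") == "failed":
            raise RuntimeError(f"render failed: {job}")
        time.sleep(interval)
    raise TimeoutError("export did not finish in time")
```

The 30-second default interval matches the polling cadence given for the render endpoint below.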

Base URL: https://mega-api-prod.nemovideo.ai

| Endpoint | Method | Purpose |
| --- | --- | --- |
| /api/tasks/me/with-session/nemo_agent | POST | Start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id. |
| /run_sse | POST | Send a user message. Body includes app_name, session_id, new_message. Stream the response with Accept: text/event-stream. Timeout: 15 min. |
| /api/upload-video/nemo_agent/me/<sid> | POST | Upload a file (multipart) or URL. |
| /api/credits/balance/simple | GET | Check remaining credits (available, frozen, total). |
| /api/state/nemo_agent/me/<sid>/latest | GET | Fetch the current timeline state (draft, video_infos, generated_media). |
| /api/render/proxy/lambda | POST | Start export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s. |

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
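Checking the accepted types and the 500MB cap (from Tips and Tricks below) locally avoids burning an upload on a file the API will reject. A small sketch; the function name is illustrative:

```python
import os

ACCEPTED_EXTS = {"mp4", "mov", "avi", "webm", "mkv", "jpg", "png", "gif",
                 "webp", "mp3", "wav", "m4a", "aac"}
MAX_BYTES = 500 * 1024 * 1024  # 500MB cap from Tips and Tricks

def validate_upload(filename: str, size_bytes: int) -> None:
    """Raise ValueError before wasting an upload on a file the API will reject."""
    ext = os.path.splitext(filename)[1].lstrip(".").lower()
    if ext not in ACCEPTED_EXTS:
        raise ValueError(f"unsupported file type: .{ext or '?'}")
    if size_bytes > MAX_BYTES:
        raise ValueError("file exceeds the 500MB limit; compress or trim first")
```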

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: dubbing-ffmpeg
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/clawhub, ~/.cursor/skills/cursor, else unknown)

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
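Putting the auth and attribution rules together, a header builder might look like this; the platform detection mirrors the install-path rules above, and `version` would be read from the frontmatter:

```python
def attribution_headers(token: str, version: str, install_path: str) -> dict:
    """Build the required auth + attribution headers for every request."""
    if ".clawhub" in install_path:
        platform = "clawhub"
    elif ".cursor/skills" in install_path:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "dubbing-ffmpeg",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```

Note this is the detection step the security scan flags as a minor privacy exposure: the install path shapes a header sent to the remote API.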

Error Handling

| Code | Meaning | Action |
| --- | --- | --- |
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | New session §3.0 |
| 2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show supported formats |
| 4002 | File too large | Suggest compress/trim |
| 400 | Missing X-Client-Id | Generate a Client-Id and retry (see §1) |
| 402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry once after 30s |

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty "data:" lines mean the backend is still working; show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
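A parser for that stream might separate user-facing text from keepalives like this; the SSE line framing is standard, but the event payload shape ({"type": ..., "text": ...}) is an assumption for illustration:

```python
import json

def parse_sse(lines):
    """Split raw SSE lines into user-facing texts and a keepalive count.

    Heartbeat comments (lines starting with ':'), blank separators, and
    empty "data:" lines all count as keepalives; tool-call events are
    parsed but kept internal, per the rules above.
    """
    texts, keepalives = [], 0
    for line in lines:
        line = line.strip()
        if not line or line.startswith(":") or line == "data:":
            keepalives += 1
            continue
        if line.startswith("data:"):
            event = json.loads(line[5:].strip())
            if event.get("type") == "text":
                texts.append(event["text"])  # tool calls stay internal
    return texts, keepalives
```

If the stream closes with `texts` empty, that is the ~30% case above: fall back to polling /api/state and report the diff.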

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
1. Video: city timelapse (0-10s)
2. BGM: Lo-fi (0-10s, 35%)
3. Title: "Urban Dreams" (0-3s)
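Generating that summary from the short-key draft JSON could look like this. The t/tt/sg/d/m keys and the track-type codes follow the legend above; the label field inside segment metadata is an assumption for illustration:

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}  # tt codes from the legend

def summarize_timeline(draft: dict) -> str:
    """Render the short-key draft JSON as a plain-text timeline summary."""
    lines = [f"Timeline ({len(draft['t'])} tracks):"]
    for i, track in enumerate(draft["t"], 1):
        kind = TRACK_TYPES.get(track["tt"], "Unknown")
        for seg in track["sg"]:
            end_s = seg["d"] / 1000          # d is duration in ms
            label = seg.get("m", {}).get("label", "")
            lines.append(f"{i}. {kind}: {label} (0-{end_s:g}s)")
    return "\n".join(lines)
```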

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "dub this video into Spanish and replace the original audio track" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, MKV for the smoothest experience.

Export as MP4 with H.264 codec for best compatibility across platforms.

Common Workflows

Quick edit: Upload → "dub this video into Spanish and replace the original audio track" → Download MP4. Takes 1-3 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
