Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Subtitle Generator In Video

v1.0.0

Get captioned video files ready to post, without touching a single slider. Upload your video files (MP4, MOV, AVI, WebM, up to 500MB), say something like "ge...

Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description (subtitle + render) aligns with the API endpoints and workflows documented in SKILL.md (upload, SSE, export). Requesting a single service token (NEMO_TOKEN) is appropriate for a hosted rendering/subtitling service. However, the SKILL.md frontmatter declares a required config path (~/.config/nemovideo/) while the registry metadata lists no required config paths — this mismatch should be clarified.
Instruction Scope
Runtime instructions stay focused on uploading video, creating a backend session, sending SSE messages, polling state, and exporting the rendered file. The only notable behaviors are: (1) if NEMO_TOKEN is absent the skill instructs the agent to obtain an anonymous token by POSTing to the service (network I/O), and (2) the skill asks to auto-detect platform/install path for an X-Skill-Platform header. Both are consistent with remote service usage but expand what the agent will read/emit (network calls and a small amount of runtime environment inspection).
Install Mechanism
This is an instruction-only skill with no install specification and no code files — lowest-risk delivery mechanism (nothing new written to disk by the skill itself).
Credentials
Only one credential (NEMO_TOKEN) is declared as required, which fits the described API. The SKILL.md frontmatter, however, references a config path (~/.config/nemovideo/) not present in the registry metadata — that suggests the skill might expect local config files or persisted tokens. Also the instructions include creating/using an anonymous token which the agent will treat like NEMO_TOKEN for the session; confirm whether tokens are stored persistently and where.
Persistence & Privilege
The skill is not force-included (always: false) and is user-invocable; it does not request elevated platform presence or try to modify other skills. The instructions do use session tokens and remote job IDs (normal for queued rendering services) but do not request persistent platform privileges.
What to consider before installing
This skill appears to do what it says (upload videos, generate subtitles, render exports) and needs one service token (NEMO_TOKEN). Before you install or use it:

  • Confirm the NEMO_TOKEN source and whether the agent will persist the anonymous token it can create (and where it would be stored, e.g., ~/.config/nemovideo/).
  • Avoid uploading sensitive or private videos until you trust the service and its privacy policy.
  • Verify the API domain (https://mega-api-prod.nemovideo.ai) is the official provider you expect.
  • If you don't already have a token, be cautious about allowing the agent to perform network calls that mint tokens automatically; ask for manual approval or provide a token you control.

The metadata mismatch (config path present in SKILL.md but not in the registry) lowers certainty; request clarification or source code before granting access if you need higher assurance.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
latest: vk97236ywdpaecxh8kkgp3rqt6984x3xh
40 downloads · 0 stars · 1 version
Updated 3d ago
v1.0.0 · MIT-0

Getting Started

Share your video files and I'll get started on subtitle generation. Or just tell me what you're thinking.

Try saying:

  • "add my video files"
  • "export 1080p MP4"
  • "generate subtitles in English and add"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
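The bootstrap flow above can be sketched as plain request builders. The endpoints, headers, and body shapes come from this page; the function names and the request-spec dict layout are illustrative assumptions, not part of the service's API:

```python
import json
import uuid

API_BASE = "https://mega-api-prod.nemovideo.ai"

def build_anonymous_token_request():
    """Request spec for minting a free starter token when NEMO_TOKEN is absent.
    The response is documented to include a token with 100 free credits."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/auth/anonymous-token",
        # A fresh UUID serves as the client identifier
        "headers": {"X-Client-Id": str(uuid.uuid4())},
    }

def build_session_request(token, language="en"):
    """Request spec for creating a session; the session_id in the response
    is required by all subsequent requests."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"task_name": "project", "language": language}),
    }
```

Separating "build the request" from "send it" also makes it easy to show the user what will be transmitted before any network call is made.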

Tell the user you're ready. Keep the technical details out of the chat.

Subtitle Generator in Video — Generate and Embed Video Subtitles

Send me your video files and describe the result you want. The subtitle generation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a 3-minute tutorial video in MP4, type "generate subtitles in English and add them as burned-in captions", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: shorter clips under 5 minutes generate subtitles significantly faster.

Matching Input to Actions

User prompts referencing subtitle generation, aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification.

User says... → Action (only §3.1 goes through the SSE channel; the other actions skip it)

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
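A minimal keyword router matching the routing table above. The keyword lists come from the table; the first-match ordering, function name, and action labels are illustrative assumptions:

```python
# (action, trigger keywords) pairs, checked in order; first match wins
ROUTES = [
    ("export",  ["export", "导出", "download", "send me the video"]),
    ("credits", ["credits", "积分", "balance", "余额"]),
    ("state",   ["status", "状态", "show tracks"]),
    ("upload",  ["upload", "上传"]),
]

def route(prompt: str) -> str:
    """Return the action for a user prompt; anything unmatched goes to SSE."""
    text = prompt.lower()
    for action, keywords in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "sse"
```

A real implementation would layer intent classification on top of this; plain substring matching is only the fallback tier.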

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Three attribution headers are required on every request and must match this file's frontmatter:

  • X-Skill-Source: subtitle-generator-in-video
  • X-Skill-Version: frontmatter version
  • X-Skill-Platform: auto-detect (clawhub / cursor / unknown) from install path

Include Authorization: Bearer <NEMO_TOKEN> and all attribution headers on every request — omitting them triggers a 402 on export.
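Building the full header set in one place makes the 402-on-omission failure mode hard to hit. The header names and values are taken from this page; the function name and the idea of asserting the platform value are illustrative:

```python
def build_headers(token: str, version: str, platform: str = "unknown") -> dict:
    """Auth + attribution headers required on every request.
    `version` should mirror the SKILL.md frontmatter version."""
    # Platform is auto-detected from the install path; only these three
    # values are documented
    assert platform in ("clawhub", "cursor", "unknown")
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "subtitle-generator-in-video",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```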

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
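The export flow above splits naturally into "build the payload" and "decide what to do with each poll response." The body shape and the 30-second poll interval come from this page; the in-progress status values ("queued", "processing") and both function names are assumptions:

```python
import time

def build_export_payload(session_id: str, draft: dict) -> dict:
    """Body for POST /api/render/proxy/lambda."""
    return {
        "id": f"render_{int(time.time())}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }

def poll_step(response: dict):
    """Interpret one GET /api/render/proxy/lambda/<id> response:
    ('done', url), ('wait', seconds), or ('error', status)."""
    status = response.get("status")
    if status == "completed":
        return ("done", response["output"]["url"])
    if status in ("queued", "processing"):  # assumed in-progress values
        return ("wait", 30)  # poll every 30s per the docs
    return ("error", status)
```

Keeping `poll_step` pure (no sleeping or HTTP inside it) makes the retry loop trivial to test without a live render job.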

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
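Validating uploads client-side avoids round-trips that would end in errors 4001/4002. The extension list and the 500MB cap come from this page; the function name and return-value convention are illustrative:

```python
SUPPORTED_EXTENSIONS = {
    "mp4", "mov", "avi", "webm", "mkv",  # video
    "jpg", "png", "gif", "webp",         # image
    "mp3", "wav", "m4a", "aac",          # audio
}
MAX_BYTES = 500 * 1024 * 1024  # 500MB upload limit

def validate_upload(filename: str, size_bytes: int):
    """Return an error string mirroring codes 4001/4002, or None if OK."""
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext not in SUPPORTED_EXTENSIONS:
        return "unsupported file type"  # mirrors error 4001
    if size_bytes > MAX_BYTES:
        return "file too large"         # mirrors error 4002
    return None
```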

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty "data:" lines mean the backend is still working; show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
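A line classifier for the raw event stream might look like the sketch below. It follows the standard text/event-stream conventions (comment lines start with ":", payloads follow "data:"); the function name and the three-way labels are assumptions, and in practice the text payload would be JSON whose tool-call parts stay internal:

```python
def classify_sse_line(line: str):
    """Classify one raw SSE line: ('text', payload) for user-visible events,
    ('heartbeat', None) for keep-alives/empty data, ('internal', None) otherwise."""
    line = line.strip()
    # Blank lines, bare "data:", and ":" comment lines are keep-alives
    if not line or line == "data:" or line.startswith(":"):
        return ("heartbeat", None)
    if line.startswith("data:"):
        payload = line[len("data:"):].strip()
        if not payload:
            return ("heartbeat", None)
        return ("text", payload)
    # event:/id:/retry: fields and anything else stay internal
    return ("internal", None)
```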

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
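Decoding that compact draft shape into a readable track list, as in the timeline summary above, can be sketched with only the documented field keys (t, tt, sg). The TRACK_TYPES mapping comes from the key table; the function name and the exact summary format are illustrative:

```python
# tt values documented above: 0=video, 1=audio, 7=text
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft: dict) -> list:
    """Turn the compact draft format into readable track lines."""
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        n = len(track.get("sg", []))  # sg = segments on this track
        lines.append(f"{i}. {kind}: {n} segment(s)")
    return lines

# Hypothetical draft with one segment per track (d = duration in ms)
demo = {"t": [{"tt": 0, "sg": [{"d": 10000}]},
              {"tt": 1, "sg": [{"d": 10000}]},
              {"tt": 7, "sg": [{"d": 3000}]}]}
```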

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
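The error-handling rules above are a straightforward code-to-action table. The codes and recovery steps come from this list; the dict name, action strings, and fallback behavior are illustrative:

```python
ERROR_ACTIONS = {
    0:    "continue",
    1001: "re-acquire token via /api/auth/anonymous-token",
    1002: "create a new session",
    2001: "out of credits: offer registration link or top-up",
    4001: "show accepted formats",
    4002: "suggest compressing or trimming",
    400:  "generate X-Client-Id and retry",
    402:  "explain subscription-tier export block (not a credit issue)",
    429:  "wait 30s and retry once",
}

def handle_error(code: int) -> str:
    """Map a backend error code to the documented recovery action."""
    return ERROR_ACTIONS.get(code, f"unhandled error code {code}")
```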

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate subtitles in English and add them as burned-in captions" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 for widest compatibility across platforms and devices.

Common Workflows

Quick edit: Upload → "generate subtitles in English and add them as burned-in captions" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
