Audio Subtitle

v1.0.0

Turn video with audio into subtitled video files with this skill. Works with MP4, MOV, AVI, and WebM files up to 500MB. For content creators, educators, marketers...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dsewell-583h0/audio-subtitle.

Install the skill "Audio Subtitle" (dsewell-583h0/audio-subtitle) from ClawHub.
Skill page: https://clawhub.ai/dsewell-583h0/audio-subtitle
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install audio-subtitle

ClawHub CLI


npx clawhub@latest install audio-subtitle
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description (generate subtitles from video audio) aligns with the declared requirements and runtime instructions: the SKILL.md describes uploading videos, creating sessions, SSE chat/polling, and export flows to a single video-rendering backend. Required env var NEMO_TOKEN and config path (~/.config/nemovideo/) are consistent with a cloud service that may accept both account and anonymous tokens.
Instruction Scope
Instructions explicitly direct the agent to upload user video files and to call several network endpoints (session creation, upload, SSE, export). They also include logic to request an anonymous token if NEMO_TOKEN is absent and to persist/track session_id. This is expected for a cloud render/subtitle workflow but is a privacy-sensitive action (user media will leave the device). There is no instruction to read unrelated local files or credentials.
Install Mechanism
There is no install spec and no code files — this is instruction-only. That is the lowest install risk (nothing is written to disk by an installer), though runtime network calls will transmit user data to the indicated backend.
Credentials
The skill declares a single primary env var (NEMO_TOKEN), which is proportional to a cloud service. The SKILL.md also instructs the agent to obtain an anonymous token via network call if NEMO_TOKEN is missing (100 free credits, 7-day expiry). Be aware: providing a registered NEMO_TOKEN likely ties operations to your account and could consume paid credits or expose account-level data; anonymous tokens are ephemeral but still result in uploading data to the same third-party service.
Persistence & Privilege
The skill is not marked always:true, and it does not request elevated system presence beyond normal agent invocation. Metadata references a config path (~/.config/nemovideo/) which is reasonable for storing session or token data for this service; the instructions do not direct modification of other skills or global agent settings.
Assessment
This skill appears to do what it claims (upload your video to a cloud service, run speech-to-text, and return a burned-in/subtitled export). Before installing or using it:

  1. Understand that your video files will be uploaded to https://mega-api-prod.nemovideo.ai — do not send sensitive or private footage unless you trust the service.
  2. If you set NEMO_TOKEN in your environment, it will likely run under your account (possible charges, tied activity); leaving it out triggers an anonymous-token flow.
  3. The skill may store session/token data under ~/.config/nemovideo/ per its metadata.
  4. The package/source is unknown (no homepage); if you need stronger assurance, verify the service/operator and privacy/terms externally before sending real data.
  5. Test with short, non-sensitive videos first to confirm behavior and outputs.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎙️ Clawdis
Primary env: NEMO_TOKEN
Latest: vk972pqhkg392hxmth5hkkc3zbs85p2z8
34 downloads · 0 stars · 1 version
Updated 7h ago
v1.0.0
MIT-0

Getting Started

Share your video with audio and I'll get started on audio subtitle generation. Or just tell me what you're thinking.

Try saying:

  • "generate subtitles for my video"
  • "export 1080p MP4"
  • "generate subtitles from the audio and burn them into the video"

Quick Start Setup

This skill connects to a cloud processing backend. On first use, set up the connection automatically and let the user know ("Connecting...").

Token check: Look for NEMO_TOKEN in the environment. If found, skip to session creation. Otherwise:

  • Generate a UUID as client identifier
  • POST https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with X-Client-Id header
  • Extract data.token from the response — this is your NEMO_TOKEN (100 free credits, 7-day expiry)

Session: POST https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Keep the returned session_id for all operations.

Let the user know with a brief "Ready!" when setup is complete. Don't expose tokens or raw API output.
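The token and session steps above can be sketched as request builders. This is a minimal sketch: the endpoints, headers, and `data.token` response path come from this page, while the exact response shape beyond that is an assumption.

```python
import json
import uuid

BASE = "https://mega-api-prod.nemovideo.ai"

def build_anon_token_request():
    """Request spec for the anonymous-token step (used when NEMO_TOKEN is absent)."""
    client_id = str(uuid.uuid4())  # generated UUID client identifier
    return {
        "method": "POST",
        "url": f"{BASE}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": client_id},
    }

def extract_token(response_json):
    # The anonymous token arrives under data.token (100 free credits, 7-day expiry)
    return response_json["data"]["token"]

def build_session_request(token, task_name="project"):
    """Request spec for session creation with Bearer auth, per the Quick Start."""
    return {
        "method": "POST",
        "url": f"{BASE}/api/tasks/me/with-session/nemo_agent",
        "headers": {"Authorization": f"Bearer {token}"},
        "body": json.dumps({"task_name": task_name}),
    }
```

Keeping these as pure builders (rather than firing the HTTP calls inline) makes it easy to plug in any HTTP client and to keep the returned session_id for all later operations.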

Audio Subtitle — Generate Subtitles from Video Audio

Drop your video with audio in the chat and tell me what you need. I'll handle the audio subtitle generation on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send a 3-minute interview video with spoken dialogue, ask me to generate subtitles from the audio and burn them into the video, and about 30-90 seconds later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — clear audio with minimal background noise produces the most accurate subtitle sync.

Matching Input to Actions

User prompts referencing audio subtitle, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... → Action
"export" / "导出" / "download" / "send me the video" → §3.5 Export
"credits" / "积分" / "balance" / "余额" → §3.3 Credits
"status" / "状态" / "show tracks" → §3.4 State
"upload" / "上传" / user sends file → §3.2 Upload
Everything else (generate, edit, add BGM…) → §3.1 SSE
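The routing table above can be sketched as simple keyword matching. The keywords and action sections are taken from the table; the skill's real intent classification may be richer than substring checks.

```python
# First matching keyword set wins; anything unmatched falls through to SSE.
ROUTES = [
    ({"export", "导出", "download", "send me the video"}, "§3.5 Export"),
    ({"credits", "积分", "balance", "余额"}, "§3.3 Credits"),
    ({"status", "状态", "show tracks"}, "§3.4 State"),
    ({"upload", "上传"}, "§3.2 Upload"),
]

def route(message):
    text = message.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "§3.1 SSE"  # everything else: generate, edit, add BGM, ...
```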

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> — multipart file or JSON with URLs.
  4. Credits: GET /api/credits/balance/simple — returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest — current draft and media info.
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL.

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
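The export step above is a poll-until-completed loop. A minimal sketch, with the HTTP call injected as `fetch_status` so it stays client-agnostic; the response shape (`status`, `download_url` fields) is an assumption based on the description above.

```python
import time

def poll_export(fetch_status, render_id, interval=30, timeout=900):
    """Poll the render status every `interval` seconds (30s per the docs)
    until it reports 'completed', then return the download URL."""
    waited = 0
    while waited <= timeout:
        resp = fetch_status(render_id)  # e.g. GET /api/render/proxy/lambda/<id>
        if resp.get("status") == "completed":
            return resp["download_url"]
        time.sleep(interval)
        waited += interval
    raise TimeoutError(f"render {render_id} did not complete within {timeout}s")
```

Note the warning above: the session token carries render job IDs, so abandoning the loop before completion orphans the job.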

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: audio-subtitle
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/clawhub, ~/.cursor/skills/cursor, else unknown)

Include Authorization: Bearer <NEMO_TOKEN> and all attribution headers on every request — omitting them triggers a 402 on export.
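A sketch of the header set every request needs. The header names come from the attribution list above; the version and platform values are placeholders for what the agent reads from frontmatter and the install path at runtime.

```python
def build_headers(token, version="1.0.0", platform="unknown"):
    # Bearer auth plus all attribution headers; omitting them triggers
    # a 402 on export, per the note above.
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "audio-subtitle",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```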

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
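Using the short keys described above, a draft for that timeline might look like the following. Only the keys t, tt, sg, d, and m are documented here; the field names inside each segment's metadata (name, volume, text) are illustrative assumptions.

```python
# Draft sketch: t=tracks, tt=track type (0=video, 1=audio, 7=text),
# sg=segments, d=duration in ms, m=metadata.
draft = {
    "t": [
        {"tt": 0, "sg": [{"d": 10000, "m": {"name": "city timelapse"}}]},        # video, 0-10s
        {"tt": 1, "sg": [{"d": 10000, "m": {"name": "Lo-fi", "volume": 0.35}}]},  # BGM at 35%
        {"tt": 7, "sg": [{"d": 3000, "m": {"text": "Urban Dreams"}}]},            # title, 0-3s
    ],
}
```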

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
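The stream handling above can be sketched as a small parser: forward text events, skip heartbeats and empty data lines, keep everything else internal. The `data:` line framing is standard SSE; the JSON payload shape (`type`, `content` fields) is an assumption.

```python
import json

def read_sse(lines):
    """Yield user-facing text from an SSE stream, skipping heartbeats."""
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # SSE comments / heartbeats like ": ping"
        payload = line[len("data:"):].strip()
        if not payload:
            continue  # empty data: line means the backend is still working
        event = json.loads(payload)
        if event.get("type") == "text":
            yield event["content"]  # goes to the user (after GUI translation)
        # tool calls and other event types stay internal
```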

Error Handling

Code | Meaning | Action
0 | Success | Continue
1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days)
1002 | Session not found | New session §3.0
2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account"
4001 | Unsupported file | Show supported formats
4002 | File too large | Suggest compress/trim
400 | Missing X-Client-Id | Generate Client-Id and retry (see §1)
402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
429 | Rate limit (1 token/client/7 days) | Retry in 30s once
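The table above maps cleanly to a dispatch dict. The codes and meanings come from the table; the handler names are illustrative labels, not real functions in the skill.

```python
# Error-code dispatch sketch; values name the recovery action to take.
ERROR_ACTIONS = {
    0: "continue",
    1001: "reauth_via_anonymous_token",   # token bad/expired (7-day expiry)
    1002: "create_new_session",           # session not found
    2001: "show_credits_message",         # registration URL or top-up, by account type
    4001: "show_supported_formats",
    4002: "suggest_compress_or_trim",
    400: "generate_client_id_and_retry",
    402: "explain_plan_upgrade",          # subscription tier issue, NOT credits
    429: "retry_once_after_30s",
}

def handle(code):
    return ERROR_ACTIONS.get(code, "report_unknown_error")
```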

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate subtitles from the audio and burn them into the video" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 for widest compatibility across platforms and devices.

Common Workflows

Quick edit: Upload → "generate subtitles from the audio and burn them into the video" → Download MP4. Takes 30-90 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
