Topmediai Ai Music

v1.0.0

Turn a 60-second product demo video into a 1080p music-backed video just by typing what you need. Whether it's adding AI-generated background music to videos...

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for vynbosserman65/topmediai-ai-music.

Prompt preview: Install & Setup
Install the skill "Topmediai Ai Music" (vynbosserman65/topmediai-ai-music) from ClawHub.
Skill page: https://clawhub.ai/vynbosserman65/topmediai-ai-music
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install topmediai-ai-music

ClawHub CLI

npx clawhub@latest install topmediai-ai-music

Security Scan

VirusTotal: Pending
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name/description (AI music for videos) aligns with the declared primary credential (NEMO_TOKEN) and the API endpoints referenced (nemovideo.ai). No binaries are required, and the declared env var is exactly what an API-backed service would need. One minor inconsistency: the registry metadata lists no required config paths, but the SKILL.md frontmatter declares a configPaths entry (~/.config/nemovideo/). This is a small mismatch but does not change the core purpose.
Instruction Scope
The SKILL.md gives detailed runtime instructions that remain within the stated scope (create session, upload video, SSE chat, export/polling). It also instructs the agent to read the skill's YAML frontmatter at runtime and to detect install path patterns (e.g., ~/.clawhub, ~/.cursor/skills/); this is plausible for header attribution, but it is local file access the user should be aware of. The skill also instructs automatic acquisition of an anonymous token if NEMO_TOKEN is missing (a network call). All SSE and upload behavior is consistent with a cloud rendering workflow.
Install Mechanism
No install spec or code files are present; this is instruction-only so nothing additional is written to disk during installation. This is the lowest-risk install model.
Credentials
The skill only requests a single credential (NEMO_TOKEN) which is proportionate to a cloud API integration. The SKILL.md also references a config path in its frontmatter (~/.config/nemovideo/), which the registry did not list — another small metadata mismatch. No unrelated secrets or broad environment access are requested.
Persistence & Privilege
The always flag is false and the skill is user-invocable; it does not request permanent, always-on presence or modifications to other skills. Autonomous invocation (model invocation enabled) is the platform default and is not an additional red flag here.
Assessment
This skill appears to be a straightforward wrapper around a cloud rendering API (mega-api-prod.nemovideo.ai) and will upload your media and prompt text to that service. Before installing or using it:

  1. Be comfortable that your videos/audio will be sent to the remote host and that generated or uploaded media may be stored or processed there.
  2. Confirm you trust the NEMO_TOKEN you provide (know its scope and lifetime); if you don't set NEMO_TOKEN, the skill will obtain an anonymous token automatically.
  3. Note that the skill may read local install paths or its own frontmatter to populate attribution headers; if you prefer no local file reads, avoid installing.
  4. The SKILL.md contains a small metadata inconsistency (configPaths declared in frontmatter vs. none listed in the registry); not critical, but worth noting.

If any of the above is unacceptable (sensitive videos, unknown token scopes, or avoiding local filesystem reads), don't install or run this skill. Otherwise it is internally consistent with its declared purpose.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎵 Clawdis

Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97bvjvhw7jjgj4zdq7a8xzq39853fkx
56 downloads · 0 stars · 1 version
Updated 1w ago
v1.0.0
MIT-0

Getting Started

Share your video or text and I'll get started on AI music generation. Or just tell me what you're thinking.

Try saying:

  • "generate my video or text"
  • "export 1080p MP4"
  • "generate background music that matches the"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
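
A minimal sketch of this bootstrap flow in Python using the requests library. The endpoints and headers come from this section; the JSON field names ("token", "session_id") and the error handling are assumptions, not confirmed API details.

```python
import os
import uuid
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def get_token() -> str:
    """Use NEMO_TOKEN if set; otherwise request an anonymous starter token."""
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token
    client_id = str(uuid.uuid4())  # UUID sent as the X-Client-Id header
    resp = requests.post(f"{BASE}/api/auth/anonymous-token",
                         headers={"X-Client-Id": client_id}, timeout=30)
    resp.raise_for_status()
    return resp.json()["token"]  # response field name assumed

def create_session(token: str) -> str:
    """Create a project session; the session_id is reused on every later call."""
    resp = requests.post(f"{BASE}/api/tasks/me/with-session/nemo_agent",
                         headers={"Authorization": f"Bearer {token}"},
                         json={"task_name": "project", "language": "en"},
                         timeout=30)
    resp.raise_for_status()
    return resp.json()["session_id"]  # response field name assumed

token = get_token()
session_id = create_session(token)
```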

Tell the user you're ready. Keep the technical details out of the chat.

TopMediai AI Music — Generate Music for Videos

This tool takes your video or text and runs AI music generation through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a 60-second product demo video and want background music that matches its mood and length: the backend processes it in about 20-40 seconds and hands you a 1080p MP4.

Tip: shorter videos allow the AI to sync music transitions more accurately.

Matching Input to Actions

User prompts referencing TopMediai AI Music, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says... | Action | Skip SSE? |
| --- | --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | Yes |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | Yes |
| "status" / "状态" / "show tracks" | §3.4 State | Yes |
| "upload" / "上传" / user sends file | §3.2 Upload | Yes |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | No |
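
One way to implement that routing is a small first-match classifier over the keyword groups in the table. This is an illustrative sketch; the handler names are placeholders for the sections they point at, not real function names from the skill.

```python
# First-match keyword router mirroring the table above (names illustrative).
ROUTES = [
    (("export", "导出", "download", "send me the video"), "export"),   # §3.5
    (("credits", "积分", "balance", "余额"), "credits"),               # §3.3
    (("status", "状态", "show tracks"), "state"),                      # §3.4
    (("upload", "上传"), "upload"),                                    # §3.2
]

def classify(message: str, has_attachment: bool = False) -> str:
    """Return the action for a user message; anything unmatched goes to SSE chat."""
    if has_attachment:          # a file in the message always means upload
        return "upload"
    text = message.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "sse"                # §3.1: generate, edit, add BGM, etc.
```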

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> — multipart file or JSON with URLs.
  4. Credits: GET /api/credits/balance/simple — returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest — current draft and media info.
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL.

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
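
As a sketch of the export step (endpoint 6), the loop below submits a render job and polls every 30 seconds for completion. The request body shape and the "status"/"download_url" field names are assumptions; the notes above only say the call takes a render ID and draft JSON and that polling should check for a completed status.

```python
import time
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def export_and_wait(headers: dict, render_id: str, draft: dict,
                    timeout_s: int = 15 * 60) -> str:
    """Submit an export job, then poll every 30s until it reports completion."""
    resp = requests.post(f"{BASE}/api/render/proxy/lambda",
                         headers=headers,
                         json={"render_id": render_id, "draft": draft},  # body shape assumed
                         timeout=60)
    resp.raise_for_status()

    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = requests.get(f"{BASE}/api/render/proxy/lambda/{render_id}",
                              headers=headers, timeout=30).json()
        if status.get("status") == "completed":      # field names assumed
            return status["download_url"]
        time.sleep(30)                               # poll interval from the notes above
    raise TimeoutError("Export did not complete within the expected window")
```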

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: topmediai-ai-music
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub → clawhub, ~/.cursor/skills/ → cursor, else unknown)

Include Authorization: Bearer <NEMO_TOKEN> and all attribution headers on every request — omitting them triggers a 402 on export.
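
A sketch of assembling those headers, assuming the version string has already been read from the frontmatter; platform detection simply checks which install directory the skill file sits under.

```python
from pathlib import Path

def build_headers(token: str, skill_version: str, skill_path: Path) -> dict:
    """Authorization plus the three attribution headers required on every request."""
    location = str(skill_path)
    if ".clawhub" in location:
        platform = "clawhub"
    elif ".cursor/skills" in location:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "topmediai-ai-music",
        "X-Skill-Version": skill_version,   # taken from the SKILL.md frontmatter
        "X-Skill-Platform": platform,
    }
```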

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
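
As an illustration of that field mapping, the three-track timeline above might serialize roughly as below. The exact draft schema is not documented here, so treat the nesting as a guess built only from the abbreviations listed.

```python
# Illustrative only: structure inferred from the t/tt/sg/d/m abbreviations above.
draft = {
    "t": [                                     # t = tracks
        {"tt": 0, "sg": [                      # tt=0: video track
            {"d": 10_000, "m": {"name": "city timelapse"}}]},
        {"tt": 1, "sg": [                      # tt=1: audio (BGM) track
            {"d": 10_000, "m": {"name": "Lo-fi", "volume": 0.35}}]},
        {"tt": 7, "sg": [                      # tt=7: text track
            {"d": 3_000, "m": {"text": "Urban Dreams"}}]},
    ]
}
```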

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

SSE Event Handling

| Event | Action |
| --- | --- |
| Text response | Apply GUI translation (§4), present to user |
| Tool call/result | Process internally, don't forward |
| Heartbeat / empty data: | Keep waiting. Every 2 min: "⏳ Still working..." |
| Stream closes | Process final response |

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
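
A rough sketch of consuming the /run_sse stream with requests, including the two-minute "still working" notice and the no-text fallback. The event payload format (JSON after a "data:" prefix with a "text" field) is an assumption; the request envelope follows the new_message.parts[0].text path described above.

```python
import json
import time
import requests

def run_sse(base: str, headers: dict, session_id: str, prompt: str) -> list[str]:
    """Stream one chat turn and collect text responses from the SSE events."""
    body = {"session_id": session_id,
            "new_message": {"parts": [{"text": prompt}]}}
    replies, last_notice = [], time.time()
    with requests.post(f"{base}/run_sse",
                       headers={**headers, "Accept": "text/event-stream"},
                       json=body, stream=True, timeout=15 * 60) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines(decode_unicode=True):
            if not line or not line.startswith("data:"):
                # Heartbeats and empty keep-alives: wait, nudge every 2 minutes.
                if time.time() - last_notice > 120:
                    print("⏳ Still working...")
                    last_notice = time.time()
                continue
            event = json.loads(line[len("data:"):].strip() or "{}")
            if "text" in event:             # field name assumed
                replies.append(event["text"])
            # Tool calls/results are handled internally and never forwarded.
    return replies  # empty list (~30% of edits): poll session state instead
```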

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
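
These codes lend themselves to a small dispatch table. The sketch below only maps each code to a coarse recovery action; the action names are invented for illustration, and the actual retry logic lives wherever the calls are made.

```python
import time

def recovery_action(code: int) -> str:
    """Map a backend error code to a coarse next step (names illustrative)."""
    if code == 0:
        return "continue"
    if code == 1001:
        return "reacquire_token"        # POST /api/auth/anonymous-token again
    if code == 1002:
        return "recreate_session"
    if code == 2001:
        return "out_of_credits"         # registration link or top-up
    if code in (4001, 4002):
        return "reject_file"            # unsupported format or file too large
    if code == 400:
        return "regenerate_client_id"   # missing X-Client-Id
    if code == 402:
        return "export_requires_plan"   # subscription tier, not a credit issue
    if code == 429:
        time.sleep(30)                  # rate limited: wait 30s, retry once
        return "retry_once"
    return "unknown"
```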

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate background music that matches the mood and length of my video" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 for widest compatibility.

Common Workflows

Quick edit: Upload → "generate background music that matches the mood and length of my video" → Download MP4. Takes 20-40 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
