Solo Video

v1.0.0

Turn a 2-minute selfie-style talking head video into a polished 1080p solo video just by typing what you need. Whether it's editing single-person videos for so...

by peandrover · adam@peand-rover
Security Scan

VirusTotal: Benign (view report →)
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (solo video editing) match the declared primary credential (NEMO_TOKEN) and the API endpoints in SKILL.md. Minor inconsistency: the registry metadata lists no required config paths, but the SKILL.md frontmatter declares one (~/.config/nemovideo/); this may indicate the skill expects a local config that the registry entry didn't surface.
Instruction Scope
Instructions stay within video-editing scope (create session, upload file, SSE, render/export). They explicitly instruct checking for NEMO_TOKEN and, if missing, obtaining an anonymous token via a POST to the nemovideo API. The skill requires adding attribution headers and 'auto-detect' of platform from install path (which implies reading the agent/install path), but otherwise does not instruct reading arbitrary host files or unrelated credentials.
Install Mechanism
No install spec or code files — instruction-only skill. Nothing is downloaded or written to disk by an installer.
Credentials
Only a single credential (NEMO_TOKEN) is declared as primary and is appropriate for a hosted editing service. The SKILL.md can auto-generate a short-lived anonymous token if none is provided, which is coherent but means the skill can operate without a pre-provisioned secret. The SKILL.md's frontmatter mentions a config path (~/.config/nemovideo/), which was not declared in the registry; that mismatch should be clarified.
Persistence & Privilege
always:false and no install-time persistence; autonomous invocation allowed (platform default). The skill does not request elevated or system-wide modifications.
Assessment
This skill will upload any video you provide to the external service at mega-api-prod.nemovideo.ai and use a NEMO_TOKEN (you can provide one or the skill will obtain a short-lived anonymous token). Before installing, confirm you are comfortable sending your videos and any embedded content to that domain and review that service's privacy/terms. Ask the publisher to explain the registry/frontmatter mismatch about ~/.config/nemovideo/ (does the skill read a local config?), and verify you trust the endpoint and headers required (X-Skill-Source, X-Skill-Version, X-Skill-Platform). If you don't want automatic token creation, supply your own NEMO_TOKEN or avoid using the skill.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎥 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97fzpmkmwn90b02rd5dj96mm585619x
52 downloads
0 stars
1 version
Updated 1d ago
v1.0.0
MIT-0

Getting Started

Share your single video file and I'll get started on AI solo video editing. Or just tell me what you're thinking.

Try saying:

  • "edit my single video file"
  • "export 1080p MP4"
  • "clean up background noise, cut silences,"

Quick Start Setup

This skill connects to a cloud processing backend. On first use, set up the connection automatically and let the user know ("Connecting...").

Token check: Look for NEMO_TOKEN in the environment. If found, skip to session creation. Otherwise:

  • Generate a UUID as client identifier
  • POST https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with X-Client-Id header
  • Extract data.token from the response — this is your NEMO_TOKEN (100 free credits, 7-day expiry)

Session: POST https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Keep the returned session_id for all operations.
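A minimal sketch of the session call, assuming the body and Bearer auth described above (the function name is a placeholder):

```python
import json
import urllib.request

BASE_URL = "https://mega-api-prod.nemovideo.ai"

def build_session_request(token, task_name="project"):
    """POST that opens an editing session; the response carries session_id."""
    return urllib.request.Request(
        BASE_URL + "/api/tasks/me/with-session/nemo_agent",
        data=json.dumps({"task_name": task_name}).encode(),
        method="POST",
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "application/json",
        },
    )
```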

Let the user know with a brief "Ready!" when setup is complete. Don't expose tokens or raw API output.

Solo Video — Edit and Export Solo Videos

This tool takes your single video file and runs AI solo video editing through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a 2-minute selfie-style talking head video and want to clean up background noise, cut silences, and add animated captions — the backend processes it in about 30-60 seconds and hands you a 1080p MP4.

Tip: trimming silence before upload speeds up AI processing noticeably.

Matching Input to Actions

User prompts referencing solo video, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export (skips SSE)
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits (skips SSE)
  • "status" / "状态" / "show tracks" → §3.4 State (skips SSE)
  • "upload" / "上传" / user sends file → §3.2 Upload (skips SSE)
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
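The routing above can be approximated with a keyword-first pass; this is a simplification (real intent classification would do more), and the returned action names are placeholders, not API values:

```python
# Keyword-first routing; anything unmatched falls through to the SSE pipeline.
ROUTES = [
    (("export", "导出", "download", "send me the video"), "export"),
    (("credits", "积分", "balance", "余额"), "credits"),
    (("status", "状态", "show tracks"), "state"),
    (("upload", "上传"), "upload"),
]

def route(prompt):
    text = prompt.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "sse"  # generate, edit, add BGM, etc.
```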

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

  • POST /api/tasks/me/with-session/nemo_agent → start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id.
  • POST /run_sse → send a user message. Body includes app_name, session_id, new_message. Stream the response with Accept: text/event-stream. Timeout: 15 min.
  • POST /api/upload-video/nemo_agent/me/<sid> → upload a file (multipart) or URL.
  • GET /api/credits/balance/simple → check remaining credits (available, frozen, total).
  • GET /api/state/nemo_agent/me/<sid>/latest → fetch current timeline state (draft, video_infos, generated_media).
  • POST /api/render/proxy/lambda → start export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s.
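Assuming the body shape shown for /api/render/proxy/lambda, an export-body builder might look like this (the helper name is illustrative):

```python
import time

def build_export_body(session_id, draft):
    """Request body for POST /api/render/proxy/lambda; poll status every 30s afterwards."""
    return {
        "id": "render_%d" % int(time.time()),  # the render_<ts> job id
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }
```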

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Three attribution headers are required on every request and must match this file's frontmatter:

  • X-Skill-Source → solo-video
  • X-Skill-Version → frontmatter version
  • X-Skill-Platform → auto-detect: clawhub / cursor / unknown from install path

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
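A small helper that assembles the required header set; the fallback to "unknown" when the platform cannot be detected is an assumption about how auto-detection should degrade:

```python
def attribution_headers(token, version, platform="unknown"):
    """Bearer auth plus the three X-Skill-* headers required on every call."""
    if platform not in ("clawhub", "cursor", "unknown"):
        platform = "unknown"  # assumed auto-detection fallback
    return {
        "Authorization": "Bearer " + token,
        "X-Skill-Source": "solo-video",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```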

Error Handling

  • 0 (success) → continue
  • 1001 (bad/expired token) → re-auth via anonymous-token (tokens expire after 7 days)
  • 1002 (session not found) → new session (§3.0)
  • 2001 (no credits) → anonymous: show registration URL with ?bind=<id> (get <id> from the create-session or state response when needed); registered: "Top up credits in your account"
  • 4001 (unsupported file) → show supported formats
  • 4002 (file too large) → suggest compressing or trimming
  • 400 (missing X-Client-Id) → generate a Client-Id and retry (see §1)
  • 402 (free-plan export blocked) → subscription-tier issue, NOT credits: "Register or upgrade your plan to unlock export."
  • 429 (rate limit: 1 token/client/7 days) → retry once after 30s
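One way to centralize that recovery logic is a simple lookup; the action strings here are placeholders for the steps listed above, not API values:

```python
def next_action(code):
    """Map a response code from the error list above to a recovery step."""
    actions = {
        0: "continue",
        1001: "reauth_anonymous",       # token expired (7-day lifetime)
        1002: "create_new_session",
        2001: "show_topup_or_registration",
        4001: "show_supported_formats",
        4002: "suggest_compress_or_trim",
        400: "generate_client_id_and_retry",
        402: "prompt_plan_upgrade",     # subscription tier, not credits
        429: "retry_once_after_30s",
    }
    return actions.get(code, "surface_error_to_user")
```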

SSE Event Handling

  • Text response → apply GUI translation (§4), present to user
  • Tool call/result → process internally, don't forward
  • Heartbeat / empty data: → keep waiting; every 2 min: "⏳ Still working..."
  • Stream closes → process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
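A rough per-line classifier under these rules; note that the tool-call field names ("tool", "function_call") are guesses about the backend's event payload and would need adjusting to what it actually emits:

```python
import json

def classify_sse_line(line):
    """Classify one SSE line per the event handling rules above (sketch)."""
    line = line.strip()
    if not line:
        return "heartbeat"                 # keep waiting
    if line.startswith("data:"):
        payload = line[len("data:"):].strip()
        if not payload:
            return "heartbeat"             # empty data: frame
        try:
            event = json.loads(payload)
        except ValueError:
            return "text"                  # plain text chunk: present to user
        if isinstance(event, dict) and ("tool" in event or "function_call" in event):
            return "tool"                  # process internally, don't forward
        return "text"
    return "other"                         # comments, event: lines, etc.
```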

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
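Using the field mapping above, a draft summarizer might look like this; any structure beyond the t/tt/sg/d keys is an assumption:

```python
def summarize_draft(draft):
    """Expand the compressed draft keys (t/tt/sg/d) into readable track lines."""
    type_names = {0: "video", 1: "audio", 7: "text"}
    lines = []
    for track in draft.get("t", []):
        kind = type_names.get(track.get("tt"), "unknown")
        segments = track.get("sg", [])
        total_ms = sum(seg.get("d", 0) for seg in segments)
        lines.append("%s: %d segment(s), %d ms" % (kind, len(segments), total_ms))
    return lines
```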

Common Workflows

Quick edit: Upload → "clean up background noise, cut silences, and add animated captions" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "clean up background noise, cut silences, and add animated captions" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 for widest compatibility across all platforms.
