Shorts Editor Online

v1.0.0

Get polished short clips ready to post, without touching a single slider. Upload your raw video clips (MP4, MOV, AVI, WebM, up to 500MB), say something like...

Security Scan
VirusTotal: Benign (View report →)
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description (AI cloud shorts editor) are consistent with the only required credential (NEMO_TOKEN) and with the listed API endpoints on mega-api-prod.nemovideo.ai. There is a small metadata mismatch: the frontmatter declares a config path (~/.config/nemovideo/) while the registry summary lists no required config paths, but this is coherent with the service's purpose.
Instruction Scope
SKILL.md instructs the agent to upload user media, create sessions, stream SSE, poll render status, and include attribution headers. It also instructs reading the skill file's YAML frontmatter and detecting the agent install path to set X-Skill-Platform; both require local file/environment inspection but are reasonable for header attribution. All network calls target the stated API domain; the instructions do not request unrelated local secrets or system-wide credentials.
Install Mechanism
Instruction-only skill with no install spec or external downloads, which is the lowest-risk install profile.
Credentials
Only one credential is required (NEMO_TOKEN) which is appropriate for a hosted editing API. The skill will also generate an anonymous token if none is present. The frontmatter references a config path (~/.config/nemovideo/) which could contain related tokens/config — that is consistent with the service but should be noted since it implies access to a local config file.
Persistence & Privilege
Skill is not always-included and does not request elevated or platform-wide privileges. It creates and uses transient sessions/tokens for the service as expected for a cloud editor.
Assessment
This skill is coherent with a cloud video-editing service: it requires/uses a NEMO_TOKEN and will upload whatever media you provide to mega-api-prod.nemovideo.ai for editing and rendering. Before installing or using it:

  • Confirm you trust the external service and are comfortable uploading the videos (privacy/legal risk).
  • Be aware the skill will read its frontmatter and may inspect an install path or ~/.config/nemovideo/ if present to set attribution headers.
  • If you don't provide a NEMO_TOKEN, the skill will request an anonymous token from the service (grants temporary credits).
  • The package has no listed homepage or source; the absence of a verifiable project page increases operational risk, so prefer skills with a verifiable upstream.

If you need to protect sensitive footage, do not use this skill, or verify the vendor and their privacy terms first.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

✂️ Clawdis
Env: NEMO_TOKEN (primary)
Latest: vk978rsmejjhv0txm00m288qm7d855be7
57 downloads · 0 stars · 1 version · Updated 2d ago
v1.0.0 · MIT-0

Getting Started

Got raw video clips to work with? Send them over and tell me what you need, and I'll take care of the AI shorts editing.

Try saying:

  • "edit a 3-minute vertical phone recording into a 1080p MP4"
  • "trim the clip to 60 seconds, add text overlays, and sync cuts to the beat"
  • "edit vertical short-form videos for YouTube Shorts, TikTok, and Instagram Reels"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN
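The anonymous-token request above can be sketched as follows. This is a minimal sketch, not a definitive implementation: the URL, method, and X-Client-Id header come from this document; the helper only assembles the request pieces and does not perform the network call.

```python
import uuid

BASE_URL = "https://mega-api-prod.nemovideo.ai"  # from the docs above

def anonymous_token_request():
    """Assemble the free-starter-token request described above.

    The response (a token with 100 free credits, valid 7 days) is
    described in this document but not verified here.
    """
    client_id = str(uuid.uuid4())  # UUID used as the client identifier
    return {
        "method": "POST",
        "url": f"{BASE_URL}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": client_id},
    }
```

On success, the returned token is used wherever NEMO_TOKEN is expected below.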

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
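The session-creation call can be sketched like this. A sketch under the assumptions stated in this document: the endpoint path, Bearer scheme, and request body are taken from the paragraph above, and the function only builds the request without sending it.

```python
import json

BASE_URL = "https://mega-api-prod.nemovideo.ai"

def create_session_request(nemo_token, language="en"):
    """Build the session-creation request described above.

    The response's session_id must be kept for all later calls.
    """
    return {
        "method": "POST",
        "url": f"{BASE_URL}/api/tasks/me/with-session/nemo_agent",
        "headers": {
            "Authorization": f"Bearer {nemo_token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"task_name": "project", "language": language}),
    }
```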

Tell the user you're ready. Keep the technical details out of the chat.

Shorts Editor Online — Edit and Export Short Videos

This tool takes your raw video clips and runs AI shorts editing through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a 3-minute vertical phone recording and want to trim the clip to 60 seconds, add text overlays, and sync cuts to the beat — the backend processes it in about 30-60 seconds and hands you a 1080p MP4.

Tip: vertical 9:16 video uploads process faster and export ready for all short-form platforms.

Matching Input to Actions

User prompts referencing shorts editor online, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says... | Action | Skip SSE? |
|---|---|---|
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | Yes |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | Yes |
| "status" / "状态" / "show tracks" | §3.4 State | Yes |
| "upload" / "上传" / user sends file | §3.2 Upload | Yes |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | No |
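The routing table above can be sketched as a simple keyword matcher. This is an assumption-laden sketch: the document says routing uses "keyword and intent classification", and only the keyword half is shown here; the trigger strings are the ones listed in the table.

```python
def route(message):
    """Route a user message per the table above (keywords only)."""
    text = message.lower()
    rules = [
        (("export", "导出", "download", "send me the video"), "§3.5 Export"),
        (("credits", "积分", "balance", "余额"), "§3.3 Credits"),
        (("status", "状态", "show tracks"), "§3.4 State"),
        (("upload", "上传"), "§3.2 Upload"),
    ]
    for keywords, action in rules:
        if any(k in text for k in keywords):
            return action
    return "§3.1 SSE"  # everything else: generate, edit, add BGM…
```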

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

| Endpoint | Method | Purpose |
|---|---|---|
| /api/tasks/me/with-session/nemo_agent | POST | Start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id. |
| /run_sse | POST | Send a user message. Body includes app_name, session_id, new_message. Stream response with Accept: text/event-stream. Timeout: 15 min. |
| /api/upload-video/nemo_agent/me/<sid> | POST | Upload a file (multipart) or URL. |
| /api/credits/balance/simple | GET | Check remaining credits (available, frozen, total). |
| /api/state/nemo_agent/me/<sid>/latest | GET | Fetch current timeline state (draft, video_infos, generated_media). |
| /api/render/proxy/lambda | POST | Start export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s. |
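The export body from the endpoint table can be sketched as follows. The id format ("render_" plus a timestamp) and the field names are taken from this document; using Unix seconds for the timestamp is an assumption.

```python
import time

def build_export_payload(session_id, draft):
    """Assemble the /api/render/proxy/lambda body shown above."""
    return {
        "id": f"render_{int(time.time())}",  # render_<ts>; ts format assumed
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }
```

After POSTing this payload, poll the render status every 30 seconds until the download URL is returned.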

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
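Checking a file against this list before uploading avoids a 4001 error later. A minimal sketch using exactly the extensions listed above:

```python
ACCEPTED_EXTENSIONS = {
    "mp4", "mov", "avi", "webm", "mkv",  # video
    "jpg", "png", "gif", "webp",         # image
    "mp3", "wav", "m4a", "aac",          # audio
}

def is_accepted(filename):
    """Return True if the filename's extension is in the list above."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext in ACCEPTED_EXTENSIONS
```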

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: shorts-editor-online
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/ → clawhub, ~/.cursor/skills/ → cursor, else unknown)

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
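The full header set can be sketched as below. Assumptions are flagged in the comments: the platform detection follows the install-path rules above, but checking for the directories under the home directory is an interpretation of those rules, and the frontmatter version is passed in rather than parsed from the skill file.

```python
import os

def attribution_headers(token, version):
    """Build the four required headers described above."""
    home = os.path.expanduser("~")
    # Install-path detection per the bullet list above (interpretation).
    if os.path.isdir(os.path.join(home, ".clawhub")):
        platform = "clawhub"
    elif os.path.isdir(os.path.join(home, ".cursor", "skills")):
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "shorts-editor-online",
        "X-Skill-Version": version,  # from frontmatter, supplied by caller
        "X-Skill-Platform": platform,
    }
```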

Error Handling

| Code | Meaning | Action |
|---|---|---|
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | New session (§3.0) |
| 2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show supported formats |
| 4002 | File too large | Suggest compress/trim |
| 400 | Missing X-Client-Id | Generate a Client-Id and retry (see §1) |
| 402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry once after 30s |
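The recovery actions above can be sketched as a lookup. A sketch only: the hint strings paraphrase the table, and the fallback for unlisted codes is an assumption.

```python
def describe_error(code):
    """Map API/HTTP codes from the table above to a recovery hint."""
    actions = {
        0: "continue",
        1001: "re-authenticate via anonymous-token",
        1002: "create a new session",
        2001: "out of credits: register or top up",
        4001: "show supported formats",
        4002: "suggest compressing or trimming the file",
        400: "generate an X-Client-Id and retry",
        402: "plan tier blocks export: register or upgrade",
        429: "rate limited: retry once after 30s",
    }
    return actions.get(code, "unknown code: surface the raw error")
```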

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
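Classifying individual stream lines per the rules above can be sketched like this. A simplification: real SSE framing also carries event: fields and multi-line data blocks, which this sketch ignores.

```python
def classify_sse_line(line):
    """Classify one raw SSE line per the rules above."""
    stripped = line.strip()
    if stripped == "" or stripped == "data:":
        return "heartbeat"  # backend still working; keep waiting
    if stripped.startswith("data:"):
        return "data"       # text for the user, or an internal tool call
    return "other"          # e.g. event: lines; not covered above
```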

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Example timeline summary (3 tracks):
1. Video: city timelapse (0-10s)
2. BGM: Lo-fi (0-10s, 35%)
3. Title: "Urban Dreams" (0-3s)
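Using the field mapping above, a compact draft can be rendered as readable timeline lines. A sketch under stated assumptions: the document only specifies the track-level keys (t, tt, sg, d), so the per-segment name field used here is hypothetical.

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}  # tt values from above

def summarize_draft(draft):
    """Render a compact draft (t/tt/sg/d mapping above) as text lines."""
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        for seg in track.get("sg", []):
            secs = seg.get("d", 0) / 1000        # d = duration in ms
            name = seg.get("name", "(unnamed)")  # "name" field is assumed
            lines.append(f"{i}. {kind}: {name} ({secs:g}s)")
    return lines
```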

Common Workflows

Quick edit: Upload → "trim the clip to 60 seconds, add text overlays, and sync cuts to the beat" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "trim the clip to 60 seconds, add text overlays, and sync cuts to the beat" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 with H.264 codec for the best compatibility across YouTube Shorts, TikTok, and Instagram Reels.
