Install
openclaw skills install editor-effects

Skip the learning curve of professional editing software. Describe what you want — add cinematic color grading and a smooth zoom transition between scenes — and the skill applies it for you. Got video clips to work with? Send them over and tell me what you need — I'll take care of the AI effects application.

Try saying: "add cinematic color grading and a smooth zoom transition between scenes".
Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".
If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:
POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header. The response contains a token with 100 free credits, valid for 7 days — use it as NEMO_TOKEN. Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all subsequent requests.
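A minimal Python sketch of this bootstrap, assuming standard JSON responses; the "token" and "session_id" field names and the UUID client id are assumptions, not a verified API contract:

```python
# Token + session bootstrap described above. Response field names are assumed.
import os
import uuid

import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def get_token() -> str:
    """Use NEMO_TOKEN from the environment, or request a free starter token."""
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token
    resp = requests.post(
        f"{BASE}/api/auth/anonymous-token",
        headers={"X-Client-Id": str(uuid.uuid4())},  # assumption: any unique id works
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["token"]  # assumed response field

def create_session(token: str) -> str:
    """Create a session; the returned session_id is reused for every later call."""
    resp = requests.post(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        headers={"Authorization": f"Bearer {token}"},
        json={"task_name": "project", "language": "en"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["session_id"]  # assumed response field
```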
Tell the user you're ready. Keep the technical details out of the chat.
This tool takes your video clips and applies AI effects through a cloud rendering pipeline. You upload, describe what you want, and download the result.
Say you have a 60-second talking-head clip and want to add cinematic color grading and a smooth zoom transition between scenes — the backend processes it in about 30-60 seconds and hands you a 1080p MP4.
Tip: shorter clips under 2 minutes process significantly faster and give more precise effect control.
User prompts referencing editor effects, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.
| User says... | Action | Skip SSE? |
|---|---|---|
| "export" / "导出" / "download" / "send me the video" | → §3.5 Export | ✅ |
| "credits" / "积分" / "balance" / "余额" | → §3.3 Credits | ✅ |
| "status" / "状态" / "show tracks" | → §3.4 State | ✅ |
| "upload" / "上传" / user sends file | → §3.2 Upload | ✅ |
| Everything else (generate, edit, add BGM…) | → §3.1 SSE | ❌ |
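For illustration, a minimal keyword pass over the table above might look like the sketch below; the real router also does intent classification, so this is a simplification, and the action names are placeholders:

```python
# Keyword routing per the table above. A table-driven router keeps the
# skip-SSE fast paths cheap; anything unmatched falls through to SSE.
ROUTES = [
    (("export", "导出", "download", "send me the video"), "export"),  # §3.5, skip SSE
    (("credits", "积分", "balance", "余额"), "credits"),              # §3.3, skip SSE
    (("status", "状态", "show tracks"), "state"),                     # §3.4, skip SSE
    (("upload", "上传"), "upload"),                                   # §3.2, skip SSE
]

def route(prompt: str) -> str:
    """Return the action name for a prompt; everything else goes to SSE."""
    lowered = prompt.lower()
    for keywords, action in ROUTES:
        if any(k in lowered for k in keywords):
            return action
    return "sse"  # generate, edit, add BGM, ...
```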
Each export job queues on a cloud GPU node that composites video layers, applies platform-specific compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.
All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:
- POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"} — gives you a session_id.
- POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Streams run up to 15 minutes.
- POST /api/upload-video/nemo_agent/me/<sid> — multipart file or JSON with URLs.
- GET /api/credits/balance/simple — returns available, frozen, total.
- GET /api/state/nemo_agent/me/<sid>/latest — current draft and media info.
- POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and the download URL.

Accepted formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
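As one example, the render endpoint pair could be driven as sketched below; the request body keys and the "status"/"download_url" response fields are assumptions, while the 30-second poll interval comes from the endpoint notes:

```python
# Submit-and-poll render flow from the endpoint list above.
import time

import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def render_and_wait(headers: dict, render_id: str, draft: dict) -> str:
    """Submit a render job, then poll until completion; returns the download URL."""
    requests.post(
        f"{BASE}/api/render/proxy/lambda",
        headers=headers,
        json={"render_id": render_id, "draft": draft},  # assumed body shape
        timeout=30,
    ).raise_for_status()
    while True:
        time.sleep(30)  # poll every 30s, per the endpoint notes
        state = requests.get(
            f"{BASE}/api/render/proxy/lambda/{render_id}", headers=headers, timeout=30
        ).json()
        if state.get("status") == "completed":  # assumed field name
            return state["download_url"]        # assumed field name
```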
Three attribution headers are required on every request and must match this file's frontmatter:
| Header | Value |
|---|---|
| X-Skill-Source | editor-effects |
| X-Skill-Version | frontmatter version |
| X-Skill-Platform | auto-detect: clawhub / cursor / unknown from install path |
Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
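A small helper can assemble this header set consistently; the platform heuristic below (scanning sys.path for install-path markers) is an assumption about how the auto-detection might work:

```python
# Build the Authorization header plus the three attribution headers above.
import sys

def attribution_headers(token: str, version: str) -> dict:
    """Return the full header set; `version` must match the frontmatter version."""
    platform = "unknown"
    for marker in ("clawhub", "cursor"):
        if any(marker in p for p in sys.path):  # assumed install-path detection
            platform = marker
            break
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "editor-effects",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```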
Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.
Timeline (3 tracks):
1. Video: city timelapse (0-10s)
2. BGM: Lo-fi (0-10s, 35%)
3. Title: "Urban Dreams" (0-3s)
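To show how the abbreviated draft fields map onto a summary like the one above, here is a decoding sketch; the nesting of segments under tracks and the metadata "name" field are assumptions inferred from the mapping:

```python
# Decode the abbreviated draft fields (t/tt/sg/d/m) into a track summary.
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}  # tt values per the mapping above

def summarize(draft: dict) -> list[str]:
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):   # t = tracks
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")    # tt = track type
        for seg in track.get("sg", []):                       # sg = segments
            dur_s = seg.get("d", 0) / 1000                    # d = duration in ms
            label = seg.get("m", {}).get("name", "")          # m = metadata; "name" assumed
            lines.append(f"{i}. {kind}: {label} ({dur_s:.0f}s)")
    return lines
```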
The backend assumes a GUI exists. Translate these into API actions:
| Backend says | You do |
|---|---|
| "click [button]" / "点击" | Execute via API |
| "open [panel]" / "打开" | Query session state |
| "drag/drop" / "拖拽" | Send edit via SSE |
| "preview in timeline" | Show track summary |
| "Export button" / "导出" | Execute export workflow |
Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty SSE "data:" lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.
About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
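A sketch of that fallback, assuming the latest-state response can be compared wholesale against a draft snapshot taken before the edit (what exactly counts as "the timeline changed" is an assumption here):

```python
# Silent-stream fallback: confirm an edit landed by diffing /api/state.
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def confirm_silent_edit(headers: dict, sid: str, draft_before: dict) -> bool:
    """Return True if the latest state differs from the pre-edit snapshot."""
    resp = requests.get(
        f"{BASE}/api/state/nemo_agent/me/{sid}/latest", headers=headers, timeout=30
    )
    resp.raise_for_status()
    return resp.json() != draft_before  # changed: report the update to the user
```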
- 0 — success; continue normally
- 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
- 1002 — session not found; create a new one
- 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
- 4001 — unsupported file type; show accepted formats
- 4002 — file too large; suggest compressing or trimming
- 400 — missing X-Client-Id; generate one and retry
- 402 — free plan export blocked; a subscription-tier restriction, not a credit issue
- 429 — rate limited; wait 30s and retry once

The backend processes faster when you're specific. Instead of "make it look better", try "add cinematic color grading and a smooth zoom transition between scenes" — concrete instructions get better results.
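Returning to the status codes above, they fold neatly into a dispatch table; the recovery strings below just restate the listed steps:

```python
# Status-code dispatch per the list above.
RECOVERY = {
    0: "continue normally",
    1001: "re-acquire token via /api/auth/anonymous-token",
    1002: "create a new session",
    2001: "out of credits: registration link or top-up",
    4001: "show accepted formats",
    4002: "suggest compressing or trimming the file",
    400: "generate an X-Client-Id and retry",
    402: "export blocked on free plan (subscription tier, not credits)",
    429: "wait 30s and retry once",
}

def recovery_step(code: int) -> str:
    return RECOVERY.get(code, f"unhandled code {code}")
```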
Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.
Export as MP4 with H.264 codec for the best balance of quality and file size.
Quick edit: Upload → "add cinematic color grading and a smooth zoom transition between scenes" → Download MP4. Takes 30-60 seconds for a 30-second clip (see the end-to-end sketch after this list).
Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.
Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
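Putting the pieces together, the quick-edit flow might look like the sketch below, reusing the hypothetical helpers from the earlier examples (get_token, create_session, attribution_headers); the multipart field name "file", the version string, and the SSE body shape are assumptions:

```python
# End-to-end "Quick edit": bootstrap, upload, then describe the edit over SSE.
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

token = get_token()                                     # helper from the earlier sketch
sid = create_session(token)                             # helper from the earlier sketch
headers = attribution_headers(token, version="1.0.0")   # illustrative version string

# 1. Upload the clip (multipart, per the endpoint list).
with open("clip.mp4", "rb") as f:
    requests.post(
        f"{BASE}/api/upload-video/nemo_agent/me/{sid}",
        headers=headers,
        files={"file": f},  # assumed multipart field name
        timeout=120,
    ).raise_for_status()

# 2. Describe the edit over SSE; the stream can stay open for up to 15 minutes.
resp = requests.post(
    f"{BASE}/run_sse",
    headers={**headers, "Accept": "text/event-stream"},
    json={
        "session_id": sid,
        "new_message": {"parts": [{"text": "add cinematic color grading and "
                                           "a smooth zoom transition between scenes"}]},
    },
    stream=True,
    timeout=900,
)
```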