AI UGC Video Editor Job

v1.0.0

Turn a 60-second phone-recorded product review clip into polished 1080p UGC clips just by typing what you need.

Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
Name and description describe a cloud UGC video editor and the SKILL.md only requires a NEMO_TOKEN and describes API endpoints for uploading, editing, SSE, and exporting — all consistent with cloud rendering. One minor inconsistency: the skill's YAML frontmatter lists a config path (~/.config/nemovideo/) as metadata, but the registry summary showed no required config paths; this is likely metadata drift rather than malicious.
Instruction Scope
Runtime instructions are narrowly focused on authenticating, creating a session, uploading media, sending SSE edit messages, polling render status, and returning download URLs. The skill asks to read its own frontmatter and detect its install path for attribution headers (reasonable for self-attribution). It does not instruct reading unrelated system files or unrelated environment variables. It will upload user-provided media to the external rendering API — expected for this functionality.
Install Mechanism
Instruction-only skill with no install spec and no code files. This is lower-risk because nothing is written to disk by an installer; network calls happen at runtime per the documented API endpoints.
Credentials
The skill only requires a single credential (NEMO_TOKEN), which is proportionate to a cloud-rendering service. Note: the frontmatter also references a config path (~/.config/nemovideo/) which suggests the skill might look for local config files — the registry metadata did not list required config paths. This mismatch should be clarified but is not in itself a major red flag.
Persistence & Privilege
The skill is not marked always:true and does not request system-wide changes. It instructs saving a session_id for ongoing work (expected behavior) and does not attempt to modify other skills or global agent settings.
Assessment
This skill appears coherent for cloud UGC video editing. Before installing:

  1. Verify you trust the external domain (mega-api-prod.nemovideo.ai) and its privacy/terms, because your uploaded videos will be sent there.
  2. Provide only a NEMO_TOKEN scoped to this service (avoid reusing sensitive credentials).
  3. Note the skill can mint a short-lived anonymous token if no token is set; this will still send your files to the service.
  4. The frontmatter mentions a local config path (~/.config/nemovideo/); confirm whether the skill will read files there if you keep sensitive data in that folder.
  5. Test with non-sensitive footage first, and revoke or rotate any token you supplied if you stop using the skill.


Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
latest: vk97bpf72zf80kc26v7frgjxnbn855jhf
30 downloads
0 stars
1 version
Updated 9h ago
v1.0.0
MIT-0

Getting Started

Share your raw UGC footage and I'll get started on AI UGC video editing. Or just tell me what you're thinking.

Try saying:

  • "edit my raw UGC footage"
  • "export 1080p MP4"
  • "trim silences, add captions, and cut"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
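The setup steps above can be sketched as plain request builders (no network calls here; wire them to the HTTP client of your choice). Function names are illustrative; only the endpoints, headers, and body fields come from the steps above.

```python
import json
import uuid

BASE = "https://mega-api-prod.nemovideo.ai"

def anonymous_token_request():
    """Free-token step: POST /api/auth/anonymous-token with a UUID client id."""
    client_id = str(uuid.uuid4())
    return {
        "method": "POST",
        "url": f"{BASE}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": client_id},
    }

def create_session_request(token):
    """Session step: POST /api/tasks/me/with-session/nemo_agent with Bearer auth."""
    return {
        "method": "POST",
        "url": f"{BASE}/api/tasks/me/with-session/nemo_agent",
        "headers": {"Authorization": f"Bearer {token}"},
        "body": json.dumps({"task_name": "project"}),
    }
```

The response to the token request carries `data.token` (your NEMO_TOKEN); the session response carries the `session_id` to save.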

AI UGC Video Editor — Edit UGC Clips for Social Ads

This tool takes your raw UGC footage and runs AI UGC video editing through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a 60-second phone-recorded product review clip and want to trim silences, add captions, and cut for a 30-second social ad — the backend processes it in about 1-2 minutes and hands you a 1080p MP4.

Tip: keeping source clips under 90 seconds speeds up AI processing significantly.

Matching Input to Actions

User prompts referencing AI UGC video editing, aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification.

User says... → Action (routes other than §3.1 skip the SSE channel):

  • "export" / "导出" (export) / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" (credits) / "balance" / "余额" (balance) → §3.3 Credits
  • "status" / "状态" (status) / "show tracks" → §3.4 State
  • "upload" / "上传" (upload) / user sends file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: ai-ugc-video-editor-job
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub → clawhub, ~/.cursor/skills → cursor, else unknown)

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
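A minimal helper for assembling those headers; the substring checks used for platform detection are one reading of the install-path rule above, not a documented algorithm.

```python
def attribution_headers(token, version, install_path):
    """Bearer auth plus the three X-Skill-* attribution headers every call needs.
    Platform is guessed from the install path per the rule above."""
    p = str(install_path)
    if ".clawhub" in p:
        platform = "clawhub"
    elif ".cursor/skills" in p:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "ai-ugc-video-editor-job",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```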

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
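The export flow (submit a render job, then poll every 30 seconds until completion) can be sketched as below. The status getter is injected so the loop stays client-agnostic; `get_status` is a hypothetical helper you would back with a GET to /api/render/proxy/lambda/<id>.

```python
import time

def export_payload(session_id, draft, ts):
    """Body for POST /api/render/proxy/lambda (free, no credits)."""
    return {
        "id": f"render_{ts}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }

def poll_until_complete(get_status, render_id, interval=30, max_tries=20):
    """Poll the render job until status == completed, then return the download URL."""
    for _ in range(max_tries):
        job = get_status(render_id)
        if job.get("status") == "completed":
            return job["output"]["url"]
        time.sleep(interval)
    raise TimeoutError(f"render {render_id} did not complete")
```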

SSE Event Handling

Event → Action:

  • Text response → Apply GUI translation (§4), present to user
  • Tool call/result → Process internally, don't forward
  • Heartbeat / empty data: → Keep waiting. Every 2 min: "⏳ Still working..."
  • Stream closes → Process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
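A minimal classifier for the event table above. It assumes events arrive as `data: <json>` lines with a `text` field on user-facing responses; that framing is an assumption, not documented by the skill.

```python
import json

def classify_sse_line(line):
    """Map one SSE line to an action per the event table.
    Framing assumption: events are `data: <json>` lines."""
    line = line.strip()
    if not line or line == "data:":
        return "keep_waiting"          # heartbeat / empty data
    if not line.startswith("data:"):
        return "keep_waiting"
    payload = json.loads(line[len("data:"):])
    if "text" in payload:
        return "present_to_user"       # text response: translate GUI wording, show it
    return "process_internally"        # tool call/result: don't forward
```

When the stream closes with no text, fall back to polling session state, as described above.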

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

Backend says → You do:

  • "click [button]" / "点击" (click) → Execute via API
  • "open [panel]" / "打开" (open) → Query session state
  • "drag/drop" / "拖拽" (drag) → Send edit via SSE
  • "preview in timeline" → Show track summary
  • "Export button" / "导出" (export) → Execute export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
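Using the t/tt/sg/d mapping above, a draft can be expanded into a readable track summary. The sample draft here is invented for illustration, shaped per that mapping.

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}

def summarize_draft(draft):
    """Expand the abbreviated draft keys (t=tracks, tt=type, sg=segments, d=ms)."""
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")
        seconds = sum(seg.get("d", 0) for seg in track.get("sg", [])) / 1000
        lines.append(f"{i}. {kind}: {len(track.get('sg', []))} segment(s), {seconds:.0f}s")
    return lines

# Invented sample: one 10s video track and one 10s audio track
sample = {"t": [{"tt": 0, "sg": [{"d": 10000}]}, {"tt": 1, "sg": [{"d": 10000}]}]}
```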

Error Handling

Code (meaning) → Action:

  • 0 (success) → Continue
  • 1001 (bad/expired token) → Re-auth via anonymous-token (tokens expire after 7 days)
  • 1002 (session not found) → Create a new session (§3.0)
  • 2001 (no credits) → Anonymous: show registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered: "Top up credits in your account"
  • 4001 (unsupported file) → Show supported formats
  • 4002 (file too large) → Suggest compressing or trimming
  • 400 (missing X-Client-Id) → Generate a Client-Id and retry (see §1)
  • 402 (free-plan export blocked) → Subscription-tier issue, NOT credits. "Register or upgrade your plan to unlock export."
  • 429 (rate limit: 1 token/client/7 days) → Retry once in 30s
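One way to route those codes to recovery steps in agent code; the action strings are shorthand for the table above, not part of the API.

```python
def recovery_action(code):
    """Map an API error code to the recovery step from the error table."""
    actions = {
        0: "continue",
        1001: "reauth_anonymous_token",
        1002: "create_new_session",
        2001: "prompt_credits_or_registration",
        4001: "show_supported_formats",
        4002: "suggest_compress_or_trim",
        400: "generate_client_id_and_retry",
        402: "prompt_plan_upgrade",
        429: "retry_once_after_30s",
    }
    return actions.get(code, "surface_error_to_user")
```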

Common Workflows

Quick edit: Upload → "trim silences, add captions, and cut for a 30-second social ad" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "trim silences, add captions, and cut for a 30-second social ad" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, WebM, AVI for the smoothest experience.

Export as MP4 with H.264 codec for widest compatibility across ad platforms.
