Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Ugc Editor

v1.0.0

Turn a 60-second smartphone clip of a product unboxing into 1080p polished UGC clips just by typing what you need. Whether it's editing raw creator footage i...

0 stars · 59 downloads · 0 current · 0 all-time
by peandrover (adam@peand-rover)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for peand-rover/ugc-editor.

Prompt Preview: Install & Setup
Install the skill "Ugc Editor" (peand-rover/ugc-editor) from ClawHub.
Skill page: https://clawhub.ai/peand-rover/ugc-editor
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install ugc-editor

ClawHub CLI


npx clawhub@latest install ugc-editor
Security Scan
  • VirusTotal: Benign (View report →)
  • OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (UGC video editing) lines up with the runtime instructions: upload video files, open a session, use SSE for edits, and call a render/export endpoint. The requested primary credential NEMO_TOKEN is appropriate for an external rendering API. However, the SKILL.md frontmatter declares a required config path (~/.config/nemovideo/) while the registry metadata reports no config paths; this mismatch should be resolved (why would the skill need a local config directory?).
Instruction Scope
Instructions stay within the stated purpose: they describe how to authenticate (including auto-creating an anonymous token), create sessions, upload media, stream edits, poll exports, and return download URLs. Important runtime behaviors to be aware of: the skill will (a) auto-contact an external domain (mega-api-prod.nemovideo.ai), (b) potentially auto-generate and store an anonymous token if NEMO_TOKEN is not present, and (c) upload user media to remote GPU nodes. Those actions are consistent with the described cloud-rendering service, but they have privacy implications for user content.
Install Mechanism
No install spec and no code files — instruction-only skill. This is low-risk from an install/execution perspective because nothing is downloaded or written by an installer step. All runtime behavior is network calls described in SKILL.md.
Credentials
The only declared required env var is NEMO_TOKEN (primaryEnv), which is appropriate. But the SKILL.md frontmatter includes a configPaths requirement (~/.config/nemovideo/) that is not reflected in the registry metadata provided earlier. If the skill will read that path it could access additional tokens/configs; the registry and SKILL.md disagree. Also note that the skill instructs generating and storing anonymous tokens — the storage location and lifetime are unspecified.
Persistence & Privilege
always:false and no installer means the skill does not force permanent presence. The skill does instruct storing a session_id and possibly an anonymous token for later calls, which is normal for a remote service integration. There's no instruction to modify other skills or global agent settings.
What to consider before installing
This skill appears to be a front-end for a cloud video-rendering service: it will upload whatever video you give it to mega-api-prod.nemovideo.ai and may automatically create and store an anonymous NEMO_TOKEN if you don't supply one. Before installing or using it:

  1. Confirm the skill author/source and the domain (nemovideo.ai) are trustworthy.
  2. Do not upload sensitive or private videos unless you accept that they will be processed on that external server.
  3. Prefer supplying your own NEMO_TOKEN rather than letting the skill auto-generate and store tokens, if you want control.
  4. Ask the author to explain the discrepancy between the registry metadata (no config paths) and the SKILL.md frontmatter (requires ~/.config/nemovideo/), and ensure the skill will not read arbitrary local config files.
  5. Request a privacy/retention policy for uploaded media and tokens.

If you cannot verify these points, treat the skill with caution or avoid installing it.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
latest: vk97fash3q3z9szh5hmxms0kb6985dd7j
59 downloads · 0 stars · 1 version
Updated 5d ago
v1.0.0
MIT-0

Getting Started

Ready when you are. Drop your raw video clips here or describe what you want to make.

Try saying:

  • "edit a 60-second smartphone clip of a product unboxing into a 1080p MP4"
  • "trim the intro, add captions, and overlay a trending audio track"
  • "editing raw creator footage into platform-ready UGC content for TikTok creators"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.

Keep setup communication brief. Don't display raw API responses or token values to the user.
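The authentication and session steps above can be sketched in Python using only the standard library. The endpoints and the `data.token` path come straight from the docs; the `session_id` field name in the session response is an assumption, since the docs only say a session_id is returned.

```python
import json
import os
import urllib.request
import uuid

BASE = "https://mega-api-prod.nemovideo.ai"

def auth_headers(token):
    """Headers every authenticated call needs."""
    return {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

def get_token():
    """Return NEMO_TOKEN from the environment, or request an anonymous one."""
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token
    client_id = str(uuid.uuid4())  # random client identifier
    req = urllib.request.Request(
        f"{BASE}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": client_id},
    )
    with urllib.request.urlopen(req) as resp:
        # 100 free credits, valid 7 days
        return json.load(resp)["data"]["token"]

def create_session(token, language="en"):
    """Open a session; the session_id must be stored for all later calls."""
    body = json.dumps({"task_name": "project", "language": language}).encode()
    req = urllib.request.Request(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        data=body,
        method="POST",
        headers=auth_headers(token),
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["session_id"]  # assumed response field name
```

Note that, per the setup guidance, the token value returned by `get_token` should never be echoed back to the user.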

UGC Editor — Edit Raw Clips Into UGC Content

Send me your raw video clips and describe the result you want. The AI UGC editing runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a 60-second smartphone clip of a product unboxing, type "trim the intro, add captions, and overlay a trending audio track", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: vertical 9:16 footage works best for TikTok and Reels output.

Matching Input to Actions

User prompts referencing ugc editor, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says... | Action | Skip SSE? |
|---|---|---|
| "export" / "导出" / "download" / "send me the video" | → §3.5 Export | yes |
| "credits" / "积分" / "balance" / "余额" | → §3.3 Credits | yes |
| "status" / "状态" / "show tracks" | → §3.4 State | yes |
| "upload" / "上传" / user sends file | → §3.2 Upload | yes |
| Everything else (generate, edit, add BGM…) | → §3.1 SSE | no |
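The routing rules above can be sketched as a plain keyword matcher. A real implementation would add fuzzier intent classification; the function name and keyword tuples here are illustrative, not part of the skill.

```python
# Keyword lists mirror the routing table; Chinese keywords kept as-is.
EXPORT_KEYWORDS = ("export", "导出", "download", "send me the video")
CREDIT_KEYWORDS = ("credits", "积分", "balance", "余额")
STATE_KEYWORDS = ("status", "状态", "show tracks")
UPLOAD_KEYWORDS = ("upload", "上传")

def route(message, has_file=False):
    """Pick an action for a user message, in the table's precedence order."""
    text = message.lower()
    if any(k in text for k in EXPORT_KEYWORDS):
        return "export"   # §3.5, skips SSE
    if any(k in text for k in CREDIT_KEYWORDS):
        return "credits"  # §3.3
    if any(k in text for k in STATE_KEYWORDS):
        return "state"    # §3.4
    if has_file or any(k in text for k in UPLOAD_KEYWORDS):
        return "upload"   # §3.2, also triggered when the user sends a file
    return "sse"          # everything else goes through §3.1
```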

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> with a multipart file or JSON with URLs.
  4. Credits: GET /api/credits/balance/simple returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest returns the current draft and media info.
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL.
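The export polling step (endpoint 6) might look like the sketch below. The `status` and `download_url` field names are assumptions, since the docs only mention a "completed status and download URL"; the status fetcher is injected so the loop itself can be exercised without a network.

```python
import time

def wait_for_export(fetch_status, render_id, interval=30, timeout=1800):
    """Poll the render job (via the injected `fetch_status` callable, which
    wraps GET /api/render/proxy/lambda/<id>) every `interval` seconds until
    it completes, then return the download URL."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_status(render_id)
        if job.get("status") == "completed":  # assumed status field
            return job["download_url"]        # assumed field name
        time.sleep(interval)
    raise TimeoutError(f"render {render_id} did not complete in {timeout}s")
```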

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Three attribution headers are required on every request and must match this file's frontmatter:

| Header | Value |
|---|---|
| X-Skill-Source | ugc-editor |
| X-Skill-Version | frontmatter version |
| X-Skill-Platform | auto-detect: clawhub / cursor / unknown from install path |

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
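Putting the auth and attribution requirements together, every request's headers can be assembled in one place (the function name is illustrative):

```python
def request_headers(token, version, platform="unknown"):
    """Build the Authorization header plus the three attribution headers
    required on every call; a missing attribution header makes exports
    fail with 402."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "ugc-editor",
        "X-Skill-Version": version,    # from the SKILL.md frontmatter
        "X-Skill-Platform": platform,  # clawhub / cursor / unknown
    }
```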

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
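Using the draft field mapping (t=tracks, tt=track type, sg=segments, d=duration in ms), a compact draft can be unpacked into a timeline summary of this shape. The exact draft JSON structure beyond those documented keys is an assumption.

```python
# Track type codes from the documented field mapping.
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}

def summarize_draft(draft):
    """Render a one-line-per-track summary of a compact draft dict."""
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")
        segments = track.get("sg", [])
        total_ms = sum(seg.get("d", 0) for seg in segments)
        lines.append(f"{i}. {kind}: {len(segments)} segment(s), {total_ms / 1000:.1f}s")
    return "\n".join(lines)
```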

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

| Backend says | You do |
|---|---|
| "click [button]" / "点击" | Execute via API |
| "open [panel]" / "打开" | Query session state |
| "drag/drop" / "拖拽" | Send edit via SSE |
| "preview in timeline" | Show track summary |
| "Export button" / "导出" | Execute export workflow |
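The translation table above is again a small keyword mapping; the action labels below are illustrative names for the behaviors described, not API identifiers.

```python
# Ordered (keywords, action) pairs; first match wins, mirroring the table.
GUI_TRANSLATIONS = [
    (("click", "点击"), "execute_via_api"),
    (("open", "打开"), "query_session_state"),
    (("drag", "drop", "拖拽"), "send_edit_via_sse"),
    (("preview in timeline",), "show_track_summary"),
    (("export button", "导出"), "execute_export_workflow"),
]

def translate_gui_phrase(phrase):
    """Map a GUI-flavoured backend phrase to an API action, or None if the
    text is plain output that should pass through to the user."""
    text = phrase.lower()
    for keywords, action in GUI_TRANSLATIONS:
        if any(k in text for k in keywords):
            return action
    return None
```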

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty `data:` lines mean the backend is still working; show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
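A minimal event-stream reader for this behavior could look like the following; it assumes the standard SSE line format (`data:` lines, blank line terminating an event) and simply skips the empty heartbeat payloads described above.

```python
def parse_sse(lines):
    """Yield the data payload of each server-sent event, skipping
    heartbeats (empty `data:` lines that only signal liveness)."""
    buffer = []
    for line in lines:
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload:  # empty payload == heartbeat, ignore
                buffer.append(payload)
        elif line == "" and buffer:  # blank line closes one event
            yield "\n".join(buffer)
            buffer = []
    if buffer:  # stream ended mid-event; flush what we have
        yield "\n".join(buffer)
```

If the stream closes without yielding any text (the ~30% case above), the caller should fall back to polling /api/state and reporting the timeline change.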

Error Handling

| Code | Meaning | Action |
|---|---|---|
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | New session (§3.0) |
| 2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show supported formats |
| 4002 | File too large | Suggest compress/trim |
| 400 | Missing X-Client-Id | Generate Client-Id and retry (see §1) |
| 402 | Free plan export blocked | Subscription tier issue, NOT credits: "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry once after 30s |
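The error table maps naturally onto a small dispatch; the action strings are illustrative labels for the handlers described above, not values from the API.

```python
# Code -> handler label, per the error table.
ERROR_ACTIONS = {
    0: "continue",
    1001: "reauthenticate",             # token bad or expired (7-day lifetime)
    1002: "new_session",
    2001: "handle_no_credits",          # registration URL vs. top-up message
    4001: "show_supported_formats",
    4002: "suggest_compress_or_trim",
    400: "regenerate_client_id_and_retry",
    402: "prompt_register_or_upgrade",  # subscription tier, not credits
    429: "retry_once_after_30s",
}

def error_action(code):
    """Look up the handler for a backend error code."""
    return ERROR_ACTIONS.get(code, "surface_error_to_user")
```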

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "trim the intro, add captions, and overlay a trending audio track" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, WebM, AVI for the smoothest experience.

Export as MP4 with H.264 codec for widest platform compatibility.

Common Workflows

Quick edit: Upload → "trim the intro, add captions, and overlay a trending audio track" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
