Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Automatic Video Editing

v1.0.0

Skip the learning curve of professional editing software. Describe what you want — trim the silences, add transitions, and sync cuts to the background music...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for susan4731-wilfordf/automatic-video-editing.

Prompt Preview: Install & Setup
Install the skill "Automatic Video Editing" (susan4731-wilfordf/automatic-video-editing) from ClawHub.
Skill page: https://clawhub.ai/susan4731-wilfordf/automatic-video-editing
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install automatic-video-editing

ClawHub CLI


npx clawhub@latest install automatic-video-editing
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's purpose (cloud-based automatic video editing) matches the API calls and file-upload instructions in SKILL.md. However, the registry metadata declares NEMO_TOKEN as a required env var, while the runtime instructions explicitly provide a fallback that obtains an anonymous token if NEMO_TOKEN is not set; this is internally inconsistent (the token is relevant to the purpose, but its 'required' status is ambiguous). The SKILL.md frontmatter also lists a config path (~/.config/nemovideo/), whereas the registry metadata above states no required config paths.
Instruction Scope
Instructions are focused on creating sessions, uploading video files (multipart or by URL), starting renders, polling for completion, and returning download URLs. These actions are appropriate for a remote video-editing service. The skill does instruct the agent to read local file paths for uploads and to store/use session tokens, which is expected for this functionality.
Install Mechanism
No install spec or code files are present (instruction-only). This minimizes on-device persistence and reduces install-time risk.
Credentials
Only NEMO_TOKEN is listed as a credential, which is proportionate for a cloud API. However, the contradiction between the registry metadata and SKILL.md (a required env var vs. automatic anonymous-token acquisition) is concerning: it is unclear whether the skill expects a long-lived secret in the environment or will generate and persist anonymous tokens itself. The SKILL.md frontmatter also mentions a config path that is not reflected in the registry metadata; this mismatch could cause the agent to access or expect files in user config directories.
Persistence & Privilege
The skill is not always-enabled and is user-invocable only. It does ask the agent to create and reuse session tokens and session IDs for job polling, but it does not request system-wide privileges or modifications to other skills. No install means no permanent daemon is created by the skill itself.
What to consider before installing
This skill appears to do what it says (upload video to a remote GPU service, render, and return a URL), but weigh these red flags before installing:

  1. Metadata inconsistencies: the registry claims NEMO_TOKEN is required, while SKILL.md provides an automatic anonymous-token fallback and also mentions a config path not listed elsewhere. Ask the maintainer which behavior to expect and whether tokens are stored on disk or only in memory.
  2. Privacy and trust: all uploads and renders go to https://mega-api-prod.nemovideo.ai, so you will be sending potentially sensitive video/audio off your device. Confirm the provider's privacy policy and retention rules.
  3. Token handling: verify how long anonymous tokens last, whether they are persisted, and whether they can be revoked.
  4. Headers and attribution: the skill requires custom headers; make sure these do not leak additional metadata you don't want shared.

If you need stronger assurance, request a maintainer contact, a homepage, or a privacy policy, or use this skill only interactively rather than as always-on automation.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk973tfz72sz6ttb7k64nkysehs85hxkh
48 downloads · 0 stars · 1 version
Updated 3d ago
v1.0.0
License: MIT-0

Getting Started

Got raw video footage to work with? Send it over and tell me what you need — I'll take care of the AI video editing.

Try saying:

  • "edit a 3-minute unedited phone recording into a 1080p MP4"
  • "trim the silences, add transitions, and sync cuts to the background music"
  • "automatically cutting and polishing raw footage into a shareable video for content creators and marketers"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.

Keep setup communication brief. Don't display raw API responses or token values to the user.
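The two bootstrap calls above can be sketched as plain request builders. This is a minimal sketch, assuming the endpoints behave as described; the function names and the returned dictionary shape are illustrative, not part of the skill.

```python
import json
import os
import uuid

API_BASE = "https://mega-api-prod.nemovideo.ai"

def anonymous_token_request():
    """Build the step-1 anonymous-token request (response carries data.token)."""
    client_id = str(uuid.uuid4())  # random UUID as client identifier
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": client_id},
    }

def create_session_request(token, task_name="project", language="en"):
    """Build the step-2 session-creation request (response carries session_id)."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"task_name": task_name, "language": language}),
    }

# Prefer an existing NEMO_TOKEN; otherwise fall back to the anonymous flow.
token = os.environ.get("NEMO_TOKEN")
```

In practice the agent would send the first request, read `data.token` from the response, and pass it to `create_session_request`.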

Automatic Video Editing — Edit and Export Polished Videos

Send me your raw video footage and describe the result you want. The AI video editing runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a 3-minute unedited phone recording, type "trim the silences, add transitions, and sync cuts to the background music", and you'll get a 1080p MP4 back in roughly 1-2 minutes. All rendering happens server-side.

Worth noting: shorter clips under 2 minutes process significantly faster and give the AI more precise results.

Matching Input to Actions

User prompts referencing automatic video editing, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says… → Action
"export" / "导出" / "download" / "send me the video" → §3.5 Export (skips SSE)
"credits" / "积分" / "balance" / "余额" → §3.3 Credits (skips SSE)
"status" / "状态" / "show tracks" → §3.4 State (skips SSE)
"upload" / "上传" / user sends file → §3.2 Upload (skips SSE)
Everything else (generate, edit, add BGM…) → §3.1 SSE
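A minimal sketch of the keyword routing described above; the first-match strategy, the route labels, and the function names are assumptions here, and real intent classification would be fuzzier than substring matching.

```python
# Keyword lists taken from the routing table; order determines match priority.
ROUTES = [
    (("export", "导出", "download", "send me the video"), "export"),
    (("credits", "积分", "balance", "余额"), "credits"),
    (("status", "状态", "show tracks"), "state"),
    (("upload", "上传"), "upload"),
]

def route(message):
    """Return the first matching action; everything else falls through to SSE."""
    lowered = message.lower()
    for keywords, action in ROUTES:
        if any(keyword in lowered for keyword in keywords):
            return action
    return "sse"
```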

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.

Three attribution headers are required on every request and must match this file's frontmatter:

Header → Value
X-Skill-Source → automatic-video-editing
X-Skill-Version → frontmatter version
X-Skill-Platform → auto-detect: clawhub / cursor / unknown from install path
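As a sketch of the header requirements above, a helper that assembles every request's headers might look like this; the function name and the hard-coded default version are illustrative assumptions (in practice the version comes from the skill frontmatter).

```python
def attribution_headers(token, platform="unknown", version="1.0.0"):
    """Combine the bearer token with the three required attribution headers."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "automatic-video-editing",
        "X-Skill-Version": version,            # from frontmatter in practice
        "X-Skill-Platform": platform,          # clawhub / cursor / unknown
    }
```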

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
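The export-and-poll step can be sketched as a small loop. The injectable `fetch_status` callable is an assumption introduced here so the polling logic stays separate from HTTP transport; the 30-second interval and the `status`/`output.url` fields mirror the description above.

```python
import time

def wait_for_render(render_id, fetch_status, poll_interval=30, max_polls=20):
    """Poll the render job until status == completed, then return the URL.

    fetch_status(render_id) should return the parsed JSON from
    GET /api/render/proxy/lambda/<id>.
    """
    for _ in range(max_polls):
        status = fetch_status(render_id)
        if status.get("status") == "completed":
            return status["output"]["url"]  # download URL per the docs above
        time.sleep(poll_interval)
    raise TimeoutError(f"render {render_id} did not complete")
```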

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
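A hedged sketch of how an agent might dispatch on these codes; the action labels are illustrative names introduced here, and only the codes and their meanings come from the list above.

```python
def next_step(code):
    """Map a documented error code to a recovery action label."""
    actions = {
        0: "continue",
        1001: "reacquire-token",        # POST /api/auth/anonymous-token again
        1002: "create-session",
        2001: "out-of-credits",
        4001: "show-supported-formats",
        4002: "suggest-compression",
        400: "add-client-id-and-retry",
        402: "subscription-required",   # not a credit issue
        429: "wait-30s-retry-once",
    }
    return actions.get(code, "unknown")
```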

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

Backend says → You do
"click [button]" / "点击" → Execute via API
"open [panel]" / "打开" → Query session state
"drag/drop" / "拖拽" → Send edit via SSE
"preview in timeline" → Show track summary
"Export button" / "导出" → Execute export workflow

SSE Event Handling

Event → Action
Text response → Apply GUI translation (§4), present to user
Tool call/result → Process internally, don't forward
heartbeat / empty data: → Keep waiting; every 2 min: "⏳ Still working..."
Stream closes → Process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
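A small sketch that expands the abbreviated draft keys into a readable summary. The key mapping (t/tt/sg/d) comes from the line above; the nested shape (tracks containing segments) is an assumption about the draft JSON.

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}  # tt values per the mapping

def summarize_draft(draft):
    """Expand abbreviated draft fields into human-readable track lines."""
    lines = []
    for track in draft.get("t", []):                      # t = tracks
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")  # tt = track type
        for segment in track.get("sg", []):               # sg = segments
            duration_s = segment.get("d", 0) / 1000       # d = duration (ms)
            lines.append(f"{kind}: {duration_s:g}s")
    return lines
```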

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "trim the silences, add transitions, and sync cuts to the background music" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 for widest compatibility across platforms and devices.

Common Workflows

Quick edit: Upload → "trim the silences, add transitions, and sync cuts to the background music" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
