Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Windows Video Editor

v1.0.0

Get edited MP4 clips ready to post, without touching a single slider. Upload your raw video clips (MP4, MOV, AVI, WMV, up to 500MB), say something like "trim...



OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for linmillsd7/windows-video-editor.

Prompt Preview: Install & Setup
Install the skill "Windows Video Editor" (linmillsd7/windows-video-editor) from ClawHub.
Skill page: https://clawhub.ai/linmillsd7/windows-video-editor
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install windows-video-editor

ClawHub CLI


npx clawhub@latest install windows-video-editor
Security Scan
VirusTotal: Suspicious — view report
OpenClaw: Suspicious (medium confidence)
⚠ Purpose & Capability
The skill's stated purpose (remote video editing) matches the network calls and endpoints described (session creation, upload, render). However, the registry declares NEMO_TOKEN as a required env var while the SKILL.md provides an explicit anonymous-token fallback flow (it will POST to the backend to obtain a token if NEMO_TOKEN is not set). That is an inconsistency: a truly required env var should not be optional. The frontmatter metadata also lists a config path (~/.config/nemovideo/) while the top-level registry fields show 'Required config paths: none' — another mismatch.
⚠ Instruction Scope
The runtime instructions direct the agent to obtain or use tokens, create sessions, upload user video files (potentially large and sensitive), and poll/render on remote GPU nodes — all expected for this feature. Items of concern: SKILL.md instructs the agent to detect the install path to populate an X-Skill-Platform header (which implies filesystem access), and it tells the agent not to display raw API responses or token values, which hides some internal data from the user. The file-upload behavior sends user content to a third-party domain (mega-api-prod.nemovideo.ai), yet the instructions contain no explicit privacy or retention policy.
Install Mechanism
Instruction-only skill with no install spec and no code files. This is the lowest install risk: nothing is downloaded or written by an installer step as part of the skill metadata.
⚠ Credentials
Only one credential (NEMO_TOKEN) is declared as required, which is coherent for a remote API. But the SKILL.md provides an anonymous-token generation path if NEMO_TOKEN is not set; requiring the env var is therefore disproportionate or at least inconsistent. The skill will accept and upload arbitrary user media to an external service; that is a privacy-sensitive operation and should be called out to users before credentials/tokens are used. No other credentials are requested, which is appropriate.
Persistence & Privilege
The skill is not marked always:true and does not request elevated or persistent platform privileges. It expects to operate via network calls during use. Nothing indicates modification of other skills or system-wide settings.
What to consider before installing
This skill will upload your raw video files to a third-party backend (mega-api-prod.nemovideo.ai) for server-side editing and requires or uses a token named NEMO_TOKEN. Before installing:

  1. Decide whether you are comfortable uploading potentially sensitive videos to an external service with no visible privacy/retention policy; avoid using it with confidential material.
  2. Note the metadata/registry inconsistency: the skill claims NEMO_TOKEN is required but also has an anonymous-token fallback — clarify with the author whether the env var is mandatory or optional.
  3. The skill instructs detecting install paths and adding attribution headers; ask how and where session tokens/IDs are stored (in-memory vs. written to disk) and whether any local files are read.
  4. Verify the backend domain and owner (no homepage or publisher info is provided here).

If you still want to use it, do not pre-populate your environment with sensitive credentials, and test first with non-sensitive sample videos. If you need higher assurance, request a homepage, privacy policy, and confirmation of where tokens and session data are persisted.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97fw0s9dksv34j4c86jqd71wd85nd7r
36 downloads · 0 stars · 1 version
Updated 14h ago
v1.0.0
License: MIT-0

Getting Started

Send me your raw video clips and I'll handle the AI video editing. Or just describe what you're after.

Try saying:

  • "edit a 3-minute desktop screen recording into a 1080p MP4"
  • "trim the pauses, add transitions, and export as MP4"
  • "edit and trim video clips in a Windows-style workflow"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.
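The two setup steps above can be sketched as request builders. The endpoints, headers, and body shapes come straight from the steps; the function structure itself is illustrative, since the skill ships no code:

```python
import json
import uuid
import urllib.request

BASE = "https://mega-api-prod.nemovideo.ai"

def build_token_request():
    """Step 1: build the anonymous-token request with a fresh UUID as X-Client-Id."""
    client_id = str(uuid.uuid4())
    req = urllib.request.Request(
        f"{BASE}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": client_id},
    )
    return req, client_id

def build_session_request(token: str, language: str = "en"):
    """Step 2: build the session-creation request; the response carries session_id."""
    body = json.dumps({"task_name": "project", "language": language}).encode()
    return urllib.request.Request(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```

Sending these with `urllib.request.urlopen` (or any HTTP client) and reading `data.token` from the first response completes the flow described above.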

Keep setup communication brief. Don't display raw API responses or token values to the user.

Windows Video Editor — Edit and Export Video Clips

Send me your raw video clips and describe the result you want. The AI video editing runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a 3-minute desktop screen recording, type "trim the pauses, add transitions, and export as MP4", and you'll get a 1080p MP4 back in roughly 1-2 minutes. All rendering happens server-side.

Worth noting: shorter clips under 2 minutes process significantly faster.

Matching Input to Actions

User prompts referencing windows video editor, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says… | Action | Skip SSE? |
| --- | --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | Yes |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | Yes |
| "status" / "状态" / "show tracks" | §3.4 State | Yes |
| "upload" / "上传" / user sends file | §3.2 Upload | Yes |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | No |
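A minimal sketch of the keyword routing described above. The keyword lists and section labels come from the table; the matching logic is an assumption, since the skill does not specify its classifier:

```python
# Keyword lists per action, taken from the routing table; SSE is the fallback.
ROUTES = [
    (("export", "导出", "download", "send me the video"), "§3.5 Export"),
    (("credits", "积分", "balance", "余额"), "§3.3 Credits"),
    (("status", "状态", "show tracks"), "§3.4 State"),
    (("upload", "上传"), "§3.2 Upload"),
]

def route(prompt: str) -> str:
    """Return the action section for a user prompt; default to the SSE path."""
    lowered = prompt.lower()
    for keywords, action in ROUTES:
        if any(k in lowered for k in keywords):
            return action
    return "§3.1 SSE"
```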

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

| Endpoint | Method | Purpose |
| --- | --- | --- |
| /api/tasks/me/with-session/nemo_agent | POST | Start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id. |
| /run_sse | POST | Send a user message. Body includes app_name, session_id, new_message. Stream response with Accept: text/event-stream. Timeout: 15 min. |
| /api/upload-video/nemo_agent/me/<sid> | POST | Upload a file (multipart) or URL. |
| /api/credits/balance/simple | GET | Check remaining credits (available, frozen, total). |
| /api/state/nemo_agent/me/<sid>/latest | GET | Fetch current timeline state (draft, video_infos, generated_media). |
| /api/render/proxy/lambda | POST | Start export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s. |

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
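The export kickoff can be sketched from the body shape in the endpoint table. Only the request construction is shown; the table says to poll status every 30s but does not name a status endpoint, so the polling call is omitted rather than invented:

```python
import json
import time
import urllib.request

BASE = "https://mega-api-prod.nemovideo.ai"

def build_export_request(session_id: str, draft: dict, token: str) -> urllib.request.Request:
    """Build the POST to /api/render/proxy/lambda that starts an export job."""
    body = {
        "id": f"render_{int(time.time())}",   # render_<ts> per the table
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }
    return urllib.request.Request(
        f"{BASE}/api/render/proxy/lambda",
        data=json.dumps(body).encode(),
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```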

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: windows-video-editor
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/clawhub, ~/.cursor/skills/cursor, else unknown)

Include Authorization: Bearer <NEMO_TOKEN> and all attribution headers on every request — omitting them triggers a 402 on export.
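The attribution rules above can be sketched as a header builder. Header names and the platform labels come from the list; the path-matching logic is an assumed reading of the install-path hints:

```python
def detect_platform(install_path: str) -> str:
    """Map an install path to the platform label described above."""
    if ".clawhub" in install_path:
        return "clawhub"
    if ".cursor/skills" in install_path:
        return "cursor"
    return "unknown"

def attribution_headers(token: str, version: str, platform: str = "unknown") -> dict:
    """Headers for every request; omitting them triggers a 402 on export.
    `version` is read from the skill's YAML frontmatter at runtime."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "windows-video-editor",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```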

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
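The error codes above can be sketched as a dispatch table. The codes are from the list; the short action labels are assumed summaries of each recovery step:

```python
def handle_error(code: int) -> str:
    """Return the recovery action for a backend error code."""
    actions = {
        0: "ok",                               # success, continue normally
        1001: "reacquire-token",               # POST /api/auth/anonymous-token again
        1002: "recreate-session",
        2001: "out-of-credits",
        4001: "show-accepted-formats",
        4002: "suggest-compression",
        400: "generate-client-id-and-retry",   # missing X-Client-Id
        402: "subscription-required",          # not a credit issue
        429: "wait-30s-retry-once",
    }
    return actions.get(code, "unknown")
```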

SSE Event Handling

| Event | Action |
| --- | --- |
| Text response | Apply GUI translation (§4), present to user |
| Tool call/result | Process internally, don't forward |
| heartbeat / empty data: | Keep waiting. Every 2 min: "⏳ Still working..." |
| Stream closes | Process final response |

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
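The stream handling above can be sketched as a line-level text/event-stream consumer. It skips comment/heartbeat lines and empty `data:` payloads, yielding only text payloads; how the agent then summarizes or polls state is left out:

```python
def consume_sse(lines):
    """Yield non-empty data payloads from an iterable of decoded SSE lines.

    Comment lines (starting with ':') and empty 'data:' lines are the
    heartbeat/keep-alive cases from the table above and are skipped.
    """
    for line in lines:
        line = line.rstrip("\n")
        if not line or line.startswith(":"):
            continue
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload:
                yield payload
```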

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

| Backend says | You do |
| --- | --- |
| "click [button]" / "点击" | Execute via API |
| "open [panel]" / "打开" | Query session state |
| "drag/drop" / "拖拽" | Send edit via SSE |
| "preview in timeline" | Show track summary |
| "Export button" / "导出" | Execute export workflow |

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
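The draft field mapping can be sketched as a summarizer. The abbreviated keys (t, tt, sg, d) and the track-type codes come from the mapping above; the per-track output format and any segment internals are illustrative assumptions:

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}  # tt codes from the mapping

def summarize_draft(draft: dict) -> list[str]:
    """Return one human-readable line per track in a draft JSON object."""
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")
        n_segments = len(track.get("sg", []))
        duration_s = track.get("d", 0) / 1000  # d is duration in ms
        lines.append(f"{i}. {kind}: {n_segments} segment(s), {duration_s:.0f}s")
    return lines
```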

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "trim the pauses, add transitions, and export as MP4" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WMV for the smoothest experience.

Export as MP4 for widest compatibility across Windows and web platforms.

Common Workflows

Quick edit: Upload → "trim the pauses, add transitions, and export as MP4" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
