Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Best Ai Video Editor

v1.0.0

Turn raw footage into polished, professional-quality videos without spending hours in complex software. This skill helps you find and use the best-ai-video-e...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for whitejohnk-26/best-ai-video-editor.

Prompt preview (Install & Setup):
Install the skill "Best Ai Video Editor" (whitejohnk-26/best-ai-video-editor) from ClawHub.
Skill page: https://clawhub.ai/whitejohnk-26/best-ai-video-editor
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install best-ai-video-editor

ClawHub CLI


npx clawhub@latest install best-ai-video-editor
Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's stated purpose is to recommend and guide use of the "best AI video editors," but the runtime instructions are tightly integrated with a single backend (mega-api-prod.nemovideo.ai). Recommending multiple third‑party tools is promised in the description, yet all actions route to NemoVideo; this is a scope mismatch (description vs actual integration). Requiring NEMO_TOKEN and a Nemo config path is otherwise consistent with a NemoVideo integration.
Instruction Scope
SKILL.md instructs the agent to obtain/use a NEMO_TOKEN, create sessions, POST messages, upload video files (multipart or by URL), poll jobs, and handle SSE. Those actions are coherent for a cloud video processing skill. Important: the instructions explicitly upload user video data to an external service and will send Authorization headers (Bearer <NEMO_TOKEN>) and skill attribution headers with requests — users' media and metadata will leave the machine.
Install Mechanism
This is an instruction-only skill with no install spec or downloaded code, so nothing is written to disk by an installer. That reduces risk from arbitrary installs.
Credentials
Metadata declares a single required env var (NEMO_TOKEN) and a config path (~/.config/nemovideo/), which is proportionate for a cloud API client. However, SKILL.md contains logic to POST to an anonymous-token endpoint and extract a token if NEMO_TOKEN is absent — so the skill will obtain short‑lived credentials itself. This makes the metadata’s phrasing of NEMO_TOKEN as strictly required somewhat inconsistent. The skill does not request unrelated secrets, but any NEMO_TOKEN (anonymous or user-provided) grants access to the remote service and possibly to billing/credits.
Persistence & Privilege
always is false and there is no install-time persistent agent modification described. The skill stores session_id for the session lifecycle only. There is no request for permanent platform-level privilege in the manifest.
What to consider before installing
  • The skill will upload your video files and related metadata to mega-api-prod.nemovideo.ai and will include an Authorization: Bearer <NEMO_TOKEN> header. Don't send sensitive or private footage unless you trust that service and its privacy policy.
  • Metadata lists NEMO_TOKEN as required, but the instructions will automatically request an anonymous token from the API if none is present (100 free credits, 7-day expiry). Decide whether you want to provide your own token or allow the skill to create one.
  • The description suggests comparing many AI editors, but the runtime actually uses NemoVideo's backend; expect the skill to be a client for that single provider rather than an impartial survey of tools.
  • Confirm the ownership and reputation of nemovideo.ai, and read their terms, limits, and any billing implications before sending large jobs (credits, usage caps, or costs may apply).
  • The skill declares a config path (~/.config/nemovideo/) in metadata, yet SKILL.md does not clearly document reading that path. If you keep credentials or config files there, be aware the metadata signals the skill may access that location.
  • Because this is instruction-only, there is no installer action on your machine, but the agent will make network calls autonomously when invoked. If you are concerned about automatic network access, avoid invoking the skill or restrict tokens.

If you want, I can list the exact API calls the SKILL.md will make and the headers/fields it will send so you can review them before proceeding.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis

  • Env (primary): NEMO_TOKEN
  • Latest: vk970fkfyqeaq4ccjk3hc4r9cfx84261k
  • 84 downloads · 0 stars · 1 version
  • Updated 3w ago
  • v1.0.0 · MIT-0 license

Getting Started

Send me a description of your video project and I'll recommend the best AI video editor workflow, tools, and editing steps for it. No footage yet? Just describe the style or platform you're targeting.

Try saying:

  • "I have a 10-minute raw interview recording and need to cut it down to a 90-second highlight reel for Instagram — what's the best AI video editor workflow for this?"
  • "I'm making a product launch video for YouTube and want smooth transitions, captions, and a cinematic color grade — which AI editing tools should I use and in what order?"
  • "I shoot travel content on my phone and want to repurpose horizontal footage into vertical Reels and TikToks automatically — what's the best AI video editor that handles this?"

Quick Start Setup

This skill connects to a cloud processing backend. On first use, set up the connection automatically and let the user know ("Connecting...").

Token check: Look for NEMO_TOKEN in the environment. If found, skip to session creation. Otherwise:

  • Generate a UUID as client identifier
  • POST https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with X-Client-Id header
  • Extract data.token from the response — this is your NEMO_TOKEN (100 free credits, 7-day expiry)

Session: POST https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Keep the returned session_id for all operations.

Let the user know with a brief "Ready!" when setup is complete. Don't expose tokens or raw API output.
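The bootstrap flow above can be sketched in Python. This is a minimal illustration, not the skill's actual implementation: it builds the documented requests with the stdlib `urllib`, and the `data.token` response shape is taken directly from the steps above. Nothing here is sent over the network.

```python
import json
import uuid
import urllib.request

API_BASE = "https://mega-api-prod.nemovideo.ai"

def anonymous_token_request() -> urllib.request.Request:
    """Build (but do not send) the anonymous-token request."""
    client_id = str(uuid.uuid4())  # UUID as client identifier
    return urllib.request.Request(
        f"{API_BASE}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": client_id},
    )

def extract_token(response_body: str) -> str:
    """Pull data.token out of the JSON response body."""
    return json.loads(response_body)["data"]["token"]

def session_request(token: str) -> urllib.request.Request:
    """Build the create-session request with Bearer auth."""
    body = json.dumps({"task_name": "project"}).encode()
    return urllib.request.Request(
        f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```

Sending these with `urllib.request.urlopen` (or any HTTP client) and keeping the returned `session_id` completes the setup the prose describes.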

Edit Smarter, Not Harder With AI Video Tools

Creating compelling video content used to require expensive software and years of practice. This skill changes that by acting as your personal guide to the best AI video editing tools and techniques available today — helping you cut through the noise and get straight to results that actually look great.

Whether you're editing a YouTube vlog, a branded social media reel, a product demo, or a short film, this skill walks you through the right approach for your specific project. You'll get concrete recommendations on which tools to use, how to structure your edit, and what features to lean on — from auto-captions and smart trimming to background removal and AI-powered color correction.

This isn't about generic advice. It's about understanding your footage, your audience, and your deadline — then helping you produce something you're genuinely proud of. Beginners get clear step-by-step guidance; experienced editors get faster workflows and sharper creative decisions.

Smart Edit Request Routing

User prompts — whether for auto-cut, scene detection, color grading, or caption generation — are parsed by the intent engine and routed to the matching AI editing pipeline in real time.

User says... → Action

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends a file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE

Only the §3.1 route opens an SSE stream; the other routes skip it.
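The routing rules above reduce to a small dispatch function. A sketch under the assumption that matching is case-insensitive substring search; the `route` function and the action labels are hypothetical names for the numbered sections.

```python
# Keyword sets copied from the routing table; action labels are illustrative.
ROUTES = [
    ({"export", "导出", "download", "send me the video"}, "export"),   # §3.5
    ({"credits", "积分", "balance", "余额"}, "credits"),               # §3.3
    ({"status", "状态", "show tracks"}, "state"),                      # §3.4
    ({"upload", "上传"}, "upload"),                                    # §3.2
]

def route(message: str, has_file: bool = False) -> str:
    """Map a user message to an editing pipeline per the table above."""
    if has_file:
        return "upload"              # user sent a file directly
    text = message.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "sse"                     # everything else goes to §3.1
```

For example, `route("please export the video")` hits the first keyword set, while a generic edit request falls through to the SSE path.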

Cloud Processing API Reference

All render jobs are offloaded to a distributed cloud backend that handles frame analysis, motion tracking, and generative fill without taxing your local machine. API calls return a job ID you can poll for progress, preview URLs, and final export links once transcoding completes.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: best-ai-video-editor
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from the install path (~/.clawhub → clawhub, ~/.cursor/skills → cursor, else unknown)

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
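A helper that assembles the required headers might look like the following; `detect_platform` encodes the install-path heuristic described above, with the exact substring matching being an assumption of this sketch.

```python
def attribution_headers(token: str, version: str, platform: str) -> dict:
    """All four headers the backend requires.

    Per the docs above, a missing attribution header fails export with 402.
    """
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "best-ai-video-editor",
        "X-Skill-Version": version,    # read from SKILL.md frontmatter
        "X-Skill-Platform": platform,  # clawhub / cursor / unknown
    }

def detect_platform(install_path: str) -> str:
    """Best-guess platform from the install path (heuristic, not documented API)."""
    if ".clawhub" in install_path:
        return "clawhub"
    if ".cursor/skills" in install_path:
        return "cursor"
    return "unknown"
```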

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
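The export polling described above (poll every 30s until `status = completed`, then read `output.url`) could be sketched as follows. `fetch_status` is an injected callable (a hypothetical seam, not part of the documented API) so the loop can be exercised without network access.

```python
import time

def poll_export(fetch_status, render_id: str,
                interval: float = 30, max_polls: int = 30) -> str:
    """Poll GET /api/render/proxy/lambda/<id> until the render completes.

    fetch_status(render_id) should return the parsed JSON for one poll;
    in real use it would perform the HTTP GET with the attribution headers.
    """
    for _ in range(max_polls):
        job = fetch_status(render_id)
        if job.get("status") == "completed":
            return job["output"]["url"]   # final download URL
        time.sleep(interval)              # 30s between polls per the docs
    raise TimeoutError(f"render {render_id} did not complete")
```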

SSE Event Handling

Event → Action

  • Text response → apply GUI translation (§4) and present to user
  • Tool call/result → process internally; don't forward
  • Heartbeat / empty data: line → keep waiting; every 2 min, show "⏳ Still working..."
  • Stream closes → process the final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

Backend says → You do

  • "click [button]" / "点击" → execute via API
  • "open [panel]" / "打开" → query session state
  • "drag/drop" / "拖拽" → send the edit via SSE
  • "preview in timeline" → show a track summary
  • "Export button" / "导出" → run the export workflow
Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.
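The compact draft keys map onto a small lookup table. `summarize_draft` is a hypothetical helper showing how an agent might turn a draft object into a human-readable track summary; the draft structure assumed here follows only the field mapping stated above.

```python
# Field mapping from the docs: t=tracks, tt=track type, sg=segments,
# d=duration(ms), m=metadata.
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}

def track_type(tt: int) -> str:
    """Decode a tt value, falling back to a labeled unknown."""
    return TRACK_TYPES.get(tt, f"unknown({tt})")

def summarize_draft(draft: dict) -> list:
    """One readable line per track in a compact draft."""
    lines = []
    for track in draft.get("t", []):
        kind = track_type(track.get("tt", -1))
        n_segments = len(track.get("sg", []))
        lines.append(f"{kind} track: {n_segments} segment(s)")
    return lines
```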

Example track summary (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)

Error Handling

Code → Meaning → Action

  • 0 (Success): continue
  • 1001 (Bad/expired token): re-auth via the anonymous-token endpoint (tokens expire after 7 days)
  • 1002 (Session not found): create a new session (§3.0)
  • 2001 (No credits): for anonymous users, show the registration URL with ?bind=<id> (get <id> from the create-session or state response). For registered users: "Top up credits in your account"
  • 4001 (Unsupported file): show the supported formats
  • 4002 (File too large): suggest compressing or trimming
  • 400 (Missing X-Client-Id): generate a Client-Id and retry (see §1)
  • 402 (Free plan export blocked): a subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
  • 429 (Rate limit: 1 token/client/7 days): retry once after 30s
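The error table reduces to a simple dispatch map. The action strings below are hypothetical labels for the recovery steps above, not values returned by the API.

```python
def handle_error(code: int) -> str:
    """Map a backend error code to its recovery action from the table above."""
    actions = {
        0: "continue",
        1001: "reauth_anonymous_token",       # token expired (7-day lifetime)
        1002: "create_new_session",           # §3.0
        2001: "prompt_credits",               # registration URL or top-up
        4001: "show_supported_formats",
        4002: "suggest_compress_or_trim",
        400: "regenerate_client_id_and_retry",  # see §1
        402: "prompt_plan_upgrade",           # subscription tier, NOT credits
        429: "retry_once_after_30s",          # 1 token/client/7 days
    }
    return actions.get(code, "unknown_error")
```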

Quick Start Guide

Getting started with the best AI video editor for your project takes just a few steps. First, identify your output goal: Is this for YouTube, TikTok, a client presentation, or personal use? Your platform determines aspect ratio, length, and caption requirements before you touch a single clip.

Next, choose your tool based on your skill level and budget. Beginners should start with CapCut or Veed.io — both offer free tiers with strong AI auto-edit, captioning, and resizing features. Intermediate creators benefit from Descript for dialogue-driven edits or Runway for visual effects. Advanced editors should explore Adobe Premiere with AI plugins or DaVinci Resolve's neural engine for color and audio.

Once you've picked your tool, import your raw footage, run the AI scene detection or auto-cut feature first, then layer in captions, transitions, and music. Always do a final manual review pass — AI gets you 80% there fast, but your creative eye closes the gap.

Performance Notes

AI video editors vary significantly in how they handle different types of footage. Tools like Runway ML and CapCut AI perform best with well-lit, stable clips — shaky or low-light footage may produce inconsistent results with auto-edit features. If you're working with 4K files, check that your chosen editor supports your resolution before committing to a workflow, as some browser-based AI tools compress exports by default.

For long-form content (over 20 minutes), batch processing and scene detection tools will save you the most time. Editors like Descript or Adobe Premiere with Sensei AI handle transcription-based editing well at scale. For short-form social content under 60 seconds, CapCut, OpusClip, and Veed.io tend to produce the fastest turnaround with the least manual adjustment needed.

Always export a test clip before committing to a full render — AI color grading and audio enhancement can behave differently across monitors and playback platforms.
