Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Video Editing AI Android

v1.0.0

Get edited MP4 clips ready to post, without touching a single slider. Upload your raw video footage (MP4, MOV, AVI, WebM, up to 500MB), say something like "t...

Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description (AI video editing) align with the runtime instructions: uploading videos, creating a session, streaming edits, and starting renders. The single required credential (NEMO_TOKEN) is consistent with a remote API integration.
Instruction Scope
SKILL.md instructs the agent to generate/use a bearer token, create sessions, upload files, stream SSE, poll state, and include attribution headers. Those are all within scope for a video-editing integration. However, it also instructs the agent to read this file's YAML frontmatter and detect its install path to set X-Skill-Platform (checking paths like ~/.clawhub/ or ~/.cursor/skills/). Asking the agent to inspect local install paths and include that derived metadata goes beyond pure editing functionality and could expose local state about other tooling and paths; verify whether that is necessary and where that data is sent.
Install Mechanism
This is instruction-only (no install spec, no code files). That is the lowest-risk install pattern — nothing is downloaded or written by an installer step from a remote URL in the skill package itself.
Credentials
The skill declares a single primary credential NEMO_TOKEN, which is appropriate for calling the remote NEMO API. Minor inconsistency: the YAML frontmatter mentions a configPaths entry (~/.config/nemovideo/) while the registry metadata summary reported no required config paths. Confirm whether the skill will attempt to read that config path or write tokens to disk. Also note: the skill tells the agent to request an anonymous token from the API and use it as NEMO_TOKEN — check how/where that token will be stored and for how long.
Persistence & Privilege
The skill is not always-enabled and does not request elevated platform privileges. It asks the agent to save and reuse session_id and tokens (normal for remote sessions), but does not request to modify other skills or system-wide settings.
What to consider before installing
This skill appears to do what it says (upload your videos to a remote render service). Before installing or using it, check:

  1. The domain (mega-api-prod.nemovideo.ai) — do you trust it to receive your videos and store them?
  2. Where the NEMO_TOKEN will be stored (in-memory vs. written to disk or the environment) and how long it's valid; treat any token as sensitive.
  3. Whether the skill will read local paths or config directories (it asks to detect its install path and references ~/.config/nemovideo/) — if you prefer not to expose local filesystem metadata, decline or ask the skill author to remove that behavior.
  4. Privacy: uploading raw video may include PII; confirm retention and deletion policies with the service.

If you cannot verify the remote service or token handling, avoid installing or running the skill.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97a7064mgn1b226gnjx17dy7985ax6t
30 downloads
0 stars
1 version
Updated 21h ago
v1.0.0
MIT-0

Getting Started

Share your raw video footage and I'll get started on AI video editing. Or just tell me what you're thinking.

Try saying:

  • "edit my raw video footage"
  • "export 1080p MP4"
  • "trim the clip, add transitions, and export it ready for Instagram"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
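The setup flow above (generate a client UUID, fetch an anonymous token, then open a session) can be sketched as request builders. This is a minimal sketch, not the skill's actual implementation: the URLs and body shapes come from the text above, but the response shape (`data.token`, `session_id`) is only what the skill claims, and `anonymous_token_request`/`session_request` are hypothetical helper names.

```python
import uuid

BASE_URL = "https://mega-api-prod.nemovideo.ai"

def anonymous_token_request():
    """Build the free-token request: POST with an X-Client-Id UUID header.

    The skill says the response carries data.token (100 credits, 7-day expiry).
    """
    client_id = str(uuid.uuid4())
    return {
        "method": "POST",
        "url": f"{BASE_URL}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": client_id},
    }

def session_request(token):
    """Build the session-creation request; save session_id from the response."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/api/tasks/me/with-session/nemo_agent",
        "headers": {"Authorization": f"Bearer {token}"},
        "json": {"task_name": "project"},
    }
```

Keeping these as pure builders (rather than firing the requests inline) makes it easier to audit exactly what would be sent before any network call happens — relevant here given the scan findings.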

Video Editing AI Android — Edit and Export Videos with AI

Send me your raw video footage and describe the result you want. The AI video editing runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a 60-second phone recording from an Android device, type "trim the clip, add transitions, and export it ready for Instagram", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: shorter clips under 2 minutes process significantly faster.

Matching Input to Actions

User prompts referencing "video editing ai android", aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification.

| User says… | Action |
| --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits |
| "status" / "状态" / "show tracks" | §3.4 State |
| "upload" / "上传" / user sends file | §3.2 Upload |
| Everything else (generate, edit, add BGM…) | §3.1 SSE |
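The routing table above amounts to an ordered keyword check with SSE as the fallback. A minimal sketch, assuming simple substring matching in table order (the actual classifier may be more sophisticated); `route` is a hypothetical name:

```python
def route(message: str) -> str:
    """Map a user message to a section, mirroring the keyword table.

    Rules are checked in table order; anything unmatched falls through
    to the general SSE edit channel.
    """
    text = message.lower()
    rules = [
        ("3.5 Export", ["export", "导出", "download", "send me the video"]),
        ("3.3 Credits", ["credits", "积分", "balance", "余额"]),
        ("3.4 State", ["status", "状态", "show tracks"]),
        ("3.2 Upload", ["upload", "上传"]),
    ]
    for section, keywords in rules:
        if any(k in text for k in keywords):
            return section
    return "3.1 SSE"  # default: generate, edit, add BGM, ...
```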

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

| Endpoint | Method | Purpose |
| --- | --- | --- |
| /api/tasks/me/with-session/nemo_agent | POST | Start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id. |
| /run_sse | POST | Send a user message. Body includes app_name, session_id, new_message. Stream response with Accept: text/event-stream. Timeout: 15 min. |
| /api/upload-video/nemo_agent/me/<sid> | POST | Upload a file (multipart) or URL. |
| /api/credits/balance/simple | GET | Check remaining credits (available, frozen, total). |
| /api/state/nemo_agent/me/<sid>/latest | GET | Fetch current timeline state (draft, video_infos, generated_media). |
| /api/render/proxy/lambda | POST | Start export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s. |

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
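The export row describes a payload shape and a 30-second poll cadence. A sketch under those stated assumptions: `export_body` mirrors the documented body, while `poll_until_done` takes a caller-supplied `fetch_status` callable (hypothetical — the status endpoint's response shape isn't documented here, so a `download_url` field is assumed for illustration).

```python
import time

def export_body(session_id, draft, ts):
    """Assemble the export payload for /api/render/proxy/lambda.

    `draft` is the timeline JSON fetched from the state endpoint.
    """
    return {
        "id": f"render_{ts}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }

def poll_until_done(fetch_status, interval=30, max_wait=600, sleep=time.sleep):
    """Poll every 30s (per the table) until a download URL appears.

    `fetch_status` is a placeholder for whatever call returns render status;
    `sleep` is injectable so the loop can be tested without waiting.
    """
    waited = 0
    while waited <= max_wait:
        status = fetch_status()
        if status.get("download_url"):
            return status["download_url"]
        sleep(interval)
        waited += interval
    raise TimeoutError("render did not finish in time")
```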

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: video-editing-ai-android
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/ → clawhub, ~/.cursor/skills/ → cursor, else unknown)

Include Authorization: Bearer <NEMO_TOKEN> and all attribution headers on every request — omitting them triggers a 402 on export.
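The header construction can be sketched as below. This is only an illustration of what the skill asks for — the install-path detection it encodes is exactly the behavior the security scan flags, so auditing it as a pure function makes clear that the derived platform label is sent with every request. `attribution_headers` is a hypothetical helper name.

```python
import os

def attribution_headers(token, version, skill_dir):
    """Build the auth + attribution headers the skill requires on every call.

    Platform detection mirrors the install-path check described above;
    note that this leaks which agent platform hosts the skill.
    """
    path = os.path.expanduser(skill_dir) + "/"
    if "/.clawhub/" in path:
        platform = "clawhub"
    elif "/.cursor/skills/" in path:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "video-editing-ai-android",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```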

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
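The code list above maps cleanly onto a dispatch table. A minimal sketch — the action names (`reacquire_token`, etc.) are hypothetical labels, not API values:

```python
def handle_error(code):
    """Map the documented error codes to a recovery action (sketch only)."""
    actions = {
        0: "continue",                 # success
        1001: "reacquire_token",       # via /api/auth/anonymous-token
        1002: "new_session",
        2001: "out_of_credits",        # offer registration / top-up
        4001: "show_accepted_formats",
        4002: "suggest_smaller_file",
        400: "regenerate_client_id",   # missing X-Client-Id
        402: "subscription_required",  # not a credit issue
        429: "wait_30s_retry_once",
    }
    return actions.get(code, "unknown")
```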

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty "data:" lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
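A minimal SSE reader over an iterable of lines illustrates the distinction between text events and heartbeats. This is a sketch, not a full SSE client (it ignores event:/id: fields and comments; a real client would stream from /run_sse with Accept: text/event-stream); `read_sse` is a hypothetical name:

```python
def read_sse(lines):
    """Yield ("text", payload) for non-empty data lines and
    ("heartbeat", None) for empty ones, which signal work in progress.
    """
    for line in lines:
        line = line.rstrip("\n")
        if not line.startswith("data:"):
            continue  # skip comments and other SSE fields in this sketch
        payload = line[len("data:"):].strip()
        if payload:
            yield ("text", payload)
        else:
            yield ("heartbeat", None)
```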

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
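Applying the field mapping above, a draft can be turned into a readable track summary. A sketch assuming only the documented keys (t, tt, sg); the output wording and `summarize_draft` name are illustrative, not the skill's actual formatting:

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}  # tt values per the mapping

def summarize_draft(draft):
    """Render the abbreviated draft JSON as numbered track lines."""
    lines = []
    for i, track in enumerate(draft.get("t", []), 1):
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")
        n = len(track.get("sg", []))
        lines.append(f"{i}. {kind}: {n} segment(s)")
    return lines
```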

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "trim the clip, add transitions, and export it ready for Instagram" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 for widest compatibility across Android and social platforms.

Common Workflows

Quick edit: Upload → "trim the clip, add transitions, and export it ready for Instagram" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
