Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Video Tool Background

v1.0.0

Get background-replaced videos ready to post, without touching a single slider. Upload your video clips (MP4, MOV, AVI, WebM, up to 500MB), say something lik...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mhogan2013-9/video-tool-background.

Prompt Preview: Install & Setup
Install the skill "Video Tool Background" (mhogan2013-9/video-tool-background) from ClawHub.
Skill page: https://clawhub.ai/mhogan2013-9/video-tool-background
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install video-tool-background

ClawHub CLI


npx clawhub@latest install video-tool-background
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description (cloud AI background replacement) match the API endpoints and actions described in SKILL.md. Requesting a NEMO_TOKEN is appropriate for a third‑party rendering service. However, SKILL.md also describes auto-generating anonymous tokens and references a config path (~/.config/nemovideo/) in its frontmatter while the registry metadata lists no required config paths — an inconsistency between declared requirements and the runtime instructions.
Instruction Scope
Runtime instructions tell the agent to: call an external domain (mega-api-prod.nemovideo.ai), generate a UUID client id, obtain/store a token if NEMO_TOKEN is absent, create sessions, upload user videos, poll render jobs, and infer install path (~/.clawhub/, ~/.cursor/skills/) for attribution. Uploading user videos and creating/storing tokens are expected for this service but have privacy/security implications. The skill also instructs the agent to 'store the returned session_id' without specifying storage location or retention policy.
Install Mechanism
No install spec and no code files are present (instruction-only), so nothing will be written to disk by an installer. This lowers the supply‑chain risk compared with fetching/executing remote archives.
Credentials
Only one env var (NEMO_TOKEN) is declared which fits the service. But the SKILL.md instructs the skill to automatically obtain an anonymous token from the external API when NEMO_TOKEN is not set — meaning the skill can operate without an explicitly provided credential. The frontmatter's configPaths (~/.config/nemovideo/) suggests the skill may persist tokens/sessions locally, but the registry metadata did not declare that path. Requiring or creating credentials and storing them locally without clear disclosure is a proportionality and transparency concern.
Persistence & Privilege
always is false (no forced system-wide inclusion). The skill expects to persist session state/tokens (implied by 'store session_id' and the frontmatter config path), but it does not declare where or how long. Autonomous invocation is allowed (default), which combined with the ability to auto-create tokens and upload files increases blast radius if the endpoint or behavior is untrusted.
What to consider before installing
This skill will send any uploaded videos to an external service (mega-api-prod.nemovideo.ai) and will auto-create an anonymous token if you don't provide NEMO_TOKEN—it may also persist session/token data locally. Before installing: (1) verify you trust the external domain and read that service's privacy/terms (especially for uploading personal or sensitive video); (2) prefer to supply your own NEMO_TOKEN rather than letting the skill auto-generate one; (3) ask the author where tokens/session IDs are stored and how long they are retained (the frontmatter hints at ~/.config/nemovideo/ but the registry didn't declare this); (4) if concerned about privacy, run the skill in a sandboxed environment or avoid uploading sensitive videos. The behavior is coherent with the advertised purpose, but these credential/storage inconsistencies and the external uploads warrant caution.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk9731crgke25nkzznrb94qg9mx85gbnp
49 downloads · 0 stars · 1 version
Updated 2d ago
v1.0.0
MIT-0

Getting Started

Send me your video clips and I'll handle the AI background replacement. Or just describe what you're after.

Try saying:

  • "replace the background of a 30-second talking-head clip and export a 1080p MP4"
  • "remove my background and replace it with a plain white studio backdrop"
  • "swap my video background without a green screen"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.
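The two setup calls above can be sketched as plain request builders. This is a minimal sketch in Python: the requests are assembled as dicts rather than actually sent, and the helper names are mine, not part of the skill.

```python
import uuid

API_BASE = "https://mega-api-prod.nemovideo.ai"

def build_token_request():
    # Anonymous-token call: a random UUID goes in X-Client-Id;
    # the response's data.token becomes NEMO_TOKEN (100 credits, 7 days).
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": str(uuid.uuid4())},
    }

def build_session_request(token, language="en"):
    # Session call: Bearer auth + JSON body; the response carries the
    # session_id that every later request must reuse.
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "json": {"task_name": "project", "language": language},
    }
```

Supplying your own NEMO_TOKEN skips the first call entirely, which is also the safer option flagged in the security scan above.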

Keep setup communication brief. Don't display raw API responses or token values to the user.

Video Tool Background — Replace Backgrounds in Any Video

This tool takes your video clips and runs AI background replacement through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a 30-second talking-head clip and ask to "remove my background and replace it with a plain white studio backdrop": the backend processes it in about 30-60 seconds and hands you a 1080p MP4.

Tip: solid clothing colors help the AI distinguish you from the background more accurately.

Matching Input to Actions

User prompts referencing background replacement, aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification.

User says… → Action
  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends a file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
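The routing table can be approximated with naive substring matching. This is only a sketch: the real skill also uses intent classification, and the fallback to SSE for unmatched prompts mirrors the last row above.

```python
# Keyword sets are checked in order; the first match wins.
ROUTES = [
    ({"export", "导出", "download", "send me the video"}, "§3.5 Export"),
    ({"credits", "积分", "balance", "余额"}, "§3.3 Credits"),
    ({"status", "状态", "show tracks"}, "§3.4 State"),
    ({"upload", "上传"}, "§3.2 Upload"),
]

def route(prompt: str) -> str:
    # Anything that matches no keyword set goes to the SSE chat endpoint.
    text = prompt.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "§3.1 SSE"
```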

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers listed below. If any header is missing, exports return 402.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: video-tool-background
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/clawhub, ~/.cursor/skills/cursor, else unknown)
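Header assembly might look like the sketch below. The path-sniffing rule mirrors the bullets above; the function name and the `install_path` parameter are hypothetical.

```python
def attribution_headers(token, version="1.0.0", install_path=""):
    # Platform is inferred from where the skill file lives on disk.
    if "/.clawhub/" in install_path:
        platform = "clawhub"
    elif "/.cursor/skills/" in install_path:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "video-tool-background",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```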

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.
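The SSE message body above can be sketched as follows. The request is built but not sent, and only the fields named in the spec are included.

```python
import json

def build_sse_request(session_id, message):
    # run_sse body; the response streams back as text/event-stream.
    return {
        "method": "POST",
        "url": "https://mega-api-prod.nemovideo.ai/run_sse",
        "headers": {
            "Accept": "text/event-stream",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "app_name": "nemo_agent",
            "user_id": "me",
            "session_id": session_id,
            "new_message": {"parts": [{"text": message}]},
        }),
    }
```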

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}
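A sketch of the two upload modes, assembled as request dicts (keys follow the `files=@/path` vs. JSON-URL distinction above; the helper name is mine):

```python
def build_upload_request(session_id, file_path=None, url=None):
    # The endpoint accepts either a multipart file or a JSON list of URLs.
    endpoint = ("https://mega-api-prod.nemovideo.ai"
                f"/api/upload-video/nemo_agent/me/{session_id}")
    if file_path is not None:
        return {"url": endpoint, "files": {"files": file_path}}
    return {"url": endpoint, "json": {"urls": [url], "source_type": "url"}}
```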

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
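The export-and-poll flow above can be sketched like this. `fetch_status` is a hypothetical callable standing in for the actual GET request, so the polling logic is testable without the network; the 30s interval matches the spec.

```python
import time

def build_export_body(session_id, draft):
    # Export is free (no credits); the id just needs to be unique.
    return {
        "id": f"render_{int(time.time())}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }

def wait_for_export(render_id, fetch_status, interval=30, max_tries=20):
    # Poll the job until status == completed, then return the download URL.
    for _ in range(max_tries):
        job = fetch_status(f"/api/render/proxy/lambda/{render_id}")
        if job.get("status") == "completed":
            return job["output"]["url"]
        time.sleep(interval)
    raise TimeoutError(f"render {render_id} did not complete")
```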

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Error Handling

Code | Meaning | Action
0    | Success | Continue
1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days)
1002 | Session not found | New session (§3.0)
2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered: "Top up credits in your account"
4001 | Unsupported file | Show supported formats
4002 | File too large | Suggest compress/trim
400  | Missing X-Client-Id | Generate a Client-Id and retry (see §1)
402  | Free plan export blocked | Subscription tier issue, NOT credits: "Register or upgrade your plan to unlock export."
429  | Rate limit (1 token/client/7 days) | Retry once after 30s
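The table can be mirrored as a simple dispatch map. The action labels are my shorthand for the recovery steps above, not strings defined by the API.

```python
def recovery_action(code):
    # One entry per row of the error table; unknown codes fall through.
    table = {
        0: "continue",
        1001: "reauthenticate",        # expired token: get a new anonymous token
        1002: "create_new_session",
        2001: "show_credits_message",
        4001: "show_supported_formats",
        4002: "suggest_compress_or_trim",
        400: "retry_with_client_id",
        402: "suggest_register_or_upgrade",  # plan tier issue, not credits
        429: "retry_after_30s",
    }
    return table.get(code, "unknown")
```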

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
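A sketch of classifying one raw SSE line, under the assumption (not stated in the spec) that event payloads are JSON and text events carry a "text" field:

```python
import json

def classify_sse_line(line):
    # Heartbeats and empty keep-alives stay internal; text goes to the user.
    stripped = line.strip()
    if not stripped or stripped == "data:":
        return "heartbeat"
    if stripped.startswith("data:"):
        try:
            event = json.loads(stripped[len("data:"):])
        except ValueError:
            return "heartbeat"
        return "text" if "text" in event else "tool_call"
    return "heartbeat"
```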

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
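The compact draft keys can be expanded into a readable summary like the timeline shown. This sketch assumes `d` is a per-track duration in milliseconds; the mapping doesn't specify the exact nesting.

```python
def summarize_draft(draft):
    # Keys: t=tracks, tt=track type (0=video, 1=audio, 7=text),
    # sg=segments, d=duration in ms.
    type_names = {0: "video", 1: "audio", 7: "text"}
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):
        kind = type_names.get(track.get("tt"), "unknown")
        segs = len(track.get("sg", []))
        secs = track.get("d", 0) / 1000
        lines.append(f"{i}. {kind}: {segs} segment(s), {secs:g}s")
    return lines
```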

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "remove my background and replace it with a plain white studio backdrop" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 for widest compatibility.

Common Workflows

Quick edit: Upload → "remove my background and replace it with a plain white studio backdrop" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
