Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Add Text To Video

v1.0.0

Turn a 60-second product demo video into 1080p text-overlaid videos just by typing what you need. Whether it's adding titles and captions to social media vid...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for whitejohnk-26/add-text-to-video.

Prompt Preview: Install & Setup
Install the skill "Add Text To Video" (whitejohnk-26/add-text-to-video) from ClawHub.
Skill page: https://clawhub.ai/whitejohnk-26/add-text-to-video
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install add-text-to-video

ClawHub CLI

Package manager switcher

npx clawhub@latest install add-text-to-video
Security Scan

VirusTotal: Benign. View report →

OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description (add text to video) align with the runtime instructions (upload, session, render, download). However, the manifest declares NEMO_TOKEN as a required/primary credential while the SKILL.md also documents an anonymous-token flow that obtains a token automatically; this is an internal inconsistency about whether a pre-provided token is actually required. The SKILL.md frontmatter also lists a config path (~/.config/nemovideo/) even though the registry metadata listed no required config paths.
Instruction Scope
Instructions are explicit about network calls to mega-api-prod.nemovideo.ai to create sessions, upload video, drive SSE, and poll render status — all expected for a cloud render service. They also instruct generating a UUID and POSTing to an anonymous-token endpoint if NEMO_TOKEN is absent. The runtime instructions do not ask for unrelated files or other credentials. The one scope ambiguity: frontmatter/configPaths suggests the agent may access ~/.config/nemovideo/, but SKILL.md does not clearly explain what it would read there.
Install Mechanism
Instruction-only skill with no install spec and no code files — lowest installation risk. All runtime actions are HTTP API calls described in prose.
Credentials
Only one credential (NEMO_TOKEN) is declared, which fits the service. But it's unclear why NEMO_TOKEN is declared as required when the instructions document creating an anonymous token at runtime. The frontmatter listing of a config path may imply accessing a local config directory (~/.config/nemovideo/) without justification. These mismatches reduce confidence in the declared environment requirements.
Persistence & Privilege
always:false and no install means the skill does not request persistent privileged presence. It does instruct attribution header population and 'auto-detect' of platform from an install path (which may require inspecting environment/install path). This is not obviously malicious but is worth noting if you care about local path/data access.
What to consider before installing
This skill appears to do what it says (upload a video to a cloud service, add text overlays, and return a rendered file), but there are a few things to check before installing:

  • Clarify the NEMO_TOKEN behavior: the manifest lists NEMO_TOKEN as required, yet the instructions will create an anonymous token if none is present. Decide whether you want to provide your own token or allow the skill to request one from mega-api-prod.nemovideo.ai.
  • Confirm the endpoint (mega-api-prod.nemovideo.ai) is the official service you expect and acceptable for sending your videos; all video data and metadata will be transmitted to that domain when you use the skill.
  • Ask why the skill lists ~/.config/nemovideo/ in the frontmatter; if the skill will read or write files there, get explicit details about what and why.
  • Avoid sending sensitive or private videos until you verify the service's privacy policy and where data is stored/retained.

Given the inconsistencies, proceed only after the publisher clarifies the NEMO_TOKEN/config-path behavior and confirms data handling practices.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

✍️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97dp8rqx3s0fz9a9480qp9qrx85nthc
33 downloads · 0 stars · 1 version
Updated 12h ago
v1.0.0
MIT-0

Getting Started

Share your video clips and I'll get started on text overlay addition. Or just tell me what you're thinking.

Try saying:

  • "add my video clips"
  • "export 1080p MP4"
  • "add a bold title at the"

Quick Start Setup

This skill connects to a cloud processing backend. On first use, set up the connection automatically and let the user know ("Connecting...").

Token check: Look for NEMO_TOKEN in the environment. If found, skip to session creation. Otherwise:

  • Generate a UUID as client identifier
  • POST https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with X-Client-Id header
  • Extract data.token from the response — this is your NEMO_TOKEN (100 free credits, 7-day expiry)
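The token bootstrap above can be sketched as a request builder. Only the endpoint URL and the X-Client-Id header come from the docs; the helper name and its return shape are illustrative, and no network call is made here.

```python
import uuid

# Base URL taken verbatim from the skill docs.
API_BASE = "https://mega-api-prod.nemovideo.ai"

def build_anonymous_token_request():
    """Return the URL and headers for the anonymous-token POST (no network call)."""
    client_id = str(uuid.uuid4())  # UUID client identifier, per the setup steps
    url = f"{API_BASE}/api/auth/anonymous-token"
    headers = {"X-Client-Id": client_id}
    return url, headers
```

The actual POST (and extracting data.token from the response) would be done with whatever HTTP client the agent runtime provides.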

Session: POST https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Keep the returned session_id for all operations.

Let the user know with a brief "Ready!" when setup is complete. Don't expose tokens or raw API output.

Add Text to Video — Overlay Text and Export Videos

Send me your video clips and describe the result you want. The text overlay addition runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a 60-second product demo video, type "add a bold title at the start and captions throughout the video", and you'll get a 1080p MP4 back in roughly 20-40 seconds. All rendering happens server-side.

Worth noting: shorter clips under 2 minutes process significantly faster.

Matching Input to Actions

User prompts referencing add text to video, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says → Action
  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
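The keyword routing described above can be sketched as a small matcher. This is a minimal illustration under assumed names: the function and the returned labels are mine, and a real implementation would use intent classification rather than bare substring checks.

```python
def route_message(text):
    """Map a user message to a skill action via simple keyword matching (sketch)."""
    t = text.lower()
    if any(k in t for k in ("export", "导出", "download", "send me the video")):
        return "export"    # §3.5 Export
    if any(k in t for k in ("credits", "积分", "balance", "余额")):
        return "credits"   # §3.3 Credits
    if any(k in t for k in ("status", "状态", "show tracks")):
        return "state"     # §3.4 State
    if any(k in t for k in ("upload", "上传")):
        return "upload"    # §3.2 Upload
    return "sse"           # everything else goes through §3.1 SSE
```

Checking in order from most specific intent (export) down to the SSE catch-all mirrors the table's "everything else" row.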

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Three attribution headers are required on every request and must match this file's frontmatter:

Header → Value
  • X-Skill-Source → add-text-to-video
  • X-Skill-Version → frontmatter version
  • X-Skill-Platform → auto-detect: clawhub / cursor / unknown from install path

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
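Assembling those headers might look like this. The header names and values come from the text above; the helper itself is hypothetical.

```python
def build_request_headers(token, skill_version, platform="unknown"):
    """Auth + attribution headers required on every request, per the docs."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "add-text-to-video",
        "X-Skill-Version": skill_version,   # read from the SKILL.md frontmatter
        "X-Skill-Platform": platform,       # clawhub / cursor / unknown
    }
```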

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.
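The /run_sse body can be built as below; the field names are copied from the endpoint description, while the helper itself is illustrative.

```python
def build_sse_body(session_id, message_text):
    """JSON body for POST /run_sse, shaped as the endpoint description specifies."""
    return {
        "app_name": "nemo_agent",
        "user_id": "me",
        "session_id": session_id,
        "new_message": {"parts": [{"text": message_text}]},
    }
```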

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
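The export polling loop might be sketched like this. The status-fetching function is injected so the sketch stays network-free; the response shape (status, output.url) and the 30-second interval come from the description above, everything else is an assumption.

```python
import time

def poll_render(fetch_status, render_id, interval=30, max_attempts=20):
    """Poll GET /api/render/proxy/lambda/<id> until status == "completed".

    fetch_status is a caller-supplied callable that performs the GET and
    returns the decoded JSON; it is injected here to keep the sketch testable.
    """
    for _ in range(max_attempts):
        resp = fetch_status(render_id)
        if resp.get("status") == "completed":
            return resp["output"]["url"]   # download URL lives at output.url
        time.sleep(interval)
    raise TimeoutError(f"render {render_id} did not complete")
```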

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

SSE Event Handling

Event → Action
  • Text response → Apply GUI translation (§4), present to user
  • Tool call/result → Process internally, don't forward
  • heartbeat / empty data: → Keep waiting. Every 2 min: "⏳ Still working..."
  • Stream closes → Process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
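A rough dispatcher for the event table might look like the sketch below. Note the event shape (a dict with "data" and "type" keys) is an assumption for illustration; the skill does not document the parsed wire format.

```python
def classify_sse_event(event):
    """Classify one parsed SSE event per the handling table (event shape assumed)."""
    if not event or not event.get("data"):
        return "wait"        # heartbeat / empty data: keep waiting
    if event.get("type") == "tool":
        return "internal"    # tool call/result: process, don't forward
    return "present"         # text response: translate GUI wording, show the user
```

A "wait" result that persists past the stream close is the cue to fall back to polling session state, as described above.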

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.
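The abbreviated draft keys can be decoded into a readable summary, for example as below. The field abbreviations (t, tt, sg, d) come from the mapping above; the helper and its output format are illustrative.

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}  # tt values from the mapping above

def summarize_draft(draft):
    """List each segment as "<track type>: <duration>s" from abbreviated draft JSON."""
    lines = []
    for track in draft.get("t", []):          # t  = tracks
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")
        for seg in track.get("sg", []):       # sg = segments
            lines.append(f"{kind}: {seg.get('d', 0) / 1000:.1f}s")  # d = duration (ms)
    return lines
```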

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)

Error Handling

Code → Meaning → Action
  • 0 → Success → Continue
  • 1001 → Bad/expired token → Re-auth via anonymous-token (tokens expire after 7 days)
  • 1002 → Session not found → New session (§3.0)
  • 2001 → No credits → Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account"
  • 4001 → Unsupported file → Show supported formats
  • 4002 → File too large → Suggest compress/trim
  • 400 → Missing X-Client-Id → Generate Client-Id and retry (see §1)
  • 402 → Free plan export blocked → Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
  • 429 → Rate limit (1 token/client/7 days) → Retry in 30s once
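The error table maps naturally onto a small dispatch dict; the action labels below are my own shorthand for the documented remedies, not identifiers from the skill.

```python
ERROR_ACTIONS = {
    0:    "continue",
    1001: "reauth_anonymous",        # bad/expired token (tokens expire after 7 days)
    1002: "create_new_session",      # session not found
    2001: "show_credits_remedy",     # registration URL (anonymous) or top-up (registered)
    4001: "show_supported_formats",
    4002: "suggest_compress_or_trim",
    400:  "retry_with_client_id",    # generate X-Client-Id and retry
    402:  "prompt_plan_upgrade",     # subscription tier issue, NOT credits
    429:  "retry_once_in_30s",       # rate limit: 1 token/client/7 days
}

def action_for(code):
    """Look up the documented remedy; unknown codes are surfaced to the user."""
    return ERROR_ACTIONS.get(code, "report_to_user")
```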

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "add a bold title at the start and captions throughout the video" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 for widest compatibility across platforms.

Common Workflows

Quick edit: Upload → "add a bold title at the start and captions throughout the video" → Download MP4. Takes 20-40 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
