Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Editing Text Generator

v1.0.0

Skip the learning curve of professional editing software. Describe what you want — generate on-screen editing instructions as text overlays synced to the video.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for francemichaell-15/editing-text-generator.

Prompt Preview: Install & Setup
Install the skill "Editing Text Generator" (francemichaell-15/editing-text-generator) from ClawHub.
Skill page: https://clawhub.ai/francemichaell-15/editing-text-generator
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install editing-text-generator

ClawHub CLI

Package manager switcher

npx clawhub@latest install editing-text-generator
Security Scan

  • VirusTotal: Benign
  • OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description match the runtime instructions: the SKILL.md documents uploading video, creating a session, running SSE-based generation, and downloading rendered MP4s from a cloud backend. The requested credential NEMO_TOKEN is relevant to the cloud service. No unrelated services or credentials are requested.
Instruction Scope
Instructions direct the agent to send user video files to an external API (mega-api-prod.nemovideo.ai), create sessions, upload, poll renders, and stream SSE. They also instruct the agent to generate an anonymous token if NEMO_TOKEN isn't present. The doc asks the agent to derive and send attribution headers that depend on the agent's install path (fingerprinting local install paths) — this could expose local environment information. The overall network/file operations are expected for a cloud render skill, but the install-path attribution and anonymous-token creation behavior are notable and should be explicit to users.
Install Mechanism
Instruction-only skill with no install spec or code files; no packages or archives are downloaded. Lowest install risk from an installation-mechanism perspective.
Credentials
Registry lists NEMO_TOKEN as a required primary env var, but SKILL.md provides an anonymous-token flow that creates a token automatically if NEMO_TOKEN is absent. Additionally, the SKILL.md frontmatter includes a configPaths entry (~/.config/nemovideo/) while the registry metadata earlier indicated no required config paths — this inconsistency is unexplained. Aside from NEMO_TOKEN, no other secrets are requested.
Persistence & Privilege
Skill is not always-enabled and does not request system-wide privileges. It uses ephemeral session tokens and render job IDs; nothing in the SKILL.md asks to modify other skills or persist itself permanently.
What to consider before installing
  • This skill sends your video files to an external service (mega-api-prod.nemovideo.ai) for processing. If the content is sensitive, review the service's privacy/security policies or avoid uploading.
  • The registry claims NEMO_TOKEN is required, but the skill can obtain an anonymous token itself. Decide whether you want to supply your own token (potentially longer/paid access) or rely on the anonymous flow (limited credits, 7-day expiry).
  • The SKILL.md instructs the agent to include attribution headers derived from local install paths. That may reveal information about your local environment; ask the publisher why that telemetry is necessary and whether the headers can be minimized.
  • There is an inconsistency between the registry metadata (no config paths) and the SKILL.md frontmatter (lists ~/.config/nemovideo/). Ask the publisher whether the skill will read or write files in that path and whether tokens are stored to disk.
  • Because the source and homepage are unknown, consider asking for source code or more provenance (who operates mega-api-prod.nemovideo.ai). Verify the domain and operator before sending private content.
  • If you decide to proceed, avoid providing unrelated credentials, and test with non-sensitive short clips first.

Additional information that would raise confidence to "high": a publisher homepage or source repo, an explicit privacy policy for the backend, clarification on whether tokens/config are persisted to disk, and justification for the install-path attribution headers.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

✍️ Clawdis

  • Env: NEMO_TOKEN
  • Primary env: NEMO_TOKEN
  • Latest: vk974yv81wnk6tdnw80hpaf7v6n85jb1p
  • 42 downloads · 0 stars · 1 version
  • Updated 2d ago · v1.0.0 · MIT-0

Getting Started

Share your video clips and I'll get started on AI text overlay generation. Or just tell me what you're thinking.

Try saying:

  • "generate my video clips"
  • "export 1080p MP4"
  • "generate on-screen editing instructions as text"

Quick Start Setup

This skill connects to a cloud processing backend. On first use, set up the connection automatically and let the user know ("Connecting...").

Token check: Look for NEMO_TOKEN in the environment. If found, skip to session creation. Otherwise:

  • Generate a UUID as client identifier
  • POST https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with X-Client-Id header
  • Extract data.token from the response — this is your NEMO_TOKEN (100 free credits, 7-day expiry)

Session: POST https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Keep the returned session_id for all operations.

Let the user know with a brief "Ready!" when setup is complete. Don't expose tokens or raw API output.
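The Quick Start flow above can be sketched as a small helper. This is a minimal sketch, not the skill's actual implementation: the `post` callable is a hypothetical stand-in for your HTTP client (injected so the flow can be exercised without network access), and the response shapes mirror what the doc states (`data.token` from the anonymous-token endpoint, `session_id` from session creation).

```python
import json
import uuid

API_BASE = "https://mega-api-prod.nemovideo.ai"

def ensure_token(post, env_token=None):
    """Return a NEMO_TOKEN, minting an anonymous one if none is set.

    `post` is any callable (url, headers=..., body=...) -> dict, so the
    flow can be tested without real network access.
    """
    if env_token:
        return env_token
    client_id = str(uuid.uuid4())  # UUID client identifier per the docs
    resp = post(
        f"{API_BASE}/api/auth/anonymous-token",
        headers={"X-Client-Id": client_id},
        body=None,
    )
    # data.token: 100 free credits, 7-day expiry
    return resp["data"]["token"]

def create_session(post, token, task_name="project"):
    """Create a working session; keep session_id for all later operations."""
    resp = post(
        f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        headers={"Authorization": f"Bearer {token}"},
        body=json.dumps({"task_name": task_name}),
    )
    return resp["session_id"]
```

Injecting `post` also makes it easy to add logging or to stub the backend while deciding whether to trust the service.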

Editing Text Generator — Generate Text Overlays for Videos

This tool takes your video clips and runs AI text overlay generation through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a 2-minute tutorial video clip and want to generate on-screen editing instructions as text overlays synced to the video — the backend processes it in about 30-60 seconds and hands you a 1080p MP4.

Tip: shorter clips under 60 seconds produce more accurate text placement and sync.

Matching Input to Actions

User prompts that reference the editing text generator, aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification.

User says... → Action

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends a file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
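The routing above can be sketched as a plain keyword matcher. This is a simplified illustration, not the skill's actual classifier: real intent classification may be fuzzier, and the section labels here are just strings for demonstration.

```python
def route(message: str) -> str:
    """Map a user message to an action section via keyword matching.

    Rows are checked in order; anything unmatched falls through to the
    SSE edit path, mirroring the routing table in the docs.
    """
    msg = message.lower()
    table = [
        ("3.5 Export", ["export", "导出", "download", "send me the video"]),
        ("3.3 Credits", ["credits", "积分", "balance", "余额"]),
        ("3.4 State", ["status", "状态", "show tracks"]),
        ("3.2 Upload", ["upload", "上传"]),
    ]
    for action, keywords in table:
        if any(k in msg for k in keywords):
            return action
    return "3.1 SSE"  # default: generate/edit commands go through SSE
```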

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is editing-text-generator, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
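The header-building logic described above can be sketched as follows. This is an illustrative sketch assuming the install-path conventions stated in the doc; note that the security scan flags this platform detection as a form of local-environment fingerprinting, so consider whether you want to send it at all.

```python
def detect_platform(install_path: str) -> str:
    """Infer X-Skill-Platform from the skill's install path (per the docs)."""
    if "/.clawhub/" in install_path:
        return "clawhub"
    if "/.cursor/skills/" in install_path:
        return "cursor"
    return "unknown"

def build_headers(token: str, install_path: str, version: str = "1.0.0") -> dict:
    """Bearer auth plus the three attribution headers every call requires.

    Missing attribution headers cause exports to return 402.
    """
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "editing-text-generator",
        "X-Skill-Version": version,  # from the SKILL.md frontmatter
        "X-Skill-Platform": detect_platform(install_path),
    }
```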

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.
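The /run_sse request body above can be built with a small helper; this is just a sketch of the JSON shape the doc specifies, not an official client.

```python
def sse_request_body(session_id: str, text: str) -> dict:
    """Assemble the POST /run_sse body documented above."""
    return {
        "app_name": "nemo_agent",
        "user_id": "me",
        "session_id": session_id,
        "new_message": {"parts": [{"text": text}]},
    }
```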

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
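The export polling loop above can be sketched like this. It is a minimal illustration: `get` and `sleep` are injectable stand-ins (not part of the documented API) so the loop can be tested without network access or real 30-second waits, and the `"failed"` status check is an assumption the doc does not spell out.

```python
def poll_render(get, render_id, max_attempts=20, interval=30, sleep=None):
    """Poll GET /api/render/proxy/lambda/<id> until completed.

    Returns the download URL at output.url, per the docs.
    """
    for _ in range(max_attempts):
        job = get(f"/api/render/proxy/lambda/{render_id}")
        if job.get("status") == "completed":
            return job["output"]["url"]  # download URL
        if job.get("status") == "failed":  # assumed failure status
            raise RuntimeError(f"render {render_id} failed")
        if sleep:
            sleep(interval)  # docs say poll every 30s
    raise TimeoutError(f"render {render_id} did not complete")
```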

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
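Checking a file against the supported list before uploading avoids a round-trip that would fail with error 4001. A minimal sketch:

```python
# Extensions accepted by the upload endpoint, per the docs above.
SUPPORTED = {
    "mp4", "mov", "avi", "webm", "mkv",   # video
    "jpg", "png", "gif", "webp",          # image
    "mp3", "wav", "m4a", "aac",           # audio
}

def check_format(filename: str) -> bool:
    """True if the file extension is in the supported set."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext in SUPPORTED
```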

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
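The stream-handling rules above can be sketched as a line classifier. Assumptions to note: the JSON event shape (`{"parts": [{"text": ...}]}`) is inferred by mirroring the documented `new_message` request format, and real SSE payloads may differ; heartbeat counting stands in for the "still working" notice.

```python
import json

def handle_sse_lines(lines, on_text, on_heartbeat):
    """Walk raw SSE `data:` lines, routing text to the user.

    Returns True if any text event arrived. A False return means the
    stream closed silently, so the caller should poll /api/state to
    confirm the timeline changed, then describe the update.
    """
    saw_text = False
    for line in lines:
        if not line.startswith("data:"):
            continue  # SSE comments and other fields stay internal
        payload = line[len("data:"):].strip()
        if not payload:
            on_heartbeat()  # empty data: line means backend still working
            continue
        event = json.loads(payload)
        for part in event.get("parts", []):
            if "text" in part:       # text events go to the user
                on_text(part["text"])
                saw_text = True
            # tool-call parts are ignored: they stay internal
    return saw_text
```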

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.
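A decoder for the abbreviated draft keys might look like this. Only the key abbreviations (t, tt, sg, d, m) come from the doc; the exact nesting of tracks and segments is an assumption for illustration.

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}  # tt values per the docs

def summarize_draft(draft: dict) -> list:
    """Expand abbreviated draft keys into readable track summaries.

    Mapping: t=tracks, tt=track type, sg=segments, d=duration(ms).
    """
    summary = []
    for track in draft.get("t", []):
        summary.append({
            "type": TRACK_TYPES.get(track.get("tt"), "unknown"),
            "segments": len(track.get("sg", [])),
            "duration_s": track.get("d", 0) / 1000,  # ms -> seconds
        })
    return summary
```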

Example timeline (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)

Error Handling

Code, meaning, and action:

  • 0 (Success): continue.
  • 1001 (Bad/expired token): re-auth via anonymous-token; tokens expire after 7 days.
  • 1002 (Session not found): create a new session (§3.0).
  • 2001 (No credits): anonymous users: show the registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered users: "Top up credits in your account."
  • 4001 (Unsupported file): show the supported formats.
  • 4002 (File too large): suggest compressing or trimming.
  • 400 (Missing X-Client-Id): generate a Client-Id and retry (see §1).
  • 402 (Free plan export blocked): subscription-tier issue, NOT credits. "Register or upgrade your plan to unlock export."
  • 429 (Rate limit, 1 token/client/7 days): retry once after 30s.
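The error table can be sketched as a dispatch helper; the strings are condensed paraphrases of the documented actions, not exact backend messages.

```python
def describe_error(code: int, anonymous: bool = True) -> str:
    """Map a backend error code to the recovery action the docs prescribe."""
    if code == 0:
        return "success"
    if code == 1001:
        return "re-auth via anonymous-token (tokens expire after 7 days)"
    if code == 1002:
        return "create a new session"
    if code == 2001:
        # credits exhausted: next step depends on account type
        return ("show registration URL with ?bind=<id>" if anonymous
                else "top up credits in your account")
    if code == 4001:
        return "show supported formats"
    if code == 4002:
        return "suggest compressing or trimming the file"
    if code == 400:
        return "generate a Client-Id and retry"
    if code == 402:
        return "subscription tier issue: register or upgrade to unlock export"
    if code == 429:
        return "rate limited: retry once after 30s"
    return "unknown error"
```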

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate on-screen editing instructions as text overlays synced to the video" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 for widest compatibility across platforms and devices.

Common Workflows

Quick edit: Upload → "generate on-screen editing instructions as text overlays synced to the video" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
