Editor Text Generator

v1.0.0

Turn video clips into text-overlaid videos with this skill. Works with MP4, MOV, AVI, and WebM files up to 500MB. Video editors and content creators use it to add AI-generated on-screen text and captions to their footage.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for whitejohnk-26/editor-text-generator.

Prompt preview (Install & Setup):
Install the skill "Editor Text Generator" (whitejohnk-26/editor-text-generator) from ClawHub.
Skill page: https://clawhub.ai/whitejohnk-26/editor-text-generator
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install editor-text-generator

ClawHub CLI


npx clawhub@latest install editor-text-generator
Security Scan
VirusTotal: Benign (view report)

OpenClaw: Benign (medium confidence)

Purpose & Capability

The name/description (generate text-overlaid videos) aligns with the actions described (uploading videos, creating sessions, rendering on cloud GPUs). The single required env var (NEMO_TOKEN) is appropriate. Note: the SKILL.md frontmatter lists a config path (~/.config/nemovideo/) while the registry summary reported no required config paths — this mismatch is unexplained.

Instruction Scope

Instructions stay within the stated purpose (create sessions, upload video files or URLs, run SSE for edits, call render/export endpoints). They also describe anonymous-token creation when NEMO_TOKEN is absent and advise not to leak tokens. The skill instructs detecting an install path to populate an X-Skill-Platform header (reading ~/.clawhub, ~/.cursor), which requires checking local paths and is not strictly necessary for core functionality. Uploading user video files to the remote service is central to the feature and raises expected privacy/data-retention considerations (not covered by the skill).

Install Mechanism

Instruction-only skill with no install spec or code files — lowest install risk. No downloads, packages, or binaries are requested.

Credentials

Only NEMO_TOKEN is declared as required and is the primary credential — this is proportionate. The skill will also create an anonymous token from the service if NEMO_TOKEN is missing; that behavior is reasonable but means the agent will call an external auth endpoint and store/use returned tokens/sessions. Ensure you trust the endpoint before providing sensitive videos. Also note the SKILL.md frontmatter lists a config path (~/.config/nemovideo/) that was not listed elsewhere.

Persistence & Privilege

The skill does not request always:true, does not modify other skills, and appears to hold session_id/session state only for its own operations. Autonomous model invocation is enabled (default) but not combined with broadened privileges.

Assessment

This skill appears to do what it says — it uploads videos to a remote rendering backend and returns processed MP4s. Before installing or using it: (1) Verify you trust the external API host (mega-api-prod.nemovideo.ai) because your videos will be uploaded; test with non-sensitive content first. (2) Prefer supplying your own NEMO_TOKEN from a trusted account rather than relying on anonymous-token issuance. (3) Be aware the skill may inspect local install paths to set headers; avoid installing if you don't want that probe. (4) Note the metadata mismatch (config path listed in SKILL.md but not in registry); ask the publisher for source/homepage or privacy/retention policy if you need stronger assurance. If you lack that information, treat it as potentially privacy-sensitive rather than technically malicious.


Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97anpxjvzefrxxr8702v3v3f5852fyh
58 downloads · 0 stars · 1 version
Updated 1w ago
v1.0.0
MIT-0

Getting Started

Send me your video clips and I'll handle the AI text overlay generation. Or just describe what you're after.

Try saying:

  • "generate a 2-minute tutorial video clip into a 1080p MP4"
  • "generate on-screen text labels and captions that match the spoken content"
  • "adding AI-generated on-screen text and captions to videos for editors for video editors and content creators"

Quick Start Setup

This skill connects to a cloud processing backend. On first use, set up the connection automatically and let the user know ("Connecting...").

Token check: Look for NEMO_TOKEN in the environment. If found, skip to session creation. Otherwise:

  • Generate a UUID as client identifier
  • POST https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with X-Client-Id header
  • Extract data.token from the response — this is your NEMO_TOKEN (100 free credits, 7-day expiry)

Session: POST https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Keep the returned session_id for all operations.
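
A minimal curl sketch of this bootstrap, assuming uuidgen and jq are available. The data.token path is documented above; the session_id extraction is inferred from the create-session response and may need adjusting:

# First-use bootstrap: mint an anonymous token, then open a session.
API=https://mega-api-prod.nemovideo.ai
CLIENT_ID=$(uuidgen)
NEMO_TOKEN=$(curl -s -X POST "$API/api/auth/anonymous-token" \
  -H "X-Client-Id: $CLIENT_ID" | jq -r '.data.token')
# Adjust '.session_id' if the response nests it under data.
SESSION_ID=$(curl -s -X POST "$API/api/tasks/me/with-session/nemo_agent" \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"task_name":"project"}' | jq -r '.session_id')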

Let the user know with a brief "Ready!" when setup is complete. Don't expose tokens or raw API output.

Editor Text Generator — Generate Text Overlays for Videos

Send me your video clips and describe the result you want. The AI text overlay generation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a 2-minute tutorial video clip, type "generate on-screen text labels and captions that match the spoken content", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: shorter clips under 60 seconds produce more accurate text timing and placement.

Matching Input to Actions

User prompts referencing editor text generator, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says → Action (skip SSE?)
"export" / "导出" / "download" / "send me the video" → §3.5 Export (skips SSE)
"credits" / "积分" / "balance" / "余额" → §3.3 Credits (skips SSE)
"status" / "状态" / "show tracks" → §3.4 State (skips SSE)
"upload" / "上传" / user sends file → §3.2 Upload (skips SSE)
Everything else (generate, edit, add BGM…) → §3.1 SSE
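
A rough shell sketch of that routing; the real skill also weighs intent, so treat the patterns as illustrative:

# Hypothetical keyword router mirroring the table above.
route() {
  case "$1" in
    *export*|*导出*|*download*) echo "export" ;;        # §3.5
    *credit*|*积分*|*balance*|*余额*) echo "credits" ;;  # §3.3
    *status*|*状态*|*"show tracks"*) echo "state" ;;     # §3.4
    *upload*|*上传*) echo "upload" ;;                    # §3.2
    *) echo "sse" ;;                                     # §3.1
  esac
}
route "please export my video"   # prints: export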

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Include Authorization: Bearer <NEMO_TOKEN> and all attribution headers on every request — omitting them triggers a 402 on export.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is editor-text-generator, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).
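
For example, a call to the credits endpoint (documented below) carrying the full header set might look like this; the version and platform values are illustrative, and $API/$NEMO_TOKEN come from the bootstrap sketch above:

curl -s "$API/api/credits/balance/simple" \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "X-Skill-Source: editor-text-generator" \
  -H "X-Skill-Version: 1.0.0" \
  -H "X-Skill-Platform: clawhub"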

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.
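
A curl sketch of this call, reusing $API, $NEMO_TOKEN, and $SESSION_ID from the bootstrap sketch; -N disables buffering so events print as they arrive:

curl -sN -X POST "$API/run_sse" \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "Accept: text/event-stream" \
  -H "Content-Type: application/json" \
  -d "{\"app_name\":\"nemo_agent\",\"user_id\":\"me\",\"session_id\":\"$SESSION_ID\",\"new_message\":{\"parts\":[{\"text\":\"add captions\"}]}}"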

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}
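
Both upload modes as curl sketches (the file path and URL are placeholders):

# Multipart file upload
curl -s -X POST "$API/api/upload-video/nemo_agent/me/$SESSION_ID" \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -F "files=@/path/to/clip.mp4"

# URL-based upload
curl -s -X POST "$API/api/upload-video/nemo_agent/me/$SESSION_ID" \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"urls":["https://example.com/clip.mp4"],"source_type":"url"}'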

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media
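
A sketch that captures the current draft for a later export; the jq path follows the key fields listed above:

# Keep the draft JSON around; the export call embeds it verbatim.
DRAFT=$(curl -s "$API/api/state/nemo_agent/me/$SESSION_ID/latest" \
  -H "Authorization: Bearer $NEMO_TOKEN" | jq -c '.data.state.draft')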

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
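
A sketch of the full export loop, reusing $DRAFT from the state sketch above; the .status and .output.url paths are inferred from the description and may differ:

# Start the render job...
RENDER_ID="render_$(date +%s)"
curl -s -X POST "$API/api/render/proxy/lambda" \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"id\":\"$RENDER_ID\",\"sessionId\":\"$SESSION_ID\",\"draft\":$DRAFT,\"output\":{\"format\":\"mp4\",\"quality\":\"high\"}}"
# ...then poll every 30s until it completes.
while sleep 30; do
  RES=$(curl -s "$API/api/render/proxy/lambda/$RENDER_ID" \
    -H "Authorization: Bearer $NEMO_TOKEN")
  [ "$(echo "$RES" | jq -r '.status')" = "completed" ] && break
done
echo "$RES" | jq -r '.output.url'   # the download URL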

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Error Handling

Code / Meaning → Action
0 / Success → Continue
1001 / Bad/expired token → Re-auth via anonymous-token (tokens expire after 7 days)
1002 / Session not found → Start a new session (§3.0)
2001 / No credits → Anonymous: show the registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered: "Top up credits in your account."
4001 / Unsupported file → Show supported formats
4002 / File too large → Suggest compressing or trimming
400 / Missing X-Client-Id → Generate a Client-Id and retry (see §1)
402 / Free-plan export blocked → Subscription-tier issue, NOT credits. "Register or upgrade your plan to unlock export."
429 / Rate limit (1 token/client/7 days) → Retry once after 30s
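
As one example, a sketch of the code-1001 recovery path; the top-level .code field is an assumed response shape:

# If a call returns code 1001, mint a fresh anonymous token and retry once.
RES=$(curl -s "$API/api/credits/balance/simple" -H "Authorization: Bearer $NEMO_TOKEN")
if [ "$(echo "$RES" | jq -r '.code')" = "1001" ]; then
  NEMO_TOKEN=$(curl -s -X POST "$API/api/auth/anonymous-token" \
    -H "X-Client-Id: $(uuidgen)" | jq -r '.data.token')
  RES=$(curl -s "$API/api/credits/balance/simple" -H "Authorization: Bearer $NEMO_TOKEN")
fi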

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

Backend says → You do
"click [button]" / "点击" → Execute via API
"open [panel]" / "打开" → Query session state
"drag/drop" / "拖拽" → Send edit via SSE
"preview in timeline" → Show track summary
"Export button" / "导出" → Execute export workflow

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty "data:" lines mean the backend is still working; show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.
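
A jq sketch that expands those short keys into a rough per-track summary; the nesting of t and sg is an assumption, and richer labels like the example below would come from the m metadata:

echo "$DRAFT" | jq -r '.t[]
  | (if .tt == 0 then "Video" elif .tt == 1 then "Audio" elif .tt == 7 then "Text" else "Other" end)
    + " track: " + ((.sg | length) | tostring) + " segment(s)"'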

Example timeline summary:

Timeline (3 tracks):
1. Video: city timelapse (0-10s)
2. BGM: Lo-fi (0-10s, 35%)
3. Title: "Urban Dreams" (0-3s)

Common Workflows

Quick edit: Upload → "generate on-screen text labels and captions that match the spoken content" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate on-screen text labels and captions that match the spoken content" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 for widest compatibility across editing platforms and social media.
