Generator Online

v1.0.0

Turn text or media into ready-to-share videos with this skill. Works with MP4, MOV, JPG, PNG files up to 500MB. For marketers, content creators, small busine...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for vynbosserman65/generator-online.

Prompt Preview: Install & Setup
Install the skill "Generator Online" (vynbosserman65/generator-online) from ClawHub.
Skill page: https://clawhub.ai/vynbosserman65/generator-online
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install generator-online

ClawHub CLI


npx clawhub@latest install generator-online
Security Scan
VirusTotal
Benign
OpenClaw
Benign
medium confidence
Purpose & Capability
Name/description match the actions in SKILL.md: the skill calls nemo video API endpoints and requires a NEMO_TOKEN. Requested headers and API endpoints are coherent with a remote video-rendering service. Note: the SKILL.md frontmatter mentions a config path (~/.config/nemovideo/) while the registry metadata reported 'Required config paths: none' — a minor metadata mismatch.
Instruction Scope
Runtime instructions are scoped to: acquiring/using NEMO_TOKEN, creating a session, uploading media, sending SSE messages, polling render status, and returning download URLs. There are no instructions to read unrelated local files or harvest other environment variables. The skill does instruct generating an anonymous token and treating the returned value as NEMO_TOKEN (this implies storing/using a bearer token for API calls), which is expected for this service.
Install Mechanism
No install spec and no code files — instruction-only skill. This is low risk from an installation perspective; nothing is written to disk by an installer step in the bundle itself.
Credentials
Only a single credential (NEMO_TOKEN) is required and that aligns with the service API. The frontmatter also lists a config path (~/.config/nemovideo/) which is reasonable for session/cache but contradicts the registry's earlier 'none' value — this discrepancy should be clarified. No unrelated secrets are requested.
Persistence & Privilege
The skill is not always-on and does not request elevated platform privileges. It will create and use session IDs and bearer tokens for the remote service; these are normal for a cloud API client. There's no instruction to modify other skills or global agent settings.
Assessment
This skill appears to do what it claims (upload your media and call a nemo video API) and only needs one service token. Before installing:

  • The skill's source/homepage is missing; consider whether you trust requests to https://mega-api-prod.nemovideo.ai and the anonymous-token flow.
  • Uploaded media and any text are sent to that remote service; do not upload sensitive or private content until you've verified the provider's privacy policy.
  • The skill may generate an anonymous bearer token (7-day expiry) and use it for requests; be prepared to revoke or clear that token if you stop using the skill.
  • There is a small metadata mismatch about a config path in SKILL.md vs registry metadata; if you need strict auditability, ask the publisher to clarify.
  • Test with non-sensitive samples first and confirm returned download URLs and behavior match expectations.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97b4g27pb6c1zra872gcpc59h85jhsb
45 downloads
0 stars
1 version
Updated 1d ago
v1.0.0
MIT-0

Getting Started

Share your text or media and I'll get started on AI video generation. Or just tell me what you're thinking.

Try saying:

  • "generate my text or media"
  • "export 1080p MP4"
  • "generate a 30-second promo video from"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.

Generator Online — Create and Export Videos Online

Send me your text or media and describe the result you want. The AI video generation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a 200-word product description, type "generate a 30-second promo video from this text script", and you'll get a 1080p MP4 back in roughly 1-2 minutes. All rendering happens server-side.

Worth noting: shorter scripts under 100 words generate faster and more focused videos.

Matching Input to Actions

User prompts referencing generator online, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... → Action
  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

  • POST /api/tasks/me/with-session/nemo_agent: Start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id.
  • POST /run_sse: Send a user message. Body includes app_name, session_id, new_message. Stream response with Accept: text/event-stream. Timeout: 15 min.
  • POST /api/upload-video/nemo_agent/me/<sid>: Upload a file (multipart) or URL.
  • GET /api/credits/balance/simple: Check remaining credits (available, frozen, total).
  • GET /api/state/nemo_agent/me/<sid>/latest: Fetch current timeline state (draft, video_infos, generated_media).
  • POST /api/render/proxy/lambda: Start export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s.
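The export call and its 30-second polling cadence can be sketched as below. Since the skill does not document a status endpoint, the status check is left as a caller-supplied function:

```python
import time

def export_body(session_id: str, draft: dict, ts: int) -> dict:
    """Assemble the /api/render/proxy/lambda payload described above."""
    return {
        "id": f"render_{ts}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }

def poll_until_done(check, interval: float = 30, timeout: float = 600) -> str:
    """Call check() every `interval` seconds until it returns a download URL.

    check() encapsulates the status request, which this skill leaves
    undocumented; it should return the URL string once ready, else None.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        url = check()
        if url:
            return url
        time.sleep(interval)
    raise TimeoutError("render job did not finish within the timeout")
```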

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Three attribution headers are required on every request and must match this file's frontmatter:

  • X-Skill-Source: generator-online
  • X-Skill-Version: frontmatter version
  • X-Skill-Platform: auto-detect: clawhub / cursor / unknown from install path

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
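Building the required header set could look like this sketch. The install-path detection is simplified to a hypothetical SKILL_INSTALL_PATH environment variable, since the real detection logic is not documented:

```python
import os

def attribution_headers(token: str, version: str) -> dict:
    """Build the headers every request must carry.

    SKILL_INSTALL_PATH is a hypothetical stand-in for however the agent
    learns where the skill was installed from.
    """
    install_path = os.environ.get("SKILL_INSTALL_PATH", "")
    if "clawhub" in install_path:
        platform = "clawhub"
    elif "cursor" in install_path:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "generator-online",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```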

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
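A simple dispatch table mapping these codes to the recovery actions listed above:

```python
def handle_error(code: int) -> str:
    """Map a documented error code to its recovery action."""
    actions = {
        0: "ok",
        1001: "re-acquire token via /api/auth/anonymous-token",
        1002: "create a new session",
        2001: "out of credits: offer registration link or top-up",
        4001: "show accepted file formats",
        4002: "suggest compressing or trimming the file",
        400: "generate an X-Client-Id and retry",
        402: "explain export needs a paid tier (not a credit issue)",
        429: "wait 30s and retry once",
    }
    return actions.get(code, f"unknown error code {code}")
```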

SSE Event Handling

  • Text response → Apply GUI translation (§4), present to user
  • Tool call/result → Process internally, don't forward
  • heartbeat / empty data: → Keep waiting. Every 2 min: "⏳ Still working..."
  • Stream closes → Process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
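A minimal classifier for raw text/event-stream lines following the table above; the JSON payload shape (a dict with optional tool_call/tool_result keys) is an assumption, so adjust it to the real stream:

```python
import json

def parse_sse(stream_lines):
    """Classify raw SSE lines.

    Yields ('text', payload) for user-visible responses, ('tool', payload)
    for tool calls/results, and ('heartbeat', None) for keep-alives or
    anything unparseable.
    """
    for line in stream_lines:
        line = line.strip()
        if not line or line == "data:":
            yield ("heartbeat", None)
            continue
        if line.startswith("data:"):
            try:
                payload = json.loads(line[len("data:"):].strip())
            except json.JSONDecodeError:
                yield ("heartbeat", None)
                continue
            is_tool = isinstance(payload, dict) and (
                "tool_call" in payload or "tool_result" in payload
            )
            yield ("tool" if is_tool else "text", payload)
```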

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks): 1. Video: city timelapse (0-10s) 2. BGM: Lo-fi (0-10s, 35%) 3. Title: "Urban Dreams" (0-3s)
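Given the short-key scheme, a summary like the one above can be produced with a small helper; the segment metadata field ("name" under m) is an assumed convention for illustration:

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}  # tt values from the short-key scheme

def summarize_draft(draft: dict) -> str:
    """Render a short-key draft (t/tt/sg/d/m) as a one-line timeline summary."""
    lines = [f"Timeline ({len(draft.get('t', []))} tracks):"]
    for i, track in enumerate(draft.get("t", []), start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        for seg in track.get("sg", []):
            name = seg.get("m", {}).get("name", "untitled")  # assumed metadata field
            secs = seg.get("d", 0) / 1000  # d is duration in ms
            lines.append(f"{i}. {kind}: {name} (0-{secs:g}s)")
    return " ".join(lines)
```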

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate a 30-second promo video from this text script" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, JPG, PNG for the smoothest experience.

Export as MP4 for widest compatibility across social platforms and websites.

Common Workflows

Quick edit: Upload → "generate a 30-second promo video from this text script" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
