Free Text To Video Create

v1.0.0

Turn text prompts into AI-generated videos with this skill. Works with TXT, DOCX, PDF, and copied text files up to 500MB. Marketers use it for generating videos from written content.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for susan4731-wilfordf/free-text-to-video-create.

Prompt Preview: Install & Setup
Install the skill "Free Text To Video Create" (susan4731-wilfordf/free-text-to-video-create) from ClawHub.
Skill page: https://clawhub.ai/susan4731-wilfordf/free-text-to-video-create
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install free-text-to-video-create

ClawHub CLI


npx clawhub@latest install free-text-to-video-create
Security Scan
VirusTotal
Benign
OpenClaw
Benign
medium confidence
Purpose & Capability
The name/description (text→video) align with the declared primary credential (NEMO_TOKEN) and with the SKILL.md, which describes calling a remote nemo video API to create/render videos. Requesting a single API token is expected for this service.
Instruction Scope
Instructions are focused on authenticating, creating a session, streaming via SSE, uploading user files, polling render status, and returning download URLs — all consistent with a remote render service. The SKILL.md tells the agent to create anonymous tokens automatically if NEMO_TOKEN is missing and to 'store' session_id/token for subsequent calls; it does not specify where or how long to persist them. Also the YAML frontmatter lists a config path (~/.config/nemovideo/) not declared in the registry metadata — a minor inconsistency to confirm.
Install Mechanism
Instruction-only skill with no install spec and no code files — lowest install risk. All network activity is to the named API host; there are no downloads or archive extracts.
Credentials
Only one environment variable is required (NEMO_TOKEN) and that matches the API usage. The skill will auto-provision an anonymous token if none is set, which is reasonable for convenience but means the skill can obtain credentials on behalf of the user.
Persistence & Privilege
always is false (normal). The skill instructs the agent to store session_id and tokens for reuse — this is reasonable for session continuity but you should confirm where these values are persisted and for how long (disk, memory, config path). No instructions attempt to change other skills' configs or request elevated system privileges.
Assessment
This skill appears to do what it says: it uploads text/files and calls a nemo video API to generate MP4s. Before installing, consider: (1) the service endpoint (mega-api-prod.nemovideo.ai) will receive any text/files you upload — do not send sensitive or confidential material unless you trust their privacy policy; (2) the skill will create and store an anonymous token and a session_id if you don't provide NEMO_TOKEN — ask where these are stored and how long they persist; (3) verify the domain and the service's reputation if you care about data retention or attribution headers the skill requires; and (4) confirm the minor metadata inconsistency (SKILL.md lists ~/.config/nemovideo/ as a config path but the registry shows none). If you prefer more control, set your own NEMO_TOKEN in the environment rather than relying on automatic anonymous-token creation.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

✍️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk975enfta2mk3xsbna1wmv428h84z8ps
66 downloads · 0 stars · 1 version
Updated 1w ago
v1.0.0
MIT-0

Getting Started

Got a text prompt to work with? Send it over and tell me what you need — I'll take care of the AI video creation.

Try saying:

  • "create a 100-word product description paragraph into a 1080p MP4"
  • "turn this text into a 30-second video with visuals and background music"
  • "generate a video from this written content without recording any footage"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.
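The two setup calls above can be sketched in Python using only the standard library. The endpoint paths, headers, and request body are copied from the steps above; the exact response shapes beyond what is documented here (e.g. whether `session_id` sits at the top level of the session response) are assumptions:

```python
import json
import uuid
import urllib.request

BASE = "https://mega-api-prod.nemovideo.ai"

def session_payload(language: str) -> dict:
    # Body documented above for the with-session endpoint.
    return {"task_name": "project", "language": language}

def fetch_anonymous_token() -> str:
    """Step 1: POST with a random X-Client-Id UUID; token is at data.token."""
    req = urllib.request.Request(
        f"{BASE}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": str(uuid.uuid4())},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["token"]

def create_session(token: str, language: str = "en") -> str:
    """Step 2: create a session; assumes session_id is a top-level key."""
    req = urllib.request.Request(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        data=json.dumps(session_payload(language)).encode(),
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["session_id"]
```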

Keep setup communication brief. Don't display raw API responses or token values to the user.

Free Text to Video Create — Generate Videos from Written Text

Send me your text prompt and describe the result you want. The AI video creation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a 100-word product description paragraph, type "turn this text into a 30-second video with visuals and background music", and you'll get a 1080p MP4 back in roughly 1-2 minutes. All rendering happens server-side.

Worth noting: shorter, clearer text produces more accurate scene generation.

Matching Input to Actions

User prompts referencing free text to video create, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... → Action
"export" / "导出" / "download" / "send me the video" → §3.5 Export (skips SSE)
"credits" / "积分" / "balance" / "余额" → §3.3 Credits (skips SSE)
"status" / "状态" / "show tracks" → §3.4 State (skips SSE)
"upload" / "上传" / user sends file → §3.2 Upload (skips SSE)
Everything else (generate, edit, add BGM…) → §3.1 SSE
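The routing table above can be approximated with simple keyword matching. This is a sketch: the keywords come from the table, the return labels are illustrative, and a real implementation would layer intent classification on top:

```python
# Keyword routes in table order; first match wins, catch-all goes to SSE.
ROUTES = [
    (("export", "导出", "download", "send me the video"), "export"),  # §3.5
    (("credits", "积分", "balance", "余额"), "credits"),              # §3.3
    (("status", "状态", "show tracks"), "state"),                     # §3.4
    (("upload", "上传"), "upload"),                                   # §3.2
]

def route(message: str) -> str:
    text = message.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "sse"  # everything else goes through the SSE chat endpoint (§3.1)
```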

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

Endpoint | Method | Purpose
/api/tasks/me/with-session/nemo_agent | POST | Start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id.
/run_sse | POST | Send a user message. Body includes app_name, session_id, new_message. Stream response with Accept: text/event-stream. Timeout: 15 min.
/api/upload-video/nemo_agent/me/<sid> | POST | Upload a file (multipart) or URL.
/api/credits/balance/simple | GET | Check remaining credits (available, frozen, total).
/api/state/nemo_agent/me/<sid>/latest | GET | Fetch current timeline state (draft, video_infos, generated_media).
/api/render/proxy/lambda | POST | Start export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s.
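As a sketch of the export call, the request body from the last table row can be built and posted like this. Only the kickoff request is shown, since the status-polling endpoint is not listed on this page; the function names are illustrative:

```python
import json
import time
import urllib.request

BASE = "https://mega-api-prod.nemovideo.ai"

def export_body(session_id: str, draft: dict, ts: int) -> dict:
    # Shape copied from the /api/render/proxy/lambda row above.
    return {
        "id": f"render_{ts}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }

def start_export(headers: dict, session_id: str, draft: dict):
    """Kick off an export; caller then polls status every 30 s."""
    body = export_body(session_id, draft, int(time.time()))
    req = urllib.request.Request(
        f"{BASE}/api/render/proxy/lambda",
        data=json.dumps(body).encode(),
        method="POST",
        headers={**headers, "Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```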

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is free-text-to-video-create, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
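A minimal sketch of assembling these four headers, applying the platform rule from the paragraph above. The function name and the substring-based path check are illustrative:

```python
from pathlib import Path

def skill_headers(token: str, version: str, install_path: str) -> dict:
    """Build the required headers; platform is derived from the install path
    per the rule above (~/.clawhub/ -> clawhub, ~/.cursor/skills/ -> cursor,
    anything else -> unknown)."""
    p = str(Path(install_path).expanduser())
    if "/.clawhub/" in p:
        platform = "clawhub"
    elif "/.cursor/skills/" in p:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "free-text-to-video-create",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```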

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
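The codes above map naturally onto a small recovery table. The action labels below are illustrative strings for dispatching, not values returned by the API:

```python
# Error code -> coarse recovery action, per the list above.
RECOVERY = {
    0: "continue",
    1001: "refresh_token",         # re-acquire via /api/auth/anonymous-token
    1002: "new_session",
    2001: "out_of_credits",
    4001: "unsupported_type",      # show accepted formats
    4002: "file_too_large",        # suggest compressing or trimming
    400: "regenerate_client_id",
    402: "subscription_required",  # not a credit issue
    429: "retry_after_30s",        # retry once
}

def recovery_for(code: int) -> str:
    return RECOVERY.get(code, "surface_error")
```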

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
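The stream-handling rules above can be sketched as a classifier over raw SSE lines. The exact event payload format isn't documented on this page, so this only separates user-visible text events from heartbeat/empty `data:` lines:

```python
def parse_sse_lines(lines):
    """Split raw SSE lines into forwardable text payloads and a heartbeat
    count. Comment lines (": ...") and non-data fields are ignored."""
    texts, heartbeats = [], 0
    for line in lines:
        if not line.startswith("data:"):
            continue                       # comments / other SSE fields
        payload = line[len("data:"):].strip()
        if not payload:
            heartbeats += 1                # empty data: line = still working
            continue
        texts.append(payload)              # text event -> forward to user
    return texts, heartbeats
```

If the stream closes with no text events collected, fall back to polling /api/state as described above.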

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
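Using the field mapping above (t/tt/sg), a summary like that timeline can be derived from a draft object. The nested shape is inferred from the mapping, so treat this as a sketch; a real summary would also pull titles from the m (metadata) field:

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}  # tt values from the mapping

def summarize_draft(draft: dict) -> list:
    """Turn abbreviated draft fields into readable one-line track summaries."""
    lines = []
    for i, track in enumerate(draft.get("t", []), 1):
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")
        n = len(track.get("sg", []))
        lines.append(f"{i}. {kind}: {n} segment(s)")
    return lines
```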

Common Workflows

Quick edit: Upload → "turn this text into a 30-second video with visuals and background music" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn this text into a 30-second video with visuals and background music" — concrete instructions get better results.

Max file size is 500MB. Stick to TXT, DOCX, PDF, copied text for the smoothest experience.

Export as MP4 for widest compatibility across social platforms and web.
